Sample records for model predictions matched

  1. Predicting Football Matches Results using Bayesian Networks for English Premier League (EPL)

    NASA Astrophysics Data System (ADS)

    Razali, Nazim; Mustapha, Aida; Yatim, Faiz Ahmad; Aziz, Ruhaya Ab

    2017-08-01

    The problem of modeling association football prediction has become increasingly popular in recent years, and many different prediction approaches have been proposed with the aim of evaluating the attributes that lead a football team to lose, draw or win a match. Three types of approaches have been considered for predicting football match results: statistical approaches, machine learning approaches and Bayesian approaches. Lately, many football prediction models have been produced using Bayesian approaches. This paper proposes Bayesian Networks (BNs) to predict the results of football matches in terms of home win (H), away win (A) and draw (D). The English Premier League (EPL) for the three seasons 2010-2011, 2011-2012 and 2012-2013 was selected and reviewed. K-fold cross validation was used to test the accuracy of the prediction model. The required football data were sourced from a legitimate site at http://www.football-data.co.uk. The BNs achieved a predictive accuracy of 75.09% on average across the three seasons. It is hoped that these results can serve as benchmark output for future research in predicting football match results.
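    The k-fold cross-validation protocol the abstract mentions can be sketched as follows. This is a minimal, generic illustration, not the authors' code: `predict` stands in for any classifier (such as their Bayesian network), and `majority_predict` is a hypothetical baseline added here only so the sketch runs end to end.

```python
import random

def k_fold_accuracy(records, predict, k=10, seed=0):
    """Estimate match-result (H/A/D) prediction accuracy with k-fold CV.

    `records` is a list of (features, outcome) pairs; `predict` is any
    function mapping (training_fold, features) -> 'H', 'A' or 'D'.
    """
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]      # k disjoint test folds
    accuracies = []
    for i, test in enumerate(folds):
        train = [r for j, f in enumerate(folds) if j != i for r in f]
        correct = sum(predict(train, x) == y for x, y in test)
        accuracies.append(correct / len(test))
    return sum(accuracies) / k                      # mean fold accuracy

def majority_predict(train, _features):
    """Trivial baseline: always predict the most common training outcome."""
    outcomes = [y for _, y in train]
    return max(set(outcomes), key=outcomes.count)
```

Any real model slots in wherever `majority_predict` is passed; the averaging over folds is what yields a single accuracy figure such as the 75.09% reported above.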

  2. Prediction of relative and absolute permeabilities for gas and water from soil water retention curves using a pore-scale network model

    NASA Astrophysics Data System (ADS)

    Fischer, Ulrich; Celia, Michael A.

    1999-04-01

    Functional relationships for unsaturated flow in soils, including those between capillary pressure, saturation, and relative permeabilities, are often described using analytical models based on the bundle-of-tubes concept. These models are often limited by, for example, inherent difficulties in prediction of absolute permeabilities, and in incorporation of a discontinuous nonwetting phase. To overcome these difficulties, an alternative approach may be formulated using pore-scale network models. In this approach, the pore space of the network model is adjusted to match retention data, and absolute and relative permeabilities are then calculated. A new approach that allows more general assignments of pore sizes within the network model provides for greater flexibility to match measured data. This additional flexibility is especially important for simultaneous modeling of main imbibition and drainage branches. Through comparisons between the network model results, analytical model results, and measured data for a variety of both undisturbed and repacked soils, the network model is seen to match capillary pressure-saturation data nearly as well as the analytical model, to predict water phase relative permeabilities equally well, and to predict gas phase relative permeabilities significantly better than the analytical model. The network model also provides very good estimates for intrinsic permeability and thus for absolute permeabilities. Both the network model and the analytical model lost accuracy in predicting relative water permeabilities for soils characterized by a van Genuchten exponent n≲3. Overall, the computational results indicate that reliable predictions of both relative and absolute permeabilities are obtained with the network model when the model matches the capillary pressure-saturation data well. The results also indicate that measured imbibition data are crucial to good predictions of the complete hysteresis loop.

  3. Quick probabilistic binary image matching: changing the rules of the game

    NASA Astrophysics Data System (ADS)

    Mustafa, Adnan A. Y.

    2016-09-01

    A Probabilistic Matching Model for Binary Images (PMMBI) is presented that predicts the probability of matching binary images with any level of similarity. The model relates the number of mappings, the amount of similarity between the images and the detection confidence. We show the advantage of using a probabilistic approach to matching in similarity space as opposed to a linear search in size space. With PMMBI a complete model is available to predict the quick detection of dissimilar binary images. Furthermore, the similarity between the images can be measured to a good degree if the images are highly similar. PMMBI shows that only a few pixels need to be compared to detect dissimilarity between images, as low as two pixels in some cases. PMMBI is image size invariant; images of any size can be matched at the same quick speed. Near-duplicate images can also be detected without much difficulty. We present tests on real images that show the prediction accuracy of the model.
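    The key behaviour described above (detecting dissimilarity after probing only a few pixels, independent of image size) can be sketched with a simple random-probe loop. This is an illustrative sketch of the idea, not the PMMBI model itself; `max_probes` and the flat 0/1 pixel representation are assumptions made here.

```python
import random

def quick_dissimilar(img_a, img_b, max_probes=16, seed=0):
    """Probe random pixel positions; stop at the first mismatch.

    Images are equal-length sequences of 0/1 pixels. For dissimilar
    images a mismatch is typically found within a handful of probes,
    regardless of image size -- the size-invariance noted above.
    Returns (dissimilar_detected, probes_used).
    """
    assert len(img_a) == len(img_b)
    rng = random.Random(seed)
    for probes in range(1, max_probes + 1):
        i = rng.randrange(len(img_a))
        if img_a[i] != img_b[i]:
            return True, probes          # dissimilarity detected early
    return False, max_probes             # likely similar or near-duplicate
```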

  4. Dynamic Modeling and Very Short-term Prediction of Wind Power Output Using Box-Cox Transformation

    NASA Astrophysics Data System (ADS)

    Urata, Kengo; Inoue, Masaki; Murayama, Dai; Adachi, Shuichi

    2016-09-01

    We propose a statistical modeling method for wind power output for very short-term prediction. The modeling method, with a nonlinear model, has a cascade structure composed of two parts. One is a linear dynamic part that is driven by Gaussian white noise and described by an autoregressive model. The other is a nonlinear static part that is driven by the output of the linear part. This nonlinear part is designed for output distribution matching: we shape the distribution of the model output to match that of the wind power output. The constructed model is utilized for one-step-ahead prediction of the wind power output. Furthermore, we study the relation between the prediction accuracy and the prediction horizon.
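    The static distribution-matching stage can be illustrated with an empirical quantile transform: each model-output value is replaced by the target-distribution value at the same rank. This is a generic rank-matching sketch, not the paper's Box-Cox construction; the discrete rank mapping below is an assumption for illustration.

```python
import bisect

def quantile_match(model_samples, target_samples):
    """Build a static nonlinearity f for output-distribution matching.

    f(x) maps a model-output value to the target value at the same
    empirical quantile, computed by rank on the sorted sample sets.
    """
    src = sorted(model_samples)
    tgt = sorted(target_samples)
    n = len(src)
    def f(x):
        rank = bisect.bisect_left(src, x)            # empirical rank of x
        return tgt[min(rank * len(tgt) // n, len(tgt) - 1)]
    return f
```

Driving this transform with the output of a fitted autoregressive model reproduces the cascade structure described above: linear dynamics first, distribution shaping second.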

  5. A comparison of low back kinetic estimates obtained through posture matching, rigid link modeling and an EMG-assisted model.

    PubMed

    Parkinson, R J; Bezaire, M; Callaghan, J P

    2011-07-01

    This study examined errors introduced by a posture matching approach (3DMatch) relative to dynamic three-dimensional rigid link and EMG-assisted models. Eighty-eight lifting trials of various combinations of heights (floor, 0.67, 1.2 m), asymmetry (left, right and center) and mass (7.6 and 9.7 kg) were videotaped while spine postures, ground reaction forces, segment orientations and muscle activations were documented and used to estimate joint moments and forces (L5/S1). Posture matching overpredicted peak and cumulative extension moment (p < 0.0001 for all variables). There was no difference between peak compression estimates obtained with posture matching or EMG-assisted approaches (p = 0.7987). Posture matching overpredicted cumulative (p < 0.0001) compressive loading due to a bias in standing; however, individualized bias correction eliminated the differences. Therefore, posture matching provides a method to analyze industrial lifting exposures that will predict kinetic values similar to those of more sophisticated models, provided the necessary corrections are applied. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  6. Moral Attitudes Predict Cheating and Gamesmanship Behaviors Among Competitive Tennis Players

    PubMed Central

    Lucidi, Fabio; Zelli, Arnaldo; Mallia, Luca; Nicolais, Giampaolo; Lazuras, Lambros; Hagger, Martin S.

    2017-01-01

    Background: The present study tested Lee et al.’s (2008) model of moral attitudes and cheating behavior in sports in an Italian sample of young tennis players and extended it to predict behavior in actual match play. In the first phase of the study we proposed that moral, competence and status values would predict prosocial and antisocial moral attitudes directly, and indirectly through athletes’ goal orientations. In the second phase, we hypothesized that moral attitudes would directly predict actual cheating behavior observed during match play. Method: Adolescent competitive tennis players (N = 314, 76.75% males, M age = 14.36 years, SD = 1.50) completed measures of values, goal orientations, and moral attitudes. A sub-sample (n = 90) was observed in 45 competitive tennis matches by trained observers who recorded their cheating and gamesmanship behaviors on a validated checklist. Results: Consistent with hypotheses, athletes’ values predicted their moral attitudes through the effects of goal orientations. Antisocial attitudes directly predicted cheating behavior in actual match play, providing support for a direct link between moral attitude and actual behavior. Conclusion: The present study findings support key propositions of Lee and colleagues’ model, and extend its application to competitive athletes in actual match play. PMID:28446891

  7. Scientist role models in the classroom: how important is gender matching?

    NASA Astrophysics Data System (ADS)

    Conner, Laura D. Carsten; Danielson, Jennifer

    2016-10-01

    Gender-matched role models are often proposed as a mechanism to increase identification with science among girls, with the ultimate aim of broadening participation in science. While there is a great deal of evidence suggesting that role models can be effective, there is mixed support in the literature for the importance of gender matching. We used the Eccles Expectancy Value model as a framework to explore how female science role models impact a suite of factors that might predict future career choice among elementary students. We predicted that impacts of female scientist role models would be more pronounced among girls than among boys, as such role models have the potential to normalise what is often perceived as a gender-deviant role. Using a mixed-methods approach, we found that ideas about scientists, self-concept towards science, and level of science participation changed equally across both genders, contrary to our prediction. Our results suggest that engaging in authentic science and viewing the female scientist as personable were keys to changes among students, rather than gender matching between the role model and student. These results imply that scientists in the schools programmes should focus on preparing the visiting scientists in these areas.

  8. Integrating data from randomized controlled trials and observational studies to predict the response to pregabalin in patients with painful diabetic peripheral neuropathy.

    PubMed

    Alexander, Joe; Edwards, Roger A; Savoldelli, Alberto; Manca, Luigi; Grugni, Roberto; Emir, Birol; Whalen, Ed; Watt, Stephen; Brodsky, Marina; Parsons, Bruce

    2017-07-20

    More patient-specific medical care is expected as more is learned about variations in patient responses to medical treatments. Analytical tools enable insights by linking treatment responses from different types of studies, such as randomized controlled trials (RCTs) and observational studies. Given the importance of evidence from both types of studies, our goal was to integrate these types of data into a single predictive platform to help predict response to pregabalin in individual patients with painful diabetic peripheral neuropathy (pDPN). We utilized three pivotal RCTs of pregabalin (398 North American patients) and the largest observational study of pregabalin (3159 German patients). We implemented a hierarchical cluster analysis to identify patient clusters in the Observational Study to which RCT patients could be matched using the coarsened exact matching (CEM) technique, thereby creating a matched dataset. We then developed autoregressive moving average models (ARMAXs) to estimate weekly pain scores for pregabalin-treated patients in each cluster in the matched dataset using the maximum likelihood method. Finally, we validated ARMAX models using Observational Study patients who had not matched with RCT patients, using t tests between observed and predicted pain scores. Cluster analysis yielded six clusters (287-777 patients each) with the following clustering variables: gender, age, pDPN duration, body mass index, depression history, pregabalin monotherapy, prior gabapentin use, baseline pain score, and baseline sleep interference. CEM yielded 1528 unique patients in the matched dataset. The reduction in global imbalance scores for the clusters after adding the RCT patients (ranging from 6 to 63% depending on the cluster) demonstrated that the process reduced the bias of covariates in five of the six clusters. ARMAX models of pain score performed well (R2: 0.85-0.91; root mean square errors: 0.53-0.57). t tests did not show differences between observed and predicted pain scores in the 1955 patients who had not matched with RCT patients. The combination of cluster analyses, CEM, and ARMAX modeling enabled strong predictive capabilities with respect to pain scores. Integrating RCT and Observational Study data using CEM enabled effective use of Observational Study data to predict patient responses.

  9. Job Preferences in the Anticipatory Socialization Phase: A Comparison of Two Matching Models.

    ERIC Educational Resources Information Center

    Moss, Mira K.; Frieze, Irene Hanson

    1993-01-01

    Responses from 86 business administration graduate students tested (1) a model matching self-concept to development of job preferences and (2) an expectancy-value model. Both models significantly predicted job preferences; a higher proportion of variance was explained by the expectancy-value model. (SK)

  10. Predictive equations for lung volumes from computed tomography for size matching in pulmonary transplantation.

    PubMed

    Konheim, Jeremy A; Kon, Zachary N; Pasrija, Chetan; Luo, Qingyang; Sanchez, Pablo G; Garcia, Jose P; Griffith, Bartley P; Jeudy, Jean

    2016-04-01

    Size matching for lung transplantation is widely accomplished using height comparisons between donors and recipients. This gross approximation allows for wide variation in lung size and, potentially, size mismatch. Three-dimensional computed tomography (3D-CT) volumetry comparisons could offer more accurate size matching. Although recipient CT scans are universally available, donor CT scans are rarely performed. Therefore, predicted donor lung volumes could be used for comparison to measured recipient lung volumes, but no such predictive equations exist. We aimed to use 3D-CT volumetry measurements from a normal patient population to generate equations for predicted total lung volume (pTLV), predicted right lung volume (pRLV), and predicted left lung volume (pLLV), for size-matching purposes. Chest CT scans of 400 normal patients were retrospectively evaluated. 3D-CT volumetry was performed to measure total lung volume, right lung volume, and left lung volume of each patient, and predictive equations were generated. The fitted model was tested in a separate group of 100 patients. The model was externally validated by comparison of total lung volume with total lung capacity from pulmonary function tests in a subset of those patients. Age, gender, height, and race were independent predictors of lung volume. In the test group, there were strong linear correlations between predicted and actual lung volumes measured by 3D-CT volumetry for pTLV (r = 0.72), pRLV (r = 0.72), and pLLV (r = 0.69). A strong linear correlation was also observed when comparing pTLV and total lung capacity (r = 0.82). We successfully created a predictive model for pTLV, pRLV, and pLLV. These may serve as reference standards and predict donor lung volume for size matching in lung transplantation. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
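    The predictive-equation approach above amounts to fitting a regression of measured lung volume on donor characteristics. The sketch below is a one-predictor (height-only) ordinary least squares fit with entirely hypothetical data and coefficients; the published model also uses age, gender, and race, and its actual coefficients are not reproduced here.

```python
def fit_line(xs, ys):
    """Ordinary least squares for a single predictor: y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical data: donor heights (cm) vs measured total lung volume (L).
heights = [150, 160, 170, 180, 190]
volumes = [3.5, 4.1, 4.8, 5.4, 6.0]
a, b = fit_line(heights, volumes)
predicted = a + b * 175          # predicted TLV for a 175 cm donor
```

Once fitted on a reference population, such an equation yields a predicted donor volume that can be compared directly with a recipient's measured 3D-CT volume for size matching.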

  11. The predictive consequences of parameterization

    NASA Astrophysics Data System (ADS)

    White, J.; Hughes, J. D.; Doherty, J. E.

    2013-12-01

    In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error comprises two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loeve transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.

  12. Rebar: Reinforcing a Matching Estimator with Predictions from High-Dimensional Covariates

    ERIC Educational Resources Information Center

    Sales, Adam C.; Hansen, Ben B.; Rowan, Brian

    2018-01-01

    In causal matching designs, some control subjects are often left unmatched, and some covariates are often left unmodeled. This article introduces "rebar," a method using high-dimensional modeling to incorporate these commonly discarded data without sacrificing the integrity of the matching design. After constructing a match, a researcher…

  13. Muscle Synergies May Improve Optimization Prediction of Knee Contact Forces During Walking

    PubMed Central

    Walter, Jonathan P.; Kinney, Allison L.; Banks, Scott A.; D'Lima, Darryl D.; Besier, Thor F.; Lloyd, David G.; Fregly, Benjamin J.

    2014-01-01

    The ability to predict patient-specific joint contact and muscle forces accurately could improve the treatment of walking-related disorders. Muscle synergy analysis, which decomposes a large number of muscle electromyographic (EMG) signals into a small number of synergy control signals, could reduce the dimensionality and thus redundancy of the muscle and contact force prediction process. This study investigated whether use of subject-specific synergy controls can improve optimization prediction of knee contact forces during walking. To generate the predictions, we performed mixed dynamic muscle force optimizations (i.e., inverse skeletal dynamics with forward muscle activation and contraction dynamics) using data collected from a subject implanted with a force-measuring knee replacement. Twelve optimization problems (three cases with four subcases each) that minimized the sum of squares of muscle excitations were formulated to investigate how synergy controls affect knee contact force predictions. The three cases were: (1) Calibrate+Match where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously matched, (2) Precalibrate+Predict where experimental knee contact forces were predicted using precalibrated muscle model parameter values from the first case, and (3) Calibrate+Predict where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously predicted, all while matching inverse dynamic loads at the hip, knee, and ankle. The four subcases used either 44 independent controls or five synergy controls with and without EMG shape tracking. For the Calibrate+Match case, all four subcases closely reproduced the measured medial and lateral knee contact forces (R2 ≥ 0.94, root-mean-square (RMS) error < 66 N), indicating sufficient model fidelity for contact force prediction. For the Precalibrate+Predict and Calibrate+Predict cases, synergy controls yielded better contact force predictions (0.61 < R2 < 0.90, 83 N < RMS error < 161 N) than did independent controls (-0.15 < R2 < 0.79, 124 N < RMS error < 343 N) for corresponding subcases. For independent controls, contact force predictions improved when precalibrated model parameter values or EMG shape tracking was used. For synergy controls, contact force predictions were relatively insensitive to how model parameter values were calibrated, while EMG shape tracking made lateral (but not medial) contact force predictions worse. For the subject and optimization cost function analyzed in this study, use of subject-specific synergy controls improved the accuracy of knee contact force predictions, especially for lateral contact force when EMG shape tracking was omitted, and reduced prediction sensitivity to uncertainties in muscle model parameter values. PMID:24402438

  14. Muscle synergies may improve optimization prediction of knee contact forces during walking.

    PubMed

    Walter, Jonathan P; Kinney, Allison L; Banks, Scott A; D'Lima, Darryl D; Besier, Thor F; Lloyd, David G; Fregly, Benjamin J

    2014-02-01

    The ability to predict patient-specific joint contact and muscle forces accurately could improve the treatment of walking-related disorders. Muscle synergy analysis, which decomposes a large number of muscle electromyographic (EMG) signals into a small number of synergy control signals, could reduce the dimensionality and thus redundancy of the muscle and contact force prediction process. This study investigated whether use of subject-specific synergy controls can improve optimization prediction of knee contact forces during walking. To generate the predictions, we performed mixed dynamic muscle force optimizations (i.e., inverse skeletal dynamics with forward muscle activation and contraction dynamics) using data collected from a subject implanted with a force-measuring knee replacement. Twelve optimization problems (three cases with four subcases each) that minimized the sum of squares of muscle excitations were formulated to investigate how synergy controls affect knee contact force predictions. The three cases were: (1) Calibrate+Match where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously matched, (2) Precalibrate+Predict where experimental knee contact forces were predicted using precalibrated muscle model parameter values from the first case, and (3) Calibrate+Predict where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously predicted, all while matching inverse dynamic loads at the hip, knee, and ankle. The four subcases used either 44 independent controls or five synergy controls with and without EMG shape tracking. For the Calibrate+Match case, all four subcases closely reproduced the measured medial and lateral knee contact forces (R2 ≥ 0.94, root-mean-square (RMS) error < 66 N), indicating sufficient model fidelity for contact force prediction. For the Precalibrate+Predict and Calibrate+Predict cases, synergy controls yielded better contact force predictions (0.61 < R2 < 0.90, 83 N < RMS error < 161 N) than did independent controls (-0.15 < R2 < 0.79, 124 N < RMS error < 343 N) for corresponding subcases. For independent controls, contact force predictions improved when precalibrated model parameter values or EMG shape tracking was used. For synergy controls, contact force predictions were relatively insensitive to how model parameter values were calibrated, while EMG shape tracking made lateral (but not medial) contact force predictions worse. For the subject and optimization cost function analyzed in this study, use of subject-specific synergy controls improved the accuracy of knee contact force predictions, especially for lateral contact force when EMG shape tracking was omitted, and reduced prediction sensitivity to uncertainties in muscle model parameter values.

  15. Is a top-heavy initial mass function needed to reproduce the submillimetre galaxy number counts?

    NASA Astrophysics Data System (ADS)

    Safarzadeh, Mohammadtaher; Lu, Yu; Hayward, Christopher C.

    2017-12-01

    Matching the number counts and redshift distribution of submillimetre galaxies (SMGs) without invoking modifications to the initial mass function (IMF) has proved challenging for semi-analytic models (SAMs) of galaxy formation. We adopt a previously developed SAM that is constrained to match the z = 0 galaxy stellar mass function and makes various predictions which agree well with observational constraints; we do not recalibrate the SAM for this work. We implement three prescriptions to predict the submillimetre flux densities of the model galaxies; two depend solely on star formation rate, whereas the other also depends on the dust mass. By comparing the predictions of the models, we find that taking into account the dust mass, which affects the dust temperature and thus influences the far-infrared spectral energy distribution, is crucial for matching the number counts and redshift distribution of SMGs. Moreover, despite using a standard IMF, our model can match the observed SMG number counts and redshift distribution reasonably well, which contradicts the conclusions of some previous studies that a top-heavy IMF, in addition to taking into account the effect of dust mass, is needed to match these observations. Although we have not identified the key ingredient that is responsible for our model matching the observed SMG number counts and redshift distribution without IMF variation - which is challenging given the different prescriptions for physical processes employed in the SAMs of interest - our results demonstrate that in SAMs, IMF variation is degenerate with other physical processes, such as stellar feedback.

  16. Assessing the accuracy of improved force-matched water models derived from Ab initio molecular dynamics simulations.

    PubMed

    Köster, Andreas; Spura, Thomas; Rutkai, Gábor; Kessler, Jan; Wiebeler, Hendrik; Vrabec, Jadran; Kühne, Thomas D

    2016-07-15

    The accuracy of water models derived from ab initio molecular dynamics simulations by means of an improved force-matching scheme is assessed for various thermodynamic, transport, and structural properties. It is found that although the resulting force-matched water models are typically less accurate than fully empirical force fields in predicting thermodynamic properties, they are nevertheless much more accurate than generally appreciated in reproducing the structure of liquid water, in fact surpassing most of the commonly used empirical water models. This development demonstrates the feasibility of routinely parametrizing computationally efficient yet predictive potential energy functions based on accurate ab initio molecular dynamics simulations for a large variety of different systems. © 2016 Wiley Periodicals, Inc.

  17. The dark side of galaxy colour: evidence from new SDSS measurements of galaxy clustering and lensing

    NASA Astrophysics Data System (ADS)

    Hearin, Andrew P.; Watson, Douglas F.; Becker, Matthew R.; Reyes, Reinabelle; Berlind, Andreas A.; Zentner, Andrew R.

    2014-10-01

    The age-matching model has recently been shown to predict correctly the luminosity L and g - r colour of galaxies residing within dark matter haloes. The central tenet of the model is intuitive: older haloes tend to host galaxies with older stellar populations. In this paper, we demonstrate that age matching also correctly predicts the g - r colour trends exhibited in a wide variety of statistics of the galaxy distribution for stellar mass M* threshold samples. In particular, we present new Sloan Digital Sky Survey (SDSS) measurements of galaxy clustering and the galaxy-galaxy lensing signal ΔΣ as a function of M* and g - r colour, and show that age matching exhibits remarkable agreement with these and other statistics of low-redshift galaxies. In so doing, we also demonstrate good agreement between the galaxy-galaxy lensing observed by SDSS and the ΔΣ signal predicted by abundance matching, a new success of this model. We describe how age matching is a specific example of a larger class of conditional abundance matching models (CAM), a theoretical framework we introduce here for the first time. CAM provides a general formalism to study correlations at fixed mass between any galaxy property and any halo property. The striking success of our simple implementation of CAM suggests that this technique has the potential to describe the same set of data as alternative models, but with a dramatic reduction in the required number of parameters. CAM achieves this reduction by exploiting the capability of contemporary N-body simulations to determine dark matter halo properties other than mass alone, which distinguishes our model from conventional approaches to the galaxy-halo connection.

  18. Post-Modeling Histogram Matching of Maps Produced Using Regression Trees

    Treesearch

    Andrew J. Lister; Tonya W. Lister

    2006-01-01

    Spatial predictive models often use statistical techniques that in some way rely on averaging of values. Estimates from linear modeling are known to be susceptible to truncation of variance when the independent (predictor) variables are measured with error. A straightforward post-processing technique (histogram matching) for attempting to mitigate this effect is...
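    The post-processing step described above, replacing each predicted value with the reference value at the same rank so the output histogram matches the reference histogram, can be sketched as follows. This is a generic rank-based sketch, not the Forest Service implementation, and it assumes for simplicity that the prediction and reference sets are the same size.

```python
def histogram_match(predicted, reference):
    """Reshape model predictions so their distribution matches a reference.

    Each predicted value is replaced by the reference value at the same
    rank, restoring variance truncated by averaging in regression models.
    Assumes len(predicted) == len(reference); interpolate ranks otherwise.
    """
    order = sorted(range(len(predicted)), key=lambda i: predicted[i])
    ref_sorted = sorted(reference)
    out = [0.0] * len(predicted)
    for rank, i in enumerate(order):
        out[i] = ref_sorted[rank]       # same rank, reference value
    return out
```

Note how a variance-truncated prediction set recovers the full spread of the reference values while preserving the spatial ordering of the original map.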

  19. The galaxy-dark matter halo connection: which galaxy properties are correlated with the host halo mass?

    NASA Astrophysics Data System (ADS)

    Contreras, S.; Baugh, C. M.; Norberg, P.; Padilla, N.

    2015-09-01

    We demonstrate how the properties of a galaxy depend on the mass of its host dark matter subhalo, using two independent models of galaxy formation. For the cases of stellar mass and black hole mass, the median property value displays a monotonic dependence on subhalo mass. The slope of the relation changes for subhalo masses for which heating by active galactic nuclei becomes important. The median property values are predicted to be remarkably similar for central and satellite galaxies. The two models predict considerable scatter around the median property value, though the size of the scatter is model dependent. There is only modest evolution with redshift in the median galaxy property at a fixed subhalo mass. Properties such as cold gas mass and star formation rate, however, are predicted to have a complex dependence on subhalo mass. In these cases, subhalo mass is not a good indicator of the value of the galaxy property. We illustrate how the predictions in the galaxy property-subhalo mass plane differ from the assumptions made in some empirical models of galaxy clustering by reconstructing the model output using a basic subhalo abundance matching scheme. In its simplest form, abundance matching generally does not reproduce the clustering predicted by the models, typically resulting in an overprediction of the clustering signal. Using the predictions of the galaxy formation model for the correlations between pairs of galaxy properties, the basic abundance matching scheme can be extended to reproduce the model predictions more faithfully for a wider range of galaxy properties. Our results have implications for the analysis of galaxy clustering, particularly for low abundance samples.
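    The "basic subhalo abundance matching scheme" used above for the reconstruction can be sketched in a few lines: rank subhaloes by mass, rank galaxies by the chosen property, and pair them rank by rank. This is a minimal illustration of the idea, without scatter or the correlated-property extension the authors describe; the dictionary return format is an assumption made here.

```python
def abundance_match(halo_masses, galaxy_props):
    """Basic (sub)halo abundance matching by monotonic rank ordering.

    The most massive subhalo is assigned the galaxy with the largest
    property value, and so on down the ranks. Returns a mapping from
    halo index to assigned galaxy property.
    """
    halo_rank = sorted(range(len(halo_masses)),
                       key=lambda i: halo_masses[i], reverse=True)
    props = sorted(galaxy_props, reverse=True)
    return {i: p for i, p in zip(halo_rank, props)}
```

As the abstract notes, this zero-scatter monotonic assignment is exactly what fails to reproduce the models' clustering for properties such as cold gas mass, which motivates the extended scheme.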

  20. Polarity Comparison Between the Coronal PFSS Model Field and the Heliospheric Magnetic Field at 1 AU Over Solar Cycles 21-24

    NASA Astrophysics Data System (ADS)

    Koskela, J. S.; Virtanen, I. I.; Mursula, K.

    2015-12-01

The solar coronal magnetic field forms an important link between the underlying source in the solar photosphere and the heliospheric magnetic field (HMF). The coronal field has traditionally been calculated from photospheric observations using various magnetic field models between the photosphere and the corona, in particular the potential field source surface (PFSS) model. Despite its simplicity, the predictions of the PFSS model generally agree quite well with heliospheric observations and match very well with the predictions of more elaborate models. We make here a detailed comparison of the predictions of the PFSS model with the HMF observed at 1 AU. We use the photospheric field measured at the Wilcox Solar Observatory, SDO/HMI, SOHO/MDI and SOLIS, and the heliospheric magnetic field measurements at 1 AU collected within the OMNI 2 dataset. This database covers solar cycles 21-24. We use different source surface distances and different numbers of harmonic components for the PFSS model. We find an optimum polarity match between the coronal field and the HMF for a source surface distance of 3.5 Rs. Increasing the number of harmonic components beyond the quadrupole does not essentially improve the polarity agreement, indicating that the large-scale structure of the HMF at 1 AU is responsible for the agreement, while the small-scale structure is greatly modified between the corona and 1 AU. We also discuss the solar cycle evolution of the polarity match and find that the PFSS model prediction is most reliable during the declining phase of the solar cycle. We also find large differences in the match percentage between the northern and southern hemispheres during times of systematic southward shift of the heliospheric current sheet (the Bashful ballerina).
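In its simplest form, the polarity comparison at the heart of this study reduces to counting how often the sign of the model field agrees with the sign of the observed HMF. A minimal sketch with hypothetical daily polarity series (+1 outward, -1 inward):

```python
# Sketch of a polarity match fraction between model and observed field
# polarities. The two daily series below are hypothetical.

def polarity_match_fraction(model_polarity, observed_polarity):
    """Fraction of days on which the model and observed polarities agree."""
    agree = sum(1 for m, o in zip(model_polarity, observed_polarity) if m == o)
    return agree / len(model_polarity)

model = [+1, +1, -1, -1, +1, -1]   # hypothetical PFSS polarity, mapped to 1 AU
omni  = [+1, -1, -1, -1, +1, +1]   # hypothetical observed HMF polarity
frac = polarity_match_fraction(model, omni)   # 4 of 6 days agree
```

In the actual study the model polarity would first be ballistically mapped from the source surface to 1 AU before the comparison.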

  1. The application of computer color matching techniques to the matching of target colors in a food substrate: a first step in the development of foods with customized appearance.

    PubMed

    Kim, Sandra; Golding, Matt; Archer, Richard H

    2012-06-01

    A predictive color matching model based on the colorimetric technique was developed and used to calculate the concentrations of primary food dyes needed in a model food substrate to match a set of standard tile colors. This research is the first stage in the development of novel three-dimensional (3D) foods in which color images or designs can be rapidly reproduced in 3D form. Absorption coefficients were derived for each dye, from a concentration series in the model substrate, a microwave-baked cake. When used in a linear, additive blending model these coefficients were able to predict cake color from selected dye blends to within 3 ΔE*(ab,10) color difference units, or within the limit of a visually acceptable match. Absorption coefficients were converted to pseudo X₁₀, Y₁₀, and Z₁₀ tri-stimulus values (X₁₀(P), Y₁₀(P), Z₁₀(P)) for colorimetric matching. The Allen algorithm was used to calculate dye concentrations to match the X₁₀(P), Y₁₀(P), and Z₁₀(P) values of each tile color. Several recipes for each color were computed with the tile specular component included or excluded, and tested in the cake. Some tile colors proved out-of-gamut, limited by legal dye concentrations; these were scaled to within legal range. Actual differences suggest reasonable visual matches could be achieved for within-gamut tile colors. The Allen algorithm, with appropriate adjustments of concentration outputs, could provide a sufficiently rapid and accurate calculation tool for 3D color food printing. The predictive color matching approach shows potential for use in a novel embodiment of 3D food printing in which a color image or design could be rendered within a food matrix through the selective blending of primary dyes to reproduce each color element. The on-demand nature of this food application requires rapid color outputs which could be provided by the color matching technique, currently used in nonfood industries, rather than by empirical food industry methods. 
© 2012 Institute of Food Technologists®
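Because the blending model described above is linear and additive, each dye contributes its absorption coefficient times its concentration, so matching a target color reduces to solving a linear system. The paper uses the Allen algorithm; the sketch below is a generic two-dye illustration with hypothetical coefficients:

```python
# Illustrative sketch (not the Allen algorithm itself): with a linear,
# additive blending model, the predicted pseudo-tristimulus values are a
# linear function of dye concentrations, so matching a target reduces to
# solving a linear system. All coefficients are hypothetical.

def solve_2x2(a, b):
    """Solve a 2x2 linear system a @ x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    x0 = (b[0] * a[1][1] - b[1] * a[0][1]) / det
    x1 = (a[0][0] * b[1] - a[1][0] * b[0]) / det
    return [x0, x1]

# Columns: per-unit contribution of dye 1 and dye 2 to two pseudo-stimulus values.
A = [[0.8, 0.1],
     [0.2, 0.9]]
target = [0.5, 0.4]          # target minus the substrate's own contribution
conc = solve_2x2(A, target)  # dye concentrations that reproduce the target
# Scale to the legal range, as the paper does for out-of-gamut colors.
legal_max = 1.0
conc = [min(max(c, 0.0), legal_max) for c in conc]
```

A real implementation would work with three tristimulus values and three or more dyes, but the structure of the calculation is the same.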

  2. Longitudinal development of match-running performance in elite male youth soccer players.

    PubMed

    Saward, C; Morris, J G; Nevill, M E; Nevill, A M; Sunderland, C

    2016-08-01

    This study longitudinally examined age-related changes in the match-running performance of retained and released elite youth soccer players aged 8-18 years. The effect of playing position on age-related changes was also considered. Across three seasons, 263 elite youth soccer players were assessed in 1-29 competitive matches (988 player-matches). For each player-match, total distance and distances covered at age group-specific speed zones (low-speed, high-speed, sprinting) were calculated using 1 Hz or 5 Hz GPS. Mixed modeling predicted that match-running performance developed nonlinearly, with age-related changes best described with quadratic age terms. Modeling predicted that playing position significantly modified age-related changes (P < 0.05) and retained players covered significantly more low-speed distance compared with released players (P < 0.05), by 75 ± 71 m/h (mean ± 95% CI; effect size ± 95% CI: 0.35 ± 0.34). Model intercepts randomly varied, indicating differences between players in match-running performance unexplained by age, playing position or status. These findings may assist experts in developing training programs specific to the match play demands of players of different ages and playing positions. Although retained players covered more low-speed distance than released players, further study of the actions comprising low-speed distance during match play is warranted to better understand factors differentiating retained and released players. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
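The quadratic age terms reported above imply a peaked development curve for match-running performance. A minimal sketch with hypothetical fixed-effect coefficients (the random player effects of the mixed model are omitted):

```python
# Sketch of a quadratic age model for match-running performance,
# y = b0 + b1*age + b2*age^2. Coefficients below are hypothetical.

def predicted_distance(age, b0, b1, b2):
    """Predicted distance covered (m/h) at a given age (years)."""
    return b0 + b1 * age + b2 * age * age

b0, b1, b2 = 2000.0, 800.0, -25.0   # hypothetical fixed effects
peak_age = -b1 / (2 * b2)           # vertex of the quadratic: 16 years here
```

The negative quadratic coefficient makes predicted performance rise through early adolescence and flatten toward the peak, which is the nonlinear shape the mixed models describe.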

  3. Match statistics related to winning in the group stage of 2014 Brazil FIFA World Cup.

    PubMed

    Liu, Hongyou; Gomez, Miguel-Ángel; Lago-Peñas, Carlos; Sampaio, Jaime

    2015-01-01

Identifying match statistics that strongly contribute to winning in football matches is a very important step towards a more predictive and prescriptive performance analysis. The current study aimed to determine relationships between 24 match statistics and the match outcome (win, loss and draw) in all games and close games of the group stage of the FIFA World Cup (2014, Brazil) by employing the generalised linear model. The cumulative logistic regression was run in the model taking the value of each match statistic as the independent variable to predict the logarithm of the odds of winning. Relationships were assessed as effects of a two-standard-deviation increase in the value of each variable on the change in the probability of a team winning a match. Non-clinical magnitude-based inferences were employed and were evaluated by using the smallest worthwhile change. Results showed that for all the games, nine match statistics had clearly positive effects on the probability of winning (Shot, Shot on Target, Shot from Counter Attack, Shot from Inside Area, Ball Possession, Short Pass, Average Pass Streak, Aerial Advantage and Tackle), four had clearly negative effects (Shot Blocked, Cross, Dribble and Red Card), and the other 12 statistics had either trivial or unclear effects. For the close games, however, the effect of Aerial Advantage became trivial and that of Yellow Card clearly negative. Information from the tactical modelling can provide a more thorough and objective match understanding to coaches and performance analysts for evaluating post-match performances and for scouting upcoming oppositions.
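The reported effect sizes describe how a two-standard-deviation increase in a match statistic shifts the probability of winning under a logistic model. A sketch with hypothetical coefficients:

```python
import math

# Sketch: effect of a two-standard-deviation increase in a match statistic
# (here, shots) on win probability under a logistic model. The mean, SD and
# fitted coefficients below are hypothetical.

def win_probability(x, beta0, beta1):
    """Logistic model: P(win) = 1 / (1 + exp(-(beta0 + beta1*x)))."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

mean_shots, sd_shots = 13.0, 5.0
beta0, beta1 = -2.0, 0.12           # hypothetical fitted coefficients

p_at_mean = win_probability(mean_shots, beta0, beta1)
p_plus_2sd = win_probability(mean_shots + 2 * sd_shots, beta0, beta1)
effect = p_plus_2sd - p_at_mean     # change in the probability of winning
```

A clearly positive `effect`, judged against the smallest worthwhile change, corresponds to the "clearly positive" inferences reported for statistics such as Shot and Ball Possession.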

  4. New Prediction Model for Probe Specificity in an Allele-Specific Extension Reaction for Haplotype-Specific Extraction (HSE) of Y Chromosome Mixtures

    PubMed Central

    Rothe, Jessica; Watkins, Norman E.; Nagy, Marion

    2012-01-01

    Allele-specific extension reactions (ASERs) use 3′ terminus-specific primers for the selective extension of completely annealed matches by polymerase. The ability of the polymerase to extend non-specific 3′ terminal mismatches leads to a failure of the reaction, a process that is only partly understood and predictable, and often requires time-consuming assay design. In our studies we investigated haplotype-specific extraction (HSE) for the separation of male DNA mixtures. HSE is an ASER and provides the ability to distinguish between diploid chromosomes from one or more individuals. Here, we show that the success of HSE and allele-specific extension depend strongly on the concentration difference between complete match and 3′ terminal mismatch. Using the oligonucleotide-modeling platform Visual Omp, we demonstrated the dependency of the discrimination power of the polymerase on match- and mismatch-target hybridization between different probe lengths. Therefore, the probe specificity in HSE could be predicted by performing a relative comparison of different probe designs with their simulated differences between the duplex concentration of target-probe match and mismatches. We tested this new model for probe design in more than 300 HSE reactions with 137 different probes and obtained an accordance of 88%. PMID:23049901

  5. New prediction model for probe specificity in an allele-specific extension reaction for haplotype-specific extraction (HSE) of Y chromosome mixtures.

    PubMed

    Rothe, Jessica; Watkins, Norman E; Nagy, Marion

    2012-01-01

    Allele-specific extension reactions (ASERs) use 3' terminus-specific primers for the selective extension of completely annealed matches by polymerase. The ability of the polymerase to extend non-specific 3' terminal mismatches leads to a failure of the reaction, a process that is only partly understood and predictable, and often requires time-consuming assay design. In our studies we investigated haplotype-specific extraction (HSE) for the separation of male DNA mixtures. HSE is an ASER and provides the ability to distinguish between diploid chromosomes from one or more individuals. Here, we show that the success of HSE and allele-specific extension depend strongly on the concentration difference between complete match and 3' terminal mismatch. Using the oligonucleotide-modeling platform Visual Omp, we demonstrated the dependency of the discrimination power of the polymerase on match- and mismatch-target hybridization between different probe lengths. Therefore, the probe specificity in HSE could be predicted by performing a relative comparison of different probe designs with their simulated differences between the duplex concentration of target-probe match and mismatches. We tested this new model for probe design in more than 300 HSE reactions with 137 different probes and obtained an accordance of 88%.
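The relative-comparison idea above can be sketched as ranking candidate probe designs by their simulated match/mismatch duplex-concentration gap. The numbers below are hypothetical stand-ins for Visual Omp simulation outputs:

```python
# Sketch of the probe-design comparison described: rank candidate probes by
# the simulated difference between target-probe match and 3'-mismatch duplex
# concentrations, and keep the design with the largest gap. All fractions
# bound are hypothetical placeholders for simulated values.

probes = {
    "probe_18nt": {"match_duplex": 0.92, "mismatch_duplex": 0.40},
    "probe_21nt": {"match_duplex": 0.95, "mismatch_duplex": 0.15},
    "probe_25nt": {"match_duplex": 0.97, "mismatch_duplex": 0.70},
}

def discrimination(p):
    """Gap between match and mismatch duplex concentrations."""
    return p["match_duplex"] - p["mismatch_duplex"]

best = max(probes, key=lambda name: discrimination(probes[name]))
# probe_21nt has the largest match/mismatch gap (0.80) and would be selected.
```

The key point of the paper is that only the relative gap between designs needs to be predicted well, not the absolute duplex concentrations.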

  6. TargetSpy: a supervised machine learning approach for microRNA target prediction.

    PubMed

    Sturm, Martin; Hackenberg, Michael; Langenberger, David; Frishman, Dmitrij

    2010-05-28

Virtually all currently available microRNA target site prediction algorithms require the presence of a (conserved) seed match to the 5' end of the microRNA. Recently, however, it has been shown that this requirement might be too stringent, leading to a substantial number of missed target sites. We developed TargetSpy, a novel computational approach for predicting target sites regardless of the presence of a seed match. It is based on machine learning and automatic feature selection using a wide spectrum of compositional, structural, and base pairing features covering current biological knowledge. Our model does not rely on evolutionary conservation, which allows the detection of species-specific interactions and makes TargetSpy suitable for analyzing unconserved genomic sequences. In order to allow for an unbiased comparison of TargetSpy to other methods, we classified all algorithms into three groups: I) no seed match requirement, II) seed match requirement, and III) conserved seed match requirement. TargetSpy predictions for classes II and III are generated by appropriate postfiltering. On a human dataset revealing fold-change in protein production for five selected microRNAs our method shows superior performance in all classes. In Drosophila melanogaster, not only are our class II and III predictions on par with other algorithms, but notably the class I (no-seed) predictions are only marginally less accurate. We estimate that TargetSpy predicts between 26 and 112 functional target sites without a seed match per microRNA that are missed by all other currently available algorithms. Only a few algorithms can predict target sites without demanding a seed match and TargetSpy demonstrates a substantial improvement in prediction accuracy in that class. Furthermore, when conservation and the presence of a seed match are required, the performance is comparable with state-of-the-art algorithms.
TargetSpy was trained on mouse and performs well in human and drosophila, suggesting that it may be applicable to a broad range of species. Moreover, we have demonstrated that the application of machine learning techniques in combination with upcoming deep sequencing data results in a powerful microRNA target site prediction tool http://www.targetspy.org.
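For context, the canonical seed match that TargetSpy does not require is the reverse complement of microRNA nucleotides 2-8 appearing in the target sequence. A minimal sketch with illustrative RNA sequences:

```python
# Sketch of a canonical 7mer seed-match check: the reverse complement of
# microRNA nucleotides 2-8 must appear in the target sequence. The miRNA
# and UTR sequences below are illustrative, not real annotations.

COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}

def has_seed_match(mirna, utr):
    """True if the UTR contains the reverse complement of miRNA nt 2-8."""
    seed = mirna[1:8]                           # nucleotides 2-8 (0-based slice)
    site = "".join(COMP[b] for b in reversed(seed))
    return site in utr

mirna = "UGAGGUAGUAGGUUGUAUAGUU"    # let-7-like sequence, illustrative
utr = "AAACUAUACAACCUACUACCUCAAA"   # illustrative target containing a seed site
print(has_seed_match(mirna, utr))
```

Classes II and III in the comparison above are obtained by postfiltering predictions with a check of this kind (plus conservation for class III), while class I drops it entirely.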

  7. TargetSpy: a supervised machine learning approach for microRNA target prediction

    PubMed Central

    2010-01-01

Background Virtually all currently available microRNA target site prediction algorithms require the presence of a (conserved) seed match to the 5' end of the microRNA. Recently, however, it has been shown that this requirement might be too stringent, leading to a substantial number of missed target sites. Results We developed TargetSpy, a novel computational approach for predicting target sites regardless of the presence of a seed match. It is based on machine learning and automatic feature selection using a wide spectrum of compositional, structural, and base pairing features covering current biological knowledge. Our model does not rely on evolutionary conservation, which allows the detection of species-specific interactions and makes TargetSpy suitable for analyzing unconserved genomic sequences. In order to allow for an unbiased comparison of TargetSpy to other methods, we classified all algorithms into three groups: I) no seed match requirement, II) seed match requirement, and III) conserved seed match requirement. TargetSpy predictions for classes II and III are generated by appropriate postfiltering. On a human dataset revealing fold-change in protein production for five selected microRNAs our method shows superior performance in all classes. In Drosophila melanogaster, not only are our class II and III predictions on par with other algorithms, but notably the class I (no-seed) predictions are only marginally less accurate. We estimate that TargetSpy predicts between 26 and 112 functional target sites without a seed match per microRNA that are missed by all other currently available algorithms. Conclusion Only a few algorithms can predict target sites without demanding a seed match and TargetSpy demonstrates a substantial improvement in prediction accuracy in that class. Furthermore, when conservation and the presence of a seed match are required, the performance is comparable with state-of-the-art algorithms.
TargetSpy was trained on mouse and performs well in human and drosophila, suggesting that it may be applicable to a broad range of species. Moreover, we have demonstrated that the application of machine learning techniques in combination with upcoming deep sequencing data results in a powerful microRNA target site prediction tool http://www.targetspy.org. PMID:20509939

  8. Journal Article: Infant Exposure to Dioxin-Like Compounds in Breast Milk

    EPA Science Inventory

    A simple, one-compartment, first-order pharmacokinetic model is used to predict the infant body burden of dioxin-like compounds that results from breast-feeding. Validation testing of the model showed a good match between predictions and measurements of dioxin toxic equivalents ...
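A one-compartment, first-order model of the kind described has a closed-form solution: body burden B obeys dB/dt = intake - k*B, where the elimination rate k is set by the half-life. A sketch with hypothetical parameter values (not the EPA model's actual inputs):

```python
import math

# Sketch of a one-compartment, first-order pharmacokinetic model:
# dB/dt = intake - k*B, with k = ln(2)/half-life. All parameter values
# below are hypothetical, chosen only to illustrate the model's shape.

def body_burden(intake_per_day, half_life_days, t_days):
    """Analytic solution of dB/dt = intake - k*B with B(0) = 0."""
    k = math.log(2) / half_life_days
    return (intake_per_day / k) * (1.0 - math.exp(-k * t_days))

# Illustrative: constant intake from breast milk over 180 days of nursing.
burden = body_burden(intake_per_day=1.0, half_life_days=120.0, t_days=180.0)
steady_state = 1.0 / (math.log(2) / 120.0)   # asymptotic burden, intake / k
```

The burden rises toward the steady-state value intake/k, which is the qualitative behavior such a model's predictions are validated against.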

  9. Comparison of four modeling tools for the prediction of potential distribution for non-indigenous weeds in the United States

    USGS Publications Warehouse

    Magarey, Roger; Newton, Leslie; Hong, Seung C.; Takeuchi, Yu; Christie, Dave; Jarnevich, Catherine S.; Kohl, Lisa; Damus, Martin; Higgins, Steven I.; Miller, Leah; Castro, Karen; West, Amanda; Hastings, John; Cook, Gericke; Kartesz, John; Koop, Anthony

    2018-01-01

This study compares four models for predicting the potential distribution of non-indigenous weed species in the conterminous U.S. The comparison focused on evaluating modeling tools and protocols as currently used for weed risk assessment or for predicting the potential distribution of invasive weeds. We used six weed species (three highly invasive and three less invasive non-indigenous species) that have been established in the U.S. for more than 75 years. The experiment involved providing non-U.S. location data to users familiar with one of the four evaluated techniques, who then developed predictive models that were applied to the United States without knowing the identity of the species or its U.S. distribution. We compared a simple GIS climate matching technique known as Proto3, a simple climate matching tool CLIMEX Match Climates, the correlative model MaxEnt, and a process model known as the Thornley Transport Resistance (TTR) model. Two experienced users ran each modeling tool except TTR, which had one user. Models were trained with global species distribution data excluding any U.S. data, and then were evaluated using the current known U.S. distribution. The influence of weed species identity and modeling tool on prevalence and sensitivity effects was compared using a generalized linear mixed model. The choice of modeling tool itself had low statistical significance, while weed species alone accounted for 69.1% and 48.5% of the variance for prevalence and sensitivity, respectively. These results suggest that simple modeling tools might perform as well as complex ones in the case of predicting potential distribution for a weed not yet present in the United States. Considerations of model accuracy should also be balanced with those of reproducibility and ease of use.
More important than the choice of modeling tool is the construction of robust protocols and testing both new and experienced users under blind test conditions that approximate operational conditions.

  10. Foveated model observers to predict human performance in 3D images

    NASA Astrophysics Data System (ADS)

    Lago, Miguel A.; Abbey, Craig K.; Eckstein, Miguel P.

    2017-03-01

We evaluate whether 3D search requires model observers that take into account peripheral human visual processing (foveated models) to predict human observer performance. We show that two different 3D tasks, free search and location-known detection, influence the relative human visual detectability of two signals of different sizes in synthetic backgrounds mimicking the noise found in 3D digital breast tomosynthesis. One of the signals resembled a microcalcification (a small and bright sphere), while the other was designed to look like a mass (a larger Gaussian blob). We evaluated current standard model observers (Hotelling; Channelized Hotelling; non-prewhitening matched filter with eye filter, NPWE; and non-prewhitening matched filter model, NPW) and showed that they incorrectly predict the relative detectability of the two signals in 3D search. We propose a new model observer (3D Foveated Channelized Hotelling Observer) that incorporates the properties of the visual system over a large visual field (fovea and periphery). We show that the foveated model observer can accurately predict the rank order of detectability of the signals in 3D images for each task. Together, these results motivate the use of a new generation of foveated model observers for predicting image quality for search tasks in 3D imaging modalities such as digital breast tomosynthesis or computed tomography.
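The simplest of the standard observers evaluated, the non-prewhitening (NPW) matched filter, scores an image by the dot product of the image data with the known signal template. A toy 1-D sketch with hypothetical values:

```python
# Sketch of a non-prewhitening (NPW) matched-filter observer: the decision
# variable is the dot product of the known signal template with the image.
# The 1-D signal profile and background values below are hypothetical.

def npw_score(template, image):
    """NPW decision variable: template . image."""
    return sum(t * g for t, g in zip(template, image))

signal = [0.0, 1.0, 2.0, 1.0, 0.0]         # toy 1-D signal profile
background = [0.1, -0.2, 0.05, 0.3, -0.1]  # toy background sample
signal_present = [b + s for b, s in zip(background, signal)]

score_present = npw_score(signal, signal_present)
score_absent = npw_score(signal, background)
# A higher score under signal-present supports a "detected" decision.
```

The foveated observer proposed in the paper replaces this fixed template with responses that degrade with eccentricity from the point of fixation, which is what lets it capture search behavior.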

  11. An observer's guide to the (Local Group) dwarf galaxies: predictions for their own dwarf satellite populations

    NASA Astrophysics Data System (ADS)

    Dooley, Gregory A.; Peter, Annika H. G.; Yang, Tianyi; Willman, Beth; Griffen, Brendan F.; Frebel, Anna

    2017-11-01

A recent surge in the discovery of new ultrafaint dwarf satellites of the Milky Way has inspired the idea of searching for faint satellites (stellar masses down to ~10³ M⊙) around the Local Group field dwarf galaxies. We predict a 99 per cent chance that at least one satellite with stellar mass M* > 10⁵ M⊙ exists around the combined five Local Group field dwarf galaxies with the largest stellar mass. When considering satellites with M* > 10⁴ M⊙, we predict a combined 5-25 satellites for the five largest field dwarfs, and 10-50 for the whole Local Group field dwarf population. Because of the relatively small number of predicted dwarfs, and their extended spatial distribution, a large fraction of each Local Group dwarf's virial volume will need to be surveyed to guarantee discoveries. We compute the predicted number of satellites in a given field of view of specific Local Group galaxies, as a function of minimum satellite luminosity, and explicitly obtain such values for the Solitary Local dwarfs survey. Uncertainties in abundance-matching and reionization models are large, implying that comprehensive searches could lead to refinements of both models.

  12. Changing the approach to treatment choice in epilepsy using big data.

    PubMed

    Devinsky, Orrin; Dilley, Cynthia; Ozery-Flato, Michal; Aharonov, Ranit; Goldschmidt, Ya'ara; Rosen-Zvi, Michal; Clark, Chris; Fritz, Patty

    2016-03-01

A UCB-IBM collaboration explored the application of machine learning to large claims databases to construct an algorithm for antiepileptic drug (AED) choice for individual patients. Claims data were collected between January 2006 and September 2011 for patients with epilepsy aged > 16 years. A subset of patient claims with a valid index date of AED treatment change (new, add, or switch) were used to train the AED prediction model by retrospectively evaluating an index date treatment for subsequent treatment change. Based on the trained model, a model-predicted AED regimen with the lowest likelihood of treatment change was assigned to each patient in the group of test claims, and outcomes were evaluated to test model validity. The model had 72% area under the receiver operating characteristic curve, indicating good predictive power. Patients who were given the model-predicted AED regimen had significantly longer survival times (time until a treatment change event) and lower expected health resource utilization on average than those who received another treatment. The actual prescribed AED regimen at the index date matched the model-predicted AED regimen in only 13% of cases; there were large discrepancies in the frequency of use of certain AEDs/combinations between model-predicted AED regimens and those actually prescribed. Chances of treatment success were improved if patients received the model-predicted treatment. Using the model's prediction system may enable personalized, evidence-based epilepsy care, accelerating the match between patients and their ideal therapy, thereby delivering significantly better health outcomes for patients and providing health-care savings by applying resources more efficiently. Our goal will be to strengthen the predictive power of the model by integrating diverse data sets and potentially moving to prospective data collection. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
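The assignment step described above amounts to picking, for each patient, the regimen with the lowest model-predicted likelihood of a subsequent treatment change. A minimal sketch with hypothetical regimen names and scores:

```python
# Sketch of the regimen-assignment step: given model-predicted likelihoods
# of a subsequent treatment change for each candidate regimen, recommend
# the regimen with the lowest likelihood. Names and scores are hypothetical.

change_likelihood = {
    "AED_A": 0.45,
    "AED_B": 0.30,
    "AED_A+AED_B": 0.55,
}

recommended = min(change_likelihood, key=change_likelihood.get)
# "AED_B" has the lowest predicted likelihood of a treatment change event.
```

In the study, these likelihoods come from the trained claims-data model, and the recommendation is compared against the regimen actually prescribed at the index date.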

  13. CFD Simulation and Experimental Validation of Fluid Flow and Particle Transport in a Model of Alveolated Airways

    PubMed Central

    Ma, Baoshun; Ruwet, Vincent; Corieri, Patricia; Theunissen, Raf; Riethmuller, Michel; Darquenne, Chantal

    2009-01-01

    Accurate modeling of air flow and aerosol transport in the alveolated airways is essential for quantitative predictions of pulmonary aerosol deposition. However, experimental validation of such modeling studies has been scarce. The objective of this study is to validate CFD predictions of flow field and particle trajectory with experiments within a scaled-up model of alveolated airways. Steady flow (Re = 0.13) of silicone oil was captured by particle image velocimetry (PIV), and the trajectories of 0.5 mm and 1.2 mm spherical iron beads (representing 0.7 to 14.6 μm aerosol in vivo) were obtained by particle tracking velocimetry (PTV). At twelve selected cross sections, the velocity profiles obtained by CFD matched well with those by PIV (within 1.7% on average). The CFD predicted trajectories also matched well with PTV experiments. These results showed that air flow and aerosol transport in models of human alveolated airways can be simulated by CFD techniques with reasonable accuracy. PMID:20161301

  14. CFD Simulation and Experimental Validation of Fluid Flow and Particle Transport in a Model of Alveolated Airways.

    PubMed

    Ma, Baoshun; Ruwet, Vincent; Corieri, Patricia; Theunissen, Raf; Riethmuller, Michel; Darquenne, Chantal

    2009-05-01

Accurate modeling of air flow and aerosol transport in the alveolated airways is essential for quantitative predictions of pulmonary aerosol deposition. However, experimental validation of such modeling studies has been scarce. The objective of this study is to validate CFD predictions of flow field and particle trajectory with experiments within a scaled-up model of alveolated airways. Steady flow (Re = 0.13) of silicone oil was captured by particle image velocimetry (PIV), and the trajectories of 0.5 mm and 1.2 mm spherical iron beads (representing 0.7 to 14.6 μm aerosol in vivo) were obtained by particle tracking velocimetry (PTV). At twelve selected cross sections, the velocity profiles obtained by CFD matched well with those by PIV (within 1.7% on average). The CFD predicted trajectories also matched well with PTV experiments. These results showed that air flow and aerosol transport in models of human alveolated airways can be simulated by CFD techniques with reasonable accuracy.

  15. The proposed 'concordance-statistic for benefit' provided a useful metric when modeling heterogeneous treatment effects.

    PubMed

    van Klaveren, David; Steyerberg, Ewout W; Serruys, Patrick W; Kent, David M

    2018-02-01

Clinical prediction models that support treatment decisions are usually evaluated for their ability to predict the risk of an outcome rather than treatment benefit: the difference between outcome risk with vs. without therapy. We aimed to define performance metrics for a model's ability to predict treatment benefit. We analyzed data of the Synergy between Percutaneous Coronary Intervention with Taxus and Cardiac Surgery (SYNTAX) trial and of three recombinant tissue plasminogen activator trials. We assessed alternative prediction models with a conventional risk concordance-statistic (c-statistic) and a novel c-statistic for benefit. We defined observed treatment benefit by the outcomes in pairs of patients matched on predicted benefit but discordant for treatment assignment. The 'c-for-benefit' represents the probability that from two randomly chosen matched patient pairs with unequal observed benefit, the pair with greater observed benefit also has a higher predicted benefit. Compared to a model without treatment interactions, the SYNTAX score II had improved ability to discriminate treatment benefit (c-for-benefit 0.590 vs. 0.552), despite having similar risk discrimination (c-statistic 0.725 vs. 0.719). However, for the simplified stroke-thrombolytic predictive instrument (TPI) vs. the original stroke-TPI, the c-for-benefit (0.584 vs. 0.578) was similar. The proposed methodology has the potential to measure a model's ability to predict treatment benefit not captured with conventional performance metrics. Copyright © 2017 Elsevier Inc. All rights reserved.
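The c-for-benefit defined above can be sketched directly from its description: each matched treated/control pair yields an observed benefit (control outcome minus treated outcome), and the statistic is the fraction of informative pair-of-pair comparisons in which greater observed benefit coincides with higher predicted benefit. The pair data below are hypothetical:

```python
from itertools import combinations

# Sketch of the c-for-benefit. Each entry is a pair of patients matched on
# predicted benefit but discordant for treatment assignment:
# (predicted_benefit, treated_outcome, control_outcome), with outcome 1 = event.
# All values are hypothetical.

pairs = [
    (0.05, 1, 1),   # observed benefit 0
    (0.10, 0, 0),   # observed benefit 0
    (0.20, 0, 1),   # observed benefit 1
    (0.30, 0, 1),   # observed benefit 1
]

def c_for_benefit(pairs):
    """Probability that, of two pairs with unequal observed benefit, the pair
    with greater observed benefit also has higher predicted benefit."""
    concordant, informative = 0, 0
    for (p1, t1, c1), (p2, t2, c2) in combinations(pairs, 2):
        ob1, ob2 = c1 - t1, c2 - t2
        if ob1 == ob2:
            continue   # ties in observed benefit are uninformative
        informative += 1
        if (ob1 - ob2) * (p1 - p2) > 0:
            concordant += 1
    return concordant / informative

print(c_for_benefit(pairs))
```

In this toy data every informative comparison is concordant, so the statistic is 1.0; in the trials analyzed, values near 0.55-0.59 were observed.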

  16. Top-of-Climb Matching Method for Reducing Aircraft Trajectory Prediction Errors.

    PubMed

    Thipphavong, David P

    2016-09-01

    The inaccuracies of the aircraft performance models utilized by trajectory predictors with regard to takeoff weight, thrust, climb profile, and other parameters result in altitude errors during the climb phase that often exceed the vertical separation standard of 1000 feet. This study investigates the potential reduction in altitude trajectory prediction errors that could be achieved for climbing flights if just one additional parameter is made available: top-of-climb (TOC) time. The TOC-matching method developed and evaluated in this paper is straightforward: a set of candidate trajectory predictions is generated using different aircraft weight parameters, and the one that most closely matches TOC in terms of time is selected. This algorithm was tested using more than 1000 climbing flights in Fort Worth Center. Compared to the baseline trajectory predictions of a real-time research prototype (Center/TRACON Automation System), the TOC-matching method reduced the altitude root mean square error (RMSE) for a 5-minute prediction time by 38%. It also decreased the percentage of flights with absolute altitude error greater than the vertical separation standard of 1000 ft for the same look-ahead time from 55% to 30%.
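The TOC-matching selection step is essentially an argmin: among candidate trajectory predictions generated with different weight parameters, keep the one whose predicted TOC time is closest to the observed TOC time. A sketch with hypothetical candidates:

```python
# Sketch of the TOC-matching method's selection step: pick the candidate
# weight whose predicted top-of-climb (TOC) time best matches the observed
# TOC time. The candidate weights and times below are hypothetical.

def select_by_toc(candidates, observed_toc_time):
    """candidates: {weight_kg: predicted_toc_time_s}; return the best weight."""
    return min(candidates, key=lambda w: abs(candidates[w] - observed_toc_time))

candidates = {60000: 900.0, 65000: 1020.0, 70000: 1140.0}
best_weight = select_by_toc(candidates, observed_toc_time=1000.0)
# 65000 kg: its predicted TOC time (1020 s) is closest to the observed 1000 s.
```

The selected candidate's full altitude profile then replaces the baseline prediction, which is where the reported 38% RMSE reduction comes from.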

  17. Top-of-Climb Matching Method for Reducing Aircraft Trajectory Prediction Errors

    PubMed Central

    Thipphavong, David P.

    2017-01-01

    The inaccuracies of the aircraft performance models utilized by trajectory predictors with regard to takeoff weight, thrust, climb profile, and other parameters result in altitude errors during the climb phase that often exceed the vertical separation standard of 1000 feet. This study investigates the potential reduction in altitude trajectory prediction errors that could be achieved for climbing flights if just one additional parameter is made available: top-of-climb (TOC) time. The TOC-matching method developed and evaluated in this paper is straightforward: a set of candidate trajectory predictions is generated using different aircraft weight parameters, and the one that most closely matches TOC in terms of time is selected. This algorithm was tested using more than 1000 climbing flights in Fort Worth Center. Compared to the baseline trajectory predictions of a real-time research prototype (Center/TRACON Automation System), the TOC-matching method reduced the altitude root mean square error (RMSE) for a 5-minute prediction time by 38%. It also decreased the percentage of flights with absolute altitude error greater than the vertical separation standard of 1000 ft for the same look-ahead time from 55% to 30%. PMID:28684883

  18. Top-of-Climb Matching Method for Reducing Aircraft Trajectory Prediction Errors

    NASA Technical Reports Server (NTRS)

    Thipphavong, David P.

    2016-01-01

    The inaccuracies of the aircraft performance models utilized by trajectory predictors with regard to takeoff weight, thrust, climb profile, and other parameters result in altitude errors during the climb phase that often exceed the vertical separation standard of 1000 feet. This study investigates the potential reduction in altitude trajectory prediction errors that could be achieved for climbing flights if just one additional parameter is made available: top-of-climb (TOC) time. The TOC-matching method developed and evaluated in this paper is straightforward: a set of candidate trajectory predictions is generated using different aircraft weight parameters, and the one that most closely matches TOC in terms of time is selected. This algorithm was tested using more than 1000 climbing flights in Fort Worth Center. Compared to the baseline trajectory predictions of a real-time research prototype (Center/TRACON Automation System), the TOC-matching method reduced the altitude root mean square error (RMSE) for a 5-minute prediction time by 38%. It also decreased the percentage of flights with absolute altitude error greater than the vertical separation standard of 1000 ft for the same look-ahead time from 55% to 30%.

  19. Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex.

    PubMed

    Lindsay, Grace W; Rigotti, Mattia; Warden, Melissa R; Miller, Earl K; Fusi, Stefano

    2017-11-08

    Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear "mixed" selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. 
How neurons in this area respond to stimuli, and in particular to combinations of stimuli ("mixed selectivity"), is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training. Copyright © 2017 the authors 0270-6474/17/3711021-16$15.00/0.

  20. Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex

    PubMed Central

    Lindsay, Grace W.

    2017-01-01

    Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear “mixed” selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. 
How neurons in this area respond to stimuli—and in particular, to combinations of stimuli (“mixed selectivity”)—is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training. PMID:28986463

  1. Accuracy of three-dimensional facial soft tissue simulation in post-traumatic zygoma reconstruction.

    PubMed

    Li, P; Zhou, Z W; Ren, J Y; Zhang, Y; Tian, W D; Tang, W

    2016-12-01

The aim of this study was to evaluate the accuracy of novel software, CMF-preCADS, for the prediction of soft tissue changes following repositioning surgery for zygomatic fractures. Twenty patients who had sustained an isolated zygomatic fracture accompanied by facial deformity and who were treated with repositioning surgery participated in this study. Cone beam computed tomography (CBCT) scans and three-dimensional (3D) stereophotographs were acquired preoperatively and postoperatively. The 3D skeletal model from the preoperative CBCT data was matched with the postoperative one, and the fractured zygomatic fragments were segmented and aligned to the postoperative position for prediction. Then, the predicted model was matched with the postoperative 3D stereophotograph for quantification of the simulation error. The mean absolute error in the zygomatic soft tissue region between the predicted model and the real one was 1.42 ± 1.56 mm for all cases. The accuracy of the prediction (mean absolute error ≤2 mm) was 87%. In the subjective assessment it was found that the majority of evaluators considered the predicted model and the postoperative model to be 'very similar'. CMF-preCADS software can provide a realistic, accurate prediction of the facial soft tissue appearance after repositioning surgery for zygomatic fractures. The reliability of this software for other types of repositioning surgery for maxillofacial fractures should be validated in the future. Copyright © 2016. Published by Elsevier Ltd.

  2. Calculations of single crystal elastic constants for yttria partially stabilised zirconia from powder diffraction data

    NASA Astrophysics Data System (ADS)

    Lunt, A. J. G.; Xie, M. Y.; Baimpas, N.; Zhang, S. Y.; Kabra, S.; Kelleher, J.; Neo, T. K.; Korsunsky, A. M.

    2014-08-01

    Yttria Stabilised Zirconia (YSZ) is a tough, phase-transforming ceramic that finds use in a wide range of commercial applications from dental prostheses to thermal barrier coatings. Micromechanical modelling of phase transformation can deliver reliable predictions in terms of the influence of temperature and stress. However, models must rely on the accurate knowledge of single crystal elastic stiffness constants. Some techniques for elastic stiffness determination are well-established. The most popular of these involve exploiting frequency shifts and phase velocities of acoustic waves. However, the application of these techniques to YSZ can be problematic due to the micro-twinning observed in larger crystals. Here, we propose an alternative approach based on selective elastic strain sampling (e.g., by diffraction) of grain ensembles sharing certain orientation, and the prediction of the same quantities by polycrystalline modelling, for example, the Reuss or Voigt average. The inverse problem arises consisting of adjusting the single crystal stiffness matrix to match the polycrystal predictions to observations. In the present model-matching study, we sought to determine the single crystal stiffness matrix of tetragonal YSZ using the results of time-of-flight neutron diffraction obtained from an in situ compression experiment and Finite Element modelling of the deformation of polycrystalline tetragonal YSZ. The best match between the model predictions and observations was obtained for the optimized stiffness values of C11 = 451, C33 = 302, C44 = 39, C66 = 82, C12 = 240, and C13 = 50 (units: GPa). Considering the significant amount of scatter in the published literature data, our result appears reasonably consistent.
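As a sketch of the forward step in this model-matching approach, the fitted constants can be assembled into the tetragonal 6×6 stiffness matrix and a polycrystal average taken from it. Below, the Voigt (uniform-strain) bulk modulus, one of the averages mentioned above, is computed; the assembly follows standard Voigt notation for tetragonal symmetry.

```python
import numpy as np

# Assemble the tetragonal single-crystal stiffness matrix from the optimized
# constants reported above (units: GPa), then take the Voigt average of the
# bulk modulus: K_V = (C11 + C22 + C33 + 2*(C12 + C13 + C23)) / 9.

C11, C33, C44, C66, C12, C13 = 451.0, 302.0, 39.0, 82.0, 240.0, 50.0

C = np.zeros((6, 6))
C[0, 0] = C[1, 1] = C11          # tetragonal symmetry: C22 = C11
C[2, 2] = C33
C[3, 3] = C[4, 4] = C44          # C55 = C44
C[5, 5] = C66
C[0, 1] = C[1, 0] = C12
C[0, 2] = C[2, 0] = C[1, 2] = C[2, 1] = C13   # C23 = C13

K_voigt = (C[0, 0] + C[1, 1] + C[2, 2]
           + 2.0 * (C[0, 1] + C[0, 2] + C[1, 2])) / 9.0   # ≈ 209.3 GPa
```

The Reuss (uniform-stress) bound would be obtained analogously from the compliance matrix, the inverse of C.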

  3. Parameter Prediction of Hydraulic Fracture for Tight Reservoir Based on Micro-Seismic and History Matching

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Ma, Xiaopeng; Li, Yanlai; Wu, Haiyang; Cui, Chenyu; Zhang, Xiaoming; Zhang, Hao; Yao, Jun

Hydraulic fracturing is an important measure for the development of tight reservoirs. In order to describe the distribution of hydraulic fractures, micro-seismic diagnostics have been introduced in petroleum fields. Micro-seismic events may reveal important information about the static characteristics of hydraulic fracturing. However, this method can only reflect the distribution area of the hydraulic fractures and fails to provide specific parameters. Therefore, in this paper micro-seismic technology is integrated with history matching to predict the hydraulic fracture parameters. Micro-seismic source location is used to describe the basic shape of hydraulic fractures. After that, secondary modeling is used to calibrate the parameter information of the hydraulic fractures by means of a DFM (discrete fracture model) and a history matching method. In consideration of the fractal features of hydraulic fractures, a fractal fracture network model is established to evaluate this method in a numerical experiment. The results clearly show the effectiveness of the proposed approach for estimating the parameters of hydraulic fractures.

  4. A Kinetic Model Describing Injury-Burden in Team Sports.

    PubMed

    Fuller, Colin W

    2017-12-01

Injuries in team sports are normally characterised by the incidence, severity, and location and type of injuries sustained: these measures, however, do not provide an insight into the variable injury burden experienced during a season. Injury burden varies according to the team's match and training loads, the rate at which injuries are sustained, and the time taken for these injuries to resolve. At the present time, this time-based variation of injury burden has not been modelled. The aim of this study was to develop a kinetic model describing the time-based injury burden experienced by teams in elite team sports and to demonstrate the model's utility. Rates of injury were quantified using a large eight-season database of rugby injuries (5253) and exposure (60,085 player-match-hours) in English professional rugby. Rates of recovery from injury were quantified using time-to-recovery analysis of the injuries. The kinetic model proposed for predicting a team's time-based injury burden is based on a composite rate equation developed from the incidence of injury, a first-order rate of recovery from injury and the team's playing load. The utility of the model was demonstrated by examining common scenarios encountered in elite rugby. The kinetic model developed describes and predicts the variable injury burden arising from match play during a season of rugby union based on the incidence of match injuries, the rate of recovery from injury and the playing load. The model is equally applicable to other team sports and other scenarios.
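The composite rate structure described above, injuries generated in proportion to playing load and resolving by first-order decay, can be sketched numerically. The rate values below are illustrative placeholders, not the paper's fitted rugby numbers, and the weekly-step loop is a simple Euler discretization of dB/dt = incidence × load − k·B.

```python
# Minimal kinetic sketch: injury burden B grows with new injuries
# (incidence * weekly load) and decays by first-order recovery (rate k).

def simulate_burden(weeks, incidence_per_1000h, weekly_load_h, recovery_rate):
    burden, history = 0.0, []
    for load in weekly_load_h[:weeks]:
        new_injuries = incidence_per_1000h / 1000.0 * load
        burden += new_injuries - recovery_rate * burden   # weekly Euler step
        history.append(burden)
    return history

# Under constant load the burden approaches the steady state
# incidence * load / k (here 80/1000 * 40 / 0.2 = 16 concurrent injuries).
hist = simulate_burden(200, incidence_per_1000h=80.0,
                       weekly_load_h=[40.0] * 200, recovery_rate=0.2)
```

Varying `weekly_load_h` week by week reproduces the model's main use case: tracking how burden rises and falls with congested or light fixture periods.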

  5. Psychophysics of remembering.

    PubMed Central

    White, K G; Wixted, J T

    1999-01-01

    We present a new model of remembering in the context of conditional discrimination. For procedures such as delayed matching to sample, the effect of the sample stimuli at the time of remembering is represented by a pair of Thurstonian (normal) distributions of effective stimulus values. The critical assumption of the model is that, based on prior experience, each effective stimulus value is associated with a ratio of reinforcers obtained for previous correct choices of the comparison stimuli. That ratio determines the choice that is made on the basis of the matching law. The standard deviations of the distributions are assumed to increase with increasing retention-interval duration, and the distance between their means is assumed to be a function of other factors that influence overall difficulty of the discrimination. It is a behavioral model in that choice is determined by its reinforcement history. The model predicts that the biasing effects of the reinforcer differential increase with decreasing discriminability and with increasing retention-interval duration. Data from several conditions using a delayed matching-to-sample procedure with pigeons support the predictions. PMID:10028693
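The model's two ingredients, Thurstonian effective-stimulus distributions whose spread grows with retention interval, and choice governed by the matching law, can be sketched as follows. All parameter values and the linear growth form are illustrative assumptions, not the paper's fitted quantities.

```python
# Toy rendering of the two model ingredients described above.

def discriminability(mu_diff, sd0, growth, delay_s):
    """d' between the two sample-stimulus distributions; the standard
    deviation is assumed (illustratively) to grow linearly with delay."""
    return mu_diff / (sd0 + growth * delay_s)

def matching_choice_prob(r1, r2):
    """Matching law: choose comparison 1 in proportion to the reinforcer
    ratio associated with the current effective stimulus value."""
    return r1 / (r1 + r2)

# A richer reinforcement history for option 1 biases choice toward it.
p_choose_rich = matching_choice_prob(3.0, 1.0)   # 3:1 reinforcer ratio
```

This reproduces the qualitative predictions stated above: discriminability falls as the retention interval lengthens, and the reinforcer differential biases choice.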

  6. Image matching as a data source for forest inventory - Comparison of Semi-Global Matching and Next-Generation Automatic Terrain Extraction algorithms in a typical managed boreal forest environment

    NASA Astrophysics Data System (ADS)

    Kukkonen, M.; Maltamo, M.; Packalen, P.

    2017-08-01

    Image matching is emerging as a compelling alternative to airborne laser scanning (ALS) as a data source for forest inventory and management. There is currently an open discussion in the forest inventory community about whether, and to what extent, the new method can be applied to practical inventory campaigns. This paper aims to contribute to this discussion by comparing two different image matching algorithms (Semi-Global Matching [SGM] and Next-Generation Automatic Terrain Extraction [NGATE]) and ALS in a typical managed boreal forest environment in southern Finland. Spectral features from unrectified aerial images were included in the modeling and the potential of image matching in areas without a high resolution digital terrain model (DTM) was also explored. Plot level predictions for total volume, stem number, basal area, height of basal area median tree and diameter of basal area median tree were modeled using an area-based approach. Plot level dominant tree species were predicted using a random forest algorithm, also using an area-based approach. The statistical difference between the error rates from different datasets was evaluated using a bootstrap method. Results showed that ALS outperformed image matching with every forest attribute, even when a high resolution DTM was used for height normalization and spectral information from images was included. Dominant tree species classification with image matching achieved accuracy levels similar to ALS regardless of the resolution of the DTM when spectral metrics were used. Neither of the image matching algorithms consistently outperformed the other, but there were noticeably different error rates depending on the parameter configuration, spectral band, resolution of DTM, or response variable. This study showed that image matching provides reasonable point cloud data for forest inventory purposes, especially when a high resolution DTM is available and information from the understory is redundant.
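The bootstrap comparison of error rates mentioned above can be sketched generically: resample plot-level errors from two data sources and examine the distribution of RMSE differences. The error vectors below are synthetic placeholders, not the study's ALS or image-matching residuals.

```python
import math
import random

# Paired bootstrap of the RMSE difference between two sets of plot-level
# prediction errors (e.g., image matching vs. ALS on the same plots).

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def bootstrap_rmse_diff(errors_a, errors_b, n_boot=2000, seed=42):
    rng = random.Random(seed)
    n, diffs = len(errors_a), []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]        # resample plots
        diffs.append(rmse([errors_a[i] for i in idx]) -
                     rmse([errors_b[i] for i in idx]))
    return diffs

small = [0.5, 1.2, 0.8, 1.5, 0.9, 1.1]     # synthetic errors, source B
large = [2.0 * e for e in small]           # synthetic errors, source A
diffs = bootstrap_rmse_diff(large, small, n_boot=500)
```

If the bootstrap distribution of differences stays on one side of zero, the two error rates are judged statistically different, which is how the ALS advantage above would be established.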

  7. Probabilistic model for quick detection of dissimilar binary images

    NASA Astrophysics Data System (ADS)

    Mustafa, Adnan A. Y.

    2015-09-01

We present a quick method to detect dissimilar binary images. The method is based on a "probabilistic matching model" for image matching. The matching model is used to predict the probability of occurrence of distinct-dissimilar image pairs (completely different images) when matching one image to another. Based on this model, distinct-dissimilar images can be detected by matching only a few points between two images with high confidence, namely 11 points for a 99.9% successful detection rate. For image pairs that are dissimilar but not distinct-dissimilar, more points need to be mapped. The number of points required to attain a certain successful detection rate or confidence depends on the amount of similarity between the compared images. As this similarity increases, more points are required. For example, images that differ by 1% can be detected by mapping fewer than 70 points on average. More importantly, the model is image-size invariant, so images of any size will produce high confidence levels with a limited number of matched points. As a result, this method does not suffer from the image-size handicap that impedes current methods. We report on extensive tests conducted on real images of different sizes.
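A simplified counting argument in the spirit of the model above: if two images disagree at a fraction p of their pixels, the chance that k randomly sampled locations all agree is (1 − p)^k, so sampling until that chance falls below 1 − confidence bounds the points needed. Note this is only the core idea; the paper's actual model is more detailed, so its exact numbers need not coincide with this sketch.

```python
import math

def points_needed(p_mismatch, confidence):
    """Smallest k with (1 - p_mismatch)**k <= 1 - confidence, i.e., enough
    sampled points to detect a mismatching pair at the given confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_mismatch))
```

The formula involves only the mismatch fraction and the confidence, not the image dimensions, which mirrors the size-invariance property claimed above.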

  8. Using Speculative Execution to Automatically Hide I/O Latency

    DTIC Science & Technology

    2001-12-07

sion of the Lempel-Ziv algorithm and the Finite multi-order context models (FMOC) that originated from prediction-by-partial-match data compressors... allowed the cancellation of a single hint at a time.) 2.2.4 Predicting future data needs In order to take advantage of any of the algorithms described... modelling techniques generally used for data compression to perform probabilistic prediction of an application's next page fault (or, in an object-oriented

  9. Calculations of single crystal elastic constants for yttria partially stabilised zirconia from powder diffraction data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lunt, A. J. G., E-mail: alexander.lunt@eng.ox.ac.uk; Xie, M. Y.; Baimpas, N.

    2014-08-07

Yttria Stabilised Zirconia (YSZ) is a tough, phase-transforming ceramic that finds use in a wide range of commercial applications from dental prostheses to thermal barrier coatings. Micromechanical modelling of phase transformation can deliver reliable predictions in terms of the influence of temperature and stress. However, models must rely on the accurate knowledge of single crystal elastic stiffness constants. Some techniques for elastic stiffness determination are well-established. The most popular of these involve exploiting frequency shifts and phase velocities of acoustic waves. However, the application of these techniques to YSZ can be problematic due to the micro-twinning observed in larger crystals. Here, we propose an alternative approach based on selective elastic strain sampling (e.g., by diffraction) of grain ensembles sharing certain orientation, and the prediction of the same quantities by polycrystalline modelling, for example, the Reuss or Voigt average. The inverse problem arises consisting of adjusting the single crystal stiffness matrix to match the polycrystal predictions to observations. In the present model-matching study, we sought to determine the single crystal stiffness matrix of tetragonal YSZ using the results of time-of-flight neutron diffraction obtained from an in situ compression experiment and Finite Element modelling of the deformation of polycrystalline tetragonal YSZ. The best match between the model predictions and observations was obtained for the optimized stiffness values of C11 = 451, C33 = 302, C44 = 39, C66 = 82, C12 = 240, and C13 = 50 (units: GPa). Considering the significant amount of scatter in the published literature data, our result appears reasonably consistent.

  10. Updated numerical model with uncertainty assessment of 1950-56 drought conditions on brackish-water movement within the Edwards aquifer, San Antonio, Texas

    USGS Publications Warehouse

    Brakefield, Linzy K.; White, Jeremy T.; Houston, Natalie A.; Thomas, Jonathan V.

    2015-01-01

Predictive results of total spring discharge during the 7-year period, as well as head predictions at Bexar County index well J-17, were much different from the dissolved-solids concentration change results at the production wells. These upper bounds are an order of magnitude larger than the actual prediction, which implies that (1) the predictions of total spring discharge at Comal and San Marcos Springs and head at Bexar County index well J-17 made with this model are not reliable, and (2) parameters that control these predictions are not informed well by the observation dataset during history matching, even though the history-matching process yielded parameters to reproduce spring discharges and heads at these locations during the history-matching period. Furthermore, because spring discharges at these two springs and heads at Bexar County index well J-17 represent more of a cumulative effect of upstream conditions over a larger distance (and longer time), many more parameters (with their own uncertainties) are potentially controlling these predictions than the prediction of dissolved-solids concentration change at the prediction wells, and are therefore contributing to a large posterior uncertainty.

  11. Loss model for off-design performance analysis of radial turbines with pivoting-vane, variable-area stators

    NASA Technical Reports Server (NTRS)

    Meitner, P. L.; Glassman, A. J.

    1980-01-01

    An off-design performance loss model is developed for variable-area (pivoted vane) radial turbines. The variation in stator loss with stator area is determined by a viscous loss model while the variation in rotor loss due to stator area variation (for no stator end-clearance gap) is determined through analytical matching of experimental data. An incidence loss model is also based on matching of the experimental data. A stator vane end-clearance leakage model is developed and sample calculations are made to show the predicted effects of stator vane end-clearance leakage on performance.

  12. Predicting the High Redshift Galaxy Population for JWST

    NASA Astrophysics Data System (ADS)

    Flynn, Zoey; Benson, Andrew

    2017-01-01

    The James Webb Space Telescope will be launched in Oct 2018 with the goal of observing galaxies in the redshift range of z = 10 - 15. As redshift increases, the age of the Universe decreases, allowing us to study objects formed only a few hundred million years after the Big Bang. This will provide a valuable opportunity to test and improve current galaxy formation theory by comparing predictions for mass, luminosity, and number density to the observed data. We have made testable predictions with the semi-analytical galaxy formation model Galacticus. The code uses Markov Chain Monte Carlo methods to determine viable sets of model parameters that match current astronomical data. The resulting constrained model was then set to match the specifications of the JWST Ultra Deep Field Imaging Survey. Predictions utilizing up to 100 viable parameter sets were calculated, allowing us to assess the uncertainty in current theoretical expectations. We predict that the planned UDF will be able to observe a significant number of objects past redshift z > 9 but nothing at redshift z > 11. In order to detect these faint objects at redshifts z = 11-15 we need to increase exposure time by at least a factor of 1.66.

  13. Adjustments of individual-tree survival and diameter-growth equations to match whole-stand attributes

    Treesearch

    Quang V. Cao

    2010-01-01

    Individual-tree models are flexible and can perform well in predicting tree survival and diameter growth for a certain growing period. However, the resulting stand-level outputs often suffer from accumulation of errors and subsequently cannot compete with predictions from whole-stand models, especially when the projection period lengthens. Evaluated in this study were...

  14. Football league win prediction based on online and league table data

    NASA Astrophysics Data System (ADS)

    Par, Prateek; Gupt, Ankit Kumar; Singh, Samarth; Khare, Neelu; Bhattachrya, Sweta

    2017-11-01

As we proceed towards an internet-driven world, the impact of the internet on our day-to-day lives is increasing. It not only affects the virtual world but also leaves a mark on the real world. Social media sites contain a huge amount of information; the challenge is to collect the relevant data and analyse it to form real-world predictions. In this paper we study the relationship between Twitter data and conventional data analysis to predict the winning team in the NFL (National Football League). The prediction is based on data collected during the ongoing league, which includes the performance of each player and their previous statistics. Alongside the data available online, we combine Twitter data, extracted from tweets pertaining to specific teams and games in the NFL season, and use it together with statistical game data to build predictive models for the outcome of a game, i.e., which team will win or lose, depending upon the statistical data available. Specifically, tweets within 24 hours of a match are considered, with the main focus on the last hours of tweets, i.e., pre-match and post-match Twitter data. We experiment on these data and, using the Twitter data, attempt to improve the performance of existing predictive models that use only game statistics to predict the future outcome.

  15. Evaluating the utility of mid-infrared spectral subspaces for predicting soil properties.

    PubMed

    Sila, Andrew M; Shepherd, Keith D; Pokhariyal, Ganesh P

    2016-04-15

We propose four methods for finding local subspaces in large spectral libraries: (a) cosine angle spectral matching; (b) hit quality index spectral matching; (c) self-organizing maps; and (d) archetypal analysis. We then evaluated prediction accuracies for global and subspace calibration models. These methods were tested on a mid-infrared spectral library containing 1907 soil samples collected from 19 different countries under the Africa Soil Information Service project. Calibration models for pH, Mehlich-3 Ca, Mehlich-3 Al, total carbon and clay soil properties were developed for the whole library and for the subspaces. Root mean square error of prediction was used to evaluate the predictive performance of the subspace and global models, and was computed using a one-third-holdout validation set. The effect of pretreating the spectra was tested for the 1st and 2nd derivative Savitzky-Golay algorithm, multiplicative scatter correction, standard normal variate, and standard normal variate followed by detrending. In summary, the results show that the global models outperformed the subspace models. We therefore conclude that global models are more accurate than local models except in a few cases. For instance, root mean square error values for sand and clay from local models built with the archetypal analysis method were 50% poorer than those from the global models, except for subspace models obtained using multiplicative-scatter-corrected spectra, which were 12% better. However, the subspace approach provides novel methods for discovering data patterns that may exist in large spectral libraries.
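Method (a) above, cosine angle spectral matching, admits a compact sketch: score each library spectrum against a target spectrum by the cosine of the angle between them, and keep the closest spectra as the local calibration subspace. The toy two-band spectra below are illustrative only.

```python
import math

def cosine_angle(a, b):
    """Cosine of the angle between two spectra (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def local_subspace(target, library, k=2):
    """Indices of the k library spectra most similar to the target."""
    ranked = sorted(range(len(library)),
                    key=lambda i: cosine_angle(target, library[i]),
                    reverse=True)
    return ranked[:k]

lib = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.1]]   # toy "spectra"
subspace = local_subspace([1.0, 0.0], lib, k=2)
```

A local calibration model would then be fitted only on the returned subset, which is the subspace-versus-global comparison evaluated above.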

  16. Dodecahedral space topology as an explanation for weak wide-angle temperature correlations in the cosmic microwave background.

    PubMed

    Luminet, Jean-Pierre; Weeks, Jeffrey R; Riazuelo, Alain; Lehoucq, Roland; Uzan, Jean-Philippe

    2003-10-09

The current 'standard model' of cosmology posits an infinite flat universe forever expanding under the pressure of dark energy. First-year data from the Wilkinson Microwave Anisotropy Probe (WMAP) confirm this model to spectacular precision on all but the largest scales. Temperature correlations across the microwave sky match expectations on angular scales narrower than 60 degrees but, contrary to predictions, vanish on scales wider than 60 degrees. Several explanations have been proposed. One natural approach questions the underlying geometry of space, namely its curvature and topology. In an infinite flat space, waves from the Big Bang would fill the universe on all length scales. The observed lack of temperature correlations on scales beyond 60 degrees means that the broadest waves are missing, perhaps because space itself is not big enough to support them. Here we present a simple geometrical model of a finite space, the Poincaré dodecahedral space, which accounts for WMAP's observations with no fine-tuning required. The predicted density is Ω₀ ≈ 1.013 > 1, and the model also predicts temperature correlations in matching circles on the sky.

  17. Shape-matching soft mechanical metamaterials.

    PubMed

    Mirzaali, M J; Janbaz, S; Strano, M; Vergani, L; Zadpoor, A A

    2018-01-17

Architectured materials with rationally designed geometries could be used to create mechanical metamaterials with unprecedented or rare properties and functionalities. Here, we introduce "shape-matching" metamaterials where the geometry of cellular structures comprising auxetic and conventional unit cells is designed so as to achieve a pre-defined shape upon deformation. We used computational models to forward-map the space of planar shapes to the space of geometrical designs. The validity of the underlying computational models was first demonstrated by comparing their predictions with experimental observations on specimens fabricated with indirect additive manufacturing. The forward-maps were then used to devise the geometry of cellular structures that approximate the arbitrary shapes described by random Fourier series. Finally, we show that the presented metamaterials could match the contours of three real objects including a scapula model, a pumpkin, and a Delft Blue pottery piece. Shape-matching materials have potential applications in soft robotics and wearable (medical) devices.

  18. Epigenome-wide cross-tissue predictive modeling and comparison of cord blood and placental methylation in a birth cohort

    PubMed Central

    De Carli, Margherita M; Baccarelli, Andrea A; Trevisi, Letizia; Pantic, Ivan; Brennan, Kasey JM; Hacker, Michele R; Loudon, Holly; Brunst, Kelly J; Wright, Robert O; Wright, Rosalind J; Just, Allan C

    2017-01-01

    Aim: We compared predictive modeling approaches to estimate placental methylation using cord blood methylation. Materials & methods: We performed locus-specific methylation prediction using both linear regression and support vector machine models with 174 matched pairs of 450k arrays. Results: At most CpG sites, both approaches gave poor predictions in spite of a misleading improvement in array-wide correlation. CpG islands and gene promoters, but not enhancers, were the genomic contexts where the correlation between measured and predicted placental methylation levels achieved higher values. We provide a list of 714 sites where both models achieved an R2 ≥0.75. Conclusion: The present study indicates the need for caution in interpreting cross-tissue predictions. Few methylation sites can be predicted between cord blood and placenta. PMID:28234020
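The locus-specific workflow above, fit a model per CpG site and keep sites meeting the R² ≥ 0.75 cutoff, can be sketched for the simpler of the two approaches, linear regression. The data below are synthetic methylation fractions, not the 450k array values.

```python
# Per-site simple linear regression of placental methylation on cord-blood
# methylation across matched pairs, retaining only well-predicted sites.

def fit_r2(x, y):
    """R-squared of a simple linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

def predictable_sites(cord, placenta, r2_cutoff=0.75):
    """CpG sites whose cross-tissue regression meets the R^2 cutoff."""
    return [site for site in cord
            if fit_r2(cord[site], placenta[site]) >= r2_cutoff]

cord = {"cg1": [0.1, 0.2, 0.3, 0.4], "cg2": [0.1, 0.2, 0.3, 0.4]}
placenta = {"cg1": [0.2, 0.4, 0.6, 0.8], "cg2": [0.5, 0.1, 0.4, 0.2]}
kept = predictable_sites(cord, placenta)
```

The study's warning applies here too: a high array-wide correlation can coexist with poor per-site fits, which is why the filtering is done site by site.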

  19. A contrast-sensitive channelized-Hotelling observer to predict human performance in a detection task using lumpy backgrounds and Gaussian signals

    NASA Astrophysics Data System (ADS)

    Park, Subok; Badano, Aldo; Gallas, Brandon D.; Myers, Kyle J.

    2007-03-01

Previously, Badano et al. introduced a non-prewhitening matched filter (NPWMF) incorporating a model of the contrast sensitivity of the human visual system for modeling human performance in detection tasks with different viewing angles and white-noise backgrounds. However, NPWMF observers do not perform well in detection tasks involving complex backgrounds, since they do not account for background randomness. A channelized-Hotelling observer (CHO) using difference-of-Gaussians (DOG) channels has been shown to track human performance well in detection tasks using lumpy backgrounds. In this work, a CHO with DOG channels incorporating the model of human contrast sensitivity was developed similarly. We call this new observer a contrast-sensitive CHO (CS-CHO). The Barten model was the basis of our human contrast sensitivity model. A scalar multiplier applied to the Barten model was varied to control the thresholding effect of the contrast sensitivity on luminance-valued images and hence the performance-prediction ability of the CS-CHO. The performance of the CS-CHO was compared to the average human performance from the psychophysical study by Park et al., where the task was to detect a known Gaussian signal in non-Gaussian distributed lumpy backgrounds. Six different signal-intensity values were used in this study. We chose the free parameter of our model to match the mean human performance in the detection experiment at the strongest signal intensity. Then we compared the model to the human observers at the remaining five signal-intensity values to see whether the performance of the CS-CHO matched human performance. Our results indicate that the CS-CHO with the chosen scalar for the contrast sensitivity closely predicts human performance as a function of signal intensity.
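Difference-of-Gaussians channels of the kind used by the CHO above have a simple radial profile: each channel is the difference of two Gaussians with related widths. The parameterization below (width ratio `q`, per-channel scale `sigma_j`) is a common illustrative form, not necessarily the exact one used in this study.

```python
import math

def dog_channel(radius, sigma_j, q=1.67):
    """Value of DOG channel j at a given radial distance: the difference of
    a wider and a narrower Gaussian, giving a band-pass radial profile."""
    g = lambda s: math.exp(-0.5 * (radius / s) ** 2)
    return g(q * sigma_j) - g(sigma_j)
```

A channel bank is obtained by varying `sigma_j` over octaves; the CHO then applies the Hotelling discriminant in the low-dimensional channel-output space rather than in pixel space.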

  20. In vivo serial MRI-based models and statistical methods to quantify sensitivity and specificity of mechanical predictors for carotid plaque rupture: location and beyond.

    PubMed

    Wu, Zheyang; Yang, Chun; Tang, Dalin

    2011-06-01

    It has been hypothesized that mechanical risk factors may be used to predict future atherosclerotic plaque rupture. Truly predictive methods for plaque rupture and methods to identify the best predictor(s) from all the candidates are lacking in the literature. A novel combination of computational and statistical models based on serial magnetic resonance imaging (MRI) was introduced to quantify sensitivity and specificity of mechanical predictors to identify the best candidate for plaque rupture site prediction. Serial in vivo MRI data of carotid plaque from one patient was acquired with follow-up scan showing ulceration. 3D computational fluid-structure interaction (FSI) models using both baseline and follow-up data were constructed and plaque wall stress (PWS) and strain (PWSn) and flow maximum shear stress (FSS) were extracted from all 600 matched nodal points (100 points per matched slice, baseline matching follow-up) on the lumen surface for analysis. Each of the 600 points was marked "ulcer" or "nonulcer" using follow-up scan. Predictive statistical models for each of the seven combinations of PWS, PWSn, and FSS were trained using the follow-up data and applied to the baseline data to assess their sensitivity and specificity using the 600 data points for ulcer predictions. Sensitivity of prediction is defined as the proportion of the true positive outcomes that are predicted to be positive. Specificity of prediction is defined as the proportion of the true negative outcomes that are correctly predicted to be negative. Using probability 0.3 as a threshold to infer ulcer occurrence at the prediction stage, the combination of PWS and PWSn provided the best predictive accuracy with (sensitivity, specificity) = (0.97, 0.958). Sensitivity and specificity given by PWS, PWSn, and FSS individually were (0.788, 0.968), (0.515, 0.968), and (0.758, 0.928), respectively. 
The proposed computational-statistical process provides a novel method and a framework to assess the sensitivity and specificity of various risk indicators and offers the potential to identify the optimized predictor for plaque rupture using serial MRI with follow-up scan showing ulceration as the gold standard for method validation. While serial MRI data with actual rupture are hard to acquire, this single-case study suggests that combination of multiple predictors may provide potential improvement to existing plaque assessment schemes. With large-scale patient studies, this predictive modeling process may provide more solid ground for rupture predictor selection strategies and methods for image-based plaque vulnerability assessment.
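    The sensitivity and specificity definitions above can be computed directly from predicted probabilities and follow-up labels. This minimal sketch uses the paper's 0.3 threshold but invented toy data rather than the 600 nodal points.

```python
def sensitivity_specificity(probs, labels, threshold=0.3):
    """Classify each node as ulcer if its predicted probability reaches
    the threshold, then score against the follow-up ground truth."""
    tp = sum(1 for p, y in zip(probs, labels) if p >= threshold and y == 1)
    fn = sum(1 for p, y in zip(probs, labels) if p < threshold and y == 1)
    tn = sum(1 for p, y in zip(probs, labels) if p < threshold and y == 0)
    fp = sum(1 for p, y in zip(probs, labels) if p >= threshold and y == 0)
    sens = tp / (tp + fn)  # true positives predicted positive
    spec = tn / (tn + fp)  # true negatives predicted negative
    return sens, spec

# Toy example: 6 nodal points (label 1 = ulcer at follow-up)
probs = [0.9, 0.35, 0.2, 0.1, 0.05, 0.4]
labels = [1, 1, 1, 0, 0, 0]
print(sensitivity_specificity(probs, labels))  # sensitivity 2/3, specificity 2/3
```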

  1. A computer model of the pediatric circulatory system for testing pediatric assist devices.

    PubMed

    Giridharan, Guruprasad A; Koenig, Steven C; Mitchell, Michael; Gartner, Mark; Pantalos, George M

    2007-01-01

    Lumped parameter computer models of the pediatric circulatory systems for 1- and 4-year-olds were developed to predict hemodynamic responses to mechanical circulatory support devices. Model parameters, including resistance, compliance and volume, were adjusted to match hemodynamic pressure and flow waveforms, pressure-volume loops, percent systole, and heart rate of pediatric patients (n = 6) with normal ventricles. Left ventricular failure was modeled by adjusting the time-varying compliance curve of the left heart to produce aortic pressures and cardiac outputs consistent with those observed clinically. Models of pediatric continuous flow (CF) and pulsatile flow (PF) ventricular assist devices (VAD) and an intraaortic balloon pump (IABP) were developed and integrated into the heart-failure pediatric circulatory system models. Computer simulations were conducted to predict acute hemodynamic responses to PF and CF VADs operating at 50%, 75% and 100% support and to 2.5 and 5 ml IABPs operating in 1:1 and 1:2 support modes. The computer model of the pediatric circulation matched the human pediatric hemodynamic waveform morphology to within 90% and cardiac function parameters with 95% accuracy. The computer model predicted that the PF VAD and IABP restore aortic pressure pulsatility and variation in end-systolic and end-diastolic volume, whereas pulsatility diminishes with increasing CF VAD support.
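    For intuition about lumped-parameter circulatory models, here is a minimal two-element Windkessel (one resistance, one compliance) integrated by forward Euler. All parameter values are illustrative placeholders, not the fitted pediatric values from the study.

```python
import math

def windkessel(R=1.0, C=1.2, heart_rate=100, beats=5, dt=0.001):
    """Two-element Windkessel: C dP/dt = Q_in(t) - P/R.
    Parameters are illustrative, not fitted pediatric values."""
    period = 60.0 / heart_rate
    systole = 0.35 * period  # illustrative percent systole
    t, p = 0.0, 60.0         # initial arterial pressure (mmHg)
    pressures = []
    while t < beats * period:
        phase = t % period
        # half-sine inflow during systole, zero inflow in diastole
        q = 80.0 * math.sin(math.pi * phase / systole) if phase < systole else 0.0
        p += dt * (q - p / R) / C  # forward-Euler update
        pressures.append(p)
        t += dt
    return pressures

p = windkessel()
print(round(min(p), 1), round(max(p), 1))  # diastolic and systolic extremes
```

    A full circulatory model adds many such compartments plus a time-varying ventricular compliance, which is how the study represented ventricular failure.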

  2. Discrete Address Beacon System (DABS) Software System Reliability Modeling and Prediction.

    DTIC Science & Technology

    1981-06-01

    Service (ATARS) module because of its interim status. Reliability prediction models for software modules were derived and then verified by matching... System (ATCRBS) and thus can be introduced gradually and economically without major operational or procedural change. Since DABS uses monopulse... line-analysis tools or are used during maintenance or pre-initialization were not modeled because they are not part of the mission software. The ATARS

  3. Regional Models for Sediment Toxicity Assessment

    EPA Science Inventory

    This paper investigates the use of empirical models to predict the toxicity of sediment samples within a region to laboratory test organisms based on sediment chemistry. In earlier work, we used a large nationwide database of matching sediment chemistry and marine amphipod sedim...

  4. Longitudinal wave propagation in multi cylindrical viscoelastic matching layers of airborne ultrasonic transducer: new method to consider the matching layer's diameter (frequency <100 kHz).

    PubMed

    Saffar, Saber; Abdullah, Amir

    2013-08-01

    Wave propagation in viscoelastic disk layers is encountered in many applications, including studies of airborne ultrasonic transducers. For viscoelastic materials, both material and geometric dispersion are possible when the diameter of the matching layer is of the same order as the wavelength. Lateral motions of the matching layer(s) that result from the Poisson effect are accounted for by using a new concept called the "effective density". A new wave equation is derived for both metallic and non-metallic (polymeric) materials usually employed for the matching layers of airborne ultrasonic transducers. The material properties are modeled using the Kelvin model for metals and the Standard Linear Solid model for non-metallic (polymeric) matching layers. The chosen material model for the matching layers influences both the magnitude and the trend of variation in the speed ratio. In this regard, a 60% reduction in speed ratio is observed with the Kelvin model for aluminum with a diameter of 80 mm at 100 kHz, while for a similar diameter with the Standard Linear Solid model, the speed ratio for polypropylene increases to twice its value at 15 kHz and then decreases by 70% at 67 kHz. The new wave theory simplifies to the one-dimensional solution for waves in metallic or polymeric matching layers if the Poisson ratio is set to zero. The predictions simplify to Love's equation for stress waves in elastic disks when the loss term is removed from the equations for both models. The new wave theory is then employed to design airborne ultrasonic matching layers that maximize the energy transmission to the air. The optimal matching layers are determined using a genetic algorithm for 1, 2 and 3 airborne matching layers. It is shown that the 1-D equation is inadequate at frequencies below 100 kHz and that the effect of the diameter of the matching layers must be considered when determining the acoustic impedances (matching layers) in the design of airborne ultrasonic transducers. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Coupling Matched Molecular Pairs with Machine Learning for Virtual Compound Optimization.

    PubMed

    Turk, Samo; Merget, Benjamin; Rippmann, Friedrich; Fulle, Simone

    2017-12-26

    Matched molecular pair (MMP) analyses are widely used in compound optimization projects to gain insights into structure-activity relationships (SAR). The analysis is traditionally done via statistical methods but can also be employed together with machine learning (ML) approaches to extrapolate to novel compounds. The MMP/ML method introduced here combines a fragment-based MMP implementation with different machine learning methods to obtain automated SAR decomposition and prediction. To test the prediction capabilities and model transferability, two different compound optimization scenarios were designed: (1) "new fragments", which occurs when exploring new fragments for a defined compound series, and (2) "new static core and transformations", which resembles, for instance, the identification of a new compound series. Very good results were achieved by all employed machine learning methods, especially for the new fragments case, but overall deep neural network models performed best, allowing reliable predictions also for the new static core and transformations scenario, where comprehensive SAR knowledge of the compound series is missing. Furthermore, we show that models trained on all available data have higher generalizability than models trained on focused series and can extend beyond the chemical space covered in the training data. Thus, coupling MMP with deep neural networks provides a promising approach for making high-quality predictions on various data sets and in different compound optimization scenarios.
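    A toy sketch of the MMP idea: compounds sharing a common core form matched pairs, and the activity change of a transformation can be pooled across pairs. Structures are stand-in strings here; a real implementation would fragment actual molecular graphs, and an ML model would generalize from descriptors instead of simple averaging.

```python
from collections import defaultdict

def matched_pairs(compounds):
    """Group compounds by a shared core; every ordered pair within a
    group is a matched molecular pair differing only in the variable
    fragment. Compounds are (core, fragment, activity) tuples."""
    by_core = defaultdict(list)
    for core, frag, act in compounds:
        by_core[core].append((frag, act))
    pairs = []
    for members in by_core.values():
        for i in range(len(members)):
            for j in range(len(members)):
                if i != j:
                    (f1, a1), (f2, a2) = members[i], members[j]
                    # transformation f1 >> f2 with observed activity change
                    pairs.append(((f1, f2), a2 - a1))
    return pairs

def predict_change(pairs, transformation):
    """Mean activity change observed for a transformation."""
    deltas = [d for t, d in pairs if t == transformation]
    return sum(deltas) / len(deltas)

# Invented toy data: (core, variable fragment, pIC50)
data = [("coreA", "H", 5.0), ("coreA", "F", 5.6),
        ("coreB", "H", 4.1), ("coreB", "F", 4.5)]
pairs = matched_pairs(data)
print(round(predict_change(pairs, ("H", "F")), 2))  # mean shift for H -> F
```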

  6. Using Biowin, Bayes, and batteries to predict ready biodegradability.

    PubMed

    Boethling, Robert S; Lynch, David G; Jaworska, Joanna S; Tunkel, Jay L; Thom, Gary C; Webb, Simon

    2004-04-01

    Whether or not a given chemical substance is readily biodegradable is an important piece of information in risk screening for both new and existing chemicals. Despite the relatively low cost of Organization for Economic Cooperation and Development tests, data are often unavailable and biodegradability must be estimated. In this paper, we focus on the predictive value of selected Biowin models and model batteries using Bayesian analysis. Posterior probabilities, calculated based on performance with the model training sets using Bayes' theorem, were closely matched by actual performance with an expanded set of 374 premanufacture notice (PMN) substances. Further analysis suggested that a simple battery consisting of Biowin3 (survey ultimate biodegradation model) and Biowin5 (Ministry of International Trade and Industry [MITI] linear model) would have enhanced predictive power in comparison to individual models. Application of the battery to PMN substances showed that performance matched expectation. This approach significantly reduced both false positives for ready biodegradability and the overall misclassification rate. Similar results were obtained for a set of 63 pharmaceuticals using a battery consisting of Biowin3 and Biowin6 (MITI nonlinear model). Biodegradation data for PMNs tested in multiple ready tests or both inherent and ready biodegradation tests yielded additional insights that may be useful in risk screening.
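    The Bayesian step can be sketched as follows: given a model's sensitivity and specificity and a prior probability of ready biodegradability, Bayes' theorem yields the posterior after a positive prediction. The performance numbers below are illustrative, not the Biowin training-set values.

```python
def posterior_ready(sensitivity, specificity, prior):
    """Bayes' theorem: probability a substance is readily biodegradable
    given a positive model prediction."""
    p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_pos

# Illustrative: a battery with 80% sensitivity and 90% specificity,
# applied where 30% of substances are readily biodegradable.
print(round(posterior_ready(0.80, 0.90, 0.30), 3))  # -> 0.774
```

    A battery of models can be treated the same way, updating the posterior sequentially with each model's outcome under an independence assumption.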

  7. Model-based influences on humans’ choices and striatal prediction errors

    PubMed Central

    Daw, Nathaniel D.; Gershman, Samuel J.; Seymour, Ben; Dayan, Peter; Dolan, Raymond J.

    2011-01-01

    The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making. PMID:21435563
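    For reference, the model-free prediction error and the hybrid weighting described above can be written in a few lines; the values and the weight are illustrative, not fitted to the behavioral data.

```python
def td_error(reward, v_next, v_current, gamma=1.0):
    """Model-free temporal-difference prediction error:
    delta = r + gamma * V(s') - V(s)."""
    return reward + gamma * v_next - v_current

def hybrid_value(q_mb, q_mf, w):
    """Weighted mixture of model-based and model-free action values;
    in fits to choice data the weight w is a free parameter."""
    return w * q_mb + (1 - w) * q_mf

q = hybrid_value(q_mb=0.9, q_mf=0.5, w=0.6)
print(round(q, 2))                                        # -> 0.74
print(round(td_error(reward=1.0, v_next=0.0, v_current=q), 2))  # -> 0.26
```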

  8. Prediction of wastewater treatment plants performance based on artificial fish school neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Ruicheng; Li, Chong

    2011-10-01

    A reliable model for a wastewater treatment plant is essential in providing a tool for predicting its performance and forming a basis for controlling the operation of the process. This would minimize operating costs and help assess the stability of the environmental balance. To address the multi-variable, uncertain, and non-linear characteristics of the wastewater treatment system, an artificial fish school neural network prediction model is established based on actual operating data from the wastewater treatment system. The model overcomes several disadvantages of the conventional BP neural network. The calculation results show that the predicted values match the measured values well, demonstrating the model's effectiveness for simulation and prediction and its ability to help optimize the operating status. The prediction model provides a simple and practical way to support operation and management in wastewater treatment plants, and has good research and engineering practical value.

  9. Category-length and category-strength effects using images of scenes.

    PubMed

    Baumann, Oliver; Vromen, Joyce M G; Boddy, Adam C; Crawshaw, Eloise; Humphreys, Michael S

    2018-06-21

    Global matching models have provided an important theoretical framework for recognition memory. Key predictions of this class of models are that (1) increasing the number of occurrences in a study list of some items affects the performance on other items (list-strength effect) and that (2) adding new items results in a deterioration of performance on the other items (list-length effect). Experimental confirmation of these predictions has been difficult, and the results have been inconsistent. A review of the existing literature, however, suggests that robust length and strength effects do occur when sufficiently similar hard-to-label items are used. In an effort to investigate this further, we had participants study lists containing one or more members of visual scene categories (bathrooms, beaches, etc.). Experiments 1 and 2 replicated and extended previous findings showing that the study of additional category members decreased accuracy, providing confirmation of the category-length effect. Experiment 3 showed that repeating some category members decreased the accuracy of nonrepeated members, providing evidence for a category-strength effect. Experiment 4 eliminated a potential challenge to these results. Taken together, these findings provide robust support for global matching models of recognition memory. The overall list lengths, the category sizes, and the number of repetitions used demonstrated that scene categories are well-suited to testing the fundamental assumptions of global matching models. These include (A) interference from memories for similar items and contexts, (B) nondestructive interference, and (C) that conjunctive information is made available through a matching operation.
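    The global matching principle behind these predictions can be sketched with a MINERVA 2-style summed-similarity rule: every stored trace contributes to the familiarity of a probe, so similar traces interfere with one another. All representations and parameters here are invented for illustration.

```python
import random

random.seed(1)

def make_item(n=30):
    # a study item as a random +/-1 feature vector
    return [random.choice((-1, 1)) for _ in range(n)]

def noisy_copy(item, p_encode=0.7):
    # each feature is stored with probability p_encode, else lost (0)
    return [f if random.random() < p_encode else 0 for f in item]

def familiarity(probe, memory):
    """Global matching: echo intensity is the sum over all stored traces
    of the cubed similarity to the probe (a MINERVA 2-style rule)."""
    total = 0.0
    for trace in memory:
        sim = sum(p * t for p, t in zip(probe, trace)) / len(probe)
        total += sim ** 3
    return total

studied = [make_item() for _ in range(8)]
memory = [noisy_copy(it) for it in studied]
old = sum(familiarity(it, memory) for it in studied) / len(studied)
new = sum(familiarity(make_item(), memory) for _ in range(8)) / 8
print(old > new)  # studied items yield higher echo intensity
```

    Adding more traces to `memory` (longer lists, or repeated similar items) raises the match noise for every probe, which is the mechanism behind list-length and list-strength effects.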

  10. A matched-peak inversion approach for ocean acoustic travel-time tomography

    PubMed

    Skarsoulis

    2000-03-01

    A new approach for the inversion of travel-time data is proposed, based on the matching between model arrivals and observed peaks. Using the linearized model relations between sound-speed and arrival-time perturbations about a set of background states, arrival times and associated errors are calculated on a fine grid of model states discretizing the sound-speed parameter space. Each model state can explain (identify) a number of observed peaks in a particular reception lying within the uncertainty intervals of the corresponding predicted arrival times. The model states that explain the maximum number of observed peaks are considered the most likely parametric descriptions of the reception; these model states can be described in terms of mean values and variances, providing a statistical answer (matched-peak solution) to the inversion problem. A basic feature of the matched-peak inversion approach is that each reception can be treated independently, i.e., no constraints are imposed by previous-reception identification or inversion results. Accordingly, there is no need for initialization of the inversion procedure and, furthermore, discontinuous travel-time data can be treated. The matched-peak inversion method is demonstrated by application to 9-month-long travel-time data from the Thetis-2 tomography experiment in the western Mediterranean Sea.
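    The peak-matching step can be sketched directly: for each candidate model state, count the observed peaks falling within the uncertainty intervals of its predicted arrivals, and keep the states with the maximum count. The grid, arrival times, and errors below are toy values, not Thetis-2 data.

```python
def explained_count(predicted, errors, observed):
    """Number of predicted arrivals whose uncertainty interval
    contains at least one observed peak time."""
    count = 0
    for t, e in zip(predicted, errors):
        if any(abs(obs - t) <= e for obs in observed):
            count += 1
    return count

def matched_peak_solution(model_states, observed):
    """Return the model states explaining the maximum number of peaks.
    `model_states` maps a state label to (predicted arrivals, errors)."""
    scores = {s: explained_count(p, e, observed)
              for s, (p, e) in model_states.items()}
    best = max(scores.values())
    return [s for s, c in scores.items() if c == best], best

# Toy grid of three sound-speed states (arrival times in seconds)
states = {
    "state_a": ([1.00, 1.20, 1.45], [0.02, 0.02, 0.02]),
    "state_b": ([1.02, 1.22, 1.40], [0.02, 0.02, 0.02]),
    "state_c": ([0.95, 1.30, 1.50], [0.02, 0.02, 0.02]),
}
observed = [1.01, 1.21, 1.46]
print(matched_peak_solution(states, observed))  # -> (['state_a'], 3)
```

    The mean and variance of the winning states then give the statistical matched-peak solution described above.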

  11. AMP: Assembly Matching Pursuit.

    PubMed

    Biswas, S; Jojic, V

    2013-01-01

    Metagenomics, the study of the total genetic material isolated from a biological host, promises to reveal host-microbe or microbe-microbe interactions that may help to personalize medicine or improve agronomic practice. We introduce a method that discovers metagenomic units (MGUs) relevant for phenotype prediction through sequence-based dictionary learning. The method aggregates patient-specific dictionaries and estimates MGU abundances in order to summarize a whole population and yield universally predictive biomarkers. We analyze the impact of Gaussian, Poisson, and Negative Binomial read count models in guiding dictionary construction by examining classification efficiency on a number of synthetic datasets and a real dataset from Ref. 1. Each outperforms standard methods of dictionary composition, such as random projection and orthogonal matching pursuit. Additionally, the predictive MGUs they recover are biologically relevant.

  12. A system structure for predictive relations in penetration mechanics

    NASA Astrophysics Data System (ADS)

    Korjack, Thomas A.

    1992-02-01

    The availability of a software system yielding quick numerical models to predict ballistic behavior is a requisite for any research laboratory engaged in the study of material behavior. Rapid prototyping for terminal impact, in particular, benefits from a system structure that directs a specific material and impact situation toward a specific predictive model. This is of particular importance when the ranges of validity are at stake and the pertinent constraints associated with the impact are unknown. Hence, a compilation of semiempirical predictive penetration relations for various physical phenomena has been organized into a data structure for the purpose of developing a knowledge-based, decision-aided expert system to predict the terminal ballistic behavior of projectiles and targets. The ranges of validity and constraints of operation of each model were examined and cast into a decision tree structure that includes target type, target material, projectile type, projectile material, attack configuration, and performance or damage measures. This decision system implements many penetration relations, identifies formulas that match user-given conditions, and displays the predictive relation coincident with the match in addition to a numerical solution. The physical regimes under consideration encompass the hydrodynamic, transitional, and solid; the targets are either semi-infinite or plates, and the projectiles include kinetic and chemical energy. A preliminary database has been constructed to allow further development of inductive and deductive reasoning techniques applied to ballistic situations involving terminal mechanics.

  13. Forecasting a winner for Malaysian Cup 2013 using soccer simulation model

    NASA Astrophysics Data System (ADS)

    Yusof, Muhammad Mat; Fauzee, Mohd Soffian Omar; Latif, Rozita Abdul

    2014-07-01

    This paper investigates, through soccer simulation, the calculation of the probability of each team winning the Malaysia Cup 2013. Our methodology is to predict the outcomes of individual matches and then simulate the Malaysia Cup 2013 tournament 5000 times. As match outcomes are always a matter of uncertainty, a statistical model, in particular a double Poisson model, is used to predict the number of goals scored and conceded by each team. Maximum likelihood estimation is used to measure the attacking strength and defensive weakness of each team. Based on our simulation results, LionXII has the highest probability of becoming the winner, followed by Selangor, ATM, JDT and Kelantan. Meanwhile, T-Team, Negeri Sembilan and Felda United have lower probabilities of winning the Malaysia Cup 2013. In summary, we find that the probability of each team becoming the winner is small, indicating that the level of competitive balance in the Malaysia Cup 2013 is quite high.
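    A minimal sketch of the double Poisson simulation: each side's goals are drawn from a Poisson distribution whose rate combines its attacking strength with the opponent's defensive weakness. The strengths and home advantage below are invented, not the paper's maximum likelihood estimates.

```python
import math
import random

random.seed(42)

def poisson(lam):
    # Knuth's multiplication method; adequate for small goal rates
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def simulate_match(attack_h, defense_h, attack_a, defense_a, home_adv=1.3):
    """Double Poisson: home and away goals are independent Poisson counts
    whose rates pair one side's attacking strength with the other side's
    defensive weakness (higher = leakier)."""
    goals_home = poisson(home_adv * attack_h * defense_a)
    goals_away = poisson(attack_a * defense_h)
    return goals_home, goals_away

def home_win_probability(ah, dh, aa, da, n=5000):
    wins = 0
    for _ in range(n):
        gh, ga = simulate_match(ah, dh, aa, da)
        if gh > ga:
            wins += 1
    return wins / n

# A strong home attack against a leaky away defence (made-up strengths)
p_win = home_win_probability(1.6, 0.9, 1.1, 1.2)
print(round(p_win, 2))
```

    Repeating such match simulations over a full fixture list 5000 times, as in the paper, yields a tournament-winner probability for each team.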

  14. Recurrent connectivity can account for the dynamics of disparity processing in V1

    PubMed Central

    Samonds, Jason M.; Potetz, Brian R.; Tyler, Christopher W.; Lee, Tai Sing

    2013-01-01

    Disparity tuning measured in the primary visual cortex (V1) is described well by the disparity energy model, but not all aspects of disparity tuning are fully explained by the model. Such deviations from the disparity energy model provide us with insight into how network interactions may play a role in disparity processing and help to solve the stereo correspondence problem. Here, we propose a neuronal circuit model with recurrent connections that provides a simple account of the observed deviations. The model is based on recurrent connections inferred from neurophysiological observations on spike timing correlations, and is in good accord with existing data on disparity tuning dynamics. We further performed two additional experiments to test predictions of the model. First, we increased the size of stimuli to drive more neurons and provide a stronger recurrent input. Our model predicted sharper disparity tuning for larger stimuli. Second, we displayed anti-correlated stereograms, where dots of opposite luminance polarity are matched between the left- and right-eye images and result in inverted disparity tuning in the disparity energy model. In this case, our model predicted reduced sharpening and strength of inverted disparity tuning. For both experiments, the dynamics of disparity tuning observed from the neurophysiological recordings in macaque V1 matched model simulation predictions. Overall, the results of this study support the notion that, while the disparity energy model provides a primary account of disparity tuning in V1 neurons, neural disparity processing in V1 neurons is refined by recurrent interactions among elements in the neural circuit. PMID:23407952

  15. Planck intermediate results. XLII. Large-scale Galactic magnetic fields

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Adam, R.; Ade, P. A. R.; Alves, M. I. R.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Chiang, H. C.; Christensen, P. R.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dolag, K.; Doré, O.; Ducout, A.; Dupac, X.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Ferrière, K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Galeotta, S.; Ganga, K.; Ghosh, T.; Giard, M.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Harrison, D. L.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hobson, M.; Hornstrup, A.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Melchiorri, A.; Mennella, A.; Migliaccio, M.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Nørgaard-Nielsen, H. U.; Oppermann, N.; Orlando, E.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Pasian, F.; Perotto, L.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Pratt, G. W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Strong, A. W.; Sudiwala, R.; Sunyaev, R.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Valenziano, L.; Valiviita, J.; Van Tent, F.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-12-01

    Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured by the Planck satellite. We first update these models to match the Planck synchrotron products using a common model for the cosmic-ray leptons. We discuss the impact on this analysis of the ongoing problems of component separation in the Planck microwave bands and of the uncertain cosmic-ray spectrum. In particular, the inferred degree of ordering in the magnetic fields is sensitive to these systematic uncertainties, and we further show the importance of considering the expected variations in the observables in addition to their mean morphology. We then compare the resulting simulated emission to the observed dust polarization and find that the dust predictions do not match the morphology in the Planck data but underpredict the dust polarization away from the plane. We modify one of the models to roughly match both observables at high latitudes by increasing the field ordering in the thin disc near the observer. Though this specific analysis is dependent on the component separation issues, we present the improved model as a proof of concept for how these studies can be advanced in future using complementary information from ongoing and planned observational projects.

  16. Field-scale prediction of enhanced DNAPL dissolution based on partitioning tracers.

    PubMed

    Wang, Fang; Annable, Michael D; Jawitz, James W

    2013-09-01

    The equilibrium streamtube model (EST) has demonstrated the ability to accurately predict dense nonaqueous phase liquid (DNAPL) dissolution in laboratory experiments and numerical simulations. Here the model is applied to predict DNAPL dissolution at a tetrachloroethylene (PCE)-contaminated dry cleaner site, located in Jacksonville, Florida. The EST model is an analytical solution with field-measurable input parameters. Measured data from a field-scale partitioning tracer test were used to parameterize the EST model and the predicted PCE dissolution was compared to measured data from an in-situ ethanol flood. In addition, a simulated partitioning tracer test from a calibrated, three-dimensional, spatially explicit multiphase flow model (UTCHEM) was also used to parameterize the EST analytical solution. The EST ethanol prediction based on both the field partitioning tracer test and the simulation closely matched the total recovery well field ethanol data with Nash-Sutcliffe efficiency E = 0.96 and 0.90, respectively. The EST PCE predictions showed a peak shift to earlier arrival times for models based on either field-measured or simulated partitioning tracer tests, resulting in poorer matches to the field PCE data in both cases. The peak shifts were concluded to be caused by well screen interval differences between the field tracer test and ethanol flood. Both the EST model and UTCHEM were also used to predict PCE aqueous dissolution under natural gradient conditions, which has a much less complex flow pattern than the forced-gradient double five spot used for the ethanol flood. The natural gradient EST predictions based on parameters determined from tracer tests conducted with a complex flow pattern underestimated the UTCHEM-simulated natural gradient total mass removal by 12% after 170 pore volumes of water flushing, indicating that some mass was not detected by the tracers, likely due to stagnation zones in the flow field. These findings highlight the important influence of well configuration and the associated flow patterns on dissolution. © 2013.
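    The Nash-Sutcliffe efficiency used to score the predictions is straightforward to compute; the observed and simulated series below are invented toy values, not the site data.

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance of observations.
    E = 1 is a perfect match; E <= 0 means no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

# Toy breakthrough-curve concentrations (illustrative, not site data)
obs = [0.0, 0.4, 1.0, 0.7, 0.3, 0.1]
sim = [0.0, 0.5, 0.9, 0.7, 0.35, 0.1]
print(round(nash_sutcliffe(obs, sim), 3))  # -> 0.968
```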

  18. Automated next-to-leading order predictions for new physics at the LHC: The case of colored scalar pair production

    DOE PAGES

    Degrande, Céline; Fuks, Benjamin; Hirschi, Valentin; ...

    2015-05-05

    We present for the first time the full automation of collider predictions matched with parton showers at the next-to-leading accuracy in QCD within nontrivial extensions of the standard model. The sole inputs required from the user are the model Lagrangian and the process of interest. As an application of the above, we explore scenarios beyond the standard model where new colored scalar particles can be pair produced in hadron collisions. Using simplified models to describe the new field interactions with the standard model, we present precision predictions for the LHC within the MadGraph5_aMC@NLO framework.

  19. The Research of Multiple Attenuation Based on Feedback Iteration and Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Xu, X.; Tong, S.; Wang, L.

    2017-12-01

    Multiple suppression is a difficult problem in seismic data processing. The traditional technology for multiple attenuation is based on minimizing the output energy of the seismic signal. This criterion relies on second-order statistics and cannot attenuate multiples when the primaries and multiples are non-orthogonal. To address this problem, we combine a wave-equation-based feedback iteration method with an improved independent component analysis (ICA) based on higher-order statistics to suppress the multiples. We first use the iterative feedback method to predict the free-surface multiples of each order. Then, to match the predicted multiples to the true multiples in both amplitude and phase, we design an expanded pseudo-multichannel matching filter that yields a more accurate matched multiple estimate. Finally, we apply an improved fast ICA algorithm, based on a maximum non-Gaussianity criterion for the output signal, to the matched multiples and obtain a better separation of the primaries and the multiples. The advantage of our method is that it requires no a priori information for the multiple prediction and achieves a better separation result. The method has been applied to several synthetic datasets generated by finite-difference modeling and to the Sigsbee2B model multiple data; the primaries and multiples are non-orthogonal in these models. The experiments show that after three to four iterations we obtain accurate multiple estimates. Using our matching method and fast ICA adaptive multiple subtraction, we can not only preserve the primary energy in the seismic records but also effectively suppress the free-surface multiples, especially those related to the middle and deep sections.
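
    The least-squares matching step that scales predicted multiples to the recorded data before subtraction, i.e., the minimum-output-energy baseline the abstract argues against when primaries and multiples are non-orthogonal, can be sketched with a single scalar filter coefficient. This is an illustrative simplification, not the authors' expanded pseudo-multichannel filter:

    ```python
    def subtract_multiples(data, predicted_multiple):
        # Single-coefficient least-squares matching filter: choose the scalar a
        # that minimizes the energy of (data - a * predicted_multiple), then
        # subtract the scaled multiple model from the data.
        num = sum(d * m for d, m in zip(data, predicted_multiple))
        den = sum(m * m for m in predicted_multiple)
        a = num / den
        return [d - a * m for d, m in zip(data, predicted_multiple)]
    ```

    When the primaries leak energy into the multiple model (non-orthogonality), this energy criterion removes primary energy along with the multiples, which is the motivation for the higher-order-statistics ICA separation.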

  20. Cross-matching: a modified cross-correlation underlying threshold energy model and match-based depth perception

    PubMed Central

    Doi, Takahiro; Fujita, Ichiro

    2014-01-01

    Three-dimensional visual perception requires correct matching of images projected to the left and right eyes. The matching process is faced with an ambiguity: part of one eye's image can be matched to multiple parts of the other eye's image. This stereo correspondence problem is complicated for random-dot stereograms (RDSs), because dots with an identical appearance produce numerous potential matches. Despite such complexity, human subjects can perceive a coherent depth structure. A coherent solution to the correspondence problem does not exist for anticorrelated RDSs (aRDSs), in which luminance contrast is reversed in one eye. Neurons in the visual cortex reduce disparity selectivity for aRDSs progressively along the visual processing hierarchy. A disparity-energy model followed by threshold nonlinearity (threshold energy model) can account for this reduction, providing a possible mechanism for the neural matching process. However, the essential computation underlying the threshold energy model is not clear. Here, we propose that a nonlinear modification of cross-correlation, which we term “cross-matching,” represents the essence of the threshold energy model. We placed half-wave rectification within the cross-correlation of the left-eye and right-eye images. The disparity tuning derived from cross-matching was attenuated for aRDSs. We simulated a psychometric curve as a function of graded anticorrelation (graded mixture of aRDS and normal RDS); this simulated curve reproduced the match-based psychometric function observed in human near/far discrimination. The dot density was 25% for both simulation and observation. We predicted that as the dot density increased, the performance for aRDSs should decrease below chance (i.e., reversed depth), and the level of anticorrelation that nullifies depth perception should also decrease. 
We suggest that cross-matching serves as a simple computation underlying the match-based disparity signals in stereoscopic depth perception. PMID:25360107
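
    The cross-matching idea, half-wave rectifying the monocular signals inside the binocular cross-correlation, can be illustrated with a toy one-dimensional sketch (a deliberate simplification of the model, not the authors' implementation):

    ```python
    def halfwave(signal):
        # Half-wave rectification: keep positive values, zero out the rest.
        return [v if v > 0 else 0.0 for v in signal]

    def cross_matching(left, right, disparity):
        # Rectify each eye's signal, then correlate at the given disparity
        # (circular shift for simplicity in this toy example).
        l, r = halfwave(left), halfwave(right)
        n = len(l)
        return sum(l[i] * r[(i + disparity) % n] for i in range(n))
    ```

    For an anticorrelated pair (right = -left), the rectified signals no longer overlap, so the matched-disparity response collapses toward zero instead of inverting, as plain cross-correlation would.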

  1. Bottom-up coarse-grained models with predictive accuracy and transferability for both structural and thermodynamic properties of heptane-toluene mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunn, Nicholas J. H.; Noid, W. G., E-mail: wnoid@chem.psu.edu

    This work investigates the promise of a “bottom-up” extended ensemble framework for developing coarse-grained (CG) models that provide predictive accuracy and transferability for describing both structural and thermodynamic properties. We employ a force-matching variational principle to determine system-independent, i.e., transferable, interaction potentials that optimally model the interactions in five distinct heptane-toluene mixtures. Similarly, we employ a self-consistent pressure-matching approach to determine a system-specific pressure correction for each mixture. The resulting CG potentials accurately reproduce the site-site rdfs, the volume fluctuations, and the pressure equations of state that are determined by all-atom (AA) models for the five mixtures. Furthermore, we demonstrate that these CG potentials provide similar accuracy for additional heptane-toluene mixtures that were not included in their parameterization. Surprisingly, the extended ensemble approach improves not only the transferability but also the accuracy of the calculated potentials. Additionally, we observe that the required pressure corrections strongly correlate with the intermolecular cohesion of the system-specific CG potentials. Moreover, this cohesion correlates with the relative “structure” within the corresponding mapped AA ensemble. Finally, the appendix demonstrates that the self-consistent pressure-matching approach corresponds to minimizing an appropriate relative entropy.

  2. Bottom-up coarse-grained models with predictive accuracy and transferability for both structural and thermodynamic properties of heptane-toluene mixtures.

    PubMed

    Dunn, Nicholas J H; Noid, W G

    2016-05-28

    This work investigates the promise of a "bottom-up" extended ensemble framework for developing coarse-grained (CG) models that provide predictive accuracy and transferability for describing both structural and thermodynamic properties. We employ a force-matching variational principle to determine system-independent, i.e., transferable, interaction potentials that optimally model the interactions in five distinct heptane-toluene mixtures. Similarly, we employ a self-consistent pressure-matching approach to determine a system-specific pressure correction for each mixture. The resulting CG potentials accurately reproduce the site-site rdfs, the volume fluctuations, and the pressure equations of state that are determined by all-atom (AA) models for the five mixtures. Furthermore, we demonstrate that these CG potentials provide similar accuracy for additional heptane-toluene mixtures that were not included in their parameterization. Surprisingly, the extended ensemble approach improves not only the transferability but also the accuracy of the calculated potentials. Additionally, we observe that the required pressure corrections strongly correlate with the intermolecular cohesion of the system-specific CG potentials. Moreover, this cohesion correlates with the relative "structure" within the corresponding mapped AA ensemble. Finally, the appendix demonstrates that the self-consistent pressure-matching approach corresponds to minimizing an appropriate relative entropy.

  3. Prediction of Drug-Drug Interactions with Crizotinib as the CYP3A Substrate Using a Physiologically Based Pharmacokinetic Model.

    PubMed

    Yamazaki, Shinji; Johnson, Theodore R; Smith, Bill J

    2015-10-01

    An orally available multiple tyrosine kinase inhibitor, crizotinib (Xalkori), is a CYP3A substrate, moderate time-dependent inhibitor, and weak inducer. The main objectives of the present study were to: 1) develop and refine a physiologically based pharmacokinetic (PBPK) model of crizotinib on the basis of clinical single- and multiple-dose results, 2) verify the crizotinib PBPK model from crizotinib single-dose drug-drug interaction (DDI) results with multiple-dose coadministration of ketoconazole or rifampin, and 3) apply the crizotinib PBPK model to predict crizotinib multiple-dose DDI outcomes. We also focused on gaining insights into the underlying mechanisms mediating crizotinib DDIs using a dynamic PBPK model, the Simcyp population-based simulator. First, PBPK model-predicted crizotinib exposures adequately matched clinically observed results in the single- and multiple-dose studies. Second, the model-predicted crizotinib exposures sufficiently matched clinically observed results in the crizotinib single-dose DDI studies with ketoconazole or rifampin, resulting in the reasonably predicted fold-increases in crizotinib exposures. Finally, the predicted fold-increases in crizotinib exposures in the multiple-dose DDI studies were roughly comparable to those in the single-dose DDI studies, suggesting that the effects of crizotinib CYP3A time-dependent inhibition (net inhibition) on the multiple-dose DDI outcomes would be negligible. Therefore, crizotinib dose-adjustment in the multiple-dose DDI studies could be made on the basis of currently available single-dose results. Overall, we believe that the crizotinib PBPK model developed, refined, and verified in the present study would adequately predict crizotinib oral exposures in other clinical studies, such as DDIs with weak/moderate CYP3A inhibitors/inducers and drug-disease interactions in patients with hepatic or renal impairment. 
Copyright © 2015 by The American Society for Pharmacology and Experimental Therapeutics.
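
    The fold-increase in victim-drug exposure from competitive CYP3A inhibition is often first approximated with the mechanistic static model before running a dynamic PBPK simulation; a minimal sketch of that static calculation follows. This is not the dynamic Simcyp model used in the study, and `fm_cyp3a`, `inhibitor_conc`, and `ki` are illustrative parameter names:

    ```python
    def auc_ratio_competitive(fm_cyp3a, inhibitor_conc, ki):
        """Static-model AUC ratio for a victim drug with fraction fm_cyp3a
        metabolized by CYP3A, under competitive inhibition at [I]/Ki."""
        inhibited_clearance = fm_cyp3a / (1.0 + inhibitor_conc / ki)
        return 1.0 / (inhibited_clearance + (1.0 - fm_cyp3a))
    ```

    The formula shows why fm_CYP3A dominates the DDI magnitude: with fm_cyp3a = 0, the ratio is 1 regardless of inhibitor concentration, while as fm_cyp3a approaches 1 the ratio approaches 1 + [I]/Ki.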

  4. Predicting invasion risk using measures of introduction effort and environmental niche models.

    PubMed

    Herborg, Leif-Matthias; Jerde, Christopher L; Lodge, David M; Ruiz, Gregory M; MacIsaac, Hugh J

    2007-04-01

    The Chinese mitten crab (Eriocheir sinensis) is native to east Asia, is established throughout Europe, and is introduced but geographically restricted in North America. We developed and compared two separate environmental niche models using genetic algorithm for rule set prediction (GARP) and mitten crab occurrences in Asia and Europe to predict the species' potential distribution in North America. Since mitten crabs must reproduce in water with >15% per hundred salinity, we limited the potential North American range to freshwater habitats within the highest documented dispersal distance (1260 km) and a more restricted dispersal limit (354 km) from the sea. Applying the higher dispersal distance, both models predicted the lower Great Lakes, most of the eastern seaboard, the Gulf of Mexico and southern extent of the Mississippi River watershed, and the Pacific northwest as suitable environment for mitten crabs, but environmental match for southern states (below 35 degrees N) was much lower for the European model. Use of the lower range with both models reduced the expected range, especially in the Great Lakes, Mississippi drainage, and inland areas of the Pacific Northwest. To estimate the risk of introduction of mitten crabs, the amount of reported ballast water discharge into major United States ports from regions in Asia and Europe with established mitten crab populations was used as an index of introduction effort. Relative risk of invasion was estimated based on a combination of environmental match and volume of unexchanged ballast water received (July 1999-December 2003) for major ports. The ports of Norfolk and Baltimore were most vulnerable to invasion and establishment, making Chesapeake Bay the most likely location to be invaded by mitten crabs in the United States. The next highest risk was predicted for Portland, Oregon. Interestingly, the port of Los Angeles/Long Beach, which has a large shipping volume, had a low risk of invasion. 
Ports such as Jacksonville, Florida, had a medium risk owing to small shipping volume but high environmental match. This study illustrates that the combination of environmental niche- and vector-based models can provide managers with more precise estimates of invasion risk than can either of these approaches alone.

  5. Independent external validation of predictive models for urinary dysfunction following external beam radiotherapy of the prostate: Issues in model development and reporting.

    PubMed

    Yahya, Noorazrul; Ebert, Martin A; Bulsara, Max; Kennedy, Angel; Joseph, David J; Denham, James W

    2016-08-01

    Most predictive models are not sufficiently validated for prospective use. We performed independent external validation of published predictive models for urinary dysfunctions following radiotherapy of the prostate. Multivariable models developed to predict atomised and generalised urinary symptoms, both acute and late, were considered for validation using a dataset representing 754 participants from the TROG 03.04-RADAR trial. Endpoints and features were harmonised to match the predictive models. The overall performance, calibration and discrimination were assessed. Fourteen models from four publications were validated. The discrimination of the predictive models in an independent external validation cohort, measured using the area under the receiver operating characteristic (ROC) curve, ranged from 0.473 to 0.695, generally lower than in internal validation. Four models had an AUC >0.6. Shrinkage was required for all predictive models' coefficients, ranging from -0.309 (prediction probability was inverse to observed proportion) to 0.823. Predictive models that include baseline symptoms as a feature produced the highest discrimination. Two models produced predicted probabilities of 0 or 1 for all patients. Predictive models vary in performance and transferability, illustrating the need for improvements in model development and reporting. Several models showed reasonable potential, but efforts should be increased to improve performance. Baseline symptoms should always be considered as potential features for predictive models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
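
    The discrimination statistic reported above, the area under the ROC curve, equals the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen control; a small pure-Python sketch:

    ```python
    def roc_auc(labels, scores):
        # Probability that a positive outranks a negative (ties count 0.5).
        pos = [s for y, s in zip(labels, scores) if y == 1]
        neg = [s for y, s in zip(labels, scores) if y == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))
    ```

    An AUC near 0.5, as for several of the externally validated models here, means the model barely outperforms chance in the new cohort.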

  6. PARTS: Probabilistic Alignment for RNA joinT Secondary structure prediction

    PubMed Central

    Harmanci, Arif Ozgun; Sharma, Gaurav; Mathews, David H.

    2008-01-01

    A novel method is presented for joint prediction of alignment and common secondary structures of two RNA sequences. The joint consideration of common secondary structures and alignment is accomplished by structural alignment over a search space defined by the newly introduced motif called matched helical regions. The matched helical region formulation generalizes previously employed constraints for structural alignment and thereby better accommodates the structural variability within RNA families. A probabilistic model based on pseudo free energies obtained from precomputed base pairing and alignment probabilities is utilized for scoring structural alignments. Maximum a posteriori (MAP) common secondary structures, sequence alignment and joint posterior probabilities of base pairing are obtained from the model via a dynamic programming algorithm called PARTS. The advantage of the more general structural alignment of PARTS is seen in secondary structure predictions for the RNase P family. For this family, the PARTS MAP predictions of secondary structures and alignment perform significantly better than prior methods that utilize a more restrictive structural alignment model. For the tRNA and 5S rRNA families, the richer structural alignment model of PARTS does not offer a benefit and the method therefore performs comparably with existing alternatives. For all RNA families studied, the posterior probability estimates obtained from PARTS offer an improvement over posterior probability estimates from a single sequence prediction. When considering the base pairings predicted over a threshold value of confidence, the combination of sensitivity and positive predictive value is superior for PARTS than for the single sequence prediction. PARTS source code is available for download under the GNU public license at http://rna.urmc.rochester.edu. PMID:18304945
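
    PARTS computes its MAP structural alignment by dynamic programming; the recursion over matched helical regions is elaborate, but its alignment backbone is the classic global sequence-alignment DP, sketched here in a vastly simplified, structure-free form (the scoring values are illustrative, not those of PARTS):

    ```python
    def align_score(a, b, match=1, mismatch=-1, gap=-1):
        # Global-alignment (Needleman-Wunsch style) dynamic program:
        # dp[i][j] is the best score aligning a[:i] with b[:j].
        n, m = len(a), len(b)
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = i * gap
        for j in range(1, m + 1):
            dp[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                s = match if a[i - 1] == b[j - 1] else mismatch
                dp[i][j] = max(dp[i - 1][j - 1] + s,
                               dp[i - 1][j] + gap,
                               dp[i][j - 1] + gap)
        return dp[n][m]
    ```

    PARTS replaces these fixed scores with pseudo free energies from precomputed base-pairing and alignment probabilities, and jointly optimizes structure with alignment.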

  7. Time-dependent Ionization in a Steady Flow in an MHD Model of the Solar Corona and Wind

    NASA Astrophysics Data System (ADS)

    Shen, Chengcai; Raymond, John C.; Mikić, Zoran; Linker, Jon A.; Reeves, Katharine K.; Murphy, Nicholas A.

    2017-11-01

    Time-dependent ionization is important for diagnostics of coronal streamers and pseudostreamers. We describe time-dependent ionization calculations for a three-dimensional magnetohydrodynamic (MHD) model of the solar corona and inner heliosphere. We analyze how non-equilibrium ionization (NEI) influences emission from a pseudostreamer during the Whole Sun Month interval (Carrington rotation CR1913, 1996 August 22 to September 18). We use a time-dependent code to calculate NEI states, based on the plasma temperature, density, velocity, and magnetic field in the MHD model, to obtain the synthetic emissivities and predict the intensities of the Lyα, O VI, Mg x, and Si xii emission lines observed by the SOHO/Ultraviolet Coronagraph Spectrometer (UVCS). At low coronal heights, the predicted intensity profiles of both Lyα and O VI lines match UVCS observations well, but the Mg x and Si xii emission are predicted to be too bright. At larger heights, the O VI and Mg x lines are predicted to be brighter for NEI than equilibrium ionization around this pseudostreamer, and Si xii is predicted to be fainter for NEI cases. The differences of predicted UVCS intensities between NEI and equilibrium ionization are around a factor of 2, but neither matches the observed intensity distributions along the full length of the UVCS slit. Variations in elemental abundances in closed field regions due to the gravitational settling and the FIP effect may significantly contribute to the predicted uncertainty. The assumption of Maxwellian electron distributions and errors in the magnetic field on the solar surface may also have notable effects on the mismatch between observations and model predictions.

  8. Neuropsychological tests for predicting cognitive decline in older adults

    PubMed Central

    Baerresen, Kimberly M; Miller, Karen J; Hanson, Eric R; Miller, Justin S; Dye, Richelin V; Hartman, Richard E; Vermeersch, David; Small, Gary W

    2015-01-01

    Summary Aim To determine neuropsychological tests likely to predict cognitive decline. Methods A sample of nonconverters (n = 106) was compared with those who declined in cognitive status (n = 24). Significant univariate logistic regression prediction models were used to create multivariate logistic regression models to predict decline based on initial neuropsychological testing. Results Rey–Osterrieth Complex Figure Test (RCFT) Retention predicted conversion to mild cognitive impairment (MCI) while baseline Buschke Delay predicted conversion to Alzheimer’s disease (AD). Due to group sample size differences, additional analyses were conducted using a subsample of demographically matched nonconverters. Analyses indicated RCFT Retention predicted conversion to MCI and AD, and Buschke Delay predicted conversion to AD. Conclusion Results suggest RCFT Retention and Buschke Delay may be useful in predicting cognitive decline. PMID:26107318

  9. Enhancing model prediction reliability through improved soil representation and constrained model auto calibration - A paired watershed study

    USDA-ARS?s Scientific Manuscript database

    Process based and distributed watershed models possess a large number of parameters that are not directly measured in field and need to be calibrated through matching modeled in-stream fluxes with monitored data. Recently, there have been waves of concern about the reliability of this common practic...

  10. Verification of NWP Cloud Properties using A-Train Satellite Observations

    NASA Astrophysics Data System (ADS)

    Kucera, P. A.; Weeks, C.; Wolff, C.; Bullock, R.; Brown, B.

    2011-12-01

    Recently, the NCAR Model Evaluation Tools (MET) has been enhanced to incorporate satellite observations for the verification of Numerical Weather Prediction (NWP) cloud products. We have developed tools that match fields spatially (both in the vertical and horizontal dimensions) to compare NWP products with satellite observations. These matched fields provide diagnostic evaluation of cloud macro attributes such as vertical distribution of clouds, cloud top height, and the spatial and seasonal distribution of cloud fields. For this research study, we have focused on using CloudSat, CALIPSO, and MODIS observations to evaluate cloud fields for a variety of NWP fields and derived products. We have selected cases ranging from large, mid-latitude synoptic systems to well-organized tropical cyclones. For each case, we matched the observed cloud field with gridded model and/or derived product fields. CloudSat and CALIPSO observations and model fields were matched and compared in the vertical along the orbit track. MODIS data and model fields were matched and compared in the horizontal. We then use MET to compute the verification statistics to quantify the performance of the models in representing the cloud fields. In this presentation we will give a summary of our comparison and show verification results for both synoptic and tropical cyclone cases.
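
    Matching an orbit track to a model grid, as done for the CloudSat/CALIPSO comparisons, reduces in one dimension to a nearest-neighbor index lookup; a minimal sketch (the real matching is done jointly in latitude, longitude, time, and the vertical):

    ```python
    def match_track_to_grid(track_coords, grid_coords):
        # For each observed coordinate along the track, find the index of
        # the nearest model grid coordinate (brute force; fine for small grids).
        return [min(range(len(grid_coords)),
                    key=lambda i: abs(grid_coords[i] - c))
                for c in track_coords]
    ```

    The resulting index pairs are what the verification statistics are then computed over.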

  11. QCT/FEA predictions of femoral stiffness are strongly affected by boundary condition modeling

    PubMed Central

    Rossman, Timothy; Kushvaha, Vinod; Dragomir-Daescu, Dan

    2015-01-01

    Quantitative computed tomography-based finite element models of proximal femora must be validated with cadaveric experiments before they are used to assess fracture risk in osteoporotic patients. During validation it is essential to carefully assess whether the boundary condition modeling matches the experimental conditions. This study evaluated proximal femur stiffness results predicted by six different boundary condition methods on a sample of 30 cadaveric femora and compared the predictions with experimental data. The average stiffness varied by 280% among the six boundary conditions. Compared with experimental data, the predictions ranged from overestimating the average stiffness by 65% to underestimating it by 41%. In addition, we found that the boundary condition that distributed the load to the contact surfaces similarly to the expected contact mechanics predictions had the best agreement with experimental stiffness. We concluded that boundary condition modeling introduces large variations in proximal femur stiffness predictions. PMID:25804260

  12. A mathematical model for the interactive behavior of sulfate-reducing bacteria and methanogens during anaerobic digestion.

    PubMed

    Ahammad, S Ziauddin; Gomes, James; Sreekrishnan, T R

    2011-09-01

    Anaerobic degradation of waste involves different classes of microorganisms, and there are different types of interactions among them for substrates, terminal electron acceptors, and so on. A mathematical model is developed based on the mass balance of different substrates, products, and microbes present in the system to study the interaction between methanogens and sulfate-reducing bacteria (SRB). The performance of major microbial consortia present in the system, such as propionate-utilizing acetogens, butyrate-utilizing acetogens, acetoclastic methanogens, hydrogen-utilizing methanogens, and SRB were considered and analyzed in the model. Different substrates consumed and products formed during the process also were considered in the model. The experimental observations and model predictions showed very good prediction capabilities of the model. Model prediction was validated statistically. It was observed that the model-predicted values matched the experimental data very closely, with an average error of 3.9%.
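
    Mass-balance models of this kind couple substrate consumption to microbial growth through saturating (Monod-type) kinetics; a minimal explicit-Euler sketch of one substrate-biomass pair follows. This is a generic building block under standard Monod assumptions, not the paper's full multi-population SRB/methanogen model, and the parameter names are illustrative:

    ```python
    def monod_step(s, x, mu_max, ks, yield_coeff, dt):
        # One explicit-Euler step of a Monod substrate/biomass balance:
        # growth rate saturates in substrate concentration s, and substrate
        # is consumed at the growth rate divided by the yield coefficient.
        mu = mu_max * s / (ks + s)
        s_next = max(s - (mu / yield_coeff) * x * dt, 0.0)
        x_next = x + mu * x * dt
        return s_next, x_next
    ```

    Iterating this step for each microbial group, with competition terms for shared substrates such as hydrogen and acetate, yields the kind of coupled system the authors integrate and validate against experimental data.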

  13. Predictive information processing in music cognition. A critical review.

    PubMed

    Rohrmeier, Martin A; Koelsch, Stefan

    2012-02-01

    Expectation and prediction constitute central mechanisms in the perception and cognition of music, which have been explored in theoretical and empirical accounts. We review the scope and limits of theoretical accounts of musical prediction with respect to feature-based and temporal prediction. While the concept of prediction is unproblematic for basic single-stream features such as melody, it is not straightforward for polyphonic structures or higher-order features such as formal predictions. Behavioural results based on explicit and implicit (priming) paradigms provide evidence of priming in various domains that may reflect predictive behaviour. Computational learning models, including symbolic (fragment-based), probabilistic/graphical, or connectionist approaches, provide well-specified predictive models of specific features and feature combinations. While models match some experimental results, full-fledged music prediction cannot yet be modelled. Neuroscientific results regarding the early right-anterior negativity (ERAN) and mismatch negativity (MMN) reflect expectancy violations on different levels of processing complexity, and provide some neural evidence for different predictive mechanisms. At present, the combinations of neural and computational modelling methodologies are at early stages and require further research. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. An empirical model for estimating annual consumption by freshwater fish populations

    USGS Publications Warehouse

    Liao, H.; Pierce, C.L.; Larscheid, J.G.

    2005-01-01

    Population consumption is an important process linking predator populations to their prey resources. Simple tools are needed to enable fisheries managers to estimate population consumption. We assembled 74 individual estimates of annual consumption by freshwater fish populations and their mean annual population size, 41 of which also included estimates of mean annual biomass. The data set included 14 freshwater fish species from 10 different bodies of water. From this data set we developed two simple linear regression models predicting annual population consumption. Log-transformed population size explained 94% of the variation in log-transformed annual population consumption. Log-transformed biomass explained 98% of the variation in log-transformed annual population consumption. We quantified the accuracy of our regressions and three alternative consumption models as the mean percent difference from observed (bioenergetics-derived) estimates in a test data set. Predictions from our population-size regression matched observed consumption estimates poorly (mean percent difference = 222%). Predictions from our biomass regression matched observed consumption reasonably well (mean percent difference = 24%). The biomass regression was superior to an alternative model of similar complexity, and comparable to two alternative models that were more complex and difficult to apply. Our biomass regression model, log10(consumption) = 0.5442 + 0.9962·log10(biomass), will be a useful tool for fishery managers, enabling them to make reasonably accurate annual population consumption predictions from mean annual biomass estimates. © Copyright by the American Fisheries Society 2005.
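
    The fitted biomass regression can be applied directly; a one-line sketch:

    ```python
    import math

    def annual_consumption(biomass):
        # log10(consumption) = 0.5442 + 0.9962 * log10(biomass), as reported
        # in the abstract; biomass and consumption in the study's units.
        return 10 ** (0.5442 + 0.9962 * math.log10(biomass))
    ```

    Because the slope is close to 1, predicted consumption scales almost proportionally with biomass, at roughly 3.5 times the biomass value.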

  15. Patient satisfaction and referral intention: effect of patient-physician match on ethnic origin and cultural similarity.

    PubMed

    Lin, Xiaohua; Guan, Jian

    2002-01-01

    The study brought a cultural perspective into the mainstream model of health service quality by taking into account minorities' unique experience, patient-physician match on ethnic origin and cultural similarity. Survey data from Asian-American respondents supported a three-dimensional humaneness-professionalism-competence model of physician attributes. Physician humaneness and professionalism, patient-physician match on ethnic origin and cultural similarity predicted patient overall satisfaction and referral intention among Asian-Americans. Interestingly, the 3-dimensional model of physician attributes was also revealed in a Caucasian-American sample. However, Caucasian-Americans differ from Asian-Americans in several ways: physician competence was a significant predictor of overall satisfaction; professionalism was the only determinant of referral intention; and cultural similarity was not a significant factor with regards to either overall satisfaction or referral intention.

  16. The Effect of Latent Binary Variables on the Uncertainty of the Prediction of a Dichotomous Outcome Using Logistic Regression Based Propensity Score Matching.

    PubMed

    Szekér, Szabolcs; Vathy-Fogarassy, Ágnes

    2018-01-01

    Logistic regression based propensity score matching is a widely used method in case-control studies to select the individuals of the control group. This method creates a suitable control group if all factors affecting the output variable are known. However, if relevant latent variables exist as well, which are not taken into account during the calculations, the quality of the control group is uncertain. In this paper, we present a statistics-based research in which we try to determine the relationship between the accuracy of the logistic regression model and the uncertainty of the dependent variable of the control group defined by propensity score matching. Our analyses show that there is a linear correlation between the fit of the logistic regression model and the uncertainty of the output variable. In certain cases, a latent binary explanatory variable can result in a relative error of up to 70% in the prediction of the outcome variable. The observed phenomenon calls the attention of analysts to an important point, which must be taken into account when deducting conclusions.
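
    After the logistic regression is fitted, control selection is commonly a greedy nearest-neighbor match on the estimated propensity scores; a minimal sketch of that matching step (the caliper value is illustrative):

    ```python
    def greedy_match(treated_ps, control_ps, caliper=0.05):
        # Greedy 1:1 nearest-neighbor matching on propensity scores;
        # each control is used at most once, and candidate matches farther
        # apart than the caliper are rejected.
        available = dict(enumerate(control_ps))
        pairs = []
        for t_idx, t in enumerate(treated_ps):
            if not available:
                break
            c_idx = min(available, key=lambda i: abs(available[i] - t))
            if abs(available[c_idx] - t) <= caliper:
                pairs.append((t_idx, c_idx))
                del available[c_idx]
        return pairs
    ```

    A latent variable that shifts the true propensity scores leaves this procedure formally unchanged but silently degrades the quality of the matched control group, which is exactly the uncertainty the study quantifies.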

  17. Understanding the Psychological Processes of the Racial Match Effect in Asian Americans

    PubMed Central

    Meyer, Oanh; Zane, Nolan; Cho, Young Il

    2014-01-01

    Some studies on mental health outcomes research have found that when clients and therapists are ethnically or racially matched, this tends to be related to greater satisfaction and better outcomes. However, the precise underlying mechanism for the match effect has not been extensively examined. In this experimental study, we tested the effect of racial match on critical counseling processes (i.e., therapist credibility and the working alliance) using a sample of 171 Asian American respondents. We also examined Asian ethnic identification as a potential moderator of the racial match effect. Structural equation modeling analyses indicated that racially matched individuals perceived greater experiential similarity with the therapist than nonmatched individuals, and experiential similarity was positively associated with therapist credibility. Although racial match did not predict attitudinal similarity, attitudinal similarity was strongly related to the working alliance and therapist credibility. Counseling implications are discussed. PMID:21574698

  18. Analysis and modeling of infrasound from a four-stage rocket launch.

    PubMed

    Blom, Philip; Marcillo, Omar; Arrowsmith, Stephen

    2016-06-01

    Infrasound from a four-stage sounding rocket was recorded by several arrays within 100 km of the launch pad. Propagation modeling methods have been applied to the known trajectory to predict infrasonic signals at the ground in order to identify what information might be obtained from such observations. There is good agreement between modeled and observed back azimuths, and predicted arrival times for motor ignition signals match those observed. The signal due to the high-altitude stage ignition is found to be low amplitude, despite predictions of weak attenuation. This lack of signal is possibly due to inefficient aeroacoustic coupling in the rarefied upper atmosphere.

  19. Predicting Reading Growth with Event-Related Potentials: Thinking Differently about Indexing “Responsiveness”

    PubMed Central

    Lemons, Christopher J.; Key, Alexandra P.F.; Fuchs, Douglas; Yoder, Paul J.; Fuchs, Lynn S.; Compton, Donald L.; Williams, Susan M.; Bouton, Bobette

    2009-01-01

    The purpose of this study was to determine if event-related potential (ERP) data collected during three reading-related tasks (Letter Sound Matching, Nonword Rhyming, and Nonword Reading) could be used to predict short-term reading growth on a curriculum-based measure of word identification fluency over 19 weeks in a sample of 29 first-grade children. Results indicate that ERP responses to the Letter Sound Matching task were predictive of reading change and remained so after controlling for two previously validated behavioral predictors of reading, Rapid Letter Naming and Segmenting. ERP data for the other tasks were not correlated with reading change. The potential for cognitive neuroscience to enhance current methods of indexing responsiveness in a response-to-intervention (RTI) model is discussed. PMID:20514353

  20. Predicting the threshold of pulse-train electrical stimuli using a stochastic auditory nerve model: the effects of stimulus noise.

    PubMed

    Xu, Yifang; Collins, Leslie M

    2004-04-01

    The incorporation of low levels of noise into an electrical stimulus has been shown to improve auditory thresholds in some human subjects (Zeng et al., 2000). In this paper, thresholds for noise-modulated pulse-train stimuli are predicted utilizing a stochastic neural-behavioral model of ensemble fiber responses to biphasic stimuli. The neural refractory effect is described using a Markov model for a noise-free pulse-train stimulus, and a closed-form solution for the steady-state neural response is provided. For noise-modulated pulse-train stimuli, a recursive method using the conditional probability is utilized to track the neural responses to each successive pulse. A neural spike count rule has been presented for both threshold and intensity discrimination under the assumption that auditory perception occurs via integration over a relatively long time period (Bruce et al., 1999). An alternative approach originates from the hypothesis of the multilook model (Viemeister and Wakefield, 1991), which argues that auditory perception is based on several shorter time integrations and may suggest an NofM model for prediction of pulse-train threshold. This motivates analyzing the neural response to each individual pulse within a pulse train, which is treated as the brief look. A logarithmic rule is hypothesized for pulse-train threshold. Predictions from the multilook model are shown to match trends in psychophysical data for noise-free stimuli that are not always matched by the long-time integration rule. Theoretical predictions indicate that threshold decreases as noise variance increases. Theoretical models of the neural response to pulse-train stimuli not only reduce computational overhead but also facilitate utilization of signal detection theory and are easily extended to multichannel psychophysical tasks.
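The Markov refractory recursion described above can be illustrated with a minimal, hypothetical sketch: a single one-pulse refractory state and invented probabilities, far simpler than the paper's stochastic nerve model, but showing the same pulse-by-pulse tracking and closed-form steady state.

```python
import numpy as np

def pulse_train_response(n_pulses, p_fire, p_recover):
    """Per-pulse firing probabilities for an ensemble of fibers with a
    one-pulse refractory state: excitable fibers fire with prob p_fire;
    fired fibers become refractory and recover with prob p_recover
    before the next pulse. All parameters are illustrative."""
    p_exc = 1.0                     # ensemble starts fully excitable
    fires = []
    for _ in range(n_pulses):
        pf = p_exc * p_fire         # only excitable fibers can fire
        fires.append(pf)
        p_ref = (1.0 - p_exc) + pf  # fired fibers join the refractory pool
        p_exc = 1.0 - p_ref * (1.0 - p_recover)  # recovery between pulses
    return np.array(fires)

probs = pulse_train_response(200, p_fire=0.3, p_recover=0.8)

# closed-form steady state of this recursion, analogous in spirit to the
# paper's steady-state solution for the noise-free pulse train:
steady = 0.3 * 0.8 / (1.0 - (1.0 - 0.8) * (1.0 - 0.3))
```

Adding stimulus noise would randomize `p_fire` from pulse to pulse, which is where the paper's recursive conditional-probability tracking comes in.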

  1. Using Uncertainty Quantification to Guide Development and Improvements of a Regional-Scale Model of the Coastal Lowlands Aquifer System Spanning Texas, Louisiana, Mississippi, Alabama and Florida

    NASA Astrophysics Data System (ADS)

    Foster, L. K.; Clark, B. R.; Duncan, L. L.; Tebo, D. T.; White, J.

    2017-12-01

    Several historical groundwater models exist within the Coastal Lowlands Aquifer System (CLAS), which spans the Gulf Coastal Plain in Texas, Louisiana, Mississippi, Alabama, and Florida. The largest of these models, called the Gulf Coast Regional Aquifer System Analysis (RASA) model, has been brought into a new framework using the Newton formulation for MODFLOW-2005 (MODFLOW-NWT) and serves as the starting point of a new investigation underway by the U.S. Geological Survey to improve understanding of the CLAS and provide predictions of future groundwater availability within an uncertainty quantification (UQ) framework. The use of a UQ framework will not only provide estimates of water-level observation worth, hydraulic parameter uncertainty, boundary-condition uncertainty, and uncertainty of potential future predictions, but it will also guide the model development process. Traditionally, model development proceeds from dataset construction to the process of deterministic history matching, followed by deterministic predictions using the model. This investigation will combine the use of UQ with existing historical models of the study area to assess, in a quantitative framework, the effect that model-package and property improvements have on the ability to represent past system states, as well as the effect on the model's ability to make certain predictions of water levels, water budgets, and base-flow estimates. Estimates of hydraulic property information and boundary conditions from the existing models and literature, forming the prior, will be used to make initial estimates of model forecasts and their corresponding uncertainty, along with an uncalibrated groundwater model run within an unconstrained Monte Carlo analysis. First-Order Second-Moment (FOSM) analysis will also be used to investigate parameter and predictive uncertainty and to guide next steps in model development prior to rigorous history matching with the PEST++ parameter estimation code.
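FOSM propagates a linearized model through the prior parameter covariance. A toy sketch with invented numbers (not the actual CLAS model, its parameters, or PEST++ output):

```python
import numpy as np

# Prior covariance of two hypothetical hydraulic parameters
C = np.array([[1.0, 0.2],
              [0.2, 0.5]])

# Sensitivity (Jacobian) row of a scalar forecast w.r.t. the parameters,
# as would come from perturbation runs of a groundwater model
J = np.array([0.8, -1.5])

# FOSM: Var(forecast) ~= J C J^T under the linearization assumption
forecast_var = float(J @ C @ J.T)
forecast_sd = float(np.sqrt(forecast_var))
```

Comparing this variance computed with the prior covariance versus a notionally history-matched (reduced) covariance is what lets FOSM rank observation worth and guide model development.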

  2. Orbital and maxillofacial computer aided surgery: patient-specific finite element models to predict surgical outcomes.

    PubMed

    Luboz, Vincent; Chabanas, Matthieu; Swider, Pascal; Payan, Yohan

    2005-08-01

    This paper addresses an important issue raised for the clinical relevance of Computer-Assisted Surgical applications, namely the methodology used to automatically build patient-specific finite element (FE) models of anatomical structures. From this perspective, a method is proposed, based on a technique called the mesh-matching method, followed by a process that corrects mesh irregularities. The mesh-matching algorithm generates patient-specific volume meshes from an existing generic model. The mesh regularization process is based on the Jacobian matrix transform related to the FE reference element and the current element. This method for generating patient-specific FE models is first applied to computer-assisted maxillofacial surgery, and more precisely, to the FE elastic modelling of patient facial soft tissues. For each patient, the planned bone osteotomies (mandible, maxilla, chin) are used as boundary conditions to deform the FE face model, in order to predict the aesthetic outcome of the surgery. Seven FE patient-specific models were successfully generated by our method. For one patient, the prediction of the FE model is qualitatively compared with the patient's post-operative appearance, measured from a computer tomography scan. Then, our methodology is applied to computer-assisted orbital surgery. It is, therefore, evaluated for the generation of 11 patient-specific FE poroelastic models of the orbital soft tissues. These models are used to predict the consequences of the surgical decompression of the orbit. More precisely, an average law is extrapolated from the simulations carried out for each patient model. This law links the size of the osteotomy (i.e. the surgical gesture) and the backward displacement of the eyeball (the consequence of the surgical gesture).

  3. Match probabilities in a finite, subdivided population

    PubMed Central

    Malaspinas, Anna-Sapfo; Slatkin, Montgomery; Song, Yun S.

    2011-01-01

    We generalize a recently introduced graphical framework to compute the probability that haplotypes or genotypes of two individuals drawn from a finite, subdivided population match. As in the previous work, we assume an infinite-alleles model. We focus on the case of a population divided into two subpopulations, but the underlying framework can be applied to a general model of population subdivision. We examine the effect of population subdivision on the match probabilities and the accuracy of the product rule which approximates multi-locus match probabilities as a product of one-locus match probabilities. We quantify the deviation from predictions of the product rule by R, the ratio of the multi-locus match probability to the product of the one-locus match probabilities. We carry out the computation for two loci and find that ignoring subdivision can lead to underestimation of the match probabilities if the population under consideration actually has subdivision structure and the individuals originate from the same subpopulation. On the other hand, under a given model of population subdivision, we find that the ratio R for two loci is only slightly greater than 1 for a large range of symmetric and asymmetric migration rates. Keeping in mind that the infinite-alleles model is not the appropriate mutation model for STR loci, we conclude that, for two loci and biologically reasonable parameter values, population subdivision may lead to results that disfavor innocent suspects because of an increase in identity-by-descent in finite populations. On the other hand, for the same range of parameters, population subdivision does not lead to a substantial increase in linkage disequilibrium between loci. Those results are consistent with established practice. PMID:21266180
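The ratio R can be made concrete with a toy two-locus, two-subpopulation calculation. This is not the paper's infinite-alleles machinery: the allele frequencies are invented, linkage equilibrium within each subpopulation is assumed, and both individuals are drawn from the same, randomly chosen subpopulation.

```python
import numpy as np

# per-subpopulation allele-frequency vectors at two loci (invented numbers)
freqs_A = [np.array([0.9, 0.1]), np.array([0.5, 0.5])]
freqs_B = [np.array([0.8, 0.2]), np.array([0.5, 0.5])]

def locus_match(p):
    """P(two random haplotypes carry the same allele at this locus)."""
    return float(np.sum(p ** 2))

mA = [locus_match(p) for p in freqs_A]   # [0.82, 0.5]
mB = [locus_match(p) for p in freqs_B]   # [0.68, 0.5]

# exact two-locus match probability for a same-subpopulation pair,
# versus the product rule applied to the pooled one-locus probabilities
two_locus = float(np.mean([a * b for a, b in zip(mA, mB)]))
product_rule = float(np.mean(mA) * np.mean(mB))
R = two_locus / product_rule   # R > 1: the product rule underestimates
```

The excess of R over 1 comes from the correlation between loci induced by shared subpopulation membership, the same effect the abstract describes for identity-by-descent.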

  4. A Surrogate-based Adaptive Sampling Approach for History Matching and Uncertainty Quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan; Zhang, Dongxiao; Lin, Guang

    A critical procedure in reservoir simulations is history matching (or data assimilation in a broader sense), which calibrates model parameters such that the simulation results are consistent with field measurements, and hence improves the credibility of the predictions given by the simulations. Often there exist non-unique combinations of parameter values that all yield simulation results matching the measurements. For such ill-posed history matching problems, Bayes' theorem provides a theoretical foundation to represent different solutions and to quantify the uncertainty with the posterior PDF. Lacking an analytical solution in most situations, the posterior PDF may be characterized with a sample of realizations, each representing a possible scenario. A novel sampling algorithm is presented here for the Bayesian solutions to history matching problems. We aim to deal with two commonly encountered issues: (1) as a result of the nonlinear input-output relationship in a reservoir model, the posterior distribution could be in a complex form, such as multimodal, which violates the Gaussian assumption required by most of the commonly used data assimilation approaches; (2) a typical sampling method requires intensive model evaluations and hence may cause unaffordable computational cost. In the developed algorithm, we use a Gaussian mixture model as the proposal distribution in the sampling process, which is simple but also flexible enough to approximate non-Gaussian distributions and is particularly efficient when the posterior is multimodal. Also, a Gaussian process is utilized as a surrogate model to speed up the sampling process. Furthermore, an iterative scheme of adaptive surrogate refinement and re-sampling ensures sampling accuracy while keeping the computational cost at a minimum level. The developed approach is demonstrated with an illustrative example and shows its capability in handling the above-mentioned issues. The multimodal posterior of the history matching problem is captured and used to give a reliable production prediction with uncertainty quantification. The new algorithm shows a great improvement in computational efficiency compared with previously studied approaches for the sample problem.
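The core idea, a Gaussian-mixture independence proposal for Metropolis-Hastings on a multimodal posterior, can be sketched on a toy 1-D problem. No surrogate or adaptive refinement is included, and the density and mixture settings are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# bimodal, unnormalized "posterior" standing in for non-unique history matches
def log_post(x):
    return np.logaddexp(-0.5 * (x + 2.0) ** 2, -0.5 * (x - 2.0) ** 2)

# Gaussian-mixture independence proposal with a component near each mode
means, sds, w = np.array([-2.0, 2.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])

def sample_prop(n):
    comp = rng.choice(2, size=n, p=w)
    return rng.normal(means[comp], sds[comp])

def log_prop(x):
    return float(np.log(np.sum(w * np.exp(-0.5 * ((x - means) / sds) ** 2) / sds)))

x, chain = 0.0, []
for xp in sample_prop(5000):
    # independence Metropolis-Hastings acceptance ratio
    log_a = (log_post(xp) - log_post(x)) + (log_prop(x) - log_prop(xp))
    if np.log(rng.random()) < log_a:
        x = xp
    chain.append(x)
chain = np.array(chain)
```

Both modes are visited, which a single-Gaussian proposal centered on one mode often fails to achieve; in the paper, the expensive `log_post` would additionally be replaced by a Gaussian-process surrogate.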

  5. Reverse-translational biomarker validation of Abnormal Repetitive Behaviors in mice: an illustration of the 4P's modeling approach

    PubMed Central

    Garner, Joseph P.; Thogerson, Collette M.; Dufour, Brett D.; Würbel, Hanno; Murray, James D.; Mench, Joy A.

    2011-01-01

    The NIMH's new strategic plan, with its emphasis on the “4P's” (Prediction, Preemption, Personalization, & Populations) and biomarker-based medicine, requires a radical shift in animal modeling methodology. In particular, 4P's models will be non-determinant (i.e. disease severity will depend on secondary environmental and genetic factors) and validated by reverse-translation of animal homologues to human biomarkers. A powerful consequence of the biomarker approach is that different closely-related disorders have a unique fingerprint of biomarkers. Animals can be validated as a highly-specific model of a single disorder by matching this 'fingerprint', or as a model of a symptom seen in multiple disorders by matching common biomarkers. Here we illustrate this approach with two Abnormal Repetitive Behaviors (ARBs) in mice: stereotypies and barbering (hair pulling). We developed animal versions of the neuropsychological biomarkers that distinguish human ARBs, and tested the fingerprint of the different mouse ARBs. As predicted, the two mouse ARBs were associated with different biomarkers. Both barbering and stereotypy could be discounted as models of OCD (even though they are widely used as such), due to the absence of the limbic biomarkers that are characteristic of OCD and hence necessary for a valid model. Conversely, barbering matched the fingerprint of trichotillomania (i.e. selective deficits in set-shifting), suggesting it may be a highly specific model of this disorder. In contrast, stereotypies were correlated only with a biomarker (deficits in response shifting) correlated with stereotypies in multiple disorders, suggesting that animal stereotypies model stereotypies in multiple disorders. PMID:21219937

  6. Comparison of simulator fidelity model predictions with in-simulator evaluation data

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Mckissick, B. T.; Ashworth, B. R.

    1983-01-01

    A full-factorial, in-simulator experiment on a single-axis, multiloop, compensatory pitch tracking task is described. The experiment was conducted to provide data to validate extensions to an analytic, closed-loop model of a real-time digital simulation facility. The results of the experiment, encompassing various simulation fidelity factors such as visual delay, digital integration algorithms, computer iteration rates, control loading bandwidths and proprioceptive cues, and g-seat kinesthetic cues, are compared with predictions obtained from the analytic model incorporating an optimal control model of the human pilot. The in-simulator results demonstrate more sensitivity to the g-seat and to the control loader conditions than was predicted by the model. However, the model predictions are generally upheld, although the predicted magnitudes of the states and of the error terms are sometimes off considerably. Of particular concern are the large sensitivity difference for one control loader condition and the model/in-simulator mismatch in the magnitude of the plant states when the other states match.

  7. Model-based influences on humans' choices and striatal prediction errors.

    PubMed

    Daw, Nathaniel D; Gershman, Samuel J; Seymour, Ben; Dayan, Peter; Dolan, Raymond J

    2011-03-24

    The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors, and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making. Copyright © 2011 Elsevier Inc. All rights reserved.

  8. Assessing Readiness for Online Education--Research Models for Identifying Students at Risk

    ERIC Educational Resources Information Center

    Wladis, Claire; Conway, Katherine M.; Hachey, Alyse C.

    2016-01-01

    This study explored the interaction between student characteristics and the online environment in predicting course performance and subsequent college persistence among students in a large urban U.S. university system. Multilevel modeling, propensity score matching, and the KHB decomposition method were used. The most consistent pattern observed…

  9. Planck intermediate results: XLII. Large-scale Galactic magnetic fields

    DOE PAGES

    Adam, R.; Ade, P. A. R.; Alves, M. I. R.; ...

    2016-12-12

    Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. In this paper, we use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured by the Planck satellite. We first update these models to match the Planck synchrotron products using a common model for the cosmic-ray leptons. We discuss the impact on this analysis of the ongoing problems of component separation in the Planck microwave bands and of the uncertain cosmic-ray spectrum. In particular, the inferred degree of ordering in the magnetic fields is sensitive to these systematic uncertainties, and we further show the importance of considering the expected variations in the observables in addition to their mean morphology. We then compare the resulting simulated emission to the observed dust polarization and find that the dust predictions do not match the morphology in the Planck data but underpredict the dust polarization away from the plane. We modify one of the models to roughly match both observables at high latitudes by increasing the field ordering in the thin disc near the observer. Finally, though this specific analysis is dependent on the component separation issues, we present the improved model as a proof of concept for how these studies can be advanced in future using complementary information from ongoing and planned observational projects.

  10. A Comparative Study of Data Mining Techniques on Football Match Prediction

    NASA Astrophysics Data System (ADS)

    Rosli, Che Mohamad Firdaus Che Mohd; Zainuri Saringat, Mohd; Razali, Nazim; Mustapha, Aida

    2018-05-01

    Data prediction has become a trend in today's businesses and organizations. This paper sets out to predict match outcomes for association football from the perspective of football club managers and coaches. It explores different data mining techniques used for predicting match outcomes, where the target classes are win, draw, and lose. The main objective of this research is to find the most accurate data mining technique that fits the nature of football data. The techniques tested are Decision Trees, Neural Networks, Bayesian Networks, and k-Nearest Neighbors. The results from the comparative experiments showed that Decision Trees produced the highest average prediction accuracy in the domain of football match prediction, at 99.56%.
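The evaluation protocol can be sketched with a from-scratch k-NN classifier and k-fold cross-validation on synthetic three-class "match" data. The features and labels below are invented, not the study's football dataset, and k-NN stands in for whichever classifier is under test:

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic features (e.g. form difference, home advantage) and
# outcomes 0=lose, 1=draw, 2=win -- purely illustrative
X = rng.normal(size=(300, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.5).astype(int) + (X[:, 0] > 1.2).astype(int)

def knn_predict(Xtr, ytr, Xte, k=5):
    """Majority vote among the k nearest training points (Euclidean)."""
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    neighbors = ytr[np.argsort(d, axis=1)[:, :k]]
    return np.array([np.bincount(v).argmax() for v in neighbors])

def kfold_accuracy(X, y, n_folds=5):
    """Average held-out accuracy across n_folds random folds."""
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    accs = []
    for i, test_idx in enumerate(folds):
        train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
        pred = knn_predict(X[train_idx], y[train_idx], X[test_idx])
        accs.append(float(np.mean(pred == y[test_idx])))
    return float(np.mean(accs))

acc = kfold_accuracy(X, y)
```

Running each candidate technique through the same cross-validation loop and comparing the averaged accuracies is the comparison the paper describes.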

  11. Falsification of matching theory and confirmation of an evolutionary theory of behavior dynamics in a critical experiment.

    PubMed

    McDowell, J J; Calvin, Olivia L; Hackett, Ryan; Klapes, Bryan

    2017-07-01

    Two competing predictions of matching theory and an evolutionary theory of behavior dynamics, and one additional prediction of the evolutionary theory, were tested in a critical experiment in which human participants worked on concurrent schedules for money (Dallery et al., 2005). The three predictions concerned the descriptive adequacy of matching theory equations, and of equations describing emergent equilibria of the evolutionary theory. Tests of the predictions falsified matching theory and supported the evolutionary theory. Copyright © 2017 Elsevier B.V. All rights reserved.
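The abstract does not reproduce the equations under test. A common matching-theory form is the generalized matching law, log(B1/B2) = a·log(R1/R2) + log b, relating behavior ratios to reinforcement ratios; a minimal least-squares fit on invented, noiseless data:

```python
import numpy as np

# hypothetical reinforcement and behavior ratios generated from the
# generalized matching law with sensitivity a=0.8 (undermatching) and bias b=1.2
R_ratio = np.array([0.125, 0.25, 1.0, 4.0, 8.0])
B_ratio = 1.2 * R_ratio ** 0.8

# fit log(B1/B2) = a*log(R1/R2) + log(b) by least squares
a_hat, log_b_hat = np.polyfit(np.log(R_ratio), np.log(B_ratio), 1)
b_hat = float(np.exp(log_b_hat))
```

A critical test like the one reported compares the residuals of fits such as this against those of the evolutionary theory's equilibrium equations on the same concurrent-schedule data.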

  12. Urinary Biomarkers and Obstructive Sleep Apnea in Patients with Down Syndrome

    PubMed Central

    Elsharkawi, Ibrahim; Gozal, David; Macklin, Eric A.; Voelz, Lauren; Weintraub, Gil; Skotko, Brian G.

    2017-01-01

    Study Objectives The study aim was to compare urinary biomarkers in individuals with Down syndrome (DS) with and without obstructive sleep apnea (OSA) to those of age- and sex-matched neurotypically developing healthy controls (HC). We further investigated whether we could predict OSA in individuals with DS using these biomarkers. Methods Urine samples were collected from 58 individuals with DS the night before or the morning after their scheduled overnight polysomnogram or both, of whom 47 could be age- and sex-matched to a sample of 43 HC. Concentrations of 12 neurotransmitters were determined by enzyme-linked immunosorbent assay. Log-transformed creatinine-corrected assay levels were normalized. Normalized z-scores were compared between individuals with DS vs. HC, between individuals with DS with vs. without OSA, and to derive composite models to predict OSA. Results Most night-sampled urinary biomarkers were elevated among individuals with DS relative to matched HC. No urinary biomarker levels differed between individuals with DS with vs. without OSA. A combination of four urinary biomarkers predicted AHI > 1 with a positive predictive value of 90% and a negative predictive value of 68%. Conclusions Having DS, even in the absence of concurrent OSA, is associated with a different urinary biomarker profile when compared to HC. Therefore, while urinary biomarkers may be predictive of OSA in the general pediatric population, a different approach is needed in interpreting urinary biomarker assays in individuals with DS. Certain biomarkers also seem promising to be predictive of OSA in individuals with DS. PMID:28522103

  13. Predicting galaxy star formation rates via the co-evolution of galaxies and haloes

    NASA Astrophysics Data System (ADS)

    Watson, Douglas F.; Hearin, Andrew P.; Berlind, Andreas A.; Becker, Matthew R.; Behroozi, Peter S.; Skibba, Ramin A.; Reyes, Reinabelle; Zentner, Andrew R.; van den Bosch, Frank C.

    2015-01-01

    In this paper, we test the age matching hypothesis that the star formation rate (SFR) of a galaxy of fixed stellar mass is determined by its dark matter halo formation history, e.g. more quiescent galaxies reside in older haloes. We present new Sloan Digital Sky Survey measurements of the galaxy two-point correlation function and galaxy-galaxy lensing as a function of stellar mass and SFR, separated into quenched and star-forming galaxy samples to test this simple model. We find that our age matching model is in excellent agreement with these new measurements. We also find that our model is able to predict: (1) the relative SFRs of central and satellite galaxies, (2) the SFR dependence of the radial distribution of satellite galaxy populations within galaxy groups, rich groups, and clusters and their surrounding larger scale environments, and (3) the interesting feature that the satellite quenched fraction as a function of projected radial distance from the central galaxy exhibits an approximately r^(-0.15) slope, independent of environment. These accurate predictions are intriguing given that we do not explicitly model satellite-specific processes after infall, and that in our model the virial radius does not mark a special transition region in the evolution of a satellite. The success of the model suggests that present-day galaxy SFR is strongly correlated with halo mass assembly history.
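The core of age matching is a rank-ordered assignment: at fixed stellar mass, the oldest haloes receive the lowest SFRs. A toy sketch with invented catalogues (not the SDSS or halo data used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1000
halo_age = rng.gamma(shape=2.0, scale=3.0, size=n)  # toy formation ages (Gyr)
sfr = rng.lognormal(mean=0.0, sigma=1.0, size=n)    # toy SFRs (Msun/yr)

# rank match: the youngest halo gets the highest SFR, the oldest the lowest
assigned_sfr = np.empty(n)
assigned_sfr[np.argsort(halo_age)] = np.sort(sfr)[::-1]
```

The observable predictions (clustering, lensing, quenched fractions) then follow from where those rank-assigned haloes sit in the simulation volume.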

  14. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This paper presents recent thermal model results for the Advanced Stirling Radioisotope Generator (ASRG). The three-dimensional (3D) ASRG thermal power model was built using the Thermal Desktop™ thermal analyzer. The model was correlated with ASRG engineering unit test data and ASRG flight unit predictions from Lockheed Martin's (LM's) I-deas™ TMG thermal model. The auxiliary cooling system (ACS) of the ASRG is also included in the ASRG thermal model. The ACS is designed to remove waste heat from the ASRG so that it can be used to heat spacecraft components. The performance of the ACS is reported under nominal conditions and during a Venus flyby scenario. The results for the nominal case are validated with data from Lockheed Martin. Transient thermal analysis results of the ASRG for a Venus flyby with a representative trajectory are also presented. In addition, model results for an ASRG mounted on a Cassini-like spacecraft with a sunshade are presented to show a way to mitigate the high temperatures of a Venus flyby. It was predicted that the sunshade can lower the temperature of the ASRG alternator by 20 °C for the representative Venus flyby trajectory. The 3D model was also modified to predict generator performance after a single Advanced Stirling Convertor failure. The geometry of the Microtherm HT insulation block on the outboard side was modified to match deformation and shrinkage observed during testing of a prototypic ASRG test fixture by LM. Test conditions and test data were used to correlate the model by adjusting the thermal conductivity of the deformed insulation to match the post-heat-dump steady-state temperatures. Results for these conditions showed that the performance of the still-functioning inboard ACS was unaffected.

  15. Statistical Analysis of Notational AFL Data Using Continuous Time Markov Chains

    PubMed Central

    Meyer, Denny; Forbes, Don; Clarke, Stephen R.

    2006-01-01

    Animal biologists commonly use continuous time Markov chain models to describe patterns of animal behaviour. In this paper we consider the use of these models for describing AFL football. In particular we test the assumptions for continuous time Markov chain models (CTMCs), with time, distance and speed values associated with each transition. Using a simple event categorisation it is found that a semi-Markov chain model is appropriate for this data. This validates the use of Markov Chains for future studies in which the outcomes of AFL matches are simulated. Key Points: A comparison of four AFL matches suggests similarity in terms of transition probabilities for events and the mean times, distances and speeds associated with each transition. The Markov assumption appears to be valid. However, the speed, time and distance distributions associated with each transition are not exponential, suggesting that a semi-Markov model can be used to model and simulate play. Team-identified events and directions associated with transitions are required to develop the model into a tool for the prediction of match outcomes. PMID:24357946

  16. Statistical Analysis of Notational AFL Data Using Continuous Time Markov Chains.

    PubMed

    Meyer, Denny; Forbes, Don; Clarke, Stephen R

    2006-01-01

    Animal biologists commonly use continuous time Markov chain models to describe patterns of animal behaviour. In this paper we consider the use of these models for describing AFL football. In particular we test the assumptions for continuous time Markov chain models (CTMCs), with time, distance and speed values associated with each transition. Using a simple event categorisation it is found that a semi-Markov chain model is appropriate for this data. This validates the use of Markov Chains for future studies in which the outcomes of AFL matches are simulated. Key Points: A comparison of four AFL matches suggests similarity in terms of transition probabilities for events and the mean times, distances and speeds associated with each transition. The Markov assumption appears to be valid. However, the speed, time and distance distributions associated with each transition are not exponential, suggesting that a semi-Markov model can be used to model and simulate play. Team-identified events and directions associated with transitions are required to develop the model into a tool for the prediction of match outcomes.
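A CTMC of play can be sketched with a rate matrix Q: off-diagonal entries are transition rates, diagonal entries make each row sum to zero, and holding times are exponential. The states and rates below are hypothetical, not the notational AFL data:

```python
import numpy as np

rng = np.random.default_rng(7)

states = ["kick", "handball", "mark", "turnover"]   # illustrative event labels
Q = np.array([[-1.0, 0.4, 0.4, 0.2],   # rates per unit time (invented)
              [ 0.6, -1.2, 0.3, 0.3],
              [ 0.8, 0.1, -1.0, 0.1],
              [ 0.5, 0.3, 0.2, -1.0]])

def simulate(Q, s0=0, t_max=120.0):
    """Gillespie-style simulation: draw an exponential holding time in the
    current state, then jump according to the normalized off-diagonal rates."""
    t, s, path = 0.0, s0, []
    while True:
        rate = -Q[s, s]
        dt = rng.exponential(1.0 / rate)
        if t + dt > t_max:
            return path
        t += dt
        p = Q[s].copy()
        p[s] = 0.0
        s = int(rng.choice(len(p), p=p / rate))
        path.append((t, s))

path = simulate(Q)
```

A semi-Markov variant, which the paper's tests favour, would replace the exponential draw with an empirical holding-time distribution per transition while keeping the same jump chain.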

  17. Estimating long-wavelength dynamic topographic change of passive continental margins since the Early Cretaceous

    NASA Astrophysics Data System (ADS)

    Müller, Dietmar; Hassan, Rakib; Gurnis, Michael; Flament, Nicolas; Williams, Simon

    2017-04-01

    The influence of mantle convection on dynamic topographic change along continental margins is difficult to unravel, because their stratigraphic record is dominated by tectonic subsidence caused by rifting. Yet, dynamic topography can potentially introduce significant depth anomalies along passive margins, influencing their water depth, sedimentary environments and geohistory. Here we follow a three-fold approach to estimate changes in dynamic topography along both continental interiors and passive margins based on a set of seven global mantle convection models. These models include different methodologies (forward and hybrid backward-forward methods), different plate reconstructions and alternative mantle rheologies. We demonstrate that a geodynamic forward model that includes adiabatic heating in addition to internal heating from radiogenic sources, and a mantle viscosity profile with a gradual increase in viscosity below the mantle transition zone, provides a greatly improved match to the spectral range of residual topography end-members as compared with previous models at very long wavelengths (spherical degrees 2-3). We combine global sea level estimates with predicted surface dynamic topography to evaluate the match between predicted continental flooding patterns and published paleo-coastlines by comparing predicted versus geologically reconstructed land fractions and spatial overlaps of flooded regions for individual continents since 140 Ma. Modelled versus geologically reconstructed land fractions match within 10% for most models, and the spatial overlaps of inundated regions are mostly between 85% and 100% for the Cenozoic, dropping to about 75-100% in the Cretaceous. We categorise the evolution of modelled dynamic topography in both continental interiors and along passive margins using cluster analysis to investigate how clusters of similar dynamic topography time series are distributed spatially. 
    A subdivision of four clusters is found to best reveal end-members of dynamic topography evolution along passive margins and their hinterlands, differentiating topographic stability, long-term pronounced subsidence, initial stability over a dynamic high followed by moderate subsidence, and regions relatively proximal to subduction zones with varied dynamic topography histories. Along passive continental margins the most commonly observed process is a gradual move from dynamic highs towards lows during the fragmentation of Pangea, reflecting that many passive margins now overlie slabs sinking in the lower mantle. Our best-fit model results in up to 500 ± 150 m of total dynamic subsidence of continental interiors, while along passive margins the maximum predicted dynamic topographic change over 140 million years is about 350 ± 150 m of subsidence. Models with plumes exhibit clusters of transient passive margin uplift of about 200 ± 200 m. The good overall match between predicted dynamic topography and geologically mapped paleo-coastlines makes a convincing case that mantle-driven topographic change is a critical component of relative sea level change, and one of the main driving forces generating the observed geometries and timings of large-scale shifts in paleo-coastlines.

  18. Model-based conifer crown surface reconstruction from multi-ocular high-resolution aerial imagery

    NASA Astrophysics Data System (ADS)

    Sheng, Yongwei

    2000-12-01

Tree crown parameters such as width, height, shape and crown closure are desirable in forestry and ecological studies, but they are time-consuming and labor-intensive to measure in the field. The stereoscopic capability of high-resolution aerial imagery provides a means of reconstructing crown surfaces. Existing photogrammetric algorithms designed to map terrain surfaces, however, cannot adequately extract crown surfaces, especially for steep conifer crowns. Considering crown surface reconstruction in the broader context of tree characterization from aerial images, we develop a rigorous perspective tree image formation model to bridge image-based tree extraction and crown surface reconstruction, and an integrated model-based approach to conifer crown surface reconstruction. Because most conifer crowns take a solid geometric form, they are modeled as generalized hemi-ellipsoids. Both automatic and semi-automatic approaches to optimal tree model development from multi-ocular images are investigated. The semi-automatic 3D tree interpreter developed in this thesis is able to efficiently extract reliable tree parameters and tree models in complicated tree stands. This thesis starts with a sophisticated stereo matching algorithm, and incorporates tree models to guide stereo matching. The following critical problems are addressed in the model-based surface reconstruction process: (1) surface model composition from tree models, (2) occlusion in disparity prediction from tree models, (3) integration of the predicted disparities into image matching, (4) reduction of tree model edge effects on the disparity map, (5) occlusion in orthophoto production, and (6) foreshortening in image matching, which is especially serious for conifer crown surfaces. Solutions to these problems are necessary for successful crown surface reconstruction.
The model-based approach was applied to recover the canopy surface of a dense redwood stand using tri-ocular high-resolution images scanned from 1:2,400 aerial photographs. The results demonstrate the approach's ability to reconstruct complicated stands. The model-based approach proposed in this thesis is potentially applicable to other surface recovery problems with a priori knowledge about the objects.

  19. Physics-based model for predicting the performance of a miniature wind turbine

    NASA Astrophysics Data System (ADS)

    Xu, F. J.; Hu, J. Z.; Qiu, Y. P.; Yuan, F. G.

    2011-04-01

A comprehensive physics-based model for predicting the performance of a miniature wind turbine (MWT) for powering wireless sensor systems is proposed in this paper. An approximation of the power coefficient of the turbine rotor was made after the rotor performance was measured. By incorporating this approximation into an equivalent circuit model derived from the operating principles of the MWT, the overall system performance of the MWT was predicted. To validate the prediction, an MWT system comprising a 7.6 cm Thorgren plastic propeller as the turbine rotor and a DC motor as the generator was designed and tested experimentally. The predicted output voltage, power and system efficiency matched the measured results well, implying that this study holds promise for estimating and optimizing the performance of MWTs.
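The prediction chain described above couples a measured rotor power coefficient with a DC-generator equivalent circuit. The paper's fitted Cp curve and motor constants are not given in the abstract, so the sketch below assumes a parabolic Cp(λ) and nominal generator parameters, and finds the operating point where rotor and generator torques balance by bisection.

```python
import math

# Illustrative parameters (assumed, not from the paper): a 7.6 cm rotor,
# a small DC motor used as generator, and a resistive load.
rho = 1.2          # air density, kg/m^3
R = 0.038          # rotor radius, m (7.6 cm diameter)
A = math.pi * R**2
k = 2.0e-3         # generator constant, N*m/A (equivalently V*s/rad)
Ri = 30.0          # generator winding resistance, ohm
RL = 100.0         # load resistance, ohm

def cp(lam, lam_opt=2.5, cp_max=0.15):
    """Assumed parabolic fit to a measured rotor power coefficient."""
    return max(0.0, cp_max * (1 - ((lam - lam_opt) / lam_opt) ** 2))

def rotor_torque(omega, v):
    lam = omega * R / v
    p = cp(lam) * 0.5 * rho * A * v**3      # aerodynamic power captured
    return p / omega if omega > 0 else 0.0

def generator_torque(omega):
    # DC machine: torque = k * current, current = back-EMF / (Ri + RL)
    return k * (k * omega) / (Ri + RL)

def operating_point(v, lo=1.0, hi=1e5):
    """Bisect for the rotor speed where the two torques balance."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if rotor_torque(mid, v) > generator_torque(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

v = 5.0                                        # wind speed, m/s
w = operating_point(v)
p_load = (k * w) ** 2 * RL / (Ri + RL) ** 2    # power delivered to the load
efficiency = p_load / (0.5 * rho * A * v**3)   # overall system efficiency
```

The same torque-balance idea underlies the paper's coupled rotor-plus-circuit prediction; only the specific fits differ.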

  20. The estimation of the thyroid volume before surgery--an important prerequisite for minimally invasive thyroidectomy.

    PubMed

    Ruggieri, M; Fumarola, A; Straniero, A; Maiuolo, A; Coletta, I; Veltri, A; Di Fiore, A; Trimboli, P; Gargiulo, P; Genderini, M; D'Armiento, M

    2008-09-01

Currently, a thyroid volume >25 ml, obtained by preoperative ultrasound evaluation, is an important exclusion criterion for minimally invasive thyroidectomy. Among the different imaging techniques, two-dimensional ultrasonography has become the most widely accepted method for the assessment of thyroid volume (US-TV). The aims of this study were: (1) to estimate the preoperative thyroid volume in patients undergoing minimally invasive total thyroidectomy using a mathematical formula and (2) to verify its validity by comparing it with the postsurgical TV (PS-TV). In 53 patients who underwent minimally invasive total thyroidectomy (from January 2003 to December 2007), US-TV, obtained by the ellipsoid volume formula, was compared to PS-TV determined by Archimedes' principle. A mathematical formula able to predict the TV from the US-TV was applied in 34 cases in the last 2 years. Mean US-TV (14.4 +/- 5.9 ml) was significantly lower than mean PS-TV (21.7 +/- 10.3 ml). This underestimation was related to gland multinodularity and/or nodular involvement of the isthmus. A mathematical formula to reduce US-TV underestimation and predict the real TV was developed using a linear model. Mean predicted TV (16.8 +/- 3.7 ml) closely matched mean PS-TV, underestimating PS-TV in only 19% of cases. We verified the accuracy of this mathematical model for determining patients' eligibility for minimally invasive total thyroidectomy, and we demonstrated that a predicted TV <25 ml was confirmed post-surgery in 94% of cases. We demonstrated that, using a linear model, it is possible to predict the PS-TV from US with high accuracy. In particular, the percentage of cases in which the predicted TV matched the PS-TV increased from 23%, as estimated by US, to 43%. Moreover, the percentage of TV underestimation was reduced from 77% to 19%, and the range of disagreement from up to 200% to 80%.
This study shows that two-dimensional US can provide the accurate estimation of thyroid volume but that it can be improved by a mathematical model. This may contribute to a more appropriate surgical management of thyroid diseases.
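The calculation pipeline above has two steps: the standard ellipsoid approximation per lobe (V = π/6 × length × width × depth) and a fitted linear correction. The study's regression coefficients are not reported in the abstract, so the values below are placeholders that only sketch the shape of the computation.

```python
import math

def lobe_volume_ml(length_cm, width_cm, depth_cm):
    """Ellipsoid approximation used in 2-D ultrasound: V = pi/6 * L * W * D."""
    return math.pi / 6 * length_cm * width_cm * depth_cm

def us_thyroid_volume(lobes):
    """Total US-TV as the sum of the per-lobe ellipsoid volumes."""
    return sum(lobe_volume_ml(*dims) for dims in lobes)

def predicted_volume(us_tv, a=1.17, b=0.0):
    # Hypothetical linear correction for US underestimation; the study's
    # fitted coefficients are not given in the abstract.
    return a * us_tv + b

# Invented lobe dimensions (cm): (length, width, depth) for right and left.
us_tv = us_thyroid_volume([(4.5, 1.8, 1.6), (4.3, 1.7, 1.5)])
tv = predicted_volume(us_tv)
eligible = tv < 25.0   # minimally invasive thyroidectomy cut-off from the abstract
```

Because a > 1, the corrected estimate is always larger than the raw US-TV, counteracting the systematic underestimation the study reports.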

  1. Simply criminal: predicting burglars' occupancy decisions with a simple heuristic.

    PubMed

    Snook, Brent; Dhami, Mandeep K; Kavanagh, Jennifer M

    2011-08-01

Rational choice theories of criminal decision making assume that offenders weight and integrate multiple cues when making decisions (i.e., are compensatory). We tested this assumption by comparing how well a compensatory strategy called Franklin's Rule captured burglars' decision policies regarding residence occupancy compared to a non-compensatory strategy (i.e., the Matching Heuristic). Forty burglars each decided on the occupancy of 20 randomly selected photographs of residences (for which actual occupancy was known when the photo was taken). Participants also provided open-ended reports on the cues that influenced their decisions in each case, and then rated the importance of eight cues (e.g., deadbolt visible) over all decisions. Burglars predicted occupancy beyond chance levels. The Matching Heuristic was a significantly better predictor of burglars' decisions than Franklin's Rule, and cue use in the Matching Heuristic corresponded better to the cue ecological validities in the environment than cue use in Franklin's Rule. The most important cue in burglars' models was also the most ecologically valid, or predictive of actual occupancy (i.e., vehicle present). The majority of burglars correctly identified the most important cue in their models, and the open-ended technique showed greater correspondence between self-reported and captured cue use than the rating-over-decisions technique. Our findings support a limited-rationality perspective on criminal decision making, and have implications for crime prevention.
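The two captured strategies can be contrasted in a few lines. The cue names, weights, search order, and the k = 2 stopping rule below are all hypothetical, chosen only to show a compensatory weighted sum (Franklin's Rule) against non-compensatory sequential search (the Matching Heuristic).

```python
# Hypothetical cues; the study's fitted weights and orders are not in the abstract.
CUES = ["vehicle_present", "lights_on", "deadbolt_visible"]

def franklins_rule(house, weights, threshold=0.5):
    """Compensatory: weighted sum over ALL cues, compared to a threshold."""
    score = sum(weights[c] * house[c] for c in CUES)
    return score >= threshold * sum(weights.values())

def matching_heuristic(house, search_order, k=2):
    """Non-compensatory: inspect cues in validity order; the first cue that
    indicates occupancy triggers an 'occupied' decision; if the first k cues
    all fail to match, default to 'unoccupied'."""
    for cue in search_order[:k]:
        if house[cue]:
            return True
    return False

house = {"vehicle_present": 1, "lights_on": 0, "deadbolt_visible": 1}
weights = {"vehicle_present": 0.8, "lights_on": 0.5, "deadbolt_visible": 0.2}
fr = franklins_rule(house, weights)
mh = matching_heuristic(house, ["vehicle_present", "lights_on", "deadbolt_visible"])
```

The key behavioural difference: the heuristic ignores every cue after its first match, so a low-validity cue (here, deadbolt visible) can never overturn the decision, whereas Franklin's Rule always integrates it.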

  2. Analysis and modeling of infrasound from a four-stage rocket launch

    DOE PAGES

    Blom, Philip Stephen; Marcillo, Omar Eduardo; Arrowsmith, Stephen

    2016-06-17

Infrasound from a four-stage sounding rocket was recorded by several arrays within 100 km of the launch pad. Propagation modeling methods have been applied to the known trajectory to predict infrasonic signals at the ground in order to identify what information might be obtained from such observations. There is good agreement between modeled and observed back azimuths, and predicted arrival times for motor ignition signals match those observed. The signal due to the high-altitude stage ignition is found to be low amplitude, despite predictions of weak attenuation; this lack of signal is possibly due to inefficient aeroacoustic coupling in the rarefied upper atmosphere.
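At ranges within ~100 km, a zeroth-order arrival-time and back-azimuth prediction can be sketched with flat-earth geometry and a nominal celerity (~0.28-0.31 km/s for typical infrasound paths); the study's actual propagation modeling traces signals through atmospheric wind and temperature profiles. All positions and times below are invented.

```python
import math

def back_azimuth_deg(array_xy, source_xy):
    """Back azimuth from an array to a source, flat-earth approximation."""
    dx = source_xy[0] - array_xy[0]   # east offset, km
    dy = source_xy[1] - array_xy[1]   # north offset, km
    return math.degrees(math.atan2(dx, dy)) % 360.0

def arrival_time_s(t_event_s, array_xy, source_xyz, celerity_km_s=0.30):
    """Predicted arrival time via slant range over a nominal celerity."""
    dx = source_xyz[0] - array_xy[0]
    dy = source_xyz[1] - array_xy[1]
    dz = source_xyz[2]                # source altitude, km
    slant = math.sqrt(dx * dx + dy * dy + dz * dz)
    return t_event_s + slant / celerity_km_s

array_xy = (0.0, 0.0)
# First-stage ignition on a pad 60 km due north of the array, near sea level:
t1 = arrival_time_s(0.0, array_xy, (0.0, 60.0, 0.0))
baz = back_azimuth_deg(array_xy, (0.0, 60.0))
```

Higher-stage ignitions add a large altitude term to the slant range, which is one reason their predicted arrivals separate cleanly from the pad signal.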

  3. Analysis and modeling of infrasound from a four-stage rocket launch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blom, Philip Stephen; Marcillo, Omar Eduardo; Arrowsmith, Stephen

Infrasound from a four-stage sounding rocket was recorded by several arrays within 100 km of the launch pad. Propagation modeling methods have been applied to the known trajectory to predict infrasonic signals at the ground in order to identify what information might be obtained from such observations. There is good agreement between modeled and observed back azimuths, and predicted arrival times for motor ignition signals match those observed. The signal due to the high-altitude stage ignition is found to be low amplitude, despite predictions of weak attenuation; this lack of signal is possibly due to inefficient aeroacoustic coupling in the rarefied upper atmosphere.

  4. Discussion of comparison study of hydraulic fracturing models -- Test case: GRI Staged Field Experiment No. 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleary, M.P.

This paper provides comments on a companion journal paper on predictive modeling of hydraulic fracturing patterns (N.R. Warpinski et al., 1994). The companion paper was designed to compare various modeling methods in order to identify the most accurate ones under various geologic constraints. The comments of this paper center on potential deficiencies in that paper, including: the limited number of actual comparisons offered between models; a lack of documentation of how predictive data were matched with data from related field operations; and the relevance/impact of accurate modeling on overall hydraulic fracturing cost and production.

  5. The affordance-matching hypothesis: how objects guide action understanding and prediction

    PubMed Central

    Bach, Patric; Nicholson, Toby; Hudson, Matthew

    2014-01-01

Action understanding lies at the heart of social interaction. Prior research has often conceptualized this capacity in terms of a motoric matching of observed actions to an action in one’s motor repertoire, but has ignored the role of object information. In this manuscript, we set out an alternative conception of intention understanding, which places the role of objects as central to our observation and comprehension of the actions of others. We outline the current understanding of the interconnectedness of action and object knowledge, demonstrating how heavily each relies on the other. We then propose a novel framework, the affordance-matching hypothesis, which incorporates these findings into a simple model of action understanding, in which object knowledge (what an object is for and how it is used) can inform and constrain both action interpretation and prediction. We review recent empirical evidence that supports such an object-based view of action understanding, and we relate the affordance-matching hypothesis to recent proposals that have re-conceptualized the role of mirror neurons in action understanding. PMID:24860468

  6. Mathematical modeling of ethanol production in solid-state fermentation based on solid medium's dry weight variation.

    PubMed

    Mazaheri, Davood; Shojaosadati, Seyed Abbas; Zamir, Seyed Morteza; Mousavi, Seyyed Mohammad

    2018-04-21

In this work, mathematical modeling of ethanol production in solid-state fermentation (SSF) has been carried out based on the variation in the dry weight of the solid medium. This method was previously used for mathematical modeling of enzyme production; however, the model must be modified to predict the production of a volatile compound such as ethanol. The experimental results of bioethanol production from a mixture of carob pods and wheat bran by Zymomonas mobilis in SSF were used for model validation. Exponential and logistic kinetic models were used for modeling the growth of the microorganism. In both cases, the model predictions matched the experimental results well during the exponential growth phase, indicating the suitability of the solid-medium weight-variation method for modeling volatile product formation in solid-state fermentation. The logistic model gave the better predictions.
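The dry-weight idea can be sketched as follows: fermentation releases CO2 from the solid bed, so weight loss implies product formation via the reaction stoichiometry (1 glucose → 2 ethanol + 2 CO2). This sketch ignores ethanol evaporation, which the paper's modified model accounts for, and all kinetic parameters are invented.

```python
import math

# Stoichiometry: per mole of glucose, 2 mol ethanol (92 g) and 2 mol CO2 (88 g),
# so each gram of CO2 lost from the bed implies 92/88 g of ethanol formed.
M_ETOH, M_CO2 = 92.0, 88.0

def logistic_biomass(t_h, x0=0.5, xmax=10.0, mu=0.25):
    """Logistic growth, g biomass per kg dry solids (parameters assumed)."""
    return xmax / (1 + (xmax / x0 - 1) * math.exp(-mu * t_h))

def ethanol_from_weight_loss(w0_g, w_g):
    """Ethanol implied by dry-weight loss, attributing all loss to CO2
    (i.e., neglecting ethanol evaporation, unlike the paper's model)."""
    co2_lost = w0_g - w_g
    return co2_lost * (M_ETOH / M_CO2)

x12 = logistic_biomass(12.0)                      # biomass after 12 h
etoh = ethanol_from_weight_loss(1000.0, 960.0)    # 40 g CO2 lost from 1 kg bed
```

The correction for ethanol volatility is exactly what distinguishes this application from the earlier enzyme-production use of the method, where the product stays in the bed.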

  7. Attributable Cost of Clostridium difficile Infection in Pediatric Patients.

    PubMed

    Mehrotra, Preeti; Jang, Jisun; Gidengil, Courtney; Sandora, Thomas J

    2017-12-01

OBJECTIVES The attributable cost of Clostridium difficile infection (CDI) in children is unknown. We sought to determine a national estimate of the attributable cost and length of stay (LOS) of CDI occurring during hospitalization in children. DESIGN AND METHODS We analyzed discharge records of patients between 2 and 18 years of age from the Agency for Healthcare Research and Quality (AHRQ) Kids' Inpatient Database. We created a logistic regression model to predict CDI during hospitalization based on demographic and clinical characteristics. Predicted probabilities from the logistic regression model were then used as propensity scores to match CDI to non-CDI cases 1:2. Charges were converted to costs and compared between patients with CDI and propensity-score-matched controls. In a sensitivity analysis, we adjusted for LOS as a confounder by including it in both the propensity score and a generalized linear model predicting cost. RESULTS We identified 8,527 pediatric hospitalizations (0.53%) with a diagnosis of CDI and 1,597,513 discharges without CDI. In our matched cohorts, the attributable cost of CDI occurring during a hospitalization ranged from $1,917 to $8,317, depending on whether the model was adjusted for LOS. When not adjusting for LOS, CDI-associated hospitalizations cost 1.6 times more than non-CDI-associated hospitalizations. The attributable LOS of CDI was approximately 4 days. CONCLUSIONS Clostridium difficile infection in hospitalized children is associated with an economic burden similar to adult estimates. This finding supports a continued focus on preventing CDI in children as a priority. Pediatric CDI cost analyses should account for LOS as an important confounder of cost. Infect Control Hosp Epidemiol 2017;38:1472-1477.
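The matching step above can be sketched as greedy 1:2 nearest-neighbour matching on the propensity score, without replacement and within a caliper. The scores below are made up, and the study's exact matching algorithm and caliper are not specified in the abstract.

```python
import numpy as np

def match_1_to_2(case_ps, control_ps, caliper=0.05):
    """Greedy 1:2 nearest-neighbour matching on the propensity score,
    without replacement; a case is matched only if two controls fall
    within the caliper."""
    available = list(range(len(control_ps)))
    matches = {}
    for i, ps in enumerate(case_ps):
        nearest = sorted(available, key=lambda j: abs(control_ps[j] - ps))
        picked = [j for j in nearest[:2] if abs(control_ps[j] - ps) <= caliper]
        if len(picked) == 2:
            matches[i] = picked
            for j in picked:
                available.remove(j)         # without replacement
    return matches

# Hypothetical propensity scores from a fitted logistic model:
case_ps = np.array([0.30, 0.70])
control_ps = np.array([0.29, 0.31, 0.68, 0.72, 0.10, 0.90])
m = match_1_to_2(case_ps, control_ps)
```

Cost comparisons are then made between each case and its two matched controls, which is what makes the resulting difference interpretable as an attributable cost.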

  8. The planum temporale as a computational hub.

    PubMed

    Griffiths, Timothy D; Warren, Jason D

    2002-07-01

    It is increasingly recognized that the human planum temporale is not a dedicated language processor, but is in fact engaged in the analysis of many types of complex sound. We propose a model of the human planum temporale as a computational engine for the segregation and matching of spectrotemporal patterns. The model is based on segregating the components of the acoustic world and matching these components with learned spectrotemporal representations. Spectrotemporal information derived from such a 'computational hub' would be gated to higher-order cortical areas for further processing, leading to object recognition and the perception of auditory space. We review the evidence for the model and specific predictions that follow from it.

  9. A comparative study of several compressibility corrections to turbulence models applied to high-speed shear layers

    NASA Technical Reports Server (NTRS)

    Viegas, John R.; Rubesin, Morris W.

    1991-01-01

Several recently published compressibility corrections to the standard k-epsilon turbulence model are used with the Navier-Stokes equations to compute the mixing region of a large variety of high-speed flows. These corrections, specifically developed to address the inability of higher-order turbulence models to accurately predict the spread rate of compressible free shear flows, are applied to two-stream flows of the same gas mixing under a large variety of free-stream conditions. Results are presented for two types of flows: unconfined streams with either (1) matched total temperatures and static pressures, or (2) matched static temperatures and pressures; and a confined stream.
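One representative published correction of this kind is Sarkar's dilatational-dissipation model, in which the solenoidal dissipation is augmented by a term proportional to the turbulence Mach number squared; this lowers k and hence the predicted spread rate at high convective Mach number. The sketch below uses Sarkar's original calibration constant and illustrative flow numbers, and is not necessarily one of the specific corrections used in the paper.

```python
# Sarkar-type compressibility correction sketch: total dissipation
# eps = eps_s * (1 + alpha1 * Mt^2), with Mt^2 = 2k / a^2 the squared
# turbulence Mach number. alpha1 = 1.0 follows Sarkar's calibration;
# flow values below are illustrative only.
def total_dissipation(eps_s, k, a_sound, alpha1=1.0):
    mt2 = 2.0 * k / a_sound**2
    return eps_s * (1.0 + alpha1 * mt2)

eps_low = total_dissipation(eps_s=1.0, k=100.0, a_sound=340.0)    # low Mt
eps_high = total_dissipation(eps_s=1.0, k=5000.0, a_sound=340.0)  # high Mt
```

The extra dissipation grows with Mt, which is the mechanism by which such corrections suppress the overpredicted spread rate of compressible shear layers.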

  10. Generating Converged Accurate Free Energy Surfaces for Chemical Reactions with a Force-Matched Semiempirical Model.

    PubMed

    Kroonblawd, Matthew P; Pietrucci, Fabio; Saitta, Antonino Marco; Goldman, Nir

    2018-04-10

We demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol⁻¹.
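The force-matching objective itself is ordinary least squares on forces: choose model parameters to minimize the squared difference between model and reference forces over sampled configurations. The toy below fits coefficients of a force that is linear in a few basis functions to synthetic "reference" forces; the basis, distances, and noise are invented stand-ins for DFT forces and DFTB repulsive-potential parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pair distances (Angstrom) and an assumed three-function basis.
r = rng.uniform(1.0, 3.0, 200)
basis = np.stack([np.exp(-r), np.exp(-2 * r), 1 / r**2], axis=1)

# "Reference" forces: a known linear combination plus small noise, standing
# in for forces from DFT trajectories.
c_true = np.array([40.0, -15.0, 2.0])
f_ref = basis @ c_true + rng.normal(0.0, 0.01, r.size)

# Force matching = ordinary least squares on the force residuals.
c_fit, *_ = np.linalg.lstsq(basis, f_ref, rcond=None)
rms = np.sqrt(np.mean((basis @ c_fit - f_ref) ** 2))
```

In the real workflow the "reference" column comes from short DFT runs, and the fitted model is then cheap enough to run the multi-nanosecond trajectories the convergence analysis calls for.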

  11. Generating Converged Accurate Free Energy Surfaces for Chemical Reactions with a Force-Matched Semiempirical Model

    DOE PAGES

    Kroonblawd, Matthew P.; Pietrucci, Fabio; Saitta, Antonino Marco; ...

    2018-03-15

Here, we demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol⁻¹.

  12. Generating Converged Accurate Free Energy Surfaces for Chemical Reactions with a Force-Matched Semiempirical Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroonblawd, Matthew P.; Pietrucci, Fabio; Saitta, Antonino Marco

Here, we demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol⁻¹.

  13. Matching-centrality decomposition and the forecasting of new links in networks.

    PubMed

    Rohr, Rudolf P; Naisbit, Russell E; Mazza, Christian; Bersier, Louis-Félix

    2016-02-10

Networks play a prominent role in the study of complex systems of interacting entities in biology, sociology, and economics. Despite this diversity, we demonstrate here that a statistical model decomposing networks into matching and centrality components provides a comprehensive and unifying quantification of their architecture. The matching term quantifies the assortative structure (which node makes links with which other node), whereas the centrality term quantifies the number of links that nodes make. We show, for a diverse set of networks, that this decomposition can provide a tight fit to observed networks. Then we provide three applications. First, we show that the model allows very accurate prediction of missing links in partially known networks. Second, when node characteristics are known, we show how the matching-centrality decomposition can be related to this external information. Consequently, it offers a simple and versatile tool to explore how node characteristics explain network architecture. Finally, we demonstrate the efficiency and flexibility of the model to forecast the links that a novel node would create if it were to join an existing network. © 2016 The Author(s).
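In a matching-centrality decomposition, the log-odds of a link combine an additive centrality term for each node with a matching term that rewards similarity of latent traits. The functional form and all numbers below are a simplified, hypothetical instance (the published model estimates latent traits and terms from data).

```python
import math

def link_probability(c_i, c_j, m_i, m_j, delta=2.0, intercept=-1.0):
    """Sketch of a matching-centrality link model: log-odds = intercept
    + centrality terms (c_i + c_j) minus a matching penalty that grows
    with the distance between latent traits m_i and m_j.  One of several
    possible parameterisations; values here are illustrative."""
    logit = intercept + c_i + c_j - delta * abs(m_i - m_j)
    return 1 / (1 + math.exp(-logit))

# A central node with a well-matched partner versus a peripheral,
# badly matched pair:
p_hub_matched = link_probability(c_i=1.5, c_j=0.5, m_i=0.2, m_j=0.3)
p_peripheral = link_probability(c_i=-1.0, c_j=-0.5, m_i=0.1, m_j=0.9)
```

Forecasting the links of a new node then amounts to estimating its centrality and latent trait and evaluating this probability against every existing node.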

  14. Complex networks untangle competitive advantage in Australian football

    NASA Astrophysics Data System (ADS)

    Braham, Calum; Small, Michael

    2018-05-01

We construct player-based complex network models of Australian football teams for the 2014 Australian Football League season, modelling the passes between players as weighted, directed edges. We show that analysis of network measures can give insight into the underlying structure and strategy of Australian football teams, quantitatively distinguishing different playing styles. The relationships observed between network properties and match outcomes suggest that successful teams exhibit well-connected passing networks with the passes distributed between all 22 players as evenly as possible. Linear regression models of team scores and match margins show significant improvements in R2 and Bayesian information criterion when network measures are added to models that use conventional measures, demonstrating that network analysis measures contain useful extra information. Several measures, particularly the mean betweenness centrality, are shown to be useful in predicting the outcomes of future matches, suggesting they measure some aspect of the intrinsic strength of teams. In addition, several local centrality measures are shown to be useful in analysing individual players' differing contributions to the team's structure.
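One simple proxy for "passes distributed as evenly as possible" is the normalised Shannon entropy of the passing network's edge weights. The study uses a suite of standard network measures (betweenness centrality among them); this particular metric and the toy teams below are our own illustration, not the paper's.

```python
import math

def pass_evenness(edges):
    """Normalised Shannon entropy of pass counts over the edges of a
    (passer, receiver, count) passing network; 1.0 = perfectly even."""
    total = sum(w for _, _, w in edges)
    probs = [w / total for _, _, w in edges]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(probs))

# Toy passing networks: one team spreads passes evenly, the other funnels
# almost everything through a single link.
even_team = [("A", "B", 10), ("B", "C", 10), ("C", "D", 10), ("D", "A", 10)]
star_team = [("A", "B", 37), ("B", "A", 1), ("C", "D", 1), ("D", "C", 1)]
```

Under this metric the even team scores 1.0 and the funnelled team scores far lower, matching the paper's qualitative association between even pass distribution and success.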

  15. Complex networks untangle competitive advantage in Australian football.

    PubMed

    Braham, Calum; Small, Michael

    2018-05-01

We construct player-based complex network models of Australian football teams for the 2014 Australian Football League season, modelling the passes between players as weighted, directed edges. We show that analysis of network measures can give insight into the underlying structure and strategy of Australian football teams, quantitatively distinguishing different playing styles. The relationships observed between network properties and match outcomes suggest that successful teams exhibit well-connected passing networks with the passes distributed between all 22 players as evenly as possible. Linear regression models of team scores and match margins show significant improvements in R2 and Bayesian information criterion when network measures are added to models that use conventional measures, demonstrating that network analysis measures contain useful extra information. Several measures, particularly the mean betweenness centrality, are shown to be useful in predicting the outcomes of future matches, suggesting they measure some aspect of the intrinsic strength of teams. In addition, several local centrality measures are shown to be useful in analysing individual players' differing contributions to the team's structure.

  16. Validation of a modified Medical Resource Model for mass gatherings.

    PubMed

    Smith, Wayne P; Tuffin, Heather; Stratton, Samuel J; Wallis, Lee A

    2013-02-01

    A modified Medical Resource Model to predict the medical resources required at mass gatherings based on the risk profile of events has been developed. This study was undertaken to validate this tool using data from events held in both a developed and a developing country. A retrospective study was conducted utilizing prospectively gathered data from individual events at Old Trafford Stadium in Manchester, United Kingdom, and Ellis Park Stadium, Johannesburg, South Africa. Both stadia are similar in design and spectator capacity. Data for Professional Football as well as Rugby League and Rugby Union (respectively) matches were used for the study. The medical resources predicted for the events were determined by entering the risk profile of each of the events into the Medical Resource Model. A recently developed South African tool was used to predetermine medical staffing for mass gatherings. For the study, the medical resources actually required to deal with the patient load for events within the control sample from the two stadia were compared with the number of needed resources predicted by the Medical Resource Model when that tool was applied retrospectively to the study events. The comparison was used to determine if the newly developed tool was either over- or under-predicting the resource requirements. In the case of Ellis Park, the model under-predicted the basic life support (BLS) requirement for 1.5% of the events in the data set. Mean over-prediction was 209.1 minutes for BLS availability. Old Trafford displayed no events for which the Medical Resource Model would have under-predicted. The mean over-prediction of BLS availability for Old Trafford was 671.6 minutes. The intermediate life support (ILS) requirement for Ellis Park was under-predicted for seven of the total 66 events (10.6% of the events), all of which had one factor in common, that being relatively low spectator attendance numbers. 
Modelling for ILS at Old Trafford did not under-predict for any events. The ILS requirements showed a mean over-prediction of 161.4 minutes ILS availability for Ellis Park compared with 425.2 minutes for Old Trafford. Of the events held at Ellis Park, the Medical Resource Model under-predicted the ambulance requirement in 4.5% of the events. For Old Trafford events, the under-prediction was higher: 7.5% of cases. The medical resources that are deployed at a mass gathering should best match the requirement for patient care at a particular event. An important consideration for any model is that it does not continually under-predict the resources required in relation to the actual requirement. With the exception of a specific subset of events at Ellis Park, the rate of under-prediction for this model was acceptable.

  17. Statistical Properties of Differences between Low and High Resolution CMAQ Runs with Matched Initial and Boundary Conditions

    EPA Science Inventory

    The difficulty in assessing errors in numerical models of air quality is a major obstacle to improving their ability to predict and retrospectively map air quality. In this paper, using simulation outputs from the Community Multi-scale Air Quality Model (CMAQ), the statistic...

  18. Are Divorce Studies Trustworthy? The Effects of Survey Nonresponse and Response Errors

    ERIC Educational Resources Information Center

    Mitchell, Colter

    2010-01-01

    Researchers rely on relationship data to measure the multifaceted nature of families. This article speaks to relationship data quality by examining the ramifications of different types of error on divorce estimates, models predicting divorce behavior, and models employing divorce as a predictor. Comparing matched survey and divorce certificate…

  19. FIEFDom: A Transparent Domain Boundary Recognition System using a Fuzzy Mean Operator

    DTIC Science & Technology

    2008-12-04

to search for matching fragments by running the PSI-BLAST program a second time. During this step, the expectation value threshold (e-value) is set at...statistical significance (or low e-value), and therefore have low scores. Finally, the domain boundaries (if any) are predicted using the scored...neighbor (match) is weighted by its e-value, the relative contribution of each neighbor is apparent. This is contrary to black-box models in which the

  20. Model Design for Military Advisors

    DTIC Science & Technology

    2013-05-02

needs of their counterpart. This paper explores one area that would significantly improve advising outcomes; using advising models to match the...more specific. This paper develops three dominant models for advisors; the Stoic Acquaintance, the General Manager, and the Entertainer which can...then outcomes related to the individual counterpart’s developmental needs will be more predictable and specific. This paper will focus only on

  1. The stay/switch model describes choice among magnitudes of reinforcers.

    PubMed

    MacDonall, James S

    2008-06-01

    The stay/switch model is an alternative to the generalized matching law for describing choice in concurrent procedures. The purpose of the present experiment was to extend this model to choice among magnitudes of reinforcers. Rats were exposed to conditions in which the magnitude of reinforcers (number of food pellets) varied for staying at alternative 1, switching from alternative 1, staying at alternative 2 and switching from alternative 2. A changeover delay was not used. The results showed that the stay/switch model provided a good account of the data overall, and deviations from fits of the generalized matching law to response allocation data were in the direction predicted by the stay/switch model. In addition, comparisons among specific conditions suggested that varying the ratio of obtained reinforcers, as in the generalized matching law, was not necessary to change the response and time allocations. Other comparisons suggested that varying the ratio of obtained reinforcers was not sufficient to change response allocation. Taken together these results provide additional support for the stay/switch model of concurrent choice.
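The generalized matching law against which the stay/switch model is compared has the standard form log(B1/B2) = a·log(R1/R2) + log b, where a is sensitivity and b is bias. The fit below, on noise-free synthetic behaviour ratios (all values invented), recovers the generating parameters by simple linear regression in log-log space.

```python
import math

def fit_matching_law(ratios_r, ratios_b):
    """Least-squares fit of log(B1/B2) = a*log(R1/R2) + log(b);
    returns (sensitivity a, bias b)."""
    xs = [math.log(r) for r in ratios_r]   # log reinforcer ratios
    ys = [math.log(b) for b in ratios_b]   # log behaviour ratios
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    log_b = my - a * mx
    return a, math.exp(log_b)

# Synthetic data generated from known parameters (a = 0.8: undermatching;
# b = 1.2: bias toward alternative 1), then refit.
a_true, b_true = 0.8, 1.2
r = [0.25, 0.5, 1.0, 2.0, 4.0]
b = [b_true * ri**a_true for ri in r]
a_hat, b_hat = fit_matching_law(r, b)
```

The stay/switch model replaces these two alternative-level ratios with four stay/switch response classes, which is why it can describe data patterns the two-parameter matching law cannot.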

  2. Whole-lake invasive crayfish removal and qualitative modeling reveal habitat-specific food web topology

    DOE PAGES

    Hansen, Gretchen J. A.; Tunney, Tyler D.; Winslow, Luke A.; ...

    2017-02-10

Patterning of the presence/absence of food web linkages (hereafter topology) is a fundamental characteristic of ecosystems that can influence species responses to perturbations. However, the insight from food web topology into dynamic effects of perturbations on species is potentially hindered because most described topologies represent data integrated across spatial and temporal scales. We conducted a 10-year, whole-lake experiment in which we removed invasive rusty crayfish (Orconectes rusticus) from a 64-ha north-temperate lake and monitored responses of multiple trophic levels. We compared species responses observed in two sub-habitats to the responses predicted from all topologies of an integrated, literature-informed base food web model of 32 potential links. Out of 4.3 billion possible topologies, only 308,833 (0.0072%) predicted responses that qualitatively matched observed species responses in cobble habitat, and only 12,673 (0.0003%) matched observed responses in sand habitat. Furthermore, when constrained to predictions that both matched observed responses and were highly reliable (i.e., predictions were robust to link strength values), only 5040 (0.0001%) and 140 (0.000003%) topologies were identified for cobble and sand habitats, respectively. A small number of linkages were nearly always present in these valid, reliable networks in sand, while a greater variety of possible network configurations were possible in cobble. Direct links involving invasive rusty crayfish were more important in cobble, while indirect effects involving Lepomis spp. were more important in sand. Importantly, the importance of individual species linkages differed dramatically among cobble and sand sub-habitats within a single lake, even though species composition was identical.
Furthermore the true topology of food webs is difficult to determine, constraining topologies to include spatial resolution that matches observed experimental outcomes may reduce possibilities to a small number of plausible alternatives.« less
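The 4.3 billion figure corresponds to the 2^32 on/off settings of the 32 candidate links. A minimal sketch of this exhaustive filtering idea, using a hypothetical 4-link web and a toy response rule (the sign-prediction function here is purely illustrative, not the authors' qualitative model):

```python
from itertools import product

N_LINKS = 4  # toy web; the study's 32 links give 2**32 ~ 4.3 billion topologies

def predicted_signs(topology):
    # Toy stand-in for a qualitative (loop-analysis) prediction:
    # each of two species responds +1 if it receives an odd number
    # of active links, else -1. Purely illustrative.
    return tuple(1 if sum(topology[i::2]) % 2 else -1 for i in range(2))

observed = (1, -1)  # hypothetical observed responses for two species

# Enumerate every presence/absence pattern and keep the sign-consistent ones.
matching = [t for t in product((0, 1), repeat=N_LINKS)
            if predicted_signs(t) == observed]
print(len(matching), "of", 2 ** N_LINKS, "topologies match")
```

The same structure scales to the paper's setting, where the candidate set shrinks from 2^32 = 4,294,967,296 patterns to the small families of valid, reliable networks reported for each habitat.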

  3. Whole-lake invasive crayfish removal and qualitative modeling reveal habitat-specific food web topology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Gretchen J. A.; Tunney, Tyler D.; Winslow, Luke A.

Patterning of the presence/absence of food web linkages (hereafter topology) is a fundamental characteristic of ecosystems that can influence species responses to perturbations. However, the insight from food web topology into dynamic effects of perturbations on species is potentially hindered because most described topologies represent data integrated across spatial and temporal scales. We conducted a 10-year, whole-lake experiment in which we removed invasive rusty crayfish (Orconectes rusticus) from a 64-ha north-temperate lake and monitored responses of multiple trophic levels. We compared species responses observed in two sub-habitats to the responses predicted from all topologies of an integrated, literature-informed base food web model of 32 potential links. Out of 4.3 billion possible topologies, only 308,833 (0.0072%) predicted responses that qualitatively matched observed species responses in cobble habitat, and only 12,673 (0.0003%) matched observed responses in sand habitat. Furthermore, when constrained to predictions that both matched observed responses and were highly reliable (i.e., predictions were robust to link strength values), only 5040 (0.0001%) and 140 (0.000003%) topologies were identified for cobble and sand habitats, respectively. A small number of linkages were nearly always present in these valid, reliable networks in sand, while a greater variety of network configurations were possible in cobble. Direct links involving invasive rusty crayfish were more important in cobble, while indirect effects involving Lepomis spp. were more important in sand. Notably, the importance of individual species linkages differed dramatically between cobble and sand sub-habitats within a single lake, even though species composition was identical.
Because the true topology of food webs is difficult to determine, constraining topologies to include spatial resolution that matches observed experimental outcomes may reduce possibilities to a small number of plausible alternatives.

  4. Analysis of Highly-Resolved Simulations of 2-D Humps Toward Improvement of Second-Moment Closures

    NASA Technical Reports Server (NTRS)

Jeyapaul, Elbert; Rumsey, Christopher

    2013-01-01

Fully resolved simulation data of flow separation over 2-D humps has been used to analyze the modeling terms in second-moment closures of the Reynolds-averaged Navier-Stokes equations. Existing models for the pressure-strain and dissipation terms have been analyzed using a priori calculations. All pressure-strain models are incorrect in the high-strain region near separation, although a better match is observed downstream, well into the separated-flow region. Near-wall inhomogeneity causes pressure-strain models to predict incorrect signs for the normal components close to the wall. In a posteriori computations, full Reynolds stress and explicit algebraic Reynolds stress models predict the separation point with varying degrees of success. However, as with one- and two-equation models, the separation bubble size is invariably over-predicted.

  5. Optimizing countershading camouflage.

    PubMed

    Cuthill, Innes C; Sanghera, N Simon; Penacchio, Olivier; Lovell, Paul George; Ruxton, Graeme D; Harris, Julie M

    2016-11-15

    Countershading, the widespread tendency of animals to be darker on the side that receives strongest illumination, has classically been explained as an adaptation for camouflage: obliterating cues to 3D shape and enhancing background matching. However, there have only been two quantitative tests of whether the patterns observed in different species match the optimal shading to obliterate 3D cues, and no tests of whether optimal countershading actually improves concealment or survival. We use a mathematical model of the light field to predict the optimal countershading for concealment that is specific to the light environment and then test this prediction with correspondingly patterned model "caterpillars" exposed to avian predation in the field. We show that the optimal countershading is strongly illumination-dependent. A relatively sharp transition in surface patterning from dark to light is only optimal under direct solar illumination; if there is diffuse illumination from cloudy skies or shade, the pattern provides no advantage over homogeneous background-matching coloration. Conversely, a smoother gradation between dark and light is optimal under cloudy skies or shade. The demonstration of these illumination-dependent effects of different countershading patterns on predation risk strongly supports the comparative evidence showing that the type of countershading varies with light environment.

  6. Reverse-translational biomarker validation of Abnormal Repetitive Behaviors in mice: an illustration of the 4P's modeling approach.

    PubMed

    Garner, Joseph P; Thogerson, Collette M; Dufour, Brett D; Würbel, Hanno; Murray, James D; Mench, Joy A

    2011-06-01

The NIMH's new strategic plan, with its emphasis on the "4P's" (Prediction, Pre-emption, Personalization, and Populations) and biomarker-based medicine, requires a radical shift in animal modeling methodology. In particular, 4P's models will be non-determinant (i.e., disease severity will depend on secondary environmental and genetic factors) and validated by reverse-translation of animal homologues to human biomarkers. A powerful consequence of the biomarker approach is that different closely related disorders have a unique fingerprint of biomarkers. Animals can be validated as a highly specific model of a single disorder by matching this 'fingerprint', or as a model of a symptom seen in multiple disorders by matching common biomarkers. Here we illustrate this approach with two Abnormal Repetitive Behaviors (ARBs) in mice: stereotypies and barbering (hair pulling). We developed animal versions of the neuropsychological biomarkers that distinguish human ARBs, and tested the fingerprint of the different mouse ARBs. As predicted, the two mouse ARBs were associated with different biomarkers. Both barbering and stereotypy could be discounted as models of OCD (even though they are widely used as such), due to the absence of the limbic biomarkers which are characteristic of OCD and hence are necessary for a valid model. Conversely, barbering matched the fingerprint of trichotillomania (i.e., selective deficits in set-shifting), suggesting it may be a highly specific model of this disorder. In contrast, stereotypies were correlated only with a biomarker (deficits in response shifting) that correlates with stereotypies in multiple disorders, suggesting that animal stereotypies model stereotypies in multiple disorders. Copyright © 2011 Elsevier B.V. All rights reserved.

  7. Modelling blast induced damage from a fully coupled explosive charge

    PubMed Central

    Onederra, Italo A.; Furtney, Jason K.; Sellers, Ewan; Iverson, Stephen

    2015-01-01

    This paper presents one of the latest developments in the blasting engineering modelling field—the Hybrid Stress Blasting Model (HSBM). HSBM includes a rock breakage engine to model detonation, wave propagation, rock fragmentation, and muck pile formation. Results from two controlled blasting experiments were used to evaluate the code’s ability to predict the extent of damage. Results indicate that the code is capable of adequately predicting both the extent and shape of the damage zone associated with the influence of point-of-initiation and free-face boundary conditions. Radial fractures extending towards a free face are apparent in the modelling output and matched those mapped after the experiment. In the stage 2 validation experiment, the maximum extent of visible damage was of the order of 1.45 m for the fully coupled 38-mm emulsion charge. Peak radial velocities were predicted within a relative difference of only 1.59% at the nearest history point at 0.3 m from the explosive charge. Discrepancies were larger further away from the charge, with relative differences of −22.4% and −42.9% at distances of 0.46 m and 0.61 m, respectively, meaning that the model overestimated particle velocities at these distances. This attenuation deficiency in the modelling produced an overestimation of the damage zone at the corner of the block due to excessive stress reflections. The extent of visible damage in the immediate vicinity of the blasthole adequately matched the measurements. PMID:26412978

  8. Toward seamless hydrologic predictions across spatial scales

    NASA Astrophysics Data System (ADS)

    Samaniego, Luis; Kumar, Rohini; Thober, Stephan; Rakovec, Oldrich; Zink, Matthias; Wanders, Niko; Eisner, Stephanie; Müller Schmied, Hannes; Sutanudjaja, Edwin H.; Warrach-Sagi, Kirsten; Attinger, Sabine

    2017-09-01

    Land surface and hydrologic models (LSMs/HMs) are used at diverse spatial resolutions ranging from catchment-scale (1-10 km) to global-scale (over 50 km) applications. Applying the same model structure at different spatial scales requires that the model estimates similar fluxes independent of the chosen resolution, i.e., fulfills a flux-matching condition across scales. An analysis of state-of-the-art LSMs and HMs reveals that most do not have consistent hydrologic parameter fields. Multiple experiments with the mHM, Noah-MP, PCR-GLOBWB, and WaterGAP models demonstrate the pitfalls of deficient parameterization practices currently used in most operational models, which are insufficient to satisfy the flux-matching condition. These examples demonstrate that J. Dooge's 1982 statement on the unsolved problem of parameterization in these models remains true. Based on a review of existing parameter regionalization techniques, we postulate that the multiscale parameter regionalization (MPR) technique offers a practical and robust method that provides consistent (seamless) parameter and flux fields across scales. Herein, we develop a general model protocol to describe how MPR can be applied to a particular model and present an example application using the PCR-GLOBWB model. Finally, we discuss potential advantages and limitations of MPR in obtaining the seamless prediction of hydrological fluxes and states across spatial scales.

  9. The galaxy clustering crisis in abundance matching

    NASA Astrophysics Data System (ADS)

    Campbell, Duncan; van den Bosch, Frank C.; Padmanabhan, Nikhil; Mao, Yao-Yuan; Zentner, Andrew R.; Lange, Johannes U.; Jiang, Fangzhou; Villarreal, Antonio

    2018-06-01

Galaxy clustering on small scales is significantly underpredicted by sub-halo abundance matching (SHAM) models that populate (sub-)haloes with galaxies based on peak halo mass, Mpeak. SHAM models based on the peak maximum circular velocity, Vpeak, have had much better success. The primary reason Mpeak-based models fail is the relatively low abundance of satellite galaxies produced in these models compared to those based on Vpeak. Despite its success in predicting clustering, a simple Vpeak-based SHAM model results in predictions for galaxy growth that are at odds with observations. We evaluate three possible remedies that could `save' mass-based SHAM: (1) SHAM models require a significant population of `orphan' galaxies as a result of artificial disruption/merging of sub-haloes in modern high-resolution dark matter simulations; (2) satellites must grow significantly after their accretion; and (3) stellar mass is significantly affected by halo assembly history. No solution is entirely satisfactory. However, regardless of the particulars, we show that popular SHAM models based on Mpeak cannot be complete physical models as presented. Either Vpeak truly is a better predictor of stellar mass at z ~ 0 and it remains to be seen how the correlation between stellar mass and Vpeak comes about, or SHAM models are missing vital component(s) that significantly affect galaxy clustering.
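At its core, zero-scatter abundance matching is a rank-order assignment: sort (sub)haloes by a property (Mpeak or Vpeak) and galaxies by stellar mass, then pair them rank by rank. A minimal sketch with synthetic numbers (the distributions and sizes are illustrative, not the paper's catalogues):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
vpeak = rng.lognormal(mean=5.0, sigma=0.5, size=n)            # halo property, toy units
mstar = np.sort(rng.lognormal(mean=10.0, sigma=0.8, size=n))  # galaxy stellar masses

# Rank-order (zero-scatter) abundance matching:
# the i-th most massive galaxy is placed in the i-th highest-Vpeak halo.
order = np.argsort(vpeak)    # halo indices in ascending Vpeak order
assigned = np.empty(n)
assigned[order] = mstar      # ascending mstar paired with ascending vpeak

# Monotonicity check: a higher-Vpeak halo never hosts a lower stellar mass.
print(bool(np.all(np.diff(assigned[np.argsort(vpeak)]) >= 0)))
```

Swapping `vpeak` for an Mpeak array is the one-line change that distinguishes the two model families discussed in the abstract; their differing satellite abundances then drive the clustering discrepancy.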

  10. Comparison of two weighted integration models for the cueing task: linear and likelihood

    NASA Technical Reports Server (NTRS)

    Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.

    2003-01-01

In a task in which the observer must detect a signal at two locations, presenting a precue that predicts the location of a signal leads to improved performance with a valid cue (signal location matches the cue), compared to an invalid cue (signal location does not match the cue). The cue validity effect has often been explained with a limited capacity attentional mechanism improving the perceptual quality at the cued location. Alternatively, the cueing effect can also be explained by unlimited capacity models that assume a weighted combination of noisy responses across the two locations. We compare two weighted integration models, a linear model and a sum of weighted likelihoods model based on a Bayesian observer. While qualitatively these models are similar, quantitatively they predict different cue validity effects as the signal-to-noise ratios (SNR) increase. To test these models, 3 observers performed a cued discrimination task of Gaussian targets with an 80% valid precue across a broad range of SNRs. Analysis of a limited capacity attentional switching model was also included and rejected. The sum of weighted likelihoods model best described the psychophysical results, suggesting that human observers approximate a weighted combination of likelihoods, and not a weighted linear combination.
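The cue validity effect falls out of an unlimited-capacity Bayesian observer with no perceptual enhancement at the cued location: the cue only reweights the likelihoods. A toy simulation of such an observer in a two-location task (signal strength, trial counts, and the localization framing are assumptions for illustration, not the paper's experiment):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 1.5                       # signal strength (d'), toy value
prior = np.array([0.8, 0.2])  # 80% valid precue favoring location 0
n_trials = 20000

# Signal location matches the cue (location 0) on 80% of trials.
loc = rng.choice(2, size=n_trials, p=prior)
x = rng.normal(0.0, 1.0, size=(n_trials, 2))
x[np.arange(n_trials), loc] += d          # add signal at the true location

# Bayesian observer: weight each location's likelihood by the cue prior.
log_post = np.log(prior) + (x * d - d**2 / 2)   # log prior + log likelihood ratio
resp = np.argmax(log_post, axis=1)

valid = loc == 0
pc_valid = np.mean(resp[valid] == loc[valid])
pc_invalid = np.mean(resp[~valid] == loc[~valid])
print(f"valid: {pc_valid:.3f}  invalid: {pc_invalid:.3f}")
```

Accuracy is markedly higher on valid than invalid trials even though both locations are processed with identical fidelity, which is the key point of the weighted-integration account.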

  11. Experimental and modelling of Arthrospira platensis cultivation in open raceway ponds.

    PubMed

    Ranganathan, Panneerselvam; Amal, J C; Savithri, S; Haridas, Ajith

    2017-10-01

In this study, the growth of Arthrospira platensis was studied in an open raceway pond. Furthermore, a dynamic model for algae growth and a CFD model of the hydrodynamics in an open raceway pond were developed. The dynamic behaviour of the algal system was modelled by solving mass balance equations for the various components, considering light intensity and gas-liquid mass transfer. The hydrodynamics of the open raceway pond were modelled with CFD by solving mass and momentum balance equations for the liquid medium. The algae concentration predicted by the dynamic model was compared with the experimental data, and the hydrodynamic behaviour of the open raceway pond was compared with literature data for model validation. The model predictions match the experimental findings. Furthermore, the hydrodynamic behaviour and residence time distribution in our small raceway pond were predicted. These models can serve as a tool to assess pond performance criteria. Copyright © 2017 Elsevier Ltd. All rights reserved.
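A common minimal form for such a light-dependent growth mass balance couples Monod-type kinetics to Beer-Lambert light attenuation through the culture. The sketch below is a generic caricature of that structure with assumed constants, not the paper's calibrated equations:

```python
import numpy as np

# Toy light-limited growth model (illustrative constants, not fitted values):
# mu = mu_max * I_avg / (K_I + I_avg), with Beer-Lambert attenuation.
mu_max, K_I = 1.2, 100.0         # 1/day; half-saturation irradiance (assumed units)
I0, k, depth = 800.0, 0.2, 0.25  # surface light, attenuation coeff, pond depth (m)

def average_light(X):
    # Depth-averaged irradiance for biomass concentration X (g/m3).
    return I0 * (1 - np.exp(-k * X * depth)) / (k * X * depth)

X, dt = 0.05, 0.01               # initial biomass (g/m3), time step (days)
for _ in range(int(10 / dt)):    # simulate 10 days with explicit Euler steps
    mu = mu_max * average_light(X) / (K_I + average_light(X))
    X += mu * X * dt
print(f"biomass after 10 days: {X:.1f} g/m3")
```

As biomass accumulates, self-shading lowers the depth-averaged irradiance and the growth rate falls, which is the qualitative behaviour a raceway-pond mass balance has to capture.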

  12. Modeling of Sustainable Base Production by Microbial Electrolysis Cell.

    PubMed

    Blatter, Maxime; Sugnaux, Marc; Comninellis, Christos; Nealson, Kenneth; Fischer, Fabian

    2016-07-07

A predictive model for the microbial/electrochemical base formation from wastewater was established and compared to experimental conditions within a microbial electrolysis cell. A Na2SO4/K2SO4 anolyte showed that model prediction matched experimental results. Using Shewanella oneidensis MR-1, a strong base (pH ≈ 13) was generated using applied voltages between 0.3 and 1.1 V. Due to the use of bicarbonate, the pH value in the anolyte remained unchanged, which is required to maintain microbial activity. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Predicting Galaxy Star Formation Rates via the Co-evolution of Galaxies and Halos

    DOE PAGES

    Watson, Douglas F.; Hearin, Andrew P.; Berlind, Andreas A.; ...

    2014-03-06

In this paper, we test the age matching hypothesis that the star formation rate (SFR) of a galaxy is determined by its dark matter halo formation history, and as such, that more quiescent galaxies reside in older halos. This simple model has been remarkably successful at predicting color-based galaxy statistics at low redshift as measured in the Sloan Digital Sky Survey (SDSS). To further test this method with observations, we present new SDSS measurements of the galaxy two-point correlation function and galaxy-galaxy lensing as a function of stellar mass and SFR, separated into quenched and star forming galaxy samples. We find that our age matching model is in excellent agreement with these new measurements. We also employ a galaxy group finder and show that our model is able to predict: (1) the relative SFRs of central and satellite galaxies, (2) the SFR-dependence of the radial distribution of satellite galaxy populations within galaxy groups, rich groups, and clusters and their surrounding larger scale environments, and (3) the interesting feature that the satellite quenched fraction as a function of projected radial distance from the central galaxy exhibits an approximately r^-0.15 slope, independent of environment. The accurate prediction for the spatial distribution of satellites is intriguing given the fact that we do not explicitly model satellite-specific processes after infall, and that in our model the virial radius does not mark a special transition region in the evolution of a satellite, contrary to most galaxy evolution models. The success of the model suggests that present-day galaxy SFR is strongly correlated with halo mass assembly history.

  14. Modeling to predict pilot performance during CDTI-based in-trail following experiments

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Goka, T.

    1984-01-01

    A mathematical model was developed of the flight system with the pilot using a cockpit display of traffic information (CDTI) to establish and maintain in-trail spacing behind a lead aircraft during approach. Both in-trail and vertical dynamics were included. The nominal spacing was based on one of three criteria (Constant Time Predictor; Constant Time Delay; or Acceleration Cue). This model was used to simulate digitally the dynamics of a string of multiple following aircraft, including response to initial position errors. The simulation was used to predict the outcome of a series of in-trail following experiments, including pilot performance in maintaining correct longitudinal spacing and vertical position. The experiments were run in the NASA Ames Research Center multi-cab cockpit simulator facility. The experimental results were then used to evaluate the model and its prediction accuracy. Model parameters were adjusted, so that modeled performance matched experimental results. Lessons learned in this modeling and prediction study are summarized.

  15. From points to forecasts: Predicting invasive species habitat suitability in the near term

    USGS Publications Warehouse

    Holcombe, Tracy R.; Stohlgren, Thomas J.; Jarnevich, Catherine S.

    2010-01-01

We used near-term climate scenarios for the continental United States to model 12 invasive plant species. We created three potential habitat suitability models for each species using maximum entropy modeling: (1) current; (2) 2020; and (3) 2035. Area under the curve values for the models ranged from 0.70 to 0.92, with 10 of the 12 being above 0.83, suggesting strong and predictable species-environment matching. The change in area between the current potential habitat and 2035 ranged from a potential habitat loss of about 217,000 km2 to a potential habitat gain of about 133,000 km2.
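The AUC values quoted have a simple probabilistic reading: the chance that a randomly chosen presence point is scored above a randomly chosen background point. That quantity can be computed directly from the rank-sum (Mann-Whitney U) identity; the scores below are made-up toy numbers, not the study's data:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney identity:
    P(score_pos > score_neg) + 0.5 * P(tie)."""
    pos, neg = np.asarray(scores_pos), np.asarray(scores_neg)
    # All pairwise comparisons (fine for small toy arrays).
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Toy habitat-suitability scores at presence vs. background points:
presence = [0.9, 0.8, 0.75, 0.6]
background = [0.7, 0.4, 0.3, 0.2]
print(round(auc(presence, background), 4))  # → 0.9375
```

An AUC of 0.5 means the model ranks presences no better than chance, while the study's values of 0.83 and above indicate strong discrimination.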

  16. Simulation and analysis of a model dinoflagellate predator-prey system

    NASA Astrophysics Data System (ADS)

    Mazzoleni, M. J.; Antonelli, T.; Coyne, K. J.; Rossi, L. F.

    2015-12-01

    This paper analyzes the dynamics of a model dinoflagellate predator-prey system and uses simulations to validate theoretical and experimental studies. A simple model for predator-prey interactions is derived by drawing upon analogies from chemical kinetics. This model is then modified to account for inefficiencies in predation. Simulation results are shown to closely match the model predictions. Additional simulations are then run which are based on experimental observations of predatory dinoflagellate behavior, and this study specifically investigates how the predatory dinoflagellate Karlodinium veneficum uses toxins to immobilize its prey and increase its feeding rate. These simulations account for complex dynamics that were not included in the basic models, and the results from these computational simulations closely match the experimentally observed predatory behavior of K. veneficum and reinforce the notion that predatory dinoflagellates utilize toxins to increase their feeding rate.
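Drawing on the chemical-kinetics analogy the abstract describes, predator-prey encounters can be written as mass-action rate terms. A hedged sketch of that basic structure with generic Lotka-Volterra-type rates (the constants are illustrative, not fitted to K. veneficum data):

```python
# Mass-action ("chemical kinetics") predator-prey sketch, explicit Euler.
# Rate constants are illustrative placeholders, not calibrated values.
a, b, c, d = 1.0, 0.02, 0.01, 0.5  # prey growth, predation, conversion, predator death

prey, pred = 40.0, 9.0             # initial densities (arbitrary units)
dt, steps = 0.001, 20000           # integrate for 20 time units
for _ in range(steps):
    dprey = a * prey - b * prey * pred        # growth minus encounter losses
    dpred = c * prey * pred - d * pred        # encounter gains minus mortality
    prey += dprey * dt
    pred += dpred * dt
print(f"prey={prey:.1f}, predator={pred:.1f}")
```

The encounter terms `b*prey*pred` and `c*prey*pred` are the direct analogue of a bimolecular reaction rate; modifications for inefficient predation or toxin-mediated immobilization, as studied in the paper, amount to altering these rate expressions.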

  17. An analytical model for light backscattering by coccoliths and coccospheres of Emiliania huxleyi.

    PubMed

    Fournier, Georges; Neukermans, Griet

    2017-06-26

    We present an analytical model for light backscattering by coccoliths and coccolithophores of the marine calcifying phytoplankter Emiliania huxleyi. The model is based on the separation of the effects of diffraction, refraction, and reflection on scattering, a valid assumption for particle sizes typical of coccoliths and coccolithophores. Our model results match closely with results from an exact scattering code that uses complex particle geometry and our model also mimics well abrupt transitions in scattering magnitude. Finally, we apply our model to predict changes in the spectral backscattering coefficient during an Emiliania huxleyi bloom with results that closely match in situ measurements. Because our model captures the key features that control the light backscattering process, it can be generalized to coccoliths and coccolithophores of different morphologies which can be obtained from size-calibrated electron microphotographs. Matlab codes of this model are provided as supplementary material.

  18. Coincidental match of numerical simulation and physics

    NASA Astrophysics Data System (ADS)

    Pierre, B.; Gudmundsson, J. S.

    2010-08-01

Consequences of rapid pressure transients in pipelines range from increased fatigue to leakages and complete ruptures of the pipeline. Therefore, accurate prediction of rapid pressure transients in pipelines using numerical simulations is critical. State-of-the-art modelling of pressure transients in general, and water hammer in particular, includes unsteady friction in addition to the steady frictional pressure drop, and numerical simulations rely on the method of characteristics. Comparison of rapid pressure transient calculations by the method of characteristics and a selected high-resolution finite volume method highlights issues related to the modelling of pressure waves and illustrates that matches between numerical simulations and physics are purely coincidental.
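The method of characteristics referred to here integrates the water hammer equations along C+ and C- characteristic lines. A minimal frictionless sketch for instant valve closure (pipe parameters are assumed for illustration; the surge should reproduce the Joukowsky relation dP = rho*a*dV):

```python
# Minimal frictionless method-of-characteristics water hammer sketch.
# Instant valve closure at the downstream end; fixed-head reservoir upstream.
rho, a = 1000.0, 1200.0   # water density (kg/m3), pressure wave speed (m/s)
L, N = 1200.0, 24         # pipe length (m), number of reaches
dx = L / N
dt = dx / a               # Courant number = 1 (exact for frictionless MOC)

V0, P0 = 2.0, 2.0e5       # initial uniform velocity (m/s) and pressure (Pa)
P = [P0] * (N + 1)
V = [V0] * (N + 1)

for _ in range(N):        # march until the surge front reaches the reservoir
    Pn, Vn = P[:], V[:]
    for i in range(1, N):  # interior nodes: combine C+ and C- characteristics
        Pn[i] = 0.5 * (P[i-1] + P[i+1] + rho * a * (V[i-1] - V[i+1]))
        Vn[i] = 0.5 * (V[i-1] + V[i+1] + (P[i-1] - P[i+1]) / (rho * a))
    Pn[0], Vn[0] = P0, V[1] + (P[1] - P0) / (rho * a)  # reservoir: fixed head, C-
    Vn[N] = 0.0                                        # closed valve
    Pn[N] = P[N-1] + rho * a * V[N-1]                  # C+ into the valve
    P, V = Pn, Vn

joukowsky = rho * a * V0
print(f"surge at valve: {P[N] - P0:.0f} Pa (Joukowsky: {joukowsky:.0f} Pa)")
```

With friction terms omitted and a unit Courant number, the scheme transports the surge exactly; adding steady and unsteady friction to the characteristic equations is where the modelling issues discussed in the abstract arise.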

  19. Computational model of in vivo human energy metabolism during semi-starvation and re-feeding

    PubMed Central

    Hall, Kevin D.

    2008-01-01

Changes of body weight and composition are the result of complex interactions among metabolic fluxes contributing to macronutrient balances. To better understand these interactions, a mathematical model was constructed that used the measured dietary macronutrient intake during semi-starvation and re-feeding as model inputs and computed whole-body energy expenditure, de novo lipogenesis, gluconeogenesis, as well as turnover and oxidation of carbohydrate, fat and protein. Published in vivo human data provided the basis for the model components, which were integrated by fitting a few unknown parameters to the classic Minnesota human starvation experiment. The model simulated the measured body weight and fat mass changes during semi-starvation and re-feeding and predicted the unmeasured metabolic fluxes underlying the body composition changes. The resting metabolic rate matched the experimental measurements and required a model of adaptive thermogenesis. Re-feeding caused an elevation of de novo lipogenesis which, along with increased fat intake, resulted in a rapid repletion and overshoot of body fat. By continuing the computer simulation with the pre-starvation diet and physical activity, the original body weight and composition were eventually restored, but body fat mass was predicted to take more than one additional year to return to within 5% of its original value. The model was validated by simulating a recently published short-term caloric restriction experiment without changing the model parameters. The predicted changes of body weight, fat mass, resting metabolic rate, and nitrogen balance matched the experimental measurements thereby providing support for the validity of the model. PMID:16449298
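The backbone of any such model is an energy balance: weight changes at a rate set by the gap between intake and expenditure, divided by the energy density of the tissue gained or lost. The one-compartment caricature below is not the paper's multi-flux model; every constant is an assumed round number for illustration:

```python
# Minimal energy-balance sketch of weight loss during semi-starvation.
# One-compartment caricature with assumed constants (not the paper's model).
rho = 32.2e3     # energy density of lost tissue, kJ/kg (assumed mixed fat/lean)
k = 190.0        # maintenance expenditure, kJ per kg body weight per day (assumed)
intake = 6600.0  # restricted intake, kJ/day (assumed semi-starvation level)

w = 70.0         # initial body weight, kg
for _ in range(168):                  # 24 weeks, daily Euler steps
    expenditure = k * w               # expenditure falls as weight is lost
    w += (intake - expenditure) / rho
print(f"weight after 24 weeks: {w:.1f} kg")
```

Because expenditure scales with body weight, weight loss decelerates over time; capturing the additional adaptive thermogenesis and macronutrient-specific fluxes described in the abstract requires the full model.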

  20. Galaxy Formation At Extreme Redshifts: Semi-Analytic Model Predictions And Challenges For Observations

    NASA Astrophysics Data System (ADS)

    Yung, L. Y. Aaron; Somerville, Rachel S.

    2017-06-01

The well-established Santa Cruz semi-analytic galaxy formation framework has been shown to be quite successful at explaining observations in the local Universe, as well as making predictions for low-redshift observations. Recently, metallicity-based gas partitioning and H2-based star formation recipes have been implemented in our model, replacing the legacy cold-gas based recipe. We then use our revised model to explore the high-redshift Universe and make predictions up to z = 15. Although our model is calibrated only to observations from the local Universe, our predictions match remarkably well with the mid- to high-redshift observational constraints available to date, including rest-frame UV luminosity functions and the reionization history as constrained by CMB and IGM observations. We provide predictions for individual and statistical galaxy properties at a wide range of redshifts (z = 4 - 15), including objects that are too far or too faint to be detected with current facilities. Using our model predictions, we also provide forecasted luminosity functions and other observables for upcoming studies with JWST.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinilla, Maria Isabel

    This report seeks to study and benchmark code predictions against experimental data; determine parameters to match MCNP-simulated detector response functions to experimental stilbene measurements; add stilbene processing capabilities to DRiFT; and improve NEUANCE detector array modeling and analysis using new MCNP6 and DRiFT features.

  2. Correlation of AH-1G airframe flight vibration data with a coupled rotor-fuselage analysis

    NASA Technical Reports Server (NTRS)

    Sangha, K.; Shamie, J.

    1990-01-01

The formulation and features of the Rotor-Airframe Comprehensive Analysis Program (RACAP) are described. The analysis employs a frequency domain, transfer matrix approach for the blade structural model, a time domain wake or momentum theory aerodynamic model, and impedance matching for rotor-fuselage coupling. The analysis is applied to the AH-1G helicopter, and a correlation study is conducted on fuselage vibration predictions. The purpose of the study is to evaluate the state of the art in helicopter fuselage vibration prediction technology. The fuselage vibrations predicted using RACAP are fairly good in the vertical direction and somewhat deficient in the lateral/longitudinal directions. Some of these deficiencies are traced to the fuselage finite element model.

  3. Improvement, Verification, and Refinement of Spatially-Explicit Exposure Models in Risk Assessment - FishRand Spatially-Explicit Bioaccumulation Model Demonstration

    DTIC Science & Technology

    2015-08-01

Figure 4 (fragment): data-based proportion of DDD, DDE, and DDT in total DDx in fish and sediment. Abbreviations: DDD, dichlorodiphenyldichloroethane; DDE, dichlorodiphenyldichloroethylene; DDT, dichlorodiphenyltrichloroethane; DoD, Department of Defense. The spatially-explicit model consistently predicts tissue concentrations that closely match both the average and the

  4. Modeling of detachment experiments at DIII-D

    DOE PAGES

    Canik, John M.; Briesemeister, Alexis R.; Lasnier, C. J.; ...

    2014-11-26

Edge fluid–plasma/kinetic–neutral modeling of well-diagnosed DIII-D experiments is performed in order to document in detail how well certain aspects of experimental measurements are reproduced within the model as the transition to detachment is approached. Results indicate that, at high densities near detachment onset, the poloidal temperature profile produced in the simulations agrees well with that measured in experiment. However, matching the heat flux in the model requires a significant increase in the radiated power compared to what is predicted using standard chemical sputtering rates. Lastly, these results suggest that the model is adequate to predict the divertor temperature, provided that the discrepancy in radiated power level can be resolved.

  5. Dynamics of Social Group Competition: Modeling the Decline of Religious Affiliation

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel M.; Yaple, Haley A.; Wiener, Richard J.

    2011-08-01

    When social groups compete for members, the resulting dynamics may be understandable with mathematical models. We demonstrate that a simple ordinary differential equation (ODE) model is a good fit for religious shift by comparing it to a new international data set tracking religious nonaffiliation. We then generalize the model to include the possibility of nontrivial social interaction networks and examine the limiting case of a continuous system. Analytical and numerical predictions of this generalized system, which is robust to polarizing perturbations, match those of the original ODE model and justify its agreement with real-world data. The resulting predictions highlight possible causes of social shift and suggest future lines of research in both physics and sociology.
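Two-group competition models of this kind typically balance a recruitment flow into each group against a loss flow, with rates that grow with the group's size and perceived utility. The functional form below is a generic assumption for illustration (it is not taken from the paper, and the parameters are not fitted values):

```python
# Generic two-group competition ODE sketch (functional form and parameters
# are assumptions for illustration, not the paper's fitted model):
#   dx/dt = (1 - x) * P(x, u) - x * P(1 - x, 1 - u),  with  P(x, u) = x**a * u
a, u = 1.5, 0.6     # a: conformity exponent, u: perceived utility of group x
x, dt = 0.4, 0.01   # initial fraction in group x, Euler time step

# For these parameters the unstable interior fixed point sits near x = 0.31,
# so a group starting above it grows toward dominance.
for _ in range(100000):   # integrate to t = 1000
    p_in = (1 - x) * x**a * u
    p_out = x * (1 - x)**a * (1 - u)
    x += (p_in - p_out) * dt
print(f"long-run fraction in group x: {x:.3f}")
```

The bistable structure (both x = 0 and x = 1 stable, separated by an unstable threshold) is what lets such models describe one group's membership collapsing while the other's grows.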

  6. The influence of spelling ability on handwriting production: children with and without dyslexia.

    PubMed

    Sumner, Emma; Connelly, Vincent; Barnett, Anna L

    2014-09-01

Current models of writing do not sufficiently address the complex relationship between the 2 transcription skills: spelling and handwriting. For children with dyslexia and beginning writers, it is conceivable that spelling ability will influence rate of handwriting production. Our aim in this study was to examine execution speed and temporal characteristics of handwriting when completing sentence-copying tasks that are free from composing demands and to determine the predictive value of spelling, pausing, and motor skill on handwriting production. Thirty-one children with dyslexia (mean age = 9 years, 4 months) were compared with age-matched and spelling-ability-matched children (mean age = 6 years, 6 months). A digital writing tablet and Eye and Pen software were used to analyze handwriting. Children with dyslexia were able to execute handwriting at the same speed as the age-matched peers. However, they wrote less overall and paused more frequently while writing, especially within words. Combined spelling ability and within-word pausing accounted for over 76% of the variance in handwriting production of children with dyslexia, demonstrating that productivity relies on spelling capabilities. Motor skill did not significantly predict any additional variance in handwriting production. Reading ability predicted performance of the age-matched group, and pausing predicted performance for the spelling-ability group. The findings from the digital writing tablet highlight the interactive relationship between the transcription skills and how, if spelling is not fully automatized, it can constrain the rate of handwriting production. Practical implications are also addressed, emphasizing the need for more consideration to be given to what common handwriting tasks are assessing as a whole.

  7. Supermodeling by Synchronization of Alternative SPEEDO Models

    NASA Astrophysics Data System (ADS)

    Duane, Gregory; Selten, Frank

    2016-04-01

    The supermodeling approach, wherein different imperfect models of the same objective process are dynamically combined at run-time to reduce systematic error, is tested using SPEEDO - a primitive equation atmospheric model coupled to the CLIO ocean model. Three versions of SPEEDO are defined by parameters that differ in a range that arguably mimics differences among state-of-the-art climate models. A fourth model is taken to represent truth. The "true" ocean drives all three model atmospheres. The three models are also connected to one another at every level, with spatially uniform nudging coefficients that are trained so that the three models, which synchronize with one another, also synchronize with truth when data is continuously assimilated, as in weather prediction. The SPEEDO supermodel is evaluated in weather-prediction mode, with nudging to truth. It is found that the supermodel performs better than any of the three models and marginally better than the best weighted average of the outputs of the three models run separately. To evaluate the utility for climate projection, parameters corresponding to greenhouse gas levels are changed in truth and in the three models. The inter-model connections trained in the present-CO2 runs no longer give the optimal configuration for the supermodel in the doubled-CO2 realm, but the supermodel with the previously trained connections is still useful as compared to the separate models or averages of their outputs. In ongoing work, a training algorithm is examined that attempts to match the blocked-zonal index cycle of the SPEEDO model atmosphere to truth, rather than simply minimizing the RMS error in the various fields. Such an approach comes closer to matching the model attractor to the true attractor - the desired effect in climate projection - rather than matching instantaneous states. Gradient descent in a cost function defined over a finite temporal window can indeed be done efficiently. Preliminary results are presented for a crudely defined index cycle.
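    The error-cancellation idea behind supermodeling can be illustrated with a 1-D toy. This is not the SPEEDO setup: the paper connects models through trained nudging coefficients, whereas the sketch below simply averages the tendencies of two imperfect models with a fixed weight, chosen here so their parameter errors cancel.

```python
# Two imperfect models of dx/dt = -x + 1 (equilibrium 1.0), with biased
# parameters a and b; a weighted tendency average recovers the truth.
def tendency(x, a, b):
    return -a * x + b

def integrate(f, x0=0.0, dt=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x += dt * f(x)
    return x

truth = integrate(lambda x: tendency(x, 1.0, 1.0))   # equilibrium 1.0
m1 = integrate(lambda x: tendency(x, 1.2, 0.9))      # equilibrium 0.75
m2 = integrate(lambda x: tendency(x, 0.8, 1.1))      # equilibrium 1.375
w = 0.5
super_x = integrate(lambda x: w * tendency(x, 1.2, 0.9)
                    + (1 - w) * tendency(x, 0.8, 1.1))
```

    The combined tendency is -(0.5*1.2 + 0.5*0.8)x + (0.5*0.9 + 0.5*1.1) = -x + 1, so the supermodel equilibrium matches truth even though neither component model does.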

  8. First-harmonic nonlinearities can predict unseen third-harmonics in medium-amplitude oscillatory shear (MAOS)

    NASA Astrophysics Data System (ADS)

    Carey-De La Torre, Olivia; Ewoldt, Randy H.

    2018-02-01

    We use first-harmonic MAOS nonlinearities from G1' and G1″ to test a predictive structure-rheology model for a transient polymer network. Using experiments with PVA-Borax (polyvinyl alcohol cross-linked by sodium tetraborate (borax)) at 11 different compositions, the model is calibrated to first-harmonic MAOS data on a torque-controlled rheometer at a fixed frequency, and used to predict third-harmonic MAOS on a displacement-controlled rheometer at a different frequency three times larger. The prediction matches experiments for the decomposed MAOS measures [e3] and [v3] with median disagreement of 13% and 25%, respectively, across all 11 compositions tested. This supports the validity of the model and demonstrates the value of using all four MAOS signatures to understand and test structure-rheology relations for complex fluids.
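    As background on the measurement itself, harmonic amplitudes of an oscillatory stress signal are Fourier projections onto sin/cos at multiples of the driving frequency. The sketch below uses an invented synthetic signal, not PVA-Borax data.

```python
import numpy as np

# Project a periodic signal onto sin/cos at harmonic n of omega.
# Averaging over an integer number of cycles isolates that harmonic.
def harmonic(t, sig, omega, n):
    c = 2.0 * np.mean(sig * np.cos(n * omega * t))
    s = 2.0 * np.mean(sig * np.sin(n * omega * t))
    return c, s

omega = 2.0 * np.pi
t = np.linspace(0.0, 10.0, 20000, endpoint=False)   # exactly 10 cycles
stress = 1.0 * np.sin(omega * t) + 0.05 * np.sin(3 * omega * t)
c1, s1 = harmonic(t, stress, omega, 1)
c3, s3 = harmonic(t, stress, omega, 3)   # third harmonic recovered
```

    Sampling an integer number of cycles makes the discrete projections exactly orthogonal, so the small third-harmonic amplitude is recovered without leakage from the much larger fundamental.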

  9. Study of chromatic adaptation using memory color matches, Part I: neutral illuminants.

    PubMed

    Smet, Kevin A G; Zhai, Qiyan; Luo, Ming R; Hanselaer, Peter

    2017-04-03

    Twelve corresponding color data sets have been obtained using the long-term memory colors of familiar objects as target stimuli. Data were collected for familiar objects with neutral, red, yellow, green and blue hues under 4 approximately neutral illumination conditions on or near the blackbody locus. The advantages of the memory color matching method are discussed in light of other, more traditional asymmetric matching techniques. Results were compared to eight corresponding color data sets available in the literature. The corresponding color data were used to test several linear (von Kries, RLAB, etc.) and nonlinear (Hunt & Nayatani) chromatic adaptation transforms (CATs). It was found that a simple two-step von Kries transform, whereby the degree of adaptation D is optimized to minimize the ΔEu'v' prediction errors, outperformed all other tested models for both the memory color and literature corresponding color sets, with prediction errors lower for the memory color sets. The predictive errors were substantially smaller than the standard uncertainty on the average observer and were comparable to what are considered just-noticeable differences in the CIE u'v' chromaticity diagram, supporting the use of memory-color-based internal references to study chromatic adaptation mechanisms.
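    The core of the transforms being compared is a von Kries scaling of cone responses. The sketch below is a one-step von Kries with a degree-of-adaptation parameter D; the paper's two-step variant chains two such adaptations through an intermediate reference, and all cone values below are invented for illustration.

```python
import numpy as np

# One-step von Kries chromatic adaptation with degree of adaptation D.
# D=1 is full adaptation, D=0 no adaptation. Values are illustrative.
def von_kries(lms, lms_src_white, lms_dst_white, D=1.0):
    lms = np.asarray(lms, float)
    gain = D * (np.asarray(lms_dst_white, float)
                / np.asarray(lms_src_white, float)) + (1.0 - D)
    return gain * lms

stim = [0.5, 0.4, 0.3]      # cone responses of a stimulus
src_w = [1.0, 1.0, 0.8]     # white point under the source illuminant
dst_w = [0.9, 1.0, 1.1]     # white point under the destination illuminant
corr = von_kries(stim, src_w, dst_w, D=1.0)
```

    Optimizing D, as the abstract describes, amounts to sliding each channel's gain between 1 (no adaptation) and the full white-point ratio.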

  10. Convective Heat Transfer in the Reusable Solid Rocket Motor of the Space Transportation System

    NASA Technical Reports Server (NTRS)

    Ahmad, Rashid A.; Cash, Stephen F. (Technical Monitor)

    2002-01-01

    This simulation involved a two-dimensional axisymmetric model of a full-motor initial grain of the Reusable Solid Rocket Motor (RSRM) of the Space Transportation System (STS). It was conducted with the commercial CFD (computational fluid dynamics) code FLUENT. This analysis was performed to: a) maintain continuity with most related previous analyses, b) serve as a non-vectored baseline for any three-dimensional vectored nozzles, c) provide a relatively simple application and checkout for various CFD solution schemes, grid sensitivity studies, turbulence modeling and heat transfer, and d) calculate nozzle convective heat transfer coefficients. The accuracy of the present results and the selection of the numerical schemes and turbulence models were based on matching the rocket ballistic predictions of mass flow rate, head end pressure, vacuum thrust and specific impulse, and the measured chamber pressure drop. The match to these ballistic predictions was found to be good. This study was limited to convective heat transfer, and the results compared favorably with existing theory. In addition, a qualitative, relative comparison was made with backed-out data for the ratio of the convective heat transfer coefficient to the specific heat at constant pressure. These backed-out data were devised to match nozzle erosion resulting from combined heat transfer (convective, radiative and conductive), chemical (transpirating), and mechanical (shear and particle impingement forces) effects.

  11. Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking.

    PubMed

    Lages, Martin; Scheel, Anne

    2016-01-01

    We investigated the proposition of a two-systems Theory of Mind in adults' belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions are different from choice predictions yet reflect second-order perspective taking.

  12. Replacing the nucleus pulposus of the intervertebral disk: prediction of suitable properties of a replacement material using finite element analysis.

    PubMed

    Meakin, J R

    2001-03-01

    An axisymmetric finite element model of a human lumbar disk was developed to investigate the properties required of an implant to replace the nucleus pulposus. In the intact disk, the nucleus was modeled as a fluid, and the annulus as an elastic solid. The Young's modulus of the annulus was determined empirically by matching model predictions to experimental results. The model was checked for sensitivity to the input parameter values and found to give reasonable behavior. The model predicted that removal of the nucleus would change the response of the annulus to compression. This prediction was consistent with experimental results, thus validating the model. Implants to fill the cavity produced by nucleus removal were modeled as elastic solids. The Poisson's ratio was fixed at 0.49, and the Young's modulus was varied from 0.5 to 100 MPa. Two sizes of implant were considered: full size (filling the cavity) and small size (smaller than the cavity). The model predicted that a full size implant would reverse the changes to annulus behavior, but a smaller implant would not. By comparing the stress distribution in the annulus, the ideal Young's modulus was predicted to be approximately 3 MPa. These predictions have implications for current nucleus implant designs. Copyright 2001 Kluwer Academic Publishers

  13. Statistical validation of predictive TRANSP simulations of baseline discharges in preparation for extrapolation to JET D-T

    NASA Astrophysics Data System (ADS)

    Kim, Hyun-Tae; Romanelli, M.; Yuan, X.; Kaye, S.; Sips, A. C. C.; Frassinetti, L.; Buchanan, J.; JET Contributors

    2017-06-01

    This paper presents for the first time a statistical validation of predictive TRANSP simulations of plasma temperature using two transport models, GLF23 and TGLF, over a database of 80 baseline H-mode discharges in JET-ILW. While the accuracy of the predicted Te with TRANSP-GLF23 is affected by plasma collisionality, the dependency of predictions on collisionality is less significant when using TRANSP-TGLF, indicating that the latter model has a broader applicability across plasma regimes. TRANSP-TGLF also shows a good matching of predicted Ti with experimental measurements allowing for a more accurate prediction of the neutron yields. The impact of input data and assumptions prescribed in the simulations are also investigated in this paper. The statistical validation and the assessment of uncertainty level in predictive TRANSP simulations for JET-ILW-DD will constitute the basis for the extrapolation to JET-ILW-DT experiments.

  14. Predicting intensity ranks of peptide fragment ions.

    PubMed

    Frank, Ari M

    2009-05-01

    Accurate modeling of peptide fragmentation is necessary for the development of robust scoring functions for peptide-spectrum matches, which are the cornerstone of MS/MS-based identification algorithms. Unfortunately, peptide fragmentation is a complex process that can involve several competing chemical pathways, which makes it difficult to develop generative probabilistic models that describe it accurately. However, the vast amounts of MS/MS data being generated now make it possible to use data-driven machine learning methods to develop discriminative ranking-based models that predict the intensity ranks of a peptide's fragment ions. We use simple sequence-based features that get combined by a boosting algorithm into models that make peak rank predictions with high accuracy. In an accompanying manuscript, we demonstrate how these prediction models are used to significantly improve the performance of peptide identification algorithms. The models can also be useful in the design of optimal multiple reaction monitoring (MRM) transitions, in cases where there is insufficient experimental data to guide the peak selection process. The prediction algorithm can also be run independently through PepNovo+, which is available for download from http://bix.ucsd.edu/Software/PepNovo.html.

  15. Predicting Intensity Ranks of Peptide Fragment Ions

    PubMed Central

    Frank, Ari M.

    2009-01-01

    Accurate modeling of peptide fragmentation is necessary for the development of robust scoring functions for peptide-spectrum matches, which are the cornerstone of MS/MS-based identification algorithms. Unfortunately, peptide fragmentation is a complex process that can involve several competing chemical pathways, which makes it difficult to develop generative probabilistic models that describe it accurately. However, the vast amounts of MS/MS data being generated now make it possible to use data-driven machine learning methods to develop discriminative ranking-based models that predict the intensity ranks of a peptide's fragment ions. We use simple sequence-based features that get combined by a boosting algorithm into models that make peak rank predictions with high accuracy. In an accompanying manuscript, we demonstrate how these prediction models are used to significantly improve the performance of peptide identification algorithms. The models can also be useful in the design of optimal MRM transitions, in cases where there is insufficient experimental data to guide the peak selection process. The prediction algorithm can also be run independently through PepNovo+, which is available for download from http://bix.ucsd.edu/Software/PepNovo.html. PMID:19256476

  16. Modelling biological invasions: species traits, species interactions, and habitat heterogeneity.

    PubMed

    Cannas, Sergio A; Marco, Diana E; Páez, Sergio A

    2003-05-01

    In this paper we explore the integration of different factors to understand, predict and control ecological invasions, through a general cellular automaton model developed especially for this purpose. The model includes life history traits of several species in a modular structure of multiple interacting cellular automata. We performed simulations using field values corresponding to the exotic Gleditsia triacanthos and native co-dominant trees in a montane area. Presence of a G. triacanthos juvenile bank was a determinant condition for invasion success. The main parameters influencing invasion velocity were mean seed dispersal distance and minimum reproductive age. Seed production had a small influence on the invasion velocity. Velocities predicted by the model agreed well with estimations from field data. Predicted population density values matched field values closely. The modular structure of the model, the explicit interaction between the invader and the native species, and the simplicity of parameters and transition rules are novel features of the model.
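    How dispersal distance and minimum reproductive age jointly set the invasion velocity can be seen in a deliberately simplified toy: a deterministic 1-D automaton with one species. The paper's model is 2-D, stochastic, and multi-species; every rule below is an assumption for illustration only.

```python
import numpy as np

# 1-D invasion front: a cell holds a plant age (-1 = empty). Each step,
# plants of age >= repro_age seed all empty cells within disp cells.
def invade(length=200, steps=60, disp=2, repro_age=3):
    age = np.full(length, -1)
    age[0] = repro_age                 # one mature founder at the left edge
    for _ in range(steps):
        mature = np.where(age >= repro_age)[0]
        newly = [j for i in mature
                 for j in range(max(0, i - disp), min(length, i + disp + 1))
                 if age[j] < 0]
        age[age >= 0] += 1             # existing plants age by one step
        age[newly] = 0                 # new recruits start at age 0
    return int(np.max(np.where(age >= 0)))

front = invade()
```

    Each new cohort needs repro_age + 1 steps to seed the next disp cells, so the front velocity is disp / (repro_age + 1) cells per step: faster dispersal or earlier maturity both speed the invasion, matching the sensitivity the abstract reports.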

  17. Validation of the Economic and Health Outcomes Model of Type 2 Diabetes Mellitus (ECHO-T2DM).

    PubMed

    Willis, Michael; Johansen, Pierre; Nilsson, Andreas; Asseburg, Christian

    2017-03-01

    The Economic and Health Outcomes Model of Type 2 Diabetes Mellitus (ECHO-T2DM) was developed to address study questions pertaining to the cost-effectiveness of treatment alternatives in the care of patients with type 2 diabetes mellitus (T2DM). Naturally, the usefulness of a model is determined by the accuracy of its predictions. A previous version of ECHO-T2DM was validated against actual trial outcomes and the model predictions were generally accurate. However, there have been recent upgrades to the model, which modify model predictions and necessitate an update of the validation exercises. The objectives of this study were to extend the methods available for evaluating model validity, to conduct a formal model validation of ECHO-T2DM (version 2.3.0) in accordance with the principles espoused by the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and the Society for Medical Decision Making (SMDM), and secondarily to evaluate the relative accuracy of four sets of macrovascular risk equations included in ECHO-T2DM. We followed the ISPOR/SMDM guidelines on model validation, evaluating face validity, verification, cross-validation, and external validation. Model verification involved 297 'stress tests', in which specific model inputs were modified systematically to ascertain correct model implementation. Cross-validation consisted of a comparison between ECHO-T2DM predictions and those of the seminal National Institutes of Health model. In external validation, study characteristics were entered into ECHO-T2DM to replicate the clinical results of 12 studies (including 17 patient populations), and model predictions were compared to observed values using established statistical techniques as well as measures of average prediction error, separately for the four sets of macrovascular risk equations supported in ECHO-T2DM. Sub-group analyses were conducted for dependent vs. independent outcomes and for microvascular vs. macrovascular vs. mortality endpoints. All stress tests were passed. ECHO-T2DM replicated the National Institutes of Health cost-effectiveness application with numerically similar results. In external validation of ECHO-T2DM, model predictions agreed well with observed clinical outcomes. For all sets of macrovascular risk equations, the results were close to the intercept and slope coefficients corresponding to a perfect match, resulting in high R2 and failure to reject concordance using an F test. The results were similar for sub-groups of dependent and independent validation, with some degree of under-prediction of macrovascular events. ECHO-T2DM continues to match health outcomes in clinical trials in T2DM, with prediction accuracy similar to other leading models of T2DM.
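    The concordance check described (regressing observed outcomes on predicted ones and F-testing the joint hypothesis intercept = 0, slope = 1) can be sketched as follows. The data below are synthetic, not ECHO-T2DM output.

```python
import numpy as np

# F-test of (intercept, slope) = (0, 1) in a regression of observed on
# predicted outcomes: compare the unrestricted OLS fit against the model
# forced to be y = x.
def concordance_f(pred, obs):
    x, y = np.asarray(pred, float), np.asarray(obs, float)
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss_full = np.sum((y - X @ beta) ** 2)
    rss_restr = np.sum((y - x) ** 2)          # restricted model: y = x
    f_stat = ((rss_restr - rss_full) / 2.0) / (rss_full / (n - 2))
    return beta, f_stat

rng = np.random.default_rng(1)
pred = np.linspace(1.0, 20.0, 20)
beta_ok, f_ok = concordance_f(pred, pred + rng.normal(0.0, 0.2, 20))
beta_bad, f_bad = concordance_f(pred, 1.5 * pred + rng.normal(0.0, 0.2, 20))
```

    A model whose predictions track the observations gives a small F statistic (failure to reject concordance, as in the abstract), while systematically biased predictions inflate it.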

  18. An efficient assisted history matching and uncertainty quantification workflow using Gaussian processes proxy models and variogram based sensitivity analysis: GP-VARS

    NASA Astrophysics Data System (ADS)

    Rana, Sachin; Ertekin, Turgay; King, Gregory R.

    2018-05-01

    Reservoir history matching is frequently viewed as an optimization problem which involves minimizing the misfit between simulated and observed data. Many gradient-based and evolutionary-strategy-based optimization algorithms have been proposed to solve this problem, which typically require a large number of numerical simulations to find feasible solutions. Therefore, a new methodology referred to as GP-VARS is proposed in this study, which uses forward and inverse Gaussian processes (GP) based proxy models combined with a novel application of variogram analysis of response surface (VARS) based sensitivity analysis to efficiently solve high-dimensional history matching problems. An empirical Bayes approach is proposed to optimally train GP proxy models for any given data. The history matching solutions are found via Bayesian optimization (BO) on forward GP models and via predictions of the inverse GP model in an iterative manner. An uncertainty quantification method using MCMC sampling in conjunction with the GP model is also presented to obtain a probabilistic estimate of reservoir properties and estimated ultimate recovery (EUR). An application of the proposed GP-VARS methodology to the PUNQ-S3 reservoir is presented, in which it is shown that GP-VARS finds history match solutions with approximately four times fewer numerical simulations than the differential evolution (DE) algorithm. Furthermore, a comparison of uncertainty quantification results obtained by GP-VARS, EnKF and other previously published methods shows that the P50 estimate of oil EUR obtained by GP-VARS is in close agreement with the true values for the PUNQ-S3 reservoir.
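    The building block of such a workflow is a GP regression proxy that stands in for the expensive simulator. The numpy sketch below fits a minimal RBF-kernel GP to a cheap analytic function; hyperparameters are fixed by hand rather than trained by empirical Bayes as in the paper.

```python
import numpy as np

# Minimal GP regression proxy with a squared-exponential (RBF) kernel.
def rbf(a, b, ell=1.0, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6):
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)       # posterior-mean weights
    return rbf(x_test, x_train) @ alpha

x = np.linspace(0.0, 5.0, 20)    # "simulator" inputs
y = np.sin(x)                    # stand-in for simulator responses
mean = gp_predict(x, y, np.array([2.5]))
```

    Once trained, evaluating the proxy costs a matrix-vector product instead of a full reservoir simulation, which is where the reported factor-of-four saving in simulations comes from.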

  19. Using Deep Learning Model for Meteorological Satellite Cloud Image Prediction

    NASA Astrophysics Data System (ADS)

    Su, X.

    2017-12-01

    A satellite cloud image contains much weather information, such as precipitation information. Short-time cloud movement forecasting is important for precipitation forecasting and is the primary means of typhoon monitoring. Traditional methods mostly use cloud feature matching and linear extrapolation to predict cloud movement, so nonstationary processes during cloud motion, such as inversion and deformation, are basically not considered. It is still a hard task to predict cloud movement timely and correctly. As deep learning models perform well in learning spatiotemporal features, we can meet this challenge by regarding cloud image prediction as a spatiotemporal sequence forecasting problem and introducing a deep learning model to solve it. In this research, we use a variant of the Gated Recurrent Unit (GRU) that has convolutional structures to deal with spatiotemporal features, and we build an end-to-end model to solve this forecast problem. In this model, both the input and output are spatiotemporal sequences. Compared to the Convolutional LSTM (ConvLSTM) model, this model has fewer parameters. We apply this model to GOES satellite data, and the model performs well.
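    The core of such a model is a GRU cell whose dense matrix multiplications are replaced by convolutions over the image. Below is a single-channel numpy sketch of one recurrence step, an illustration of the ConvGRU update equations rather than the paper's trained network; kernel sizes and scales are assumptions.

```python
import numpy as np

def conv2d(x, k):                       # 'same' 2-D convolution, one channel
    pad = k.shape[0] // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def convgru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    z = sigmoid(conv2d(x, Wz) + conv2d(h, Uz))        # update gate
    r = sigmoid(conv2d(x, Wr) + conv2d(h, Ur))        # reset gate
    h_tilde = np.tanh(conv2d(x, Wh) + conv2d(r * h, Uh))
    return (1.0 - z) * h + z * h_tilde                # new hidden state

rng = np.random.default_rng(0)
Wz, Uz, Wr, Ur, Wh, Uh = [rng.normal(scale=0.1, size=(3, 3)) for _ in range(6)]
x = rng.normal(size=(8, 8))     # one "cloud image" frame
h0 = np.zeros((8, 8))
h1 = convgru_step(x, h0, Wz, Uz, Wr, Ur, Wh, Uh)
```

    Because the gates are convolutional, the hidden state keeps the spatial layout of the image, and the cell needs fewer parameters than a ConvLSTM, which carries an extra cell state and one more gate.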

  20. Available pressure amplitude of linear compressor based on phasor triangle model

    NASA Astrophysics Data System (ADS)

    Duan, C. X.; Jiang, X.; Zhi, X. Q.; You, X. K.; Qiu, L. M.

    2017-12-01

    Linear compressors for cryocoolers possess the advantages of long-life operation, high efficiency, low vibration and compact structure. It is significant to study the match mechanisms between the compressor and the cold finger, which determine the working efficiency of the cryocooler. However, the output characteristics of a linear compressor are complicated, since they are affected by many interacting parameters. The existing matching methods are simplified and mainly focus on the compressor efficiency and output acoustic power, while neglecting the important output parameter of pressure amplitude. In this study, a phasor triangle model based on an analysis of the forces on the piston is proposed. It can be used to predict not only the output acoustic power and the efficiency, but also the pressure amplitude of the linear compressor. Calculated results agree well with experimental measurements. With this phasor triangle model, the theoretical maximum output pressure amplitude of the linear compressor can be calculated simply from a known charging pressure and operating frequency. Compared with the mechanical and electrical model of the linear compressor, the new model provides an intuitive understanding of the match mechanism with a faster computational process. The model can also explain the experimentally observed proportionality between the output pressure amplitude and the piston displacement. Further model analysis confirms this proportionality as an expression of an unmatched design of the compressor. The phasor triangle model may provide an alternative method for compressor design and matching with the cold finger.
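    The abstract does not give the paper's exact phasor construction; as a generic illustration of the idea, the sketch below closes a driven-piston force balance in phasor form and solves for the pressure phasor. All numbers are invented, and the real model also includes the motor's electrical dynamics.

```python
import numpy as np

# Phasor force balance for a driven piston, with displacement phasor X taken
# as the (real) phase reference:
#   m*x'' = F*e^{i w t} - k*x - c*x' - A*dp   =>   solve for dp.
def pressure_phasor(F, X, m, k, c, A, omega):
    dp = (F - ((k - m * omega**2) + 1j * c * omega) * X) / A
    return abs(dp), np.angle(dp)

amp, phase = pressure_phasor(F=30.0, X=0.003, m=0.5, k=2.0e4,
                             c=15.0, A=1.13e-3, omega=2 * np.pi * 60)
```

    With the other phasors fixed, the pressure amplitude scales with the displacement phasor X, which is the kind of proportionality between pressure amplitude and piston displacement the abstract reports.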

  1. Myocardial segmentation based on coronary anatomy using coronary computed tomography angiography: Development and validation in a pig model.

    PubMed

    Chung, Mi Sun; Yang, Dong Hyun; Kim, Young-Hak; Kang, Soo-Jin; Jung, Joonho; Kim, Namkug; Heo, Seung-Ho; Baek, Seunghee; Seo, Joon Beom; Choi, Byoung Wook; Kang, Joon-Won; Lim, Tae-Hwan

    2017-10-01

    To validate a method for performing myocardial segmentation based on coronary anatomy using coronary CT angiography (CCTA). Coronary artery-based myocardial segmentation (CAMS) was developed for use with CCTA. To validate and compare this method with the conventional American Heart Association (AHA) classification, a single coronary occlusion model was prepared and validated using six pigs. The unstained occluded coronary territories of the specimens and the corresponding arterial territories from the CAMS and AHA segmentations were compared using slice-by-slice matching and 100 virtual myocardial columns. CAMS predicted the ischaemic area more precisely than the AHA method, as indicated by 95% versus 76% (p < 0.001) matched columns (defined as the percentage of matched columns of the segmentation method divided by the number of unstained columns in the specimen). According to the subgroup analyses, CAMS demonstrated a higher percentage of matched columns than the AHA method in the left anterior descending artery (100% vs. 77%; p < 0.001) and the mid- (99% vs. 83%; p = 0.046) and apical-level territories of the left ventricle (90% vs. 52%; p = 0.011). CAMS is a feasible method for identifying the corresponding myocardial territories of the coronary arteries using CCTA. • CAMS is a feasible method for identifying corresponding coronary territory using CTA • CAMS is more accurate in predicting coronary territory than the AHA method • The AHA method may underestimate the ischaemic territory of LAD stenosis.

  2. Prior probability and feature predictability interactively bias perceptual decisions

    PubMed Central

    Dunovan, Kyle E.; Tremel, Joshua J.; Wheeler, Mark E.

    2014-01-01

    Anticipating a forthcoming sensory experience facilitates perception for expected stimuli but also hinders perception for less likely alternatives. Recent neuroimaging studies suggest that expectation biases arise from feature-level predictions that enhance early sensory representations and facilitate evidence accumulation for contextually probable stimuli while suppressing alternatives. Reasonably then, the extent to which prior knowledge biases subsequent sensory processing should depend on the precision of expectations at the feature level as well as the degree to which expected features match those of an observed stimulus. In the present study we investigated how these two sources of uncertainty modulated pre- and post-stimulus bias mechanisms in the drift-diffusion model during a probabilistic face/house discrimination task. We tested several plausible models of choice bias, concluding that predictive cues led to a bias in both the starting-point and rate of evidence accumulation favoring the more probable stimulus category. We further tested the hypotheses that prior bias in the starting-point was conditional on the feature-level uncertainty of category expectations and that dynamic bias in the drift-rate was modulated by the match between expected and observed stimulus features. Starting-point estimates suggested that subjects formed a constant prior bias in favor of the face category, which exhibits less feature-level variability, that was strengthened or weakened by trial-wise predictive cues. Furthermore, we found that the gain on face/house evidence was increased for stimuli with less ambiguous features and that this relationship was enhanced by valid category expectations. These findings offer new evidence that bridges psychological models of decision-making with recent predictive coding theories of perception. PMID:24978303
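    The mechanism described, a cue shifting the starting point and a feature match raising the drift rate, can be simulated directly with a drift-diffusion random walk. The parameters below are illustrative, not the fitted values from the study.

```python
import numpy as np

# Drift-diffusion walkers between boundaries 0 and a; returns the probability
# of absorbing at the upper ("face") boundary. Vectorized Euler scheme.
def ddm_upper_prob(v, z, a=1.0, sigma=1.0, dt=1e-3, n=20000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.full(n, z)
    active = np.ones(n, dtype=bool)
    upper = 0
    while active.any():
        k = int(active.sum())
        x[active] += v * dt + sigma * np.sqrt(dt) * rng.normal(size=k)
        hit_up = active & (x >= a)
        hit_dn = active & (x <= 0.0)
        upper += int(hit_up.sum())
        active &= ~(hit_up | hit_dn)
    return upper / n

p_neutral = ddm_upper_prob(v=0.5, z=0.50)   # no predictive cue
p_cued    = ddm_upper_prob(v=0.8, z=0.65)   # valid cue: higher z and v
```

    Raising either the starting point z or the drift rate v increases the probability of a "face" response; the two biases are separable in the model even though both push choices the same way, which is what lets fitted parameters distinguish prior bias from dynamic, match-dependent bias.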

  3. N-loop running should be combined with N-loop matching

    NASA Astrophysics Data System (ADS)

    Braathen, Johannes; Goodsell, Mark D.; Krauss, Manuel E.; Opferkuch, Toby; Staub, Florian

    2018-01-01

    We investigate the high-scale behavior of Higgs sectors beyond the Standard Model, pointing out that the proper matching of the quartic couplings before applying the renormalization group equations (RGEs) is of crucial importance for reliable predictions at larger energy scales. In particular, the common practice of using leading-order parameters in the RGE evolution is insufficient to make precise statements on a given model's UV behavior, typically resulting in uncertainties of many orders of magnitude. We argue that, before applying N-loop RGEs, a matching should even be performed at N-loop order, in contrast to common lore. We show both analytical and numerical results where the impact is sizable for three minimal extensions of the Standard Model: a singlet extension, a second Higgs doublet and finally vector-like quarks. We highlight that the known two-loop RGEs tend to moderate the running of their one-loop counterparts, typically delaying the appearance of Landau poles. For the addition of vector-like quarks we show that the complete two-loop matching and RGE evolution hints at a stabilization of the electroweak vacuum at high energies, in contrast to results in the literature.
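    A toy version of the point about matching: take a single quartic coupling with the pure φ⁴ one-loop beta function β = 3λ²/(16π²) (the models in the paper have many more terms). A modest shift in the boundary value λ(μ₀), as between tree-level and loop-matched inputs, moves the Landau pole scale by many orders of magnitude.

```python
import numpy as np

# One-loop running of a single quartic coupling in t = ln(mu/mu0):
#   d(lam)/dt = 3*lam^2 / (16*pi^2),
# with analytic solution and a Landau pole where the denominator vanishes.
def lam(t, lam0):
    return lam0 / (1.0 - 3.0 * lam0 * t / (16.0 * np.pi**2))

def t_pole(lam0):
    return 16.0 * np.pi**2 / (3.0 * lam0)

# Mimic a tree-level vs loop-matched boundary value with two nearby couplings:
ratio = np.exp(t_pole(0.5) - t_pole(0.6))   # ratio of pole scales mu_pole
```

    Here shifting λ(μ₀) from 0.5 to 0.6 moves the pole scale by a factor of roughly 10⁷, the kind of uncertainty the abstract attributes to mismatched boundary conditions.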

  4. A mathematical model of medial consonant identification by cochlear implant users.

    PubMed

    Svirsky, Mario A; Sagi, Elad; Meyer, Ted A; Kaiser, Adam R; Teoh, Su Wooi

    2011-04-01

    The multidimensional phoneme identification model is applied to consonant confusion matrices obtained from 28 postlingually deafened cochlear implant users. This model predicts consonant matrices based on these subjects' ability to discriminate a set of postulated spectral, temporal, and amplitude speech cues as presented to them by their device. The model produced confusion matrices that matched many aspects of individual subjects' consonant matrices, including information transfer for the voicing, manner, and place features, despite individual differences in age at implantation, implant experience, device and stimulation strategy used, as well as overall consonant identification level. The model was able to match the general pattern of errors between consonants, but not the full complexity of all consonant errors made by each individual. The present study represents an important first step in developing a model that can be used to test specific hypotheses about the mechanisms cochlear implant users employ to understand speech.

  5. A mathematical model of medial consonant identification by cochlear implant users

    PubMed Central

    Svirsky, Mario A.; Sagi, Elad; Meyer, Ted A.; Kaiser, Adam R.; Teoh, Su Wooi

    2011-01-01

    The multidimensional phoneme identification model is applied to consonant confusion matrices obtained from 28 postlingually deafened cochlear implant users. This model predicts consonant matrices based on these subjects’ ability to discriminate a set of postulated spectral, temporal, and amplitude speech cues as presented to them by their device. The model produced confusion matrices that matched many aspects of individual subjects’ consonant matrices, including information transfer for the voicing, manner, and place features, despite individual differences in age at implantation, implant experience, device and stimulation strategy used, as well as overall consonant identification level. The model was able to match the general pattern of errors between consonants, but not the full complexity of all consonant errors made by each individual. The present study represents an important first step in developing a model that can be used to test specific hypotheses about the mechanisms cochlear implant users employ to understand speech. PMID:21476674

  6. Automated docking of ligands to an artificial active site: augmenting crystallographic analysis with computer modeling

    NASA Astrophysics Data System (ADS)

    Rosenfeld, Robin J.; Goodsell, David S.; Musah, Rabi A.; Morris, Garrett M.; Goodin, David B.; Olson, Arthur J.

    2003-08-01

    The W191G cavity of cytochrome c peroxidase is useful as a model system for introducing small molecule oxidation in an artificially created cavity. A set of small, cyclic, organic cations was previously shown to bind in the buried, solvent-filled pocket created by the W191G mutation. We docked these ligands and a set of non-binders in the W191G cavity using AutoDock 3.0, and compared the docking predictions for the ligands with experimentally determined binding energies and X-ray crystal structure complexes. Predicted binding energies differed from measured values by ± 0.8 kcal/mol. For most ligands, the docking simulation clearly predicted a single binding mode that matched the crystallographic binding mode within 1.0 Å RMSD. For 2 ligands, where the docking procedure yielded an ambiguous result, solutions matching the crystallographic result could be obtained by including an additional crystallographically observed water molecule in the protein model. For the remaining 2 ligands, docking indicated multiple binding modes, consistent with the original electron density, suggesting disordered binding of these ligands. Visual inspection of the atomic affinity grid maps used in docking calculations revealed two patches of high affinity for hydrogen bond donating groups. Multiple solutions are predicted as these two sites compete for polar hydrogens in the ligand during the docking simulation. Ligands could be distinguished, to some extent, from non-binders using a combination of two trends: predicted binding energy and level of clustering. In summary, AutoDock 3.0 appears to be useful in predicting key structural and energetic features of ligand binding in the W191G cavity.

  7. A Simplified Micromechanical Modeling Approach to Predict the Tensile Flow Curve Behavior of Dual-Phase Steels

    NASA Astrophysics Data System (ADS)

    Nanda, Tarun; Kumar, B. Ravi; Singh, Vishal

    2017-11-01

    Micromechanical modeling is used to predict a material's tensile flow curve behavior from its microstructural characteristics. This research develops a simplified micromechanical modeling approach for predicting the flow curve behavior of dual-phase steels. The existing literature reports two broad approaches for determining the tensile flow curve of these steels; the modeling approach developed in this work attempts to overcome specific limitations of both. This approach combines a dislocation-based strain-hardening method with the rule of mixtures. In the first step of modeling, the `dislocation-based strain-hardening method' was employed to predict the tensile behavior of the individual ferrite and martensite phases. In the second step, the individual flow curves were combined using the `rule of mixtures' to obtain the composite dual-phase flow behavior. To check the accuracy of the proposed model, four distinct dual-phase microstructures comprising different ferrite grain sizes, martensite fractions, and carbon contents in martensite were processed by annealing experiments. The true stress-strain curves for the various microstructures were predicted with the newly developed micromechanical model. The results of the micromechanical model matched closely with those of actual tensile tests. Thus, this micromechanical modeling approach can be used to predict and optimize the tensile flow behavior of dual-phase steels.
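
    The two-step scheme can be sketched numerically. The Hollomon power law and all constants below are illustrative stand-ins (the paper derives each phase's curve from a dislocation-based hardening law); only the second step, the rule of mixtures, is shown as described:

```python
def hollomon(K, n):
    """Hollomon power-law flow curve sigma = K * eps**n (MPa); a simple
    stand-in for the paper's dislocation-based hardening law."""
    return lambda eps: K * eps ** n

# Illustrative constants only, not fitted to any real dual-phase steel.
sigma_ferrite = hollomon(K=700.0, n=0.25)      # softer ferrite phase
sigma_martensite = hollomon(K=2200.0, n=0.10)  # harder martensite phase

def dual_phase_stress(eps, f_martensite):
    """Step two, the rule of mixtures under iso-strain: both phases see
    the same strain and their stresses combine by volume fraction."""
    return (f_martensite * sigma_martensite(eps)
            + (1.0 - f_martensite) * sigma_ferrite(eps))

for eps in (0.02, 0.05, 0.10):
    print(round(dual_phase_stress(eps, f_martensite=0.3), 1))
```

    Raising the martensite fraction shifts the composite curve toward the hard-phase curve, which is how the model captures the effect of microstructure on the flow behavior.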

  8. The pitch of short-duration fundamental frequency glissandos.

    PubMed

    d'Alessandro, C; Rosset, S; Rossi, J P

    1998-10-01

    Pitch perception for short-duration fundamental frequency (F0) glissandos was studied. In the first part, new measurements using the method of adjustment are reported. Stimuli were F0 glissandos centered at 220 Hz. The parameters under study were: F0 glissando extents (0, 0.8, 1.5, 3, 6, and 12 semitones, i.e., 0, 10.17, 18.74, 38.17, 76.63, and 155.56 Hz), F0 glissando durations (50, 100, 200, and 300 ms), F0 glissando directions (rising or falling), and the extremity of F0 glissandos matched (beginning or end). In the second part, the main results are discussed: (1) perception seems to correspond to an average of the frequencies present in the vicinity of the extremity matched; (2) the higher extremities of the glissando seem more important; (3) adjustments at the end are closer to the extremities than adjustments at the beginning. In the third part, numerical models accounting for the experimental data are proposed: a time-average model and a weighted time-average model. Optimal parameters for these models are derived. The weighted time-average model achieves a 94% accurate prediction rate for the experimental data. The numerical model is successful in predicting the pitch of short-duration F0 glissandos.
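
    The two numerical models compared in the abstract can be sketched as follows; the linearly increasing weight profile is illustrative only, not the optimal weighting derived from the experimental data:

```python
def time_average(f0):
    """Time-average model: perceived pitch ~ mean of the F0 samples."""
    return sum(f0) / len(f0)

def weighted_time_average(f0, weights):
    """Weighted time-average model: some samples (e.g. near the matched
    extremity) count more. This weight profile is illustrative, not the
    paper's fitted one."""
    return sum(w * f for w, f in zip(weights, f0)) / sum(weights)

# A rising glissando from 200 Hz to 240 Hz sampled at five points.
f0 = [200.0, 210.0, 220.0, 230.0, 240.0]
print(time_average(f0))  # 220.0

w = [1, 2, 3, 4, 5]  # emphasize the end of the glide
print(weighted_time_average(f0, w))
```

    Weighting the later, higher samples pulls the predicted pitch above the plain average, consistent with the reported finding that the extremity matched and the higher frequencies dominate the percept.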

  9. Revisiting the Tale of Hercules: How Stars Orbiting the Lagrange Points Visit the Sun

    NASA Astrophysics Data System (ADS)

    Pérez-Villegas, Angeles; Portail, Matthieu; Wegg, Christopher; Gerhard, Ortwin

    2017-05-01

    We propose a novel explanation for the Hercules stream consistent with recent measurements of the extent and pattern speed of the Galactic bar. We have adapted a made-to-measure dynamical model tailored for the Milky Way to investigate the kinematics of the solar neighborhood (SNd). The model matches the 3D density of the red clump giant stars (RCGs) in the bulge and bar as well as stellar kinematics in the inner Galaxy, with a pattern speed of 39 km s⁻¹ kpc⁻¹. Cross-matching this model with the Gaia DR1 TGAS data combined with RAVE and LAMOST radial velocities, we find that the model naturally predicts a bimodality in the U-V velocity distribution for nearby stars which is in good agreement with the Hercules stream. In the model, the Hercules stream is made of stars orbiting the Lagrange points of the bar which move outward from the bar’s corotation radius to visit the SNd. While the model is not yet a quantitative fit of the velocity distribution, the new picture naturally predicts that the Hercules stream is more prominent inward from the Sun and nearly absent only a few hundred parsecs outward of the Sun, and plausibly explains why Hercules is prominent in old and metal-rich stars.

  10. Optimal control model predictions of system performance and attention allocation and their experimental validation in a display design study

    NASA Technical Reports Server (NTRS)

    Johannsen, G.; Govindaraj, T.

    1980-01-01

    The influence of different types of predictor displays in a longitudinal vertical takeoff and landing (VTOL) hover task is analyzed in a theoretical study. Several cases with differing amounts of predictive and rate information are compared. The optimal control model of the human operator is used to estimate human and system performance in terms of root-mean-square (rms) values and to compute optimized attention allocation. The only part of the model which is varied to predict these data is the observation matrix. Typical cases are selected for a subsequent experimental validation. The rms values as well as eye-movement data are recorded. The results agree favorably with those of the theoretical study in terms of relative differences. Better matching is achieved by revised model input data.

  11. Statistically Based Approach to Broadband Liner Design and Assessment

    NASA Technical Reports Server (NTRS)

    Jones, Michael G. (Inventor); Nark, Douglas M. (Inventor)

    2016-01-01

    A broadband liner design optimization includes utilizing in-duct attenuation predictions with a statistical fan source model to obtain optimum impedance spectra over a number of flow conditions for one or more liner locations in a bypass duct. The predicted optimum impedance information is then used with acoustic liner modeling tools to design liners having impedance spectra that most closely match the predicted optimum values. Design selection is based on an acceptance criterion that provides the ability to apply increasing weighting to specific frequencies and/or operating conditions. One or more broadband design approaches are utilized to produce a broadband liner that targets a full range of frequencies and operating conditions.

  12. Temperature evolution during compaction of pharmaceutical powders.

    PubMed

    Zavaliangos, Antonios; Galen, Steve; Cunningham, John; Winstead, Denita

    2008-08-01

    A numerical approach to the prediction of temperature evolution in tablet compaction is presented here. It is based on a coupled thermomechanical finite element analysis and a calibrated Drucker-Prager Cap model. This approach is capable of predicting transient temperatures during compaction, which cannot be assessed by experimental techniques due to inherent test limitations. Model predictions are validated with infrared (IR) temperature measurements of the top tablet surface after ejection and match well with experiments. The dependence of temperature fields on the speed and degree of compaction is naturally captured. The estimated transient temperatures are maximum at the end of compaction at the center of the tablet and close to the die wall next to the powder/die interface.

  13. Virtual Reality Calibration for Telerobotic Servicing

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1994-01-01

    A virtual reality calibration technique has been developed that matches a virtual environment of simulated graphics models, in 3-D geometry and perspective, with actual camera views of the remote-site task environment. The technique enables high-fidelity preview/predictive displays with a calibrated graphics overlay on live video.

  14. MicroRNA Targeting Specificity in Mammals: Determinants Beyond Seed Pairing

    PubMed Central

    Grimson, Andrew; Farh, Kyle Kai-How; Johnston, Wendy K.; Garrett-Engele, Philip; Lim, Lee P.; Bartel, David P.

    2013-01-01

    Summary Mammalian microRNAs (miRNAs) pair to 3'UTRs of mRNAs to direct their posttranscriptional repression. Important for target recognition are ~7-nt sites that match the seed region of the miRNA. However, these seed matches are not always sufficient for repression, indicating that other characteristics help specify targeting. By combining computational and experimental approaches, we uncovered five general features of site context that boost site efficacy: AU-rich nucleotide composition near the site, proximity to sites for co-expressed miRNAs (which leads to cooperative action), proximity to residues pairing to miRNA nucleotides 13–16, and positioning within the 3'UTR at least 15 nt from the stop codon and away from the center of long UTRs. A model combining these context determinants quantitatively predicts site performance both for exogenously added miRNAs and for endogenous miRNA-message interactions. Because it predicts site efficacy without recourse to evolutionary conservation, the model also identifies effective nonconserved sites and siRNA off-targets. PMID:17612493

  15. Tear dynamics in healthy and dry eyes.

    PubMed

    Cerretani, Colin F; Radke, C J

    2014-06-01

    Dry-eye disease, an increasingly prevalent ocular-surface disorder, significantly alters tear physiology. Understanding the basic physics of tear dynamics in healthy and dry eyes benefits both diagnosis and treatment of dry eye. We present a physiologically based model to describe tear dynamics during blinking. Tears are compartmentalized over the ocular surface; the blink cycle is divided into three repeating phases. Conservation laws quantify the tear volume and tear osmolarity of each compartment during each blink phase. Lacrimal-supply and tear-evaporation rates are varied to reveal the dependence of tear dynamics on dry-eye conditions, specifically tear osmolarity, tear volume, tear-turnover rate (TTR), and osmotic water flow. Predicted periodic-steady tear-meniscus osmolarity is 309 and 321 mOsM in normal and dry eyes, respectively. Tear osmolarity, volume, and TTR all match available clinical measurements. Osmotic water flow through the cornea and conjunctiva contributes 10% and 50% of the total tear supply in healthy and dry-eye conditions, respectively. TTR in aqueous-deficient dry eye (ADDE) is only half that in evaporative dry eye (EDE). The compartmental periodic-steady tear-dynamics model accurately predicts tear behavior in normal and dry eyes. Inclusion of osmotic water flow is crucial to match measured tear osmolarity. Tear-dynamics predictions corroborate the use of TTR as a clinical discriminator between ADDE and EDE. The proposed model is readily extended to predict the dynamics of aqueous solutes such as drugs or fluorescent tags.

  16. Do We Know the Actual Magnetopause Position for Typical Solar Wind Conditions?

    NASA Technical Reports Server (NTRS)

    Samsonov, A. A.; Gordeev, E.; Tsyganenko, N. A.; Safrankova, J.; Nemecek, Z.; Simunek, J.; Sibeck, D. G.; Toth, G.; Merkin, V. G.; Raeder, J.

    2016-01-01

    We compare predicted magnetopause positions at the subsolar point and four reference points in the terminator plane obtained from several empirical and numerical MHD (magnetohydrodynamic) models. Empirical models using various sets of magnetopause crossings and making different assumptions about the magnetopause shape predict significantly different magnetopause positions, with a scatter greater than 1 Earth radius (R_E) even at the subsolar point. Axisymmetric magnetopause models cannot reproduce the cusp indentations or the changes related to the dipole tilt effect, and most of them predict the magnetopause closer to the Earth than non-axisymmetric models for typical solar wind conditions and zero tilt angle. Predictions of two global non-axisymmetric models do not match each other, and the models need additional verification. MHD models often predict the magnetopause closer to the Earth than the non-axisymmetric empirical models, but the predictions of MHD simulations may need corrections for the ring current effect and the decreases of solar wind pressure that occur in the foreshock. Comparing MHD models in which the ring current magnetic field is taken into account with the empirical Lin et al. model, we find that the differences in the reference point positions predicted by these models are relatively small for B_z = 0 (where B_z is the north-south component of the interplanetary magnetic field). Therefore, we assume that these predictions indicate the actual magnetopause position, but future investigations are still needed.

  17. Accommodation and age-dependent eye model based on in vivo measurements.

    PubMed

    Zapata-Díaz, Juan F; Radhakrishnan, Hema; Charman, W Neil; López-Gil, Norberto

    2018-03-21

    To develop a flexible model of the average eye that incorporates changes with age and accommodation in all optical parameters, including entrance pupil diameter, under photopic, natural, environmental conditions. We collated retrospective in vivo measurements of all optical parameters, including entrance pupil diameter. Ray-tracing was used to calculate the wavefront aberrations of the eye model as a function of age, stimulus vergence and pupil diameter. These aberrations were used to calculate objective refraction using paraxial curvature matching. This was also done for several stimulus positions to calculate the accommodation response/stimulus curve. The model predicts a hyperopic change in distance refraction as the eye ages (+0.22 D every 10 years) between 20 and 65 years. The slope of the accommodation response/stimulus curve was 0.72 for a 25-year-old subject, with little change between 20 and 45 years. A trend to a more negative value of primary spherical aberration as the eye accommodates is predicted for all ages (20-50 years). When accommodation is relaxed, a slight increase in primary spherical aberration (0.008 μm every 10 years) between 20 and 65 years is predicted, for an age-dependent entrance pupil diameter ranging between 3.58 mm (20 years) and 3.05 mm (65 years). Results match reasonably well with studies performed in real eyes, except that spherical aberration is systematically slightly negative compared with the practical data. The proposed eye model is able to predict changes in objective refraction and accommodation response. It has the potential to be a useful design and testing tool for devices (e.g. intraocular lenses or contact lenses) designed to correct the eye's optical errors. Copyright © 2018 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  18. Prediction of matching condition for a microstrip subsystem using artificial neural network and adaptive neuro-fuzzy inference system

    NASA Astrophysics Data System (ADS)

    Salehi, Mohammad Reza; Noori, Leila; Abiri, Ebrahim

    2016-11-01

    In this paper, a subsystem consisting of a microstrip bandpass filter and a microstrip low noise amplifier (LNA) is designed for WLAN applications. The proposed filter has a small implementation area (49 mm2), small insertion loss (0.08 dB) and a wide fractional bandwidth (FBW) (61%). To design the proposed LNA, compact microstrip cells, a field-effect transistor, and only a lumped capacitor are used. It has a low supply voltage and a low return loss (-40 dB) at the operating frequency. The matching condition of the proposed subsystem is predicted using subsystem analysis, an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). To design the proposed filter, the transmission matrix of the proposed resonator is obtained and analysed. The performance of the proposed ANN and ANFIS models is tested against the numerical data using four performance measures, namely the correlation coefficient (CC), the mean absolute error (MAE), the average percentage error (APE) and the root mean square error (RMSE). The obtained results show that these models are in good agreement with the numerical data, and a small error between the predicted values and the numerical solution is obtained.
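
    The four performance measures used to score the ANN and ANFIS fits can be computed directly. One detail is an assumption: the abstract does not define APE, so it is taken below as the mean absolute percentage error:

```python
import math

def metrics(y_true, y_pred):
    """CC, MAE, APE and RMSE between reference values and predictions.
    APE is assumed to mean the mean absolute percentage error."""
    n = len(y_true)
    err = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in err) / n
    rmse = math.sqrt(sum(e * e for e in err) / n)
    ape = 100.0 * sum(abs(e) / abs(t) for e, t in zip(err, y_true)) / n
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    cc = cov / math.sqrt(sum((t - mt) ** 2 for t in y_true)
                         * sum((p - mp) ** 2 for p in y_pred))
    return {"CC": cc, "MAE": mae, "APE": ape, "RMSE": rmse}

# Hypothetical reference values vs. model predictions.
m = metrics([1.0, 2.0, 4.0], [1.1, 1.9, 4.2])
print(round(m["CC"], 3), round(m["RMSE"], 3))
```

    A CC near 1 together with small MAE, APE and RMSE is what the abstract means by "good agreement with the numerical data".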

  19. Use of artificial intelligence as an innovative donor-recipient matching model for liver transplantation: results from a multicenter Spanish study.

    PubMed

    Briceño, Javier; Cruz-Ramírez, Manuel; Prieto, Martín; Navasa, Miguel; Ortiz de Urbina, Jorge; Orti, Rafael; Gómez-Bravo, Miguel-Ángel; Otero, Alejandra; Varo, Evaristo; Tomé, Santiago; Clemente, Gerardo; Bañares, Rafael; Bárcena, Rafael; Cuervas-Mons, Valentín; Solórzano, Guillermo; Vinaixa, Carmen; Rubín, Angel; Colmenero, Jordi; Valdivieso, Andrés; Ciria, Rubén; Hervás-Martínez, César; de la Mata, Manuel

    2014-11-01

    There is an increasing discrepancy between the number of potential liver graft recipients and the number of organs available. Organ allocation should follow the concept of benefit of survival, avoiding human-innate subjectivity. The aim of this study is to use artificial neural networks (ANNs) for donor-recipient (D-R) matching in liver transplantation (LT) and to compare their accuracy with validated scores (MELD, D-MELD, DRI, P-SOFT, SOFT, and BAR) of graft survival. 64 donor and recipient variables from a set of 1003 LTs from a multicenter study including 11 Spanish centres were included. For each D-R pair, common statistics (simple and multiple regression models) and ANN formulae for two non-complementary probability models of 3-month graft survival and loss were calculated: a positive-survival (NN-CCR) and a negative-loss (NN-MS) model. The NN models were obtained by using the Neural Net Evolutionary Programming (NNEP) algorithm. Additionally, receiver-operating-characteristic (ROC) curves were used to validate the ANNs against the other scores. Optimal results for the NN-CCR and NN-MS models were obtained, with the best performance in predicting the probability of graft survival (90.79%) and loss (71.42%) for each D-R pair, significantly improving results from multiple regressions. ROC curves for 3-month graft-survival and -loss predictions were significantly more accurate for ANN than for other scores in both NN-CCR (AUROC-ANN=0.80 vs. -MELD=0.50; -D-MELD=0.54; -P-SOFT=0.54; -SOFT=0.55; -BAR=0.67 and -DRI=0.42) and NN-MS (AUROC-ANN=0.82 vs. -MELD=0.41; -D-MELD=0.47; -P-SOFT=0.43; -SOFT=0.57, -BAR=0.61 and -DRI=0.48). ANNs may be considered a powerful decision-making technology for this dataset, optimizing the principles of justice, efficiency and equity. This may be a useful tool for predicting the 3-month outcome and a potential research area for future D-R matching models. Copyright © 2014 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
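
    The AUROC values used to rank the ANN against MELD and the other scores can be computed with the rank-sum (Mann-Whitney) formulation; the labels and scores below are hypothetical toy data, not the study's:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney formulation: the
    probability that a randomly chosen positive case outscores a randomly
    chosen negative case (ties count one half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = graft survived at 3 months, scores from some model.
labels = [1, 1, 1, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2]
print(auroc(labels, scores))  # 5/6 ~ 0.833
```

    An AUROC of 0.5 corresponds to chance-level discrimination, which is why scores such as MELD (0.50) or DRI (0.42) compare so poorly with the ANN's 0.80-0.82 in this dataset.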

  20. Climate matching as a tool for predicting potential North American spread of Brown Treesnakes

    USGS Publications Warehouse

    Rodda, Gordon H.; Reed, Robert N.; Jarnevich, Catherine S.; Witmer, G.W.; Pitt, W. C.; Fagerstone, K.A.

    2007-01-01

    Climate matching identifies extralimital destinations that could be colonized by a potential invasive species on the basis of similarity to climates found in the species’ native range. Climate is a proxy for the factors that determine whether a population will reproduce enough to offset mortality. Previous climate matching models (e.g., Genetic Algorithm for Rule-set Prediction [GARP]) for brown treesnakes (Boiga irregularis) were unsatisfactory, perhaps because the models failed to allow different combinations of climate attributes to influence a species’ range limits in different parts of the range. Therefore, we explored the climate space described by bivariate parameters of native range temperature and rainfall, allowing up to two months of aestivation in the warmer portions of the range, or four months of hibernation in temperate climes. We found colonization area to be minimally sensitive to assumptions regarding hibernation temperature thresholds. Although brown treesnakes appear to be limited by dry weather in the interior of Australia, aridity rarely limits potential distribution in most of the world. Potential colonization area in North America is limited primarily by cold. Climatically suitable portions of the United States (US) mainland include the Central Valley of California, mesic patches in the Southwest, and the southeastern coastal plain from Texas to Virginia.
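
    The bivariate climate-space idea can be caricatured with a rectangular temperature-rainfall envelope plus a tolerance for hibernation/aestivation months; the envelope bounds and monthly values below are invented for illustration, not Boiga irregularis data:

```python
def within_envelope(point, envelope):
    """Is a (temperature, rainfall) pair inside the bivariate box spanned
    by native-range observations? A rectangular envelope is a crude
    stand-in for the paper's climate-space analysis."""
    t, r = point
    (t_min, t_max), (r_min, r_max) = envelope
    return t_min <= t <= t_max and r_min <= r <= r_max

def site_is_suitable(monthly_climate, envelope, tolerated_months=4):
    """A candidate site matches if at most `tolerated_months` of its
    monthly (temp C, rain mm) values fall outside the native envelope,
    mimicking the allowance for hibernation or aestivation."""
    misses = sum(1 for m in monthly_climate if not within_envelope(m, envelope))
    return misses <= tolerated_months

native_envelope = ((10.0, 32.0), (20.0, 400.0))   # illustrative bounds
site = [(4, 60), (6, 55), (11, 80), (16, 90), (21, 60), (26, 30),
        (29, 25), (28, 40), (24, 70), (18, 85), (11, 75), (6, 65)]
print(site_is_suitable(site, native_envelope))  # True: three cold months miss
```

    A cold-limited site like this one passes only because of the hibernation allowance, echoing the finding that potential North American colonization area is limited primarily by cold.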

  1. A Simple Artificial Life Model Explains Irrational Behavior in Human Decision-Making

    PubMed Central

    Feher da Silva, Carolina; Baldo, Marcus Vinícius Chrysóstomo

    2012-01-01

    Although praised for their rationality, humans often make poor decisions, even in simple situations. In the repeated binary choice experiment, an individual has to choose repeatedly between the same two alternatives, where a reward is assigned to one of them with fixed probability. The optimal strategy is to perseverate with choosing the alternative with the best expected return. Whereas many species perseverate, humans tend to match the frequencies of their choices to the frequencies of the alternatives, a sub-optimal strategy known as probability matching. Our goal was to find the primary cognitive constraints under which a set of simple evolutionary rules can lead to such contrasting behaviors. We simulated the evolution of artificial populations, wherein the fitness of each animat (artificial animal) depended on its ability to predict the next element of a sequence made up of a repeating binary string of varying size. When the string was short relative to the animats’ neural capacity, they could learn it and correctly predict the next element of the sequence. When it was long, they could not learn it, turning to the next best option: to perseverate. Animats from the last generation then performed the task of predicting the next element of a non-periodical binary sequence. We found that, whereas animats with smaller neural capacity kept perseverating with the best alternative as before, animats with larger neural capacity, which had previously been able to learn the pattern of repeating strings, adopted probability matching, being outperformed by the perseverating animats. Our results demonstrate how the ability to make predictions in an environment endowed with regular patterns may lead to probability matching under less structured conditions. They point to probability matching as a likely by-product of adaptive cognitive strategies that were crucial in human evolution, but may lead to sub-optimal performances in other environments. PMID:22563454

  2. A simple artificial life model explains irrational behavior in human decision-making.

    PubMed

    Feher da Silva, Carolina; Baldo, Marcus Vinícius Chrysóstomo

    2012-01-01

    Although praised for their rationality, humans often make poor decisions, even in simple situations. In the repeated binary choice experiment, an individual has to choose repeatedly between the same two alternatives, where a reward is assigned to one of them with fixed probability. The optimal strategy is to perseverate with choosing the alternative with the best expected return. Whereas many species perseverate, humans tend to match the frequencies of their choices to the frequencies of the alternatives, a sub-optimal strategy known as probability matching. Our goal was to find the primary cognitive constraints under which a set of simple evolutionary rules can lead to such contrasting behaviors. We simulated the evolution of artificial populations, wherein the fitness of each animat (artificial animal) depended on its ability to predict the next element of a sequence made up of a repeating binary string of varying size. When the string was short relative to the animats' neural capacity, they could learn it and correctly predict the next element of the sequence. When it was long, they could not learn it, turning to the next best option: to perseverate. Animats from the last generation then performed the task of predicting the next element of a non-periodical binary sequence. We found that, whereas animats with smaller neural capacity kept perseverating with the best alternative as before, animats with larger neural capacity, which had previously been able to learn the pattern of repeating strings, adopted probability matching, being outperformed by the perseverating animats. Our results demonstrate how the ability to make predictions in an environment endowed with regular patterns may lead to probability matching under less structured conditions. They point to probability matching as a likely by-product of adaptive cognitive strategies that were crucial in human evolution, but may lead to sub-optimal performances in other environments.
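
    Why probability matching underperforms perseveration in this task is a two-line calculation: if one alternative is rewarded with probability p > 0.5, matching earns p² + (1 − p)² per trial while always choosing the better option earns p:

```python
def expected_accuracy_matching(p):
    """Probability matching: choose the p-rewarded option with
    probability p, so expected per-trial accuracy is p**2 + (1 - p)**2."""
    return p * p + (1 - p) * (1 - p)

def expected_accuracy_maximizing(p):
    """Perseveration with the better option: accuracy max(p, 1 - p)."""
    return max(p, 1 - p)

p = 0.7
print(round(expected_accuracy_matching(p), 2))  # 0.58
print(expected_accuracy_maximizing(p))          # 0.7
```

    For any p other than 0, 0.5 or 1, the maximizing strategy wins, which is why the perseverating animats outperform the probability-matching ones on the non-periodic sequence.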

  3. Experimental Evaluation of Tuned Chamber Core Panels for Payload Fairing Noise Control

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Allen, Albert R.; Herlan, Jonathan W.; Rosenthal, Bruce N.

    2015-01-01

    Analytical models have been developed to predict the sound absorption and sound transmission loss of tuned chamber core panels. The panels are constructed of two facesheets sandwiching a corrugated core. When ports are introduced through one facesheet, the long chambers within the core can be used as an array of low-frequency acoustic resonators. To evaluate the accuracy of the analytical models, absorption and sound transmission loss tests were performed on flat panels. Measurements show that the acoustic resonators embedded in the panels improve both the absorption and transmission loss of the sandwich structure at frequencies near the natural frequency of the resonators. Analytical predictions for absorption closely match measured data. However, transmission loss predictions miss important features observed in the measurements. This suggests that higher-fidelity analytical or numerical models will be needed to supplement transmission loss predictions in the future.
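
    As a rough guide to where such chamber resonators act, a long chamber ported at one end behaves approximately as a quarter-wave tube. The sketch below is that textbook approximation only; it ignores end corrections and port geometry, which the paper's analytical models would need to capture:

```python
def quarter_wave_frequency(length_m, c=343.0):
    """Fundamental of a tube open at the ported end and closed at the
    other: f = c / (4 L), with c the speed of sound in air (m/s).
    End corrections and port impedance are neglected in this sketch."""
    return c / (4.0 * length_m)

# Longer chambers tune the absorption/transmission-loss benefit lower.
for L in (0.2, 0.4, 0.8):
    print(round(quarter_wave_frequency(L), 1))
```

    This is consistent with the abstract's observation that the benefit appears near the natural frequency of the embedded resonators.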

  4. Supply-demand balance in outward-directed networks and Kleiber's law

    PubMed Central

    Painter, Page R

    2005-01-01

    Background: Recent theories have attempted to derive the value of the exponent α in the allometric formula for scaling of basal metabolic rate from the properties of distribution network models for arteries and capillaries. It has recently been stated that a basic theorem relating the sum of nutrient currents to the specific nutrient uptake rate, together with a relationship claimed to be required in order to match nutrient supply to nutrient demand in 3-dimensional outward-directed networks, leads to Kleiber's law (b = 3/4). Methods: The validity of the supply-demand matching principle and the assumptions required to prove the basic theorem are assessed. The supply-demand principle is evaluated by examining the supply term and the demand term in outward-directed lattice models of nutrient and water distribution systems and by applying the principle to fractal-like models of mammalian arterial systems. Results: Application of the supply-demand principle to bifurcating fractal-like networks that are outward-directed does not predict 3/4-power scaling, and evaluation of water distribution system models shows that the matching principle does not match supply to demand in such systems. Furthermore, proof of the basic theorem is shown to require that the covariance of nutrient uptake and current path length is 0, an assumption unlikely to be true in mammalian arterial systems. Conclusion: The supply-demand matching principle does not lead to a satisfactory explanation for the approximately 3/4-power scaling of mammalian basal metabolic rate. PMID:16283939

  5. Supply-demand balance in outward-directed networks and Kleiber's law.

    PubMed

    Painter, Page R

    2005-11-10

    Recent theories have attempted to derive the value of the exponent alpha in the allometric formula for scaling of basal metabolic rate from the properties of distribution network models for arteries and capillaries. It has recently been stated that a basic theorem relating the sum of nutrient currents to the specific nutrient uptake rate, together with a relationship claimed to be required in order to match nutrient supply to nutrient demand in 3-dimensional outward-directed networks, leads to Kleiber's law (b = 3/4). The validity of the supply-demand matching principle and the assumptions required to prove the basic theorem are assessed. The supply-demand principle is evaluated by examining the supply term and the demand term in outward-directed lattice models of nutrient and water distribution systems and by applying the principle to fractal-like models of mammalian arterial systems. Application of the supply-demand principle to bifurcating fractal-like networks that are outward-directed does not predict 3/4-power scaling, and evaluation of water distribution system models shows that the matching principle does not match supply to demand in such systems. Furthermore, proof of the basic theorem is shown to require that the covariance of nutrient uptake and current path length is 0, an assumption unlikely to be true in mammalian arterial systems. The supply-demand matching principle does not lead to a satisfactory explanation for the approximately 3/4-power scaling of mammalian basal metabolic rate.
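
    For context, Kleiber's law itself is the allometric relation B = b₀M^(3/4). The contested quantity in the abstract is the exponent; the prefactor below is illustrative only:

```python
def kleiber_bmr(mass_kg, b0=3.4, exponent=0.75):
    """Allometric scaling B = b0 * M**exponent. The prefactor b0 is an
    illustrative constant; the debate concerns the exponent."""
    return b0 * mass_kg ** exponent

# Doubling body mass multiplies predicted BMR by 2**0.75, not by 2:
# larger animals expend less energy per unit mass.
print(round(kleiber_bmr(2.0) / kleiber_bmr(1.0), 3))  # 1.682
```

    The network theories criticized in the abstract attempt to derive that 3/4 exponent from the geometry of nutrient distribution systems.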

  6. Dynamic Response-by-Response Models of Matching Behavior in Rhesus Monkeys

    PubMed Central

    Lau, Brian; Glimcher, Paul W

    2005-01-01

    We studied the choice behavior of 2 monkeys in a discrete-trial task with reinforcement contingencies similar to those Herrnstein (1961) used when he described the matching law. In each session, the monkeys experienced blocks of discrete trials at different relative-reinforcer frequencies or magnitudes with unsignalled transitions between the blocks. Steady-state data following adjustment to each transition were well characterized by the generalized matching law; response ratios undermatched reinforcer frequency ratios but matched reinforcer magnitude ratios. We modelled response-by-response behavior with linear models that used past reinforcers as well as past choices to predict the monkeys' choices on each trial. We found that more recently obtained reinforcers more strongly influenced choice behavior. Perhaps surprisingly, we also found that the monkeys' actions were influenced by the pattern of their own past choices. It was necessary to incorporate both past reinforcers and past choices in order to accurately capture steady-state behavior as well as the fluctuations during block transitions and the response-by-response patterns of behavior. Our results suggest that simple reinforcement learning models must account for the effects of past choices to accurately characterize behavior in this task, and that models with these properties provide a conceptual tool for studying how both past reinforcers and past choices are integrated by the neural systems that generate behavior. PMID:16596980
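
    The generalized matching law referred to above is usually fit in log-ratio form, log(B1/B2) = a·log(R1/R2) + log c, where a < 1 is undermatching and c captures bias. A minimal least-squares fit on synthetic data:

```python
import math

def fit_generalized_matching(reinf_ratios, response_ratios):
    """Least-squares fit of the generalized matching law in log form:
    log(B1/B2) = a * log(R1/R2) + log c.
    a < 1 indicates undermatching; a = 1 is strict matching."""
    xs = [math.log(r) for r in reinf_ratios]
    ys = [math.log(b) for b in response_ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    log_c = my - a * mx
    return a, math.exp(log_c)

# Synthetic undermatching data: sensitivity a = 0.8, no bias (c = 1).
reinf = [0.25, 0.5, 1.0, 2.0, 4.0]
resp = [r ** 0.8 for r in reinf]
a, c = fit_generalized_matching(reinf, resp)
print(round(a, 3), round(c, 3))  # 0.8 1.0
```

    A fitted a below 1 for reinforcer-frequency ratios, alongside a near 1 for magnitude ratios, is exactly the steady-state pattern the abstract reports for the two monkeys.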

  7. An infrared sky model based on the IRAS point source data

    NASA Technical Reports Server (NTRS)

    Cohen, Martin; Walker, Russell; Wainscoat, Richard; Volk, Kevin; Walker, Helen; Schwartz, Deborah

    1990-01-01

    A detailed model for the infrared point source sky is presented that comprises geometrically and physically realistic representations of the galactic disk, bulge, spheroid, spiral arms, and molecular ring, together with realistic absolute magnitudes. The model was guided by a parallel Monte Carlo simulation of the Galaxy. The content of the galactic source table constitutes an excellent match to the 12 micrometer luminosity function in the simulation, as well as the luminosity functions at V and K. Models are given for predicting the density of asteroids to be observed, and the diffuse background radiance of the Zodiacal cloud. The model can be used to predict the character of the point source sky expected for observations from future infrared space experiments.

  8. Study of indoor radon distribution using measurements and CFD modeling.

    PubMed

    Chauhan, Neetika; Chauhan, R P; Joshi, M; Agarwal, T K; Aggarwal, Praveen; Sahoo, B K

    2014-10-01

    Measurement and/or prediction of indoor radon ((222)Rn) concentration is important due to the impact of radon on indoor air quality and the consequent inhalation hazard. In recent times, computational fluid dynamics (CFD) based modeling has become a cost-effective replacement for experimental methods for predicting and visualizing indoor pollutant distribution. The aim of this study is to implement CFD-based modeling for studying indoor radon gas distribution. This study focuses on comparing the experimentally measured and CFD-predicted spatial distributions of radon concentration for a model test room. The key simulation inputs, viz. the radon exhalation rate and ventilation rate, were measured as part of this study. Validation experiments were performed by measuring radon concentration at different locations in the test room using active (continuous radon monitor) and passive (pin-hole dosimeter) techniques. Model predictions were found to match the measurement results reasonably well. The validated model can be used to understand and study the factors affecting indoor radon distribution in more realistic indoor environments. Copyright © 2014 Elsevier Ltd. All rights reserved.
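    A full CFD model is beyond a short example, but the zero-dimensional well-mixed mass balance commonly used to sanity-check such simulations can be sketched as follows; the input values in the test are hypothetical, not the paper's measured parameters.

```python
import math

def steady_state_radon(exhalation_rate, surface_area, volume, ventilation_rate):
    """Well-mixed box estimate of indoor radon concentration (Bq/m^3).

    exhalation_rate: radon flux from walls/floor, Bq m^-2 h^-1
    surface_area:    exhaling surface area, m^2
    volume:          room volume, m^3
    ventilation_rate: air changes per hour, h^-1
    """
    lam_rn = math.log(2) / (3.82 * 24)   # Rn-222 decay constant, h^-1
    source = exhalation_rate * surface_area / volume  # Bq m^-3 h^-1
    return source / (ventilation_rate + lam_rn)
```

    The CFD model resolves the spatial distribution that this lumped estimate averages away, which is exactly what the measured-versus-predicted comparison in the study examines.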

  9. The human placental perfusion model: a systematic review and development of a model to predict in vivo transfer of therapeutic drugs.

    PubMed

    Hutson, J R; Garcia-Bournissen, F; Davis, A; Koren, G

    2011-07-01

    Dual perfusion of a single placental lobule is the only experimental model to study human placental transfer of substances in organized placental tissue. To date, there has not been any attempt at a systematic evaluation of this model. The aim of this study was to systematically evaluate the perfusion model in predicting placental drug transfer and to develop a pharmacokinetic model to account for nonplacental pharmacokinetic parameters in the perfusion results. In general, the fetal-to-maternal drug concentration ratios matched well between placental perfusion experiments and in vivo samples taken at the time of delivery of the infant. After modeling for differences in maternal and fetal/neonatal protein binding and blood pH, the perfusion results were able to accurately predict in vivo transfer at steady state (R² = 0.85, P < 0.0001). Placental perfusion experiments can be used to predict placental drug transfer when adjusting for extra parameters and can be useful for assessing drug therapy risks and benefits in pregnancy.

  10. Use of Inverse-Modeling Methods to Improve Ground-Water-Model Calibration and Evaluate Model-Prediction Uncertainty, Camp Edwards, Cape Cod, Massachusetts

    USGS Publications Warehouse

    Walter, Donald A.; LeBlanc, Denis R.

    2008-01-01

    Historical weapons testing and disposal activities at Camp Edwards, which is located on the Massachusetts Military Reservation, western Cape Cod, have resulted in the release of contaminants into an underlying sand and gravel aquifer that is the sole source of potable water to surrounding communities. Ground-water models have been used at the site to simulate advective transport in the aquifer in support of field investigations. Reasonable models developed by different groups and calibrated by trial and error often yield different predictions of advective transport, and the predictions lack quantitative measures of uncertainty. A recently (2004) developed regional model of western Cape Cod, modified to include the sensitivity and parameter-estimation capabilities of MODFLOW-2000, was used in this report to evaluate the utility of inverse (statistical) methods to (1) improve model calibration and (2) assess model-prediction uncertainty. Simulated heads and flows were most sensitive to recharge and to the horizontal hydraulic conductivity of the Buzzards Bay and Sandwich Moraines and the Buzzards Bay and northern parts of the Mashpee outwash plains. Conversely, simulated heads and flows were much less sensitive to vertical hydraulic conductivity. Parameter estimation (inverse calibration) improved the match to observed heads and flows; the absolute mean residual for heads improved by 0.32 feet and the absolute mean residual for streamflows improved by about 0.2 cubic feet per second. Advective-transport predictions in Camp Edwards generally were most sensitive to the parameters with the highest precision (lowest coefficients of variation), indicating that the numerical model is adequate for evaluating prediction uncertainties in and around Camp Edwards. 
The incorporation of an advective-transport observation, representing the leading edge of a contaminant plume that had been difficult to match by using trial-and-error calibration, improved the match between an observed and simulated plume path; however, a modified representation of local geology was needed to simultaneously maintain a reasonable calibration to heads and flows and to the plume path. Advective-transport uncertainties were expressed as about 68-, 95-, and 99-percent confidence intervals on three dimensional simulated particle positions. The confidence intervals can be graphically represented as ellipses around individual particle positions in the X-Y (geographic) plane and in the X-Z or Y-Z (vertical) planes. The merging of individual ellipses allows uncertainties on forward particle tracks to be displayed in map or cross-sectional view as a cone of uncertainty around a simulated particle path; uncertainties on reverse particle-track endpoints - representing simulated recharge locations - can be geographically displayed as areas at the water table around the discrete particle endpoints. This information gives decisionmakers insight into the level of confidence they can have in particle-tracking results and can assist them in the efficient use of available field resources.

  11. Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment.

    PubMed

    Berkes, Pietro; Orbán, Gergo; Lengyel, Máté; Fiser, József

    2011-01-07

    The brain maintains internal models of its environment to interpret sensory inputs and to prepare actions. Although behavioral studies have demonstrated that these internal models are optimally adapted to the statistics of the environment, the neural underpinning of this adaptation is unknown. Using a Bayesian model of sensory cortical processing, we related stimulus-evoked and spontaneous neural activities to inferences and prior expectations in an internal model and predicted that they should match if the model is statistically optimal. To test this prediction, we analyzed visual cortical activity of awake ferrets during development. Similarity between spontaneous and evoked activities increased with age and was specific to responses evoked by natural scenes. This demonstrates the progressive adaptation of internal models to the statistics of natural stimuli at the neural level.
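    The match between spontaneous and evoked activity distributions can be quantified with a divergence measure over activity patterns; the sketch below assumes the distributions have already been binned into pattern frequencies, and the specific estimator is illustrative.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D(p || q) between two discrete distributions over
    neural activity patterns; eps regularizes empty bins."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))
```

    A decreasing divergence between the spontaneous and natural-scene-evoked pattern distributions with age would correspond to the progressive adaptation the study reports.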

  12. Detecting failure of climate predictions

    USGS Publications Warehouse

    Runge, Michael C.; Stroeve, Julienne C.; Barrett, Andrew P.; McDonald-Madden, Eve

    2016-01-01

    The practical consequences of climate change challenge society to formulate responses that are more suited to achieving long-term objectives, even if those responses have to be made in the face of uncertainty1, 2. Such a decision-analytic focus uses the products of climate science as probabilistic predictions about the effects of management policies3. Here we present methods to detect when climate predictions are failing to capture the system dynamics. For a single model, we measure goodness of fit based on the empirical distribution function, and define failure when the distribution of observed values significantly diverges from the modelled distribution. For a set of models, the same statistic can be used to provide relative weights for the individual models, and we define failure when there is no linear weighting of the ensemble models that produces a satisfactory match to the observations. Early detection of failure of a set of predictions is important for improving model predictions and the decisions based on them. We show that these methods would have detected a range shift in northern pintail 20 years before it was actually discovered, and are increasingly giving more weight to those climate models that forecast a September ice-free Arctic by 2055.
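    The single-model test can be sketched with a one-sample Kolmogorov-Smirnov-style statistic on the empirical distribution function; the exact statistic and failure threshold used in the paper may differ, so this is only an illustration of the idea.

```python
import numpy as np

def ks_statistic(observations, model_cdf):
    """Maximum distance between the empirical distribution function of
    the observations and the model's CDF (one-sample KS statistic).
    Checks the EDF just before and just after each data point."""
    x = np.sort(np.asarray(observations, dtype=float))
    n = len(x)
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    f = model_cdf(x)
    return max(np.max(np.abs(ecdf_hi - f)), np.max(np.abs(ecdf_lo - f)))
```

    "Failure" would then be declared when this distance exceeds a critical value, i.e. when observed values significantly diverge from the modelled distribution.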

  13. When and why do ideal partner preferences affect the process of initiating and maintaining romantic relationships?

    PubMed

    Eastwick, Paul W; Finkel, Eli J; Eagly, Alice H

    2011-11-01

    Three studies explored how the traits that people ideally desire in a romantic partner, or ideal partner preferences, intersect with the process of romantic relationship initiation and maintenance. Two attraction experiments in the laboratory found that, when participants evaluated a potential romantic partner's written profile, they expressed more romantic interest in a partner whose traits were manipulated to match (vs. mismatch) their idiosyncratic ideals. However, after a live interaction with the partner, the match vs. mismatch manipulation was no longer associated with romantic interest. This pattern appeared to have emerged because participants reinterpreted the meaning of the traits as they applied to the partner, a context effect predicted by classic models of person perception (S. E. Asch, 1946). Finally, a longitudinal study of middle-aged adults demonstrated that participants evaluated a current romantic partner (but not a partner who was merely desired) more positively to the extent that the partner matched their overall pattern of ideals across several traits; the match in level of ideals (i.e., high vs. low ratings) was not relevant to participants' evaluations. In general, the match between ideals and a partner's traits may predict relational outcomes when participants are learning about a partner in the abstract and when they are actually in a relationship with the partner, but not when considering potential dating partners they have met in person.

  14. Microfabricated 1-3 composite acoustic matching layers for 15 MHz transducers.

    PubMed

    Manh, Tung; Jensen, Geir Uri; Johansen, Tonni F; Hoff, Lars

    2013-08-01

    Medical ultrasound transducers require matching layers to couple energy from the piezoelectric ceramic into the tissue. Composites of type 0-3 are often used to obtain the desired acoustic impedances, but they introduce challenges at high frequencies, i.e. non-uniformity, attenuation, and dispersion. This paper presents novel acoustic matching layers made as silicon-polymer 1-3 composites, fabricated by deep reactive ion etch (DRIE). This fabrication method is well-established for high-volume production in the microtechnology industry. First estimates for the acoustic properties were found from the iso-strain theory, while the Finite Element Method (FEM) was employed for more accurate modeling. The composites were used as single matching layers in 15 MHz ultrasound transducers. Acoustic properties of the composite were estimated by fitting the electrical impedance measurements to the Mason model. Five composites were fabricated. All had period 16 μm, while the silicon width was varied to cover silicon volume fractions between 0.17 and 0.28. Silicon-on-Insulator (SOI) wafers were used to get a controlled etch stop against the buried oxide layer at a defined depth, resulting in composites with thickness 83 μm. A slight tapering of the silicon side walls was observed; their widths were 0.9 μm smaller at the bottom than at the top, corresponding to a tapering angle of 0.3°. Acoustic parameters estimated from electrical impedance measurements were lower than predicted from the iso-strain model, but fitted within 5% to FEM simulations. The deviation was explained by dispersion caused by the finite dimensions of the composite and by the tapered walls. Pulse-echo measurements on a transducer with silicon volume fraction 0.17 showed a two-way -6 dB relative bandwidth of 50%. The pulse-echo measurements agreed with predictions from the Mason model when using material parameter values estimated from electrical impedance measurements. 
The results show the feasibility of the fabrication method and the theoretical description. A next step would be to include these composites as one of several layers in an acoustic matching layer stack. Copyright © 2013 Elsevier B.V. All rights reserved.
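    The iso-strain first estimate mentioned above can be sketched as a Voigt average along the silicon pillars; the silicon and polymer property values below are generic textbook-style assumptions, not the paper's measured values.

```python
import math

def iso_strain_composite(v_si, c_si=166e9, rho_si=2330.0,
                         c_p=7.4e9, rho_p=1100.0):
    """Iso-strain (Voigt) estimate for a 1-3 silicon-polymer composite:
    stiffness and density are volume-fraction averages; returns the
    longitudinal velocity (m/s) and acoustic impedance (MRayl).
    Default stiffness/density values are illustrative assumptions."""
    c = v_si * c_si + (1.0 - v_si) * c_p          # effective stiffness, Pa
    rho = v_si * rho_si + (1.0 - v_si) * rho_p    # effective density, kg/m^3
    v_l = math.sqrt(c / rho)
    z = rho * v_l / 1e6                            # impedance in MRayl
    return v_l, z
```

    Sweeping the silicon volume fraction over the fabricated range (0.17 to 0.28) shows how the impedance is tuned toward the desired matching value; the abstract notes the measured values fell below this iso-strain bound, consistent with dispersion and wall tapering.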

  15. Equivalent circuit of radio frequency-plasma with the transformer model

    NASA Astrophysics Data System (ADS)

    Nishida, K.; Mochizuki, S.; Ohta, M.; Yasumoto, M.; Lettry, J.; Mattei, S.; Hatayama, A.

    2014-02-01

    The LINAC4 H- source is a radio frequency (RF) driven source. In the RF system, the load impedance, which includes the H- source, must be matched to that of the final amplifier. We model the RF plasma inside the H- source as circuit elements using a transformer model, so that the characteristics of the load impedance can be calculated. The modeling based on the transformer model has been shown to predict the resistance and inductance of the plasma well.
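    A common form of the transformer model treats the plasma as a lossy one-turn secondary reflected into the antenna circuit through the mutual inductance. The sketch below uses this generic form with made-up lumped parameters; the actual LINAC4 circuit values are not given in the abstract.

```python
import math

def load_impedance(freq, l_coil, m, r_plasma, l_plasma, r_coil=0.0):
    """Transformer-model input impedance of an inductively coupled
    plasma: the antenna (r_coil, l_coil) in series with the plasma
    secondary (r_plasma, l_plasma) reflected through mutual inductance m."""
    w = 2.0 * math.pi * freq
    z_reflected = (w * m) ** 2 / complex(r_plasma, w * l_plasma)
    return complex(r_coil, w * l_coil) + z_reflected
```

    The real part of this impedance is the plasma resistance seen by the matching network, which is what the matching circuit must transform to the amplifier's nominal load.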

  16. Validation of Groundwater Models: Meaningful or Meaningless?

    NASA Astrophysics Data System (ADS)

    Konikow, L. F.

    2003-12-01

    Although numerical simulation models are valuable tools for analyzing groundwater systems, their predictive accuracy is limited. People who apply groundwater flow or solute-transport models, as well as those who make decisions based on model results, naturally want assurance that a model is "valid." To many people, model validation implies some authentication of the truth or accuracy of the model. History matching is often presented as the basis for model validation. Although such model calibration is a necessary modeling step, it is simply insufficient for model validation. Because of parameter uncertainty and solution non-uniqueness, declarations of validation (or verification) of a model are not meaningful. Post-audits represent a useful means to assess the predictive accuracy of a site-specific model, but they require the existence of long-term monitoring data. Model testing may yield invalidation, but that is an opportunity to learn and to improve the conceptual and numerical models. Examples of post-audits and of the application of a solute-transport model to a radioactive waste disposal site illustrate deficiencies in model calibration, prediction, and validation.

  17. Dynamic Simulation and Static Matching for Action Prediction: Evidence from Body Part Priming

    ERIC Educational Resources Information Center

    Springer, Anne; Brandstadter, Simone; Prinz, Wolfgang

    2013-01-01

    Accurately predicting other people's actions may involve two processes: internal real-time simulation (dynamic updating) and matching recently perceived action images (static matching). Using a priming of body parts, this study aimed to differentiate the two processes. Specifically, participants played a motion-controlled video game with…

  18. Fast history matching of time-lapse seismic and production data for high resolution models

    NASA Astrophysics Data System (ADS)

    Jimenez Arismendi, Eduardo Antonio

    Integrated reservoir modeling has become an important part of day-to-day decision analysis in oil and gas management practices. A very attractive and promising technology is the use of time-lapse or 4D seismic as an essential component in subsurface modeling. Today, 4D seismic is enabling oil companies to optimize production and increase recovery through monitoring fluid movements throughout the reservoir. 4D seismic advances are also being driven by an increased need by the petroleum engineering community to become more quantitative and accurate in our ability to monitor reservoir processes. Qualitative interpretations of time-lapse anomalies are being replaced by quantitative inversions of 4D seismic data to produce accurate maps of fluid saturations, pore pressure, and temperature, among others. Of all the steps involved in this subsurface modeling process, the most demanding one is integrating the geologic model with dynamic field data, including 4D seismic when available. The validation of the geologic model with observed dynamic data is accomplished through a "history matching" (HM) process typically carried out with well-based measurements. Due to the low resolution of production data, the validation process is severely limited in its areal reservoir coverage, compromising the quality of the model and any subsequent predictive exercise. This research aims to provide a novel history matching approach that can use information from high-resolution seismic data to supplement the areally sparse production data. The proposed approach utilizes streamline-derived sensitivities as a means of relating the forward model performance with the prior geologic model. The essential ideas underlying this approach are similar to those used for high-frequency approximations in seismic wave propagation. In both cases, this leads to solutions that are defined along "streamlines" (fluid flow) or "rays" (seismic wave propagation). 
Synthetic and field data examples are used extensively to demonstrate the value and contribution of this work. Our results show that the problem of non-uniqueness in this complex history matching problem is greatly reduced when constraints in the form of saturation maps from spatially closely sampled seismic data are included. Furthermore, our methodology can be used to quickly identify discrepancies between static and dynamic modeling. Reducing this gap will ensure robust and reliable models, leading to accurate predictions and ultimately an optimum hydrocarbon extraction.

  19. Evaluation of Industry Standard Turbulence Models on an Axisymmetric Supersonic Compression Corner

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2015-01-01

    Reynolds-averaged Navier-Stokes computations of a shock-wave/boundary-layer interaction (SWBLI) created by a Mach 2.85 flow over an axisymmetric 30-degree compression corner were carried out. The objectives were to evaluate four turbulence models commonly used in industry for SWBLIs, and to evaluate the suitability of this test case for use in further turbulence model benchmarking. The Spalart-Allmaras model, Menter's Baseline and Shear Stress Transport models, and a low-Reynolds-number k-ε model were evaluated. Results indicate that the models do not accurately predict the separation location: the SST model predicts the separation onset too early, and the other models predict the onset too late. Overall, the Spalart-Allmaras model did the best job of matching the experimental data. However, there is significant room for improvement, most notably in the prediction of the turbulent shear stress. Density data showed that the simulations did not accurately predict the thermal boundary layer upstream of the SWBLI. The effects of turbulent Prandtl number and wall temperature were studied in an attempt to improve this prediction and to understand their effects on the interaction. The data showed that both parameters can significantly affect the separation size and location, but neither improved the agreement with the experiment. This case proved challenging to compute and should provide a good test for future turbulence modeling work.

  20. Learning receptive fields using predictive feedback.

    PubMed

    Jehee, Janneke F M; Rothkopf, Constantin; Beck, Jeffrey M; Ballard, Dana H

    2006-01-01

    Previously, it was suggested that feedback connections from higher- to lower-level areas carry predictions of lower-level neural activities, whereas feedforward connections carry the residual error between the predictions and the actual lower-level activities [Rao, R.P.N., Ballard, D.H., 1999. Nature Neuroscience 2, 79-87.]. A computational model implementing the hypothesis learned simple cell receptive fields when exposed to natural images. Here, we use predictive feedback to explain tuning properties in medial superior temporal area (MST). We implement the hypothesis using a new, biologically plausible, algorithm based on matching pursuit, which retains all the features of the previous implementation, including its ability to efficiently encode input. When presented with natural images, the model developed receptive field properties as found in primary visual cortex. In addition, when exposed to visual motion input resulting from movements through space, the model learned receptive field properties resembling those in MST. These results corroborate the idea that predictive feedback is a general principle used by the visual system to efficiently encode natural input.
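    The matching pursuit step at the core of the algorithm can be sketched generically; this is the textbook greedy procedure, not the paper's biologically plausible variant.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy matching pursuit: at each step pick the dictionary atom
    (unit-norm column) with the largest inner product with the residual,
    record its coefficient, and subtract its projection."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        products = dictionary.T @ residual
        k = int(np.argmax(np.abs(products)))
        coeffs[k] += products[k]
        residual -= products[k] * dictionary[:, k]
    return coeffs, residual
```

    In the predictive-feedback reading, the selected coefficients play the role of higher-level predictions and the residual is what the feedforward pathway would carry.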

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bodvarsson, G.S.; Pruess, K.; Stefansson, V.

    A detailed three-dimensional well-by-well model of the East Olkaria geothermal field in Kenya has been developed. The model matches reasonably well the flow rate and enthalpy data from all wells, as well as the overall pressure decline in the reservoir. The model is used to predict the generating capacity of the field, well decline, enthalpy behavior, the number of make-up wells needed and the effects of injection on well performance and overall reservoir depletion. 26 refs., 10 figs.

  2. A hybrid model for predicting carbon monoxide from vehicular exhausts in urban environments

    NASA Astrophysics Data System (ADS)

    Gokhale, Sharad; Khare, Mukesh

    Several deterministic air quality models evaluate and predict the frequently occurring pollutant concentrations well but, in general, are incapable of predicting the 'extreme' concentrations. In contrast, statistical distribution models overcome this limitation of the deterministic models and predict the 'extreme' concentrations. However, environmental damage is caused both by the extremes and by the sustained average concentration of pollutants. Hence, a model should predict not only the 'extreme' ranges but also the 'middle' ranges of pollutant concentrations, i.e. the entire range. Hybrid modelling is a technique that estimates/predicts the 'entire range' of the distribution of pollutant concentrations by combining deterministic models with suitable statistical distribution models (Jakeman et al., 1988). In the present paper, a hybrid model has been developed to predict the carbon monoxide (CO) concentration distributions at a traffic intersection, the Income Tax Office (ITO) in Delhi, where the traffic is heterogeneous in nature, consisting of light vehicles, heavy vehicles, three-wheelers (auto rickshaws) and two-wheelers (scooters, motorcycles, etc.), and the meteorology is 'tropical'. The model combines the general finite line source model (GFLSM) as its deterministic component and the log-logistic distribution (LLD) model as its statistical component. The hybrid (GFLSM-LLD) model is then applied at the ITO intersection. The results show that the hybrid model predictions match the observed CO concentration data within the 5-99 percentile range. The model is further validated at a different street location, the Sirifort roadway. The validation results show that the model predicts CO concentrations fairly well (d = 0.91) in the 10-95 percentile range. A regulatory compliance analysis is also developed to estimate the probability of the hourly CO concentration exceeding the National Ambient Air Quality Standards (NAAQS) of India.
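    The final exceedance-probability step can be sketched with the log-logistic CDF; the scale and shape values used in the test are hypothetical, not the fitted Delhi parameters.

```python
def exceedance_probability(standard, alpha, beta):
    """Probability that the hourly CO concentration exceeds `standard`,
    assuming concentrations follow a log-logistic distribution with
    scale alpha (the median) and shape beta:
        F(x) = 1 / (1 + (x / alpha) ** -beta)."""
    cdf = 1.0 / (1.0 + (standard / alpha) ** (-beta))
    return 1.0 - cdf
```

    In the hybrid scheme, alpha and beta would be fitted to the GFLSM-adjusted concentration distribution and the standard set to the NAAQS limit.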

  3. Prediction of active control of subsonic centrifugal compressor rotating stall

    NASA Technical Reports Server (NTRS)

    Lawless, Patrick B.; Fleeter, Sanford

    1993-01-01

    A mathematical model is developed to predict the suppression of rotating stall in a centrifugal compressor with a vaned diffuser. This model is based on the employment of a control vortical waveform generated upstream of the impeller inlet to damp weak potential disturbances that are the early stages of rotating stall. The control system is analyzed by matching the perturbation pressure in the compressor inlet and exit flow fields with a model for the unsteady behavior of the compressor. The model was effective at predicting the stalling behavior of the Purdue Low Speed Centrifugal Compressor for two distinctly different stall patterns. Predictions made for the effect of a controlled inlet vorticity wave on the stability of the compressor show that for minimum control wave magnitudes, on the order of the total inlet disturbance magnitude, significant damping of the instability can be achieved. For control waves of sufficient amplitude, the control phase angle appears to be the most important factor in maintaining a stable condition in the compressor.

  4. Application of a Model for Simulating the Vacuum Arc Remelting Process in Titanium Alloys

    NASA Astrophysics Data System (ADS)

    Patel, Ashish; Tripp, David W.; Fiore, Daniel

    Mathematical modeling is routinely used in the process development and production of advanced aerospace alloys to gain greater insight into system dynamics and to predict the effect of process modifications or upsets on final properties. This article describes the application of a 2-D mathematical VAR model presented in previous LMPC meetings. The impact of process parameters on melt pool geometry, solidification behavior, fluid-flow and chemistry in Ti-6Al-4V ingots will be discussed. Model predictions were first validated against the measured characteristics of industrially produced ingots, and process inputs and model formulation were adjusted to match macro-etched pool shapes. The results are compared to published data in the literature. Finally, the model is used to examine ingot chemistry during successive VAR melts.

  5. Reconstructing mantle heterogeneity with data assimilation based on the back-and-forth nudging method: Implications for mantle-dynamic fitting of past plate motions

    NASA Astrophysics Data System (ADS)

    Glišović, Petar; Forte, Alessandro

    2016-04-01

    The paleo-distribution of density variations throughout the mantle is unknown. To address this question, we reconstruct 3-D mantle structure over the Cenozoic era using a data assimilation method that implements a new back-and-forth nudging algorithm. For this purpose, we employ convection models for a compressible and self-gravitating mantle that use 3-D mantle structure derived from joint seismic-geodynamic tomography as a starting condition. These convection models are then integrated backwards in time and are required to match geologic estimates of past plate motions derived from marine magnetic data. Our implementation of the nudging algorithm limits the difference between a reconstruction (backward-in-time solution) and a prediction (forward-in-time solution) over a sequence of 5-million-year time windows that span the Cenozoic. We find that forward integration of reconstructed mantle heterogeneity that is constrained to match past plate motions delivers relatively poor fits to the seismic-tomographic inference of present-day mantle heterogeneity in the upper mantle. We suggest that uncertainties in the past plate motions, related for example to plate reorganization episodes, could partly contribute to the poor match between predicted and observed present-day heterogeneity. We propose that convection models that allow tectonic plates to evolve freely in accord with the buoyancy forces and rheological structure in the mantle could provide additional constraints on geologic estimates of paleo-configurations of the major tectonic plates.
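    The back-and-forth nudging idea can be illustrated on a scalar toy problem: sweep forward and then backward in time, each time adding a relaxation term that pulls the state toward the observations. This sketch uses forward Euler and a constant gain; the paper's implementation for compressible mantle convection is of course far more elaborate.

```python
import numpy as np

def bfn_reconstruct(obs, times, f, gain, n_sweeps=20):
    """Back-and-forth nudging for a scalar ODE dx/dt = f(x).
    Forward sweeps add +gain*(obs - x); backward sweeps, integrated in
    reverse, use the sign-flipped term. Iterating drives the trajectory
    toward one consistent with both the dynamics and the data."""
    dt = times[1] - times[0]
    x = np.asarray(obs, dtype=float).copy()  # initial guess: the observations
    for _ in range(n_sweeps):
        for i in range(len(times) - 1):          # forward sweep
            x[i + 1] = x[i] + dt * (f(x[i]) + gain * (obs[i] - x[i]))
        for i in range(len(times) - 1, 0, -1):   # backward sweep
            x[i - 1] = x[i] - dt * (f(x[i]) - gain * (obs[i] - x[i]))
    return x
```

    In the mantle application, the "observations" are the plate-motion constraints imposed over each 5-million-year window rather than a fully observed state.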

  6. 2D-pattern matching image and video compression: theory, algorithms, and experiments.

    PubMed

    Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth

    2002-01-01

    In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.
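    The core search in 2D pattern matching can be sketched as a brute-force scan for the database block closest to the current block under a distortion bound; the real scheme uses k-d trees and adaptive maximum distortion levels for speed, so this exhaustive version is only illustrative.

```python
import numpy as np

def best_match(block, database, max_distortion):
    """Scan the database image for the position whose sub-block is
    closest to `block` in mean absolute difference; accept the match
    only if the distortion is within max_distortion."""
    bh, bw = block.shape
    dh, dw = database.shape
    best, best_err = None, float("inf")
    for i in range(dh - bh + 1):
        for j in range(dw - bw + 1):
            err = float(np.abs(database[i:i + bh, j:j + bw] - block).mean())
            if err < best_err:
                best, best_err = (i, j), err
    return (best, best_err) if best_err <= max_distortion else (None, best_err)
```

    A matched block is then encoded as a pointer into the database plus the distortion budget, which is the lossy 2D analogue of a Lempel-Ziv phrase reference.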

  7. Impact of geocoding methods on associations between long-term exposure to urban air pollution and lung function.

    PubMed

    Jacquemin, Bénédicte; Lepeule, Johanna; Boudier, Anne; Arnould, Caroline; Benmerad, Meriem; Chappaz, Claire; Ferran, Joane; Kauffmann, Francine; Morelli, Xavier; Pin, Isabelle; Pison, Christophe; Rios, Isabelle; Temam, Sofia; Künzli, Nino; Slama, Rémy; Siroux, Valérie

    2013-09-01

    Errors in address geocodes may affect estimates of the effects of air pollution on health. We investigated the impact of four geocoding techniques on the association between urban air pollution estimated with a fine-scale (10 m × 10 m) dispersion model and lung function in adults. We measured forced expiratory volume in 1 sec (FEV1) and forced vital capacity (FVC) in 354 adult residents of Grenoble, France, who were participants in two well-characterized studies, the Epidemiological Study on the Genetics and Environment on Asthma (EGEA) and the European Community Respiratory Health Survey (ECRHS). Home addresses were geocoded using individual building matching as the reference approach and three spatial interpolation approaches. We used a dispersion model to estimate mean PM10 and nitrogen dioxide concentrations at each participant's address during the 12 months preceding their lung function measurements. Associations between exposures and lung function parameters were adjusted for individual confounders and same-day exposure to air pollutants. The geocoding techniques were compared with regard to geographical distances between coordinates, exposure estimates, and associations between the estimated exposures and health effects. Median distances between coordinates estimated using the building matching and the three interpolation techniques were 26.4, 27.9, and 35.6 m. Compared with exposure estimates based on building matching, PM10 concentrations based on the three interpolation techniques tended to be overestimated. When building matching was used to estimate exposures, a one-interquartile range increase in PM10 (3.0 μg/m3) was associated with a 3.72-point decrease in FVC% predicted (95% CI: -0.56, -6.88) and a 3.86-point decrease in FEV1% predicted (95% CI: -0.14, -3.24). 
The magnitude of associations decreased when other geocoding approaches were used [e.g., for FVC% predicted, -2.81 (95% CI: -0.26, -5.35) using NavTEQ, or -2.08 (95% CI: -4.63, 0.47; p = 0.11) using Google Maps]. Our findings suggest that the choice of geocoding technique may influence estimated health effects when air pollution exposures are estimated using a fine-scale exposure model.

  8. Short-Term Memory for Temporal Intervals: Contrasting Explanations of the Choose-Short Effect in Pigeons

    ERIC Educational Resources Information Center

    Pinto, Carlos; Machado, Armando

    2011-01-01

    To better understand short-term memory for temporal intervals, we re-examined the choose-short effect. In Experiment 1, to contrast the predictions of two models of this effect, the subjective shortening and the coding models, pigeons were exposed to a delayed matching-to-sample task with three sample durations (2, 6 and 18 s) and retention…

  9. Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Degroh, Kim K.; Sechkar, Edward A.

    1992-01-01

    Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) will assist in understanding the mechanisms involved, and will lead to improved reliability in predicting in-space durability of materials based on ground laboratory testing. A computational simulation of atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of assumed mechanistic behavior of atomic oxygen and results of both ground laboratory and LDEF data, a predictive Monte Carlo model was developed which simulates the oxidation processes that occur on polymers with applied protective coatings that have defects. The use of high atomic oxygen fluence-directed ram LDEF results has enabled mechanistic implications to be made by adjusting Monte Carlo modeling assumptions to match observed results based on scanning electron microscopy. Modeling assumptions, implications, and predictions are presented, along with comparison of observed ground laboratory and LDEF results.

  10. A probabilistic framework to infer brain functional connectivity from anatomical connections.

    PubMed

    Deligianni, Fani; Varoquaux, Gael; Thirion, Bertrand; Robinson, Emma; Sharp, David J; Edwards, A David; Rueckert, Daniel

    2011-01-01

    We present a novel probabilistic framework to learn across several subjects a mapping from brain anatomical connectivity to functional connectivity, i.e. the covariance structure of brain activity. This prediction problem must be formulated as a structured-output learning task, as the predicted parameters are strongly correlated. We introduce a model selection framework based on cross-validation with a parametrization-independent loss function suitable to the manifold of covariance matrices. Our model is based on constraining the conditional independence structure of functional activity by the anatomical connectivity. Subsequently, we learn a linear predictor of a stationary multivariate autoregressive model. This natural parameterization of functional connectivity also enforces the positive-definiteness of the predicted covariance and thus matches the structure of the output space. Our results show that functional connectivity can be explained by anatomical connectivity on a rigorous statistical basis, and that a proper model of functional connectivity is essential to assess this link.

  11. Chill Down Process of Hydrogen Transport Pipelines

    NASA Technical Reports Server (NTRS)

    Mei, Renwei; Klausner, James

    2006-01-01

    A pseudo-steady model has been developed to predict the chilldown history of pipe wall temperature in the horizontal transport pipeline for cryogenic fluids. A new film boiling heat transfer model is developed by incorporating the stratified flow structure for cryogenic chilldown. A modified nucleate boiling heat transfer correlation for cryogenic chilldown process inside a horizontal pipe is proposed. The efficacy of the correlations is assessed by comparing the model predictions with measured values of wall temperature in several azimuthal positions in a well controlled experiment by Chung et al. (2004). The computed pipe wall temperature histories match well with the measured results. The present model captures important features of thermal interaction between the pipe wall and the cryogenic fluid, provides a simple and robust platform for predicting pipe wall chilldown history in long horizontal pipe at relatively low computational cost, and builds a foundation to incorporate the two-phase hydrodynamic interaction in the chilldown process.
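A minimal sketch of the kind of wall-temperature history such a model predicts, using a lumped-capacitance wall and a constant film-boiling coefficient; all property values below are illustrative assumptions, not the paper's film-boiling or nucleate-boiling correlations.

```python
# Lumped-capacitance chilldown of the pipe wall at one azimuthal position.
h = 200.0      # film-boiling heat transfer coefficient, W/(m^2 K) (assumed)
area = 0.1     # wetted wall area per metre of pipe, m^2 (assumed)
mc = 500.0     # wall thermal mass per metre, J/K (assumed)
t_sat = 77.0   # cryogen saturation temperature, K (liquid nitrogen, for illustration)
t_wall = 300.0 # initial wall temperature, K
dt = 0.1       # time step, s

history = [t_wall]
for _ in range(3000):   # 300 s of chilldown, explicit Euler
    t_wall += dt * (-h * area * (t_wall - t_sat) / mc)
    history.append(t_wall)
```

The wall relaxes exponentially toward the saturation temperature with time constant mc/(h*area); the paper's model additionally switches the heat-transfer correlation as the wall passes from film to nucleate boiling.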

  12. Data matching for free-surface multiple attenuation by multidimensional deconvolution

    NASA Astrophysics Data System (ADS)

    van der Neut, Joost; Frijlink, Martijn; van Borselen, Roald

    2012-09-01

    A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Problems can be posed by interfering multiples and primaries. Removing multiples by multidimensional deconvolution (MDD) (inversion) does not suffer from these problems. However, this approach requires data to be consistent, which is often not the case, especially not at interpolated near-offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.
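The adaptive prediction-and-subtract step that the record contrasts with MDD can be sketched on a toy 1-D trace; the spike positions, the filter length, and the misprediction are illustrative assumptions.

```python
import numpy as np

# Toy trace: recorded = primary + free-surface multiple; the convolutional
# prediction has the wrong amplitude and a one-sample time shift.
n = 200
recorded = np.zeros(n)
recorded[40] = 1.0        # primary
recorded[120] = -0.6      # true multiple
predicted = np.zeros(n)
predicted[119] = -1.0     # predicted multiple: shifted and mis-scaled

# Least-squares matching filter of length 5: shape `predicted` to the data,
# then subtract the shaped prediction.
flen = 5
M = np.column_stack([np.roll(predicted, k) for k in range(flen)])
f, *_ = np.linalg.lstsq(M, recorded, rcond=None)
subtracted = recorded - M @ f
```

In this clean example the filter removes the multiple and leaves the primary intact; the record's point is that when primaries and multiples interfere, this adaptive step damages primaries, which motivates the MDD formulation.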

  13. Can pair-instability supernova models match the observations of superluminous supernovae?

    NASA Astrophysics Data System (ADS)

    Kozyreva, Alexandra; Blinnikov, S.

    2015-12-01

An increasing number of so-called superluminous supernovae (SLSNe) are being discovered. It is believed that at least some of those with slowly fading light curves originate in stellar explosions induced by the pair instability mechanism. Recent stellar evolution models naturally predict pair instability supernovae (PISNe) from very massive stars at a wide range of metallicities (up to Z = 0.006, Yusof et al.). In the scope of this study, we analyse whether PISN models can match the observational properties of SLSNe with various light-curve shapes. Specifically, we explore the influence of different degrees of macroscopic chemical mixing in PISN explosive products on the resulting observational properties. We artificially apply mixing to the 250 M⊙ PISN evolutionary model from Kozyreva et al. and explore its supernova evolution with the one-dimensional radiation hydrodynamics code STELLA. The greatest success in matching SLSN observations is achieved in the case of extreme macroscopic mixing, in which all radioactive material is ejected into the hydrogen-helium outer layer. Such an extreme macroscopic redistribution of chemicals produces events with faster light curves, high photospheric temperatures, and high photospheric velocities. These properties fit a wider range of SLSNe than the non-mixed PISN model. Our mixed models match the light curves, colour temperature, and photospheric velocity evolution of two well-observed SLSNe, PTF12dam and LSQ12dlf. However, such extreme chemical redistribution may be hard to realize in massive PISNe. Therefore, alternative models such as the magnetar mechanism or wind interaction may still be favourable for interpreting rapidly rising SLSNe.

  14. Interplanetary Alfvenic fluctuations: A statistical study of the directional variations of the magnetic field

    NASA Technical Reports Server (NTRS)

    Bavassano, B.; Mariani, F.

    1983-01-01

    Magnetic field data from HELIOS 1 and 2 are used to test a stochastic model for Alfvenic fluctuations recently proposed. A reasonable matching between observations and predictions is found. A rough estimate of the correlation length of the observed fluctuations is inferred.

  15. Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking

    PubMed Central

    Lages, Martin; Scheel, Anne

    2016-01-01

We investigated the proposition of a two-systems Theory of Mind in adults’ belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions are different from choice predictions yet reflect second-order perspective taking. PMID:27853440
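A fixed-effects-only sketch of the logistic modelling step on synthetic binary data, fitted by plain gradient ascent on the log-likelihood; this is not the paper's mixed-effects or Bayesian estimator, and the covariate and coefficients are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic binary responses driven by one covariate through a logistic link.
n = 500
x = rng.normal(size=n)
true_b0, true_b1 = -0.5, 2.0
p = 1.0 / (1.0 + np.exp(-(true_b0 + true_b1 * x)))
y = rng.random(n) < p

X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(5000):                      # gradient ascent on the log-likelihood
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (y - mu) / n
```

The recovered `beta` should land near the generating coefficients; a mixed model would add a random intercept per participant on top of this likelihood.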

  16. Rayleigh matches in carriers of inherited color vision defects: the contribution from the third L/M photopigment.

    PubMed

    Sun, Yang; Shevell, Steven K

    2008-01-01

    The mother or daughter of a male with an X-chromosome-linked red/green color defect is an obligate carrier of the color deficient gene array. According to the Lyonization hypothesis, a female carrier's defective gene is expressed and thus carriers may have more than two types of pigments in the L/M photopigment range. An open question is how a carrier's third cone pigment in the L/M range affects the postreceptoral neural signals encoding color. Here, a model considered how the signal from the third pigment pools with signals from the normal's two pigments in the L/M range. Three alternative assumptions were considered for the signal from the third cone pigment: it pools with the signal from (1) L cones, (2) M cones, or (3) both types of cones. Spectral-sensitivity peak, optical density, and the relative number of each cone type were factors in the model. The model showed that differences in Rayleigh matches among carriers can be due to individual differences in the number of the third type of L/M cone, and the spectral sensitivity peak and optical density of the third L/M pigment; surprisingly, however, individual differences in the cone ratio of the other two cone types (one L and the other M) did not affect the match. The predicted matches were compared to Schmidt's (1934/1955) report of carriers' Rayleigh matches. For carriers of either protanomaly or deuteranomaly, these matches were not consistent with the signal from the third L/M pigment combining with only the signal from M cones. The matches could be accounted for by pooling the third-pigment's response with L-cone signals, either exclusively or randomly with M-cone responses as well.
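A toy version of the pooling model: Gaussian stand-ins for cone fundamentals, a hypothetical third L/M pigment pooled into the L signal, and a grid scan for the Rayleigh match point. The peaks, widths, primaries, and the 50/50 pooling weight are all assumptions for illustration, not the paper's parameters.

```python
import numpy as np

wl = np.arange(540.0, 701.0, 1.0)          # wavelength axis, nm

def cone(peak, width=40.0):
    """Toy Gaussian 'cone fundamental' -- an assumption, not a measured curve."""
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

L, M, L3 = cone(560.0), cone(530.0), cone(553.0)   # L3: hypothetical third L/M pigment

red = (wl == 670.0).astype(float)          # monochromatic Rayleigh primaries
green = (wl == 546.0).astype(float)
yellow = (wl == 589.0).astype(float)

def match_red_fraction(l_signal):
    """Red fraction r at which the L/M response ratio of the r*red + (1-r)*green
    mixture equals that of the yellow test light (found by a grid scan)."""
    def ratio(light):
        return (l_signal @ light) / (M @ light)
    rs = np.linspace(0.0, 1.0, 10001)
    vals = np.array([ratio(r * red + (1.0 - r) * green) for r in rs])
    return rs[np.argmin(np.abs(vals - ratio(yellow)))]

normal = match_red_fraction(L)                    # two normal pigments
carrier = match_red_fraction(0.5 * L + 0.5 * L3)  # third pigment pooled into L
```

Pooling the M-shifted third pigment into the L signal moves the carrier's match point relative to the normal observer, which is the qualitative effect the model examines.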

  17. A numerical/empirical technique for history matching and predicting cyclic steam performance in Canadian oil sands reservoirs

    NASA Astrophysics Data System (ADS)

    Leshchyshyn, Theodore Henry

The oil sands of Alberta contain some one trillion barrels of bitumen-in-place, most contained in the McMurray, Wabiskaw, Clearwater, and Grand Rapids formations. Depth of burial is 0--550 m, 10% of which is surface mineable, the rest recoverable by in-situ technology-driven enhanced oil recovery schemes. To date, significant commercial recovery has been attributed to Cyclic Steam Stimulation (CSS) using vertical wellbores. Other techniques, such as Steam Assisted Gravity Drainage (SAGD), are proving superior to other recovery methods for increasing early oil production, but at initially higher development and/or operating costs. Successful optimization of bitumen production rates from the entire reservoir is ultimately decided by the operator's understanding of the reservoir in its original state and/or the positive and negative changes which occur in oil sands and heavy oil deposits upon heat stimulation. Reservoir description is the single most important factor in attaining satisfactory history matches and forecasts for optimized production of the commercially-operated processes. Reservoir characterization which lacks understanding can destroy a project. For example, incorrect assumptions in the geological model for the Wolf Lake Project in northeast Alberta resulted in only about one-half of the predicted recovery by the original field process. It will be shown here why the presence of thin calcite streaks within oil sands can determine the success or failure of a commercial cyclic steam project. A vast amount of field data, mostly from the Primrose Heavy Oil Project (PHOP) near Cold Lake, Alberta, enabled the development of a simple set of correlation curves for predicting bitumen production using CSS.
A previously calibrated thermal numerical simulation model was used in its simplest form, that is, a single layer, radial grid blocks, "fingering"- or "dilation"-adjusted permeability curves, and no simulated fracture, to generate the first cycle production correlation curves. The key reservoir property varied to develop a specific curve was the initial mobile water saturation. Individual pilot wells were then history-matched using these correlation curves, adjusting for thermal net pay using perforation height and a fundamentally derived "net pay factor". Operating days (injection plus production) were required to complete the history matching calculations. Subsequent cycles were then history-matched by applying an Efficiency Multiplication Factor (EMF) to the original first cycle prediction method, as well as by selecting the proper correlation curve for the specific cycle under analysis using the appropriate steam injection rates and slug sizes. History matches were performed on eight PHOP wells (two back-to-back, five-spot patterns) completed in the Wabiskaw and three single-well tests completed just below in the McMurray Formation. Predictions for the PHOP Wabiskaw Formation first cycle bitumen production averaged within 1% of the actual pilot total. Bitumen recovery from individual wells for the second cycle onwards was within 20% of actual values. For testing the correlations, matching was also performed on cyclic steam data from British Petroleum's Wolf Lake Project, the Esso Cold Lake Project, and the PCEJ Fort McMurray Pilot, a joint venture of Petro-Canada, Cities Services (Canadian Occidental), Esso, and Japan-Canada Oil Sands, with reasonable results.

  18. An empirically-based model for the lift coefficients of twisted airfoils with leading-edge tubercles

    NASA Astrophysics Data System (ADS)

    Ni, Zao; Su, Tsung-chow; Dhanak, Manhar

    2018-04-01

    Experimental data for untwisted airfoils are utilized to propose a model for predicting the lift coefficients of twisted airfoils with leading-edge tubercles. The effectiveness of the empirical model is verified through comparison with results of a corresponding computational fluid-dynamic (CFD) study. The CFD study is carried out for both twisted and untwisted airfoils with tubercles, the latter shown to compare well with available experimental data. Lift coefficients of twisted airfoils predicted from the proposed empirically-based model match well with the corresponding coefficients determined using the verified CFD study. Flow details obtained from the latter provide better insight into the underlying mechanism and behavior at stall of twisted airfoils with leading edge tubercles.
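A strip-style sketch of building a twisted wing's lift coefficient from untwisted sectional data; the linear sectional model, uniform chord, and twist distribution below are assumptions for illustration, not the paper's empirical model.

```python
import numpy as np

a0 = 2.0 * np.pi              # sectional lift-curve slope, 1/rad (thin-airfoil theory)
alpha0 = np.deg2rad(-2.0)     # sectional zero-lift angle (assumed)

def cl_section(alpha):
    """Untwisted sectional lift coefficient in the linear (pre-stall) regime."""
    return a0 * (alpha - alpha0)

span = 1.0
y = np.linspace(0.0, span, 201)              # spanwise stations
twist = np.deg2rad(5.0) * (1.0 - y / span)   # linear twist: 5 deg at root, 0 at tip (assumed)
alpha = np.deg2rad(6.0)                      # geometric angle of attack

# Uniform chord: the wing lift coefficient is the span-average of local sectional cl.
CL_twisted = cl_section(alpha + twist).mean()
CL_untwisted = cl_section(alpha)
```

Each spanwise strip sees its local effective incidence alpha + twist(y), so the twisted wing's CL is the sectional curve evaluated at the mean effective incidence in this linear case.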

  19. Antidepressant Use After Aneurysmal Subarachnoid Hemorrhage: A Population-Based Case-Control Study.

    PubMed

    Huttunen, Jukka; Lindgren, Antti; Kurki, Mitja I; Huttunen, Terhi; Frösen, Juhana; von Und Zu Fraunberg, Mikael; Koivisto, Timo; Kälviäinen, Reetta; Räikkönen, Katri; Viinamäki, Heimo; Jääskeläinen, Juha E; Immonen, Arto

    2016-09-01

To elucidate the predictors of antidepressant use after subarachnoid hemorrhage from saccular intracranial aneurysm (sIA-SAH) in a population-based cohort with matched controls. The Kuopio sIA database includes all unruptured and ruptured sIA cases admitted to the Kuopio University Hospital from its defined catchment population in Eastern Finland, with 3 matched controls for each patient. The use of all prescribed medicines has been fused from the Finnish national registry of prescribed medicines. In the present study, 2 or more purchases of antidepressant medication indicated antidepressant use. The risk factors for antidepressant use were analyzed in 940 patients alive 12 months after sIA-SAH, and classification tree analysis was used to create a predictive model for antidepressant use after sIA-SAH. The 940 12-month survivors of sIA-SAH had significantly more antidepressant use (odds ratio, 2.6; 95% confidence interval, 2.2-3.1) than their 2676 matched controls (29% versus 14%). Classification tree analysis, based on independent risk factors, was used for the best prediction model of antidepressant use after sIA-SAH. Modified Rankin Scale score until 12 months was the most potent predictor, followed by condition (Hunt and Hess Scale) and age on admission for sIA-SAH. The sIA-SAH survivors use antidepressants, indicative of depression, significantly more often than their matched population controls. Even with a seemingly good recovery (modified Rankin Scale score, 0) at 12 months after sIA-SAH, there is a significant risk of depression requiring antidepressant medication. © 2016 American Heart Association, Inc.
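The reported association can be checked back-of-envelope from the percentages, using rounded 2x2 counts and a Woolf-type confidence interval; because the counts are reconstructed from rounded percentages, the result only approximates the reported OR of 2.6 (2.2-3.1).

```python
import math

# Rounded 2x2 counts from the reported rates: 29% of 940 survivors,
# 14% of 2676 matched controls used antidepressants.
a = round(0.29 * 940)     # survivors using antidepressants
b = 940 - a
c = round(0.14 * 2676)    # controls using antidepressants
d = 2676 - c

or_hat = (a * d) / (b * c)
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)          # Woolf SE of log-odds-ratio
ci_lo = math.exp(math.log(or_hat) - 1.96 * se_log)
ci_hi = math.exp(math.log(or_hat) + 1.96 * se_log)
```

The crude OR lands close to the matched estimate here; the paper's interval comes from the matched analysis, not this unadjusted calculation.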

  20. Performance prediction using geostatistics and window reservoir simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fontanilla, J.P.; Al-Khalawi, A.A.; Johnson, S.G.

    1995-11-01

This paper is the first window model study in the northern area of a large carbonate reservoir in Saudi Arabia. It describes window reservoir simulation with geostatistics to model uneven water encroachment in the southwest producing area of the northern portion of the reservoir. In addition, this paper describes performance predictions that investigate the sweep efficiency of the current peripheral waterflood. A 50 x 50 x 549 (240 m. x 260 m. x 0.15 m. average grid block size) geological model was constructed with geostatistics software. Conditional simulation was used to obtain spatial distributions of porosity and volume of dolomite. Core data transforms were used to obtain horizontal and vertical permeability distributions. Simple averaging techniques were used to convert the 549-layer geological model to a 50 x 50 x 10 (240 m. x 260 m. x 8 m. average grid block size) window reservoir simulation model. Flux injectors and flux producers were assigned to the outermost grid blocks. Historical boundary flux rates were obtained from a coarsely-gridded full-field model. Pressure distribution, water cuts, GORs, and recent flowmeter data were history matched. Permeability correction factors and numerous parameter adjustments were required to obtain the final history match. The permeability correction factors were based on pressure transient permeability-thickness analyses. The prediction phase of the study evaluated the effects of infill drilling, the use of artificial lifts, workovers, horizontal wells, producing rate constraints, and tight zone development to formulate depletion strategies for the development of this area. The window model will also be used to investigate day-to-day reservoir management problems in this area.
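The 549-layer to 10-layer conversion step can be sketched with the usual averaging choices, arithmetic for porosity and thickness-weighted harmonic for vertical permeability; the study says only "simple averaging techniques", and the fine-grid property fields below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic fine-grid column: 549 layers of porosity and vertical permeability.
n_fine, n_coarse = 549, 10
phi = rng.uniform(0.05, 0.30, n_fine)           # layer porosities
kv = 10.0 ** rng.uniform(-1.0, 2.0, n_fine)     # layer vertical permeabilities, mD

groups = np.array_split(np.arange(n_fine), n_coarse)   # near-equal-thickness groups
phi_coarse = np.array([phi[g].mean() for g in groups])         # arithmetic average
kv_coarse = np.array([len(g) / np.sum(1.0 / kv[g]) for g in groups])  # harmonic average
```

The harmonic mean honors series flow across layers, so the upscaled vertical permeability is always at or below the arithmetic average of the fine layers.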

  1. Predicting the names of the best teams after the knock-out phase of a cricket series.

    PubMed

    Lemmer, Hermanus Hofmeyr

    2014-01-01

Cricket players' performances can best be judged after a large number of matches have been played. For test or one-day international (ODI) players, career data are normally used to calculate performance measures. These are normally good indicators of future performances, although various factors influence the performance of a player in a specific match. It is often necessary to judge players' performances based on a small number of scores, e.g. to identify the best players after a short series of matches. The challenge then is to use the best available criteria in order to assess performances as accurately and fairly as possible. In the present study, the results of the knock-out phase of an International Cricket Council (ICC) World Cup ODI Series are used to predict the names of the best teams by means of a suitably formulated logistic regression model. Despite using very sparse data, the methods used are reasonably successful. It is also shown that if the same technique is applied to career ratings, very good results are obtained.
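The core of such a model can be sketched as a logistic function of a rating difference; the ratings and scale below are hypothetical, not the paper's fitted model.

```python
import math

# Toy rating-based match prediction: win probability as a logistic function
# of the difference between two teams' career-based ratings.
def win_prob(rating_a, rating_b, scale=1.0):
    """P(team A beats team B) under a logistic model on the rating difference."""
    return 1.0 / (1.0 + math.exp(-scale * (rating_a - rating_b)))

p = win_prob(1.8, 1.2)   # team A rated somewhat stronger
```

Equal ratings give probability 0.5, and the fitted `scale` would control how strongly a rating gap translates into a predicted win.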

  2. Study of moving object detecting and tracking algorithm for video surveillance system

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Zhang, Rongfu

    2010-10-01

This paper describes a specific process of moving-target detecting and tracking in video surveillance. Obtaining a high-quality background is the key to achieving difference-based target detection in video surveillance. The paper uses a block-segmentation method to build a clear background and the method of background difference to detect moving targets; after a series of processing steps, a more complete object can be extracted from the original image, and the smallest bounding rectangle is then used to locate the object. In video surveillance systems, camera delay and other factors lead to tracking lag, so a Kalman filter model based on template matching is proposed. Using the predictive and estimation capacity of the Kalman filter, with the center of the smallest bounding rectangle as the predicted value, the position where the object may appear at the next moment is predicted, and template matching then proceeds in the region centered on this position. By calculating the cross-correlation similarity between the current image and a reference image, the best matching center can be determined. Since this narrows the scope of the search, it reduces the search time and thereby achieves fast tracking.
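The predict-then-match loop can be sketched on synthetic frames: a constant-velocity prediction narrows the search window, and a template match (sum of squared differences here, standing in for the paper's cross-correlation similarity) refines the position. The frames, object motion, and velocity estimate are all synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic frames: a 10x10 textured object at (20, 20) in frame 0 moves to (23, 24).
frame0 = rng.random((60, 60))
template = frame0[20:30, 20:30].copy()
frame1 = rng.random((60, 60))
frame1[23:33, 24:34] = template

# Kalman-style constant-velocity prediction: last corner (20, 20) plus an
# estimated velocity of (2, 3) pixels/frame gives predicted corner (22, 23).
pred_i, pred_j = 20 + 2, 20 + 3
radius = 5                     # search only a small window around the prediction

best, best_ssd = None, float("inf")
for i in range(pred_i - radius, pred_i + radius + 1):
    for j in range(pred_j - radius, pred_j + radius + 1):
        ssd = float(np.sum((frame1[i:i+10, j:j+10] - template) ** 2))
        if ssd < best_ssd:
            best, best_ssd = (i, j), ssd
```

Restricting the match to an 11x11 window around the prediction is what delivers the speed-up the record describes, compared with scanning the whole frame.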

  3. A Single Mechanism Can Account for Human Perception of Depth in Mixed Correlation Random Dot Stereograms

    PubMed Central

    Cumming, Bruce G.

    2016-01-01

    In order to extract retinal disparity from a visual scene, the brain must match corresponding points in the left and right retinae. This computationally demanding task is known as the stereo correspondence problem. The initial stage of the solution to the correspondence problem is generally thought to consist of a correlation-based computation. However, recent work by Doi et al suggests that human observers can see depth in a class of stimuli where the mean binocular correlation is 0 (half-matched random dot stereograms). Half-matched random dot stereograms are made up of an equal number of correlated and anticorrelated dots, and the binocular energy model—a well-known model of V1 binocular complex cells—fails to signal disparity here. This has led to the proposition that a second, match-based computation must be extracting disparity in these stimuli. Here we show that a straightforward modification to the binocular energy model—adding a point output nonlinearity—is by itself sufficient to produce cells that are disparity-tuned to half-matched random dot stereograms. We then show that a simple decision model using this single mechanism can reproduce psychometric functions generated by human observers, including reduced performance to large disparities and rapidly updating dot patterns. The model makes predictions about how performance should change with dot size in half-matched stereograms and temporal alternation in correlation, which we test in human observers. We conclude that a single correlation-based computation, based directly on already-known properties of V1 neurons, can account for the literature on mixed correlation random dot stereograms. PMID:27196696
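The record's central point can be reproduced in a highly simplified 1-D simulation: a standard energy unit's mean response to half-matched stereograms is the same at the preferred and at a null disparity, but squaring the output (a point nonlinearity) exposes the difference in the response distributions. The dot stimuli, Gabor-like filters, and all parameters are simplified stand-ins, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

n_pix, n_dots, n_trials, disp = 64, 16, 30000, 4
x = np.arange(n_pix)
env = np.exp(-0.5 * ((x - n_pix // 2) / 8.0) ** 2)
f_even = env * np.cos(0.5 * (x - n_pix // 2))   # quadrature pair of RFs
f_odd = env * np.sin(0.5 * (x - n_pix // 2))

def energy(left, right, d):
    """Binocular energy unit tuned to disparity d via a shifted right-eye RF."""
    re, ro = np.roll(f_even, d), np.roll(f_odd, d)
    return (left @ f_even + right @ re) ** 2 + (left @ f_odd + right @ ro) ** 2

e_pref, e_null = np.empty(n_trials), np.empty(n_trials)
for t in range(n_trials):
    pos = rng.choice(n_pix, n_dots, replace=False)
    pol = rng.choice([-1.0, 1.0], n_dots)
    match = rng.random(n_dots) < 0.5           # half matched, half anticorrelated
    left = np.zeros(n_pix); left[pos] = pol
    right = np.zeros(n_pix)
    right[(pos + disp) % n_pix] = np.where(match, pol, -pol)
    e_pref[t] = energy(left, right, disp)      # unit tuned to the stimulus disparity
    e_null[t] = energy(left, right, disp + 16) # unit tuned far away

mean_gap = e_pref.mean() - e_null.mean()                    # ~0: plain energy is blind
squared_gap = (e_pref ** 2).mean() - (e_null ** 2).mean()   # >0 after the nonlinearity
```

Matched dots drive the tuned unit coherently and anticorrelated dots cancel, so the mean energy is unchanged but its distribution is heavier-tailed at the preferred disparity, which an expansive output nonlinearity converts into a mean signal.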

  4. A unified internal model theory to resolve the paradox of active versus passive self-motion sensation

    PubMed Central

    Angelaki, Dora E

    2017-01-01

    Brainstem and cerebellar neurons implement an internal model to accurately estimate self-motion during externally generated (‘passive’) movements. However, these neurons show reduced responses during self-generated (‘active’) movements, indicating that predicted sensory consequences of motor commands cancel sensory signals. Remarkably, the computational processes underlying sensory prediction during active motion and their relationship to internal model computations during passive movements remain unknown. We construct a Kalman filter that incorporates motor commands into a previously established model of optimal passive self-motion estimation. The simulated sensory error and feedback signals match experimentally measured neuronal responses during active and passive head and trunk rotations and translations. We conclude that a single sensory internal model can combine motor commands with vestibular and proprioceptive signals optimally. Thus, although neurons carrying sensory prediction error or feedback signals show attenuated modulation, the sensory cues and internal model are both engaged and critically important for accurate self-motion estimation during active head movements. PMID:29043978
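A minimal sketch of the idea: a Kalman filter whose prediction step uses the motor command (an efference copy) keeps the sensory prediction error small during 'active' motion, while the same filter without the command does not. The scalar dynamics and noise values are toy assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(4)

dt, n = 0.1, 200
a, b = 1.0, 1.0          # toy head-velocity dynamics: v[k+1] = v[k] + b*u[k]*dt
q, r = 1e-4, 1e-2        # process and sensor (vestibular) noise variances (assumed)

u = np.sin(0.2 * np.arange(n))     # motor command during active motion
v_true = np.zeros(n)
for k in range(1, n):
    v_true[k] = a * v_true[k-1] + b * u[k-1] * dt + rng.normal(0.0, np.sqrt(q))

def innovation_rms(use_command):
    """Run the filter; the innovation is the sensory prediction error."""
    meas_rng = np.random.default_rng(5)   # same measurement noise for both runs
    v_hat, p, innovs = 0.0, 1.0, []
    for k in range(1, n):
        v_pred = a * v_hat + (b * u[k-1] * dt if use_command else 0.0)
        p = a * p * a + q
        z = v_true[k] + meas_rng.normal(0.0, np.sqrt(r))
        innov = z - v_pred
        kgain = p / (p + r)
        v_hat = v_pred + kgain * innov
        p = (1.0 - kgain) * p
        innovs.append(innov)
    return float(np.sqrt(np.mean(np.square(innovs))))

rmse_with, rmse_without = innovation_rms(True), innovation_rms(False)
```

With the efference copy in the prediction step, the innovation stays near the sensor-noise floor; without it, the unmodeled self-generated motion inflates the prediction error, mirroring the attenuated versus full sensory responses the record discusses.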

  5. A model of two-way selection system for human behavior.

    PubMed

    Zhou, Bin; Qin, Shujia; Han, Xiao-Pu; He, Zhe; Xie, Jia-Rong; Wang, Bing-Hong

    2014-01-01

Two-way selection is a common phenomenon in nature and society. It appears in processes such as choosing a mate between men and women, making contracts between job hunters and recruiters, and trading between buyers and sellers. In this paper, we propose a model of a two-way selection system and present an analytical solution for the expected total number of successful matches, together with the regular pattern that the matching rate tends toward an inverse proportion to either the ratio between the two sides or the ratio of the state total to the size of the smaller group. The proposed model is verified by empirical data from matchmaking fairs. Results indicate that the model predicts this typical real-world two-way selection behavior well, within bounded error, and is thus helpful for understanding the dynamical mechanism of real-world two-way selection systems.
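A toy Monte Carlo of one two-way selection round, not the paper's exact model: each of n seekers applies to one of m recruiters at random, and each recruiter accepts at most one applicant. The match count then has a closed-form expectation against which the simulation can be checked.

```python
import numpy as np

rng = np.random.default_rng(6)

# One round: matches = number of recruiters receiving at least one application.
n, m, trials = 60, 40, 2000
matched = [len(np.unique(rng.integers(0, m, n))) for _ in range(trials)]
sim = float(np.mean(matched))
theory = m * (1.0 - (1.0 - 1.0 / m) ** n)   # expected matches per round
```

Agreement between `sim` and `theory` illustrates the kind of analytical-versus-simulation check the paper performs for its own matching expectation.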

  6. Questioning the Faith - Models and Prediction in Stream Restoration (Invited)

    NASA Astrophysics Data System (ADS)

    Wilcock, P.

    2013-12-01

    River management and restoration demand prediction at and beyond our present ability. Management questions, framed appropriately, can motivate fundamental advances in science, although the connection between research and application is not always easy, useful, or robust. Why is that? This presentation considers the connection between models and management, a connection that requires critical and creative thought on both sides. Essential challenges for managers include clearly defining project objectives and accommodating uncertainty in any model prediction. Essential challenges for the research community include matching the appropriate model to project duration, space, funding, information, and social constraints and clearly presenting answers that are actually useful to managers. Better models do not lead to better management decisions or better designs if the predictions are not relevant to and accepted by managers. In fact, any prediction may be irrelevant if the need for prediction is not recognized. The predictive target must be developed in an active dialog between managers and modelers. This relationship, like any other, can take time to develop. For example, large segments of stream restoration practice have remained resistant to models and prediction because the foundational tenet - that channels built to a certain template will be able to transport the supplied sediment with the available flow - has no essential physical connection between cause and effect. Stream restoration practice can be steered in a predictive direction in which project objectives are defined as predictable attributes and testable hypotheses. If stream restoration design is defined in terms of the desired performance of the channel (static or dynamic, sediment surplus or deficit), then channel properties that provide these attributes can be predicted and a basis exists for testing approximations, models, and predictions.

  7. Importance of the habitat choice behavior assumed when modeling the effects of food and temperature on fish populations

    USGS Publications Warehouse

    Wildhaber, Mark L.; Lamberson, Peter J.

    2004-01-01

    Various mechanisms of habitat choice in fishes based on food and/or temperature have been proposed: optimal foraging for food alone; behavioral thermoregulation for temperature alone; and behavioral energetics and discounted matching for food and temperature combined. Along with development of habitat choice mechanisms, there has been a major push to develop and apply to fish populations individual-based models that incorporate various forms of these mechanisms. However, it is not known how the wide variation in observed and hypothesized mechanisms of fish habitat choice could alter fish population predictions (e.g. growth, size distributions, etc.). We used spatially explicit, individual-based modeling to compare predicted fish populations using different submodels of patch choice behavior under various food and temperature distributions. We compared predicted growth, temperature experience, food consumption, and final spatial distribution using the different models. Our results demonstrated that the habitat choice mechanism assumed in fish population modeling simulations was critical to predictions of fish distribution and growth rates. Hence, resource managers who use modeling results to predict fish population trends should be very aware of and understand the underlying patch choice mechanisms used in their models to assure that those mechanisms correctly represent the fish populations being modeled.
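The record's point, that the assumed patch-choice rule changes predicted growth, can be shown with a minimal individual-based sketch; the patch food values, temperatures, and growth rule below are illustrative assumptions.

```python
import numpy as np

food = np.array([1.0, 2.0, 3.0])      # food intake per step in each patch (assumed)
temp = np.array([16.0, 22.0, 28.0])   # patch temperatures, deg C (assumed)
t_opt = 20.0                          # thermal optimum, deg C

def growth(f, t):
    """Toy bioenergetics: intake scaled by a Gaussian thermal performance factor."""
    return f * np.exp(-0.5 * ((t - t_opt) / 5.0) ** 2)

def simulate(rule, n_fish=200, n_steps=50):
    mass = np.ones(n_fish)
    for _ in range(n_steps):
        if rule == "food":            # optimal foraging: maximize food alone
            patch = np.argmax(food)
        else:                         # behavioral energetics: food and temperature
            patch = np.argmax(growth(food, temp))
        mass += 0.01 * growth(food[patch], temp[patch])
    return float(mass.mean())

m_food = simulate("food")
m_energetics = simulate("energetics")
```

The food-only rule sends fish to the richest but thermally poor patch, so the two rules predict different final mean masses from identical environments, which is the sensitivity the record warns managers about.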

  8. Euler Technology Assessment - SPLITFLOW Code Applications for Stability and Control Analysis on an Advanced Fighter Model Employing Innovative Control Concepts

    NASA Technical Reports Server (NTRS)

    Jordan, Keith J.

    1998-01-01

    This report documents results from the NASA-Langley sponsored Euler Technology Assessment Study conducted by Lockheed-Martin Tactical Aircraft Systems (LMTAS). The purpose of the study was to evaluate the ability of the SPLITFLOW code using viscous and inviscid flow models to predict aerodynamic stability and control of an advanced fighter model. The inviscid flow model was found to perform well at incidence angles below approximately 15 deg, but not as well at higher angles of attack. The results using a turbulent, viscous flow model matched the trends of the wind tunnel data, but did not show significant improvement over the Euler solutions. Overall, the predictions were found to be useful for stability and control design purposes.

  9. The relationship between collective efficacy and precompetitive affect in rugby players: testing Bandura's model of collective efficacy.

    PubMed

    Greenlees, I A; Nunn, R L; Graydon, J K; Maynard, I W

    1999-10-01

    This study extended research examining Bandura's (1997) proposed model of collective efficacy. Specifically, it examined the relationships between groups' collective efficacy and the precompetitive anxiety and affect they experienced. Prior to a competitive match 66 male Rugby Union footballers from 6 teams (2 university teams and 4 county league teams) completed a single-item measure of confidence in their team winning the forthcoming match, a 10-item measure of confidence in their team performing well in the forthcoming match, the modified Competitive State Anxiety Inventory-2, and the Positive and Negative Affect Schedule. Stepwise (forward) multiple regression analyses indicated that scores for collective efficacy accounted for only 6.3% of the variance in the intensities of cognitive state anxiety and only 22% of the variance in the positive affect experienced prior to the rugby match. The results indicate that concerns with the team's ability to win a match were associated with high cognitive state anxiety and that doubts regarding the team's ability to perform well were related to low positive affect. Given the magnitude of predicted variances, the findings seem to give some support to Bandura's proposal that the beliefs in collective efficacy of individuals engaged in a team task are related to precompetitive affective reactions and the experience of state anxiety.

  10. Mesoscale Particle-Based Model of Electrophoresis

    DOE PAGES

    Giera, Brian; Zepeda-Ruiz, Luis A.; Pascall, Andrew J.; ...

    2015-07-31

    Here, we develop and evaluate a semi-empirical particle-based model of electrophoresis using extensive mesoscale simulations. We parameterize the model using only measurable quantities from a broad set of colloidal suspensions with properties that span the experimentally relevant regime. With sufficient sampling, simulated diffusivities and electrophoretic velocities match predictions of the ubiquitous Stokes-Einstein and Henry equations, respectively. This agreement holds for non-polar and aqueous solvents or ionic liquid colloidal suspensions under a wide range of applied electric fields.
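
    For reference, the two benchmark relations named in the abstract can be evaluated in a few lines. A minimal Python sketch, with all particle and solvent parameters chosen as illustrative assumptions (a 50 nm-radius sphere in water at 298 K), not values from the study:

```python
import math

def stokes_einstein_diffusivity(T, eta, a):
    """Stokes-Einstein: D = kB * T / (6 * pi * eta * a) for a sphere of
    radius `a` in a solvent of viscosity `eta` at temperature `T`."""
    kB = 1.380649e-23            # Boltzmann constant, J/K
    return kB * T / (6.0 * math.pi * eta * a)

def henry_velocity(eps, zeta, eta, E, f_ka):
    """Henry: v = (2/3) * (eps * zeta / eta) * f(ka) * E, where Henry's
    function f(ka) -> 1 in the Hueckel limit (ka << 1) and 3/2 in the
    Smoluchowski limit (ka >> 1)."""
    return (2.0 / 3.0) * eps * zeta / eta * f_ka * E

# Illustrative case: 50 nm-radius particle in water at 298 K,
# zeta potential 25 mV, field 10 kV/m, Smoluchowski limit.
D = stokes_einstein_diffusivity(T=298.0, eta=8.9e-4, a=50e-9)
v = henry_velocity(eps=80 * 8.854e-12, zeta=0.025, eta=8.9e-4, E=1e4, f_ka=1.5)
```

    The simulated diffusivities and drift velocities in the study are checked against exactly these functional forms.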

  11. Mesoscale Particle-Based Model of Electrophoresis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giera, Brian; Zepeda-Ruiz, Luis A.; Pascall, Andrew J.

    Here, we develop and evaluate a semi-empirical particle-based model of electrophoresis using extensive mesoscale simulations. We parameterize the model using only measurable quantities from a broad set of colloidal suspensions with properties that span the experimentally relevant regime. With sufficient sampling, simulated diffusivities and electrophoretic velocities match predictions of the ubiquitous Stokes-Einstein and Henry equations, respectively. This agreement holds for non-polar and aqueous solvents or ionic liquid colloidal suspensions under a wide range of applied electric fields.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, William Scott

    This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.

  13. DNA methylation markers for diagnosis and prognosis of common cancers

    PubMed Central

    Hao, Xiaoke; Luo, Huiyan; Krawczyk, Michal; Wei, Wei; Wang, Wenqiu; Wang, Juan; Flagg, Ken; Hou, Jiayi; Zhang, Heng; Yi, Shaohua; Jafari, Maryam; Lin, Danni; Chung, Christopher; Caughey, Bennett A.; Li, Gen; Dhar, Debanjan; Shi, William; Zheng, Lianghong; Hou, Rui; Zhu, Jie; Zhao, Liang; Fu, Xin; Zhang, Edward; Zhang, Charlotte; Zhu, Jian-Kang; Karin, Michael; Xu, Rui-Hua; Zhang, Kang

    2017-01-01

    The ability to identify a specific cancer using minimally invasive biopsy holds great promise for improving the diagnosis, treatment selection, and prediction of prognosis in cancer. Using whole-genome methylation data from The Cancer Genome Atlas (TCGA) and machine learning methods, we evaluated the utility of DNA methylation for differentiating tumor tissue and normal tissue for four common cancers (breast, colon, liver, and lung). We identified cancer markers in a training cohort of 1,619 tumor samples and 173 matched adjacent normal tissue samples. We replicated our findings in a separate TCGA cohort of 791 tumor samples and 93 matched adjacent normal tissue samples, as well as an independent Chinese cohort of 394 tumor samples and 324 matched adjacent normal tissue samples. The DNA methylation analysis could predict cancer versus normal tissue with more than 95% accuracy in these three cohorts, demonstrating accuracy comparable to typical diagnostic methods. This analysis also correctly identified 29 of 30 colorectal cancer metastases to the liver and 32 of 34 colorectal cancer metastases to the lung. We also found that methylation patterns can predict prognosis and survival. We correlated differential methylation of CpG sites predictive of cancer with expression of associated genes known to be important in cancer biology, showing decreased expression with increased methylation, as expected. We verified gene expression profiles in a mouse model of hepatocellular carcinoma. Taken together, these findings demonstrate the utility of methylation biomarkers for the molecular characterization of cancer, with implications for diagnosis and prognosis. PMID:28652331
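
    As a toy illustration of separating tumor from normal tissue using methylation beta values, the sketch below trains a nearest-centroid rule on synthetic data; the site count, effect sizes, and classifier are assumptions for illustration only, not the study's actual machine-learning pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites = 20  # number of CpG marker sites (synthetic)

# Synthetic methylation beta values in [0, 1]: tumors hypermethylated
# (mean 0.7) and normals hypomethylated (mean 0.3) at the marker sites.
tumor = np.clip(rng.normal(0.7, 0.1, size=(100, n_sites)), 0.0, 1.0)
normal = np.clip(rng.normal(0.3, 0.1, size=(100, n_sites)), 0.0, 1.0)

# "Train" a nearest-centroid rule on the first 80 samples of each class.
centroid_t = tumor[:80].mean(axis=0)
centroid_n = normal[:80].mean(axis=0)

def predict(x):
    """1 = tumor, 0 = normal, by distance to the class centroids."""
    return int(np.linalg.norm(x - centroid_t) < np.linalg.norm(x - centroid_n))

# Evaluate on the held-out 20 samples per class.
test_X = np.vstack([tumor[80:], normal[80:]])
test_y = np.array([1] * 20 + [0] * 20)
accuracy = np.mean([predict(x) == t for x, t in zip(test_X, test_y)])
```
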

  14. A hybrid approach to estimate the complex motions of clouds in sky images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong

    Tracking the motion of clouds is essential to forecasting the weather and to predicting short-term solar energy generation. Existing techniques mainly fall into two categories: variational optical flow and block matching. In this article, we summarize recent advances in estimating cloud motion using ground-based sky imagers and quantitatively evaluate state-of-the-art approaches. We then propose a hybrid tracking framework that incorporates the strengths of both block matching and optical flow models. To validate the accuracy of the proposed approach, we introduce a series of synthetic images to simulate cloud movement and deformation, and thereafter comprehensively compare our hybrid approach with several representative tracking algorithms over both simulated and real images collected from various sites/imagers. The results show that our hybrid approach outperforms state-of-the-art models, reducing motion-estimation error relative to the ground-truth motions by at least 30% in most of the simulated image sequences. Furthermore, our hybrid model demonstrates its superior efficiency on several real cloud image datasets, lowering the Mean Absolute Error (MAE) between predicted and ground-truth images by at least 15%.
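
    A minimal version of the block-matching half of such a framework can be sketched as an exhaustive search over small offsets; this is a generic illustration of the technique, not the authors' hybrid algorithm:

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Exhaustive block matching: for each `block`-sized tile of `prev`, find
    the (dy, dx) offset within +/-`search` pixels whose tile in `curr` has the
    lowest mean absolute difference (MAD). Returns one (dy, dx) per tile."""
    H, W = prev.shape
    flow = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(H // block):
        for bx in range(W // block):
            y0, x0 = by * block, bx * block
            ref = prev[y0:y0 + block, x0:x0 + block].astype(float)
            best_mad, best_off = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if y1 < 0 or x1 < 0 or y1 + block > H or x1 + block > W:
                        continue  # candidate tile falls outside the frame
                    cand = curr[y1:y1 + block, x1:x1 + block]
                    mad = np.abs(ref - cand).mean()
                    if mad < best_mad:
                        best_mad, best_off = mad, (dy, dx)
            flow[by, bx] = best_off
    return flow

# A frame shifted by (2, 3) pixels should yield that motion for interior tiles.
rng = np.random.default_rng(1)
frame = rng.integers(0, 255, size=(32, 32))
moved = np.roll(frame, shift=(2, 3), axis=(0, 1))
flow = block_match(frame, moved)
```

    Optical-flow methods instead solve for a dense per-pixel field; the hybrid framework in the abstract combines the two ideas.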

  15. A hybrid approach to estimate the complex motions of clouds in sky images

    DOE PAGES

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; ...

    2016-09-14

    Tracking the motion of clouds is essential to forecasting the weather and to predicting short-term solar energy generation. Existing techniques mainly fall into two categories: variational optical flow and block matching. In this article, we summarize recent advances in estimating cloud motion using ground-based sky imagers and quantitatively evaluate state-of-the-art approaches. We then propose a hybrid tracking framework that incorporates the strengths of both block matching and optical flow models. To validate the accuracy of the proposed approach, we introduce a series of synthetic images to simulate cloud movement and deformation, and thereafter comprehensively compare our hybrid approach with several representative tracking algorithms over both simulated and real images collected from various sites/imagers. The results show that our hybrid approach outperforms state-of-the-art models, reducing motion-estimation error relative to the ground-truth motions by at least 30% in most of the simulated image sequences. Furthermore, our hybrid model demonstrates its superior efficiency on several real cloud image datasets, lowering the Mean Absolute Error (MAE) between predicted and ground-truth images by at least 15%.

  16. Probabilistic parameter estimation in a 2-step chemical kinetics model for n-dodecane jet autoignition

    NASA Astrophysics Data System (ADS)

    Hakim, Layal; Lacaze, Guilhem; Khalil, Mohammad; Sargsyan, Khachik; Najm, Habib; Oefelein, Joseph

    2018-05-01

    This paper demonstrates the development of a simple chemical kinetics model designed for autoignition of n-dodecane in air using Bayesian inference with a model-error representation. The model error, i.e. intrinsic discrepancy from a high-fidelity benchmark model, is represented by allowing additional variability in selected parameters. Subsequently, we quantify predictive uncertainties in the results of autoignition simulations of homogeneous reactors at realistic diesel engine conditions. We demonstrate that these predictive error bars capture model error as well. The uncertainty propagation is performed using non-intrusive spectral projection that can also be used in principle with larger scale computations, such as large eddy simulation. While the present calibration is performed to match a skeletal mechanism, it can be done with equal success using experimental data only (e.g. shock-tube measurements). Since our method captures the error associated with structural model simplifications, we believe that the optimised model could then lead to better qualified predictions of autoignition delay time in high-fidelity large eddy simulations than the existing detailed mechanisms. This methodology provides a way to reduce the cost of reaction kinetics in simulations systematically, while quantifying the accuracy of predictions of important target quantities.
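
    The non-intrusive spectral projection step mentioned above can be illustrated for a one-dimensional standard normal input, where the expansion coefficients are computed by Gauss-Hermite quadrature. The test function exp(xi) is an assumption chosen because its exact Hermite expansion is known, so the result can be checked:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def nisp_coeffs(f, order, nquad=20):
    """Non-intrusive spectral projection for a standard normal input xi:
    f(xi) ~ sum_k c_k He_k(xi), with c_k = E[f(xi) He_k(xi)] / k!,
    the expectation evaluated by Gauss-Hermite quadrature."""
    x, w = He.hermegauss(nquad)        # nodes/weights for weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)       # normalise weights to sum to 1
    fx = f(x)
    return np.array([np.dot(w, fx * He.hermeval(x, [0.0] * k + [1.0]))
                     / math.factorial(k) for k in range(order + 1)])

# For f = exp, the exact expansion has c_k = exp(1/2) / k!, so the PCE mean
# is exp(1/2) and the variance converges to e * (e - 1).
c = nisp_coeffs(np.exp, order=6)
pce_mean = c[0]
pce_var = sum(c[k] ** 2 * math.factorial(k) for k in range(1, 7))
```

    Because only evaluations of f at quadrature nodes are needed, the same recipe works when f is an expensive black-box solver, which is the point the abstract makes about reusing it with large eddy simulation.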

  17. Population-specific FST values for forensic STR markers: A worldwide survey.

    PubMed

    Buckleton, John; Curran, James; Goudet, Jérôme; Taylor, Duncan; Thiery, Alexandre; Weir, B S

    2016-07-01

    The interpretation of matching between DNA profiles of a person of interest and an item of evidence is undertaken using population genetic models to predict the probability of matching by chance. Calculation of matching probabilities is straightforward if allelic probabilities are known, or can be estimated, in the relevant population. It is more often the case, however, that the relevant population has not been sampled and allele frequencies are available only from a broader collection of populations as might be represented in a national or regional database. Variation of allele probabilities among the relevant populations is quantified by the population structure quantity FST and this quantity affects matching proportions. Matching within a population can be interpreted only with respect to matching between populations and we show here that FST can be estimated from sample allelic matching proportions within and between populations. We report such estimates from data we extracted from 250 papers in the forensic literature, representing STR profiles at up to 24 loci from nearly 500,000 people in 446 different populations. The results suggest that theta values in current forensic use do not have the buffer of conservatism often thought. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
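
    A moment-style sketch of estimating FST from allelic matching proportions, in the spirit described above; the paper's exact estimator may differ in weighting and bias correction, so treat this as a schematic:

```python
import numpy as np

def matching_proportion(alleles):
    """Proportion of matching pairs among all unordered pairs of alleles
    in one population sample (single locus)."""
    n = len(alleles)
    pairs = n * (n - 1) / 2
    matches = sum(int(alleles[i] == alleles[j])
                  for i in range(n) for j in range(i + 1, n))
    return matches / pairs

def fst_from_matching(samples):
    """Moment-style FST estimate from allelic matching proportions:
    beta = (Mw - Mb) / (1 - Mb), where Mw is the average within-population
    matching proportion and Mb the average between-population matching
    proportion (one allele drawn from each of two populations)."""
    Mw = np.mean([matching_proportion(s) for s in samples])
    between = []
    for i in range(len(samples)):
        for j in range(i + 1, len(samples)):
            a, b = samples[i], samples[j]
            between.append(np.mean(a[:, None] == b[None, :]))
    Mb = np.mean(between)
    return (Mw - Mb) / (1.0 - Mb)

# Two fixed, non-overlapping populations give FST = 1; two identical
# mixed populations give an estimate near 0.
fst_fixed = fst_from_matching([np.zeros(20, dtype=int), np.ones(20, dtype=int)])
same = np.array([0] * 10 + [1] * 10)
fst_same = fst_from_matching([same, same.copy()])
```
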

  18. Population-specific FST values for forensic STR markers: A worldwide survey

    PubMed Central

    Buckleton, John; Curran, James; Goudet, Jérôme; Taylor, Duncan; Thiery, Alexandre; Weir, B.S.

    2016-01-01

    The interpretation of matching between DNA profiles of a person of interest and an item of evidence is undertaken using population genetic models to predict the probability of matching by chance. Calculation of matching probabilities is straightforward if allelic probabilities are known, or can be estimated, in the relevant population. It is more often the case, however, that the relevant population has not been sampled and allele frequencies are available only from a broader collection of populations as might be represented in a national or regional database. Variation of allele probabilities among the relevant populations is quantified by the population structure quantity FST and this quantity affects matching proportions. Matching within a population can be interpreted only with respect to matching between populations and we show here that FST can be estimated from sample allelic matching proportions within and between populations. We report such estimates from data we extracted from 250 papers in the forensic literature, representing STR profiles at up to 24 loci from nearly 500,000 people in 446 different populations. The results suggest that theta values in current forensic use do not have the buffer of conservatism often thought. PMID:27082756

  19. Sustained sensorimotor control as intermittent decisions about prediction errors: computational framework and application to ground vehicle steering.

    PubMed

    Markkula, Gustav; Boer, Erwin; Romano, Richard; Merat, Natasha

    2018-06-01

    A conceptual and computational framework is proposed for modelling of human sensorimotor control and is exemplified for the sensorimotor task of steering a car. The framework emphasises control intermittency and extends on existing models by suggesting that the nervous system implements intermittent control using a combination of (1) motor primitives, (2) prediction of sensory outcomes of motor actions, and (3) evidence accumulation of prediction errors. It is shown that approximate but useful sensory predictions in the intermittent control context can be constructed without detailed forward models, as a superposition of simple prediction primitives, resembling neurobiologically observed corollary discharges. The proposed mathematical framework allows straightforward extension to intermittent behaviour from existing one-dimensional continuous models in the linear control and ecological psychology traditions. Empirical data from a driving simulator are used in model-fitting analyses to test some of the framework's main theoretical predictions: it is shown that human steering control, in routine lane-keeping and in a demanding near-limit task, is better described as a sequence of discrete stepwise control adjustments, than as continuous control. Results on the possible roles of sensory prediction in control adjustment amplitudes, and of evidence accumulation mechanisms in control onset timing, show trends that match the theoretical predictions; these warrant further investigation. The results for the accumulation-based model align with other recent literature, in a possibly converging case against the type of threshold mechanisms that are often assumed in existing models of intermittent control.
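
    The evidence-accumulation idea can be caricatured in a few lines: a leaky accumulator integrates prediction error and emits a discrete stepwise adjustment each time a threshold is crossed. All dynamics and parameter values below are illustrative assumptions, not the paper's fitted driver model:

```python
import numpy as np

def intermittent_control(error, dt=0.01, gain=1.0, leak=2.0, threshold=0.5):
    """Toy evidence-accumulation controller: a leaky accumulator integrates
    the prediction error; each time it crosses +/-threshold, a discrete
    stepwise adjustment is emitted and the accumulator resets."""
    A = 0.0
    adjustments = []                      # (time step, adjustment amplitude)
    for k, e in enumerate(error):
        A += dt * (gain * e - leak * A)   # leaky integration of evidence
        if abs(A) >= threshold:
            adjustments.append((k, -e))   # discrete corrective action
            A = 0.0                       # reset after each adjustment
    return adjustments

# A sustained constant error produces adjustments at regular intervals:
# intermittent stepwise control rather than continuous correction.
adj = intermittent_control(np.full(2000, 2.0))
```

    This is the qualitative signature the paper tests for in steering data: a sequence of discrete control adjustments whose timing depends on accumulated prediction error.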

  20. A Quantitative Climate-Match Score for Risk-Assessment Screening of Reptile and Amphibian Introductions

    NASA Astrophysics Data System (ADS)

    van Wilgen, Nicola J.; Roura-Pascual, Núria; Richardson, David M.

    2009-09-01

    Assessing climatic suitability provides a good preliminary estimate of the invasive potential of a species to inform risk assessment. We examined two approaches for bioclimatic modeling for 67 reptile and amphibian species introduced to California and Florida. First, we modeled the worldwide distribution of the biomes found in the introduced range to highlight similar areas worldwide from which invaders might arise. Second, we modeled potentially suitable environments for species based on climatic factors in their native ranges, using three sources of distribution data. Performance of the three datasets and both approaches were compared for each species. Climate match was positively correlated with species establishment success (maximum predicted suitability in the introduced range was more strongly correlated with establishment success than mean suitability). Data assembled from the Global Amphibian Assessment through NatureServe provided the most accurate models for amphibians, while ecoregion data compiled by the World Wide Fund for Nature yielded models which described reptile climatic suitability better than available point-locality data. We present three methods of assigning a climate-match score for use in risk assessment using both the mean and maximum climatic suitabilities. Managers may choose to use different methods depending on the stringency of the assessment and the available data, facilitating higher resolution and accuracy for herpetofaunal risk assessment. Climate-matching has inherent limitations and other factors pertaining to ecological interactions and life-history traits must also be considered for thorough risk assessment.

  1. Recommendations for the U.S. Coast Guard Survival Prediction Tool

    DTIC Science & Technology

    2009-04-01

    model. Not enough data to support modeling of how alcohol impairs swimming ability. Experimental evidence shows no significant cooling effect 50...equation. When matched for physical attributes, females cool more quickly than males due to lower metabolic response and greater surface-area-to-mass...April 2009 However, the average female has about 10% more body fat than the average male so, on average, males cool faster than females. (Tipton

  2. The evolution of imperfect floral mimicry

    PubMed Central

    Vereecken, Nicolas J.; Schiestl, Florian P.

    2008-01-01

    The theory of mimicry predicts that selection favors signal refinement in mimics to optimally match the signals released by their specific model species. We provide here chemical and behavioral evidence that a sexually deceptive orchid benefits from its imperfect mimicry of its co-occurring and specific bee model by triggering a stronger response in male bees, which react more intensively to the similar, but novel, scent stimulus provided by the orchid. PMID:18508972

  3. The evolution of imperfect floral mimicry.

    PubMed

    Vereecken, Nicolas J; Schiestl, Florian P

    2008-05-27

    The theory of mimicry predicts that selection favors signal refinement in mimics to optimally match the signals released by their specific model species. We provide here chemical and behavioral evidence that a sexually deceptive orchid benefits from its imperfect mimicry of its co-occurring and specific bee model by triggering a stronger response in male bees, which react more intensively to the similar, but novel, scent stimulus provided by the orchid.

  4. Wettability of graphitic-carbon and silicon surfaces: MD modeling and theoretical analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G. P.

    2015-07-28

    The wettability of graphitic carbon and silicon surfaces was numerically and theoretically investigated. A multi-response method has been developed for the analysis of conventional molecular dynamics (MD) simulations of droplet wettability. The contact angle and indicators of the quality of the computations are tracked as a function of the data sets analyzed over time. This method of analysis allows accurate calculations of the contact angle obtained from the MD simulations. Analytical models were also developed for the calculation of the work of adhesion using the mean-field theory, accounting for the interfacial entropy changes. A calibration method is proposed to provide better predictions of the respective contact angles under different solid-liquid interaction potentials. Estimations of the binding energy between a water monomer and graphite match those previously reported. In addition, a breakdown in the relationship between the binding energy and the contact angle was observed. The macroscopic contact angles obtained from the MD simulations were found to match those predicted by the mean-field model for graphite under different wettability conditions, as well as the contact angles of Si(100) and Si(111) surfaces. Finally, an assessment of the effect of the Lennard-Jones cutoff radius was conducted to provide guidelines for future comparisons between numerical simulations and analytical models of wettability.
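
    One standard link between the work of adhesion and the contact angle is the Young-Dupré relation, W_SL = gamma_LV * (1 + cos theta). A small sketch with assumed illustrative values; the paper's mean-field model includes entropy corrections beyond this simple form:

```python
import math

def contact_angle_from_adhesion(W_sl, gamma_lv):
    """Young-Dupre relation: W_SL = gamma_LV * (1 + cos(theta)).
    Returns the contact angle in degrees; physical inputs satisfy
    0 <= W_SL <= 2 * gamma_LV (cosine clamped for rounding safety)."""
    cos_theta = W_sl / gamma_lv - 1.0
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

# Water at room temperature (gamma_LV ~ 72 mJ/m^2) with an assumed
# solid-liquid work of adhesion of 100 mJ/m^2:
theta = contact_angle_from_adhesion(100e-3, 72e-3)
```

    The limits behave as expected: W_SL = 2*gamma_LV gives complete wetting (theta = 0) and W_SL = 0 gives theta = 180 degrees.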

  5. Integrated Physics-based Modeling and Experiments for Improved Prediction of Combustion Dynamics in Low-Emission Systems

    NASA Technical Reports Server (NTRS)

    Anderson, William E.; Lucht, Robert P.; Mongia, Hukam

    2015-01-01

    Concurrent simulation and experiment was undertaken to assess the ability of a hybrid RANS-LES model to predict combustion dynamics in a single-element lean direct-inject (LDI) combustor showing self-excited instabilities. High frequency pressure modes produced by Fourier and modal decomposition analysis were compared quantitatively, and trends with equivalence ratio and inlet temperature were compared qualitatively. High frequency OH PLIF and PIV measurements were also taken. Submodels for chemical kinetics and primary and secondary atomization were also tested against the measured behavior. For a point-wise comparison, the amplitudes matched within a factor of two. The dependence on equivalence ratio was matched. Preliminary results from simulation using an 18-reaction kinetics model indicated instability amplitudes closer to measurement. Analysis of the simulations suggested a band of modes around 1400 Hz were due to a vortex bubble breakdown and a band of modes around 6 kHz were due to a precessing vortex core hydrodynamic instability. The primary needs are directly coupled and validated ab initio models of the atomizer free surface flow and the primary atomization processes, and more detailed study of the coupling between the 3D swirling flow and the local thermoacoustics in the diverging venturi section.

  6. 2D imaging of helium ion velocity in the DIII-D divertor

    NASA Astrophysics Data System (ADS)

    Samuell, C. M.; Porter, G. D.; Meyer, W. H.; Rognlien, T. D.; Allen, S. L.; Briesemeister, A.; Mclean, A. G.; Zeng, L.; Jaervinen, A. E.; Howard, J.

    2018-05-01

    Two-dimensional imaging of parallel ion velocities is compared to fluid modeling simulations to understand the role of ions in determining divertor conditions and benchmark the UEDGE fluid modeling code. Pure helium discharges are used so that spectroscopic He+ measurements represent the main-ion population at small electron temperatures. Electron temperatures and densities in the divertor match simulated values to within about 20%-30%, establishing the experiment/model match as being at least as good as those normally obtained in the more regularly simulated deuterium plasmas. He+ brightness (HeII) comparison indicates that the degree of detachment is captured well by UEDGE, principally due to the inclusion of E ×B drifts. Tomographically inverted Coherence Imaging Spectroscopy measurements are used to determine the He+ parallel velocities which display excellent agreement between the model and the experiment near the divertor target where He+ is predicted to be the main-ion species and where electron-dominated physics dictates the parallel momentum balance. Upstream near the X-point where He+ is a minority species and ion-dominated physics plays a more important role, there is an underestimation of the flow velocity magnitude by a factor of 2-3. These results indicate that more effort is required to be able to correctly predict ion momentum in these challenging regimes.

  7. Explaining neural signals in human visual cortex with an associative learning model.

    PubMed

    Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias

    2012-08-01

    "Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.

  8. A simple-source model of military jet aircraft noise

    NASA Astrophysics Data System (ADS)

    Morgan, Jessica; Gee, Kent L.; Neilsen, Tracianne; Wall, Alan T.

    2010-10-01

    The jet plumes produced by military jet aircraft radiate significant amounts of noise. A need to better understand the characteristics of the turbulence-induced aeroacoustic sources has motivated the present study. The purpose of the study is to develop a simple-source model of jet noise that can be compared to the measured data. The study is based on acoustic data collected near a tied-down F-22 Raptor. The simplest model consisted of adjusting the origin of a monopole above a rigid planar reflector until the locations of the predicted and measured interference nulls matched. The model has since evolved into an extended Rayleigh distribution of partially correlated monopoles, which fits the measured data from the F-22 significantly better. The results and basis for the model match the current prevailing theory that jet noise consists of both correlated and uncorrelated sources. In addition, this simple-source model conforms to the theory that the peak source location moves upstream with increasing frequency and lower engine conditions.
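
    The simplest model described, a monopole above a rigid planar reflector, can be sketched with an in-phase image source; interference nulls appear where the direct and reflected paths differ by odd half wavelengths. Geometry, frequency, and heights below are illustrative assumptions, not the F-22 measurement configuration:

```python
import numpy as np

def pressure_above_rigid_plane(x, z, src_h, k):
    """Complex pressure from a monopole at height `src_h` above a rigid plane
    (z = 0), modelled as the source plus an in-phase image source at -src_h."""
    r1 = np.hypot(x, z - src_h)    # direct path
    r2 = np.hypot(x, z + src_h)    # ground-reflected path (image source)
    return np.exp(1j * k * r1) / r1 + np.exp(1j * k * r2) / r2

# Sweep a "microphone" line at 2 m height for a 500 Hz tone and look for
# interference nulls in the relative sound pressure level.
k = 2.0 * np.pi * 500.0 / 343.0
x = np.linspace(1.0, 30.0, 2000)
p = pressure_above_rigid_plane(x, z=2.0, src_h=1.5, k=k)
spl_rel = 20.0 * np.log10(np.abs(p) / np.abs(p).max())  # dB re maximum
```

    Moving the assumed source height shifts the null locations, which is exactly the degree of freedom the abstract describes adjusting to match the measured nulls.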

  9. The prediction in computer color matching of dentistry based on GA+BP neural network.

    PubMed

    Li, Haisheng; Lai, Long; Chen, Li; Lu, Cheng; Cai, Qiang

    2015-01-01

    Although the use of computer color matching can reduce the influence of subjective factors by technicians, matching the color of a natural tooth with a ceramic restoration is still one of the most challenging topics in esthetic prosthodontics. The back-propagation neural network (BPNN) has already been introduced into computer color matching in dentistry, but it has disadvantages such as instability and low accuracy. In our study, we adopt a genetic algorithm (GA) to optimize the initial weights and threshold values in the BPNN to improve matching precision. To our knowledge, this is the first study to combine BPNN with GA for computer color matching in dentistry. Extensive experiments demonstrate that the proposed method improves the precision and prediction robustness of color matching in restorative dentistry.
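
    A minimal sketch of the GA+BP idea: a genetic algorithm evolves candidate initial weight vectors for a small back-propagation network, which is then fine-tuned by gradient descent. The network size, toy data, and all hyper-parameters are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "color matching" data: 3 formulation parameters -> 3 color coordinates.
X = rng.uniform(size=(64, 3))
W_true = np.array([[1.5, -0.8, 0.3], [0.9, 1.2, -1.5], [-0.4, 0.7, 1.8]])
T = np.tanh(X @ W_true)

H = 8                         # hidden units in a 3-8-3 network
n_w = 3 * H + H + H * 3 + 3   # total weights and biases

def unpack(w):
    W1 = w[:3 * H].reshape(3, H)
    b1 = w[3 * H:4 * H]
    W2 = w[4 * H:4 * H + 3 * H].reshape(H, 3)
    b2 = w[-3:]
    return W1, b1, W2, b2

def mse(w):
    W1, b1, W2, b2 = unpack(w)
    Y = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((Y - T) ** 2)

# --- GA stage: evolve candidate *initial* weight vectors ---
pop = rng.normal(0.0, 0.5, size=(30, n_w))
for gen in range(40):
    fitness = np.array([mse(w) for w in pop])
    parents = pop[np.argsort(fitness)[:10]]          # truncation selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random(n_w) < 0.5                 # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0.0, 0.05, n_w))
    pop = np.vstack([parents, children])
best = min(pop, key=mse)
ga_mse = mse(best)

# --- BP stage: gradient descent from the GA-selected initial weights ---
w = best.copy()
lr = 0.5
for step in range(1500):
    W1, b1, W2, b2 = unpack(w)
    Hid = np.tanh(X @ W1 + b1)
    Y = Hid @ W2 + b2
    dY = 2.0 * (Y - T) / T.size                      # d(mse)/dY
    gW2, gb2 = Hid.T @ dY, dY.sum(axis=0)
    dHid = (dY @ W2.T) * (1.0 - Hid ** 2)            # backprop through tanh
    gW1, gb1 = X.T @ dHid, dHid.sum(axis=0)
    w -= lr * np.concatenate([gW1.ravel(), gb1, gW2.ravel(), gb2])
final_mse = mse(w)
```

    The design point is that the GA searches globally for a good starting basin while backpropagation refines locally, addressing the initialization sensitivity the abstract attributes to plain BPNN.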

  10. Baseline Characteristics Predicting Very Good Outcome of Allogeneic Hematopoietic Cell Transplantation in Young Patients With High Cytogenetic Risk Chronic Lymphocytic Leukemia - A Retrospective Analysis From the Chronic Malignancies Working Party of the EBMT.

    PubMed

    van Gelder, Michel; Ziagkos, Dimitris; de Wreede, Liesbeth; van Biezen, Anja; Dreger, Peter; Gramatzki, Martin; Stelljes, Matthias; Andersen, Niels Smedegaard; Schaap, Nicolaas; Vitek, Antonin; Beelen, Dietrich; Lindström, Vesa; Finke, Jürgen; Passweg, Jacob; Eder, Matthias; Machaczka, Maciej; Delgado, Julio; Krüger, William; Raida, Luděk; Socié, Gerard; Jindra, Pavel; Afanasyev, Boris; Wagner, Eva; Chalandon, Yves; Henseler, Anja; Schoenland, Stefan; Kröger, Nicolaus; Schetelig, Johannes

    2017-10-01

    Patients with genetically high-risk relapsed/refractory chronic lymphocytic leukemia have shorter median progression-free survival (PFS) with kinase- and BCL2-inhibitors (KI, BCL2i). Allogeneic hematopoietic stem cell transplantation (alloHCT) may result in sustained PFS, especially in younger patients because of its age-dependent non-relapse mortality (NRM) risk, but outcome data are lacking for this population. Risk factors for 2-year NRM and 8-year PFS were identified in patients < 50 years in an updated European Society for Blood and Marrow Transplantation registry cohort (n = 197; median follow-up, 90.4 months) by Cox regression modeling, and predicted probabilities of NRM and PFS of 2 reference patients with favorable or unfavorable characteristics were plotted. Predictors for poor 8-year PFS were no remission at the time of alloHCT (hazard ratio [HR], 1.7; 95% confidence interval [CI], 1.1-2.5) and partially human leukocyte antigen (HLA)-mismatched unrelated donor (HR, 2.8; 95% CI, 1.5-5.2). The latter variable also predicted a higher risk of 2-year NRM (HR, 4.0; 95% CI, 1.4-11.6) compared with HLA-matched sibling donors. Predicted 2-year NRM and 8-year PFS of a high cytogenetic risk (del(17p) and/or del(11q)) patient in remission with a matched related donor were 12% (95% CI, 3%-22%) and 54% (95% CI, 38%-69%), and for an unresponsive patient with a female partially HLA-matched unrelated donor 37% (95% CI, 12%-62%) and 38% (95% CI, 13%-63%). Low predicted NRM and high 8-year PFS in favorable transplant high cytogenetic risk patients compare favorably with outcomes with KI or BCL2i. Taking into account the amount of uncertainty for predicting survival after alloHCT and after sequential administration of KI and BCL2i, alloHCT remains a valid option for younger patients with high cytogenetic risk chronic lymphocytic leukemia with a well-HLA-matched donor. Copyright © 2017 Elsevier Inc. All rights reserved.
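
    Hazard ratios from a Cox model combine with a baseline survival via S(t | x) = S0(t) ** exp(linear predictor). The sketch below uses the hazard ratios reported in the abstract, but the 8-year baseline PFS of 0.60 is an assumed illustration, not the paper's fitted baseline:

```python
import math

def predicted_survival(s0, hazard_ratios):
    """Cox-model prediction: S(t | x) = S0(t) ** exp(lp), where the linear
    predictor lp is the sum of log hazard ratios of the risk factors present
    and `s0` is the baseline survival at the time horizon of interest."""
    lp = sum(math.log(hr) for hr in hazard_ratios)
    return s0 ** math.exp(lp)

# Reference patients built from the reported HRs: no remission at alloHCT
# (HR 1.7) and a partially HLA-mismatched unrelated donor (HR 2.8).
pfs_favorable = predicted_survival(0.60, [])            # no adverse factors
pfs_unfavorable = predicted_survival(0.60, [1.7, 2.8])  # both adverse factors
```

    This multiplicative structure is why the two reported reference patients have such different predicted 8-year PFS despite sharing the same cytogenetic risk.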

  11. Early Urinary Markers of Diabetic Kidney Disease: A Nested Case-Control Study From the Diabetes Control and Complications Trial (DCCT)

    PubMed Central

    Kern, Elizabeth O; Erhard, Penny; Sun, Wanjie; Genuth, Saul; Weiss, Miriam F

    2010-01-01

    Background Urinary markers were tested as predictors of macroalbuminuria or microalbuminuria in type 1 diabetes. Study Design Nested case-control study of participants in the Diabetes Control and Complications Trial (DCCT). Setting & Participants Eighty-seven cases of microalbuminuria were matched to 174 controls in a 1:2 ratio, while 4 cases were matched to 4 controls in a 1:1 ratio, resulting in 91 cases and 178 controls for microalbuminuria. Fifty-five cases of macroalbuminuria were matched to 110 controls in a 1:2 ratio. Controls were free of micro/macroalbuminuria when their matching case first developed micro/macroalbuminuria. Predictors Urinary N-acetyl-β-D-glucosaminidase, pentosidine, AGE fluorescence, albumin excretion rate (AER). Outcomes Incident microalbuminuria (two consecutive annual AER > 40 but <= 300 mg/day), or macroalbuminuria (AER > 300 mg/day). Measurements Stored urine samples from DCCT entry, and 1–9 years later when macroalbuminuria or microalbuminuria occurred, were measured for the lysosomal enzyme, N-acetyl-β-D-glucosaminidase, and the advanced glycosylation end-products (AGEs) pentosidine and AGE-fluorescence. AER and adjustor variables were obtained from the DCCT. Results Sub-microalbuminuric levels of AER at baseline independently predicted microalbuminuria (adjusted OR 1.83; p<.001) and macroalbuminuria (adjusted OR 1.82; p<.001). Baseline N-acetyl-β-D-glucosaminidase independently predicted macroalbuminuria (adjusted OR 2.26; p<.001), and microalbuminuria (adjusted OR 1.86; p<.001). Baseline pentosidine predicted macroalbuminuria (adjusted OR 6.89; p=.002). Baseline AGE fluorescence predicted microalbuminuria (adjusted OR 1.68; p=.02). However, adjusted for N-acetyl-β-D-glucosaminidase, pentosidine and AGE-fluorescence lost predictive association with macroalbuminuria and microalbuminuria, respectively.
Limitations Use of angiotensin-converting enzyme inhibitors was not directly ascertained, although their use was proscribed during the DCCT. Conclusions Early in type 1 diabetes, repeated measurements of AER and urinary NAG may identify individuals susceptible to future diabetic nephropathy. Combining the two markers may yield a better predictive model than either one alone. Renal tubule stress may be more severe, reflecting abnormal renal tubule processing of AGE-modified proteins, among individuals susceptible to diabetic nephropathy. PMID:20138413

  12. Computation of Alfvén eigenmode stability and saturation through a reduced fast ion transport model in the TRANSP tokamak transport code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Podestà, M.; Gorelenkova, M.; Gorelenkov, N. N.

    Alfvénic instabilities (AEs) are well known as a potential cause of enhanced fast ion transport in fusion devices. Given a specific plasma scenario, quantitative predictions of (i) the expected unstable AE spectrum and (ii) the resulting fast ion transport are required to prevent or mitigate the AE-induced degradation in fusion performance. Reduced models are becoming an attractive tool for analyzing existing scenarios as well as for scenario prediction in time-dependent simulations. In this work, a neutral-beam-heated NSTX discharge is used as a reference to illustrate the potential of a reduced fast ion transport model, known as the kick model, that has recently been implemented for interpretive and predictive analysis within the framework of the time-dependent tokamak transport code TRANSP. Predictive capabilities for AE stability and saturation amplitude are first assessed, based on given thermal plasma profiles only. Predictions are then compared to experimental results, and the interpretive capabilities of the model are further discussed. Overall, the reduced model captures the main properties of the instabilities and the associated effects on the fast ion population. Finally, additional information from the actual experiment enables further tuning of the model's parameters to achieve a close match with measurements.

  13. Computation of Alfvén eigenmode stability and saturation through a reduced fast ion transport model in the TRANSP tokamak transport code

    DOE PAGES

    Podestà, M.; Gorelenkova, M.; Gorelenkov, N. N.; ...

    2017-07-20

    Alfvénic instabilities (AEs) are well known as a potential cause of enhanced fast ion transport in fusion devices. Given a specific plasma scenario, quantitative predictions of (i) the expected unstable AE spectrum and (ii) the resulting fast ion transport are required to prevent or mitigate the AE-induced degradation in fusion performance. Reduced models are becoming an attractive tool for analyzing existing scenarios as well as for scenario prediction in time-dependent simulations. In this work, a neutral-beam-heated NSTX discharge is used as a reference to illustrate the potential of a reduced fast ion transport model, known as the kick model, that has recently been implemented for interpretive and predictive analysis within the framework of the time-dependent tokamak transport code TRANSP. Predictive capabilities for AE stability and saturation amplitude are first assessed, based on given thermal plasma profiles only. Predictions are then compared to experimental results, and the interpretive capabilities of the model are further discussed. Overall, the reduced model captures the main properties of the instabilities and the associated effects on the fast ion population. Finally, additional information from the actual experiment enables further tuning of the model's parameters to achieve a close match with measurements.

  14. Matching next-to-leading order predictions to parton showers in supersymmetric QCD

    DOE PAGES

    Degrande, Céline; Fuks, Benjamin; Hirschi, Valentin; ...

    2016-02-03

    We present a fully automated framework based on the FeynRules and MadGraph5_aMC@NLO programs that allows for accurate simulations of supersymmetric QCD processes at the LHC. Starting directly from a model Lagrangian that features squark and gluino interactions, event generation is achieved at the next-to-leading order in QCD, matching short-distance events to parton showers and including the subsequent decay of the produced supersymmetric particles. As an application, we study the impact of higher-order corrections in gluino pair-production in a simplified benchmark scenario inspired by current gluino LHC searches.

  15. Matching next-to-leading order predictions to parton showers in supersymmetric QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Degrande, Céline; Fuks, Benjamin; Hirschi, Valentin

    We present a fully automated framework based on the FeynRules and MadGraph5_aMC@NLO programs that allows for accurate simulations of supersymmetric QCD processes at the LHC. Starting directly from a model Lagrangian that features squark and gluino interactions, event generation is achieved at the next-to-leading order in QCD, matching short-distance events to parton showers and including the subsequent decay of the produced supersymmetric particles. As an application, we study the impact of higher-order corrections in gluino pair-production in a simplified benchmark scenario inspired by current gluino LHC searches.

  16. A mode matching method for modeling dissipative silencers lined with poroelastic materials and containing mean flow.

    PubMed

    Nennig, Benoit; Perrey-Debain, Emmanuel; Ben Tahar, Mabrouk

    2010-12-01

    A mode matching method for predicting the transmission loss of a cylindrical shaped dissipative silencer partially filled with a poroelastic foam is developed. The model takes into account the solid phase elasticity of the sound-absorbing material, the mounting conditions of the foam, and the presence of a uniform mean flow in the central airway. The novelty of the proposed approach lies in the fact that guided modes of the silencer have a composite nature containing both compressional and shear waves as opposed to classical mode matching methods in which only acoustic pressure waves are present. Results presented demonstrate good agreement with finite element calculations provided a sufficient number of modes are retained. In practice, it is found that the time for computing the transmission loss over a large frequency range takes a few minutes on a personal computer. This makes the present method a reliable tool for tackling dissipative silencers lined with poroelastic materials.

  17. Characterizing the genetic structure of a forensic DNA database using a latent variable approach.

    PubMed

    Kruijver, Maarten

    2016-07-01

    Several problems in forensic genetics require a representative model of a forensic DNA database. Obtaining an accurate representation of the offender database can be difficult, since databases typically contain groups of persons with unregistered ethnic origins in unknown proportions. We propose to estimate the allele frequencies of the subpopulations comprising the offender database and their proportions from the database itself using a latent variable approach. We present a model for which parameters can be estimated using the expectation maximization (EM) algorithm. This approach does not rely on relatively small and possibly unrepresentative population surveys, but is driven by the actual genetic composition of the database only. We fit the model to a snapshot of the Dutch offender database (2014), which contains close to 180,000 profiles, and find that three subpopulations suffice to describe a large fraction of the heterogeneity in the database. We demonstrate the utility and reliability of the approach with three applications. First, we use the model to predict the number of false leads obtained in database searches. We assess how well the model predicts the number of false leads obtained in mock searches in the Dutch offender database, both for the case of familial searching for first degree relatives of a donor and searching for contributors to three-person mixtures. Second, we study the degree of partial matching between all pairs of profiles in the Dutch database and compare this to what is predicted using the latent variable approach. Third, we use the model to provide evidence to support that the Dutch practice of estimating match probabilities using the Balding-Nichols formula with a native Dutch reference database and θ=0.03 is conservative. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
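
    The EM approach described above can be illustrated with a toy version of the latent-class idea: synthetic genotype profiles drawn from two subpopulations with different allele frequencies, with mixing proportions and frequencies recovered from the "database" alone. All data and dimensions here are invented for illustration, and the sketch assumes a simple binomial genotype likelihood rather than the full forensic STR model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "database": genotypes (0/1/2 allele copies) at L biallelic loci for
# N profiles, drawn from K = 2 latent subpopulations (all values invented).
N, L, K = 2000, 20, 2
true_pi = np.array([0.7, 0.3])                  # subpopulation proportions
true_p = rng.uniform(0.1, 0.9, size=(K, L))     # per-locus allele frequencies
z = rng.choice(K, size=N, p=true_pi)            # latent subpopulation labels
X = rng.binomial(2, true_p[z])                  # N x L genotype matrix

def em_mixture(X, K, iters=300):
    """EM for a mixture of binomial genotype distributions."""
    N, L = X.shape
    pi = np.full(K, 1.0 / K)
    p = rng.uniform(0.2, 0.8, size=(K, L))
    for _ in range(iters):
        # E-step: posterior responsibility of each subpopulation per profile
        # (the binomial coefficient is constant across components, so omitted).
        logw = np.log(pi)[None, :] + (
            X[:, None, :] * np.log(p)[None]
            + (2 - X[:, None, :]) * np.log(1 - p)[None]
        ).sum(axis=2)
        logw -= logw.max(axis=1, keepdims=True)
        w = np.exp(logw)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing proportions and allele frequencies
        pi = w.mean(axis=0)
        p = ((w.T @ X) / (2 * w.sum(axis=0)[:, None])).clip(1e-6, 1 - 1e-6)
    return pi, p

pi_hat, p_hat = em_mixture(X, K)
```

    With well-separated components, the estimated mixing proportions approach the true 0.7/0.3 split without any external population survey, which is the core of the database-driven argument.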

  18. Biased ART: a neural architecture that shifts attention toward previously disregarded features following an incorrect prediction.

    PubMed

    Carpenter, Gail A; Gaddam, Sai Chaitanya

    2010-04-01

    Memories in Adaptive Resonance Theory (ART) networks are based on matched patterns that focus attention on those portions of bottom-up inputs that match active top-down expectations. While this learning strategy has proved successful for both brain models and applications, computational examples show that attention to early critical features may later distort memory representations during online fast learning. For supervised learning, biased ARTMAP (bARTMAP) solves the problem of over-emphasis on early critical features by directing attention away from previously attended features after the system makes a predictive error. Small-scale, hand-computed analog and binary examples illustrate key model dynamics. Two-dimensional simulation examples demonstrate the evolution of bARTMAP memories as they are learned online. Benchmark simulations show that featural biasing also improves performance on large-scale examples. One example, which predicts movie genres and is based, in part, on the Netflix Prize database, was developed for this project. Both first principles and consistent performance improvements on all simulation studies suggest that featural biasing should be incorporated by default in all ARTMAP systems. Benchmark datasets and bARTMAP code are available from the CNS Technology Lab Website: http://techlab.bu.edu/bART/. Copyright 2009 Elsevier Ltd. All rights reserved.

  19. Stochastic point-source modeling of ground motions in the Cascadia region

    USGS Publications Warehouse

    Atkinson, G.M.; Boore, D.M.

    1997-01-01

    A stochastic model is used to develop preliminary ground motion relations for the Cascadia region for rock sites. The model parameters are derived from empirical analyses of seismographic data from the Cascadia region. The model is based on a Brune point source characterized by a stress parameter of 50 bars. The model predictions are compared to ground-motion data from the Cascadia region and to data from large earthquakes in other subduction zones. The point-source simulations match the observations from moderate events, but diverge from the observations for large-magnitude events at distances beyond 100 km. The discrepancy at large magnitudes suggests that further work on modeling finite-fault effects and regional attenuation is warranted. In the meantime, the preliminary equations are satisfactory for predicting motions from events of M < 7 and provide conservative estimates of motions from larger events at distances less than 100 km.
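
    A stochastic point-source model of this kind is built around the Brune omega-squared source spectrum with a stress parameter (here 50 bars, as in the study). The sketch below computes the corner frequency and the source acceleration spectral shape from standard published relations (Brune corner frequency, Hanks-Kanamori moment); the shear-wave velocity and the omitted path/site factors are illustrative assumptions, not values from the paper.

```python
import numpy as np

def moment_from_magnitude(M):
    # Hanks & Kanamori (1979): log10(M0) = 1.5 M + 16.05, M0 in dyne-cm.
    return 10.0 ** (1.5 * M + 16.05)

def corner_frequency(M0, stress_bars, beta_km_s=3.7):
    # Brune corner frequency (Hz) for stress drop in bars, moment in dyne-cm,
    # shear-wave velocity in km/s (beta here is an assumed crustal value).
    return 4.9e6 * beta_km_s * (stress_bars / M0) ** (1.0 / 3.0)

def source_acceleration_spectrum(f, M0, fc):
    # Omega-squared source shape (constant path/site factors omitted):
    # grows as f^2 below the corner frequency, flat above it.
    return (2.0 * np.pi * f) ** 2 * M0 / (1.0 + (f / fc) ** 2)

M0 = moment_from_magnitude(6.5)
fc = corner_frequency(M0, stress_bars=50.0)   # stress parameter from the study
```

    Larger moments give lower corner frequencies, which is why the spectral shape, and hence the simulated motions, scale with magnitude.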

  20. Thermal Testing and Model Correlation for Advanced Topographic Laser Altimeter Instrument (ATLAS)

    NASA Technical Reports Server (NTRS)

    Patel, Deepak

    2016-01-01

    The Advanced Topographic Laser Altimeter System (ATLAS), part of the Ice, Cloud and land Elevation Satellite-2 (ICESat-2), is an upcoming Earth science mission focusing on the effects of climate change. The flight instrument passed all environmental testing at GSFC (Goddard Space Flight Center) and is now ready to be shipped to the spacecraft vendor for integration and testing. This paper covers the analysis leading up to the test setup for ATLAS thermal testing, as well as correlation of the thermal model to flight predictions. The test setup analysis section covers areas where ATLAS could not meet flight-like conditions and the associated limitations. The model correlation section walks through the changes that had to be made to the thermal model in order to match test results. The correlated model will then be integrated with the spacecraft model for on-orbit predictions.

  1. A General, Synthetic Model for Predicting Biodiversity Gradients from Environmental Geometry.

    PubMed

    Gross, Kevin; Snyder-Beattie, Andrew

    2016-10-01

    Latitudinal and elevational biodiversity gradients fascinate ecologists, and have inspired dozens of explanations. The geometry of the abiotic environment is sometimes thought to contribute to these gradients, yet evaluations of geometric explanations are limited by a fragmented understanding of the diversity patterns they predict. This article presents a mathematical model that synthesizes multiple pathways by which environmental geometry can drive diversity gradients. The model characterizes species ranges by their environmental niches and limits on range sizes and places those ranges onto the simplified geometries of a sphere or cone. The model predicts nuanced and realistic species-richness gradients, including latitudinal diversity gradients with tropical plateaus and mid-latitude inflection points and elevational diversity gradients with low-elevation diversity maxima. The model also illustrates the importance of a mid-environment effect that augments species richness at locations with intermediate environments. Model predictions match multiple empirical biodiversity gradients, depend on ecological traits in a testable fashion, and formally synthesize elements of several geometric models. Together, these results suggest that previous assessments of geometric hypotheses should be reconsidered and that environmental geometry may play a deeper role in driving biodiversity gradients than is currently appreciated.
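
    The geometric mechanism at the heart of such models, a bounded domain constraining where species ranges can sit, can be shown with a minimal one-dimensional toy: random ranges confined to a bounded gradient produce a richness peak at intermediate positions. This is only a sketch of the mid-domain idea, not the paper's sphere/cone model with environmental niches; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# 5000 species, each with a random range (midpoint, extent) constrained to fit
# inside a bounded 1-D gradient [0, 1] -- a toy stand-in for a latitudinal or
# elevational domain.
S = 5000
extent = rng.uniform(0.0, 1.0, S)
mid = rng.uniform(extent / 2.0, 1.0 - extent / 2.0)  # keep each range inside
lo, hi = mid - extent / 2.0, mid + extent / 2.0

# Species richness at each location = number of overlapping ranges.
x = np.linspace(0.0, 1.0, 201)
richness = ((lo[None, :] <= x[:, None]) & (x[:, None] <= hi[None, :])).sum(axis=1)
```

    Richness peaks near the domain center and falls toward the boundaries purely because of the geometric constraint, with no environmental gradient at all.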

  2. Broadband moth-eye antireflection coatings on silicon

    NASA Astrophysics Data System (ADS)

    Sun, Chih-Hung; Jiang, Peng; Jiang, Bin

    2008-02-01

    We report a bioinspired templating technique for fabricating broadband antireflection coatings that mimic antireflective moth eyes. Wafer-scale, subwavelength-structured nipple arrays are directly patterned on silicon using spin-coated silica colloidal monolayers as etching masks. The templated gratings exhibit excellent broadband antireflection properties, and the normal-incidence specular reflection matches the theoretical prediction of a rigorous coupled-wave analysis (RCWA) model. We further demonstrate that two common simulation methods, RCWA and thin-film multilayer models, generate almost identical predictions for the templated nipple arrays. This simple bottom-up technique is compatible with standard microfabrication, promising for reducing the manufacturing cost of crystalline silicon solar cells.
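
    The thin-film multilayer model mentioned above can be sketched with the standard characteristic-matrix method for normal-incidence reflectance. The substrate index of 3.5 is a rough, non-absorbing stand-in for silicon (real silicon is dispersive and absorbing); the quarter-wave example simply verifies the textbook result that a single layer of index sqrt(n0*ns) nulls the reflection at the design wavelength.

```python
import numpy as np

def reflectance(wavelength, layers, n0=1.0, ns=3.5):
    """Normal-incidence reflectance of a thin-film stack on a substrate.

    layers: (refractive_index, thickness) pairs, incident side first.
    ns = 3.5 is a rough, non-absorbing stand-in for silicon's index.
    """
    M = np.eye(2, dtype=complex)
    for n1, d in layers:
        delta = 2.0 * np.pi * n1 * d / wavelength   # phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n1],
                          [1j * n1 * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, ns])
    r = (n0 * B - C) / (n0 * B + C)                 # amplitude reflection
    return abs(r) ** 2

lam0 = 600e-9
n1 = np.sqrt(3.5)                       # ideal single-layer AR index: sqrt(n0*ns)
quarter_wave = [(n1, lam0 / (4.0 * n1))]
```

    Bare "silicon" reflects about 31% at normal incidence in this model, while the ideal quarter-wave layer drives the reflectance to zero at the design wavelength; graded moth-eye structures extend such suppression over a broad band.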

  3. The Identities Hidden in the Matching Laws, and Their Uses

    ERIC Educational Resources Information Center

    Thorne, David R.

    2010-01-01

    Various theoretical equations have been proposed to predict response rate as a function of the rate of reinforcement. If both the rate and probability of reinforcement are considered, a simple identity, defining equation, or "law" holds. This identity places algebraic constraints on the allowable forms of our mathematical models and can help…

  4. In Vivo Validation of Numerical Prediction for Turbulence Intensity in an Aortic Coarctation

    PubMed Central

    Arzani, Amirhossein; Dyverfeldt, Petter; Ebbers, Tino; Shadden, Shawn C.

    2013-01-01

    This paper compares numerical predictions of turbulence intensity with in vivo measurement. Magnetic resonance imaging (MRI) was carried out on a 60-year-old female with a restenosed aortic coarctation. Time-resolved three-directional phase-contrast (PC) MRI data was acquired to enable turbulence intensity estimation. A contrast-enhanced MR angiography (MRA) and a time-resolved 2D PCMRI measurement were also performed to acquire data needed to perform subsequent image-based computational fluid dynamics (CFD) modeling. A 3D model of the aortic coarctation and surrounding vasculature was constructed from the MRA data, and physiologic boundary conditions were modeled to match 2D PCMRI and pressure pulse measurements. Blood flow velocity data was subsequently obtained by numerical simulation. Turbulent kinetic energy (TKE) was computed from the resulting CFD data. Results indicate relative agreement (error ≈10%) between the in vivo measurements and the CFD predictions of TKE. The discrepancies in modeled vs. measured TKE values were within expectations due to modeling and measurement errors. PMID:22016327
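
    Turbulent kinetic energy as estimated from velocity data reduces to half the density times the sum of the velocity-component variances. A minimal sketch on synthetic velocity samples (invented magnitudes; blood density assumed 1060 kg/m^3):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic velocity samples at one voxel (m/s): a mean flow plus zero-mean
# fluctuations with a known per-component standard deviation (values invented).
n = 100_000
sigma = np.array([0.10, 0.05, 0.02])
u = np.array([1.0, 0.0, 0.0]) + sigma * rng.standard_normal((n, 3))

def turbulent_kinetic_energy(u, rho=1060.0):
    """TKE per unit volume (J/m^3): 0.5 * rho * sum of velocity variances.
    rho = 1060 kg/m^3 is a typical blood density."""
    return 0.5 * rho * u.var(axis=0).sum()

tke = turbulent_kinetic_energy(u)
```

    The mean-flow component drops out because only the variances enter, which is why TKE isolates turbulence from bulk motion in both the PC-MRI estimate and the CFD result.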

  5. Acoustical transmission-line model of the middle-ear cavities and mastoid air cells.

    PubMed

    Keefe, Douglas H

    2015-04-01

    An acoustical transmission line model of the middle-ear cavities and mastoid air cell system (MACS) was constructed for the adult human middle ear with normal function. The air-filled cavities comprised the tympanic cavity, aditus, antrum, and MACS. A binary symmetrical airway branching model of the MACS was constructed using an optimization procedure to match the average total volume and surface area of human temporal bones. The acoustical input impedance of the MACS was calculated using a recursive procedure, and used to predict the input impedance of the middle-ear cavities at the location of the tympanic membrane. The model also calculated the ratio of the acoustical pressure in the antrum to the pressure in the middle-ear cavities at the location of the tympanic membrane. The predicted responses were sensitive to the magnitude of the viscothermal losses within the MACS. These predicted input impedance and pressure ratio functions explained the presence of multiple resonances reported in published data, which were not explained by existing MACS models.
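
    The recursive impedance calculation described above can be sketched with the standard transmission-line input-impedance relation applied to a symmetric binary branching tree: each branch is a line segment whose load is the parallel combination of its two children. The numbers below are illustrative, not the paper's fitted MACS geometry or loss model.

```python
import numpy as np

def line_input_impedance(Z0, gamma, length, ZL):
    # Standard lossy transmission-line relation for a segment of characteristic
    # impedance Z0 and propagation constant gamma terminated in load ZL.
    t = np.tanh(gamma * length)
    return Z0 * (ZL + Z0 * t) / (Z0 + ZL * t)

def tree_impedance(depth, Z0, gamma, length, Zterm):
    """Input impedance of a symmetric binary branching tree of identical
    segments; two identical children combine in parallel (ZL = Zchild / 2)."""
    if depth == 0:
        ZL = Zterm                              # terminal air-cell load
    else:
        ZL = tree_impedance(depth - 1, Z0, gamma, length, Zterm) / 2.0
    return line_input_impedance(Z0, gamma, length, ZL)

# Illustrative (not fitted) acoustic values; gamma = attenuation + j * wavenumber.
Z_in = tree_impedance(depth=6, Z0=400.0, gamma=0.2 + 5j, length=0.01, Zterm=2000.0)
```

    Increasing the attenuation part of gamma damps the standing-wave structure of the tree, which parallels the paper's finding that the predicted responses are sensitive to viscothermal losses within the MACS.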

  6. Predicting invasiveness of species in trade: Climate match, trophic guild and fecundity influence establishment and impact of non-native freshwater fishes

    USGS Publications Warehouse

    Howeth, Jennifer G.; Gantz, Crysta A.; Angermeier, Paul; Frimpong, Emmanuel A.; Hoff, Michael H.; Keller, Reuben P.; Mandrak, Nicholas E.; Marchetti, Michael P.; Olden, Julian D.; Romagosa, Christina M.; Lodge, David M.

    2016-01-01

    Aim: Impacts of non-native species have motivated development of risk assessment tools for identifying introduced species likely to become invasive. Here, we develop trait-based models for the establishment and impact stages of freshwater fish invasion, and use them to screen non-native species common in international trade. We also determine which species in the aquarium, biological supply, live bait, live food and water garden trades are likely to become invasive. Results are compared to historical patterns of non-native fish establishment to assess the relative importance over time of pathways in causing invasions. Location: Laurentian Great Lakes region. Methods: Trait-based classification trees for the establishment and impact stages of invasion were developed from data on freshwater fish species that established or failed to establish in the Great Lakes. Fishes in trade were determined from import data from Canadian and United States regulatory agencies, assigned to specific trades and screened through the developed models. Results: Climate match between a species’ native range and the Great Lakes region predicted establishment success with 75–81% accuracy. Trophic guild and fecundity predicted potential harmful impacts of established non-native fishes with 75–83% accuracy. Screening outcomes suggest the water garden trade poses the greatest risk of introducing new invasive species, followed by the live food and aquarium trades. Analysis of historical patterns of introduction pathways demonstrates the increasing importance of these trades relative to other pathways. Comparisons among trades reveal that model predictions parallel historical patterns; all fishes previously introduced from the water garden trade have established. The live bait, biological supply, aquarium and live food trades have also contributed established non-native fishes. Main conclusions: Our models predict invasion risk of potential fish invaders to the Great Lakes region and could help managers prioritize efforts among species and pathways to minimize such risk. Similar approaches could be applied to other taxonomic groups and geographic regions.
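
    A trait-based classification tree of the kind used here can be illustrated in miniature with a single-split "stump" on a synthetic climate-match score. The data below are invented for illustration; the study's actual trees were fit to real establishment records and multiple traits.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented data: one "climate match" score per species and a 0/1 establishment
# outcome whose probability rises with the score (logistic, midpoint 5).
n = 400
score = rng.uniform(0.0, 10.0, n)
establish = (rng.random(n) < 1.0 / (1.0 + np.exp(-(score - 5.0)))).astype(int)

def best_stump(x, y):
    """One-split classification tree: the threshold maximizing training
    accuracy, allowing either orientation of the split."""
    best_acc, best_thr = 0.0, None
    for thr in np.unique(x):
        pred = (x >= thr).astype(int)
        acc = max((pred == y).mean(), ((1 - pred) == y).mean())
        if acc > best_acc:
            best_acc, best_thr = acc, thr
    return best_acc, best_thr

acc, thr = best_stump(score, establish)
```

    Full classification trees simply apply this threshold search recursively over several predictors, which is how climate match, trophic guild, and fecundity end up as the splitting variables in the study's models.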

  7. Comment on "Advective transport in heterogeneous aquifers: Are proxy models predictive?" by A. Fiori, A. Zarlenga, H. Gotovac, I. Jankovic, E. Volpi, V. Cvetkovic, and G. Dagan

    NASA Astrophysics Data System (ADS)

    Neuman, Shlomo P.

    2016-07-01

    Fiori et al. (2015) examine the predictive capabilities of (among others) two "proxy" non-Fickian transport models, MRMT (Multi-Rate Mass Transfer) and CTRW (Continuous-Time Random Walk). In particular, they compare proxy model predictions of mean breakthrough curves (BTCs) at a sequence of control planes with near-ergodic BTCs generated through two- and three-dimensional simulations of nonreactive, mean-uniform advective transport in single realizations of stationary, randomly heterogeneous porous media. The authors find fitted proxy model parameters to be nonunique and devoid of clear physical meaning. This notwithstanding, they conclude optimistically that "i. Fitting the proxy models to match the BTC at [one control plane] automatically ensures prediction at downstream control planes [and thus] ii. … the measured BTC can be used directly for prediction, with no need to use models underlain by fitting." I show that (a) the authors' findings follow directly from (and thus confirm) theoretical considerations discussed earlier by Neuman and Tartakovsky (2009), which (b) additionally demonstrate that proxy models will lack similar predictive capabilities under more realistic, non-Markovian flow and transport conditions that prevail under flow through nonstationary (e.g., multiscale) media in the presence of boundaries and/or nonuniformly distributed sources, and/or when flow/transport are conditioned on measurements.

  8. Model calibration and issues related to validation, sensitivity analysis, post-audit, uncertainty evaluation and assessment of prediction data needs

    USGS Publications Warehouse

    Tiedeman, Claire; Hill, Mary C.

    2007-01-01

    When simulating natural and engineered groundwater flow and transport systems, one objective is to produce a model that accurately represents important aspects of the true system. However, using direct measurements of system characteristics, such as hydraulic conductivity, to construct a model often produces simulated values that poorly match observations of the system state, such as hydraulic heads, flows and concentrations (for example, Barth et al., 2001). This occurs because of inaccuracies in the direct measurements and because the measurements commonly characterize system properties at different scales from that of the model aspect to which they are applied. In these circumstances, the conservation of mass equations represented by flow and transport models can be used to test the applicability of the direct measurements, such as by comparing model simulated values to the system state observations. This comparison leads to calibrating the model, by adjusting the model construction and the system properties as represented by model parameter values, so that the model produces simulated values that reasonably match the observations.
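
    The calibration loop described above, adjusting parameter values until simulated values reasonably match the observations, can be sketched with a toy steady-state Darcy model in which a hydraulic conductivity is tuned to noisy head observations. All numbers are illustrative assumptions, not from any real aquifer.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy forward model: steady 1-D Darcy flow with uniform specific flux q,
# giving heads h(x) = h0 - q * x / K (all values illustrative).
h0, q, K_true = 100.0, 5e-5, 1e-3          # head (m), flux (m/s), K (m/s)
x = np.linspace(100.0, 1000.0, 10)         # observation-well locations (m)
h_obs = h0 - q * x / K_true + rng.normal(0.0, 0.05, x.size)   # noisy heads

def simulate(K):
    return h0 - q * x / K

# Calibration: adjust K until simulated heads reasonably match observations
# (here a simple log-spaced grid search on the sum of squared errors).
K_grid = np.logspace(-4.0, -2.0, 2001)
sse = np.array([((simulate(K) - h_obs) ** 2).sum() for K in K_grid])
K_cal = K_grid[sse.argmin()]
```

    The calibrated conductivity recovers the true value to within a few tenths of a percent here; in real groundwater models the same loop runs over many parameters and requires the regularization and sensitivity analysis the chapter discusses.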

  9. Perceptual precision of passive body tilt is consistent with statistically optimal cue integration

    PubMed Central

    Karmali, Faisal; Nicoucar, Keyvan; Merfeld, Daniel M.

    2017-01-01

    When making perceptual decisions, humans have been shown to optimally integrate independent noisy multisensory information, matching maximum-likelihood (ML) limits. Such ML estimators provide a theoretic limit to perceptual precision (i.e., minimal thresholds). However, how the brain combines two interacting (i.e., not independent) sensory cues remains an open question. To study the precision achieved when combining interacting sensory signals, we measured perceptual roll tilt and roll rotation thresholds between 0 and 5 Hz in six normal human subjects. Primary results show that roll tilt thresholds between 0.2 and 0.5 Hz were significantly lower than predicted by a ML estimator that includes only vestibular contributions that do not interact. In this paper, we show how other cues (e.g., somatosensation) and an internal representation of sensory and body dynamics might independently contribute to the observed performance enhancement. In short, a Kalman filter was combined with an ML estimator to match human performance, whereas the potential contribution of nonvestibular cues was assessed using published bilateral loss patient data. Our results show that a Kalman filter model including previously proven canal-otolith interactions alone (without nonvestibular cues) can explain the observed performance enhancements as can a model that includes nonvestibular contributions. NEW & NOTEWORTHY We found that human whole body self-motion direction-recognition thresholds measured during dynamic roll tilts were significantly lower than those predicted by a conventional maximum-likelihood weighting of the roll angular velocity and quasistatic roll tilt cues. Here, we show that two models can each match this “apparent” better-than-optimal performance: 1) inclusion of a somatosensory contribution and 2) inclusion of a dynamic sensory interaction between canal and otolith cues via a Kalman filter model. PMID:28179477
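
    The maximum-likelihood benchmark against which the measured thresholds were compared is inverse-variance-weighted cue fusion: the combined variance is (sigma1^-2 + sigma2^-2)^-1, which is always below either single-cue variance. A minimal sketch (the cue values below are illustrative, not the study's data):

```python
import numpy as np

def ml_combine(est1, sigma1, est2, sigma2):
    """Maximum-likelihood fusion of two independent Gaussian cues:
    inverse-variance weighting; combined variance (s1^-2 + s2^-2)^-1."""
    w1, w2 = 1.0 / sigma1**2, 1.0 / sigma2**2
    est = (w1 * est1 + w2 * est2) / (w1 + w2)
    sigma = (w1 + w2) ** -0.5
    return est, sigma

# Example: one cue with sigma = 3 and another with sigma = 4 (arbitrary units)
# combine to a fused sigma of 2.4.
est, sigma = ml_combine(0.0, 3.0, 10.0, 4.0)
```

    Thresholds measurably below this fused sigma are what the paper calls "apparent" better-than-optimal performance, motivating the Kalman-filter model with interacting canal-otolith cues.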

  10. Hepatocellular carcinoma in uremic patients: is there evidence for an increased risk of mortality?

    PubMed

    Lee, Yun-Hsuan; Hsu, Chia-Yang; Hsia, Cheng-Yuan; Huang, Yi-Hsiang; Su, Chien-Wei; Lin, Han-Chieh; Lee, Rheun-Chuan; Chiou, Yi-You; Huo, Teh-Ia

    2013-02-01

    The clinical aspects of patients with hepatocellular carcinoma (HCC) undergoing maintenance dialysis are largely unknown. We aimed to investigate the long-term survival and prognostic determinants of dialysis patients with HCC. A total of 2502 HCC patients, including 30 dialysis patients and 90 age-, sex-, and treatment-matched controls, were retrospectively analyzed. Dialysis patients more often had dual viral hepatitis B and C, lower serum α-fetoprotein level, worse performance status, and a higher model for end-stage liver disease (MELD) score than non-dialysis patients and matched controls (all P < 0.05). There was no significant difference in long-term survival between dialysis patients and non-dialysis patients or matched controls (P = 0.684 and 0.373, respectively). In the Cox proportional hazards model, duration of dialysis < 40 months (hazard ratio [HR]: 6.67, P = 0.019) and ascites (HR: 5.275, P = 0.019) were independent predictors of poor prognosis for dialysis patients with HCC. Survival analysis disclosed that the Child-Turcotte-Pugh (CTP) classification provided a better prognostic ability than the MELD system. Among the four currently used staging systems, the Japan Integrated Scoring (JIS) system was a more accurate prognostic model for dialysis patients; a JIS score ≥ 2 significantly predicted a worse survival (P = 0.024). Patients with HCC undergoing maintenance dialysis do not have a worse long-term survival. A longer duration of dialysis and absence of ascites formation are associated with a better outcome in dialysis patients. The CTP classification is a more feasible prognostic marker to indicate the severity of cirrhosis, and the JIS system may be a better staging model for outcome prediction. © 2012 Journal of Gastroenterology and Hepatology Foundation and Wiley Publishing Asia Pty Ltd.

  11. How Sensitive Are Transdermal Transport Predictions by Microscopic Stratum Corneum Models to Geometric and Transport Parameter Input?

    PubMed

    Wen, Jessica; Koo, Soh Myoung; Lape, Nancy

    2018-02-01

    While predictive models of transdermal transport have the potential to reduce human and animal testing, microscopic stratum corneum (SC) model output is highly dependent on idealized SC geometry, transport pathway (transcellular vs. intercellular), and penetrant transport parameters (e.g., compound diffusivity in lipids). Most microscopic models are limited to a simple rectangular brick-and-mortar SC geometry and do not account for variability across delivery sites, hydration levels, and populations. In addition, these models rely on transport parameters obtained from pure theory, parameter fitting to match in vivo experiments, and time-intensive diffusion experiments for each compound. In this work, we develop a microscopic finite element model that allows us to probe model sensitivity to variations in geometry, transport pathway, and hydration level. Given the dearth of experimentally-validated transport data and the wide range in theoretically-predicted transport parameters, we examine the model's response to a variety of transport parameters reported in the literature. Results show that model predictions are strongly dependent on all aforementioned variations, resulting in order-of-magnitude differences in lag times and permeabilities for distinct structure, hydration, and parameter combinations. This work demonstrates that universally predictive models cannot fully succeed without employing experimentally verified transport parameters and individualized SC structures. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  12. Validation of an Algorithm to Predict the Likelihood of an 8/8 HLA-Matched Unrelated Donor at Search Initiation.

    PubMed

    Davis, Eric; Devlin, Sean; Cooper, Candice; Nhaissi, Melissa; Paulson, Jennifer; Wells, Deborah; Scaradavou, Andromachi; Giralt, Sergio; Papadopoulos, Esperanza; Kernan, Nancy A; Byam, Courtney; Barker, Juliet N

    2018-05-01

    A strategy to rapidly determine if a matched unrelated donor (URD) can be secured for allograft recipients is needed. We sought to validate the accuracy of (1) HapLogic match predictions and (2) a resultant novel Search Prognosis (SP) patient categorization that could predict 8/8 HLA-matched URD likelihood at search initiation. Patient prognosis categories at search initiation were correlated with URD confirmatory typing results. HapLogic-based SP categorizations accurately predicted the likelihood of an 8/8 HLA match in 830 patients (1530 donors tested). Sixty percent of patients had 8/8 URD(s) identified. Patient SP categories (217 very good, 104 good, 178 fair, 33 poor, 153 very poor, 145 futile) were associated with a marked progressive decrease in 8/8 URD identification and transplantation. Very good to good categories were highly predictive of identifying and receiving an 8/8 URD regardless of ancestry. Europeans in fair/poor categories were more likely to identify and receive an 8/8 URD compared with non-Europeans. In all ancestries, the very poor and futile categories predicted no 8/8 URDs. HapLogic permits URD search results to be predicted once patient HLA typing and ancestry are obtained, dramatically improving search efficiency. Poor, very poor, and futile searches can be immediately recognized, thereby facilitating prompt pursuit of alternative donors. Copyright © 2017 The American Society for Blood and Marrow Transplantation. Published by Elsevier Inc. All rights reserved.

  13. On combination of strict Bayesian principles with model reduction technique or how stochastic model calibration can become feasible for large-scale applications

    NASA Astrophysics Data System (ADS)

    Oladyshkin, S.; Schroeder, P.; Class, H.; Nowak, W.

    2013-12-01

    Predicting underground carbon dioxide (CO2) storage represents a challenging problem in a complex dynamic system. Due to a lack of information about reservoir parameters, quantification of uncertainties may become the dominant question in risk assessment. Calibration on past observed data from a pilot-scale test injection can improve the predictive power of the involved geological, flow, and transport models. The current work performs history matching against pressure time series recorded during the injection period at a pilot storage site operated in Europe. Simulation of compressible two-phase flow and transport (CO2/brine) in the considered site is computationally very demanding, requiring about 12 days of CPU time for an individual model run. For that reason, brute-force approaches to calibration are not feasible. In the current work, we explore an advanced framework for history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. The aPC [1] offers a drastic but accurate stochastic model reduction. Unlike many previous chaos expansions, it can handle arbitrary probability distribution shapes of uncertain parameters, and can therefore directly handle the statistical information appearing during the matching procedure. In our study we keep the spatial heterogeneity suggested by geophysical methods, but consider uncertainty in the magnitude of permeability through zone-wise permeability multipliers. We capture the dependence of model output on these multipliers with the expansion-based reduced model. We then combined the aPC with Bootstrap filtering (a brute-force but fully accurate Bayesian updating mechanism) in order to perform the matching. In comparison to (Ensemble) Kalman Filters, our method accounts for higher-order statistical moments and for the non-linearity of both the forward model and the inversion, and thus allows a rigorous quantification of calibrated model uncertainty. 
The usually high computational costs of accurate filtering become very manageable within our suggested aPC-based calibration framework. However, the power of aPC-based Bayesian updating strongly depends on the accuracy of prior information. In the current study, the prior assumptions on the model parameters were not satisfactory and strongly underestimated the reservoir pressure. Thus, the aPC-based response surface used in Bootstrap filtering is fitted to a distant and poorly chosen region within the parameter space. Thanks to the iterative procedure suggested in [2] we overcome this drawback at small computational cost. The iteration successively improves the accuracy of the expansion around the current estimate of the posterior distribution. The final result is a calibrated model of the site that can be used for further studies, with an excellent match to the data. References [1] Oladyshkin S. and Nowak W. Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion. Reliability Engineering and System Safety, 106:179-190, 2012. [2] Oladyshkin S., Class H., Nowak W. Bayesian updating via Bootstrap filtering combined with data-driven polynomial chaos expansions: methodology and application to history matching for carbon dioxide storage in geological formations. Computational Geosciences, 17(4):671-687, 2013.
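The surrogate-plus-filter idea in this record can be sketched generically. The snippet below is a toy stand-in, not the authors' aPC implementation: an ordinary least-squares polynomial fit plays the role of the chaos expansion, the "reservoir model" is an invented one-parameter function, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Expensive "reservoir model": pressure response as a function of one
# permeability multiplier m (stand-in for a 12-day simulation run).
def forward_model(m):
    return 10.0 / (0.5 + m)          # hypothetical well pressure

# 1. Cheap polynomial surrogate from a handful of full-model runs
#    (stand-in for the arbitrary polynomial chaos expansion).
train_m = np.linspace(0.2, 2.0, 7)
train_p = np.array([forward_model(m) for m in train_m])
coeffs = np.polyfit(train_m, train_p, deg=4)

def surrogate(m):
    return np.polyval(coeffs, m)

# 2. Bootstrap (particle) filter: weight prior samples by the data
#    likelihood evaluated on the surrogate, then resample.
true_m, sigma = 1.3, 0.2
obs = forward_model(true_m) + 0.05   # one noisy pressure observation
prior = rng.uniform(0.2, 2.0, size=20000)
w = np.exp(-0.5 * ((surrogate(prior) - obs) / sigma) ** 2)
w /= w.sum()
posterior = rng.choice(prior, size=20000, p=w)
print(round(posterior.mean(), 2))    # concentrates near true_m
```

The iterative refinement described in [2] would then refit the surrogate around the resampled posterior and repeat the weighting step.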

  14. Development of mathematical techniques for the assimilation of remote sensing data into atmospheric models

    NASA Technical Reports Server (NTRS)

    Seinfeld, J. H. (Principal Investigator)

    1982-01-01

    The problem of the assimilation of remote sensing data into mathematical models of atmospheric pollutant species was investigated. The data assimilation problem is posed in terms of the matching of spatially integrated species burden measurements to the predicted three-dimensional concentration fields from atmospheric diffusion models. General conditions were derived for the reconstructability of atmospheric concentration distributions from data typical of remote sensing applications, and a computational algorithm (filter) for the processing of remote sensing data was developed.

  16. Using waveform information in nonlinear data assimilation

    NASA Astrophysics Data System (ADS)

    Rey, Daniel; Eldridge, Michael; Morone, Uriel; Abarbanel, Henry D. I.; Parlitz, Ulrich; Schumann-Bischoff, Jan

    2014-12-01

    Information in measurements of a nonlinear dynamical system can be transferred to a quantitative model of the observed system to establish its fixed parameters and unobserved state variables. After this learning period is complete, one may predict the model response to new forces and, when successful, these predictions will match additional observations. This adjustment process encounters problems when the model is nonlinear and chaotic because dynamical instability impedes the transfer of information from the data to the model when the number of measurements at each observation time is insufficient. We discuss the use of information in the waveform of the data, realized through a time delayed collection of measurements, to provide additional stability and accuracy to this search procedure. Several examples are explored, including a few familiar nonlinear dynamical systems and small networks of Colpitts oscillators.
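The time-delayed collection of measurements described here is conventionally built as delay-coordinate vectors. A minimal sketch (the `delay_embed` helper and its parameters are illustrative, not from the paper):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Rows are delay vectors [x[i], x[i+tau], ..., x[i+(dim-1)*tau]],
    turning one measured variable into a higher-dimensional proxy state."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[k * tau : k * tau + n] for k in range(dim)])

x = np.sin(0.3 * np.arange(200))        # a single observed time series
vectors = delay_embed(x, dim=3, tau=5)
print(vectors.shape)                    # (190, 3)
```

Each row supplies the waveform information (value plus recent history) used to stabilize the state-and-parameter search.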

  17. Learning to Select Actions with Spiking Neurons in the Basal Ganglia

    PubMed Central

    Stewart, Terrence C.; Bekolay, Trevor; Eliasmith, Chris

    2012-01-01

    We expand our existing spiking neuron model of decision making in the cortex and basal ganglia to include local learning on the synaptic connections between the cortex and striatum, modulated by a dopaminergic reward signal. We then compare this model to animal data in the bandit task, which is used to test rodent learning in conditions involving forced choice under rewards. Our results indicate a good match in terms of both behavioral learning results and spike patterns in the ventral striatum. The model successfully generalizes to learning the utilities of multiple actions, and can learn to choose different actions in different states. The purpose of our model is to provide both high-level behavioral predictions and low-level spike timing predictions while respecting known neurophysiology and neuroanatomy. PMID:22319465

  18. Beef quality grading using machine vision

    NASA Astrophysics Data System (ADS)

    Jeyamkondan, S.; Ray, N.; Kranzler, Glenn A.; Biju, Nisha

    2000-12-01

    A video image analysis system was developed to support automation of beef quality grading. Forty images of ribeye steaks were acquired. Fat and lean meat were differentiated using a fuzzy c-means clustering algorithm. The longissimus dorsi (l.d.) muscle was segmented from the ribeye using morphological operations. At the end of each iteration of erosion and dilation, a convex hull was fitted to the image and compactness was measured. The number of iterations was selected to yield the most compact l.d. The match between the l.d. muscle traced by an expert grader and that segmented by the program was 95.9%. Marbling and color features were extracted from the l.d. muscle and were used to build regression models to predict marbling and color scores. Quality grade was predicted using another regression model incorporating all features. Grades predicted by the model were statistically equivalent to the grades assigned by expert graders.
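Fuzzy c-means on pixel intensities can be sketched in a few lines. The implementation below is a generic 1-D version with quantile initialization and invented "fat vs lean" intensity populations; it is not the grading system's actual code.

```python
import numpy as np

def fuzzy_cmeans_1d(x, c=2, m=2.0, iters=50):
    """Minimal fuzzy c-means on a 1-D feature such as pixel intensity."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, c))  # deterministic init
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))        # unnormalized memberships
        u /= u.sum(axis=1, keepdims=True)
        centers = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    return np.sort(centers), u

# Toy "fat vs lean" intensities: two populations near 40 and 200.
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(40, 10, 500), rng.normal(200, 15, 500)])
centers, u = fuzzy_cmeans_1d(pixels)
print(np.round(centers))
```

The soft memberships `u`, rather than a hard threshold, are what make the method robust to the marbling pixels that sit between the fat and lean intensity modes.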

  19. Derivation and calibration of a gas metal arc welding (GMAW) dynamic droplet model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reutzel, E.W.; Einerson, C.J.; Johnson, J.A.

    1996-12-31

    A rudimentary, existing dynamic model for droplet growth and detachment in gas metal arc welding (GMAW) was improved and calibrated to match experimental data. The model simulates droplets growing at the end of an imaginary spring. Mass is added to the drop as the electrode melts, the droplet grows, and the spring is displaced. Detachment occurs when one of two criteria is met, and the amount of mass that is detached is a function of the droplet velocity at the time of detachment. Improvements to the model include the addition of a second criterion for drop detachment, a more sophisticated model of the power supply and secondary electric circuit, and the incorporation of a variable electrode resistance. Relevant physical parameters in the model were adjusted during model calibration. The average current, droplet frequency, and parameter-space location of globular-to-streaming mode transition were used as criteria for tuning the model. The average current predicted by the calibrated model matched the experimental average current to within 5% over a wide range of operating conditions.
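The droplet-on-a-spring idea can be illustrated with a minimal time-stepping loop. All parameters below (stiffness, damping, melt rate, detachment displacement) are hypothetical, and only a single displacement detachment criterion is modeled, not the paper's second criterion or its calibrated power-supply model.

```python
# Hypothetical parameters -- none are from the paper's calibration.
k = 0.4          # spring stiffness, N/m
c = 1e-3         # damping, kg/s
melt_rate = 5e-4 # electrode melt-off, kg/s
g = 9.81
x_crit = 5e-4    # detachment when displacement exceeds this, m

m, x, v = 1e-6, 0.0, 0.0   # droplet mass (kg), displacement, velocity
dt, t_end, t = 1e-5, 0.2, 0.0
detach_times = []
while t < t_end:
    m += melt_rate * dt                 # droplet grows as the wire melts
    a = g - (k * x + c * v) / m         # gravity vs. spring + damping
    v += a * dt                         # semi-implicit Euler step
    x += v * dt
    if x > x_crit:                      # (single) detachment criterion
        detach_times.append(t)
        m, x, v = 1e-6, 0.0, 0.0        # a new droplet starts to grow
    t += dt

print(f"{len(detach_times) / t_end:.0f} droplets/s")
```

Calibration in this framework amounts to adjusting `k`, `c`, and the detachment criteria until the predicted droplet frequency matches experiment.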

  20. Generalisability of vaccine effectiveness estimates: an analysis of cases included in a postlicensure evaluation of 13-valent pneumococcal conjugate vaccine in the USA

    PubMed Central

    Link-Gelles, Ruth; Westreich, Daniel; Aiello, Allison E; Shang, Nong; Weber, David J; Rosen, Jennifer B; Motala, Tasneem; Mascola, Laurene; Eason, Jeffery; Scherzinger, Karen; Holtzman, Corinne; Reingold, Arthur L; Barnes, Meghan; Petit, Susan; Farley, Monica M; Harrison, Lee H; Zansky, Shelley; Thomas, Ann; Schaffner, William; McGee, Lesley; Whitney, Cynthia G; Moore, Matthew R

    2017-01-01

    Objectives External validity, or generalisability, is the measure of how well results from a study pertain to individuals in the target population. We assessed generalisability, with respect to socioeconomic status, of estimates from a matched case–control study of 13-valent pneumococcal conjugate vaccine effectiveness for the prevention of invasive pneumococcal disease in children in the USA. Design Matched case–control study. Setting Thirteen active surveillance sites for invasive pneumococcal disease in the USA. Participants Cases were identified from active surveillance and controls were age and zip code matched. Outcome measures Socioeconomic status was assessed at the individual level via parent interview (for enrolled individuals only) and birth certificate data (for both enrolled and unenrolled individuals) and at the neighbourhood level by geocoding to the census tract (for both enrolled and unenrolled individuals). Prediction models were used to determine if socioeconomic status was associated with enrolment. Results We enrolled 54.6% of 1211 eligible cases and found a trend toward enrolled cases being more affluent than unenrolled cases. Enrolled cases were slightly more likely to have private insurance at birth (p=0.08) and have mothers with at least some college education (p<0.01). Enrolled cases also tended to come from more affluent census tracts. Despite these differences, our best predictive model for enrolment yielded a concordance statistic of only 0.703, indicating mediocre predictive value. Variables retained in the final model were assessed for effect measure modification, and none were found to be significant modifiers of vaccine effectiveness. Conclusions We conclude that although enrolled cases are somewhat more affluent than unenrolled cases, our estimates are externally valid with respect to socioeconomic status. 
Our analysis provides evidence that this study design can yield valid estimates and that assessing the generalisability of observational data is feasible, even when unenrolled individuals cannot be contacted. PMID:28851801

  1. Gas Generator Feedline Orifice Sizing Methodology: Effects of Unsteadiness and Non-Axisymmetric Flow

    NASA Technical Reports Server (NTRS)

    Rothermel, Jeffry; West, Jeffrey S.

    2011-01-01

    Engine LH2 and LO2 gas generator feed assemblies were modeled with computational fluid dynamics (CFD) methods at 100% rated power level, using on-center square- and round-edge orifices. The purpose of the orifices is to regulate the flow of fuel and oxidizer to the gas generator, enabling optimal power supply to the turbine and pump assemblies. The unsteady Reynolds-Averaged Navier-Stokes equations were solved on unstructured grids at second-order spatial and temporal accuracy. The LO2 model was validated against published experimental data and semi-empirical relationships for thin-plate orifices over a range of Reynolds numbers. Predictions for the LO2 square- and round-edge orifices precisely match experiment and semi-empirical formulas, despite complex feedline geometry whereby a portion of the flow from the engine main feedlines travels at a right angle through a smaller-diameter pipe containing the orifice. Predictions for LH2 square- and round-edge orifice designs match experiment and semi-empirical formulas to varying degrees depending on the semi-empirical formula being evaluated. LO2 mass flow rate through the square-edge orifice is predicted to be 25 percent less than the flow rate budgeted in the original engine balance, which was subsequently modified. LH2 mass flow rate through the square-edge orifice is predicted to be 5 percent greater than the flow rate budgeted in the engine balance. Since CFD predictions for LO2 and LH2 square-edge orifice pressure loss coefficients, K, both agree with published data, the equation for K has been used to define a procedure for orifice sizing.
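The loss-coefficient relation behind such a sizing procedure, dp = K * rho * v^2 / 2, inverts directly into an orifice-area formula. A sketch with illustrative numbers (the K value, flow rate, and pressure drop are invented, not the engine's actual balance):

```python
import math

def orifice_area(mdot, rho, dp, K):
    """Flow area giving pressure drop dp at mass flow mdot, from
    dp = K * rho * v**2 / 2 with v = mdot / (rho * A)."""
    return mdot * math.sqrt(K / (2.0 * rho * dp))

rho = 1140.0   # kg/m^3, liquid oxygen
mdot = 2.0     # kg/s through the gas-generator line (invented)
K = 2.7        # hypothetical square-edge loss coefficient
dp = 5.0e5     # Pa allowed across the orifice (invented)

A = orifice_area(mdot, rho, dp, K)
d = 2.0 * math.sqrt(A / math.pi)
print(f"orifice diameter ~ {d * 1000:.1f} mm")
```

Given a CFD- or experiment-validated K, the same relation fixes the diameter that delivers the budgeted flow at the available pressure drop.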

  2. Predicting the relative binding affinity of mineralocorticoid receptor antagonists by density functional methods

    NASA Astrophysics Data System (ADS)

    Roos, Katarina; Hogner, Anders; Ogg, Derek; Packer, Martin J.; Hansson, Eva; Granberg, Kenneth L.; Evertsson, Emma; Nordqvist, Anneli

    2015-12-01

    In drug discovery, prediction of binding affinity ahead of synthesis to aid compound prioritization is still hampered by the low throughput of the more accurate methods and the lack of a single method that fits all systems. Here we show the applicability of a method based on density functional theory, using core fragments and a protein model with only the first-shell residues surrounding the core, to predict relative binding affinity of a matched series of mineralocorticoid receptor (MR) antagonists. Antagonists of MR are used for the treatment of chronic heart failure and hypertension. The marketed MR antagonists spironolactone and eplerenone are also believed to be highly efficacious in the treatment of chronic kidney disease in diabetes patients, but are contraindicated due to the increased risk of hyperkalemia. These findings and a significant unmet medical need among patients with chronic kidney disease continue to stimulate efforts in the discovery of new MR antagonists with maintained efficacy but low or no risk of hyperkalemia. Applied to a matched series of MR antagonists, the quantum mechanics-based method gave an R2 = 0.76 for the experimental lipophilic ligand efficiency versus relative predicted binding affinity calculated with the M06-2X functional in gas phase, and an R2 = 0.64 for experimental binding affinity versus relative predicted binding affinity calculated with the M06-2X functional including an implicit solvation model. The quantum mechanical approach using core fragments was compared to free energy perturbation calculations using the full-sized compound structures.

  3. A user-friendly model for spray drying to aid pharmaceutical product development.

    PubMed

    Grasmeijer, Niels; de Waard, Hans; Hinrichs, Wouter L J; Frijlink, Henderik W

    2013-01-01

    The aim of this study was to develop a user-friendly model for spray drying that can aid in the development of a pharmaceutical product, by shifting from a trial-and-error towards a quality-by-design approach. To achieve this, a spray dryer model was developed in commercial and open source spreadsheet software. The output of the model was first fitted to the experimental output of a Büchi B-290 spray dryer and subsequently validated. The predicted outlet temperatures of the spray dryer model matched the experimental values very well over the entire range of spray dryer settings that were tested. Finally, the model was applied to produce glassy sugars by spray drying; such sugars are often-used excipients in formulations of biopharmaceuticals. For the production of glassy sugars, the model was extended to predict the relative humidity at the outlet, which is not measured in the spray dryer by default. This extended model was then successfully used to predict whether specific settings were suitable for producing glassy trehalose and inulin by spray drying. In conclusion, a spray dryer model was developed that is able to predict the output parameters of the spray drying process. The model can aid the development of spray dried pharmaceutical products by shifting from a trial-and-error towards a quality-by-design approach.
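A spreadsheet-style spray dryer model of this kind typically rests on a steady-state energy balance. The sketch below is a strongly simplified stand-in (constant properties, no heat losses, complete evaporation of an aqueous feed), not the fitted Büchi B-290 model, and all flow values are illustrative.

```python
cp_air = 1006.0   # J/(kg K), drying air
cp_w = 4186.0     # J/(kg K), liquid water
h_vap = 2.26e6    # J/kg, latent heat of water (approx.)

def outlet_temperature(T_in, air_flow, feed_flow, T_feed=20.0):
    """Outlet air temperature (C): heat given up by the drying air warms
    the aqueous feed to 100 C and evaporates it completely."""
    q_evap = feed_flow * (cp_w * (100.0 - T_feed) + h_vap)
    return T_in - q_evap / (air_flow * cp_air)

T_out = outlet_temperature(T_in=150.0, air_flow=0.01, feed_flow=2.0e-4)
print(round(T_out, 1))
```

Extending such a balance with a psychrometric relation for the outlet humidity is what allows the glass-transition check for trehalose and inulin described in the abstract.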

  4. Numerical simulation of flow in a high head Francis turbine with prediction of efficiency, rotor stator interaction and vortex structures in the draft tube

    NASA Astrophysics Data System (ADS)

    Jošt, D.; Škerlavaj, A.; Morgut, M.; Mežnar, P.; Nobile, E.

    2015-01-01

    The paper presents numerical simulations of flow in a model of a high-head Francis turbine and a comparison of the results to measurements. Numerical simulations were performed with two CFD (Computational Fluid Dynamics) codes, Ansys CFX and OpenFOAM. Steady-state simulations were performed with the k-epsilon and SST models, while for transient simulations the SAS SST ZLES model was used. With proper grid refinement in the distributor and runner, and by taking into account losses in the labyrinth seals, very accurate predictions of torque on the shaft, head, and efficiency were obtained. Calculated axial and circumferential velocity components on two planes in the draft tube matched well with the experimental results.

  5. MED: a new non-supervised gene prediction algorithm for bacterial and archaeal genomes.

    PubMed

    Zhu, Huaiqiu; Hu, Gang-Qing; Yang, Yi-Fan; Wang, Jin; She, Zhen-Su

    2007-03-16

    Despite remarkable success in the computational prediction of genes in Bacteria and Archaea, the lack of a comprehensive understanding of prokaryotic gene structures prevents further elucidation of differences among genomes. It therefore remains worthwhile to develop new ab initio algorithms that not only accurately predict genes, but also facilitate comparative studies of prokaryotic genomes. This paper describes a new prokaryotic gene-finding algorithm based on a comprehensive statistical model of protein-coding Open Reading Frames (ORFs) and Translation Initiation Sites (TISs). The former is based on a linguistic "Entropy Density Profile" (EDP) model of coding DNA sequence, and the latter comprises several relevant features related to translation initiation. They are combined to form the Multivariate Entropy Distance (MED) algorithm, MED 2.0, which incorporates several strategies in an iterative program. The iterations enable a non-supervised learning process that obtains a set of genome-specific parameters for the gene structure before the prediction of genes is made. Results of extensive tests show that MED 2.0 achieves high performance in gene prediction for both 5' and 3' end matches, competitive with the current best prokaryotic gene finders. The advantage of MED 2.0 is particularly evident for GC-rich genomes and archaeal genomes. Furthermore, the genome-specific parameters given by MED 2.0 match the current understanding of prokaryotic genomes and may serve as tools for comparative genomic studies. In particular, MED 2.0 is shown to reveal divergent translation initiation mechanisms in archaeal genomes while making a more accurate prediction of TISs compared with existing gene finders and the current GenBank annotation.
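The entropy-density idea behind the EDP measure can be illustrated with plain Shannon entropy over codon-sized k-mers. This is a crude stand-in for the linguistic EDP model, with toy sequences; coding DNA tends to show characteristic, non-maximal k-mer entropy compared with random or repetitive sequence.

```python
import math
from collections import Counter

def entropy_density(seq, k=3):
    """Shannon entropy (bits) of the non-overlapping k-mer composition --
    a crude stand-in for the EDP coding measure."""
    kmers = [seq[i:i + k] for i in range(0, len(seq) - k + 1, k)]
    n = len(kmers)
    return -sum((c / n) * math.log2(c / n) for c in Counter(kmers).values())

repetitive = "ATGATGATGATGATGATG"   # one codon repeated: zero entropy
mixed = "ATGCGTACCTTAGACTGA"        # six distinct codons: log2(6) bits
print(entropy_density(repetitive), entropy_density(mixed))
```

A gene finder in this spirit scores candidate ORFs by how closely their entropy profile matches genome-specific coding statistics learned iteratively, without supervision.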

  6. Linking melodic expectation to expressive performance timing and perceived musical tension.

    PubMed

    Gingras, Bruno; Pearce, Marcus T; Goodchild, Meghan; Dean, Roger T; Wiggins, Geraint; McAdams, Stephen

    2016-04-01

    This research explored the relations between the predictability of musical structure, expressive timing in performance, and listeners' perceived musical tension. Studies analyzing the influence of expressive timing on listeners' affective responses have been constrained by the fact that, in most pieces, the notated durations limit performers' interpretive freedom. To circumvent this issue, we focused on the unmeasured prelude, a semi-improvisatory genre without notated durations. In Experiment 1, 12 professional harpsichordists recorded an unmeasured prelude on a harpsichord equipped with a MIDI console. Melodic expectation was assessed using a probabilistic model (IDyOM [Information Dynamics of Music]) whose expectations have been previously shown to match closely those of human listeners. Performance timing information was extracted from the MIDI data using a score-performance matching algorithm. Time-series analyses showed that, in a piece with unspecified note durations, the predictability of melodic structure measurably influenced tempo fluctuations in performance. In Experiment 2, another 10 harpsichordists, 20 nonharpsichordist musicians, and 20 nonmusicians listened to the recordings from Experiment 1 and rated the perceived tension continuously. Granger causality analyses were conducted to investigate predictive relations among melodic expectation, expressive timing, and perceived tension. Although melodic expectation, as modeled by IDyOM, modestly predicted perceived tension for all participant groups, neither of its components, information content or entropy, was Granger causal. In contrast, expressive timing was a strong predictor and was Granger causal. However, because melodic expectation was also predictive of expressive timing, our results outline a complete chain of influence from predictability of melodic structure via expressive performance timing to perceived musical tension. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
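Granger causality of the kind used in Experiment 2 reduces to comparing nested autoregressions: does adding lagged x shrink the residuals of a lag-only model of y? A compact sketch on synthetic data (the lag order, coefficients, and series are illustrative, not the study's tension and timing series):

```python
import numpy as np

def granger_stat(x, y, p=2):
    """F statistic: does lagged x improve an order-p autoregression of y?"""
    n = len(y)
    Y = y[p:]
    lags = lambda s: np.column_stack([s[p - i - 1 : n - i - 1] for i in range(p)])
    ones = np.ones((n - p, 1))
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return ((Y - X @ beta) ** 2).sum()
    rss_r = rss(np.hstack([ones, lags(y)]))            # past y only
    rss_f = rss(np.hstack([ones, lags(y), lags(x)]))   # plus past x
    dof = n - p - (2 * p + 1)
    return ((rss_r - rss_f) / p) / (rss_f / dof)

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

print(granger_stat(x, y))   # large: past x helps predict y
print(granger_stat(y, x))   # small: past y does not help predict x
```

The asymmetry of the two statistics is what licenses the directional claim "expressive timing Granger-causes perceived tension" rather than mere correlation.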

  7. Experiments on the applicability of MAE techniques for predicting sound diffraction by irregular terrains. [Matched Asymptotic Expansion

    NASA Technical Reports Server (NTRS)

    Berthelot, Yves H.; Pierce, Allan D.; Kearns, James A.

    1987-01-01

    The sound field diffracted by a single smooth hill of finite impedance is studied both analytically, within the context of the theory of Matched Asymptotic Expansions (MAE), and experimentally, under laboratory scale modeling conditions. Special attention is given to the sound field on the diffracting surface and throughout the transition region between the illuminated and the shadow zones. The MAE theory yields integral equations that are amenable to numerical computations. Experimental results are obtained with a spark source producing a pulse of 42 microsec duration and about 130 Pa at 1 m. The insertion loss of the hill is inferred from measurements of the acoustic signals at two locations in the field, with subsequent Fourier analysis on an IBM PC/AT. In general, experimental results support the predictions of the MAE theory, and provide a basis for the analysis of more complicated geometries.

  8. Numerical exploration of dissimilar supersonic coaxial jets mixing

    NASA Astrophysics Data System (ADS)

    Dharavath, Malsur; Manna, P.; Chakraborty, Debasis

    2015-06-01

    Mixing of two coaxial supersonic dissimilar gases in a free jet environment is numerically explored. Three-dimensional RANS equations with a k-ε turbulence model are solved using commercial CFD software. Two important experimental cases (RELIEF experiments) representing compressible mixing flow phenomena under scramjet operating conditions, for which detailed profiles of thermochemical variables are available, are taken as validation cases. Two different convective Mach numbers, 0.16 and 0.70, are considered for the simulations. The computed growth rate, pitot pressure, and mass fraction profiles for both cases match extremely well with experimental values and with other high-fidelity numerical results, in both the near-field and far-field regions. For the higher convective Mach number the predicted growth rate matches nicely with the empirical Dimotakis curve, whereas for the lower convective Mach number the predicted growth rate is higher. It is shown that a well-resolved RANS calculation can capture the mixing of two supersonic dissimilar gases better than high-fidelity LES calculations.

  9. Region-Based Prediction for Image Compression in the Cloud.

    PubMed

    Begaint, Jean; Thoreau, Dominique; Guillotel, Philippe; Guillemot, Christine

    2018-04-01

    Thanks to the increasing number of images stored in the cloud, external image similarities can be leveraged to compress images efficiently by exploiting inter-image correlations. In this paper, we propose a novel image prediction scheme for cloud storage. Unlike current state-of-the-art methods, we use a semi-local approach to exploit inter-image correlation. The reference image is first segmented into multiple planar regions determined from matched local features and superpixels. The geometric and photometric disparities between the matched regions of the reference image and the current image are then compensated. Finally, multiple references are generated from the estimated compensation models and organized in a pseudo-sequence to differentially encode the input image using classical video coding tools. Experimental results demonstrate that the proposed approach yields significant rate-distortion performance improvements compared with current image inter-coding solutions such as High Efficiency Video Coding (HEVC).

  10. Computation of Alfvén eigenmode stability and saturation through a reduced fast ion transport model in the TRANSP tokamak transport code

    NASA Astrophysics Data System (ADS)

    Podestà, M.; Gorelenkova, M.; Gorelenkov, N. N.; White, R. B.

    2017-09-01

    Alfvénic instabilities (AEs) are well known as a potential cause of enhanced fast ion transport in fusion devices. Given a specific plasma scenario, quantitative predictions of (i) expected unstable AE spectrum and (ii) resulting fast ion transport are required to prevent or mitigate the AE-induced degradation in fusion performance. Reduced models are becoming an attractive tool to analyze existing scenarios as well as for scenario prediction in time-dependent simulations. In this work, a neutral beam heated NSTX discharge is used as reference to illustrate the potential of a reduced fast ion transport model, known as kick model, that has been recently implemented for interpretive and predictive analysis within the framework of the time-dependent tokamak transport code TRANSP. Predictive capabilities for AE stability and saturation amplitude are first assessed, based on given thermal plasma profiles only. Predictions are then compared to experimental results, and the interpretive capabilities of the model further discussed. Overall, the reduced model captures the main properties of the instabilities and associated effects on the fast ion population. Additional information from the actual experiment enables further tuning of the model’s parameters to achieve a close match with measurements.

  11. Loss model for off-design performance analysis of radial turbines with pivoting-vane, variable-area stators

    NASA Technical Reports Server (NTRS)

    Meitner, P. L.; Glassman, A. J.

    1980-01-01

    An off-design performance loss model for a radial turbine with pivoting, variable-area stators is developed through a combination of analytical modeling and experimental data analysis. A viscous loss model is used for the variation in stator loss with setting angle, and stator vane end-clearance leakage effects are predicted by a clearance flow model. The variation of rotor loss coefficient with stator setting angle is obtained by means of an analytical matching of experimental data for a rotor that was tested with six stators, having throat areas from 20 to 144% of the design area. An incidence loss model is selected to obtain best agreement with experimental data. The stator vane end-clearance leakage model predicts increasing mass flow and decreasing efficiency as a result of end-clearances, with changes becoming significantly larger with decreasing stator area.

  12. Comparison of different two-pathway models for describing the combined effect of DO and nitrite on the nitrous oxide production by ammonia-oxidizing bacteria.

    PubMed

    Lang, Longqi; Pocquet, Mathieu; Ni, Bing-Jie; Yuan, Zhiguo; Spérandio, Mathieu

    2017-02-01

    The aim of this work is to compare the capability of two recently proposed two-pathway models for predicting nitrous oxide (N2O) production by ammonia-oxidizing bacteria (AOB) over varying ranges of dissolved oxygen (DO) and nitrite. The first model includes the electron carriers, whereas the second model is based on direct coupling of electron donors and acceptors. Simulations are confronted with extensive sets of experiments (43 batches) from different studies with three different microbial systems. Despite their different mathematical structures, both models describe the combined effect of DO and nitrite on N2O production rate and emission factor equally well. The model-predicted contributions of the nitrifier denitrification pathway and the hydroxylamine pathway also matched well with the available isotopic measurements. Based on sensitivity analysis, calibration procedures are described and discussed to facilitate the future use of these models.

  13. Thermal therapy in urologic systems: a comparison of arrhenius and thermal isoeffective dose models in predicting hyperthermic injury.

    PubMed

    He, Xiaoming; Bhowmick, Sankha; Bischof, John C

    2009-07-01

    The Arrhenius and thermal isoeffective dose (TID) models are the two most commonly used models for predicting hyperthermic injury. The TID model is essentially derived from the Arrhenius model, but due to a variety of assumptions and simplifications now leads to different predictions, particularly at temperatures higher than 50 degrees C. In the present study, the two models are compared and their appropriateness tested for predicting hyperthermic injury in both the traditional hyperthermia (usually, 43-50 degrees C) and thermal surgery (or thermal therapy/thermal ablation, usually, >50 degrees C) regime. The kinetic parameters of thermal injury in both models were obtained from the literature (or literature data), tabulated, and analyzed for various prostate and kidney systems. It was found that the kinetic parameters vary widely, and were particularly dependent on the cell or tissue type, injury assay used, and the time when the injury assessment was performed. In order to compare the capability of the two models for thermal injury prediction, thermal thresholds for complete killing (i.e., 99% cell or tissue injury) were predicted using the models in two important urologic systems, viz., the benign prostatic hyperplasia tissue and the normal porcine kidney tissue. The predictions of the two models matched well at temperatures below 50 degrees C. At higher temperatures, however, the thermal thresholds predicted using the TID model with a constant R value of 0.5, the value commonly used in the traditional hyperthermia literature, are much lower than those predicted using the Arrhenius model. This suggests that traditional use of the TID model (i.e., R=0.5) is inappropriate for predicting hyperthermic injury in the thermal surgery regime (>50 degrees C). Finally, the time-temperature relationships for complete killing (i.e., 99% injury) were calculated and analyzed using the Arrhenius model for the various prostate and kidney systems.
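The divergence between the two models above 50 degrees C can be reproduced from their defining equations: the TID model scales time by a constant factor R per degree away from 43 degrees C, while the Arrhenius injury rate follows A*exp(-Ea/RT). In the sketch below the two are anchored to agree at 43 degrees C; the kinetic parameters are illustrative, not the paper's tabulated values for prostate or kidney tissue.

```python
import math

R_GAS = 8.314
A, Ea = 1.0e84, 5.3e5   # illustrative Arrhenius parameters (1/s, J/mol)

def t99_arrhenius(T_c):
    """Minutes to 99% injury: Omega = A*exp(-Ea/RT)*t reaches ln(100)."""
    k = A * math.exp(-Ea / (R_GAS * (T_c + 273.15)))
    return math.log(100.0) / k / 60.0

def t99_tid(T_c, Rc=0.5):
    """TID threshold with constant R = 0.5, anchored to agree at 43 C."""
    return t99_arrhenius(43.0) * Rc ** (T_c - 43.0)

for T in (45.0, 50.0, 60.0):
    print(T, t99_arrhenius(T), t99_tid(T))
```

Because the effective per-degree Arrhenius factor drifts away from 2 at higher temperatures, the fixed-R TID threshold falls progressively below the Arrhenius one, mirroring the abstract's conclusion.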

  14. Using Bayesian Networks for Candidate Generation in Consistency-based Diagnosis

    NASA Technical Reports Server (NTRS)

    Narasimhan, Sriram; Mengshoel, Ole

    2008-01-01

    Consistency-based diagnosis relies heavily on the assumption that discrepancies between model predictions and sensor observations can be detected accurately. When sources of uncertainty such as sensor noise and model abstraction exist, robust schemes have to be designed to make a binary decision on whether predictions are consistent with observations. This risks false alarms and missed alarms when an erroneous decision is made. Moreover, when multiple sensors (with differing sensing properties) are available, the degree of match between predictions and observations can be used to guide the search for fault candidates. In this paper we propose a novel approach to handling this problem using Bayesian networks. In the consistency-based diagnosis formulation, automatically generated Bayesian networks are used to encode a probabilistic measure of fit between predictions and observations. A Bayesian network inference algorithm is then used to compute the most probable fault candidates.
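The probabilistic measure of fit can be illustrated without a full Bayesian network: a Gaussian log-likelihood of the residual already gives a graded score for ranking fault candidates instead of a brittle consistent/inconsistent threshold. A toy sketch (the candidate names, predictions, and noise level are invented):

```python
import math

def log_likelihood(pred, obs, sigma):
    """Gaussian log-probability of an observation given a prediction --
    a graded degree of match instead of a binary consistency check."""
    return (-0.5 * math.log(2 * math.pi * sigma ** 2)
            - 0.5 * ((obs - pred) / sigma) ** 2)

obs, sigma = 5.3, 0.4                                # sensor value, noise
candidates = {"nominal": 5.0, "valve-stuck": 7.5}    # predicted values
scores = {c: log_likelihood(p, obs, sigma) for c, p in candidates.items()}
print(max(scores, key=scores.get))
```

In the paper's formulation, the automatically generated Bayesian network plays this scoring role across many sensors at once, and standard inference then returns the most probable candidates.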

  15. The predictability of a lake phytoplankton community, over time-scales of hours to years.

    PubMed

    Thomas, Mridul K; Fontana, Simone; Reyes, Marta; Kehoe, Michael; Pomati, Francesco

    2018-05-01

    Forecasting changes to ecological communities is one of the central challenges in ecology. However, nonlinear dependencies, biotic interactions and data limitations have limited our ability to assess how predictable communities are. Here, we used a machine learning approach and environmental monitoring data (biological, physical and chemical) to assess the predictability of phytoplankton cell density in one lake across an unprecedented range of time-scales. Communities were highly predictable over hours to months: model R2 decreased from 0.89 at 4 hours to 0.74 at 1 month, and in a long-term dataset lacking fine spatial resolution, from 0.46 at 1 month to 0.32 at 10 years. When cyanobacterial and eukaryotic algal cell densities were examined separately, model-inferred environmental growth dependencies matched laboratory studies, and suggested novel trade-offs governing their competition. High-frequency monitoring and machine learning can set prediction targets for process-based models and help elucidate the mechanisms underlying ecological dynamics. © 2018 John Wiley & Sons Ltd/CNRS.
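The decay of predictive skill with forecast horizon reported here can be illustrated on synthetic data. The sketch below is a minimal stand-in, not the paper's random-forest workflow: it uses the known optimal forecast for an AR(1) series and shows R2 falling as lead time grows:

```python
import random

random.seed(0)

# Synthetic autocorrelated "cell density" series: AR(1) with phi = 0.9.
phi, n = 0.9, 5000
x = [0.0]
for _ in range(n - 1):
    x.append(phi * x[-1] + random.gauss(0.0, 1.0))

def r_squared(series, lag):
    # Forecast series[t + lag] from series[t] with the optimal AR(1)
    # rule phi**lag * series[t], then score with R2.
    preds = [phi ** lag * series[t] for t in range(len(series) - lag)]
    obs = series[lag:]
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, preds))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

short, long_ = r_squared(x, 1), r_squared(x, 20)  # skill decays with horizon
```

For this process the theoretical skill is R2 = phi**(2*lag), so even a perfect model loses predictability with horizon; the paper's R2 decline from 4 hours to 10 years is the empirical analogue.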

  16. Improving density functional tight binding predictions of free energy surfaces for peptide condensation reactions in solution

    NASA Astrophysics Data System (ADS)

    Kroonblawd, Matthew; Goldman, Nir

    First principles molecular dynamics using highly accurate density functional theory (DFT) is a common tool for predicting chemistry, but the accessible time and space scales are often orders of magnitude beyond the resolution of experiments. Semi-empirical methods such as density functional tight binding (DFTB) offer up to a thousand-fold reduction in required CPU hours and can approach experimental scales. However, standard DFTB parameter sets lack good transferability and calibration for a particular system is usually necessary. Force matching the pairwise repulsive energy term in DFTB to short DFT trajectories can improve the former's accuracy for chemistry that is fast relative to DFT simulation times (<10 ps), but the effects on slow chemistry and the free energy surface are not well-known. We present a force matching approach to increase the accuracy of DFTB predictions for free energy surfaces. Accelerated sampling techniques are combined with path collective variables to generate the reference DFT data set and validate fitted DFTB potentials without a priori knowledge of transition states. Accuracy of force-matched DFTB free energy surfaces is assessed for slow peptide-forming reactions by direct comparison to DFT results for particular paths. Extensions to model prebiotic chemistry under shock conditions are discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
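Force matching the repulsive term reduces, in its simplest linear form, to least squares against reference forces. A deliberately tiny sketch, with an invented two-function basis and made-up reference data standing in for the DFT trajectory:

```python
# Linear least-squares fit of a pairwise repulsive force to reference data.
def solve2(A, b):
    # Cramer's rule for a 2x2 system A @ c = b.
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    return ((a22 * b[0] - a12 * b[1]) / det, (a11 * b[1] - a21 * b[0]) / det)

r_vals = [1.0, 1.5, 2.0, 2.5]            # pair distances (made up)
f_ref = [3.0, 1.1, 0.55, 0.3]            # reference "DFT" forces (made up)
basis = [lambda r: r ** -2, lambda r: r ** -3]

# Normal equations (B^T B) c = B^T f for the design matrix B.
B = [[phi(r) for phi in basis] for r in r_vals]
BtB = [[sum(B[k][i] * B[k][j] for k in range(len(B))) for j in range(2)]
       for i in range(2)]
Btf = [sum(B[k][i] * f_ref[k] for k in range(len(B))) for i in range(2)]
c1, c2 = solve2(BtB, Btf)
fit = [c1 * r ** -2 + c2 * r ** -3 for r in r_vals]
```

Real DFTB repulsive potentials use spline parameterizations fit over many atomic environments, but the structure of the problem, a linear fit of a pair term to reference forces, is the same.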

  17. The Child as Econometrician: A Rational Model of Preference Understanding in Children

    PubMed Central

    Lucas, Christopher G.; Griffiths, Thomas L.; Xu, Fei; Fawcett, Christine; Gopnik, Alison; Kushnir, Tamar; Markson, Lori; Hu, Jane

    2014-01-01

    Recent work has shown that young children can learn about preferences by observing the choices and emotional reactions of other people, but there is no unified account of how this learning occurs. We show that a rational model, built on ideas from economics and computer science, explains the behavior of children in several experiments, and offers new predictions as well. First, we demonstrate that when children use statistical information to learn about preferences, their inferences match the predictions of a simple econometric model. Next, we show that this same model can explain children's ability to learn that other people have preferences similar to or different from their own and use that knowledge to reason about the desirability of hidden objects. Finally, we use the model to explain a developmental shift in preference understanding. PMID:24667309
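The "simple econometric model" is not spelled out in the abstract; a common minimal choice is a softmax/logit choice rule whose utility parameters are inferred from observed choices. A hypothetical sketch with invented data:

```python
import math

def choice_prob(delta_u):
    # Softmax/logit probability of choosing option A given its utility
    # advantage delta_u over option B.
    return 1.0 / (1.0 + math.exp(-delta_u))

def log_likelihood(delta_u, picks_a, picks_b):
    p = choice_prob(delta_u)
    return picks_a * math.log(p) + picks_b * math.log(1.0 - p)

# Grid search for the utility difference best explaining 8 A-choices in 10:
grid = [i / 10.0 for i in range(-30, 31)]
best = max(grid, key=lambda d: log_likelihood(d, 8, 2))   # near ln(0.8/0.2)
```

The maximum-likelihood utility gap lands at ln(0.8/0.2) ~ 1.39, i.e. the inferred preference strength exactly matches the observed choice proportions, which is the sense in which statistical choice data pins down preferences in such models.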

  18. The child as econometrician: a rational model of preference understanding in children.

    PubMed

    Lucas, Christopher G; Griffiths, Thomas L; Xu, Fei; Fawcett, Christine; Gopnik, Alison; Kushnir, Tamar; Markson, Lori; Hu, Jane

    2014-01-01

    Recent work has shown that young children can learn about preferences by observing the choices and emotional reactions of other people, but there is no unified account of how this learning occurs. We show that a rational model, built on ideas from economics and computer science, explains the behavior of children in several experiments, and offers new predictions as well. First, we demonstrate that when children use statistical information to learn about preferences, their inferences match the predictions of a simple econometric model. Next, we show that this same model can explain children's ability to learn that other people have preferences similar to or different from their own and use that knowledge to reason about the desirability of hidden objects. Finally, we use the model to explain a developmental shift in preference understanding.

  19. Size matters: abundance matching, galaxy sizes, and the Tully-Fisher relation in EAGLE

    NASA Astrophysics Data System (ADS)

    Ferrero, Ismael; Navarro, Julio F.; Abadi, Mario G.; Sales, Laura V.; Bower, Richard G.; Crain, Robert A.; Frenk, Carlos S.; Schaller, Matthieu; Schaye, Joop; Theuns, Tom

    2017-02-01

    The Tully-Fisher relation (TFR) links the stellar mass of a disc galaxy, Mstr, to its rotation speed: it is well approximated by a power law, shows little scatter, and evolves weakly with redshift. The relation has been interpreted as reflecting the mass-velocity scaling (M ∝ V3) of dark matter haloes, but this interpretation has been called into question by abundance-matching (AM) models, which predict the galaxy-halo mass relation to deviate substantially from a single power law and to evolve rapidly with redshift. We study the TFR of luminous spirals and its relation to AM using the EAGLE set of Λ cold dark matter (ΛCDM) cosmological simulations. Matching both relations requires disc sizes to satisfy constraints given by the concentration of haloes and their response to galaxy assembly. EAGLE galaxies approximately match these constraints and show a tight mass-velocity scaling that compares favourably with the observed TFR. The TFR is degenerate to changes in galaxy formation efficiency and the mass-size relation; simulations that fail to match the galaxy stellar mass function may fit the observed TFR if galaxies follow a different mass-size relation. The small scatter in the simulated TFR results because, at fixed halo mass, galaxy mass and rotation speed correlate strongly, scattering galaxies along the main relation. EAGLE galaxies evolve with lookback time following approximately the prescriptions of AM models and the observed mass-size relation of bright spirals, leading to a weak TFR evolution consistent with observation out to z = 1. ΛCDM models that match both the abundance and size of galaxies as a function of stellar mass have no difficulty reproducing the observed TFR and its evolution.
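Abundance matching itself is algorithmically simple: rank halos by mass, rank galaxies by stellar mass, and pair them rank by rank. A toy sketch with arbitrary masses (production AM analyses match cumulative number densities over full mass functions and add scatter):

```python
# Rank-order (toy) abundance matching: the most massive halo is assigned
# the most massive galaxy, and so on down both ranked lists.
halo_masses = [3e12, 8e11, 5e13, 1e12]       # arbitrary units, made up
stellar_masses = [2e10, 5e9, 1e11, 8e9]

halos = sorted(halo_masses, reverse=True)
gals = sorted(stellar_masses, reverse=True)
pairing = list(zip(halos, gals))             # (halo mass, stellar mass) pairs
```

The resulting galaxy-halo mass relation is whatever the two abundance functions imply, which is why it can deviate strongly from a single power law even though the halo mass-velocity scaling is close to M proportional to V**3.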

  20. Thermo-hydraulics of the Peruvian accretionary complex at 12°S

    USGS Publications Warehouse

    Kukowski, Nina; Pecher, Ingo

    1999-01-01

    The models were constrained by the thermal gradient obtained from the depth of bottom-simulating reflectors (BSRs) at the lower slope and some conventional measurements. We found that significant frictional heating is required to explain the observed strong landward increase of heat flux. This is consistent with results from sandbox modelling which predict strong basal friction at this margin. A significantly higher heat source is needed to match the observed thermal gradient in the southern line.

  1. Verification of relationships between anthropometric variables among ureteral stents recipients and ureteric lengths: a challenge for Vitruvian-da Vinci theory.

    PubMed

    Acelam, Philip A

    2015-01-01

    To determine and verify how anthropometric variables correlate to ureteric lengths and how well statistical models approximate the actual ureteric lengths. In this work, 129 charts of endourological patients (71 females and 58 males) were studied retrospectively. Data were gathered from various research centers from North and South America. Continuous data were studied using descriptive statistics. Anthropometric variables (age, body surface area, body weight, obesity, and stature) were utilized as predictors of ureteric lengths. Linear regressions and correlations were used for studying relationships between the predictors and the outcome variables (ureteric lengths); the P-value was set at 0.05. To assess how well statistical models were capable of predicting the actual ureteric lengths, percentages (or ratios of matched to mismatched results) were employed. The results of the study show that anthropometric variables do not correlate well to ureteric lengths. Statistical models can partially estimate ureteric lengths. Of the five anthropometric variables studied, three (body frame, stature, and weight, each with P<0.0001) were significant. Two of the variables, age (R2=0.01; P=0.20) and obesity (R2=0.03; P=0.06), were found to be poor estimators of ureteric lengths. None of the predictors reached the expected (match:above:below) ratio of 1:0:0 to qualify as reliable predictors of ureteric lengths. There is not sufficient evidence to conclude that anthropometric variables can reliably predict ureteric lengths. These variables appear to lack adequate specificity, as they failed to reach the expected (match:above:below) ratio of 1:0:0. Consequently, selection of ureteral stents continues to remain a challenge. However, height (R2=0.68), with a (match:above:below) ratio of 3:3:4, appears suited for use as an estimator, but on the basis of a decision rule. Additional research is recommended for stent improvements and ureteric length determinations.

  2. Verification of relationships between anthropometric variables among ureteral stents recipients and ureteric lengths: a challenge for Vitruvian-da Vinci theory

    PubMed Central

    Acelam, Philip A

    2015-01-01

    Objective To determine and verify how anthropometric variables correlate to ureteric lengths and how well statistical models approximate the actual ureteric lengths. Materials and methods In this work, 129 charts of endourological patients (71 females and 58 males) were studied retrospectively. Data were gathered from various research centers from North and South America. Continuous data were studied using descriptive statistics. Anthropometric variables (age, body surface area, body weight, obesity, and stature) were utilized as predictors of ureteric lengths. Linear regressions and correlations were used for studying relationships between the predictors and the outcome variables (ureteric lengths); the P-value was set at 0.05. To assess how well statistical models were capable of predicting the actual ureteric lengths, percentages (or ratios of matched to mismatched results) were employed. Results The results of the study show that anthropometric variables do not correlate well to ureteric lengths. Statistical models can partially estimate ureteric lengths. Of the five anthropometric variables studied, three (body frame, stature, and weight, each with P<0.0001) were significant. Two of the variables, age (R2=0.01; P=0.20) and obesity (R2=0.03; P=0.06), were found to be poor estimators of ureteric lengths. None of the predictors reached the expected (match:above:below) ratio of 1:0:0 to qualify as reliable predictors of ureteric lengths. Conclusion There is not sufficient evidence to conclude that anthropometric variables can reliably predict ureteric lengths. These variables appear to lack adequate specificity, as they failed to reach the expected (match:above:below) ratio of 1:0:0. Consequently, selection of ureteral stents continues to remain a challenge. However, height (R2=0.68), with a (match:above:below) ratio of 3:3:4, appears suited for use as an estimator, but on the basis of a decision rule. 
Additional research is recommended for stent improvements and ureteric length determinations. PMID:26317082

  3. The use of GIS and multi-criteria evaluation (MCE) to identify agricultural land management practices which cause surface water pollution in drinking water supply catchments.

    PubMed

    Grayson, Richard; Kay, Paul; Foulger, Miles

    2008-01-01

    Diffuse pollution poses a threat to water quality and results in the need for treatment of potable water supplies, which can prove costly. Within the Yorkshire region, UK, nitrates, pesticides and water colour present particular treatment problems. Catchment management techniques offer an alternative to 'end of pipe' solutions and allow resources to be targeted at the most polluting areas. This project has attempted to identify such areas using GIS-based modelling approaches in catchments where water quality data were available. As no model exists to predict water colour, a model was created using an MCE method which is capable of predicting colour concentrations at the catchment scale. CatchIS was used to predict pesticide and nitrate N concentrations and was found to be generally capable of reliably predicting nitrate N loads at the catchment scale. The pesticide results did not match the historic data, possibly due to problems with the historic pesticide data and temporal and spatial variability in pesticide usage. The use of these models can be extended to predict water quality problems in catchments where water quality data are unavailable and to highlight areas of concern. IWA Publishing 2008.

  4. Extension of HCDstruct for Transonic Aeroservoelastic Analysis of Unconventional Aircraft Concepts

    NASA Technical Reports Server (NTRS)

    Quinlan, Jesse R.; Gern, Frank H.

    2017-01-01

    A substantial effort has been made to implement an enhanced aerodynamic modeling capability in the Higher-fidelity Conceptual Design and structural optimization tool. This additional capability is needed for a rapid, physics-based method of modeling advanced aircraft concepts at risk of structural failure due to dynamic aeroelastic instabilities. To adequately predict these instabilities, in particular for transonic applications, a generalized aerodynamic matching algorithm was implemented to correct the doublet-lattice model available in Nastran using solution data from a priori computational fluid dynamics analysis. This new capability is demonstrated for two tube-and-wing aircraft configurations, including a Boeing 737-200 for implementation validation and the NASA D8 as a first use case. Results validate the current implementation of the aerodynamic matching utility and demonstrate the importance of using such a method for aircraft configurations featuring fuselage-wing aerodynamic interaction.

  5. Data-resolution matrix and model-resolution matrix for Rayleigh-wave inversion using a damped least-squares method

    USGS Publications Warehouse

    Xia, J.; Miller, R.D.; Xu, Y.

    2008-01-01

    Inversion of multimode surface-wave data is of increasing interest in the near-surface geophysics community. For a given near-surface geophysical problem, it is essential to understand how well the data, calculated according to a layered-earth model, might match the observed data. A data-resolution matrix is a function of the data kernel (determined by a geophysical model and a priori information applied to the problem), not the data. A data-resolution matrix of high-frequency (>2 Hz) Rayleigh-wave phase velocities, therefore, offers a quantitative tool for designing field surveys and predicting the match between calculated and observed data. We employed a data-resolution matrix to select data that would be well predicted, and found that there are advantages to incorporating higher modes in inversion. The resulting discussion using the data-resolution matrix provides insight into the process of inverting Rayleigh-wave phase velocities with higher-mode data to estimate S-wave velocity structure. The discussion also suggested that each near-surface geophysical target can only be resolved using Rayleigh-wave phase velocities within specific frequency ranges, and that higher-mode data are normally more accurately predicted than fundamental-mode data because of restrictions on the data kernel for the inversion system. We used synthetic and real-world examples to demonstrate that data selected with the data-resolution matrix can provide better inversion results, and to explain with the data-resolution matrix why incorporating higher-mode data in inversion can provide better results. We also calculated model-resolution matrices in these examples to show the potential of increasing model resolution with selected surface-wave data. © Birkhäuser 2008.
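For damped least squares the data-resolution matrix has the closed form N = G (G^T G + eps^2 I)^-1 G^T, with diagonal entries near 1 marking data the inverted model can reproduce. A tiny sketch with a made-up 3-datum, 2-parameter kernel and hand-rolled 2x2 linear algebra:

```python
# Data-resolution matrix for damped least squares.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inv2(M):
    # Closed-form inverse of a 2x2 matrix.
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

G = [[1.0, 0.0], [1.0, 1.0], [0.0, 2.0]]   # data kernel (3 data, 2 parameters)
eps = 0.1                                   # damping factor
GtG = matmul(transpose(G), G)
damped = [[GtG[i][j] + (eps ** 2 if i == j else 0.0) for j in range(2)]
          for i in range(2)]
N = matmul(matmul(G, inv2(damped)), transpose(G))
resolution = [N[i][i] for i in range(3)]    # per-datum predictability
```

As the abstract notes, N depends only on the kernel and the regularization, never on observed values, so it can be computed before a survey to decide which data are worth acquiring.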

  6. An Evaluation of the FLAG Friction Model frictmultiscale2 using the Experiments of Juanicotena and Szarynski

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zocher, Marvin Anthony; Hammerberg, James Edward

    The experiments of Juanicotena and Szarynski, namely T101, T102, and T105 are modeled for purposes of gaining a better understanding of the FLAG friction model frictmultiscale2. This exercise has been conducted as a first step toward model validation. It is shown that with inclusion of the friction model in the numerical analysis, the results of Juanicotena and Szarynski are predicted reasonably well. Without the friction model, simulation results do not match the experimental data nearly as well. Suggestions for follow-on work are included.

  7. Impact of topographic mask models on scanner matching solutions

    NASA Astrophysics Data System (ADS)

    Tyminski, Jacek K.; Pomplun, Jan; Renwick, Stephen P.

    2014-03-01

    Of keen interest to the IC industry are advanced computational lithography applications such as Optical Proximity Correction of IC layouts (OPC), scanner matching by optical proximity effect matching (OPEM), and Source Optimization (SO) and Source-Mask Optimization (SMO) used as advanced reticle enhancement techniques. The success of these tasks is strongly dependent on the integrity of the lithographic simulators used in computational lithography (CL) optimizers. Lithographic mask models used by these simulators are key drivers impacting the accuracy of the image predictions and, as a consequence, determine the validity of these CL solutions. Much of the CL work involves Kirchhoff mask models, a.k.a. the thin-mask approximation, simplifying the treatment of the mask near-field images. On the other hand, imaging models for hyper-NA scanners require that the interactions of the illumination fields with the mask topography be rigorously accounted for, by numerically solving Maxwell's Equations. The simulators used to predict the image formation in hyper-NA scanners must rigorously treat the mask topography and its interaction with the scanner illuminators. Such imaging models come at a high computational cost and pose challenging accuracy vs. compute time tradeoffs. An additional complication comes from the fact that the performance metrics used in computational lithography tasks show highly nonlinear responses to the optimization parameters. Finally, the number of patterns used for tasks such as OPC, OPEM, SO, or SMO ranges from tens to hundreds. These requirements determine the complexity and the workload of the lithography optimization tasks. The tools to build rigorous imaging optimizers based on first principles governing imaging in scanners are available, but the quantifiable benefits they might provide are not well understood. 
To quantify the performance of OPE matching solutions, we have compared the results of various imaging optimization trials obtained with Kirchhoff mask models to those obtained with rigorous models involving solutions of Maxwell's Equations. In both sets of trials, we used sets of large numbers of patterns, with specifications representative of CL tasks commonly encountered in hyper-NA imaging. In this report we present OPEM solutions based on various mask models and discuss the models' impact on hyper-NA scanner matching accuracy. We draw conclusions on the accuracy of results obtained with thin-mask models vs. the topographic OPEM solutions. We present various examples representative of scanner image matching for patterns representative of the current generation of IC designs.

  8. Orion Parachute Riser Cutter Development

    NASA Technical Reports Server (NTRS)

    Oguz, Sirri; Salazar, Frank

    2011-01-01

    This paper presents the tests and analytical approach used in the development of a steel riser cutter for the CEV Parachute Assembly System (CPAS) used on the Orion crew module. Figure 1 shows the riser cutter and the steel riser bundle, which consists of six individual cables. Due to the highly compressed schedule, initial unavailability of the riser material, and the Orion Forward Bay mechanical constraints, JSC primarily relied on a combination of internal ballistics analysis and LS-DYNA simulation for this project. Various one-dimensional internal ballistics codes that use a standard equation of state and conservation of energy have commonly been used in the development of CAD devices for initial first-order estimates and as an enhancement to the test program. While these codes are very accurate for propellant performance prediction, they usually lack a fully defined kinematic model for dynamic predictions. A simple piston device can easily and accurately be modeled using an equation of motion. However, the accuracy of analytical models is greatly reduced for more complicated devices with complex external loads, nonlinear trajectories or unique unlocking features. A 3D finite element model of a CAD device with all critical features included can vastly improve the analytical ballistic predictions when it is used as a supplement to the ballistic code. During this project, an LS-DYNA structural 3D model was used to predict the riser resisting load that was needed for the ballistic code. A Lagrangian model with eroding elements, shown in Figure 2, was used for the blade, steel riser and the anvil. The riser material failure strain was fine-tuned by matching the dent depth on the anvil with the actual test data. The LS-DYNA model was also utilized to optimize the blade tip design for the most efficient cut. In parallel, the propellant type and amount were determined using the CADPROG internal ballistics code. 
Initial test results showed a good match with the LS-DYNA and CADPROG simulations. The final paper will present a detailed roadmap from initial ballistic modeling and LS-DYNA simulation to performance testing. A blade shape optimization study will also be presented.

  9. Phylogenies support out-of-equilibrium models of biodiversity.

    PubMed

    Manceau, Marc; Lambert, Amaury; Morlon, Hélène

    2015-04-01

    There is a long tradition in ecology of studying models of biodiversity at equilibrium. These models, including the influential Neutral Theory of Biodiversity, have been successful at predicting major macroecological patterns, such as species abundance distributions. But they have failed to predict macroevolutionary patterns, such as those captured in phylogenetic trees. Here, we develop a model of biodiversity in which all individuals have identical demographic rates, metacommunity size is allowed to vary stochastically according to population dynamics, and speciation arises naturally from the accumulation of point mutations. We show that this model generates phylogenies matching those observed in nature if the metacommunity is out of equilibrium. We develop a likelihood inference framework that allows fitting our model to empirical phylogenies, and apply this framework to various mammalian families. Our results corroborate the hypothesis that biodiversity dynamics are out of equilibrium. © 2015 John Wiley & Sons Ltd/CNRS.

  10. Alterations in choice behavior by manipulations of world model.

    PubMed

    Green, C S; Benson, C; Kersten, D; Schrater, P

    2010-09-14

    How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) "probability matching"-a consistent example of suboptimal choice behavior seen in humans-occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning.
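The gap between probability matching and a max decision rule is easy to make concrete. Assuming a known Bernoulli success probability p, the two rules have closed-form expected accuracies; a conjugate Beta-Bernoulli update stands in, very loosely, for the paper's model-based learner:

```python
# Expected prediction accuracy for a Bernoulli outcome with success
# probability p, under the two decision rules.
def accuracy_maximize(p):
    return max(p, 1.0 - p)                   # always predict the likelier outcome

def accuracy_match(p):
    return p * p + (1.0 - p) * (1.0 - p)     # predict "success" with probability p

p = 0.7
gap = accuracy_maximize(p) - accuracy_match(p)   # 0.70 - 0.58 = 0.12

# Conjugate Beta-Bernoulli update, a minimal stand-in for a model-based
# learner estimating the generative process from outcomes:
def beta_posterior_mean(a, b, outcomes):
    for o in outcomes:
        a, b = a + o, b + (1 - o)
    return a / (a + b)

est = beta_posterior_mean(1, 1, [1, 1, 0, 1, 1, 0, 1])   # 6/9
```

The paper's point is subtler than this sketch: a learner applying the max rule can still *look* like a probability matcher when its prior beliefs about the generative process (e.g. that outcome probabilities drift) are plausible but wrong.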

  11. Healthcare costs among adults with type 2 diabetes initiating saxagliptin or linagliptin: a US-based claims analysis.

    PubMed

    Kong, Amanda M; Farahbakhshian, Sepehr; Pendergraft, Trudy; Brouillette, Matthew A; Mukherjee, Biswarup; Smith, David M; Sheehan, John J

    2017-10-01

    To compare healthcare costs of adults with type 2 diabetes (T2D) after initiation of saxagliptin or linagliptin, two antidiabetic medications in the dipeptidyl peptidase-4 inhibitor medication class. Patients with T2D who were at least 18 years old and initiated saxagliptin or linagliptin (index date) between 1 June 2011 and 30 June 2014 were identified in the MarketScan Commercial and Medicare Supplemental Databases. All-cause healthcare costs and diabetes-related costs (T2D diagnosis on a medical claim and/or an antidiabetic medication claim) were measured in the 1 year follow-up period. Saxagliptin and linagliptin initiators were matched using propensity score methods. Cost ratios (CRs) and predicted costs were estimated from generalized linear models and recycled predictions. There were 34,560 saxagliptin initiators and 18,175 linagliptin initiators identified (mean ages 57 and 59; 55% and 56% male, respectively). Before matching, saxagliptin initiators had significantly lower all-cause total healthcare costs than linagliptin initiators (mean = $15,335 [SD $28,923] vs. mean = $20,069 [SD $48,541], p < .001) and significantly lower diabetes-related total healthcare costs (mean = $6109 [SD $13,851] vs. mean = $7393 [SD $26,041], p < .001). In matched analyses (n = 16,069 per cohort), saxagliptin initiators had lower all-cause follow-up costs than linagliptin initiators (CR = 0.953, 95% CI = 0.932-0.974, p < .001; predicted costs = $17,211 vs. $18,068). There was no significant difference in diabetes-related total costs after matching; however, diabetes-related medical costs were significantly lower for saxagliptin initiators (CR = 0.959, 95% CI = 0.927-0.993, p = 0.017; predicted costs = $3989 vs. $4159). Adult patients with T2D initiating treatment with saxagliptin had lower total all-cause healthcare costs and diabetes-related medical costs over 1 year compared with patients initiating treatment with linagliptin.
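The cohort-balancing step can be sketched in miniature as greedy 1:1 nearest-neighbour matching within a caliper on precomputed propensity scores. All identifiers and scores below are made up; in practice the scores come from a logistic regression of treatment on covariates, and the paper's exact matching method may differ:

```python
# Greedy 1:1 nearest-neighbour propensity score matching within a caliper.
def greedy_match(treated, control, caliper=0.05):
    pairs, used = [], set()
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: kv[1]):
        best, best_d = None, caliper
        for c_id, c_ps in control.items():
            d = abs(t_ps - c_ps)
            if c_id not in used and d <= best_d:
                best, best_d = c_id, d
        if best is not None:
            used.add(best)
            pairs.append((t_id, best))
    return pairs

# Hypothetical saxagliptin (treated) and linagliptin (control) scores:
treated = {"s1": 0.31, "s2": 0.62, "s3": 0.48}
control = {"l1": 0.30, "l2": 0.65, "l3": 0.50, "l4": 0.90}
matched = greedy_match(treated, control)
```

Follow-up costs are then compared only within the matched cohorts, which is what lets the study attribute the cost ratio to the drug rather than to baseline differences.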

  12. Application of MUSLE for the prediction of phosphorus losses.

    PubMed

    Noor, Hamze; Mirnia, Seyed Khalagh; Fazli, Somaye; Raisi, Mohamad Bagher; Vafakhah, Mahdi

    2010-01-01

    Soil erosion in forestlands affects not only land productivity but also the water body downstream. The Universal Soil Loss Equation (USLE) has been applied broadly for the prediction of soil loss from upland fields. However, there are few reports concerning the prediction of nutrient (P) losses based on the USLE and its versions. The present study was conducted to evaluate the applicability of the deterministic Modified Universal Soil Loss Equation (MUSLE) model to the estimation of phosphorus losses in the Kojor forest watershed, northern Iran. The model was tested and calibrated using accurate continuous P loss data collected during seven storm events in 2008. Results of the original model simulations for storm-wise P loss did not match the observed data, while the revised version of the model reproduced the observed values well. The results of the study confirmed the efficient application of the revised MUSLE in estimating storm-wise P losses in the study area, with a level of agreement beyond 93% and an acceptable estimation error of some 35%.
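MUSLE replaces USLE's rainfall erosivity factor with a runoff term, giving the familiar event form Y = 11.8 * (Q * qp)**0.56 * K * LS * C * P. A sketch with invented factor values, plus a hypothetical enrichment step turning sediment yield into a P loss (the paper's revised model and calibration are not reproduced here):

```python
# MUSLE event sediment yield: Y = 11.8 * (Q * q_p)**0.56 * K * LS * C * P,
# with Q the runoff volume (m^3) and q_p the peak discharge (m^3/s);
# K, LS, C, P are the usual USLE soil, slope, cover and practice factors.
def musle_sediment_yield(Q, q_p, K, LS, C, P):
    return 11.8 * (Q * q_p) ** 0.56 * K * LS * C * P   # tonnes per event

# Invented factor values for one storm event:
sediment = musle_sediment_yield(Q=500.0, q_p=2.0, K=0.3, LS=1.2, C=0.1, P=1.0)
p_loss = sediment * 0.8   # hypothetical enrichment: 0.8 kg P per tonne sediment
```

Because the runoff term is observable per storm, MUSLE needs no sediment delivery ratio, which is what makes storm-wise nutrient estimates like those in this study feasible.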

  13. Predicting bias in perceived position using attention field models.

    PubMed

    Klein, Barrie P; Paffen, Chris L E; Pas, Susan F Te; Dumoulin, Serge O

    2016-05-01

    Attention is the mechanism through which we select relevant information from our visual environment. We have recently demonstrated that attention attracts receptive fields across the visual hierarchy (Klein, Harvey, & Dumoulin, 2014). We captured this receptive field attraction using an attention field model. Here, we apply this model to human perception: We predict that receptive field attraction results in a bias in perceived position, which depends on the size of the underlying receptive fields. We instructed participants to compare the relative position of Gabor stimuli, while we manipulated the focus of attention using exogenous cueing. We varied the eccentric position and spatial frequency of the Gabor stimuli to vary underlying receptive field size. The positional biases as a function of eccentricity matched the predictions by an attention field model, whereas the bias as a function of spatial frequency did not. As spatial frequency and eccentricity are encoded differently across the visual hierarchy, we speculate that they might interact differently with the attention field that is spatially defined.
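The attraction of a receptive field toward the attention field can be sketched with the usual Gaussian-product form, in which the attracted centre is the precision-weighted mean of the two centres, so larger receptive fields are pulled further. Parameter values below are illustrative, not those of the cited model fits:

```python
# Product of two Gaussians (receptive field x attention field): the new
# centre is the precision-weighted mean of the two centres.
def attracted_center(mu_rf, sigma_rf, mu_att, sigma_att):
    w_rf = 1.0 / sigma_rf ** 2
    w_att = 1.0 / sigma_att ** 2
    return (mu_rf * w_rf + mu_att * w_att) / (w_rf + w_att)

mu_att, sigma_att = 0.0, 5.0      # attended location and attention spread
small = attracted_center(10.0, 1.0, mu_att, sigma_att)   # small RF: weak pull
large = attracted_center(10.0, 4.0, mu_att, sigma_att)   # large RF: strong pull
```

This size dependence is why the study varied eccentricity and spatial frequency: both manipulate underlying receptive field size, and the model predicts correspondingly different positional biases.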

  14. Carpooling: status and potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kendall, D.C.

    1975-06-01

    Studies were conducted to analyze the status and potential of work-trip carpooling as a means of achieving more efficient use of the automobile. Current and estimated maximum potential levels of carpooling are presented together with analyses revealing characteristics of carpool trips, incentives, impacts of increased carpooling and issues related to carpool matching services. National survey results indicate the average auto occupancy for the urban work-trip is 1.2 persons per auto. This value, and the average carpool occupancy of 2.5, have been relatively stable over the last five years. An increase in work-trip occupancy from 1.2 to 1.8 would require a 100% increase in the number of carpoolers. A model was developed to predict the maximum potential level of carpooling in an urban area. Results from applying the model to the Boston region were extrapolated to estimate a maximum nationwide potential between 47 and 71% of peak-period auto commuters. Maximum benefits of increased carpooling include up to 10% savings in auto fuel consumption. A technique was developed for estimating the number of participants required in a carpool matching service to achieve a chosen level of matching among respondents, providing insight into tradeoffs between employer and regional or centralized matching services. Issues recommended for future study include incentive policies and their impacts on other modes, and the evaluation of new and ongoing carpool matching services. (11 references) (GRA)

  15. Support vector regression to predict porosity and permeability: Effect of sample size

    NASA Astrophysics Data System (ADS)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression, or neural networks trained with core and geophysical logs, suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption of conventional techniques, which employ the empirical risk minimization (ERM) inductive principle, that the empirical risk converges asymptotically to the true risk as the number of samples increases. In small-sample estimation problems, the available training samples must span the complexity of the parameter space so that the model can both match the available training samples reasonably well and generalize to new data. This is achieved with the structural risk minimization (SRM) inductive principle, which matches the capacity of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. In particular, the impact of Vapnik's ɛ-insensitive loss function and the least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method that operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of porosity and permeability with small sample sizes than the MLP method. The performance of SVR also depends on both the kernel function and the loss function used.
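
    As an illustrative sketch only (synthetic data and scikit-learn standing in for the paper's implementation; the feature weights, noise level, and sample sizes are invented), the small-sample comparison of an SRM-based SVR against an ERM-based MLP might look like:

```python
# Hypothetical small-sample regression: predict a "porosity-like" target from
# three synthetic log features, comparing SVR (SRM) with an MLP (ERM).
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Small training set, mimicking the small-sample setting the paper studies.
w = np.array([0.5, -0.2, 0.3])                      # invented true weights
X_train = rng.uniform(0, 1, size=(20, 3))
y_train = X_train @ w + rng.normal(0, 0.02, 20)
X_test = rng.uniform(0, 1, size=(200, 3))
y_test = X_test @ w

# SRM-based learner: RBF-kernel SVR with Vapnik's epsilon-insensitive loss.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_train, y_train)

# ERM-based learner: a small multilayer perceptron.
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(X_train, y_train)

mse_svr = mean_squared_error(y_test, svr.predict(X_test))
mse_mlp = mean_squared_error(y_test, mlp.predict(X_test))
print(f"SVR test MSE: {mse_svr:.4f}  MLP test MSE: {mse_mlp:.4f}")
```

    On real log data the outcome depends on the kernel and loss chosen, which is exactly the sensitivity the abstract reports.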

  16. Validation of a coupled core-transport, pedestal-structure, current-profile and equilibrium model

    NASA Astrophysics Data System (ADS)

    Meneghini, O.

    2015-11-01

    The first workflow capable of predicting the self-consistent solution to the coupled core-transport, pedestal-structure, and equilibrium problems from first principles, together with its experimental tests, is presented. Validation with DIII-D discharges in high-confinement regimes shows that the workflow robustly predicts the kinetic profiles from the magnetic axis to the separatrix and matches the experimental measurements to within their uncertainty, with no prior knowledge of the pedestal height or any measurement of the temperature or pressure. Self-consistent coupling has proven essential for matching the experimental results and capturing the non-linear physics that governs the core and pedestal solutions. In particular, stabilization of the pedestal peeling-ballooning instabilities by the global Shafranov shift, destabilization by additional edge bootstrap current, and the subsequent effect on the core plasma profiles have been clearly observed and documented. In our model, self-consistency is achieved by iterating between the TGYRO core transport solver (with NEO and TGLF for the neoclassical and turbulent fluxes) and the pedestal structure predicted by the EPED model. A self-consistent equilibrium is calculated by EFIT, while the ONETWO transport package evolves the current profile and calculates the particle and energy sources. The capabilities of such a workflow are shown to be critical for the design of future experiments such as ITER and FNSF, which operate in a regime where the equilibrium, pedestal, and core-transport problems are strongly coupled and for which none of these quantities can be assumed to be known. Self-consistent core-pedestal predictions for ITER, as well as initial optimizations, will be presented. Supported by the US Department of Energy under DE-FC02-04ER54698, DE-SC0012652.

  17. Robust human body model injury prediction in simulated side impact crashes.

    PubMed

    Golman, Adam J; Danelson, Kerry A; Stitzel, Joel D

    2016-01-01

    This study developed a parametric methodology to robustly predict occupant injuries sustained in real-world crashes using a finite element (FE) human body model (HBM). One hundred and twenty near-side impact motor vehicle crashes were simulated over a range of parameters using Toyota RAV4 (bullet vehicle) and Ford Taurus (struck vehicle) FE models and a validated HBM, the Total HUman Model for Safety (THUMS). Three bullet-vehicle crash parameters (speed, location and angle) and two occupant parameters (seat position and age) were varied using a Latin hypercube design of experiments. Four injury metrics (head injury criterion, half deflection, thoracic trauma index and pelvic force) were used to calculate injury risk. Rib fracture prediction and lung strain metrics were also analysed. As hypothesized, bullet speed had the greatest effect on each injury measure. Injury risk was reduced when the bullet location was further from the B-pillar or when the bullet angle was more oblique. Age was strongly correlated with rib fracture frequency and lung strain severity. The injuries from a real-world crash were predicted using two different methods: (1) subsampling the injury predictors from the 12 simulations that best matched the crush profile and (2) using regression models. Both injury prediction methods successfully predicted the case occupant's low risk for pelvic injury and high risk for thoracic injury, rib fractures and high lung strains, with tight confidence intervals. This parametric methodology was successfully used to explore crash parameter interactions and to robustly predict real-world injuries.
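
    A Latin hypercube design over the five varied parameters can be sketched with SciPy as follows; the parameter bounds below are hypothetical stand-ins, not the study's actual ranges:

```python
# Latin hypercube sample of 120 crash configurations over five parameters.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=5, seed=1)
unit = sampler.random(n=120)   # 120 points in the unit hypercube, one per simulation

# Hypothetical lower/upper bounds for: bullet speed (km/h), impact location
# (mm from B-pillar), impact angle (deg), seat position (mm), occupant age (yr).
lower = [20.0, -300.0, 45.0, -50.0, 20.0]
upper = [60.0, 300.0, 135.0, 50.0, 80.0]
design = qmc.scale(unit, lower, upper)

print(design.shape)          # (120, 5)
print(design[:3].round(1))   # first three sampled crash configurations
```

    Latin hypercube sampling stratifies each parameter individually, so even 120 runs cover each one-dimensional range evenly.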

  18. Prediction of progressive damage and strength of plain weave composites using the finite element method

    NASA Astrophysics Data System (ADS)

    Srirengan, Kanthikannan

    The overall objective of this research was to develop the finite element code required to efficiently predict the strength of plain weave composite structures. To that end, a three-dimensional conventional progressive damage analysis was implemented to predict the strength of plain weave composites subjected to periodic boundary conditions, and a modal technique for three-dimensional global/local stress analysis was developed to predict failure initiation in plain weave composite structures. The progressive damage analysis was used to study the effect of quadrature order, mesh refinement and degradation models on the predicted damage and strength of plain weave composites subjected to uniaxial tension in the warp tow direction. A 1/32nd part of the representative volume element of a symmetrically stacked configuration was analyzed. The tow geometry was assumed to be sinusoidal, and a graphite/epoxy material system was used. Maximum stress criteria and combined stress criteria were used to predict failure in the tows, and a maximum principal stress criterion was used to predict failure in the matrix. Degradation models based on logical reasoning, micromechanics idealization and experimental comparisons were used to calculate the effective material properties in the presence of damage. A modified Newton-Raphson method was used to determine the incremental solution for each applied strain level. Using a refined mesh and the discount method based on experimental comparisons, the progressive damage and strength of plain weave composites with waviness ratios of 1/3 and 1/6 subjected to uniaxial tension in the warp direction have been characterized. Plain weave composites exhibit a brittle response in uniaxial tension, and their strength decreases significantly with increasing waviness ratio. Damage initiation and collapse were dominated by intra-tow cracking and inter-tow debonding, respectively. The predicted strength of plain weave composites with a racetrack tow geometry and a waviness ratio of 1/25.7 was compared with analytical predictions and experimental findings and was found to match well. To evaluate the performance of the modal technique, failure initiation in a short woven composite cantilevered plate subjected to an end moment and a transverse end load was predicted. The global/local predictions matched the conventional finite element predictions reasonably well.

  19. The effects of layers in dry snow on its passive microwave emissions using dense media radiative transfer theory based on the quasicrystalline approximation (QCA/DMRT)

    USGS Publications Warehouse

    Liang, D.; Xu, X.; Tsang, L.; Andreadis, K.M.; Josberger, E.G.

    2008-01-01

    A model for the microwave emissions of multilayer dry snowpacks, based on dense media radiative transfer (DMRT) theory with the quasicrystalline approximation (QCA), provides more accurate results than emissions determined by a homogeneous snowpack and other scattering models. The DMRT model accounts for adhesive aggregate effects, which lead to dense media Mie scattering, by using a sticky-particle model. With the multilayer model, we examined both the frequency and polarization dependence of brightness temperatures (Tb's) from representative snowpacks and compared them to results from a single-layer model; the multilayer model predicts polarization differences up to twice as large and a weaker frequency dependence. We also studied the temporal evolution of Tb from multilayer snowpacks. The difference between Tb's at 18.7 and 36.5 GHz can be 5 K lower than the single-layer model prediction in this paper. Using snowpack observations from the Cold Land Processes Field Experiment as input for both the multilayer and single-layer models, we show that the multilayer Tb's are in better agreement with the data than the single-layer model. With one set of physical parameters, the multilayer QCA/DMRT model matched all four channels of Tb observations simultaneously, whereas the single-layer model could only reproduce vertically polarized Tb's. Also, the polarization difference and frequency dependence were accurately matched by the multilayer model using the same set of physical parameters. Hence, algorithms for the retrieval of snowpack depth or water equivalent should be based on multilayer scattering models to achieve greater accuracy. © 2008 IEEE.

  20. Differentiation of low-attenuation intracranial hemorrhage and calcification using dual-energy computed tomography in a phantom system

    PubMed Central

    Nute, Jessica L.; Roux, Lucia Le; Chandler, Adam G.; Baladandayuthapani, Veera; Schellingerhout, Dawid; Cody, Dianna D.

    2015-01-01

    Objectives: Calcific and hemorrhagic intracranial lesions with attenuation levels of <100 Hounsfield units (HU) cannot currently be reliably differentiated by single-energy computed tomography (SECT). The proper differentiation of these lesion types would have a multitude of clinical applications. A phantom model was used to test the ability of dual-energy CT (DECT) to differentiate such lesions. Materials and Methods: Agar gel-bound ferric oxide and hydroxyapatite were used to model hemorrhage and calcification, respectively. Gel models were scanned using SECT and DECT and organized into SECT attenuation-matched pairs at 16 attenuation levels between 0 and 100 HU. DECT data were analyzed using 3D Gaussian mixture models (GMMs), as well as a simplified threshold-plane metric derived from the 3D GMM, to assign voxels to hemorrhagic or calcific categories. Accuracy was calculated by comparing predicted voxel assignments with actual voxel identities. Results: We measured 6,032 voxels from each gel model, for a total of 193,024 data points (16 matched model pairs). Both the 3D GMM and its more clinically implementable threshold-plane derivative yielded similar results, with >90% accuracy at matched SECT attenuation levels ≥50 HU. Conclusions: Hemorrhagic and calcific lesions with attenuation levels between 50 and 100 HU were differentiable using DECT in a clinically relevant phantom system with >90% accuracy. This method warrants further testing for potential clinical applications. PMID:25162534
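
    The mixture-model voxel classification can be illustrated on synthetic two-energy attenuation data; the cluster means and spreads below are invented for the sketch and do not come from the study:

```python
# Fit a two-component Gaussian mixture to (low-energy HU, high-energy HU) voxel
# pairs and assign each voxel to a "hemorrhage" or "calcification" cluster.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Invented attenuation behaviour: calcification gains more HU at low energy
# relative to high energy than iron-based hemorrhage does.
hemorrhage = rng.normal(loc=[70.0, 60.0], scale=5.0, size=(500, 2))
calcification = rng.normal(loc=[90.0, 55.0], scale=5.0, size=(500, 2))
voxels = np.vstack([hemorrhage, calcification])
labels = np.array([0] * 500 + [1] * 500)

gmm = GaussianMixture(n_components=2, random_state=0).fit(voxels)
pred = gmm.predict(voxels)

# Mixture component indices are arbitrary, so score against both orderings.
acc = max(np.mean(pred == labels), np.mean(pred == 1 - labels))
print(f"classification accuracy: {acc:.2%}")
```

    A threshold plane, as in the study's simplified metric, corresponds to the decision boundary between the two fitted components.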

  1. Predicting the outbreak of hand, foot, and mouth disease in Nanjing, China: a time-series model based on weather variability

    NASA Astrophysics Data System (ADS)

    Liu, Sijun; Chen, Jiaping; Wang, Jianming; Wu, Zhuchao; Wu, Weihua; Xu, Zhiwei; Hu, Wenbiao; Xu, Fei; Tong, Shilu; Shen, Hongbing

    2017-10-01

    Hand, foot, and mouth disease (HFMD) is a significant public health issue in China, and an accurate prediction of epidemics can improve the effectiveness of HFMD control. This study aims to develop a weather-based forecasting model for HFMD using information on climatic variables and HFMD surveillance in Nanjing, China. Daily data on HFMD cases and meteorological variables between 2010 and 2015 were acquired from the Nanjing Center for Disease Control and Prevention and the China Meteorological Data Sharing Service System, respectively. A multivariate seasonal autoregressive integrated moving average (SARIMA) model was developed and validated by dividing the HFMD infection data into two datasets: the data from 2010 to 2013 were used to construct the model and those from 2014 to 2015 were used to validate it. Moreover, weekly predictions were made for the data between 1 January 2014 and 31 December 2015, and leave-one-week-out prediction was used to validate the performance of the model. SARIMA(2,0,0)52 with the average temperature at a lag of 1 week appeared to be the best model (R² = 0.936, BIC = 8.465), and showed non-significant autocorrelations in the residuals. In the validation of the constructed model, the predicted values matched the observed values reasonably well between 2014 and 2015, with a high agreement rate between the predicted and observed values (sensitivity 80%, specificity 96.63%). This study suggests that the SARIMA model with average temperature could be used as an important tool for early detection and prediction of HFMD outbreaks in Nanjing, China.

  2. Health-related quality-of-life in low-income, uninsured men with prostate cancer.

    PubMed

    Krupski, Tracey L; Fink, Arlene; Kwan, Lorna; Maliski, Sally; Connor, Sarah E; Clerkin, Barbara; Litwin, Mark S

    2005-05-01

    The objective was to describe health-related quality-of-life (HRQOL) in low-income men with prostate cancer. Subjects were drawn from a statewide public assistance prostate cancer program. Telephone and mail surveys included the RAND 12-item Health Survey and the UCLA Prostate Cancer Index Short Form and were compared with normative age-matched men without cancer from the general population reported on in the literature. Of 286 eligible men, 233 (81%) agreed to participate and completed the necessary items. The sample consisted of 51% Hispanics, 23% non-Hispanic whites, and 17% African Americans. The low-income men had worse scores in every domain of prostate-specific and general HRQOL than the age-matched general population controls. The degree of disparity indicated substantial clinical differences in almost every domain of physical and emotional functioning between the sample group and the control group. Linear regression modeling determined that among the low-income men, Hispanic race and income level were predictive of worse physical functioning, whereas only comorbidities predicted mental health. Low-income patients with prostate cancer appear to have quality-of-life profiles that are meaningfully worse than those of age-matched men without cancer from the general population reported on in the literature.

  3. A high-resolution integrated model of the National Ignition Campaign cryogenic layered experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, O. S.; Cerjan, C. J.; Marinak, M. M.

    A detailed simulation-based model of the June 2011 National Ignition Campaign cryogenic DT experiments is presented. The model is based on integrated hohlraum-capsule simulations that utilize the best available models for the hohlraum wall, ablator, and DT equations of state and opacities. The calculated radiation drive was adjusted by changing the input laser power to match the experimentally measured shock speeds, shock merger times, peak implosion velocity, and bangtime. The crossbeam energy transfer model was tuned to match the measured time-dependent symmetry. Mid-mode mix was included by directly modeling the ablator and ice surface perturbations up to mode 60. Simulated experimental values were extracted from the simulation and compared against the experiment. Although by design the model is able to reproduce the 1D in-flight implosion parameters and low-mode asymmetries, it is not able to accurately predict the measured and inferred stagnation properties and levels of mix. In particular, the measured yields were 15%-40% of the calculated yields, and the inferred stagnation pressure is about 3 times lower than simulated.

  4. Radiative decay engineering 5: metal-enhanced fluorescence and plasmon emission

    PubMed Central

    Lakowicz, Joseph R.

    2009-01-01

    Metallic particles and surfaces display diverse and complex optical properties. Examples include the intense colors of noble metal colloids, surface plasmon resonance absorption by thin metal films, and quenching of excited fluorophores near the metal surfaces. Recently, the interactions of fluorophores with metallic particles and surfaces (metals) have been used to obtain increased fluorescence intensities, to develop assays based on fluorescence quenching by gold colloids, and to obtain directional radiation from fluorophores near thin metal films. For metal-enhanced fluorescence it is difficult to predict whether a particular metal structure, such as a colloid, fractal, or continuous surface, will quench or enhance fluorescence. In the present report we suggest how the effects of metals on fluorescence can be explained using a simple concept, based on radiating plasmons (RPs). The underlying physics may be complex but the concept is simple to understand. According to the RP model, the emission or quenching of a fluorophore near the metal can be predicted from the optical properties of the metal structures as calculated from electrodynamics, Mie theory, and/or Maxwell’s equations. For example, according to Mie theory and the size and shape of the particle, the extinction of metal colloids can be due to either absorption or scattering. Incident energy is dissipated by absorption. Far-field radiation is created by scattering. Based on our model small colloids are expected to quench fluorescence because absorption is dominant over scattering. Larger colloids are expected to enhance fluorescence because the scattering component is dominant over absorption. The ability of a metal’s surface to absorb or reflect light is due to wavenumber matching requirements at the metal–sample interface. Wavenumber matching considerations can also be used to predict whether fluorophores at a given distance from a continuous planar surface will be emitted or quenched. 
These considerations suggest that the so-called “lossy surface waves” which quench fluorescence are due to induced electron oscillations that cannot radiate to the far field because wavevector matching is not possible. We suggest that the energy from the fluorophores thought to be lost to lossy surface waves can be recovered as emission by adjusting the sample to allow wavevector matching. The RP model provides a rational approach for designing fluorophore–metal configurations with the desired emissive properties and a basis for nanophotonic fluorophore technology. PMID:15691498

  5. Development and validation of a computational model of the knee joint for the evaluation of surgical treatments for osteoarthritis

    PubMed Central

    Mootanah, R.; Imhauser, C.W.; Reisse, F.; Carpanen, D.; Walker, R.W.; Koff, M.F.; Lenhoff, M.W.; Rozbruch, S.R.; Fragomen, A.T.; Dewan, Z.; Kirane, Y.M.; Cheah, Pamela A.; Dowell, J.K.; Hillstrom, H.J.

    2014-01-01

    A three-dimensional (3D) knee joint computational model was developed and validated to predict knee joint contact forces and pressures for different degrees of malalignment. A 3D computational knee model was created from high-resolution radiological images to emulate passive sagittal rotation (full-extension to 65°-flexion) and weight acceptance. A cadaveric knee mounted on a six-degree-of-freedom robot was subjected to matching boundary and loading conditions. A ligament-tuning process minimised kinematic differences between the robotically loaded cadaver specimen and the finite element (FE) model. The model was validated by measured intra-articular force and pressure measurements. Percent full scale error between FE-predicted and in vitro-measured values in the medial and lateral compartments were 6.67% and 5.94%, respectively, for normalised peak pressure values, and 7.56% and 4.48%, respectively, for normalised force values. The knee model can accurately predict normalised intra-articular pressure and forces for different loading conditions and could be further developed for subject-specific surgical planning. PMID:24786914

  6. Development and validation of a computational model of the knee joint for the evaluation of surgical treatments for osteoarthritis.

    PubMed

    Mootanah, R; Imhauser, C W; Reisse, F; Carpanen, D; Walker, R W; Koff, M F; Lenhoff, M W; Rozbruch, S R; Fragomen, A T; Dewan, Z; Kirane, Y M; Cheah, K; Dowell, J K; Hillstrom, H J

    2014-01-01

    A three-dimensional (3D) knee joint computational model was developed and validated to predict knee joint contact forces and pressures for different degrees of malalignment. A 3D computational knee model was created from high-resolution radiological images to emulate passive sagittal rotation (full-extension to 65°-flexion) and weight acceptance. A cadaveric knee mounted on a six-degree-of-freedom robot was subjected to matching boundary and loading conditions. A ligament-tuning process minimised kinematic differences between the robotically loaded cadaver specimen and the finite element (FE) model. The model was validated by measured intra-articular force and pressure measurements. Percent full scale error between FE-predicted and in vitro-measured values in the medial and lateral compartments were 6.67% and 5.94%, respectively, for normalised peak pressure values, and 7.56% and 4.48%, respectively, for normalised force values. The knee model can accurately predict normalised intra-articular pressure and forces for different loading conditions and could be further developed for subject-specific surgical planning.

  7. Coarse-Graining Polymer Field Theory for Fast and Accurate Simulations of Directed Self-Assembly

    NASA Astrophysics Data System (ADS)

    Liu, Jimmy; Delaney, Kris; Fredrickson, Glenn

    To design effective manufacturing processes using polymer directed self-assembly (DSA), the semiconductor industry benefits greatly from having a complete picture of stable and defective polymer configurations. Field-theoretic simulations are an effective way to study these configurations and predict defect populations. Self-consistent field theory (SCFT) is a particularly successful theory for studies of DSA. Although other models exist that are faster to simulate, these models are phenomenological or derived through asymptotic approximations, often leading to a loss of accuracy relative to SCFT. In this study, we employ our recently-developed method to produce an accurate coarse-grained field theory for diblock copolymers. The method uses a force- and stress-matching strategy to map output from SCFT simulations into parameters for an optimized phase field model. This optimized phase field model is just as fast as existing phenomenological phase field models, but makes more accurate predictions of polymer self-assembly, both in bulk and in confined systems. We study the performance of this model under various conditions, including its predictions of domain spacing, morphology and defect formation energies. Samsung Electronics.

  8. Vehicle Surveillance with a Generic, Adaptive, 3D Vehicle Model.

    PubMed

    Leotta, Matthew J; Mundy, Joseph L

    2011-07-01

    In automated surveillance, one is often interested in tracking road vehicles, measuring their shape in 3D world space, and determining vehicle classification. To address these tasks simultaneously, an effective approach is the constrained alignment of a prior model of 3D vehicle shape to images. Previous 3D vehicle models are either generic but overly simple or rigid and overly complex. Rigid models represent exactly one vehicle design, so a large collection is needed. A single generic model can deform to a wide variety of shapes, but those shapes have been far too primitive. This paper uses a generic 3D vehicle model that deforms to match a wide variety of passenger vehicles. It is adjustable in complexity between the two extremes. The model is aligned to images by predicting and matching image intensity edges. Novel algorithms are presented for fitting models to multiple still images and simultaneous tracking while estimating shape in video. Experiments compare the proposed model to simple generic models in accuracy and reliability of 3D shape recovery from images and tracking in video. Standard techniques for classification are also used to compare the models. The proposed model outperforms the existing simple models at each task.

  9. Compositions and their application to the analysis of choice.

    PubMed

    Jensen, Greg

    2014-07-01

    Descriptions of steady-state patterns of choice allocation under concurrent schedules of reinforcement have long relied on the "generalized matching law" (Baum, 1974), a log-odds power function. Although a powerful model in some contexts, a series of conflicting empirical results have cast its generality in doubt. The scope and analytic power of matching models can be greatly expanded by considering them in terms of compositions (Aitchison, 1986). A composition encodes a set of ratios (e.g., 5:3:2) as a vector with a constant sum, and this constraint (called closure) restricts the data to a nonstandard sample space. By exploiting this sample space, unbiased estimates of model parameters can be obtained to predict behavior given any number of choice alternatives. Additionally, the compositional analysis of choice provides tools that can accommodate both violations of scale invariance and unequal discriminability of the stimuli signaling schedules of reinforcement. To demonstrate how choice data can be analyzed using the compositional approach, data from three previously published studies are reanalyzed. Finally, new data are reported comparing matching behavior given four, six, and eight response alternatives. © Society for the Experimental Analysis of Behavior.
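
    The closure constraint and a log-ratio transform at the heart of the compositional approach can be sketched as follows (a generic illustration of Aitchison's operations, not the paper's analysis code):

```python
# Closure maps positive ratios onto the simplex; the additive log-ratio (alr)
# transform maps a composition back to an unconstrained space for modeling.
import numpy as np

def closure(x):
    """Rescale a vector of positive parts to sum to 1 (a composition)."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def alr(c):
    """Additive log-ratio transform: log of each part over the last part."""
    return np.log(c[:-1] / c[-1])

# Responses allocated to three alternatives in the ratio 5:3:2.
comp = closure([5, 3, 2])
print(comp)        # [0.5 0.3 0.2]
print(alr(comp))   # unconstrained coordinates: log(0.5/0.2), log(0.3/0.2)
```

    With two alternatives, the single alr coordinate reduces to the familiar log-ratio of the generalized matching law.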

  10. Draft-camp predictors of subsequent career success in the Australian Football League.

    PubMed

    Burgess, Darren; Naughton, Geraldine; Hopkins, Will

    2012-11-01

    The National Draft Camp results are generally considered important for informing talent scouts about the physical performance capacities of talented young Australian Rules Football (AFL) players. The purpose of this project was to determine the magnitude of associations between five-year career success in the AFL and physical draft camp tests, final draft selection order, and previous match physical performance. Physical testing data of 99 players from the National Under 18 (U18) competition were retrospectively analysed across the 2002 and 2003 National Draft Camps. Physical match data were collected on these players and links with subsequent early career success (AFL games played) were explored. TrakPerformance Software was used to quantify the movement of 92 players during competitive games of the National U18 Championships. Linear modelling using results from draft camp data involving 95 U18 players, along with final draft selection order, was used to predict five-year career success in the senior AFL. Multiple U18 match variables demonstrated large associations (sprints/min = 43% more games, % sprint = 43% more games) with five-year career success in the AFL. Final draft order and single-variable predictors had moderate associations with career success. Neither U18 matches nor draft camp testing was predictive of injuries incurred over the five years. Variability in senior AFL career success had a large association with a combination of match physical variables and draft test results. The objective data available should be considered when selecting prospective players. Copyright © 2012 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  11. Inhibitory mechanism of the matching heuristic in syllogistic reasoning.

    PubMed

    Tse, Ping Ping; Moreno Ríos, Sergio; García-Madruga, Juan Antonio; Bajo Molina, María Teresa

    2014-11-01

    A number of heuristic-based hypotheses have been proposed to explain how people solve syllogisms with automatic processes. In particular, the matching heuristic employs the congruency of the quantifiers in a syllogism—by matching the quantifier of the conclusion with those of the two premises. When the heuristic leads to an invalid conclusion, successful solving of these conflict problems requires the inhibition of automatic heuristic processing. Accordingly, if the automatic processing were based on processing the set of quantifiers, no semantic contents would be inhibited. The mental model theory, however, suggests that people reason using mental models, which always involves semantic processing. Therefore, whatever inhibition occurs in the processing implies the inhibition of the semantic contents. We manipulated the validity of the syllogism and the congruency of the quantifier of its conclusion with those of the two premises according to the matching heuristic. A subsequent lexical decision task (LDT) with related words in the conclusion was used to test any inhibition of the semantic contents after each syllogistic evaluation trial. In the LDT, the facilitation effect of semantic priming diminished after correctly solved conflict syllogisms (match-invalid or mismatch-valid), but was intact after no-conflict syllogisms. The results suggest the involvement of an inhibitory mechanism of semantic contents in syllogistic reasoning when there is a conflict between the output of the syntactic heuristic and actual validity. Our results do not support a uniquely syntactic process of syllogistic reasoning but fit with the predictions based on mental model theory. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. A trade-off between model resolution and variance with selected Rayleigh-wave data

    USGS Publications Warehouse

    Xia, J.; Miller, R.D.; Xu, Y.

    2008-01-01

    Inversion of multimode surface-wave data is of increasing interest in the near-surface geophysics community. For a given near-surface geophysical problem, it is essential to understand how well the data, calculated according to a layered-earth model, might match the observed data. A data-resolution matrix is a function of the data kernel (determined by a geophysical model and a priori information applied to the problem), not the data. A data-resolution matrix of high-frequency (≥2 Hz) Rayleigh-wave phase velocities, therefore, offers a quantitative tool for designing field surveys and predicting the match between calculated and observed data. First, we employed a data-resolution matrix to select data that would be well predicted and to explain the advantages of incorporating higher modes in inversion. The resulting discussion using the data-resolution matrix provides insight into the process of inverting Rayleigh-wave phase velocities with higher-mode data to estimate S-wave velocity structure. The discussion also suggested that each near-surface geophysical target can only be resolved using Rayleigh-wave phase velocities within specific frequency ranges, and that higher-mode data are normally more accurately predicted than fundamental-mode data because of restrictions on the data kernel for the inversion system. Second, we obtained an optimal damping vector in the vicinity of an inverted model by singular value decomposition of a trade-off function of model resolution and variance. At the end of the paper, we use a real-world example to demonstrate that data selected with the data-resolution matrix can provide better inversion results and to explain, with the data-resolution matrix, why incorporating higher-mode data in inversion can provide better results. We also calculated the model-resolution matrices of these examples to show the potential of increasing model resolution with selected surface-wave data. With the optimal damping vector, we can improve and assess an inverted model obtained by a damped least-squares method.
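
    For a linearized inversion d = Gm, the data- and model-resolution matrices can be computed from the singular value decomposition; the sketch below uses a random stand-in data kernel, not the Rayleigh-wave kernel of the paper:

```python
# Data-resolution matrix N = G G+ (G+ = generalized inverse from the SVD):
# predicted data are N d_obs, and a row of N close to a delta function marks
# a datum that the inversion system will predict well.
import numpy as np

rng = np.random.default_rng(3)
G = rng.normal(size=(8, 4))   # hypothetical data kernel: 8 data, 4 model parameters

U, s, Vt = np.linalg.svd(G, full_matrices=False)
G_pinv = Vt.T @ np.diag(1.0 / s) @ U.T   # generalized (pseudo) inverse
N = G @ G_pinv                            # data-resolution matrix (8 x 8)
R = G_pinv @ G                            # model-resolution matrix (4 x 4)

print(round(float(np.trace(N))))  # rank of G: number of independently resolved data
print(np.allclose(R, np.eye(4)))  # True here: all 4 model parameters resolved
```

    Damping the small singular values, as in the paper's trade-off analysis, lowers model resolution (R departs from the identity) in exchange for reduced model variance.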

  13. Optimal flight initiation distance.

    PubMed

    Cooper, William E; Frederick, William G

    2007-01-07

    Decisions regarding flight initiation distance have received scant theoretical attention. A graphical model by Ydenberg and Dill (1986. The economics of fleeing from predators. Adv. Stud. Behav. 16, 229-249) that has guided research for the past 20 years specifies when escape begins. In the model, a prey detects a predator, monitors its approach until the costs of escape and of remaining are equal, and then flees. The distance between predator and prey when escape is initiated (approach distance = flight initiation distance) occurs where the decreasing cost of remaining and the increasing cost of fleeing intersect. We argue that prey fleeing as predicted cannot maximize fitness because the best the prey can do is break even during an encounter. We develop two optimality models: one applying when all expected future contribution to fitness (residual reproductive value, RRV) is lost if the prey dies, the other when any fitness gained (increase in expected RRV) during the encounter is retained after death. Both models predict the optimal flight initiation distance from initial expected fitness, benefits obtainable during encounters, costs of escaping, and probability of being killed. Predictions match the extensively verified predictions of Ydenberg and Dill's (1986) model. Our main conclusion is that optimality models are preferable to break-even models because they permit fitness maximization, offer many new testable predictions, and allow assessment of prey decisions in many naturally occurring situations through modification of the benefit, escape cost, and risk functions.
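
    A toy numerical version of this optimization, with invented benefit and survival functions (not the authors' functional forms), shows how an interior optimum arises when expected fitness is maximized rather than costs merely equated:

```python
# Maximize expected fitness = (initial fitness + benefit gained) * P(survival)
# over the flight initiation distance d. All functional forms are hypothetical.
import numpy as np
from scipy.optimize import minimize_scalar

F0 = 1.0                                    # residual reproductive value
benefit = lambda d: 0.5 * np.exp(-d / 5)    # foraging gain shrinks if fleeing early
survival = lambda d: 1 - np.exp(-d / 3)     # fleeing later is riskier

fitness = lambda d: (F0 + benefit(d)) * survival(d)
res = minimize_scalar(lambda d: -fitness(d), bounds=(0.0, 30.0), method="bounded")
print(f"optimal flight initiation distance: {res.x:.2f}")
```

    Fleeing very early forfeits the benefit; fleeing very late forfeits survival; the optimum balances the two rather than breaking even.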

  14. Wavelet modeling and prediction of the stability of states: the Roman Empire and the European Union

    NASA Astrophysics Data System (ADS)

    Yaroshenko, Tatyana Y.; Krysko, Dmitri V.; Dobriyan, Vitalii; Zhigalov, Maksim V.; Vos, Hendrik; Vandenabeele, Peter; Krysko, Vadim A.

    2015-09-01

    How can the stability of a state be quantitatively determined and its future stability predicted? The rise and collapse of empires and states is very complex, and it is exceedingly difficult to understand and predict. Existing theories are usually formulated as verbal models and, consequently, do not yield sharply defined, quantitative predictions that can be unambiguously validated with data. Here we describe a model that determines whether a state is in a stable or chaotic condition and predicts its future condition. The central hypothesis, which we test, is that the growth and collapse of states are reflected in changes of their territories, populations, and budgets. The model was applied to the historical societies of the Roman Empire (400 BC to 400 AD) and the European Union (1957-2007) by using wavelets and analysis of the sign change of the spectrum of Lyapunov exponents. The model matches the historical events well. During wars and crises, the state becomes unstable; this is reflected in the wavelet analysis by a significant increase in the frequency ω(t) and wavelet coefficients W(ω, t), and the sign of the largest Lyapunov exponent becomes positive, indicating chaos. We successfully reconstructed and forecasted time series in the Roman Empire and the European Union by applying an artificial neural network. The proposed model helps to quantitatively determine and forecast the stability of a state.
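    The chaos criterion used above (the sign of the largest Lyapunov exponent) can be illustrated on a toy system. The sketch below uses the classic logistic map rather than the paper's historical territory/population/budget series; the parameter values are standard textbook choices.

```python
# Sign of the largest Lyapunov exponent as a stability/chaos indicator,
# illustrated on the logistic map x -> r*x*(1-x) (a toy system, not the
# historical data analyzed in the paper).
import math

def largest_lyapunov(r, x0=0.3, n_transient=500, n_iter=5000):
    x = x0
    for _ in range(n_transient):       # discard transient behaviour
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
        d = abs(r * (1.0 - 2.0 * x))   # |f'(x)| for the logistic map
        acc += math.log(max(d, 1e-12)) # guard against x landing on 0.5
    return acc / n_iter

print(largest_lyapunov(2.5))   # stable fixed point -> negative exponent
print(largest_lyapunov(4.0))   # fully chaotic regime -> positive exponent
```

    A negative exponent signals a stable (non-chaotic) regime; a positive one signals sensitive dependence on initial conditions, the signature of chaos the paper reads from the historical series.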

  15. Digital-computer model of ground-water flow in Tooele Valley, Utah

    USGS Publications Warehouse

    Razem, Allan C.; Bartholoma, Scott D.

    1980-01-01

    A two-dimensional, finite-difference digital-computer model was used to simulate the ground-water flow in the principal artesian aquifer in Tooele Valley, Utah. The parameters used in the model were obtained through field measurements and tests, from historical records, and by trial-and-error adjustments. The model was calibrated against observed water-level changes that occurred during 1941-50, 1951-60, 1961-66, 1967-73, and 1974-78. The reliability of the predictions is good in most parts of the valley, as is shown by the ability of the model to match historical water-level changes.

  16. Assessing waveform predictions of recent three-dimensional velocity models of the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Bao, Xueyang; Shen, Yang

    2016-04-01

    Accurate velocity models are essential for both the determination of earthquake locations and source moments and the interpretation of Earth structures. With the increasing number of three-dimensional velocity models, it has become necessary to assess the models for accuracy in predicting seismic observations. Six models of the crustal and uppermost mantle structures in Tibet and surrounding regions are investigated in this study. Regional Rayleigh and Pn (or Pnl) waveforms from two ground truth events, including one nuclear explosion and one natural earthquake located in the study area, are simulated by using a three-dimensional finite-difference method. Synthetics are compared to observed waveforms in multiple period bands of 20-75 s for Rayleigh waves and 1-20 s for Pn/Pnl waves. The models are evaluated based on the phase delays and cross-correlation coefficients between synthetic and observed waveforms. A model generated from full-wave ambient noise tomography best predicts Rayleigh waves throughout the data set, as well as Pn/Pnl waves traveling from the Tarim Basin to the stations located in central Tibet. In general, the models constructed from P wave tomography are not well suited to predict Rayleigh waves, and vice versa. Possible causes of the differences between observed and synthetic waveforms, and frequency-dependent variations of the "best matching" models with the smallest prediction errors are discussed. This study suggests that simultaneous prediction for body and surface waves requires an integrated velocity model constructed with multiple seismic waveforms and consideration of other important properties, such as anisotropy.

  17. ASTEROSEISMIC ANALYSIS OF THE PRE-MAIN-SEQUENCE STARS IN NGC 2264

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guenther, D. B.; Casey, M. P.; Kallinger, T.

    2009-10-20

    NGC 2264 is a young open cluster lying above the Galactic plane in which six variable stars have previously been identified as possible pre-main-sequence (PMS) pulsators. Their oscillation spectra are relatively sparse with each star having from 2 to 12 unambiguous frequency identifications based on Microvariability and Oscillations of Stars satellite and multi-site ground-based photometry. We describe our efforts to find classical PMS stellar models (i.e., models evolved from the Hayashi track) whose oscillation spectra match the observed frequencies. We find model eigenspectra that match the observed frequencies and are consistent with the stars' locations in the HR diagram for the three faintest of the six stars. Not all the frequencies found in spectra of the three brightest stars can be matched to classical PMS model spectra, possibly because of effects not included in our PMS models such as chemical and angular momentum stratification in the outer layers of the star. All the oscillation spectra contain both radial and nonradial p-modes. We argue that the PMS pulsating stars divide into two groups depending on whether or not they have undergone complete mixing (i.e., have gone through a Hayashi phase). Lower mass stars that do evolve through a Hayashi phase have oscillation spectra predicted by classical PMS models, whereas more massive stars that do not, retain mass infall effects in their surface layers and are not well modeled by classical PMS models.

  18. Creation of an in vitro biomechanical model of the trachea using rapid prototyping.

    PubMed

    Walenga, Ross L; Longest, P Worth; Sundaresan, Gobalakrishnan

    2014-06-03

    Previous in vitro models of the airways are either rigid or, if flexible, have not matched in vivo compliance characteristics. Rapid prototyping provides a quickly evolving approach that can be used to directly produce in vitro airway models using either rigid or flexible polymers. The objective of this study was to use rapid prototyping to directly produce a flexible hollow model that matches the biomechanical compliance of the trachea. The airway model consisted of a previously developed characteristic mouth-throat region, the trachea, and a portion of the main bronchi. Compliance of the tracheal region was known from a previous in vivo imaging study that reported cross-sectional areas over a range of internal pressures. The compliance of the tracheal region was matched to the in vivo data for a specific flexible resin by iteratively selecting the thicknesses and other dimensions of tracheal wall components. Seven iterative models were produced; they exhibited highly non-linear expansion consisting of an initial rapid size increase, a transition region, and a continued slower size increase as pressure was increased. Thickness of the esophageal interface membrane and initial trachea indentation were identified as key parameters, with the final model correctly predicting all phases of expansion to within 5% of the in vivo data. Applications of the current biomechanical model are related to endotracheal intubation and include determination of effective mucus suctioning and evaluation of cuff sealing with respect to gases and secretions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Distributed Learning, Recognition, and Prediction by ART and ARTMAP Neural Networks.

    PubMed

    Carpenter, Gail A.

    1997-11-01

    A class of adaptive resonance theory (ART) models for learning, recognition, and prediction with arbitrarily distributed code representations is introduced. Distributed ART neural networks combine the stable fast learning capabilities of winner-take-all ART systems with the noise tolerance and code compression capabilities of multilayer perceptrons. With a winner-take-all code, the unsupervised model dART reduces to fuzzy ART and the supervised model dARTMAP reduces to fuzzy ARTMAP. With a distributed code, these networks automatically apportion learned changes according to the degree of activation of each coding node, which permits fast as well as slow learning without catastrophic forgetting. Distributed ART models replace the traditional neural network path weight with a dynamic weight equal to the rectified difference between coding node activation and an adaptive threshold. Thresholds increase monotonically during learning according to a principle of atrophy due to disuse. However, monotonic change at the synaptic level manifests itself as bidirectional change at the dynamic level, where the result of adaptation resembles long-term potentiation (LTP) for single-pulse or low frequency test inputs but can resemble long-term depression (LTD) for higher frequency test inputs. This paradoxical behavior is traced to dual computational properties of phasic and tonic coding signal components. A parallel distributed match-reset-search process also helps stabilize memory. Without the match-reset-search system, dART becomes a type of distributed competitive learning network.

  20. Qualitative Event-Based Diagnosis: Case Study on the Second International Diagnostic Competition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil

    2010-01-01

    We describe a diagnosis algorithm entered into the Second International Diagnostic Competition. We focus on the first diagnostic problem of the industrial track of the competition, in which a diagnosis algorithm must detect, isolate, and identify faults in an electrical power distribution testbed and provide corresponding recovery recommendations. The diagnosis algorithm embodies a model-based approach, centered around qualitative event-based fault isolation. Faults produce deviations in measured values from model-predicted values. The sequence of these deviations is matched to those predicted by the model in order to isolate faults. We augment this approach with model-based fault identification, which determines fault parameters and helps to further isolate faults. We describe the diagnosis approach, provide diagnosis results from running the algorithm on the provided example scenarios, and discuss the issues faced and lessons learned in implementing the approach.

  1. Artificial neural network implementation of a near-ideal error prediction controller

    NASA Technical Reports Server (NTRS)

    Mcvey, Eugene S.; Taylor, Lynore Denise

    1992-01-01

    A theory has been developed at the University of Virginia which explains the effects of including an ideal predictor in the forward loop of a linear error-sampled system. It has been shown that the presence of this ideal predictor tends to stabilize the class of systems considered. A prediction controller is merely a system which anticipates a signal or part of a signal before it actually occurs. It is understood that an exact prediction controller is physically unrealizable. However, in systems where the input tends to be repetitive or limited (i.e., not random), near-ideal prediction is possible. In order for the controller to act as a stability compensator, the predictor must be designed in a way that allows it to learn the expected error response of the system. In this way, an unstable system will become stable by including the predicted error in the system transfer function. Previous and current prediction controller developments include pattern recognition and fast-time simulation, which are applicable to the analysis of linear sampled-data systems. The use of pattern recognition techniques, along with a template matching scheme, has been proposed as one realizable type of near-ideal prediction. Since many, if not most, systems are repeatedly subjected to similar inputs, it was proposed that an adaptive mechanism be used to 'learn' the correct predicted error response. Once the system has learned the responses of all the expected inputs, it is necessary only to recognize the type of input with a template matching mechanism and then to use the correct predicted error to drive the system. Suggested here is an alternate approach to the realization of a near-ideal error prediction controller, one designed using Neural Networks. Neural Networks are good at recognizing patterns such as system responses, and the back-propagation architecture makes use of a template matching scheme. In using this type of error prediction, it is assumed that the system error responses are known for a particular input and modeled plant. These responses are used in the error prediction controller. An analysis was done of the general dynamic behavior that results from including a digital error predictor in a control loop, and the results were compared to those obtained with the near-ideal Neural Network error predictor. This analysis was done for second- and third-order systems.
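    As a rough sketch of the template-matching step described above (not the paper's back-propagation network), one can store a known error response for each expected input type and match an observed response prefix to the nearest stored template; all names and numbers here are hypothetical.

```python
# Minimal sketch of the template-matching idea behind near-ideal error
# prediction: store the known error response for each expected input class,
# match an incoming (possibly noisy) response prefix to the nearest template,
# and use its stored predicted error. Hypothetical data for illustration.

def match_template(observed_prefix, templates):
    """Return the name of the stored template closest (squared L2) to the prefix."""
    def dist(name):
        t = templates[name]
        return sum((a - b) ** 2 for a, b in zip(observed_prefix, t))
    return min(templates, key=dist)

# stored error responses for two expected input types (hypothetical values)
templates = {
    "step": [1.0, 0.6, 0.35, 0.2, 0.1],
    "ramp": [0.1, 0.25, 0.45, 0.7, 1.0],
}

# noisy measurement of the first few samples of a step-input error response
observed = [0.95, 0.63, 0.33]
print(match_template(observed, templates))   # -> step
```

    Once the input type is recognized, the corresponding stored error response would be fed back to drive the system, as the abstract describes.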

  2. Predicting clinical symptoms of attention deficit hyperactivity disorder based on temporal patterns between and within intrinsic connectivity networks.

    PubMed

    Wang, Xun-Heng; Jiao, Yun; Li, Lihua

    2017-10-24

    Attention deficit hyperactivity disorder (ADHD) is a common brain disorder with high prevalence in school-age children. Previously developed machine learning-based methods have discriminated patients with ADHD from normal controls by providing label information of the disease for individuals. Inattention and impulsivity are the two most significant clinical symptoms of ADHD. However, predicting clinical symptoms (i.e., inattention and impulsivity) is a challenging task based on neuroimaging data. The goal of this study is twofold: to build predictive models for clinical symptoms of ADHD based on resting-state fMRI and to mine brain networks for predictive patterns of inattention and impulsivity. To achieve this goal, a cohort of 74 boys with ADHD and a cohort of 69 age-matched normal controls were recruited from the ADHD-200 Consortium. Both structural and resting-state fMRI images were obtained for each participant. Temporal patterns between and within intrinsic connectivity networks (ICNs) were applied as raw features in the predictive models. Specifically, sample entropy was taken as an intra-ICN feature, and phase synchronization (PS) was used as an inter-ICN feature. The predictive models were based on the least absolute shrinkage and selection operator (LASSO) algorithm. The performance of the predictive model for inattention is r = 0.79 (p < 10⁻⁸), and the performance of the predictive model for impulsivity is r = 0.48 (p < 10⁻⁸). The ICN-related predictive patterns may provide valuable information for investigating the brain network mechanisms of ADHD. In summary, the predictive models for clinical symptoms could be beneficial for personalizing ADHD medications. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
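    The LASSO algorithm named above can be sketched on synthetic data. The minimal iterative soft-thresholding (ISTA) implementation below only illustrates how an L1 penalty drives uninformative feature weights to zero; the study's real features are fMRI-derived entropy and phase-synchronization measures, and the data here are made up.

```python
# Hedged illustration of LASSO (least absolute shrinkage and selection
# operator) regression via plain ISTA iterations in pure Python.

def soft_threshold(v, t):
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def lasso_ista(X, y, lam, step=0.01, n_iter=2000):
    """Minimize (1/2n)||y - Xw||^2 + lam*||w||_1 by iterative soft-thresholding."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        resid = [sum(X[i][j] * w[j] for j in range(p)) - y[i] for i in range(n)]
        grad = [sum(X[i][j] * resid[i] for i in range(n)) / n for j in range(p)]
        w = [soft_threshold(w[j] - step * grad[j], step * lam) for j in range(p)]
    return w

# synthetic data: y depends only on feature 0; feature 1 is pure noise
X = [[1.0, 0.3], [2.0, -0.5], [3.0, 0.1], [4.0, 0.4], [5.0, -0.2]]
y = [2.1, 3.9, 6.2, 8.0, 9.9]            # roughly y = 2 * x0
w = lasso_ista(X, y, lam=0.5)
print(w)   # weight on the noise feature is driven to (near) zero
```

    The L1 penalty is what makes LASSO attractive for neuroimaging: with many candidate ICN features, it selects a sparse predictive pattern rather than spreading weight over all of them.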

  3. A perturbative approach to the redshift space correlation function: beyond the Standard Model

    NASA Astrophysics Data System (ADS)

    Bose, Benjamin; Koyama, Kazuya

    2017-08-01

    We extend our previous redshift space power spectrum code to the redshift space correlation function. Here we focus on the Gaussian Streaming Model (GSM). Again, the code accommodates a wide range of modified gravity and dark energy models. For the non-linear real space correlation function used in the GSM we use the Fourier transform of the RegPT 1-loop matter power spectrum. We compare predictions of the GSM for a Vainshtein screened and Chameleon screened model as well as GR. These predictions are compared to the Fourier transform of the Taruya, Nishimichi and Saito (TNS) redshift space power spectrum model, which is fit to N-body data. We find very good agreement between the Fourier transform of the TNS model and the GSM predictions, with ≤6% deviations in the first two correlation function multipoles for all models for redshift space separations in 50 Mpc/h ≤ s ≤ 180 Mpc/h. Excellent agreement is found in the differences between the modified gravity and GR multipole predictions for both approaches to the redshift space correlation function, highlighting their matched ability in picking up deviations from GR. We elucidate the timeliness of such non-standard templates at the dawn of stage-IV surveys and discuss necessary preparations and extensions needed for upcoming high quality data.

  4. Does morality have a biological basis? An empirical test of the factors governing moral sentiments relating to incest.

    PubMed

    Lieberman, Debra; Tooby, John; Cosmides, Leda

    2003-04-22

    Kin-recognition systems have been hypothesized to exist in humans, and adaptively to regulate altruism and incest avoidance among close genetic kin. This latter function allows the architecture of the kin recognition system to be mapped by quantitatively matching individual variation in opposition to incest to individual variation in developmental parameters, such as family structure and co-residence patterns. Methodological difficulties that appear when subjects are asked to disclose incestuous inclinations can be circumvented by measuring their opposition to incest in third parties, i.e. morality. This method allows a direct test of Westermarck's original hypothesis that childhood co-residence with an opposite-sex individual predicts the strength of moral sentiments regarding third-party sibling incest. Results support Westermarck's hypothesis and the model of kin recognition that it implies. Co-residence duration objectively predicts genetic relatedness, making it a reliable cue to kinship. Co-residence duration predicts the strength of opposition to incest, even after controlling for relatedness and even when co-residing individuals are genetically unrelated. This undercuts kin-recognition models requiring matching to self (through, for example, major histocompatibility complex or phenotypic markers). Subjects' beliefs about relatedness had no effect after controlling for co-residence, indicating that systems regulating kin-relevant behaviours are non-conscious, and calibrated by co-residence, not belief.

  5. Does morality have a biological basis? An empirical test of the factors governing moral sentiments relating to incest.

    PubMed Central

    Lieberman, Debra; Tooby, John; Cosmides, Leda

    2003-01-01

    Kin-recognition systems have been hypothesized to exist in humans, and adaptively to regulate altruism and incest avoidance among close genetic kin. This latter function allows the architecture of the kin recognition system to be mapped by quantitatively matching individual variation in opposition to incest to individual variation in developmental parameters, such as family structure and co-residence patterns. Methodological difficulties that appear when subjects are asked to disclose incestuous inclinations can be circumvented by measuring their opposition to incest in third parties, i.e. morality. This method allows a direct test of Westermarck's original hypothesis that childhood co-residence with an opposite-sex individual predicts the strength of moral sentiments regarding third-party sibling incest. Results support Westermarck's hypothesis and the model of kin recognition that it implies. Co-residence duration objectively predicts genetic relatedness, making it a reliable cue to kinship. Co-residence duration predicts the strength of opposition to incest, even after controlling for relatedness and even when co-residing individuals are genetically unrelated. This undercuts kin-recognition models requiring matching to self (through, for example, major histocompatibility complex or phenotypic markers). Subjects' beliefs about relatedness had no effect after controlling for co-residence, indicating that systems regulating kin-relevant behaviours are non-conscious, and calibrated by co-residence, not belief. PMID:12737660

  6. Modeling the dynamics of choice.

    PubMed

    Baum, William M; Davison, Michael

    2009-06-01

    A simple linear-operator model both describes and predicts the dynamics of choice that may underlie the matching relation. We measured inter-food choice within components of a schedule that presented seven different pairs of concurrent variable-interval schedules for 12 food deliveries each, with no signals indicating which pair was in force. This measure of local choice was accurately described and predicted as obtained reinforcer sequences shifted it to favor one alternative or the other. The effect of a changeover delay was reflected in one parameter, the asymptote, whereas the effect of a difference in overall rate of food delivery was reflected in the other parameter, the rate of approach to the asymptote. The model takes choice as a primary dependent variable, not derived by comparison between alternatives, an approach that agrees with the molar view of behaviour.
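    A minimal sketch of a linear-operator update of the kind described above, with hypothetical parameter values: after each food delivery, a log-response-ratio measure of choice moves a fixed fraction of the way toward an asymptote set by the reinforced alternative.

```python
# Linear-operator choice dynamics (illustrative sketch; the asymptote and
# rate values below are hypothetical, standing in for the two fitted
# parameters described in the abstract).

def update(choice, reinforced_left, asymptote=1.2, rate=0.35):
    """One linear-operator step on a log response-ratio measure of choice."""
    target = asymptote if reinforced_left else -asymptote
    return choice + rate * (target - choice)

choice = 0.0                                  # start indifferent
sequence = [True, True, True, False, True]    # which side paid off, in order
history = []
for left in sequence:
    choice = update(choice, left)
    history.append(round(choice, 3))
print(history)
```

    Successive reinforcers on one side push choice toward that side's asymptote, and a single reinforcer on the other side pulls it sharply back, which is the local tracking behaviour the model is meant to capture.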

  7. Theoretical model of impact damage in structural ceramics

    NASA Technical Reports Server (NTRS)

    Liaw, B. M.; Kobayashi, A. S.; Emery, A. G.

    1984-01-01

    This paper presents a mechanistically consistent model of impact damage based on elastic failures due to tensile and shear overloading. An elastic axisymmetric finite element model is used to determine the dynamic stresses generated by a single particle impact. Local failures in a finite element are assumed to occur when the primary/secondary principal stresses or the maximum shear stress reach critical tensile or shear stresses, respectively. The succession of failed elements thus models macrocrack growth. Sliding motions of cracks, which closed during unloading, are resisted by friction and the unrecovered deformation represents the 'plastic deformation' reported in the literature. The predicted ring cracks on the contact surface, as well as the cone cracks, median cracks, radial cracks, lateral cracks, and damage-induced porous zones in the interior of hot-pressed silicon nitride plates, matched those observed experimentally. The finite element model also predicted the uplifting of the free surface surrounding the impact site.

  8. A validated computational model for the design of surface textures in full-film lubricated sliding

    NASA Astrophysics Data System (ADS)

    Schuh, Jonathon; Lee, Yong Hoon; Allison, James; Ewoldt, Randy

    2016-11-01

    Our recent experimental work showed that asymmetry is needed for surface textures to decrease friction in full-film lubricated sliding (thrust bearings) with Newtonian fluids; textures reduce the shear load and produce a separating normal force. The sign of the separating normal force is not predicted by previous 1-D theories. Here we model the flow with the Reynolds equation in cylindrical coordinates, numerically implemented with a pseudo-spectral method. The model predictions match experiments, rationalize the sign of the normal force, and allow for design of surface texture geometry. To minimize sliding friction with angled cylindrical textures, an optimal angle of asymmetry β exists. The optimal angle depends on the film thickness but not the sliding velocity within the applicable range of the model. The model has also been used to optimize generalized surface texture topography while satisfying manufacturability constraints.

  9. Conditional dissipation of scalars in homogeneous turbulence: Closure for MMC modelling

    NASA Astrophysics Data System (ADS)

    Wandel, Andrew P.

    2013-08-01

    While any reasonable turbulent combustion model should predict the mean and unconditional variance well, these are generally not sufficient for the accurate modelling of complex phenomena such as extinction/reignition. An additional criterion has recently been introduced: accurate modelling of the dissipation timescales associated with fluctuations of scalars about their conditional mean (conditional dissipation timescales). Analysis of Direct Numerical Simulation (DNS) results for a passive scalar shows that the conditional dissipation timescale is of the order of the integral timescale and smaller than the unconditional dissipation timescale. A model is proposed: the conditional dissipation timescale is proportional to the integral timescale. This model is used in Multiple Mapping Conditioning (MMC) modelling for a passive scalar case and a reactive scalar case, comparing to DNS results for both. The results show that this model improves the accuracy of MMC predictions so as to match the DNS results more closely using a relatively coarse spatial resolution compared to other turbulent combustion models.

  10. Load-adaptive bone remodeling simulations reveal osteoporotic microstructural and mechanical changes in whole human vertebrae.

    PubMed

    Badilatti, Sandro D; Christen, Patrik; Parkinson, Ian; Müller, Ralph

    2016-12-08

    Osteoporosis is a major medical burden and its impact is expected to increase in our aging society. It is associated with low bone density and microstructural deterioration. Treatments are available, but the critical factor is to identify individuals at risk of osteoporotic fractures. Computational simulations investigating not only changes in net bone tissue volume, but also changes in its microstructure, where osteoporotic deterioration occurs, might help to better predict the risk of fractures. In this study, bone remodeling simulations with a mechanical feedback loop were used to predict microstructural changes due to osteoporosis and their impact on bone fragility from 50 to 80 years of age. Starting from homeostatic bone remodeling of a group of seven mixed-sex whole vertebrae, five mechanostat models mimicking different biological alterations associated with osteoporosis were developed, leading to imbalanced bone formation and resorption with a total net loss of bone tissue. A model with reduced bone formation rate and cell sensitivity led to the best match of morphometric indices compared to literature data and was chosen to predict postmenopausal osteoporotic bone loss in the whole group. Thirty years of osteoporotic bone loss were predicted, with changes in morphometric indices in agreement with experimental measurements and major deviations only in trabecular number and trabecular separation. In particular, although optimized to match the morphometric indices alone, the predicted bone loss revealed realistic changes at the organ level and in biomechanical competence. While the osteoporotic bone was able to maintain mechanical stability to a great extent, higher fragility towards error loads was found for the osteoporotic bones. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Correcting pervasive errors in RNA crystallography through enumerative structure prediction.

    PubMed

    Chou, Fang-Chieh; Sripakdeevong, Parin; Dibrov, Sergey M; Hermann, Thomas; Das, Rhiju

    2013-01-01

    Three-dimensional RNA models fitted into crystallographic density maps exhibit pervasive conformational ambiguities, geometric errors and steric clashes. To address these problems, we present enumerative real-space refinement assisted by electron density under Rosetta (ERRASER), coupled to Python-based hierarchical environment for integrated 'xtallography' (PHENIX) diffraction-based refinement. On 24 data sets, ERRASER automatically corrects the majority of MolProbity-assessed errors, improves the average R(free) factor, resolves functionally important discrepancies in noncanonical structure and refines low-resolution models to better match higher-resolution models.

  12. A comparison of finite element and analytic models of acoustic scattering from rough poroelastic interfaces.

    PubMed

    Bonomo, Anthony L; Isakson, Marcia J; Chotiros, Nicholas P

    2015-04-01

    The finite element method is used to model acoustic scattering from rough poroelastic surfaces. Both monostatic and bistatic scattering strengths are calculated and compared with three analytic models: perturbation theory, the Kirchhoff approximation, and the small-slope approximation. It is found that the small-slope approximation is in very close agreement with the finite element results for all cases studied and that perturbation theory and the Kirchhoff approximation can be considered valid in those instances where their predictions match those given by the small-slope approximation.

  13. Longitudinal Aerodynamic Modeling of the Adaptive Compliant Trailing Edge Flaps on a GIII Airplane and Comparisons to Flight Data

    NASA Technical Reports Server (NTRS)

    Smith, Mark S.; Bui, Trong T.; Garcia, Christian A.; Cumming, Stephen B.

    2016-01-01

    A pair of compliant trailing edge flaps was flown on a modified GIII airplane. Prior to flight test, multiple analysis tools of various levels of complexity were used to predict the aerodynamic effects of the flaps. Vortex lattice, full potential flow, and full Navier-Stokes aerodynamic analysis software programs were used for prediction, in addition to another program that used empirical data. After the flight-test series, lift and pitching moment coefficient increments due to the flaps were estimated from flight data and compared to the results of the predictive tools. The predicted lift increments matched flight data well for all predictive tools for small flap deflections. All tools over-predicted lift increments for large flap deflections. The potential flow and Navier-Stokes programs predicted pitching moment coefficient increments better than the other tools.

  14. Analysis of the predictive qualities of betting odds and FIFA World Ranking: evidence from the 2006, 2010 and 2014 Football World Cups.

    PubMed

    Wunderlich, Fabian; Memmert, Daniel

    2016-12-01

    The present study investigates a new framework that enables more detailed model-based predictions to be derived from ranking systems. These predictions were compared to predictions from the bet market, using data from the World Cups of 2006, 2010, and 2014. The results revealed that the FIFA World Ranking has substantially improved its predictive qualities relative to the bet market since its mode of calculation was changed in 2006. While both predictors were useful for obtaining accurate predictions in general, the world ranking was able to outperform the bet market significantly for the World Cup 2014 and when the data from the World Cups 2010 and 2014 were pooled. Our new framework can be extended in future research to more detailed prediction tasks (i.e., predicting the final score of a match or the tournament progress of a team).

  15. Validation of model predictions of pore-scale fluid distributions during two-phase flow

    NASA Astrophysics Data System (ADS)

    Bultreys, Tom; Lin, Qingyang; Gao, Ying; Raeini, Ali Q.; AlRatrout, Ahmed; Bijeljic, Branko; Blunt, Martin J.

    2018-05-01

    Pore-scale two-phase flow modeling is an important technology to study a rock's relative permeability behavior. To investigate if these models are predictive, the calculated pore-scale fluid distributions which determine the relative permeability need to be validated. In this work, we introduce a methodology to quantitatively compare models to experimental fluid distributions in flow experiments visualized with microcomputed tomography. First, we analyzed five repeated drainage-imbibition experiments on a single sample. In these experiments, the exact fluid distributions were not fully repeatable on a pore-by-pore basis, while the global properties of the fluid distribution were. Then two fractional flow experiments were used to validate a quasistatic pore network model. The model correctly predicted the fluid present in more than 75% of pores and throats in drainage and imbibition. To quantify what this means for the relevant global properties of the fluid distribution, we compare the main flow paths and the connectivity across the different pore sizes in the modeled and experimental fluid distributions. These essential topology characteristics matched well for drainage simulations, but not for imbibition. This suggests that the pore-filling rules in the network model we used need to be improved to make reliable predictions of imbibition. The presented analysis illustrates the potential of our methodology to systematically and robustly test two-phase flow models to aid in model development and calibration.

  16. A User-Friendly Model for Spray Drying to Aid Pharmaceutical Product Development

    PubMed Central

    Grasmeijer, Niels; de Waard, Hans; Hinrichs, Wouter L. J.; Frijlink, Henderik W.

    2013-01-01

    The aim of this study was to develop a user-friendly model for spray drying that can aid in the development of a pharmaceutical product, by shifting from a trial-and-error towards a quality-by-design approach. To achieve this, a spray dryer model was developed in commercial and open source spreadsheet software. The output of the model was first fitted to the experimental output of a Büchi B-290 spray dryer and subsequently validated. The predicted outlet temperatures of the spray dryer model matched the experimental values very well over the entire range of spray dryer settings that were tested. Finally, the model was applied to produce glassy sugars, excipients often used in formulations of biopharmaceuticals, by spray drying. For the production of glassy sugars, the model was extended to predict the relative humidity at the outlet, which is not measured in the spray dryer by default. This extended model was then successfully used to predict whether specific settings were suitable for producing glassy trehalose and inulin by spray drying. In conclusion, a spray dryer model was developed that is able to predict the output parameters of the spray drying process. The model can aid the development of spray dried pharmaceutical products by shifting from a trial-and-error towards a quality-by-design approach. PMID:24040240
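
    The core of such a spray dryer model is a steady-state energy balance: the sensible heat given up by the drying air supplies the latent heat needed to evaporate the water in the feed. A simplified sketch with illustrative numbers (not fitted Büchi B-290 parameters):

```python
# Steady-state energy balance sketch for outlet temperature prediction.
# All parameter values are illustrative only.
CP_AIR = 1.01e3   # specific heat capacity of air [J/(kg K)]
H_VAP = 2.26e6    # latent heat of vaporisation of water [J/kg]

def outlet_temperature(t_inlet_c, air_flow_kg_s, feed_water_kg_s):
    """Predicted outlet temperature [degC], assuming complete evaporation
    of the feed water and no heat losses through the dryer wall."""
    heat_needed = feed_water_kg_s * H_VAP               # [W]
    delta_t = heat_needed / (air_flow_kg_s * CP_AIR)    # air cooling [K]
    return t_inlet_c - delta_t

# Example: 150 degC inlet air, 0.01 kg/s air flow, 0.2 g/s water in the feed.
print(f"predicted outlet: {outlet_temperature(150.0, 0.01, 2e-4):.1f} degC")
```

More feed water or less drying air lowers the predicted outlet temperature, which is the qualitative behaviour such a model has to reproduce before being fitted to data.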

  17. Sensitivity Analysis and Accuracy of a CFD-TFM Approach to Bubbling Bed Using Pressure Drop Fluctuations

    PubMed Central

    Tricomi, Leonardo; Melchiori, Tommaso; Chiaramonti, David; Boulet, Micaël; Lavoie, Jean Michel

    2017-01-01

    Based upon two-fluid model (TFM) theory, a CFD model was implemented to investigate a cold multiphase fluidized bubbling bed reactor. The key variable used to characterize the fluid dynamics of the experimental system, and to compare it to model predictions, was the time-resolved pressure drop induced by the bubble motion across the bed. This time signal was then processed to obtain the power spectral density (PSD) distribution of the pressure fluctuations. As an important aspect of this work, the effect of the sampling time scale on the empirical PSD was investigated. A time scale of 40 s was found to be a good compromise, ensuring both simulation performance and numerical validation consistency. The CFD model was first numerically verified by a mesh refinement process, after which it was used to investigate the sensitivity with regard to minimum fluidization velocity (as a calibration point for the drag law), restitution coefficient, and solid pressure term, while assessing its accuracy in matching the empirical PSD. The 2D model provided a fair match with the empirical time-averaged pressure drop, the amplitude of the related fluctuations, and the signal’s energy computed as the integral of the PSD. A 3D version of the TFM was also used and improved the match with the empirical PSD in the very first part of the frequency spectrum. PMID:28695119
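
    The signal processing chain described here, estimating a PSD from a 40 s pressure-drop record and integrating it to obtain the signal's energy, can be sketched on a synthetic signal (the 4 Hz bubble-passage frequency and the sampling rate are arbitrary illustrative values):

```python
import numpy as np

# Synthetic pressure-drop record: a dominant bubble-passage component at
# 4 Hz plus noise, sampled over a 40 s window as in the study.
fs = 200.0                              # sampling frequency [Hz]
t = np.arange(0, 40.0, 1.0 / fs)
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 4.0 * t) + 0.5 * rng.standard_normal(t.size)

# One-sided periodogram estimate of the power spectral density.
spectrum = np.fft.rfft(signal - signal.mean())
psd = np.abs(spectrum) ** 2 / (fs * t.size)
psd[1:-1] *= 2                          # fold negative frequencies in
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

# The signal's mean power is the integral of the PSD over frequency.
energy = np.sum(psd) * (freqs[1] - freqs[0])
peak = freqs[np.argmax(psd)]
print(f"dominant frequency: {peak:.2f} Hz, mean signal power: {energy:.2f}")
```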

  18. Sensitivity Analysis and Accuracy of a CFD-TFM Approach to Bubbling Bed Using Pressure Drop Fluctuations.

    PubMed

    Tricomi, Leonardo; Melchiori, Tommaso; Chiaramonti, David; Boulet, Micaël; Lavoie, Jean Michel

    2017-01-01

    Based upon two-fluid model (TFM) theory, a CFD model was implemented to investigate a cold multiphase fluidized bubbling bed reactor. The key variable used to characterize the fluid dynamics of the experimental system, and to compare it to model predictions, was the time-resolved pressure drop induced by the bubble motion across the bed. This time signal was then processed to obtain the power spectral density (PSD) distribution of the pressure fluctuations. As an important aspect of this work, the effect of the sampling time scale on the empirical PSD was investigated. A time scale of 40 s was found to be a good compromise, ensuring both simulation performance and numerical validation consistency. The CFD model was first numerically verified by a mesh refinement process, after which it was used to investigate the sensitivity with regard to minimum fluidization velocity (as a calibration point for the drag law), restitution coefficient, and solid pressure term, while assessing its accuracy in matching the empirical PSD. The 2D model provided a fair match with the empirical time-averaged pressure drop, the amplitude of the related fluctuations, and the signal's energy computed as the integral of the PSD. A 3D version of the TFM was also used and improved the match with the empirical PSD in the very first part of the frequency spectrum.

  19. The “Dry-Run” Analysis: A Method for Evaluating Risk Scores for Confounding Control

    PubMed Central

    Wyss, Richard; Hansen, Ben B.; Ellis, Alan R.; Gagne, Joshua J.; Desai, Rishi J.; Glynn, Robert J.; Stürmer, Til

    2017-01-01

    A propensity score (PS) model's ability to control confounding can be assessed by evaluating covariate balance across exposure groups after PS adjustment. The optimal strategy for evaluating a disease risk score (DRS) model's ability to control confounding is less clear. DRS models cannot be evaluated through balance checks within the full population, and they are usually assessed through prediction diagnostics and goodness-of-fit tests. A proposed alternative is the “dry-run” analysis, which divides the unexposed population into “pseudo-exposed” and “pseudo-unexposed” groups so that differences on observed covariates resemble differences between the actual exposed and unexposed populations. With no exposure effect separating the pseudo-exposed and pseudo-unexposed groups, a DRS model is evaluated by its ability to retrieve an unconfounded null estimate after adjustment in this pseudo-population. We used simulations and an empirical example to compare traditional DRS performance metrics with the dry-run validation. In simulations, the dry run often improved assessment of confounding control, compared with the C statistic and goodness-of-fit tests. In the empirical example, PS and DRS matching gave similar results and showed good performance in terms of covariate balance (PS matching) and controlling confounding in the dry-run analysis (DRS matching). The dry-run analysis may prove useful in evaluating confounding control through DRS models. PMID:28338910

  20. Assessing the capability of continuum and discrete particle methods to simulate gas-solids flow using DNS predictions as a benchmark

    DOE PAGES

    Lu, Liqiang; Liu, Xiaowen; Li, Tingwen; ...

    2017-08-12

    For this study, gas–solids flow in a three-dimensional periodic domain was numerically investigated by direct numerical simulation (DNS), the computational fluid dynamics-discrete element method (CFD-DEM) and the two-fluid model (TFM). DNS data obtained by finely resolving the flow around every particle are used as a benchmark to assess the validity of the coarser DEM and TFM approaches. The CFD-DEM predicts the correct cluster size distribution and under-predicts the macro-scale slip velocity even with a grid size as small as twice the particle diameter. The TFM approach predicts larger cluster sizes and lower slip velocity with a homogeneous drag correlation. Although the slip velocity can be matched by a simple modification to the drag model, the predicted voidage distribution is still different from DNS: both CFD-DEM and TFM over-predict the fraction of particles in dense regions and under-predict the fraction of particles in regions of intermediate void fractions. Also, the cluster aspect ratio of DNS is smaller than that of CFD-DEM and TFM. Since a simple correction to the drag model can predict a correct slip velocity, it is hoped that drag corrections based on more elaborate theories that consider voidage gradient and particle fluctuations may be able to improve the current predictions of cluster distribution.

  1. Assessing the capability of continuum and discrete particle methods to simulate gas-solids flow using DNS predictions as a benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Liqiang; Liu, Xiaowen; Li, Tingwen

    For this study, gas–solids flow in a three-dimensional periodic domain was numerically investigated by direct numerical simulation (DNS), the computational fluid dynamics-discrete element method (CFD-DEM) and the two-fluid model (TFM). DNS data obtained by finely resolving the flow around every particle are used as a benchmark to assess the validity of the coarser DEM and TFM approaches. The CFD-DEM predicts the correct cluster size distribution and under-predicts the macro-scale slip velocity even with a grid size as small as twice the particle diameter. The TFM approach predicts larger cluster sizes and lower slip velocity with a homogeneous drag correlation. Although the slip velocity can be matched by a simple modification to the drag model, the predicted voidage distribution is still different from DNS: both CFD-DEM and TFM over-predict the fraction of particles in dense regions and under-predict the fraction of particles in regions of intermediate void fractions. Also, the cluster aspect ratio of DNS is smaller than that of CFD-DEM and TFM. Since a simple correction to the drag model can predict a correct slip velocity, it is hoped that drag corrections based on more elaborate theories that consider voidage gradient and particle fluctuations may be able to improve the current predictions of cluster distribution.

  2. The aerodynamic cost of flight in the short-tailed fruit bat (Carollia perspicillata): comparing theory with measurement

    PubMed Central

    von Busse, Rhea; Waldman, Rye M.; Swartz, Sharon M.; Voigt, Christian C.; Breuer, Kenneth S.

    2014-01-01

    Aerodynamic theory has long been used to predict the power required for animal flight, but widely used models contain many simplifications. It has been difficult to ascertain how closely biological reality matches model predictions, largely because of the technical challenges of accurately measuring the power expended when an animal flies. We designed a study to measure flight speed-dependent aerodynamic power directly from the kinetic energy contained in the wake of bats flying in a wind tunnel. We compared these measurements with two theoretical predictions that have been used for several decades in diverse fields of vertebrate biology and to metabolic measurements from a previous study using the same individuals. A high-accuracy displaced laser sheet stereo particle image velocimetry experimental design measured the wake velocities in the Trefftz plane behind four bats flying over a range of speeds (3–7 m s−1). We computed the aerodynamic power contained in the wake using a novel interpolation method and compared these results with the power predicted by Pennycuick's and Rayner's models. The measured aerodynamic power falls between the two theoretical predictions, demonstrating that the models effectively predict the appropriate range of flight power, but the models do not accurately predict minimum power or maximum range speeds. Mechanical efficiency—the ratio of aerodynamic power output to metabolic power input—varied from 5.9% to 9.8% for the same individuals, changing with flight speed. PMID:24718450

  3. The Faber–Jackson relation and Fundamental Plane from halo abundance matching

    DOE PAGES

    Desmond, Harry; Wechsler, Risa H.

    2016-11-02

    The Fundamental Plane (FP) describes the relation between the stellar mass, size, and velocity dispersion of elliptical galaxies; the Faber–Jackson relation (FJR) is its projection on to {mass, velocity} space. In this work, we re-deploy and expand the framework of Desmond & Wechsler to ask whether abundance matching-based Λ-cold dark matter models which have shown success in matching the spatial distribution of galaxies are also capable of explaining key properties of the FJR and FP, including their scatter. Within our framework, agreement with the normalization of the FJR requires haloes to expand in response to disc formation. We find that the tilt of the FP may be explained by a combination of the observed non-homology in galaxy structure and the variation in mass-to-light ratio produced by abundance matching with a universal initial mass function, provided that the anisotropy of stellar motions is taken into account. However, the predicted scatter around the FP is considerably increased by situating galaxies in cosmologically motivated haloes due to the variations in halo properties at fixed stellar mass and appears to exceed that of the data. Finally, this implies that additional correlations between galaxy and halo variables may be required to fully reconcile these models with elliptical galaxy scaling relations.

  4. Characterizing and modeling the free recovery and constrained recovery behavior of a polyurethane shape memory polymer

    NASA Astrophysics Data System (ADS)

    Volk, Brent L.; Lagoudas, Dimitris C.; Maitland, Duncan J.

    2011-09-01

    In this work, tensile tests and one-dimensional constitutive modeling were performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigated the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles were performed during each test. The material was observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5-4.2 MPa was observed for the constrained displacement recovery experiments. After the experiments were performed, the Chen and Lagoudas model was used to simulate and predict the experimental results. The material properties used in the constitutive model—namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction—were calibrated from a single 10% extension free recovery experiment. The model was then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data.

  5. Integrated modeling of cryogenic layered highfoot experiments at the NIF

    NASA Astrophysics Data System (ADS)

    Kritcher, A. L.; Hinkel, D. E.; Callahan, D. A.; Hurricane, O. A.; Clark, D.; Casey, D. T.; Dewald, E. L.; Dittrich, T. R.; Döppner, T.; Barrios Garcia, M. A.; Haan, S.; Berzak Hopkins, L. F.; Jones, O.; Landen, O.; Ma, T.; Meezan, N.; Milovich, J. L.; Pak, A. E.; Park, H.-S.; Patel, P. K.; Ralph, J.; Robey, H. F.; Salmonson, J. D.; Sepke, S.; Spears, B.; Springer, P. T.; Thomas, C. A.; Town, R.; Celliers, P. M.; Edwards, M. J.

    2016-05-01

    Integrated radiation hydrodynamic modeling in two dimensions, including the hohlraum and capsule, of layered cryogenic HighFoot Deuterium-Tritium (DT) implosions on the NIF successfully predicts important data trends. The model consists of a semi-empirical fit to low mode asymmetries and radiation drive multipliers to match shock trajectories, one-dimensional inflight radiography, and the time of peak neutron production. Application of the model across the HighFoot shot series, over a range of powers, laser energies, laser wavelengths, and target thicknesses, predicts the neutron yield to within a factor of two for most shots. The Deuterium-Deuterium ion temperatures and the DT down-scattered ratios, the ratio of (10-12)/(13-15) MeV neutrons, roughly agree with data at peak fuel velocities <340 km/s and deviate at higher peak velocities, potentially due to flows and neutron scattering differences stemming from 3D or capsule support tent effects. These calculations show a significant amount of alpha heating, 1-2.5× for shots where the experimental yield is within a factor of two, which has been achieved by increasing the fuel kinetic energy. This level of alpha heating is consistent with a dynamic hot spot model that is matched to experimental data and as determined from scaling of the yield with peak fuel velocity. These calculations also show that low mode asymmetries become more important as the fuel velocity is increased, and that improving these low mode asymmetries can result in an increase in the yield by a factor of several.

  6. Development of an Improved Time Varying Loudness Model with the Inclusion of Binaural Loudness Summation

    NASA Astrophysics Data System (ADS)

    Charbonneau, Jeremy

    As the perceived quality of a product is becoming more important in the manufacturing industry, more emphasis is being placed on accurately predicting the sound quality of everyday objects. This study was undertaken to improve upon current prediction techniques with regard to the psychoacoustic descriptor of loudness and an improved binaural summation technique. The feasibility of this project was first investigated through a loudness matching experiment involving thirty-one subjects and pure tones of constant sound pressure level. A dependence of binaural summation on frequency was observed which had previously not been a subject of investigation in the reviewed literature. A follow-up investigation was carried out with forty-eight volunteers and pure tones of constant sensation level. Contrary to existing theories in literature the resulting loudness matches revealed an amplitude versus frequency relationship which confirmed the perceived increase in loudness when a signal was presented to both ears simultaneously as opposed to one ear alone. The resulting trend strongly indicated that the higher the frequency of the presented signal, the greater the increase in observed binaural summation. The results from each investigation were summarized into a single binaural summation algorithm and inserted into an improved time-varying loudness model. Using experimental techniques, it was demonstrated that the updated binaural summation algorithm was a considerable improvement over the state of the art approach for predicting the perceived binaural loudness. The improved function retained the ease of use from the original model while additionally providing accurate estimates of diotic listening conditions from monaural WAV files. It was clearly demonstrated using a validation jury test that the revised time-varying loudness model was a significant improvement over the previously standardized approach.

  7. Contextual Match and Cue-Independence of Retrieval-Induced Forgetting: Testing the Prediction of the Model by Norman, Newman, and Detre (2007)

    ERIC Educational Resources Information Center

    Hanczakowski, Maciej; Mazzoni, Giuliana

    2013-01-01

    Retrieval-induced forgetting (RIF) is the finding of impaired memory performance for information stored in long-term memory due to retrieval of a related set of information. This phenomenon is often assigned to operations of a specialized mechanism recruited to resolve interference during retrieval by deactivating competing memory representations.…

  8. Predictive Power of Attention and Reading Readiness Variables on Auditory Reasoning and Processing Skills of Six-Year-Old Children

    ERIC Educational Resources Information Center

    Erbay, Filiz

    2013-01-01

    The aim of present research was to describe the relation of six-year-old children's attention and reading readiness skills (general knowledge, word comprehension, sentences, and matching) with their auditory reasoning and processing skills. This was a quantitative study based on scanning model. Research sampling consisted of 204 kindergarten…

  9. Mars Ozone Absorption Line Shapes from Infrared Heterodyne Spectra Applied to GCM-Predicted Ozone Profiles and to MEX/SPICAM Column Retrievals

    NASA Technical Reports Server (NTRS)

    Fast, Kelly E.; Kostiuk, T.; Annen, J.; Hewagama, T.; Delgado, J.; Livengood, T. A.; Lefevre, F.

    2008-01-01

    We present the application of infrared heterodyne line shapes of ozone on Mars to those produced by radiative transfer modeling of ozone profiles predicted by general circulation models (GCM), and to contemporaneous column abundances measured by Mars Express SPICAM. Ozone is an important tracer of photochemistry in Mars' atmosphere, serving as an observable with which to test predictions of photochemistry-coupled GCMs. Infrared heterodyne spectroscopy at 9.5 microns with spectral resolving power >1,000,000 is the only technique that can directly measure fully-resolved line shapes of Martian ozone features from the surface of the Earth. Measurements were made with Goddard Space Flight Center's Heterodyne Instrument for Planetary Wind And Composition (HIPWAC) at the NASA Infrared Telescope Facility (IRTF) on Mauna Kea, Hawaii on February 21-24, 2008 UT at Ls=35deg on or near the MEX orbital path. The HIPWAC observations were used to test GCM predictions. For example, a GCM-generated ozone profile for 60degN 112degW was scaled so that a radiative transfer calculation of its absorption line shape matched an observed HIPWAC absorption feature at the same areographic position, local time, and season. The RMS deviation of the model from the data was slightly smaller for the GCM-generated profile than for a line shape produced by a constant-with-height profile, even though the total column abundances were the same, showing potential for testing and constraining GCM ozone profiles. The resulting ozone column abundance from matching the model to the HIPWAC line shape was 60% higher than that observed by SPICAM at the same areographic position one day earlier and 2.5 hours earlier in local time. This could be due to day-to-day, diurnal, or north polar region variability, or to measurement sensitivity to the ozone column and its distribution; these possibilities will be explored. This work was supported by NASA's Planetary Astronomy Program.

  10. The relationship between relational models and individualism and collectivism: evidence from culturally diverse work groups.

    PubMed

    Vodosek, Markus

    2009-04-01

    Relational models theory (Fiske, 1991 ) proposes that all thinking about social relationships is based on four elementary mental models: communal sharing, authority ranking, equality matching, and market pricing. Triandis and his colleagues (e.g., Triandis, Kurowski, & Gelfand, 1994 ) have suggested a relationship between the constructs of horizontal and vertical individualism and collectivism and Fiske's relational models. However, no previous research has examined this proposed relationship empirically. The objective of the current study was to test the association between the two frameworks in order to further our understanding of why members of culturally diverse groups may prefer different relational models in interactions with other group members. Findings from this study support a relationship between Triandis' constructs and Fiske's four relational models and uphold Fiske's ( 1991 ) claim that the use of the relational models is culturally dependent. As hypothesized, horizontal collectivism was associated with a preference for equality matching and communal sharing, vertical individualism was related to a preference for authority ranking, and vertical collectivism was related to a preference for authority ranking and communal sharing. However, contrary to expectations, horizontal individualism was not related to a preference for equality matching and market pricing, and vertical individualism was not associated with market pricing. By showing that there is a relationship between Triandis' and Fiske's frameworks, this study closes a gap in relational models theory, namely how culture relates to people's preferences for relational models. Thus, the findings from this study will enable future researchers to explain and predict what relational models are likely to be used in a certain cultural context.

  11. Using demography and movement behavior to predict range expansion of the southern sea otter.

    USGS Publications Warehouse

    Tinker, M.T.; Doak, D.F.; Estes, J.A.

    2008-01-01

    In addition to forecasting population growth, basic demographic data combined with movement data provide a means for predicting rates of range expansion. Quantitative models of range expansion have rarely been applied to large vertebrates, although such tools could be useful for restoration and management of many threatened but recovering populations. Using the southern sea otter (Enhydra lutris nereis) as a case study, we utilized integro-difference equations in combination with a stage-structured projection matrix that incorporated spatial variation in dispersal and demography to make forecasts of population recovery and range recolonization. In addition to these basic predictions, we emphasize how to make these modeling predictions useful in a management context through the inclusion of parameter uncertainty and sensitivity analysis. Our models resulted in hind-cast (1989–2003) predictions of net population growth and range expansion that closely matched observed patterns. We next made projections of future range expansion and population growth, incorporating uncertainty in all model parameters, and explored the sensitivity of model predictions to variation in spatially explicit survival and dispersal rates. The predicted rate of southward range expansion (median = 5.2 km/yr) was sensitive to both dispersal and survival rates; elasticity analysis indicated that changes in adult survival would have the greatest potential effect on the rate of range expansion, while perturbation analysis showed that variation in subadult dispersal contributed most to variance in model predictions. Variation in survival and dispersal of females at the south end of the range contributed most of the variance in predicted southward range expansion. Our approach provides guidance for the acquisition of further data and a means of forecasting the consequence of specific management actions. Similar methods could aid in the management of other recovering populations.
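
    The integrodifference machinery referred to here, per-step local growth followed by redistribution through a dispersal kernel, can be sketched in one dimension (Ricker growth and a Gaussian kernel with invented parameters, far simpler than the stage-structured, spatially varying sea otter model):

```python
import numpy as np

# One-dimensional integrodifference sketch: each time step applies local
# density-dependent growth, then redistributes individuals through a
# Gaussian dispersal kernel. All parameter values are illustrative.
x = np.linspace(-100.0, 100.0, 401)        # habitat coordinate [km]
dx = x[1] - x[0]
n = np.where(np.abs(x) < 5.0, 1.0, 0.0)    # initially occupied core

r, K, sigma = 1.2, 1.0, 2.0                # growth rate, capacity, dispersal SD
kernel = np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

front = []
for _ in range(20):
    grown = n * np.exp(r * (1.0 - n / K))              # Ricker growth
    n = np.convolve(grown, kernel, mode="same") * dx   # dispersal integral
    front.append(x[n > 0.05].max())                    # track the range edge

# After a transient the front advances at a roughly constant speed
# (theory predicts about sigma * sqrt(2 r) ~ 3.1 km per time step here).
speeds = np.diff(front[10:])
print(f"asymptotic spread rate ~{speeds.mean():.1f} km/step")
```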

  12. Field-scale Prediction of Enhanced DNAPL Dissolution Using Partitioning Tracers and Flow Pattern Effects

    NASA Astrophysics Data System (ADS)

    Wang, F.; Annable, M. D.; Jawitz, J. W.

    2012-12-01

    The equilibrium streamtube model (EST) has demonstrated the ability to accurately predict dense nonaqueous phase liquid (DNAPL) dissolution in laboratory experiments and numerical simulations. Here the model is applied to predict DNAPL dissolution at a PCE-contaminated dry cleaner site, located in Jacksonville, Florida. The EST is an analytical solution with field-measurable input parameters. Here, measured data from a field-scale partitioning tracer test were used to parameterize the EST model and the predicted PCE dissolution was compared to measured data from an in-situ alcohol (ethanol) flood. In addition, a simulated partitioning tracer test from a calibrated spatially explicit multiphase flow model (UTCHEM) was also used to parameterize the EST analytical solution. The ethanol prediction based on both the field partitioning tracer test and the UTCHEM tracer test simulation closely matched the field data. The PCE EST prediction showed a peak shift to an earlier arrival time that was concluded to be caused by well screen interval differences between the field tracer test and alcohol flood. This observation was based on a modeling assessment of potential factors that may influence predictions by using UTCHEM simulations. The imposed injection and pumping flow pattern at this site for both the partitioning tracer test and alcohol flood was more complex than the natural gradient flow pattern (NGFP). Both the EST model and UTCHEM were also used to predict PCE dissolution under natural gradient conditions, with much simpler flow patterns than the forced-gradient double five spot of the alcohol flood. The NGFP predictions based on parameters determined from tracer tests conducted with complex flow patterns underestimated PCE concentrations and total mass removal. This suggests that the flow patterns influence aqueous dissolution and that the aqueous dissolution under the NGFP is more efficient than dissolution under complex flow patterns.

  13. Use of an Improved Matching Algorithm to Select Scaffolds for Enzyme Design Based on a Complex Active Site Model.

    PubMed

    Huang, Xiaoqiang; Xue, Jing; Lin, Min; Zhu, Yushan

    2016-01-01

    Active site preorganization helps native enzymes electrostatically stabilize the transition state better than the ground state for their primary substrates and achieve significant rate enhancement. In this report, we hypothesize that a complex active site model for active site preorganization modeling should help to create preorganized active site designs and afford higher starting activities towards target reactions. Our matching algorithm ProdaMatch was improved by invoking effective pruning strategies, and the native active sites for ten scaffolds in a benchmark test set were reproduced. The root-mean-squared deviations between the matched transition states and those in the crystal structures were < 1.0 Å for the ten scaffolds, and the repacking calculation results showed that 91% of the hydrogen bonds within the active sites are recovered, indicating that the active sites can be preorganized based on the predicted positions of transition states. The application of the complex active site model for de novo enzyme design was evaluated by scaffold selection using a classic catalytic triad motif for the hydrolysis of p-nitrophenyl acetate. Eighty scaffolds were identified from a scaffold library with 1,491 proteins, and four of these scaffolds were native esterases. Furthermore, enzyme design for complicated substrates was investigated for the hydrolysis of cephalexin using scaffold selection based on two different catalytic motifs. Only three scaffolds were identified from the scaffold library by virtue of the classic catalytic triad-based motif. In contrast, 40 scaffolds were identified using a more flexible, but still preorganized, catalytic motif, where one scaffold corresponded to the α-amino acid ester hydrolase that catalyzes the hydrolysis and synthesis of cephalexin. Thus, the complex active site modeling approach for de novo enzyme design, with the aid of the improved ProdaMatch program, is a promising approach for the creation of active sites with high catalytic efficiencies towards target reactions.

  14. Use of an Improved Matching Algorithm to Select Scaffolds for Enzyme Design Based on a Complex Active Site Model

    PubMed Central

    Huang, Xiaoqiang; Xue, Jing; Lin, Min; Zhu, Yushan

    2016-01-01

    Active site preorganization helps native enzymes electrostatically stabilize the transition state better than the ground state for their primary substrates and achieve significant rate enhancement. In this report, we hypothesize that a complex active site model for active site preorganization modeling should help to create preorganized active site design and afford higher starting activities towards target reactions. Our matching algorithm ProdaMatch was improved by invoking effective pruning strategies and the native active sites for ten scaffolds in a benchmark test set were reproduced. The root-mean squared deviations between the matched transition states and those in the crystal structures were < 1.0 Å for the ten scaffolds, and the repacking calculation results showed that 91% of the hydrogen bonds within the active sites are recovered, indicating that the active sites can be preorganized based on the predicted positions of transition states. The application of the complex active site model for de novo enzyme design was evaluated by scaffold selection using a classic catalytic triad motif for the hydrolysis of p-nitrophenyl acetate. Eighty scaffolds were identified from a scaffold library with 1,491 proteins and four scaffolds were native esterase. Furthermore, enzyme design for complicated substrates was investigated for the hydrolysis of cephalexin using scaffold selection based on two different catalytic motifs. Only three scaffolds were identified from the scaffold library by virtue of the classic catalytic triad-based motif. In contrast, 40 scaffolds were identified using a more flexible, but still preorganized catalytic motif, where one scaffold corresponded to the α-amino acid ester hydrolase that catalyzes the hydrolysis and synthesis of cephalexin. 
Thus, complex active site modeling with the aid of the improved ProdaMatch program is a promising approach for the de novo design of active sites with high catalytic efficiencies towards target reactions. PMID:27243223
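The scaffold-matching quality above is judged by the root-mean-squared deviation (RMSD) between matched and crystallographic transition-state positions. A minimal sketch of that metric follows; the coordinates are hypothetical, and a real comparison would superimpose the two structures first.

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-squared deviation between two equal-length lists of
    (x, y, z) atom positions, assuming they are already superimposed."""
    if len(coords_a) != len(coords_b):
        raise ValueError("coordinate lists must have equal length")
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Hypothetical matched vs. crystallographic positions (angstroms)
matched = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (1.5, 1.2, 0.0)]
crystal = [(0.1, 0.0, 0.0), (1.4, 0.1, 0.0), (1.6, 1.1, 0.1)]
print(rmsd(matched, crystal))  # well under the 1.0 angstrom threshold cited above
```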

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbante, Paolo; Frezzotti, Aldo; Gibelli, Livio

    The unsteady evaporation of a thin planar liquid film is studied by molecular dynamics simulations of a Lennard-Jones fluid. The obtained results are compared with the predictions of a diffuse interface model in which capillary Korteweg contributions are added to hydrodynamic equations, in order to obtain a unified description of the liquid bulk, liquid-vapor interface and vapor region. Particular care has been taken in constructing a diffuse interface model matching the thermodynamic and transport properties of the Lennard-Jones fluid. The comparison of diffuse interface model and molecular dynamics results shows that, although good agreement is obtained in equilibrium conditions, remarkable deviations of diffuse interface model predictions from the reference molecular dynamics results are observed in the simulation of liquid film evaporation. It is also observed that molecular dynamics results are in good agreement with preliminary results obtained from a composite model which describes the liquid film by a standard hydrodynamic model and the vapor by the Boltzmann equation. The two mathematical models are connected by kinetic boundary conditions assuming unit evaporation coefficient.

  16. Design and application of implicit solvent models in biomolecular simulations.

    PubMed

    Kleinjung, Jens; Fraternali, Franca

    2014-04-01

    We review implicit solvent models and their parametrisation by introducing the concepts and recent developments of the most popular models with a focus on parametrisation via force matching. An overview of recent applications of the solvation energy term in protein dynamics, modelling, design and prediction is given to illustrate the usability and versatility of implicit solvation in reproducing the physical behaviour of biomolecular systems. Limitations of implicit models are discussed through the example of more challenging systems like nucleic acids and membranes. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  17. Calibration of Discrete Random Walk (DRW) Model via G.I Taylor's Dispersion Theory

    NASA Astrophysics Data System (ADS)

    Javaherchi, Teymour; Aliseda, Alberto

    2012-11-01

    Prediction of particle dispersion in turbulent flows is still an important challenge with many applications to environmental, as well as industrial, fluid mechanics. Several models of dispersion have been developed to predict particle trajectories and their relative velocities, in combination with a RANS-based simulation of the background flow. The interaction of the particles with the velocity fluctuations at different turbulent scales represents a significant difficulty in generalizing the models to the wide range of flows where they are used. We focus our attention on the Discrete Random Walk (DRW) model applied to flow in a channel, particularly on the selection of eddy lifetimes as realizations of a Poisson distribution with a mean value proportional to κ / ɛ . We present a general method to determine the constant of this proportionality by matching the DRW model predictions for fluid element and particle dispersion to G. I. Taylor's classical dispersion theory. This model parameter is critical to the magnitude of predicted dispersion. A case study of its influence on sedimentation of suspended particles in a tidal channel with an array of Marine Hydrokinetic (MHK) turbines highlights the dependency of results on this time scale parameter. Support from US DOE through the Northwest National Marine Renewable Energy Center, a UW-OSU partnership.
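The calibration idea can be illustrated with a minimal sketch (not the authors' code): fluid elements carry piecewise-constant Gaussian velocities over exponentially distributed eddy lifetimes of mean T_L, and their long-time dispersion is compared with Taylor's classical result sigma^2(t) = 2 u'^2 T_L^2 (t/T_L - 1 + exp(-t/T_L)). All numbers are illustrative.

```python
import math
import random

random.seed(0)

u_rms = 0.5        # r.m.s. velocity fluctuation (illustrative)
T_L = 1.0          # mean eddy lifetime; in a RANS coupling this would be
                   # C_T * k / epsilon, with C_T the constant to calibrate
t_end = 20.0
n_particles = 2000

def drw_variance(T_e):
    """March fluid elements with piecewise-constant velocities held over
    exponential eddy lifetimes of mean T_e; return Var(x) at t_end."""
    xs = []
    for _ in range(n_particles):
        x, t = 0.0, 0.0
        while t < t_end:
            u = random.gauss(0.0, u_rms)           # sampled eddy velocity
            life = random.expovariate(1.0 / T_e)   # sampled eddy lifetime
            tau = min(life, t_end - t)
            x += u * tau
            t += tau
        xs.append(x)
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

# Taylor (1921): sigma^2(t) = 2 u'^2 T_L^2 (t/T_L - 1 + exp(-t/T_L))
taylor = 2 * u_rms**2 * T_L**2 * (t_end / T_L - 1 + math.exp(-t_end / T_L))
sigma2 = drw_variance(T_L)
print(sigma2, taylor)   # should agree within sampling noise
```

In a calibration loop, the proportionality constant in T_e = C_T * k / epsilon would be adjusted until the simulated variance growth matches the Taylor curve.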

  18. Widely tunable optical parametric oscillation in a Kerr microresonator.

    PubMed

    Sayson, Noel Lito B; Webb, Karen E; Coen, Stéphane; Erkintalo, Miro; Murdoch, Stuart G

    2017-12-15

    We report on the first experimental demonstration of widely tunable parametric sideband generation in a Kerr microresonator. Specifically, by pumping a silica microsphere in the normal dispersion regime, we achieve the generation of phase-matched four-wave mixing sidebands at large frequency detunings from the pump. Thanks to the role of higher-order dispersion in enabling phase matching, small variations of the pump wavelength translate into very large and controllable changes in the wavelengths of the generated sidebands: we experimentally demonstrate over 720 nm of tunability using a low-power continuous-wave pump laser in the C-band. We also derive simple theoretical predictions for the phase-matched sideband frequencies and discuss the predictions in light of the discrete cavity resonance frequencies. Our experimentally measured sideband wavelengths are in very good agreement with theoretical predictions obtained from our simple phase-matching analysis.
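A sketch of the kind of simple phase-matching analysis described above (not the authors' code, and neglecting the discrete cavity-resonance condition they discuss): for degenerate four-wave mixing the mismatch truncated at fourth order is beta2*Omega^2 + beta4*Omega^4/12 + 2*gamma*P = 0, which can be solved for the sideband detuning Omega. The dispersion values below are illustrative, not measured.

```python
import math

def sideband_detuning(beta2, beta4, gamma=0.0, power=0.0):
    """Angular-frequency detuning |Omega| of phase-matched FWM sidebands
    for a pump in the normal-dispersion regime (beta2 > 0, beta4 < 0),
    from beta2*Omega^2 + beta4*Omega^4/12 + 2*gamma*P = 0, truncated at
    fourth order; the nonlinear term is often negligible at large Omega."""
    # Quadratic in X = Omega^2: (beta4/12) X^2 + beta2 X + 2 gamma P = 0
    a, b, c = beta4 / 12.0, beta2, 2.0 * gamma * power
    disc = b * b - 4.0 * a * c
    X = (-b - math.sqrt(disc)) / (2.0 * a)   # root with large |Omega|
    return math.sqrt(X)

# Illustrative values (SI units): weak normal GVD plus negative beta4
beta2 = 5e-27     # s^2/m
beta4 = -1e-55    # s^4/m
omega = sideband_detuning(beta2, beta4)
print(omega / (2 * math.pi) * 1e-12, "THz detuning")
```

Because Omega^2 is approximately -12*beta2/beta4 when the nonlinear term is small, a tiny change in beta2 (i.e., in pump wavelength) produces a large shift of the sidebands, which is the tunability mechanism described in the abstract.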

  19. Application of TURBO-AE to Flutter Prediction: Aeroelastic Code Development

    NASA Technical Reports Server (NTRS)

    Hoyniak, Daniel; Simons, Todd A.; Stefko, George (Technical Monitor)

    2001-01-01

    The TURBO-AE program has been evaluated by comparing the obtained results to cascade rig data and to predictions made from various in-house programs. A high-speed fan cascade, a turbine cascade, and a fan geometry that showed flutter in the torsion mode were analyzed. The steady predictions for the high-speed fan cascade showed the TURBO-AE predictions to match the in-house codes. However, the predictions did not match the measured blade surface data. Other researchers have also reported similar disagreement with this data set. Unsteady runs for the fan configuration were not successful using TURBO-AE.

  20. Tactile orientation perception: an ideal observer analysis of human psychophysical performance in relation to macaque area 3b receptive fields

    PubMed Central

    Peters, Ryan M.; Staibano, Phillip

    2015-01-01

    The ability to resolve the orientation of edges is crucial to daily tactile and sensorimotor function, yet the means by which edge perception occurs is not well understood. Primate cortical area 3b neurons have diverse receptive field (RF) spatial structures that may participate in edge orientation perception. We evaluated five candidate RF models for macaque area 3b neurons, previously recorded while an oriented bar contacted the monkey's fingertip. We used a Bayesian classifier to assign each neuron a best-fit RF structure. We generated predictions for human performance by implementing an ideal observer that optimally decoded stimulus-evoked spike counts in the model neurons. The ideal observer predicted a saturating reduction in bar orientation discrimination threshold with increasing bar length. We tested 24 humans on an automated, precision-controlled bar orientation discrimination task and observed performance consistent with that predicted. We next queried the ideal observer to discover the RF structure and number of cortical neurons that best matched each participant's performance. Human perception was matched with a median of 24 model neurons firing throughout a 1-s period. The 10 lowest-performing participants were fit with RFs lacking inhibitory sidebands, whereas 12 of the 14 higher-performing participants were fit with RFs containing inhibitory sidebands. Participants whose discrimination improved as bar length increased to 10 mm were fit with longer RFs; those who performed well on the 2-mm bar, with narrower RFs. These results suggest plausible RF features and computational strategies underlying tactile spatial perception and may have implications for perceptual learning. PMID:26354318

  1. Crystal field analysis of the energy level structure of Cs2NaAlF6:Cr3+

    NASA Astrophysics Data System (ADS)

    Rudowicz, C.; Brik, M. G.; Avram, N. M.; Yeung, Y. Y.; Gnutek, P.

    2006-06-01

    An analysis of the energy level structure of Cr3+ ions in Cs2NaAlF6 crystal is performed using the exchange charge model (ECM) together with the crystal field analysis/microscopic spin Hamiltonian (CFA/MSH) computer package. Utilizing the crystal structure data, our approach enables modelling of the crystal field parameters (CFPs) and thus the energy level structure for Cr3+ ions at the two crystallographically inequivalent sites in Cs2NaAlF6. Using the ECM initial adjustment procedure, the CFPs are calculated in the crystallographic axis system centred at the Cr3+ ion at each site. Additionally the CFPs are also calculated using the superposition model (SPM). The ECM and SPM predicted CFP values match very well. Consideration of the symmetry aspects for the so-obtained CFP datasets reveals that the latter axis system matches the symmetry-adapted axis system related directly to the six Cr-F bonds well. Using the ECM predicted CFPs as an input for the CFA/MSH package, the complete energy level schemes are calculated for Cr3+ ions at the two sites. Comparison of the theoretical results with the experimental spectroscopic data yields satisfactory agreement. Our results confirm that the actual symmetry at both impurity sites I and II in the Cs2NaAlF6:Cr3+ system is trigonal D3d. The ECM predicted CFPs may be used as the initial (starting) parameters for simulations and fittings of the energy levels for Cr3+ ions in structurally similar hosts.

  2. Morphing Compression Garments for Space Medicine and Extravehicular Activity Using Active Materials.

    PubMed

    Holschuh, Bradley T; Newman, Dava J

    2016-02-01

    Compression garments tend to be difficult to don/doff, due to their intentional function of squeezing the wearer. This is especially true for compression garments used for space medicine and for extravehicular activity (EVA). We present an innovative solution to this problem by integrating shape changing materials-NiTi shape memory alloy (SMA) coil actuators formed into modular, 3D-printed cartridges-into compression garments to produce garments capable of constricting on command. A parameterized, 2-spring analytic counterpressure model based on 12 garment and material inputs was developed to inform garment design. A methodology was developed for producing novel SMA cartridge systems to enable active compression garment construction. Five active compression sleeve prototypes were manufactured and tested: each sleeve was placed on a rigid cylindrical object and counterpressure was measured as a function of spatial location and time before, during, and after the application of a step voltage input. Controllable active counterpressures were measured up to 34.3 kPa, exceeding the requirement for EVA life support (29.6 kPa). Prototypes which incorporated fabrics with linear properties closely matched analytic model predictions (4.1%/-10.5% error in passive/active pressure predictions); prototypes using nonlinear fabrics did not match model predictions (errors >100%). Pressure non-uniformities were observed due to friction and the rigid SMA cartridge structure. To our knowledge this is the first demonstration of controllable compression technology incorporating active materials, a novel contribution to the field of compression garment design. This technology could lead to easy-to-don compression garments with widespread space and terrestrial applications.

  3. Exploiting proteomic data for genome annotation and gene model validation in Aspergillus niger.

    PubMed

    Wright, James C; Sugden, Deana; Francis-McIntyre, Sue; Riba-Garcia, Isabel; Gaskell, Simon J; Grigoriev, Igor V; Baker, Scott E; Beynon, Robert J; Hubbard, Simon J

    2009-02-04

    Proteomic data is a potentially rich, but arguably unexploited, data source for genome annotation. Peptide identifications from tandem mass spectrometry provide prima facie evidence for gene predictions and can discriminate over a set of candidate gene models. Here we apply this to the recently sequenced Aspergillus niger fungal genome from the Joint Genome Institute (JGI), together with a second predicted protein set from another A. niger sequence. Tandem mass spectra (MS/MS) were acquired from 1D gel electrophoresis bands and searched against all available gene models using Average Peptide Scoring (APS) and reverse database searching to produce confident identifications at an acceptable false discovery rate (FDR). 405 identified peptide sequences were mapped to 214 different A. niger genomic loci to which 4093 predicted gene models clustered, 2872 of which contained the mapped peptides. Interestingly, 13 (6%) of these loci either had no preferred predicted gene model or the genome annotators' chosen "best" model for that genomic locus was not found to be the most parsimonious match to the identified peptides. The peptides identified also boosted confidence in predicted gene structures spanning 54 introns from different gene models. This work highlights the potential of integrating experimental proteomics data into genomic annotation pipelines, much as expressed sequence tag (EST) data has been. A comparison of the published genome from another strain of A. niger sequenced by DSM showed that a number of the gene models or proteins with proteomics evidence did not occur in both genomes, further highlighting the utility of the method.

  4. A rational inference approach to group and individual-level sentence comprehension performance in aphasia.

    PubMed

    Warren, Tessa; Dickey, Michael Walsh; Liburd, Teljer L

    2017-07-01

    The rational inference, or noisy channel, account of language comprehension predicts that comprehenders are sensitive to the probabilities of different interpretations for a given sentence and adapt as these probabilities change (Gibson, Bergen & Piantadosi, 2013). This account provides an important new perspective on aphasic sentence comprehension: aphasia may increase the likelihood of sentence distortion, leading people with aphasia (PWA) to rely more on the prior probability of an interpretation and less on the form or structure of the sentence (Gibson, Sandberg, Fedorenko, Bergen & Kiran, 2015). We report the results of a sentence-picture matching experiment that tested the predictions of the rational inference account and other current models of aphasic sentence comprehension across a variety of sentence structures. Consistent with the rational inference account, PWA showed similar sensitivity to the probability of particular kinds of form distortions as age-matched controls, yet overall their interpretations relied more on prior probability and less on sentence form. As predicted by rational inference, but not by other models of sentence comprehension in aphasia, PWA's interpretations were more faithful to the form for active and passive sentences than for direct object and prepositional object sentences. However, contra rational inference, there was no evidence that an individual PWA's severity of syntactic or semantic impairment predicted their sensitivity to form versus the prior probability of a sentence, as cued by semantics. These findings confirm and extend previous findings suggesting that the rational inference account holds promise for explaining aphasic and neurotypical comprehension, but they also raise new challenges for the account. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. In Vivo Predictive Dissolution: Comparing the Effect of Bicarbonate and Phosphate Buffer on the Dissolution of Weak Acids and Weak Bases.

    PubMed

    Krieg, Brian J; Taghavi, Seyed Mohammad; Amidon, Gordon L; Amidon, Gregory E

    2015-09-01

    Bicarbonate is the main buffer in the small intestine, and it is well known that buffer properties such as pKa can affect the dissolution rate of ionizable drugs. However, bicarbonate buffer is complicated to work with experimentally. Finding a suitable substitute for bicarbonate buffer may provide a way to perform more physiologically relevant dissolution tests. The dissolution of weak acid and weak base drugs was conducted in bicarbonate and phosphate buffer using rotating disk dissolution methodology. Experimental results were compared with the predicted results using the film model approach of Mooney et al. (1981, J Pharm Sci 70(1):22-32) based on equilibrium assumptions, as well as a model accounting for the slow hydration reaction CO2 + H2O → H2CO3. Assuming the dehydration reaction H2CO3 → CO2 + H2O is irreversible, the transport analysis can accurately predict rotating disk dissolution of weak acid and weak base drugs in bicarbonate buffer. The predictions show that matching the dissolution of weak acid and weak base drugs in phosphate and bicarbonate buffer is possible. The phosphate buffer concentration necessary to match physiologically relevant bicarbonate buffer [e.g., 10.5 mM HCO3-, pH = 6.5] is typically in the range of 1-25 mM and is very dependent upon drug solubility and pKa. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
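The pKa-dependence behind results like these rests on the standard Henderson-Hasselbalch relationship: for a weak acid, total (ionized plus unionized) solubility at the dissolving surface grows with the gap between surface pH and drug pKa, which is why the buffer's ability to hold surface pH matters. A minimal background sketch with hypothetical numbers, not parameters from this study:

```python
def total_solubility_weak_acid(S0, pKa, pH):
    """Total solubility of a weak acid via Henderson-Hasselbalch:
    S = S0 * (1 + 10**(pH - pKa)), with S0 the intrinsic (unionized)
    solubility."""
    return S0 * (1.0 + 10.0 ** (pH - pKa))

# Hypothetical ibuprofen-like weak acid: pKa 4.4, intrinsic solubility 0.07 mM
S0, pKa = 0.07, 4.4
for pH in (5.0, 6.5):   # e.g. a depressed surface pH vs. bulk intestinal pH
    print(pH, total_solubility_weak_acid(S0, pKa, pH), "mM")
```

A 1.5-unit difference between surface and bulk pH changes the total solubility, and hence the film-model dissolution driving force, by more than an order of magnitude.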

  6. Calculation of Crystallographic Texture of BCC Steels During Cold Rolling

    NASA Astrophysics Data System (ADS)

    Das, Arpan

    2017-05-01

    BCC alloys commonly tend to develop strong fibre textures, which are often represented as isointensity diagrams in φ 1 sections or by fibre diagrams. The alpha fibre in bcc steels is generally characterised by a <110> crystallographic axis parallel to the rolling direction. The objective of the present research is to correlate carbon content, carbide dispersion, rolling reduction, Euler angles (ϕ) (when φ 1 = 0° and φ 2 = 45° along the alpha fibre) and the resulting alpha fibre texture orientation intensity. In the present research, Bayesian neural computation has been employed to correlate these and to compare comprehensively with the existing feed-forward neural network model. An excellent match to the measured texture data within the bounding box of the texture training data set has already been obtained through the feed-forward neural network model by other researchers. Feed-forward neural network predictions outside the bounds of the training texture data showed deviations from the expected values. Currently, Bayesian computation has been similarly applied to confirm that the predictions are reasonable in the context of basic metallurgical principles; it matched better outside the bounds of the training texture data set than the reported feed-forward neural network. Bayesian computation puts error bars on predicted values and allows the significance of each individual parameter to be estimated. Additionally, it is also possible by Bayesian computation to estimate the isolated influence of a particular variable, such as carbon concentration, which cannot in practice be varied independently. This shows the ability of the Bayesian neural network to examine new phenomena in situations where the data cannot be accessed through experiments.

  7. Frequencies and Flutter Speed Estimation for Damaged Aircraft Wing Using Scaled Equivalent Plate Analysis

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2010-01-01

    Equivalent plate analysis is often used to replace the computationally expensive finite element analysis in initial design stages or in conceptual design of aircraft wing structures. The equivalent plate model can also be used to design a wind tunnel model to match the stiffness characteristics of the wing box of a full-scale aircraft wing model while satisfying strength-based requirements. An equivalent plate analysis technique is presented to predict the static and dynamic response of an aircraft wing with or without damage. First, a geometric scale factor and a dynamic pressure scale factor are defined to relate the stiffness, load and deformation of the equivalent plate to the aircraft wing. A procedure using an optimization technique is presented to create scaled equivalent plate models from the full-scale aircraft wing using geometric and dynamic pressure scale factors. The scaled models are constructed by matching the stiffness of the scaled equivalent plate with the scaled aircraft wing stiffness. It is demonstrated that the scaled equivalent plate model can be used to predict the deformation of the aircraft wing accurately. Once the full equivalent plate geometry is obtained, any other scaled equivalent plate geometry can be obtained using the geometric scale factor. Next, an average frequency scale factor is defined as the average ratio of the frequencies of the aircraft wing to the frequencies of the full-scale equivalent plate. The average frequency scale factor combined with the geometric scale factor is used to predict the frequency response of the aircraft wing from the scaled equivalent plate analysis. A procedure is outlined to estimate the frequency response and the flutter speed of an aircraft wing from the equivalent plate analysis using the frequency scale factor and geometric scale factor. The equivalent plate analysis is demonstrated using an aircraft wing without damage and another with damage.
Both of the problems show that the scaled equivalent plate analysis can be successfully used to predict the frequencies and flutter speed of a typical aircraft wing.
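The average frequency scale factor defined above is simply the mean of per-mode frequency ratios, after which new plate frequencies can be rescaled to wing frequencies. A minimal sketch with hypothetical modal frequencies (not data from the report):

```python
def average_frequency_scale_factor(wing_freqs, plate_freqs):
    """Average ratio of aircraft-wing modal frequencies to those of the
    full-scale equivalent plate, per the definition in the abstract."""
    ratios = [w / p for w, p in zip(wing_freqs, plate_freqs)]
    return sum(ratios) / len(ratios)

# Hypothetical first three modal frequencies (Hz): wing vs. equivalent plate
wing = [2.10, 6.40, 11.8]
plate = [2.00, 6.10, 11.0]
s_f = average_frequency_scale_factor(wing, plate)

# Predicted wing frequencies from a subsequent equivalent-plate analysis:
predicted = [s_f * f for f in plate]
print(s_f, predicted)
```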

  8. Scalar utility theory and proportional processing: what does it actually imply?

    PubMed Central

    Rosenström, Tom; Wiesner, Karoline; Houston, Alasdair I

    2017-01-01

    Scalar Utility Theory (SUT) is a model used to predict animal and human choice behaviour in the context of reward amount, delay to reward, and variability in these quantities (risk preferences). This article reviews and extends SUT, deriving novel predictions. We show that, contrary to what has been implied in the literature, (1) SUT can predict both risk averse and risk prone behaviour for both reward amounts and delays to reward depending on experimental parameters, (2) SUT implies violations of several concepts of rational behaviour (e.g. it violates strong stochastic transitivity and its equivalents, and leads to probability matching) and (3) SUT can predict, but does not always predict, a linear relationship between risk sensitivity in choices and coefficient of variation in the decision-making experiment. SUT derives from Scalar Expectancy Theory which models uncertainty in behavioural timing using a normal distribution. We show that the above conclusions also hold for other distributions, such as the inverse Gaussian distribution derived from drift-diffusion models. A straightforward way to test the key assumptions of SUT is suggested and possible extensions, future prospects and mechanistic underpinnings are discussed. PMID:27288541
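The scalar property at the heart of SUT is commonly formalized as a normally distributed subjective magnitude whose standard deviation grows in proportion to its mean (a constant coefficient of variation). Under that assumption and a larger-sample-wins decision rule (a simplification; the article's derivations are more general), the choice probability has a closed form. The numbers below are illustrative:

```python
import math
import random

def p_choose_larger(m_a, m_b, cv):
    """P(choose A over B) when each magnitude is represented as
    Normal(m, cv*m) (the scalar property) and the larger sample wins:
    Phi((m_a - m_b) / sqrt(cv^2 * (m_a^2 + m_b^2)))."""
    z = (m_a - m_b) / math.sqrt(cv ** 2 * (m_a ** 2 + m_b ** 2))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical: reward magnitude 6 vs 4, coefficient of variation 0.25
p = p_choose_larger(6.0, 4.0, 0.25)

# Monte Carlo check of the closed form
random.seed(1)
n = 200_000
hits = sum(random.gauss(6, 0.25 * 6) > random.gauss(4, 0.25 * 4)
           for _ in range(n))
print(p, hits / n)
```

The choice is probabilistic rather than deterministic even for a clearly larger reward, which is the mechanism behind the probability-matching behaviour the article derives.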

  9. Scalar utility theory and proportional processing: What does it actually imply?

    PubMed

    Rosenström, Tom; Wiesner, Karoline; Houston, Alasdair I

    2016-09-07

    Scalar Utility Theory (SUT) is a model used to predict animal and human choice behaviour in the context of reward amount, delay to reward, and variability in these quantities (risk preferences). This article reviews and extends SUT, deriving novel predictions. We show that, contrary to what has been implied in the literature, (1) SUT can predict both risk averse and risk prone behaviour for both reward amounts and delays to reward depending on experimental parameters, (2) SUT implies violations of several concepts of rational behaviour (e.g. it violates strong stochastic transitivity and its equivalents, and leads to probability matching) and (3) SUT can predict, but does not always predict, a linear relationship between risk sensitivity in choices and coefficient of variation in the decision-making experiment. SUT derives from Scalar Expectancy Theory which models uncertainty in behavioural timing using a normal distribution. We show that the above conclusions also hold for other distributions, such as the inverse Gaussian distribution derived from drift-diffusion models. A straightforward way to test the key assumptions of SUT is suggested and possible extensions, future prospects and mechanistic underpinnings are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. The Interferometric Measurement of Phase Mismatch in Potential Second Harmonic Generators.

    NASA Astrophysics Data System (ADS)

    Sinofsky, Edward Lawrence

    This dissertation combines aspects of lasers, nonlinear optics and interferometry to measure the linear optical properties involved in phase-matched second harmonic generation (SHG). A new measuring technique has been developed to rapidly analyze the phase matching performance of potential SHGs. The data taken is in the form of interferograms produced by the self-referencing nonlinear Fizeau interferometer (NLF), and correctly predicts when phase-matched SHG will occur in the sample wedge. Data extracted from the interferograms produced by the NLF allows us to predict both phase matching temperatures for noncritically phase matchable crystals and crystal orientation for angle-tuned crystals. Phase matching measurements can be made for both Type I and Type II configurations. Phase mismatch measurements were made at the fundamental wavelength of 1.32 μm for calcite, lithium niobate, and gadolinium molybdate (GMO). Similar measurements were made at 1.06 μm for calcite. Phase-matched SHG was demonstrated in calcite, lithium niobate and KTP, while phase matching by temperature tuning is ruled out for GMO.

  11. Ternary isocratic mobile phase optimization utilizing resolution Design Space based on retention time and peak width modeling.

    PubMed

    Kawabe, Takefumi; Tomitsuka, Toshiaki; Kajiro, Toshi; Kishi, Naoyuki; Toyo'oka, Toshimasa

    2013-01-18

    An optimization procedure for ternary isocratic mobile phase composition in HPLC method development using a statistical prediction model and a visualization technique is described. In this report, two prediction models were first evaluated to obtain reliable prediction results. The retention time prediction model was constructed by modifying established retention models for ternary solvent strength changes. An excellent correlation between observed and predicted retention times was obtained for various kinds of pharmaceutical compounds by multiple regression modeling of the solvent strength parameters. The model for peak width at half height employed polynomial fitting of the retention time, because a linear relationship between the peak width at half height and the retention time was not obtained even after taking into account the contribution of the extra-column effect based on a moment method. Accurate predictions were obtained with this model, with correlation coefficients between observed and predicted peak widths at half height mostly above 0.99. A procedure to visualize a resolution Design Space was then developed as the second challenge. An artificial neural network method was used to link the ternary solvent strength parameters directly to the predicted resolution, determined from the accurate predictions of retention time and peak width at half height, and to visualize appropriate ternary mobile phase compositions as the region with resolution over 1.5 on a contour profile. Using mixtures of similar pharmaceutical compounds in case studies, we verified that the procedure can find the optimal range of conditions. Observed chromatographic results under the optimal conditions mostly matched the predictions, and the average difference between observed and predicted resolution was approximately 0.3.
This means that sufficient prediction accuracy could be achieved by the proposed procedure. Consequently, the procedure to search the optimal range of ternary solvent strength achieving an appropriate separation is provided by using the resolution Design Space based on accurate prediction. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. The effects of topography on magma chamber deformation models: Application to Mt. Etna and radar interferometry

    NASA Astrophysics Data System (ADS)

    Williams, Charles A.; Wadge, Geoff

    We have used a three-dimensional elastic finite element model to examine the effects of topography on the surface deformation predicted by models of magma chamber deflation. We used the topography of Mt. Etna to control the geometry of our model, and compared the finite element results to those predicted by an analytical solution for a pressurized sphere in an elastic half-space. Topography has a significant effect on the predicted surface deformation for both displacement profiles and synthetic interferograms. Not only are the predicted displacement magnitudes significantly different, but also the map-view patterns of displacement. It is possible to match the predicted displacement magnitudes fairly well by adjusting the elevation of a reference surface; however, the horizontal pattern of deformation is still significantly different. Thus, inversions based on constant-elevation reference surfaces may not properly estimate the horizontal position of a magma chamber. We have investigated an approach where the elevation of the reference surface varies for each computation point, corresponding to topography. For vertical displacements and tilts this method provides a good fit to the finite element results, and thus may form the basis for an inversion scheme. For radial displacements, a constant reference elevation provides a better fit to the numerical results.
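The half-space reference model compared against here is the classic Mogi point source. A minimal sketch (with illustrative, non-measured numbers) of the varying-reference-elevation idea, in which each observation point's topographic height is added to the source depth:

```python
import math

def mogi_uz(r, d, a, dP, G, nu=0.25):
    """Vertical surface displacement of a Mogi point source (pressurized
    sphere in an elastic half-space):
    uz = (1 - nu) * dP * a**3 / G * d / (r**2 + d**2)**1.5"""
    return (1.0 - nu) * dP * a ** 3 / G * d / (r ** 2 + d ** 2) ** 1.5

# Hypothetical Etna-like numbers: chamber 3 km below the datum, a station
# on a 1.5 km-high edifice, 2 km radial offset, deflating source.
a, dP, G = 500.0, -10e6, 30e9        # radius (m), pressure drop (Pa), shear modulus (Pa)
r, d, h = 2000.0, 3000.0, 1500.0     # offset, depth to datum, station elevation (m)

uz_flat = mogi_uz(r, d, a, dP, G)        # constant-elevation reference surface
uz_topo = mogi_uz(r, d + h, a, dP, G)    # elevation-corrected effective depth
print(uz_flat, uz_topo)                  # deflation gives negative (subsidence)
```

With these numbers the elevation correction reduces the predicted subsidence by roughly 40%, illustrating why constant-elevation inversions can misestimate source parameters on a tall edifice.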

  13. Prediction of NOx emissions from a simplified biodiesel surrogate by applying stochastic simulation algorithms (SSA)

    NASA Astrophysics Data System (ADS)

    Omidvarborna, Hamid; Kumar, Ashok; Kim, Dong-Shik

    2017-03-01

    A stochastic simulation algorithm (SSA) approach is implemented with the components of a simplified biodiesel surrogate to predict NOx (NO and NO2) emission concentrations from the combustion of biodiesel. The main reaction pathways were obtained by simplifying the previously derived skeletal mechanisms, including saturated methyl decanoate (MD), unsaturated methyl 5-decenoate (MD5D), and n-decane (ND). ND was added to match the energy content and the C/H/O ratio of actual biodiesel fuel. The MD/MD5D/ND surrogate model was also equipped with H2/CO/C1 formation mechanisms and a simplified NOx formation mechanism. The predicted model results are in good agreement with a limited number of experimental data at low-temperature combustion (LTC) conditions for three different biodiesel fuels consisting of various ratios of unsaturated and saturated methyl esters. The root mean square errors (RMSEs) of predicted values are 0.0020, 0.0018, and 0.0025 for soybean methyl ester (SME), waste cooking oil (WCO), and tallow oil (TO), respectively. The SSA model showed the potential to predict NOx emission concentrations, when the peak combustion temperature increased through the addition of ultra-low sulphur diesel (ULSD) to biodiesel. The SSA method used in this study demonstrates the possibility of reducing the computational complexity in biodiesel emissions modelling.
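The SSA family referred to above is Gillespie's stochastic simulation algorithm. A minimal direct-method sketch for a toy first-order reaction (not the surrogate mechanism itself, whose network is far larger) shows the core loop of propensity evaluation and exponential waiting times:

```python
import random

def gillespie(k1, n_a0, t_end, seed=0):
    """Gillespie direct-method SSA for the toy first-order reaction
    A -> B with rate constant k1; returns (n_A, n_B) at t_end."""
    rng = random.Random(seed)
    t, n_a, n_b = 0.0, n_a0, 0
    while n_a > 0:
        propensity = k1 * n_a                 # total reaction propensity
        tau = rng.expovariate(propensity)     # exponential waiting time
        if t + tau > t_end:
            break
        t += tau
        n_a -= 1                              # fire one A -> B event
        n_b += 1
    return n_a, n_b

# One stochastic trajectory; the deterministic expectation is
# n_A(t) = n_A0 * exp(-k1 * t), i.e. about 1000 * exp(-1) ~ 368 here.
n_a, n_b = gillespie(k1=0.5, n_a0=1000, t_end=2.0)
print(n_a, n_b)
```

For a full mechanism, the propensity vector spans all reactions and the fired reaction is chosen proportionally to its propensity; the appeal noted in the abstract is that this avoids integrating a stiff ODE system.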

  14. Compound activity prediction using models of binding pockets or ligand properties in 3D

    PubMed Central

    Kufareva, Irina; Chen, Yu-Chen; Ilatovskiy, Andrey V.; Abagyan, Ruben

    2014-01-01

    Transient interactions of endogenous and exogenous small molecules with flexible binding sites in proteins or macromolecular assemblies play a critical role in all biological processes. Current advances in high-resolution protein structure determination, database development, and docking methodology make it possible to design three-dimensional models for prediction of such interactions with increasing accuracy and specificity. Using the data collected in the Pocketome encyclopedia, we here provide an overview of two types of three-dimensional ligand activity models, pocket-based and ligand property-based, for two important classes of proteins, nuclear and G-protein coupled receptors. For half the targets, the pocket models discriminate actives from property-matched decoys with acceptable accuracy (the area under the ROC curve, AUC, exceeding 84%) and for about one fifth of the targets with high accuracy (AUC > 95%). The 3D ligand property field models achieved AUC above 95% in half of the cases. The high-performance models can already become a basis of activity predictions for new chemicals. Family-wide benchmarking of the models highlights the strengths of both approaches and helps identify their inherent bottlenecks and challenges. PMID:23116466
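The AUC metric used above equals the probability that a randomly chosen active outscores a randomly chosen decoy (the Mann-Whitney interpretation). A minimal sketch with hypothetical docking scores:

```python
def roc_auc(active_scores, decoy_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    fraction of (active, decoy) pairs where the active scores higher,
    counting ties as one half."""
    wins = 0.0
    for a in active_scores:
        for d in decoy_scores:
            if a > d:
                wins += 1.0
            elif a == d:
                wins += 0.5
    return wins / (len(active_scores) * len(decoy_scores))

# Hypothetical scores (higher = better) for actives vs. property-matched decoys
actives = [7.2, 6.8, 6.1, 5.9, 4.0]
decoys = [5.5, 5.0, 4.8, 4.2, 3.1, 2.9]
print(roc_auc(actives, decoys))  # 26 of 30 pairs rank correctly
```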

  15. FATE 5: A natural attenuation calibration tool for groundwater fate and transport modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nevin, J.P.; Connor, J.A.; Newell, C.J.

    1997-12-31

    A new groundwater attenuation modeling tool (FATE 5) has been developed to assist users with determining site-specific natural attenuation rates for organic constituents dissolved in groundwater. FATE 5 is based on, and represents an enhancement to, the Domenico analytical groundwater transport model. These enhancements include use of an optimization routine to match results from the Domenico model to actual measured site concentrations, an extensive database of chemical property data, and calculation of an estimate of the length of time needed for a plume to reach steady-state conditions. FATE 5 was developed in Microsoft® Excel and is controlled by means of a simple, user-friendly graphical interface. Using the Solver routine built into Excel, FATE 5 is able to calibrate the attenuation rate used by the Domenico model to match site-specific data. By calibrating the decay rate to site-specific measurements, FATE 5 can yield accurate predictions of long-term natural attenuation processes within a groundwater plume. In addition, FATE 5 includes a formulation of the transient Domenico solution used to help the user determine if the steady-state assumptions employed by the model are appropriate. The calibrated groundwater flow model can then be used either to (i) predict upper-bound constituent concentrations in groundwater, based on an observed source zone concentration, or (ii) back-calculate a lower-bound SSTL value, based on a user-specified exposure point concentration at the groundwater point of exposure (POE). This paper reviews the major elements of the FATE 5 model and gives results for real-world applications. Key modeling assumptions and summary guidelines regarding calculation procedures and input parameter selection are also addressed.
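    The Domenico model that FATE 5 calibrates has a closed-form solution; a sketch of the steady-state centreline form commonly used in such screening tools is shown below (the parameter values are illustrative assumptions, not FATE 5 defaults):

```python
import math

def domenico_centerline(x, c0, v, ax, ay, az, lam, Y, Z):
    """Steady-state Domenico centreline concentration at distance x downgradient
    of a planar source of width Y and depth Z.
    v: seepage velocity; ax/ay/az: dispersivities; lam: first-order decay rate."""
    decay = math.exp((x / (2.0 * ax)) * (1.0 - math.sqrt(1.0 + 4.0 * lam * ax / v)))
    lateral = math.erf(Y / (4.0 * math.sqrt(ay * x)))
    vertical = math.erf(Z / (2.0 * math.sqrt(az * x)))
    return c0 * decay * lateral * vertical

# Illustrative inputs: 100 m downgradient, 1 mg/L source, v = 30 m/yr,
# dispersivities 10/1/0.1 m, decay 0.05 1/yr, source 20 m wide and 3 m deep.
print(domenico_centerline(100.0, 1.0, 30.0, 10.0, 1.0, 0.1, 0.05, 20.0, 3.0))
```

    Calibrating the decay rate lam to site data, as FATE 5 does with Excel's Solver, amounts to minimizing the misfit between this function and measured concentrations.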

  16. Memory systems interaction in the pigeon: working and reference memory.

    PubMed

    Roberts, William A; Strang, Caroline; Macpherson, Krista

    2015-04-01

    Pigeons' performance on a working memory task, symbolic delayed matching-to-sample, was used to examine the interaction between working memory and reference memory. Reference memory was established by training pigeons to discriminate between the comparison cues used in delayed matching as S+ and S- stimuli. Delayed matching retention tests then measured accuracy when working and reference memory were congruent and incongruent. In 4 experiments, it was shown that the interaction between working and reference memory is reciprocal: Strengthening either type of memory leads to a decrease in the influence of the other type of memory. A process dissociation procedure analysis of the data from Experiment 4 showed independence of working and reference memory, and a model of working memory and reference memory interaction was shown to predict the findings reported in the 4 experiments. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  17. Experimental verification of electrostatic boundary conditions in gate-patterned quantum devices

    NASA Astrophysics Data System (ADS)

    Hou, H.; Chung, Y.; Rughoobur, G.; Hsiao, T. K.; Nasir, A.; Flewitt, A. J.; Griffiths, J. P.; Farrer, I.; Ritchie, D. A.; Ford, C. J. B.

    2018-06-01

    In a model of a gate-patterned quantum device, it is important to choose the correct electrostatic boundary conditions (BCs) in order to match experiment. In this study, we model gate-patterned devices in doped and undoped GaAs heterostructures for a variety of BCs. The best match is obtained for an unconstrained surface between the gates, with a dielectric region above it and a frozen layer of surface charge, together with a very deep back boundary. Experimentally, we find a ∼0.2 V offset in the pinch-off characteristics of 1D channels in a doped heterostructure before and after etching off a ZnO overlayer, as predicted by the model. We also observe a clear quantised current driven by a surface acoustic wave through a lateral induced n-i-n junction in an undoped heterostructure. In the model, the ability to pump electrons in this type of device is highly sensitive to the back BC. Using the improved boundary conditions, it is straightforward to model quantum devices quite accurately using standard software.

  18. Space can substitute for time in predicting climate-change effects on biodiversity

    USGS Publications Warehouse

    Blois, Jessica L.; Williams, John W.; Fitzpatrick, Matthew C.; Jackson, Stephen T.; Ferrier, Simon

    2013-01-01

    “Space-for-time” substitution is widely used in biodiversity modeling to infer past or future trajectories of ecological systems from contemporary spatial patterns. However, the foundational assumption—that drivers of spatial gradients of species composition also drive temporal changes in diversity—rarely is tested. Here, we empirically test the space-for-time assumption by constructing orthogonal datasets of compositional turnover of plant taxa and climatic dissimilarity through time and across space from Late Quaternary pollen records in eastern North America, then modeling climate-driven compositional turnover. Predictions relying on space-for-time substitution were ∼72% as accurate as “time-for-time” predictions. However, space-for-time substitution performed poorly during the Holocene when temporal variation in climate was small relative to spatial variation and required subsampling to match the extent of spatial and temporal climatic gradients. Despite this caution, our results generally support the judicious use of space-for-time substitution in modeling community responses to climate change.

  19. Predicting the risk of suicide by analyzing the text of clinical notes.

    PubMed

    Poulin, Chris; Shiner, Brian; Thompson, Paul; Vepstas, Linas; Young-Xu, Yinong; Goertzel, Benjamin; Watts, Bradley; Flashman, Laura; McAllister, Thomas

    2014-01-01

    We developed linguistics-driven prediction models to estimate the risk of suicide. These models were generated from unstructured clinical notes taken from a national sample of U.S. Veterans Administration (VA) medical records. We created three matched cohorts: veterans who committed suicide, veterans who used mental health services and did not commit suicide, and veterans who did not use mental health services and did not commit suicide during the observation period (n = 70 in each group). From the clinical notes, we generated datasets of single keywords and multi-word phrases, and constructed prediction models using a machine-learning algorithm based on a genetic programming framework. The resulting inference accuracy was consistently 65% or more. Our data therefore suggest that computerized text analytics can be applied to unstructured medical records to estimate the risk of suicide. The resulting system could allow clinicians to screen seemingly healthy patients at the primary care level, and to continuously evaluate suicide risk among psychiatric patients.
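    Datasets of single keywords and multi-word phrases of the kind described above can be built with simple n-gram counting; a generic sketch (this is not the authors' pipeline, and the note text is invented):

```python
from collections import Counter

def ngram_features(text, n_max=2):
    """Counts of single keywords (unigrams) and multi-word phrases (here up to
    bigrams) from a free-text note; a generic stand-in feature extractor."""
    tokens = [t.lower().strip(".,;:") for t in text.split()]
    feats = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            feats[" ".join(tokens[i:i + n])] += 1
    return feats

# Invented example note, for illustration only.
note = "Patient reports poor sleep and poor appetite."
f = ngram_features(note)
print(f["poor"], f["poor sleep"])
```

    Feature vectors like these would then feed whatever classifier is chosen; the genetic programming framework used in the study is not reproduced here.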

  20. Predicting the Risk of Suicide by Analyzing the Text of Clinical Notes

    PubMed Central

    Thompson, Paul; Vepstas, Linas; Young-Xu, Yinong; Goertzel, Benjamin; Watts, Bradley; Flashman, Laura; McAllister, Thomas

    2014-01-01

    We developed linguistics-driven prediction models to estimate the risk of suicide. These models were generated from unstructured clinical notes taken from a national sample of U.S. Veterans Administration (VA) medical records. We created three matched cohorts: veterans who committed suicide, veterans who used mental health services and did not commit suicide, and veterans who did not use mental health services and did not commit suicide during the observation period (n = 70 in each group). From the clinical notes, we generated datasets of single keywords and multi-word phrases, and constructed prediction models using a machine-learning algorithm based on a genetic programming framework. The resulting inference accuracy was consistently 65% or more. Our data therefore suggest that computerized text analytics can be applied to unstructured medical records to estimate the risk of suicide. The resulting system could allow clinicians to screen seemingly healthy patients at the primary care level, and to continuously evaluate suicide risk among psychiatric patients. PMID:24489669

  1. Space can substitute for time in predicting climate-change effects on biodiversity.

    PubMed

    Blois, Jessica L; Williams, John W; Fitzpatrick, Matthew C; Jackson, Stephen T; Ferrier, Simon

    2013-06-04

    "Space-for-time" substitution is widely used in biodiversity modeling to infer past or future trajectories of ecological systems from contemporary spatial patterns. However, the foundational assumption--that drivers of spatial gradients of species composition also drive temporal changes in diversity--rarely is tested. Here, we empirically test the space-for-time assumption by constructing orthogonal datasets of compositional turnover of plant taxa and climatic dissimilarity through time and across space from Late Quaternary pollen records in eastern North America, then modeling climate-driven compositional turnover. Predictions relying on space-for-time substitution were ∼72% as accurate as "time-for-time" predictions. However, space-for-time substitution performed poorly during the Holocene when temporal variation in climate was small relative to spatial variation and required subsampling to match the extent of spatial and temporal climatic gradients. Despite this caution, our results generally support the judicious use of space-for-time substitution in modeling community responses to climate change.

  2. Thermal Protection System Mass Estimating Relationships For Blunt-Body, Earth Entry Spacecraft

    NASA Technical Reports Server (NTRS)

    Sepka, Steven A.; Samareh, Jamshid A.

    2015-01-01

    Mass estimating relationships (MERs) are developed to predict the amount of thermal protection system (TPS) necessary for safe Earth entry of blunt-body spacecraft, using simple correlations that are non-ITAR and closely match estimates from NASA's high-fidelity ablation modeling tool, the Fully Implicit Ablation and Thermal Analysis Program (FIAT). These MERs provide a first-order estimate for rapid feasibility studies. There are 840 different trajectories considered in this study, and each TPS MER has a peak heating limit. MERs for the vehicle forebody include the ablators Phenolic Impregnated Carbon Ablator (PICA) and Carbon Phenolic atop Advanced Carbon-Carbon. For the aftbody, the materials are Silicone Impregnated Reusable Ceramic Ablator (SIRCA), Acusil II, SLA-561V, and LI-900. The MERs are accurate to within 14% (at one standard deviation) of the FIAT prediction, and the most any MER underpredicts FIAT TPS thickness is 18.7%. This work focuses on the development of these MERs, the resulting equations, model limitations, and model accuracy.

  3. Exploration of a 'double-jeopardy' hypothesis within working memory profiles for children with specific language impairment.

    PubMed

    Briscoe, J; Rankin, P M

    2009-01-01

    Children with specific language impairment (SLI) often experience difficulties in the recall and repetition of verbal information. Archibald and Gathercole (2006) suggested that children with SLI are vulnerable across two separate components of a tripartite model of working memory (Baddeley and Hitch 1974). However, the hierarchical relationship between the 'slave' systems (temporary storage) and the central executive components poses a particular challenge for the interpretation of working memory profiles within a tripartite model. This study aimed to examine whether a 'double-jeopardy' assumption is compatible with a hierarchical relationship between the phonological loop and central executive components of the working memory model in children with SLI. If a strong double-jeopardy assumption is valid for children with SLI, it was predicted that raw scores on working memory tests thought to tap the phonological loop and central executive components of tripartite working memory would be lower than the scores of children matched for chronological age and those of children matched for language level, reflecting independent sources of constraint. In contrast, a hierarchical relationship would imply that a weakness in a slave component of working memory (the phonological loop) would also constrain performance on tests tapping a super-ordinate component (central executive). This locus of constraint would predict that scores of children with SLI on working memory tests that tap the central executive would be weaker relative to the scores of chronological age-matched controls only. 
Seven subtests of the Working Memory Test Battery for Children (Digit recall, Word recall, Non-word recall, Word matching, Listening recall, Backwards digit recall and Block recall; Pickering and Gathercole 2001) were administered to 14 children with SLI recruited via language resource bases and specialist schools, as well as two control groups matched on chronological age and vocabulary level, respectively. Mean group differences were ascertained by directly comparing raw scores on memory tests linked to different components of the tripartite model using a series of multivariate analyses. The majority of working memory scores of the SLI group were depressed relative to chronological age-matched controls, with the exception of spatial recall (block tapping) and word (order) matching tasks. Marked deficits in serial recall of words and digits were evident, with the SLI group scoring more poorly than the language-ability matched control group on these measures. Impairments of the SLI group on phonological loop tasks were robust, even when covariance with executive working memory scores was accounted for. There was no robust effect of group on complex working memory (central executive) tasks, despite a slight association between listening recall and phonological loop measures. A predominant feature of the working memory profile of SLI was a marked deficit on phonological loop tasks. Although scores on complex working memory tasks were also depressed, there was little evidence for a strong interpretation of double-jeopardy within working memory profiles for these children; rather, these findings were consistent with a constraint on the phonological loop for children with SLI that operated at all levels of a hierarchical tripartite model of working memory (Baddeley and Hitch 1974). 
These findings imply that low scores on complex working memory tasks alone do not unequivocally imply an independent deficit in central executive (domain-general) resources of working memory and should therefore be treated cautiously in a clinical context.

  4. Variations on Debris Disks. IV. An Improved Analytical Model for Collisional Cascades

    NASA Astrophysics Data System (ADS)

    Kenyon, Scott J.; Bromley, Benjamin C.

    2017-04-01

    We derive a new analytical model for the evolution of a collisional cascade in a thin annulus around a single central star. In this model, r_max, the size of the largest object, changes with time as r_max ∝ t^(-γ), with γ ≈ 0.1-0.2. Compared to standard models where r_max is constant in time, this evolution results in a more rapid decline of M_d, the total mass of solids in the annulus, and L_d, the luminosity of small particles in the annulus: M_d ∝ t^(-(γ+1)) and L_d ∝ t^(-(γ/2+1)). We demonstrate that the analytical model provides an excellent match to a comprehensive suite of numerical coagulation simulations for annuli at 1 au and at 25 au. If the evolution of real debris disks follows the predictions of the analytical or numerical models, the observed luminosities for evolved stars require up to a factor of two more mass than predicted by previous analytical models.

  5. Predicting locations of rare aquatic species’ habitat with a combination of species-specific and assemblage-based models

    USGS Publications Warehouse

    McKenna, James E.; Carlson, Douglas M.; Payne-Wynne, Molly L.

    2013-01-01

    Aim: Rare aquatic species are a substantial component of biodiversity, and their conservation is a major objective of many management plans. However, they are difficult to assess, and their optimal habitats are often poorly known. Methods to effectively predict the likely locations of suitable rare aquatic species habitats are needed. We combine two modelling approaches to predict occurrence and general abundance of several rare fish species. Location: Allegheny watershed of western New York State (USA). Methods: Our method used two empirical neural network modelling approaches (species-specific and assemblage-based) to predict stream-by-stream occurrence and general abundance of rare darters, based on broad-scale habitat conditions. Species-specific models were developed for longhead darter (Percina macrocephala), spotted darter (Etheostoma maculatum) and variegate darter (Etheostoma variatum) in the Allegheny drainage. An additional model predicted the type of rare-darter-containing assemblage expected in each stream reach. Predictions from both models were then combined inclusively and exclusively and compared with additional independent data. Results: Example rare darter predictions demonstrate the method's effectiveness. Models performed well (R2 ≥ 0.79), identified where suitable darter habitat was most likely to occur, and predictions matched collection sites well. Additional independent data showed that the most conservative (exclusive) model slightly underestimated the distributions of these rare darters or displaced predictions by one stream reach, suggesting that new darter habitat types were detected in the later collections. Main conclusions: Broad-scale habitat variables can be used to effectively identify rare species' habitats. Combining species-specific and assemblage-based models enhances our ability to make use of the sparse data on rare species and to identify habitat units most likely and least likely to support those species. 
This hybrid approach may assist managers with the prioritization of habitats to be examined or conserved for rare species.

  6. Developing a Novel Parameter Estimation Method for Agent-Based Model in Immune System Simulation under the Framework of History Matching: A Case Study on Influenza A Virus Infection

    PubMed Central

    Li, Tingting; Cheng, Zhengguo; Zhang, Le

    2017-01-01

    Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABMs) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain appropriate estimates of the key model parameters by incorporating experimental data. In this paper, a systematic procedure for immune system simulation, integrating the ABM and a regression method under the framework of history matching, is developed. A novel parameter estimation method that incorporates the experimental data into the ABM simulator during this procedure is proposed. First, we employ the ABM as the simulator of the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model on the input and output data of the ABM, and serves as an emulator during history matching. Next, we reduce the input space of the parameters by introducing an implausibility measure to discard implausible input values. Finally, the estimate of the model parameters is obtained using the particle swarm optimization (PSO) algorithm by fitting the experimental data over the non-implausible input values. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of the proposed method, and the results show that the method not only has good fitting and prediction accuracy but also favorable computational efficiency. PMID:29194393

  7. Developing a Novel Parameter Estimation Method for Agent-Based Model in Immune System Simulation under the Framework of History Matching: A Case Study on Influenza A Virus Infection.

    PubMed

    Li, Tingting; Cheng, Zhengguo; Zhang, Le

    2017-12-01

    Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABMs) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain appropriate estimates of the key model parameters by incorporating experimental data. In this paper, a systematic procedure for immune system simulation, integrating the ABM and a regression method under the framework of history matching, is developed. A novel parameter estimation method that incorporates the experimental data into the ABM simulator during this procedure is proposed. First, we employ the ABM as the simulator of the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model on the input and output data of the ABM, and serves as an emulator during history matching. Next, we reduce the input space of the parameters by introducing an implausibility measure to discard implausible input values. Finally, the estimate of the model parameters is obtained using the particle swarm optimization (PSO) algorithm by fitting the experimental data over the non-implausible input values. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of the proposed method, and the results show that the method not only has good fitting and prediction accuracy but also favorable computational efficiency.
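    The implausibility-based input space reduction described above is usually defined as a standardized distance between emulator prediction and observation; a minimal sketch under the common definition (the emulator means and variances here are stand-ins, not outputs of the paper's GAM):

```python
import math

def implausibility(emulator_mean, emulator_var, obs, obs_var, discrepancy_var=0.0):
    """Standard history-matching implausibility:
    I(x) = |E[f(x)] - z| / sqrt(Var_emulator + Var_obs + Var_discrepancy).
    Inputs with I(x) above a cutoff (commonly 3) are deemed implausible."""
    return abs(emulator_mean - obs) / math.sqrt(emulator_var + obs_var + discrepancy_var)

# Toy candidate parameter values mapped to stand-in emulator (mean, variance).
candidates = {0.1: (4.0, 0.5), 0.5: (9.5, 0.5), 0.9: (15.0, 0.5)}
obs, obs_var = 10.0, 0.5
non_implausible = [x for x, (m, v) in candidates.items()
                   if implausibility(m, v, obs, obs_var) < 3.0]
print(non_implausible)
```

    The surviving non-implausible inputs would then be passed to the optimizer (PSO in the paper) for the final fit.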

  8. Testing the Caustic Ring Dark Matter Halo Model Against Observations in the Milky Way

    NASA Astrophysics Data System (ADS)

    Dumas, Julie; Newberg, Heidi Jo; Niedzielski, Bethany; Susser, Adam; Thompson, Jeffery M.; Weiss, Jake; Lewis, Kim M.

    2016-06-01

    One prediction of axion dark matter models is that they can form Bose-Einstein condensates and rigid caustic rings as a halo collapses in the non-linear regime. In this thesis, we undertake the first study of a caustic ring model for the Milky Way halo (Duffy & Sikivie 2008), paying particular attention to observational consequences. We first present the formalism for calculating the gravitational acceleration of a caustic ring halo. The caustic ring dark matter theory reproduces a roughly logarithmic halo, with large perturbations near the rings. We show that this halo can reasonably match the known Galactic rotation curve. We are not able to confirm or rule out an association between the positions of the caustic rings and oscillations in the observed rotation curve, due to insufficient rotation curve data. We explore the effects of dark matter caustic rings on dwarf galaxy tidal disruption with N-body simulations. Simulations of the Sagittarius (Sgr) dwarf galaxy in a caustic ring halo potential, with disk and bulge parameters that are tuned to match the Galactic rotation curve, match observations of the Sgr trailing tidal tails as far as 90 kpc from the Galactic center. Like the Navarro-Frenk-White (NFW) halo, they are, however, unable to match the leading tidal tail. None of the caustic, NFW, or triaxial logarithmic halos is able to simultaneously match observations of the leading and trailing arms of the Sagittarius stream. We further show that simulations of dwarf galaxies that move through caustic rings are qualitatively similar to those moving in a logarithmic halo. This research was funded by NSF grant AST 10-09670, the NASA-NY Space Grant, and the American Fellowship from AAUW.

  9. Semiparametric time varying coefficient model for matched case-crossover studies.

    PubMed

    Ortega-Villa, Ana Maria; Kim, Inyoung; Kim, H

    2017-03-15

    In matched case-crossover studies, it is generally accepted that the covariates on which a case and associated controls are matched cannot exert a confounding effect on independent predictors included in the conditional logistic regression model. This is because any stratum effect is removed by the conditioning on the fixed number of sets of the case and controls in the stratum. Hence, the conditional logistic regression model is not able to detect any effects associated with the matching covariates by stratum. However, some matching covariates such as time often act as important effect modifiers, leading to incorrect statistical estimation and prediction. Therefore, we propose three approaches to evaluate effect modification by time. The first is a parametric approach, the second is a semiparametric penalized approach, and the third is a semiparametric Bayesian approach. Our parametric approach is a two-stage method, which uses conditional logistic regression in the first stage and then estimates polynomial regression in the second stage. Our semiparametric penalized and Bayesian approaches are one-stage approaches developed by using regression splines. Our semiparametric one-stage approaches allow us not only to detect the parametric relationship between the predictor and binary outcomes, but also to evaluate nonparametric relationships between the predictor and time. We demonstrate the advantage of our semiparametric one-stage approaches using both a simulation study and an epidemiological example of a 1-4 bi-directional case-crossover study of childhood aseptic meningitis with drinking water turbidity. We also provide statistical inference for the semiparametric Bayesian approach using Bayes Factors. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Predicting rates of interspecific interaction from phylogenetic trees.

    PubMed

    Nuismer, Scott L; Harmon, Luke J

    2015-01-01

    Integrating phylogenetic information can potentially improve our ability to explain species' traits, patterns of community assembly, the network structure of communities, and ecosystem function. In this study, we use mathematical models to explore the ecological and evolutionary factors that modulate the explanatory power of phylogenetic information for communities of species that interact within a single trophic level. We find that phylogenetic relationships among species can influence trait evolution and rates of interaction among species, but only under particular models of species interaction. For example, when interactions within communities are mediated by a mechanism of phenotype matching, phylogenetic trees make specific predictions about trait evolution and rates of interaction. In contrast, if interactions within a community depend on a mechanism of phenotype differences, phylogenetic information has little, if any, predictive power for trait evolution and interaction rate. Together, these results make clear and testable predictions for when and how evolutionary history is expected to influence contemporary rates of species interaction. © 2014 John Wiley & Sons Ltd/CNRS.
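    The contrast between phenotype matching and phenotype differences can be made concrete with two toy interaction kernels; a sketch under commonly assumed functional forms (the Gaussian and logistic shapes and their parameters are illustrative assumptions, not the paper's equations):

```python
import math

def matching_rate(z1, z2, alpha=1.0):
    """Gaussian phenotype-matching kernel: the interaction rate is maximal
    when traits match and decays with squared trait distance."""
    return math.exp(-alpha * (z1 - z2) ** 2)

def difference_rate(z1, z2, beta=1.0):
    """Phenotype-differences kernel: the rate depends on the signed trait
    difference (here a logistic in z1 - z2), not on trait similarity."""
    return 1.0 / (1.0 + math.exp(-beta * (z1 - z2)))

print(matching_rate(0.5, 0.5))  # identical traits give the maximal rate, 1.0
print(matching_rate(0.5, 2.5))  # rate falls off as traits diverge
```

    Under the matching kernel, closely related species with similar traits interact at similar rates, which is why phylogeny carries predictive signal there but not under the differences kernel.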

  11. A post audit of a model-designed ground water extraction system.

    PubMed

    Andersen, Peter F; Lu, Silong

    2003-01-01

    Model post audits test the predictive capabilities of ground water models and shed light on their practical limitations. In the work presented here, ground water model predictions were used to design an extraction/treatment/injection system at a military ammunition facility and then were re-evaluated using site-specific water-level data collected approximately one year after system startup. The water-level data indicated that performance specifications for the design, i.e., containment, had been achieved over the required area, but that predicted water-level changes were greater than observed, particularly in the deeper zones of the aquifer. Probable model error was investigated by determining the changes that were required to obtain an improved match to observed water-level changes. This analysis suggests that the originally estimated hydraulic properties were in error by a factor of two to five. These errors may have resulted from attributing less importance to data from deeper zones of the aquifer and from applying pumping test results to a volume of material that was larger than the volume affected by the pumping test. To determine the importance of these errors to the predictions of interest, the models were used to simulate the capture zones resulting from the originally estimated and updated parameter values. The study suggests that, despite the model error, the ground water model contributed positively to the design of the remediation system.

  12. A unified theory for ice vapor growth suitable for cloud models: Testing and implications for cold cloud evolution

    NASA Astrophysics Data System (ADS)

    Zhang, Chengzhu

    A new microphysical model for the vapor growth and aspect ratio evolution of atmospheric ice crystals is presented. The method is based on the adaptive habit model of Chen and Lamb (1994), but is modified to include surface kinetic processes for crystal growth. Inclusion of surface kinetic effects is accomplished with a new theory that accounts for axis-dependent growth. Deposition coefficients (growth efficiencies) are predicted for two axis directions based on laboratory-determined parameters for growth initiation (critical supersaturations) on each face. In essence, the new theory extends the adaptive habit approach of Chen and Lamb (1994) to ice saturation states below that of liquid saturation, where Chen and Lamb (1994) is likely most valid. The new model is used to simulate changes in crystal primary habit as a function of temperature and ice supersaturation. Predictions are compared with a detailed hexagonal growth model, both in a single-particle framework and in a Lagrangian parcel model, to indicate the accuracy of the new method. Moreover, predictions of the ratio of the axis deposition coefficients match laboratory-generated data. A parameterization for predicting deposition coefficients is developed for the bulk microphysics framework in the Regional Atmospheric Modeling System (RAMS). An initial eddy-resolving model simulation is conducted to study the effect of surface kinetics on microphysical and dynamical processes in cold cloud development.

  13. Integrated Reservoir Modeling of CO2-EOR Performance and Storage Potential in the Farnsworth Field Unit, Texas.

    NASA Astrophysics Data System (ADS)

    Ampomah, W.; Balch, R. S.; Cather, M.; Dai, Z.

    2017-12-01

    We present a performance assessment methodology and storage potential for CO2 enhanced oil recovery (EOR) in partially depleted reservoirs. A three-dimensional heterogeneous reservoir model was developed based on geological, geophysical, and engineering data from the Farnsworth Field Unit (FWU). The model aided in improved characterization of prominent rock properties within the Pennsylvanian-aged Morrow sandstone reservoir. Seismic attributes illuminated previously unknown faults and structural elements within the field. A laboratory fluid analysis was tuned to an equation of state and subsequently used to predict the minimum miscibility pressure (MMP). Datasets including net-to-gross ratio, volume of shale, permeability, and burial history were used to model initial fault transmissibility based on the Sperivick model. An improved history match of primary and secondary recovery was performed to set the basis for a CO2 flood study. The performance of the current CO2 miscible flood patterns was subsequently calibrated to historical production and injection data. Several prediction models were constructed to study the effect of recycling, addition of wells and/or new patterns, water-alternating-gas (WAG) cycles, and the optimum amount of CO2 purchase on incremental oil production and CO2 storage in the FWU. The history matching study successfully validated the presence of the previously undetected faults within the FWU that were seen in the seismic survey. The analysis of the various prediction scenarios showed that recycling a high percentage of produced gas, adding new wells, and gradually reducing CO2 purchase after several years of operation would be the best approach to ensure a high percentage of recoverable incremental oil and sequestration of anthropogenic CO2 within the Morrow reservoir. A larger percentage of the stored CO2 was dissolved in residual oil, and a smaller amount existed as supercritical free CO2. Geomechanical analysis showed the caprock to be an excellent seal, ensuring long-term security of the injected CO2.

  14. X-rays from the colliding wind binary WR 146

    NASA Astrophysics Data System (ADS)

    Zhekov, Svetozar A.

    2017-12-01

The X-ray emission from the massive Wolf-Rayet binary WR 146 is analysed in the framework of the colliding stellar wind (CSW) picture. The theoretical CSW model spectra match the shape of the observed X-ray spectrum of WR 146 well, but they considerably overestimate the observed X-ray flux (emission measure). This holds for both complete temperature equalization and partial electron heating at the shock fronts (different electron and ion temperatures), although there are indications of a better correspondence between model predictions and observations for the latter. To reconcile the model predictions and observations, the mass-loss rate of WR 146 must be reduced by a factor of 8-10 compared to the currently accepted value for this object (the latter already takes clumping into account). No excess X-ray absorption is derived from the CSW modelling.

  15. Using the Maxent program for species distribution modelling to assess invasion risk

    USGS Publications Warehouse

Jarnevich, Catherine S.; Young, Nicholas E.; Venette, R.C.

    2015-01-01

MAXENT is a software package used to relate known species occurrences to information describing the environment, such as climate, topography, anthropogenic features or soil data, and to forecast the presence or absence of a species at unsampled locations. This particular method is one of the most popular species distribution modelling techniques because of its consistently strong predictive performance and its ease of implementation. This chapter discusses the decisions and techniques needed to prepare a correlative climate matching model for the native range of an invasive alien species and to use this model to predict the potential distribution of the species in a potentially invaded range (i.e. a novel environment), using MAXENT with the Burmese python (Python molurus bivittatus) as a case study. The chapter discusses and demonstrates the challenges associated with this approach and examines the inherent limitations of using MAXENT to forecast distributions of invasive alien species.

  16. Bulalo field, Philippines: Reservoir modeling for prediction of limits to sustainable generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strobel, Calvin J.

    1993-01-28

The Bulalo geothermal field, located in Laguna province, Philippines, supplies 12% of the electricity on the island of Luzon. The first 110 MWe power plant came on line in May 1979; the current 330 MWe (gross) installed capacity was reached in 1984. Since then, the field has operated at an average plant factor of 76%. The National Power Corporation plans to add 40 MWe base load and 40 MWe standby in 1995. A numerical simulation model for the Bulalo field has been created that matches historic pressure changes, enthalpy and steam flash trends, and cumulative steam production. Gravity modeling provided independent verification of mass balances and the time rate of change of liquid desaturation in the rock matrix. Gravity modeling, in conjunction with reservoir simulation, provides a means of predicting matrix dry-out and the time to limiting conditions for sustainable levelized steam deliverability and power generation.

  17. SYNMAG PHOTOMETRY: A FAST TOOL FOR CATALOG-LEVEL MATCHED COLORS OF EXTENDED SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bundy, Kevin; Yasuda, Naoki; Hogg, David W.

    2012-12-01

Obtaining reliable, matched photometry for galaxies imaged by different observatories represents a key challenge in the era of wide-field surveys spanning more than several hundred square degrees. Methods such as flux fitting, profile fitting, and PSF homogenization followed by matched-aperture photometry are all computationally expensive. We present an alternative solution called 'synthetic aperture photometry' that exploits galaxy profile fits in one band to efficiently model the observed, point-spread-function-convolved light profile in other bands and predict the flux in arbitrarily sized apertures. Because aperture magnitudes are the most widely tabulated flux measurements in survey catalogs, producing synthetic aperture magnitudes (SYNMAGs) enables very fast matched photometry at the catalog level, without reprocessing the imaging data. We make our code public and apply it to obtain matched photometry between Sloan Digital Sky Survey ugriz and UKIDSS YJHK imaging, recovering red-sequence colors and photometric redshifts with a scatter and accuracy as good as, if not better than, FWHM-homogenized photometry from the GAMA Survey. Finally, we list some specific measurements that upcoming surveys could make available to facilitate the use of SYNMAGs.
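The core idea, fitting a galaxy profile in one band and integrating it through another band's PSF inside a fixed aperture, can be illustrated with a toy circular Gaussian profile. This is a sketch of the principle only: the published SYNMAG code works with Sérsic fits, and the function name and parameters below are invented for illustration.

```python
import numpy as np

def synthetic_aperture_flux(total_flux, profile_sigma, psf_fwhm, aperture_radius):
    """Flux of a circular Gaussian galaxy profile, convolved with a
    Gaussian PSF, falling inside a circular aperture.

    Convolving two Gaussians adds their variances, so the observed
    profile has sigma_obs^2 = profile_sigma^2 + psf_sigma^2.  The
    enclosed flux of a circular 2-D Gaussian within radius R is
    1 - exp(-R^2 / (2 * sigma_obs^2)).
    """
    psf_sigma = psf_fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    sigma_obs2 = profile_sigma**2 + psf_sigma**2
    enclosed_fraction = 1.0 - np.exp(-aperture_radius**2 / (2.0 * sigma_obs2))
    return total_flux * enclosed_fraction

# A wider PSF pushes more light outside a fixed aperture:
f_sharp = synthetic_aperture_flux(100.0, 1.0, psf_fwhm=1.0, aperture_radius=2.0)
f_blurry = synthetic_aperture_flux(100.0, 1.0, psf_fwhm=3.0, aperture_radius=2.0)
```

Predicting the same profile through each band's PSF in this way is what lets aperture magnitudes from different surveys be compared without reprocessing pixels.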

  18. Nonnormality and Divergence in Posttreatment Alcohol Use

    PubMed Central

    Witkiewitz, Katie; van der Maas, Han L. J.; Hufford, Michael R.; Marlatt, G. Alan

    2007-01-01

Alcohol lapses are the modal outcome following treatment for alcohol use disorders, yet many alcohol researchers have encountered limited success in the prediction and prevention of relapse. One hypothesis is that lapses are unpredictable, but another possibility is that the complexity of the relapse process is not captured by traditional statistical methods. Data from Project MATCH (Matching Alcoholism Treatments to Client Heterogeneity), a multisite alcohol treatment study, were reanalyzed with 2 statistical methodologies: catastrophe and 2-part growth mixture modeling. Drawing on previous investigations of self-efficacy as a dynamic predictor of relapse, the current study revisits the self-efficacy matching hypothesis, which was not statistically supported in Project MATCH. Results from both the catastrophe and growth mixture analyses demonstrated a dynamic relationship between self-efficacy and drinking outcomes. The growth mixture analyses provided evidence in support of the original matching hypothesis: Individuals with low self-efficacy who received cognitive behavior therapy drank far less frequently than did those with low self-efficacy who received motivational therapy. These results highlight the dynamic nature of the relapse process and the importance of using methodologies that accommodate this complexity when evaluating treatment outcomes. PMID:17516769

  19. Pattern placement errors: application of in-situ interferometer-determined Zernike coefficients in determining printed image deviations

    NASA Astrophysics Data System (ADS)

    Roberts, William R.; Gould, Christopher J.; Smith, Adlai H.; Rebitz, Ken

    2000-08-01

Several ideas have recently been presented which attempt to measure and predict lens aberrations for new low-k1 imaging systems. Abbreviated sets of Zernike coefficients have been produced and used to predict across-chip linewidth variation. The wavefront aberrations can now be used in commercially available lithography simulators to predict pattern distortion and placement errors. Measurement and determination of Zernike coefficients has been a significant effort of many; however, the use of these data has generally been limited to matching lenses or picking best-fit lens pairs. We will use wavefront aberration data collected with the Litel InspecStep in-situ interferometer as input data for Prolith/3D to model and predict pattern placement errors and intrafield overlay variation. Experimental data will be collected and compared to the simulated predictions.

  20. Evaluation of theoretical and empirical water vapor sorption isotherm models for soils

    NASA Astrophysics Data System (ADS)

    Arthur, Emmanuel; Tuller, Markus; Moldrup, Per; de Jonge, Lis W.

    2016-01-01

The mathematical characterization of water vapor sorption isotherms of soils is crucial for modeling processes such as volatilization of pesticides and diffusive and convective water vapor transport. Although numerous physically based and empirical models have previously been proposed to describe sorption isotherms of building materials, food, and other industrial products, knowledge about the applicability of these functions for soils is noticeably lacking. We present an evaluation of nine models for characterizing adsorption/desorption isotherms over a water activity range from 0.03 to 0.93, based on measured data for 207 soils with widely varying textures, organic carbon contents, and clay mineralogy. In addition, the potential applicability of the models for prediction of sorption isotherms from known clay content was investigated. While, in general, all investigated models described the measured adsorption and desorption isotherms reasonably well, distinct differences were observed between physical and empirical models, partly owing to the differing degrees of freedom of the model equations. There were also considerable differences in model performance for adsorption and desorption data. While regression analysis relating model parameters to clay content, and subsequent model application for prediction of measured isotherms, showed promise for the majority of investigated soils, predicted isotherms did not closely match the measurements for soils with distinctly kaolinitic or smectitic clay mineralogy.
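Fitting an isotherm model of this kind amounts to nonlinear least squares over the measured water-activity range. The sketch below fits the three-parameter Guggenheim-Anderson-de Boer (GAB) isotherm, a commonly used sorption model, to synthetic data; whether GAB was among the nine models evaluated here is not stated in the abstract, and all numbers below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, wm, C, K):
    """GAB isotherm: gravimetric water content as a function of water
    activity aw, with monolayer capacity wm and shape parameters C, K."""
    return wm * C * K * aw / ((1.0 - K * aw) * (1.0 - K * aw + C * K * aw))

# Synthetic "measured" isotherm over the 0.03-0.93 water activity range,
# generated from known parameters plus a little noise
rng = np.random.default_rng(0)
aw = np.linspace(0.03, 0.93, 15)
true_params = (0.05, 10.0, 0.75)
w_obs = gab(aw, *true_params) + rng.normal(0.0, 1e-4, aw.size)

# Recover the parameters by nonlinear least squares
popt, _ = curve_fit(gab, aw, w_obs, p0=(0.03, 5.0, 0.7))
```

Regressing fitted parameters such as `wm` against clay content is then one route to predicting isotherms for unmeasured soils, as the abstract describes.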

  1. Kinetic Theory and Fast Wind Observations of the Electron Strahl

    NASA Astrophysics Data System (ADS)

    Horaites, Konstantinos; Boldyrev, Stanislav; Wilson, Lynn B., III; Viñas, Adolfo F.; Merka, Jan

    2018-02-01

    We develop a model for the strahl population in the solar wind - a narrow, low-density and high-energy electron beam centred on the magnetic field direction. Our model is based on the solution of the electron drift-kinetic equation at heliospheric distances where the plasma density, temperature and the magnetic field strength decline as power laws of the distance along a magnetic flux tube. Our solution for the strahl depends on a number of parameters that, in the absence of the analytic solution for the full electron velocity distribution function (eVDF), cannot be derived from the theory. We however demonstrate that these parameters can be efficiently found from matching our solution with observations of the eVDF made by the Wind satellite's SWE strahl detector. The model is successful at predicting the angular width (FWHM) of the strahl for the Wind data at 1 au, in particular by predicting how this width scales with particle energy and background density. We find that the strahl distribution is largely determined by the local temperature Knudsen number γ ∼ |T dT/dx|/n, which parametrizes solar wind collisionality. We compute averaged strahl distributions for typical Knudsen numbers observed in the solar wind, and fit our model to these data. The model can be matched quite closely to the eVDFs at 1 au; however, it then overestimates the strahl amplitude at larger heliocentric distances. This indicates that our model may be improved through the inclusion of additional physics, possibly through the introduction of 'anomalous diffusion' of the strahl electrons.

  2. A Model For Rapid Estimation of Economic Loss

    NASA Astrophysics Data System (ADS)

    Holliday, J. R.; Rundle, J. B.

    2012-12-01

One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it is still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job of matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.
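At its simplest, a rapid loss model of this kind is a regression of observed loss on ground motion and a socioeconomic exposure measure. The sketch below fits a hypothetical power-law loss model by least squares in log space; the variable names, functional form, and data are invented for illustration and are not the authors' calibration.

```python
import numpy as np

# Hypothetical calibration set: peak ground acceleration (g), a
# socioeconomic exposure proxy, and observed loss (arbitrary units)
rng = np.random.default_rng(3)
pga = rng.uniform(0.05, 1.0, 200)
exposure = rng.uniform(1.0, 10.0, 200)
loss = 1e3 * pga**2.0 * exposure * np.exp(rng.normal(0.0, 0.1, 200))

# Fit log(loss) = b0 + b1*log(PGA) + b2*log(exposure) by least squares
A = np.column_stack([np.ones_like(pga), np.log(pga), np.log(exposure)])
b, *_ = np.linalg.lstsq(A, np.log(loss), rcond=None)

# b[1] and b[2] recover the assumed exponents on PGA and exposure
```

Calibrating such coefficients against global loss catalogs is what would let the same model be applied worldwide.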

  3. Aeroservoelasticity

    NASA Technical Reports Server (NTRS)

    Noll, Thomas E.

    1990-01-01

    The paper describes recent accomplishments and current research projects along four main thrusts in aeroservoelasticity at NASA Langley. One activity focuses on enhancing the modeling and analysis procedures to accurately predict aeroservoelastic interactions. Improvements to the minimum-state method of approximating unsteady aerodynamics are shown to provide precise low-order models for design and simulation tasks. Recent extensions in aerodynamic correction-factor methodology are also described. With respect to analysis procedures, the paper reviews novel enhancements to matched filter theory and random process theory for predicting the critical gust profile and the associated time-correlated gust loads for structural design considerations. Two research projects leading towards improved design capability are also summarized: (1) an integrated structure/control design capability and (2) procedures for obtaining low-order robust digital control laws for aeroelastic applications.

  4. Computation of inlet reference plane flow-field for a subscale free-jet forebody/inlet model and comparison to experimental data

    NASA Astrophysics Data System (ADS)

    McClure, M. D.; Sirbaugh, J. R.

    1991-02-01

    The computational fluid dynamics (CFD) computer code PARC3D was used to predict the inlet reference plane (IRP) flow field for a side-mounted inlet and forebody simulator in a free jet for five different flow conditions. The calculations were performed for free-jet conditions, mass flow rates, and inlet configurations that matched the free-jet test conditions. In addition, viscous terms were included in the main flow so that the viscous free-jet shear layers emanating from the free-jet nozzle exit were modeled. A measure of the predicted accuracy was determined as a function of free-stream Mach number, angle-of-attack, and sideslip angle.

  5. Statistical prediction of dynamic distortion of inlet flow using minimum dynamic measurement. An application to the Melick statistical method and inlet flow dynamic distortion prediction without RMS measurements

    NASA Technical Reports Server (NTRS)

    Schweikhard, W. G.; Chen, Y. S.

    1986-01-01

The Melick method of inlet flow dynamic distortion prediction by statistical means is outlined. A hypothetical vortex model is used as the basis for the mathematical formulations. The main variables are identified by matching the theoretical total pressure rms ratio with the measured total pressure rms ratio. Data comparisons, using the HiMAT inlet test data set, indicate satisfactory prediction of the dynamic peak distortion for cases with boundary layer control device vortex generators. A method for dynamic probe selection was developed. Validity of the probe selection criteria is demonstrated by comparing the reduced-probe predictions with the 40-probe predictions. It is shown that the number of dynamic probes can be reduced to as few as two and still retain good accuracy.

  6. Predicted space motions for hypervelocity and runaway stars: proper motions and radial velocities for the Gaia Era

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kenyon, Scott J.; Brown, Warren R.; Geller, Margaret J.

We predict the distinctive three-dimensional space motions of hypervelocity stars (HVSs) and runaway stars moving in a realistic Galactic potential. For nearby stars with distances less than 10 kpc, unbound stars are rare; proper motions alone rarely isolate bound HVSs and runaways from indigenous halo stars. At large distances of 20-100 kpc, unbound HVSs are much more common than runaways; radial velocities easily distinguish both from indigenous halo stars. Comparisons of the predictions with existing observations are encouraging. Although the models fail to match observations of solar-type HVS candidates from SEGUE, they agree well with data for B-type HVSs and runaways from other surveys. Complete samples of g ≲ 20 stars with Gaia should provide clear tests of formation models for HVSs and runaways and will enable accurate probes of the shape of the Galactic potential.

  7. Development of a Response Surface Thermal Model for Orion Mated to the International Space Station

    NASA Technical Reports Server (NTRS)

    Miller, Stephen W.; Meier, Eric J.

    2010-01-01

A study was performed to determine whether a Design of Experiments (DOE)/Response Surface Methodology approach could be applied to on-orbit thermal analysis and produce a set of Response Surface Equations (RSEs) that accurately predict vehicle temperatures. The study used an integrated thermal model of the International Space Station and the Orion outer mold line model. Five separate factors were identified for study: yaw, pitch, roll, beta angle, and the environmental parameters. Twenty external Orion temperatures were selected as the responses. A DOE case matrix of 110 runs was developed. The data from these cases were analyzed to produce an RSE for each of the temperature responses. The initial agreement between the engineering data and the RSE predictions was encouraging, although many RSEs had large uncertainties on their predictions. Fourteen verification cases were developed to test the predictive power of the RSEs. The verification showed mixed results, with some RSEs predicting temperatures that matched the engineering data within the uncertainty bands, while others had very large errors. While this study does not irrefutably prove that the DOE/RSM approach can be applied to on-orbit thermal analysis, it does demonstrate that the technique has the potential to predict temperatures. Additional work is needed to better identify the cases needed to produce the RSEs.
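The DOE/RSM workflow reduces to fitting a low-order polynomial response surface to the runs of the case matrix and then evaluating it at verification points. A minimal two-factor sketch, assuming a quadratic surface (the factors, coefficients, and noise level below are invented; the actual study used five factors and twenty temperature responses):

```python
import numpy as np

# Toy "thermal response": a quadratic function of two factors
# (hypothetical stand-ins for, e.g., beta angle and yaw), sampled
# over 110 runs as in the DOE case matrix
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(110, 2))
y = (20.0 + 5.0 * X[:, 0] - 3.0 * X[:, 1]
     + 2.0 * X[:, 0] * X[:, 1] + 4.0 * X[:, 0] ** 2
     + rng.normal(0.0, 0.1, 110))

# Quadratic RSE design matrix: 1, x1, x2, x1*x2, x1^2, x2^2
A = np.column_stack([np.ones(110), X[:, 0], X[:, 1],
                     X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Evaluate the fitted RSE at a verification point (0.5, -0.5);
# the underlying surface there equals 24.5
x1, x2 = 0.5, -0.5
pred = (coef[0] + coef[1] * x1 + coef[2] * x2
        + coef[3] * x1 * x2 + coef[4] * x1 ** 2 + coef[5] * x2 ** 2)
```

Comparing such predictions against held-out runs is exactly the verification step the study describes.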

  8. Mapping the global depth to bedrock for land surface modelling

    NASA Astrophysics Data System (ADS)

    Shangguan, W.; Hengl, T.; Yuan, H.; Dai, Y. J.; Zhang, S.

    2017-12-01

Depth to bedrock serves as the lower boundary of land surface models and controls hydrologic and biogeochemical processes. This paper presents a framework for global estimation of depth to bedrock (DTB). Observations were extracted from a global compilation of soil profile data (ca. 130,000 locations) and borehole data (ca. 1.6 million locations). Additional pseudo-observations generated by expert knowledge were added to fill in large sampling gaps. The model training points were then overlaid on a stack of 155 covariates including DEM-based hydrological and morphological derivatives, lithologic units, MODIS surface reflectance bands and vegetation indices derived from the MODIS land products. Global spatial prediction models were developed using random forest and gradient boosting tree algorithms. The final predictions were generated at a spatial resolution of 250 m as an ensemble prediction of the two independently fitted models. The 10-fold cross-validation shows that the models explain 59% of the variation for absolute DTB and 34% for censored DTB (depths greater than 200 cm are predicted as 200 cm). The model for occurrence of an R horizon (bedrock) within 200 cm performs well. Visual comparisons of predictions in study areas where more detailed maps of depth to bedrock exist show a general match with spatial patterns from similar local studies. Limitations of the data set and extrapolation in data-sparse areas should not be ignored in applications. To improve the accuracy of spatial prediction, more borehole drilling logs will need to be added to supplement the existing training points in under-represented areas.

  9. Mapping the global depth to bedrock for land surface modeling

    NASA Astrophysics Data System (ADS)

    Shangguan, Wei; Hengl, Tomislav; Mendes de Jesus, Jorge; Yuan, Hua; Dai, Yongjiu

    2017-03-01

Depth to bedrock serves as the lower boundary of land surface models and controls hydrologic and biogeochemical processes. This paper presents a framework for global estimation of depth to bedrock (DTB). Observations were extracted from a global compilation of soil profile data (ca. 130,000 locations) and borehole data (ca. 1.6 million locations). Additional pseudo-observations generated by expert knowledge were added to fill in large sampling gaps. The model training points were then overlaid on a stack of 155 covariates including DEM-based hydrological and morphological derivatives, lithologic units, MODIS surface reflectance bands and vegetation indices derived from the MODIS land products. Global spatial prediction models were developed using random forest and gradient boosting tree algorithms. The final predictions were generated at a spatial resolution of 250 m as an ensemble prediction of the two independently fitted models. The 10-fold cross-validation shows that the models explain 59% of the variation for absolute DTB and 34% for censored DTB (depths greater than 200 cm are predicted as 200 cm). The model for occurrence of an R horizon (bedrock) within 200 cm performs well. Visual comparisons of predictions in study areas where more detailed maps of depth to bedrock exist show a general match with spatial patterns from similar local studies. Limitations of the data set and extrapolation in data-sparse areas should not be ignored in applications. To improve the accuracy of spatial prediction, more borehole drilling logs will need to be added to supplement the existing training points in under-represented areas.
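The two-model ensemble described above, random forest and gradient boosting fitted independently and their predictions averaged, can be sketched with scikit-learn. Everything below is synthetic and tiny (five stand-in covariates, 300 points) compared to the real 155-covariate, multi-million-point workflow, and the averaging weights are an assumption.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: 5 covariates and a pseudo depth-to-bedrock (cm)
rng = np.random.default_rng(2)
X = rng.uniform(size=(300, 5))
y = 200.0 * X[:, 0] + 50.0 * X[:, 1] ** 2 + rng.normal(0.0, 5.0, 300)

# Fit the two models independently
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
gbt = GradientBoostingRegressor(random_state=0).fit(X, y)

# Ensemble prediction: simple average of the two fitted models
pred = 0.5 * (rf.predict(X) + gbt.predict(X))

# 10-fold cross-validated R^2 for one ensemble member, mirroring the
# paper's evaluation protocol
cv_r2 = cross_val_score(
    RandomForestRegressor(n_estimators=100, random_state=0),
    X, y, cv=10, scoring="r2").mean()
```

Censored DTB would be handled by clipping targets and predictions at 200 cm before fitting and scoring.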

  10. Specimen-specific modeling of hip fracture pattern and repair.

    PubMed

    Ali, Azhar A; Cristofolini, Luca; Schileo, Enrico; Hu, Haixiang; Taddei, Fulvia; Kim, Raymond H; Rullkoetter, Paul J; Laz, Peter J

    2014-01-22

    Hip fracture remains a major health problem for the elderly. Clinical studies have assessed fracture risk based on bone quality in the aging population and cadaveric testing has quantified bone strength and fracture loads. Prior modeling has primarily focused on quantifying the strain distribution in bone as an indicator of fracture risk. Recent advances in the extended finite element method (XFEM) enable prediction of the initiation and propagation of cracks without requiring a priori knowledge of the crack path. Accordingly, the objectives of this study were to predict femoral fracture in specimen-specific models using the XFEM approach, to perform one-to-one comparisons of predicted and in vitro fracture patterns, and to develop a framework to assess the mechanics and load transfer in the fractured femur when it is repaired with an osteosynthesis implant. Five specimen-specific femur models were developed from in vitro experiments under a simulated stance loading condition. Predicted fracture patterns closely matched the in vitro patterns; however, predictions of fracture load differed by approximately 50% due to sensitivity to local material properties. Specimen-specific intertrochanteric fractures were induced by subjecting the femur models to a sideways fall and repaired with a contemporary implant. Under a post-surgical stance loading, model-predicted load sharing between the implant and bone across the fracture surface varied from 59%:41% to 89%:11%, underscoring the importance of considering anatomic and fracture variability in the evaluation of implants. XFEM modeling shows potential as a macro-level analysis enabling fracture investigations of clinical cohorts, including at-risk groups, and the design of robust implants. © 2013 Published by Elsevier Ltd.

  11. Improved Modeling of Open Waveguide Aperture Radiators for use in Conformal Antenna Arrays

    NASA Astrophysics Data System (ADS)

    Nelson, Gregory James

Open waveguide apertures have been used as radiating elements in conformal arrays. Individual radiating element model patterns are used in constructing overall array models. The existing models for these aperture radiating elements may not accurately predict the array pattern for TEM waves that are not on boresight for each radiating element. In particular, surrounding structures can affect the far field patterns of these apertures, which ultimately affects the overall array pattern. New models of open waveguide apertures are developed here with the goal of accounting for the surrounding structure effects on the aperture far field patterns such that the new models make accurate pattern predictions. These aperture patterns (both E plane and H plane) are measured in an anechoic chamber and the manner in which they deviate from existing model patterns is studied. Using these measurements as a basis, existing models for both E and H planes are updated with new factors and terms which allow the prediction of far field open waveguide aperture patterns with improved accuracy. These new and improved individual radiator models are then used to predict overall conformal array patterns. Arrays of open waveguide apertures are constructed and measured in a similar fashion to the individual aperture measurements. These measured array patterns are compared with the newly modeled array patterns to verify the improved accuracy of the new models as compared with the performance of existing models in making array far field pattern predictions. The array pattern lobe characteristics are then studied for predicting fully circularly conformal arrays of varying radii. The lobe metrics that are tracked are angular location and magnitude as the radii of the conformal arrays are varied.
A constructed, measured array that is close to conforming to a circular surface is compared with a fully circularly conformal modeled array pattern prediction, with the predicted lobe angular locations and magnitudes tracked, plotted and tabulated. The close match between the patterns of the measured array and the modeled circularly conformal array verifies the validity of the modeled circularly conformal array pattern predictions.

  12. Comparison of measured impurity poloidal rotation in DIII-D with neoclassical predictions under low toroidal field conditions

    DOE PAGES

    Burrell, Keith H.; Grierson, Brian A.; Solomon, Wayne M.; ...

    2014-06-26

Here, predictive understanding of plasma transport is a long-term goal of fusion research. This requires testing models of plasma rotation, including poloidal rotation. The present experiment was motivated by recent poloidal rotation measurements on spherical tokamaks (NSTX and MAST), which showed that the poloidal rotation of C+6 is much closer to the neoclassical prediction than results reported from larger aspect ratio machines such as TFTR, DIII-D, JT-60U and JET working at significantly higher toroidal field and ion temperature. We investigated whether the difference in aspect ratio (1.44 on NSTX versus 2.7 on DIII-D) could explain this. We measured C+6 poloidal rotation in DIII-D under conditions which matched, as closely as possible, those in the NSTX experiment; we matched plasma current (0.65 MA), on-axis toroidal field (0.55 T), minor radius (0.6 m), and outer flux surface shape as well as the density and temperature profiles. DIII-D results from this work also show reasonable agreement with neoclassical theory. Accordingly, the different aspect ratio does not explain the previously mentioned difference in poloidal rotation results.

  13. Kalman/Map filtering-aided fast normalized cross correlation-based Wi-Fi fingerprinting location sensing.

    PubMed

    Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin

    2013-11-13

    A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results.
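The fingerprint-matching core of this approach is normalized cross correlation between an on-line RSS vector and each reference-point fingerprint; the FNCC is an acceleration of this computation. A basic NCC sketch with invented RSS values (the radio map, AP count, and reference points below are hypothetical):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two RSS vectors: dot product of
    the zero-mean, unit-norm vectors, ranging from -1 to 1."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical radio map: mean RSS (dBm) from 4 APs at 3 reference points
radio_map = np.array([[-40.0, -55.0, -70.0, -60.0],
                      [-65.0, -45.0, -50.0, -72.0],
                      [-70.0, -68.0, -42.0, -55.0]])

# On-line sample taken near reference point 1 (small offsets from its row)
online = np.array([-66.0, -46.0, -49.0, -71.0])

scores = [ncc(online, rp) for rp in radio_map]
best_rp = int(np.argmax(scores))  # index of the best-matching fingerprint
```

Localization results produced this way would then be smoothed by the Kalman/map filter, with the map matching step pulling infeasible predicted locations back onto the building's interior structure.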

  14. Kalman/Map Filtering-Aided Fast Normalized Cross Correlation-Based Wi-Fi Fingerprinting Location Sensing

    PubMed Central

    Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin

    2013-01-01

    A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results. PMID:24233027

  15. Diffusion and convection in collagen gels: implications for transport in the tumor interstitium.

    PubMed Central

    Ramanujan, Saroja; Pluen, Alain; McKee, Trevor D; Brown, Edward B; Boucher, Yves; Jain, Rakesh K

    2002-01-01

    Diffusion coefficients of tracer molecules in collagen type I gels prepared from 0-4.5% w/v solutions were measured by fluorescence recovery after photobleaching. When adjusted to account for in vivo tortuosity, diffusion coefficients in gels matched previous measurements in four human tumor xenografts with equivalent collagen concentrations. In contrast, hyaluronan solutions hindered diffusion to a lesser extent when prepared at concentrations equivalent to those reported in these tumors. Collagen permeability, determined from flow through gels under hydrostatic pressure, was compared with predictions obtained by applying the Brinkman effective medium model to the diffusion data. Permeability predictions matched experimental results at low concentrations but underestimated measured values at high concentrations. Permeability measurements in gels did not match previous measurements in tumors. Visualization of gels by transmission electron microscopy and light microscopy revealed networks of long collagen fibers at lower concentrations, along with shorter fibers at high concentrations. Negligible fiber assembly was detected in collagen solutions before gelation. However, diffusion was similarly hindered in pre- and postgelation samples. Comparison of diffusion and convection data in these gels and tumors suggests that collagen may obstruct diffusion more than convection in tumors. These findings have significant implications for drug delivery in tumors and for tissue engineering applications. PMID:12202388
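
    The Brinkman effective medium step described above relates hindered diffusion to hydraulic permeability. A minimal sketch, assuming the commonly used hindrance form D/D0 = 1/(1 + a/sqrt(K) + a^2/(3K)) for a spherical probe of radius a (the paper's exact fitting procedure may differ):

```python
import math

def brinkman_permeability(d_ratio, probe_radius_nm):
    """Invert the Brinkman hindrance expression
        D/D0 = 1 / (1 + u + u**2/3),  u = a / sqrt(K)
    to estimate the medium's Darcy permeability K (nm^2) from a measured
    diffusivity ratio D/D0 and a probe radius a (nm)."""
    if not 0 < d_ratio < 1:
        raise ValueError("D/D0 must lie in (0, 1)")
    # solve the quadratic u**2/3 + u + (1 - 1/r) = 0 for its positive root
    disc = 1.0 + (4.0 / 3.0) * (1.0 / d_ratio - 1.0)
    u = 1.5 * (math.sqrt(disc) - 1.0)
    return (probe_radius_nm / u) ** 2
```

    Less hindrance (D/D0 closer to 1) implies a more open network and a larger inferred permeability, which is how diffusion data yield the permeability predictions compared against flow measurements above.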

  16. Predicting RNA folding thermodynamics with a reduced chain representation model

    PubMed Central

    CAO, SONG; CHEN, SHI-JIE

    2005-01-01

    Based on the virtual bond representation for the nucleotide backbone, we develop a reduced conformational model for RNA. We use the experimentally measured atomic coordinates to model the helices and use self-avoiding walks in a diamond lattice to model the loop conformations. The atomic coordinates of the helices and the lattice representation for the loops are matched at the loop–helix junction, where steric viability is accounted for. Unlike previous simplified lattice-based models, the present virtual bond model can account for the atomic details of realistic three-dimensional RNA structures. Based on the model, we develop a statistical mechanical theory for RNA folding energy landscapes and folding thermodynamics. Tests against experiments show that the theory gives markedly better predictions of the native structures, thermal denaturation curves, and equilibrium folding/unfolding pathways than previous models. The application of the model to the P5abc region of Tetrahymena group I ribozyme reveals the misfolded intermediates as well as the native-like intermediates in the equilibrium folding process. Moreover, based on the free energy landscape analysis for each loop mutation, the model predicts five lethal mutations that can completely alter the free energy landscape and the folding stability of the molecule. PMID:16251382
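
    The thermal denaturation curves mentioned above follow from folding thermodynamics. As a much-simplified illustration of the idea, a two-state approximation (not the authors' full partition-function theory) gives the unfolded fraction directly from an enthalpy and entropy of unfolding:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def unfolded_fraction(T, dH, dS):
    """Two-state unfolded fraction at temperature T (K), given the
    enthalpy dH (kcal/mol) and entropy dS (kcal/(mol*K)) of unfolding.
    The melting temperature is where dG = 0, i.e. Tm = dH / dS."""
    dG = dH - T * dS                 # free energy of unfolding
    K_eq = math.exp(-dG / (R * T))   # unfolded/folded equilibrium constant
    return K_eq / (1.0 + K_eq)
```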

  17. Biogeochemical metabolic modeling of methanogenesis by Methanosarcina barkeri

    NASA Astrophysics Data System (ADS)

    Jensvold, Z. D.; Jin, Q.

    2015-12-01

    Methanogenesis, the biological process of methane production, is the final step of natural organic matter degradation. In studying natural methanogenesis, important questions include how fast methanogenesis proceeds and how methanogens adapt to the environment. To address these questions, we propose a new approach - biogeochemical reaction modeling - by simulating the metabolic networks of methanogens. Biogeochemical reaction modeling combines geochemical reaction modeling and genome-scale metabolic modeling. Geochemical reaction modeling focuses on the speciation of electron donors and acceptors in the environment, and therefore the energy available to methanogens. Genome-scale metabolic modeling predicts microbial rates and metabolic strategies. Specifically, this approach describes methanogenesis using an enzyme network model and computes enzyme rates by accounting for both kinetics and thermodynamics. The network model is simulated numerically to predict enzyme abundances and rates of methanogen metabolism. We applied this new approach to Methanosarcina barkeri strain fusaro, a model methanogen that makes methane by reducing carbon dioxide and oxidizing dihydrogen. The simulation results match well with the results of previous laboratory experiments, including the magnitude of the proton motive force and the kinetic parameters of Methanosarcina barkeri. The results also predict that in natural environments, the configuration of the methanogenesis network, including the concentrations of enzymes and metabolites, differs significantly from that under laboratory settings.
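
    A rate law coupling Monod-style kinetics to a thermodynamic driving force, of the general kind used in biogeochemical reaction modeling, can be sketched as follows (the specific functional form and parameter names are illustrative assumptions, not taken from this abstract):

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol*K)

def methanogenesis_rate(k, X, h2, K_h2, dG_rxn, dG_atp, m, chi, T):
    """Rate of a catabolic reaction as the product of a kinetic (Monod)
    factor and a thermodynamic factor:
        v = k * X * h2/(K_h2 + h2) * (1 - exp((dG_rxn + m*dG_atp)/(chi*R*T)))
    k: rate constant, X: biomass, h2: dissolved H2 concentration,
    dG_rxn: reaction free energy (kJ/mol), dG_atp: energy conserved per
    reaction via m ATP with chi as an averaging factor."""
    F_K = h2 / (K_h2 + h2)                                   # kinetic factor
    F_T = 1.0 - math.exp((dG_rxn + m * dG_atp) / (chi * R * T))
    return k * X * F_K * max(F_T, 0.0)   # rate shuts off below the energy threshold
```

    The thermodynamic factor is what makes the predicted network configuration sensitive to environmental energy availability, as the abstract emphasizes.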

  18. Chronic Motivational State Interacts with Task Reward Structure in Dynamic Decision-Making

    PubMed Central

    Cooper, Jessica A.; Worthy, Darrell A.; Maddox, W. Todd

    2015-01-01

    Research distinguishes between a habitual, model-free system motivated toward immediately rewarding actions, and a goal-directed, model-based system motivated toward actions that improve future state. We examined the balance of processing in these two systems during state-based decision-making. We tested a regulatory fit hypothesis (Maddox & Markman, 2010) that predicts that global trait motivation affects the balance of habitual- vs. goal-directed processing but only through its interaction with the task framing as gain-maximization or loss-minimization. We found support for the hypothesis that a match between an individual’s chronic motivational state and the task framing enhances goal-directed processing, and thus state-based decision-making. Specifically, chronic promotion-focused individuals under gain-maximization and chronic prevention-focused individuals under loss-minimization both showed enhanced state-based decision-making. Computational modeling indicates that individuals in a match between global chronic motivational state and local task reward structure engaged more goal-directed processing, whereas those in a mismatch engaged more habitual processing. PMID:26520256
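
    The balance between habitual (model-free) and goal-directed (model-based) processing is often formalized as a weighted blend of the two systems' action values; a minimal sketch (the linear weighting scheme is a generic modeling convention, not the authors' specific computational model):

```python
def hybrid_action_values(q_mf, q_mb, w):
    """Blend model-based and model-free action values with weight w
    (w = 1: fully goal-directed; w = 0: fully habitual), as in common
    dual-system models of decision-making."""
    return [w * mb + (1.0 - w) * mf for mf, mb in zip(q_mf, q_mb)]

def choose(q_values):
    """Greedy choice over the blended values."""
    return max(range(len(q_values)), key=lambda i: q_values[i])
```

    In this framing, a regulatory fit between chronic motivational state and task framing corresponds to a higher effective w, shifting choices toward actions that improve future state.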

  19. Lower extremity EMG-driven modeling of walking with automated adjustment of musculoskeletal geometry

    PubMed Central

    Meyer, Andrew J.; Patten, Carolynn

    2017-01-01

    Neuromusculoskeletal disorders affecting walking ability are often difficult to manage, in part due to limited understanding of how a patient’s lower extremity muscle excitations contribute to the patient’s lower extremity joint moments. To assist in the study of these disorders, researchers have developed electromyography (EMG)-driven neuromusculoskeletal models utilizing scaled generic musculoskeletal geometry. While these models can predict individual muscle contributions to lower extremity joint moments during walking, the accuracy of the predictions can be hindered by errors in the scaled geometry. This study presents a novel EMG-driven modeling method that automatically adjusts surrogate representations of the patient’s musculoskeletal geometry to improve prediction of lower extremity joint moments during walking. In addition to commonly adjusted neuromusculoskeletal model parameters, the proposed method adjusts model parameters defining muscle-tendon lengths, velocities, and moment arms. We evaluated our EMG-driven modeling method using data collected from a high-functioning hemiparetic subject walking on an instrumented treadmill at speeds ranging from 0.4 to 0.8 m/s. EMG-driven model parameter values were calibrated to match inverse dynamics moments for five degrees of freedom in each leg while keeping musculoskeletal geometry close to that of an initial scaled musculoskeletal model. We found that our EMG-driven modeling method incorporating automated adjustment of musculoskeletal geometry predicted net joint moments during walking more accurately than did the same method without geometric adjustments. Geometric adjustments improved moment prediction errors by 25% on average and up to 52%, with the largest improvements occurring at the hip. Predicted adjustments to musculoskeletal geometry were comparable to errors reported in the literature between scaled generic geometric models and measurements made from imaging data. Our results demonstrate that, with appropriate experimental data, joint moment predictions for walking generated by an EMG-driven model can be improved significantly when automated adjustment of musculoskeletal geometry is included in the model calibration process. PMID:28700708
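
    The calibration described above trades off tracking the inverse dynamics moments against staying near the initial scaled geometry. One way to sketch such a penalized objective (illustrative only; the study's actual cost function and parameterization are more elaborate, and all names here are ours):

```python
import numpy as np

def calibration_cost(params, params_init, predicted_moments,
                     inverse_dynamics_moments, penalty_weight):
    """Penalized least-squares cost of the kind used to calibrate
    EMG-driven models: moment-tracking error plus a penalty keeping
    adjustable geometry parameters close to the initial scaled model."""
    tracking = np.mean((predicted_moments - inverse_dynamics_moments) ** 2)
    deviation = np.mean((params - params_init) ** 2)
    return tracking + penalty_weight * deviation
```

    The penalty weight controls how far the optimizer may pull the surrogate geometry away from the scaled generic model while fitting the moments.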

  20. Does the river continuum concept apply on a tropical island? Longitudinal variation in a Puerto Rican stream.

    Treesearch

    Effie A. Greathouse; Catherine M. Pringle

    2006-01-01

    We examined whether a tropical stream in Puerto Rico matched predictions of the river continuum concept (RCC) for macroinvertebrate functional feeding groups (FFGs). Sampling sites for macroinvertebrates, basal resources, and fishes ranged from headwaters to within 2.5 km of the fourth-order estuary. In a comparison with a model temperate system in which RCC...

  1. A formulation of convection for stellar structure and evolution calculations without the mixing-length theory approximations. II - Application to Alpha Centauri A and B

    NASA Technical Reports Server (NTRS)

    Lydon, Thomas J.; Fox, Peter A.; Sofia, Sabatino

    1993-01-01

    We have constructed a series of models of Alpha Centauri A and Alpha Centauri B for the purposes of testing the effects of convection modeling both by means of the mixing-length theory (MLT), and by means of parameterization of energy fluxes based upon numerical simulations of turbulent compressible convection. We demonstrate that while MLT, through its adjustable parameter alpha, can be used to match any given values of luminosities and radii, our treatment of convection, which lacks any adjustable parameters, makes specific predictions of stellar radii. Since the predicted radii of the Alpha Centauri system fall within the errors of the observed radii, our treatment of convection is applicable to other stars in the H-R diagram in addition to the sun. A second set of models is constructed using MLT, adjusting alpha to yield not the 'measured' radii but, instead, the radii predictions of our revised treatment of convection. We conclude by assessing the appropriateness of using a single value of alpha to model a wide variety of stars.

  2. Predicting the melting temperature of ice-Ih with only electronic structure information as input.

    PubMed

    Pinnick, Eric R; Erramilli, Shyamsunder; Wang, Feng

    2012-07-07

    The melting temperature of ice-Ih was calculated with only electronic structure information as input by creating a problem-specific force field. The force field, Water model by AFM for Ice and Liquid (WAIL), was developed with the adaptive force matching (AFM) method by fitting to post-Hartree-Fock quality forces obtained in quantum mechanics∕molecular mechanics calculations. WAIL predicts the ice-Ih melting temperature to be 270 K. The model also predicts the densities of ice and water, the temperature of maximum density of water, the heat of vaporization, and the radial distribution functions for both ice and water in good agreement with experimental measurements. The non-dissociative WAIL model is very similar to a flexible version of the popular TIP4P potential and has comparable computational cost. By customizing to problem-specific configurations with the AFM approach, the resulting model is remarkably more accurate than any variant of TIP4P for simulating ice-Ih and water in the temperature range from 253 K to 293 K under ambient pressure.
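
    At its core, force matching chooses force-field parameters that best reproduce reference quantum-mechanical forces. When the model forces are linear in the parameters, the fit reduces to ordinary least squares (a simplified sketch; the actual AFM procedure is adaptive and iterative, and the matrix layout here is our assumption):

```python
import numpy as np

def fit_force_field(design_matrix, qm_forces):
    """Least-squares core of force matching: choose parameters theta
    minimizing ||A @ theta - F_qm||**2, where each row of A gives the
    force contribution of one parameter on one atomic coordinate."""
    theta, *_ = np.linalg.lstsq(design_matrix, qm_forces, rcond=None)
    return theta
```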

  3. Water injection into vapor- and liquid-dominated reservoirs: Modeling of heat transfer and mass transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pruess, K.; Oldenburg, C.; Moridis, G.

    1997-12-31

    This paper summarizes recent advances in methods for simulating water and tracer injection, and presents illustrative applications to liquid- and vapor-dominated geothermal reservoirs. High-resolution simulations of water injection into heterogeneous, vertical fractures in superheated vapor zones were performed. Injected water was found to move in dendritic patterns, and to experience stronger lateral flow effects than predicted from homogeneous medium models. Higher-order differencing methods were applied to modeling water and tracer injection into liquid-dominated systems. Conventional upstream weighting techniques were shown to be adequate for predicting the migration of thermal fronts, while higher-order methods give far better accuracy for tracer transport. A new fluid property module for the TOUGH2 simulator is described which allows a more accurate description of geofluids, and includes mineral dissolution and precipitation effects with associated porosity and permeability change. Comparisons between numerical simulation predictions and data for laboratory and field injection experiments are summarized. Enhanced simulation capabilities include a new linear solver package for TOUGH2, and inverse modeling techniques for automatic history matching and optimization.

  4. Kinetic rate constant prediction supports the conformational selection mechanism of protein binding.

    PubMed

    Moal, Iain H; Bates, Paul A

    2012-01-01

    The prediction of protein-protein kinetic rate constants provides a fundamental test of our understanding of molecular recognition, and will play an important role in the modeling of complex biological systems. In this paper, a feature selection and regression algorithm is applied to mine a large set of molecular descriptors and construct simple models for association and dissociation rate constants using empirical data. Using separate test data for validation, the predicted rate constants can be combined to calculate binding affinity with accuracy matching that of state-of-the-art empirical free energy functions. The models show that the rate of association is linearly related to the proportion of unbound proteins in the bound conformational ensemble relative to the unbound conformational ensemble, indicating that the binding partners must adopt a geometry close to that of the bound state prior to binding. Mirroring the conformational selection and population shift mechanism of protein binding, the models provide a strong separate line of evidence for the preponderance of this mechanism in protein-protein binding, complementing structural and theoretical studies.
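
    Combining predicted association and dissociation rate constants into a binding affinity uses the standard relations Kd = koff/kon and dG = RT ln Kd (1 M reference state); a minimal sketch:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def binding_affinity(k_on, k_off, T=298.0):
    """Dissociation constant and standard binding free energy from rate
    constants, with k_on in 1/(M*s) and k_off in 1/s."""
    K_d = k_off / k_on            # M
    dG = R * T * math.log(K_d)    # kcal/mol, negative for favourable binding
    return K_d, dG
```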

  5. An analytics approach to designing patient centered medical homes.

    PubMed

    Ajorlou, Saeede; Shams, Issac; Yang, Kai

    2015-03-01

    Recently the patient centered medical home (PCMH) model has become a popular team-based approach focused on delivering more streamlined care to patients. In current practices of medical homes, a clinically based prediction framework is recommended because it can help match the portfolio capacity of PCMH teams with the actual load generated by a set of patients. Without such balance in clinical supply and demand, issues such as excessive under- and overutilization of physicians, long waiting times for receiving appropriate treatment, and non-continuity of care will eliminate many advantages of the medical home strategy. In this paper, by using the hierarchical generalized linear model with multivariate responses, we develop a clinical workload prediction model for care portfolio demands in a Bayesian framework. The model allows for heterogeneous variances and unstructured covariance matrices for nested random effects that arise through complex hierarchical care systems. We show that using a multivariate approach substantially enhances the precision of workload predictions at both primary and non-primary care levels. We also demonstrate that care demands depend not only on patient demographics but also on other utilization factors, such as length of stay. Our analyses of recent data from the Veterans Health Administration further indicate that risk adjustment for patient health conditions can considerably improve the prediction power of the model.

  6. Microgravity

    NASA Image and Video Library

    1997-04-01

    Apfel's excellent match: This series of photos shows a water drop containing a surfactant (Triton-100) as it experiences a complete cycle of superoscillation on U.S. Microgravity Lab-2 (USML-2; October 1995). The time in seconds appears under the photos. The figures above the photos are the oscillation shapes predicted by a numerical model. The time shown with the predictions is nondimensional. Robert Apfel (Yale University) used the Drop Physics Module on USML-2 to explore the effect of surfactants on liquid drops. Apfel's research of surfactants may contribute to improvements in a variety of industrial processes, including oil recovery and environmental cleanup.

  7. Characterizing and modeling the free recovery and constrained recovery behavior of a polyurethane shape memory polymer

    PubMed Central

    Volk, Brent L; Lagoudas, Dimitris C; Maitland, Duncan J

    2011-01-01

    In this work, tensile tests and one-dimensional constitutive modeling are performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigate the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles are performed during each test. The material is observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5 MPa to 4.2 MPa is observed for the constrained displacement recovery experiments. After performing the experiments, the Chen and Lagoudas model is used to simulate and predict the experimental results. The material properties used in the constitutive model – namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction – are calibrated from a single 10% extension free recovery experiment. The model is then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data. PMID:22003272
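
    Frozen-volume-fraction models of the kind referenced above (Chen and Lagoudas) attribute shape fixity to a temperature-dependent frozen phase. A highly simplified free-recovery sketch (the linear transition rule below is an illustrative assumption, not the calibrated function used in the paper):

```python
def recovered_strain(applied_strain, frozen_fraction):
    """Free-recovery sketch: stored strain is released as the frozen
    fraction phi(T) drops from 1 (cold, shape fixed) to 0 (hot, fully
    recovered)."""
    return applied_strain * (1.0 - frozen_fraction)

def frozen_fraction_linear(T, T_low, T_high):
    """Illustrative linear transition rule between T_low and T_high;
    real models use smooth, calibrated functions of temperature."""
    if T <= T_low:
        return 1.0
    if T >= T_high:
        return 0.0
    return (T_high - T) / (T_high - T_low)
```

    In this picture, the 100% strain recovery reported above corresponds to the frozen fraction reaching zero at the end of heating under zero load.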

  8. Actuator and aerodynamic modeling for high-angle-of-attack aeroservoelasticity

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.

    1993-01-01

    Accurate prediction of airframe/actuation coupling is required by the imposing demands of modern flight control systems. In particular, for agility enhancement at high angle of attack and low dynamic pressure, structural integration characteristics such as hinge moments, effective actuator stiffness, and airframe/control surface damping can have a significant effect on stability predictions. Actuator responses are customarily represented with low-order transfer functions matched to actuator test data, and control surface stiffness is often modeled as a linear spring. The inclusion of the physical properties of actuation and its installation on the airframe is therefore addressed in this paper using detailed actuator models which consider the physical, electrical, and mechanical elements of actuation. The aeroservoelastic analysis procedure is described in which the actuators are modeled as detailed high-order transfer functions and as approximate low-order transfer functions. The impacts of unsteady aerodynamic modeling on aeroservoelastic stability are also investigated in this paper by varying the order of approximation, or number of aerodynamic lag states, in the analysis. Test data from a thrust-vectoring configuration of an F/A-18 aircraft are compared to predictions to determine the effects on accuracy as a function of modeling complexity.
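
    A low-order actuator representation of the kind described, matched to test data, is typically a second-order transfer function; a sketch of its frequency response (parameter values here are placeholders, not F/A-18 data):

```python
import numpy as np

def actuator_response(freq_hz, wn_hz=12.0, zeta=0.6):
    """Gain and phase (degrees) of a second-order actuator model
        H(s) = wn**2 / (s**2 + 2*zeta*wn*s + wn**2)
    evaluated at the given frequencies in Hz."""
    wn = 2 * np.pi * wn_hz
    s = 1j * 2 * np.pi * np.asarray(freq_hz, dtype=float)
    H = wn**2 / (s**2 + 2 * zeta * wn * s + wn**2)
    return np.abs(H), np.angle(H, deg=True)
```

    High-order models add the electrical and mechanical actuation elements; comparing both against test data is how the paper assesses accuracy versus modeling complexity.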

  10. Density-driven transport of gas phase chemicals in unsaturated soils

    NASA Astrophysics Data System (ADS)

    Fen, Chiu-Shia; Sun, Yong-tai; Cheng, Yuen; Chen, Yuanchin; Yang, Whaiwan; Pan, Changtai

    2018-01-01

    Variations of gas phase density are responsible for advective and diffusive transports of organic vapors in unsaturated soils. Laboratory experiments were conducted to explore dense gas transport (sulfur hexafluoride, SF6) from different source densities through a nitrogen gas-dry soil column. Gas pressures and SF6 densities at transient state were measured along the soil column for three transport configurations (horizontal, vertically upward, and vertically downward transport). These measurements and others reported in the literature were compared with simulation results obtained from two models based on different diffusion approaches: the dusty gas model (DGM) equations and a Fickian-type molar fraction-based diffusion expression. The results show that the DGM and Fickian-based models predicted similar dense gas density profiles which matched the measured data well for horizontal transport of dense gas at low to high source densities, although the predicted pressure variations in the soil column were opposite to the measurements. The pressure evolutions predicted by both models were similar in trend to the measured ones for vertical transport of dense gas. However, differences between the dense gas densities predicted by the DGM and Fickian-based models were discernible for vertically upward transport of dense gas even at low source densities, as the DGM-based predictions matched the measured data better than the Fickian results did. For vertically downward transport, the dense gas densities predicted by both models were not greatly different from our experimental measurements, but substantially greater than the observations obtained from the literature, especially at high source densities. Further research will be necessary to explore the factors affecting downward transport of dense gas in soil columns. 
    Use of the measured data to compute flux components of SF6 showed that the magnitudes of the diffusive flux component based on the Fickian-type diffusion expressions in terms of molar concentration, molar fraction, and mass density fraction gradients were almost the same. However, they were more than 24% greater than the result computed with the mass fraction gradient, and more than twice the DGM-based result. As a consequence, the DGM-based total flux of SF6 was much smaller in magnitude than the Fickian result, not only for horizontal transport (diffusion-dominated) but also for vertical transport (advection and diffusion) of dense gas. In particular, the Fickian-based total flux was more than twice the magnitude of the DGM result for vertically upward transport of dense gas.

  11. Effect of casing yield stress on bomb blast impulse

    NASA Astrophysics Data System (ADS)

    Hutchinson, M. D.

    2012-08-01

    An equation to predict blast effects from cased charges was first proposed by U. Fano in 1944 and revised by E.M. Fisher in 1953 [1]. Fisher's revision provides much better matches to available blast impulse data, but still requires empirical parameter adjustments. A new derivation [2], based on the work of R.W. Gurney [3] and G.I. Taylor [4], has resulted in an equation which nearly matches experimental data. This new analytical model is also capable of being extended, through the incorporation of additional physics, such as the effects of early case fracture, finite casing thickness, casing metal strain energy dissipation, explosive gas escape through casing fractures and the comparative dynamics of blast wave and metal fragment impacts. This paper will focus on the choice of relevant case fracture strain criterion, as it will be shown that this allows the explicit inclusion of the dynamic properties of the explosive and casing metal. It will include a review and critique of the most significant earlier work on this topic, contained in a paper by Hoggatt and Recht [5]. Using this extended analytical model, good matches can readily be made to available free-field blast impulse data, without any empirical adjustments being needed. Further work will be required to apply this model to aluminised and other highly oxygen-deficient explosives.
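
    The Fano/Fisher approach estimates the fraction of the explosive mass that contributes to blast from a cased charge. A sketch using the commonly quoted Fisher form (the empirical parameter adjustments noted in the abstract, and the newer Gurney/Taylor-based derivation, are omitted):

```python
def fisher_effective_mass(charge_mass, case_mass):
    """Commonly quoted Fisher form of the Fano equation: effective bare
    charge mass contributing to blast, for explosive mass C and case
    mass M:
        C_eff / C = 0.2 + 0.8 / (1 + 2*M/C)
    Heavier cases absorb more energy as casing kinetic energy, reducing
    the blast-effective mass toward the 0.2*C floor."""
    ratio = case_mass / charge_mass
    return charge_mass * (0.2 + 0.8 / (1.0 + 2.0 * ratio))
```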

  12. Using Socioeconomic Data to Calibrate Loss Estimates

    NASA Astrophysics Data System (ADS)

    Holliday, J. R.; Rundle, J. B.

    2013-12-01

    One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it is still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job of matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.

  13. Beyond neutral and forbidden links: morphological matches and the assembly of mutualistic hawkmoth-plant networks.

    PubMed

    Sazatornil, Federico D; Moré, Marcela; Benitez-Vieyra, Santiago; Cocucci, Andrea A; Kitching, Ian J; Schlumpberger, Boris O; Oliveira, Paulo E; Sazima, Marlies; Amorim, Felipe W

    2016-11-01

    A major challenge in evolutionary ecology is to understand how co-evolutionary processes shape patterns of interactions between species at community level. Pollination of flowers with long corolla tubes by long-tongued hawkmoths has been invoked as a showcase model of co-evolution. Recently, optimal foraging models have predicted that there might be a close association between mouthparts' length and the corolla depth of the visited flowers, thus favouring trait convergence and specialization at community level. Here, we assessed whether hawkmoths more frequently pollinate plants with floral tube lengths similar to their proboscis lengths (morphological match hypothesis) against abundance-based processes (neutral hypothesis) and ecological trait mismatches constraints (forbidden links hypothesis), and how these processes structure hawkmoth-plant mutualistic networks from five communities in four biogeographical regions of South America. We found convergence in morphological traits across the five communities and that the distribution of morphological differences between hawkmoths and plants is consistent with expectations under the morphological match hypothesis in three of the five communities. In the two remaining communities, which are ecotones between two distinct biogeographical areas, interactions are better predicted by the neutral hypothesis. Our findings are consistent with the idea that diffuse co-evolution drives the evolution of extremely long proboscises and flower tubes, and highlight the importance of morphological traits, beyond the forbidden links hypothesis, in structuring interactions between mutualistic partners, revealing that the role of niche-based processes can be much more complex than previously known. © 2016 The Authors. Journal of Animal Ecology © 2016 British Ecological Society.
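
    The competing hypotheses can be expressed as predicted interaction-probability matrices and compared against the observed network. A minimal sketch of the neutral and morphological-match predictions (the Gaussian kernel and its scale parameter are illustrative assumptions, not the authors' exact likelihood model):

```python
import numpy as np

def neutral_matrix(moth_abund, plant_abund):
    """Neutral hypothesis: interaction probabilities proportional to the
    product of partner abundances."""
    P = np.outer(moth_abund, plant_abund).astype(float)
    return P / P.sum()

def match_matrix(proboscis_len, tube_len, scale=1.0):
    """Morphological match hypothesis: probability decays with the
    difference between proboscis length and floral tube length."""
    diff = np.subtract.outer(proboscis_len, tube_len)
    P = np.exp(-(diff / scale) ** 2)
    return P / P.sum()
```

    The forbidden-links hypothesis would instead zero out cells where the tube is longer than the proboscis, making those interactions impossible rather than merely unlikely.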

  14. Comprehensive Gas-Phase Peptide Ion Structure Studies Using Ion Mobility Techniques: Part 2. Gas-Phase Hydrogen/Deuterium Exchange for Ion Population Estimation.

    PubMed

    Khakinejad, Mahdiar; Ghassabi Kondalaji, Samaneh; Tafreshian, Amirmahdi; Valentine, Stephen J

    2017-05-01

    Gas-phase hydrogen/deuterium exchange (HDX) using D2O reagent and collision cross-section (CCS) measurements are utilized to monitor the ion conformers of the model peptide acetyl-PAAAAKAAAAKAAAAKAAAAK. The measurements are carried out on a home-built ion mobility instrument coupled to a linear ion trap mass spectrometer with electron transfer dissociation (ETD) capabilities. ETD is utilized to obtain per-residue deuterium uptake data for select ion conformers, and a new algorithm is presented for interpreting the HDX data. Using molecular dynamics (MD) production data and a hydrogen accessibility scoring (HAS)-number of effective collisions (NEC) model, hypothetical HDX behavior is attributed to various in-silico candidate (CCS match) structures. The HAS-NEC model is applied to all candidate structures, and non-negative linear regression is employed to determine the structure contributions resulting in the best match to deuterium uptake. The accuracy of the HAS-NEC model is tested by comparing predicted and experimental isotopic envelopes for several of the observed c-ions. It is proposed that gas-phase HDX can be utilized effectively as a second criterion (after CCS matching) for filtering suitable MD candidate structures. In the second step of structure elucidation, 13 nominal structures were selected (from a pool of 300 candidate structures), each with a proposed population contribution for these ions.
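
    The non-negative linear regression step, solving for structure population weights whose mixed deuterium-uptake profile best matches the data, can be sketched with a standard solver (the matrix layout and names are our assumptions):

```python
import numpy as np
from scipy.optimize import nnls

def structure_populations(uptake_matrix, observed_uptake):
    """Each column of uptake_matrix holds the per-residue deuterium
    uptake predicted for one candidate structure; solve for non-negative
    weights whose mixture best matches the observed uptake, and return
    them normalized to population fractions with the fit residual."""
    weights, residual = nnls(uptake_matrix, observed_uptake)
    total = weights.sum()
    return (weights / total if total > 0 else weights), residual
```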

  15. Testing the predictive value of peripheral gene expression for nonremission following citalopram treatment for major depression.

    PubMed

    Guilloux, Jean-Philippe; Bassi, Sabrina; Ding, Ying; Walsh, Chris; Turecki, Gustavo; Tseng, George; Cyranowski, Jill M; Sibille, Etienne

    2015-02-01

    Major depressive disorder (MDD) in general, and anxious-depression in particular, are characterized by poor rates of remission with first-line treatments, contributing to the chronic illness burden suffered by many patients. Prospective research is needed to identify the biomarkers predicting nonremission prior to treatment initiation. We collected blood samples from a discovery cohort of 34 adult MDD patients with co-occurring anxiety and 33 matched, nondepressed controls at baseline and after 12 weeks (of citalopram plus psychotherapy treatment for the depressed cohort). Samples were processed on gene arrays and group differences in gene expression were investigated. Exploratory analyses suggest that at pretreatment baseline, nonremitting patients differ from controls, with gene function and transcription factor analyses suggesting elevated inflammation and immune activation. In a second phase, we applied an unbiased machine learning prediction model and corrected for model-selection bias. Results show that baseline gene expression predicted nonremission with 79.4% corrected accuracy with a 13-gene model. The same gene-only model predicted nonremission after 8 weeks of citalopram treatment with 76% corrected accuracy in an independent validation cohort of 63 MDD patients treated with citalopram at another institution. Together, these results demonstrate the potential, but also the limitations, of baseline peripheral blood-based gene expression to predict nonremission after citalopram treatment. These results not only support their use in future prediction tools but also suggest that increased accuracy may be obtained with the inclusion of additional predictors (e.g., genetics and clinical scales).

  16. Matching Pupils and Teachers to Maximize Expected Outcomes.

    ERIC Educational Resources Information Center

    Ward, Joe H., Jr.; And Others

    To achieve a good teacher-pupil match, it is necessary (1) to predict the learning outcomes that will result when each student is instructed by each teacher, (2) to use the predicted performance to compute an Optimality Index for each teacher-pupil combination to indicate the quality of each combination toward maximizing learning for all students,…

  17. Cation specific binding with protein surface charges

    PubMed Central

    Hess, Berk; van der Vegt, Nico F. A.

    2009-01-01

    Biological organization depends on a sensitive balance of noncovalent interactions, in particular also those involving interactions between ions. Ion-pairing is qualitatively described by the law of “matching water affinities.” This law predicts that cations and anions (with equal valence) form stable contact ion pairs if their sizes match. We show that this simple physical model fails to describe the interaction of cations with (molecular) anions of weak carboxylic acids, which are present on the surfaces of many intra- and extracellular proteins. We performed molecular simulations with quantitatively accurate models and observed that the order K+ < Na+ < Li+ of increasing binding affinity with carboxylate ions is caused by a stronger preference for forming weak solvent-shared ion pairs. The relative insignificance of contact pair interactions with protein surfaces indicates that thermodynamic stability and interactions between proteins in alkali salt solutions is governed by interactions mediated through hydration water molecules. PMID:19666545

  18. Numerical analysis of hypersonic turbulent film cooling flows

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.; Chen, C. P.; Wei, H.

    1992-01-01

    As a building block, numerical capabilities for predicting heat flux and turbulent flowfields of hypersonic vehicles require extensive model validations. Computational procedures for calculating turbulent flows and heat fluxes for supersonic film cooling with parallel slot injections are described in this study. Two injectant mass flow rates with matched and unmatched pressure conditions using the database of Holden et al. (1990) are considered. To avoid uncertainties associated with the boundary conditions in testing turbulence models, detailed three-dimensional flowfields of the injection nozzle were calculated. Two computational fluid dynamics codes, GASP and FDNS, with the algebraic Baldwin-Lomax and k-epsilon models with compressibility corrections were used. It was found that the B-L model which resolves near-wall viscous sublayer is very sensitive to the inlet boundary conditions at the nozzle exit face. The k-epsilon models with improved wall functions are less sensitive to the inlet boundary conditions. The testings show that compressibility corrections are necessary for the k-epsilon model to realistically predict the heat fluxes of the hypersonic film cooling problems.

  19. A perturbative approach to the redshift space correlation function: beyond the Standard Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, Benjamin; Koyama, Kazuya, E-mail: benjamin.bose@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk

    We extend our previous redshift space power spectrum code to the redshift space correlation function. Here we focus on the Gaussian Streaming Model (GSM). Again, the code accommodates a wide range of modified gravity and dark energy models. For the non-linear real space correlation function used in the GSM we use the Fourier transform of the RegPT 1-loop matter power spectrum. We compare predictions of the GSM for a Vainshtein screened and Chameleon screened model as well as GR. These predictions are compared to the Fourier transform of the Taruya, Nishimichi and Saito (TNS) redshift space power spectrum model which is fit to N-body data. We find very good agreement between the Fourier transform of the TNS model and the GSM predictions, with ≤ 6% deviations in the first two correlation function multipoles for all models for redshift space separations in 50 Mpc/h ≤ s ≤ 180 Mpc/h. Excellent agreement is found in the differences between the modified gravity and GR multipole predictions for both approaches to the redshift space correlation function, highlighting their matched ability in picking up deviations from GR. We elucidate the timeliness of such non-standard templates at the dawn of stage-IV surveys and discuss necessary preparations and extensions needed for upcoming high quality data.

  20. (Lack of) Measurable Clinical or Knowledge Gains From Resident Participation in Noon Conference.

    PubMed

    Meyer, Nathaniel B; Gaetke-Udager, Kara; Shampain, Kimberly L; Spencer, Amy; Cohan, Richard H; Davenport, Matthew S

    2018-06-01

    The objective of this study was to determine whether noon conference attendance by diagnostic radiology residents is predictive of measurable performance. This single-center retrospective Health Insurance Portability and Accountability Act (HIPAA)-compliant cross-sectional study was considered "not regulated" by the institutional review board. All diagnostic radiology residents who began residency training from 2008 to 2012 were included (N = 54). Metrics of clinical performance and knowledge were collected, including junior and senior precall test results, American Board of Radiology scores (z-score transformed), American College of Radiology in-training scores (years 1-3), on-call "great call" and minor and major discrepancy rates, on-call and daytime case volumes, and training rotation scores. Multivariate regression models were constructed to determine if conference attendance, match rank order, or starting year could predict these outcomes. Pearson bivariate correlations were calculated. Senior precall test results were moderately correlated with American Board of Radiology (r = 0.41) and American College of Radiology (r = 0.38-0.48) test results and mean rotation scores (r = 0.41), indicating moderate internal validity. However, conference attendance, match rank order, and year of training did not correlate with (r = -0.16-0.16) or predict (P > .05) measurable resident knowledge. On multivariate analysis, neither match rank order (P = .14-.96) nor conference attendance (P = .10-.88) predicted measurable clinical efficiency or accuracy. Year started training predicted greater cross-sectional case volume (P < .0001, β = 0.361-0.516) and less faculty-to-resident feedback (P < .0001, β = [-0.628]-[-0.733]). 
Residents with lower conference attendance are indistinguishable from those who attend more frequently in a wide range of clinical and knowledge-based performance assessments, suggesting that required attendance may not be necessary to gain certain measurable core competencies. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  1. Relative quantity judgments in the beluga whale (Delphinapterus leucas) and the bottlenose dolphin (Tursiops truncatus).

    PubMed

    Abramson, José Z; Hernández-Lloreda, Victoria; Call, Josep; Colmenares, Fernando

    2013-06-01

    Numerous studies have documented the ability of many species to make relative quantity judgments using an analogue magnitude system. We investigated whether one beluga whale, Delphinapterus leucas, and three bottlenose dolphins, Tursiops truncatus, were capable of selecting the larger of two sets of quantities, and analyzed whether their performance matched predictions from the object file model versus the analog accumulator model. In Experiment 1, the two sets were presented simultaneously, under water, and they were visually (condition 1) or echoically (condition 2) available at the time of choice. In Experiment 2, the two sets were presented above the water, successively (condition 1) or sequentially, item-by-item (condition 2), so that they were not visually available at the time of choice (condition 1) or at any time throughout the experiment (condition 2). We analyzed the effect of the ratio between quantities, the difference between quantities, and the total number of items presented on the subjects' choices. All subjects selected the larger of the two sets of quantities above chance levels in all conditions. However, unlike most previous studies, the subjects' choices did not match the predictions from the accumulator model. Whether these findings reflect interspecies differences in the mechanisms which underpin relative quantity judgments remains to be determined. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Effectiveness of conservation easements in agricultural regions.

    PubMed

    Braza, Mark

    2017-08-01

    Conservation easements are a standard technique for preventing habitat loss, particularly in agricultural regions with extensive cropland cultivation, yet little is known about their effectiveness. I developed a spatial econometric approach to propensity-score matching and used the approach to estimate the amount of habitat loss prevented by a grassland conservation easement program of the U.S. federal government. I used a spatial autoregressive probit model to predict tract enrollment in the easement program as of 2001 based on tract agricultural suitability, habitat quality, and spatial interactions among neighboring tracts. Using the predicted values from the model, I matched enrolled tracts with similar unenrolled tracts to form a treatment group and a control group. To measure the program's impact on subsequent grassland loss, I estimated cropland cultivation rates for both groups in 2014 with a second spatial probit model. Between 2001 and 2014, approximately 14.9% of control tracts were cultivated and 0.3% of treated tracts were cultivated. Therefore, approximately 14.6% of the protected land would have been cultivated in the absence of the program. My results demonstrate that conservation easements can significantly reduce habitat loss in agricultural regions; however, the enrollment of tracts with low cropland suitability may constrain the amount of habitat loss they prevent. My results also show that spatial econometric models can improve the validity of control groups and thereby strengthen causal inferences about program effectiveness in situations when spatial interactions influence conservation decisions. © 2017 Society for Conservation Biology.
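
    The matching step in designs like this can be illustrated with a minimal 1:1 nearest-neighbour sketch on propensity scores. This is a generic illustration under assumed scores and caliper, not the paper's spatial autoregressive probit model:

    ```python
    def greedy_ps_match(treated_ps, control_ps, caliper=0.05):
        """Pair each treated unit with the nearest unused control whose
        propensity score lies within the caliper (greedy 1:1 matching).
        Real studies often match on the logit of the score instead."""
        pairs = []
        used = set()
        for i, pt in enumerate(treated_ps):
            best_j, best_d = None, caliper
            for j, pc in enumerate(control_ps):
                d = abs(pt - pc)
                if j not in used and d <= best_d:
                    best_j, best_d = j, d
            if best_j is not None:
                used.add(best_j)
                pairs.append((i, best_j))
        return pairs

    # Hypothetical enrollment probabilities for 2 enrolled tracts (treated)
    # and 3 unenrolled tracts (controls).
    treated = [0.30, 0.70]
    control = [0.28, 0.50, 0.71]
    print(greedy_ps_match(treated, control))  # [(0, 0), (1, 2)]
    ```

    Outcomes (here, later cultivation rates) are then compared between the matched treated and control tracts.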

  3. Full-Waveform Envelope Templates for Low Magnitude Discrimination and Yield Estimation at Local and Regional Distances with Application to the North Korean Nuclear Tests

    NASA Astrophysics Data System (ADS)

    Yoo, S. H.

    2017-12-01

    Monitoring seismologists have successfully used seismic coda for event discrimination and yield estimation for over a decade. In practice seismologists typically analyze long-duration, S-coda signals with high signal-to-noise ratios (SNR) at regional and teleseismic distances, since the single back-scattering model reasonably predicts decay of the late coda. However, seismic monitoring requirements are shifting towards smaller, locally recorded events that exhibit low SNR and short signal lengths. To be successful at characterizing events recorded at local distances, we must utilize the direct-phase arrivals, as well as the earlier part of the coda, which is dominated by multiple forward scattering. To remedy this problem, we have developed a new hybrid method known as full-waveform envelope template matching to improve predicted envelope fits over the entire waveform and account for direct-wave and early coda complexity. We accomplish this by including a multiple forward-scattering approximation in the envelope modeling of the early coda. The new hybrid envelope templates are designed to fit local and regional full waveforms and produce low-variance amplitude estimates, which will improve yield estimation and discrimination between earthquakes and explosions. To demonstrate the new technique, we applied our full-waveform envelope template-matching method to the six known North Korean (DPRK) underground nuclear tests and four aftershock events following the September 2017 test. We successfully discriminated the event types and estimated the yield for all six nuclear tests. We also applied the same technique to the 2015 Tianjin explosions in China, and another suspected low-yield explosion at the DPRK test site on May 12, 2010. Our results show that the new full-waveform envelope template-matching method significantly improves upon longstanding single-scattering coda prediction techniques. 
More importantly, the new method allows monitoring seismologists to extend coda-based techniques to lower magnitude thresholds and low-yield local explosions.

  4. Right ventricular stroke work correlates with outcomes in pediatric pulmonary arterial hypertension.

    PubMed

    Yang, Weiguang; Marsden, Alison L; Ogawa, Michelle T; Sakarovitch, Charlotte; Hall, Keeley K; Rabinovitch, Marlene; Feinstein, Jeffrey A

    2018-01-01

    Pulmonary arterial hypertension (PAH) is characterized by elevated pulmonary artery pressures (PAP) and pulmonary vascular resistance (PVR). Optimizing treatment strategies and timing for transplant remains challenging. Thus, a quantitative measure to predict disease progression would be greatly beneficial in treatment planning. We devised a novel method to assess right ventricular (RV) stroke work (RVSW) as a potential biomarker of the failing heart that correlates with clinical worsening. Pediatric patients with idiopathic PAH or PAH secondary to congenital heart disease who had serial, temporally matched cardiac catheterization and magnetic resonance imaging (MRI) data were included. RV and PA hemodynamics were numerically determined by using a lumped parameter (circuit analogy) model to create pressure-volume (P-V) loops. The model was tuned using optimization techniques to match MRI- and catheterization-derived RV volumes and pressures for each time point. RVSW was calculated from the corresponding P-V loop and indexed by ejection fraction and body surface area (RVSW_EF) to compare across patients. Seventeen patients (8 boys; median age = 9.4 years; age range = 4.4-16.3 years) were enrolled. Nine were clinically stable; the others had clinical worsening between the time of their initial matched studies and their most recent follow-up (mean time = 3.9 years; range = 1.1-8.0 years). RVSW_EF and the ratio of pulmonary to systemic resistance (Rp:Rs) were found to have more significant associations with clinical worsening within one, two, and five years following the measurements, when compared with the PVR index (PVRI). A receiver operating characteristic analysis showed RVSW_EF outperforms PVRI, Rp:Rs, and ejection fraction for predicting clinical worsening. RVSW_EF correlates with clinical worsening in pediatric PAH, shows promising results towards predicting adverse outcomes, and may serve as an indicator of future clinical worsening.
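
    Stroke work is the area enclosed by the pressure-volume loop, which can be computed with the shoelace formula. The sketch below uses an idealized rectangular loop with hypothetical values (in mmHg·mL, before any indexing such as the study applies):

    ```python
    def loop_area(volumes, pressures):
        """Area enclosed by a closed P-V loop via the shoelace formula.
        Points are (volume, pressure) vertices listed in loop order."""
        n = len(volumes)
        s = 0.0
        for i in range(n):
            j = (i + 1) % n  # wrap around to close the loop
            s += volumes[i] * pressures[j] - volumes[j] * pressures[i]
        return abs(s) / 2.0

    # Idealized rectangular RV loop: 50 mL stroke volume, 40 mmHg swing.
    vols  = [60, 110, 110, 60]   # mL
    press = [10, 10, 50, 50]     # mmHg
    print(loop_area(vols, press))  # 2000.0 mmHg·mL
    ```

    Real loops from tuned lumped-parameter models have many vertices, but the same formula applies; the result is then indexed (e.g., by ejection fraction and body surface area) to compare across patients.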

  5. REDUCING UNCERTAINTIES IN MODEL PREDICTIONS VIA HISTORY MATCHING OF CO2 MIGRATION AND REACTIVE TRANSPORT MODELING OF CO2 FATE AT THE SLEIPNER PROJECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Chen

    2015-03-31

    An important question for the Carbon Capture, Storage, and Utility program is “can we adequately predict the CO2 plume migration?” For tracking CO2 plume development, the Sleipner project in the Norwegian North Sea provides more time-lapse seismic monitoring data than any other sites, but significant uncertainties still exist for some of the reservoir parameters. In Part I, we assessed model uncertainties by applying two multi-phase compositional simulators to the Sleipner Benchmark model for the uppermost layer (Layer 9) of the Utsira Sand and calibrated our model against the time-lapsed seismic monitoring data for the site from 1999 to 2010. Approximate match with the observed plume was achieved by introducing lateral permeability anisotropy, adding CH4 into the CO2 stream, and adjusting the reservoir temperatures. Model-predicted gas saturation, CO2 accumulation thickness, and CO2 solubility in brine—none were used as calibration metrics—were all comparable with the interpretations of the seismic data in the literature. In Part II & III, we evaluated the uncertainties of predicted long-term CO2 fate up to 10,000 years, due to uncertain reaction kinetics. Under four scenarios of the kinetic rate laws, the temporal and spatial evolution of CO2 partitioning into the four trapping mechanisms (hydrodynamic/structural, solubility, residual/capillary, and mineral) was simulated with ToughReact, taking into account the CO2-brine-rock reactions and the multi-phase reactive flow and mass transport. Modeling results show that different rate laws for mineral dissolution and precipitation reactions resulted in different predicted amounts of trapped CO2 by carbonate minerals, with scenarios of the conventional linear rate law for feldspar dissolution having twice as much mineral trapping (21% of the injected CO2) as scenarios with a Burch-type or Alekseyev et al.–type rate law for feldspar dissolution (11%). 
So far, most reactive transport modeling (RTM) studies for CCUS have used the conventional rate law and therefore simulated the upper bound of mineral trapping. However, neglecting the regional flow after injection, as most previous RTM studies have done, artificially limits the extent of geochemical reactions as if it were in a batch system. By replenishing undersaturated groundwater from upstream, the Utsira Sand is reactive over a time scale of 10,000 years. The results from this project have been communicated via five peer-reviewed journal articles, four conference proceeding papers, and 19 invited and contributed presentations at conferences and seminars.

  6. Chimpanzees predict that a competitor's preference will match their own

    PubMed Central

    Schmelz, Martin; Call, Josep; Tomasello, Michael

    2013-01-01

    The ability to predict how another individual will behave is useful in social competition. Chimpanzees can predict the behaviour of another based on what they observe her to see, hear, know and infer. Here we show that chimpanzees act on the assumption that others have preferences that match their own. All subjects began with a preference for a box with a picture of food over one with a picture of nothing, even though the pictures had no causal relation to the contents. In a back-and-forth food competition, chimpanzees then avoided the box with the picture of food when their competitor had chosen one of the boxes before them—presumably on the assumption that the competitor shared their own preference for it and had already chosen it. Chimpanzees predicted that their competitor's preference would match their own and adjusted their behavioural strategies accordingly. PMID:23193044

  7. Binding ligand prediction for proteins using partial matching of local surface patches.

    PubMed

    Sael, Lee; Kihara, Daisuke

    2010-01-01

    Functional elucidation of uncharacterized protein structures is an important task in bioinformatics. We report our new approach for structure-based function prediction which captures local surface features of ligand binding pockets. Function of proteins, specifically, binding ligands of proteins, can be predicted by finding similar local surface regions of known proteins. To enable partial comparison of binding sites in proteins, a weighted bipartite matching algorithm is used to match pairs of surface patches. The surface patches are encoded with the 3D Zernike descriptors. Unlike the existing methods which compare global characteristics of the protein fold or the global pocket shape, the local surface patch method can find functional similarity between non-homologous proteins and binding pockets for flexible ligand molecules. The proposed method improves prediction results over global pocket shape-based method which was previously developed by our group.
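
    The weighted bipartite matching of surface patches can be illustrated with a standard assignment solver. This sketch uses scipy's Hungarian-algorithm implementation on a hypothetical patch-distance matrix; for simplicity it computes a complete assignment, whereas the paper's method permits partial matching:

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Hypothetical distances between 3D Zernike descriptors of surface
    # patches from a query pocket (rows) and a known pocket (columns).
    cost = np.array([[0.2, 0.9, 0.8],
                     [0.7, 0.1, 0.9],
                     [0.8, 0.9, 0.3]])

    # Minimum-cost 1:1 pairing of patches (weighted bipartite matching).
    rows, cols = linear_sum_assignment(cost)
    print(list(zip(rows, cols)), cost[rows, cols].sum())
    ```

    The total matched cost then serves as a similarity score between the two binding pockets; lower totals suggest the same ligand may bind.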

  9. Tie Points Extraction for SAR Images Based on Differential Constraints

    NASA Astrophysics Data System (ADS)

    Xiong, X.; Jin, G.; Xu, Q.; Zhang, H.

    2018-04-01

    Automatically extracting tie points (TPs) on large-size synthetic aperture radar (SAR) images is still challenging because the efficiency and correct ratio of the image matching need to be improved. This paper proposes an automatic TPs extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in azimuth direction and large in range direction. Image pyramids are built firstly, and then corresponding layers of pyramids are matched from the top to the bottom. In the process, the similarity is measured by the normalized cross correlation (NCC) algorithm, which is calculated from a rectangular window with the long side parallel to the azimuth direction. False matches are removed by the differential constrained random sample consensus (DC-RANSAC) algorithm, which appends strong constraints in azimuth direction and weak constraints in range direction. Matching points in the lower pyramid images are predicted with the local bilinear transformation model in range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correct ratio and accuracy of the proposed method.
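
    The NCC similarity used in the matching step can be sketched as follows. The window contents are hypothetical; in the paper the windows are rectangles elongated along the azimuth direction:

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation between two equally sized windows:
        mean-centre each window, then take the cosine of the angle between
        them. Invariant to linear gain and offset changes."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
        return float((a * b).sum() / denom)

    patch = np.array([[1.0, 2.0], [3.0, 4.0]])
    same  = patch * 2 + 5  # gain/offset change: NCC is unaffected
    print(round(ncc(patch, same), 6))  # 1.0
    ```

    Candidate matches whose NCC peak is high are kept, and outliers among them are then rejected by the RANSAC-style geometric constraints described above.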

  10. Estimation of liquefaction-induced lateral spread from numerical modeling and its application

    NASA Astrophysics Data System (ADS)

    Meng, Xianhong

    A noncoupled numerical procedure was developed using a scheme of pore water pressure generation that causes shear modulus degradation and shear strength degradation resulting from earthquake cyclic motion. The designed Fast Lagrangian Analysis of Continua (FLAC) model procedure was tested using the liquefaction-induced lateral spread and ground response for the Wildlife and Kobe sites. Sixteen well-documented case histories of lateral spread were reviewed and modeled using the modeling procedure. The dynamic residual strength ratios were back-calculated by matching the predicted displacement with the measured lateral spread, or with the displacement predicted by the Youd et al. model. Statistical analysis of the modeling results and soil properties shows that the most significant parameters governing the residual strength of the liquefied soil are the SPT blow count, fines content, and soil particle size of the lateral spread layer. A regression equation was developed to express the residual strength values in terms of these soil properties. Overall, this research demonstrated that a calibrated numerical model can predict the first-order effectiveness of liquefaction-induced lateral spread using relatively simple parameters obtained from routine geotechnical investigation. In addition, the model can be used to plan a soil improvement program for cases where liquefaction remediation is needed. This allows the model to be used for design purposes at bridge approaches founded on liquefiable materials.

  11. A probabilistic model to predict clinical phenotypic traits from genome sequencing.

    PubMed

    Chen, Yun-Ching; Douville, Christopher; Wang, Cheng; Niknafs, Noushin; Yeo, Grace; Beleva-Guthrie, Violeta; Carter, Hannah; Stenson, Peter D; Cooper, David N; Li, Biao; Mooney, Sean; Karchin, Rachel

    2014-09-01

    Genetic screening is becoming possible on an unprecedented scale. However, its utility remains controversial. Although most variant genotypes cannot be easily interpreted, many individuals nevertheless attempt to interpret their genetic information. Initiatives such as the Personal Genome Project (PGP) and Illumina's Understand Your Genome are sequencing thousands of adults, collecting phenotypic information and developing computational pipelines to identify the most important variant genotypes harbored by each individual. These pipelines consider database and allele frequency annotations and bioinformatics classifications. We propose that the next step will be to integrate these different sources of information to estimate the probability that a given individual has specific phenotypes of clinical interest. To this end, we have designed a Bayesian probabilistic model to predict the probability of dichotomous phenotypes. When applied to a cohort from PGP, predictions of Gilbert syndrome, Graves' disease, non-Hodgkin lymphoma, and various blood groups were accurate, as individuals manifesting the phenotype in question exhibited the highest, or among the highest, predicted probabilities. Thirty-eight PGP phenotypes (26%) were predicted with area under the ROC curve (AUC) > 0.7, and 23 (15.8%) of these were statistically significant, based on permutation tests. Moreover, in a Critical Assessment of Genome Interpretation (CAGI) blinded prediction experiment, the models were used to match 77 PGP genomes to phenotypic profiles, generating the most accurate prediction among 16 submissions, according to an independent assessor. Although the models are currently insufficiently accurate for diagnostic utility, we expect their performance to improve with growth of publicly available genomics data and model refinement by domain experts.
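
    The AUC threshold quoted above can be read as a Mann-Whitney probability: the chance that a randomly chosen case receives a higher predicted probability than a randomly chosen non-case. A minimal sketch with synthetic scores:

    ```python
    def auc(pos_scores, neg_scores):
        """AUC as the Mann-Whitney probability that a random positive
        outscores a random negative; ties count as half a win."""
        wins = sum((p > n) + 0.5 * (p == n)
                   for p in pos_scores for n in neg_scores)
        return wins / (len(pos_scores) * len(neg_scores))

    # Synthetic predicted probabilities for 3 cases and 2 non-cases.
    print(auc([0.9, 0.8, 0.4], [0.7, 0.3]))  # 5 of 6 pairs ordered correctly
    ```

    An AUC of 0.5 corresponds to chance; the paper's permutation tests assess whether an observed AUC above 0.7 could arise from chance alone.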

  12. Passive exposures of children to volatile trihalomethanes during domestic cleaning activities of their parents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andra, Syam S.; Harvard-Cyprus Program, Department of Environmental Health, Harvard School of Public Health, Boston, MA; Charisiadis, Pantelis

    Domestic cleaning has been proposed as a determinant of trihalomethanes (THMs) exposure in adult females. We hypothesized that parental housekeeping activities could influence children's passive exposures to THMs from their mere physical presence during domestic cleaning. In a recent cross-sectional study (n=382) in Cyprus [41 children (<18y) and 341 adults (≥18y)], we identified 29 children who met the study's inclusion criteria. Linear regression models were applied to understand the association between children's sociodemographic variables, their individual practices influencing ingestion and noningestion exposures to ΣTHMs, and their urinary THMs levels. Among the children-specific variables, age alone showed a statistically significant inverse association with their creatinine-adjusted urinary ΣTHMs (r_S = −0.59, p < 0.001). A positive correlation was observed between urinary ΣTHMs (ng g⁻¹) of children and matched-mothers (r_S = 0.52, p = 0.014), but this was not the case for their matched-fathers (r_S = 0.39, p = 0.112). Time spent daily by the matched-mothers for domestic mopping, toilet and other cleaning activities using chlorine-based cleaning products was associated with their children's urinary THMs levels (r_S = 0.56, p = 0.007). This trend was not observed between children and their matched-fathers' urinary ΣTHMs levels, because of the minimal amount of time spent by the latter in performing domestic cleaning. The proportion of variance of creatinine-unadjusted and adjusted urinary ΣTHMs levels in children that was explained by the matched-mothers' covariates was 76% and 74% (p < 0.001), respectively. A physiologically-based pharmacokinetic model adequately predicted urinary chloroform excretion estimates, being consistent with the corresponding measured levels. Our findings highlighted the influence of mothers' domestic cleaning activities towards enhancing passive THMs exposures of their children. 
The duration of such activities could be further tested as a valid indicator of children's THMs body burden. - Highlights: • First report on THMs exposure assessment in matched parents and children. • Duration of domestic cleaning by mothers influenced passive exposure to THMs in children. • Matched-fathers did little cleaning and thereby contributed little to passive THMs exposure in children. • Reverse dosimetry showed a good agreement between predicted and observed urinary chloroform. • Passive exposures to THMs require new attention in survey questionnaires and epidemiology.

  13. Can salivary testosterone and cortisol reactivity to a mid-week stress test discriminate a match outcome during international rugby union competition?

    PubMed

    Crewther, Blair T; Potts, Neil; Kilduff, Liam P; Drawer, Scott; Cook, Christian J

    2018-03-01

    Evidence suggests that stress-induced changes in testosterone and cortisol are related to future competitive behaviours and team-sport outcomes. Therefore, we examined whether salivary testosterone and cortisol reactivity to a mid-week stress test can discriminate a match outcome in international rugby union competition. Single group, quasi-experimental design with repeated measures. Thirty-three male rugby players completed a standardised stress test three or four days before seven international matches. Stress testing involved seven minutes of shuttle runs (2×20m), dispersed across one-minute stages with increasing speeds. Salivary testosterone and cortisol were measured in the morning, along with delta changes from morning to pre-test (Morn-PreΔ) and pre-test to post-test (Pre-PostΔ). Data were compared across wins (n=3) and losses (n=4). The Morn-PreΔ in cortisol increased before winning and decreased prior to losing (p<0.001), with a large effect size difference (d=1.6, 90% CI 1.3-1.9). Testosterone decreased significantly across the same period, irrespective of the match outcome. The Morn-PreΔ in testosterone and cortisol, plus the Pre-PostΔ in testosterone, all predicted a match outcome (p≤0.01). The final model showed good diagnostic accuracy (72%) with cortisol as the main contributor. The salivary testosterone and cortisol responses to mid-week testing showed an ability to discriminate a rugby match outcome over a limited number of games. The Morn-PreΔ in cortisol was the strongest diagnostic biomarker. This model may provide a unique format to assess team readiness or recovery between competitions, especially with the emergence of rapid hormonal testing. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  14. Evaluation of the US Food and Drug Administration sentinel analysis tools in confirming previously observed drug-outcome associations: The case of clindamycin and Clostridium difficile infection.

    PubMed

    Carnahan, Ryan M; Kuntz, Jennifer L; Wang, Shirley V; Fuller, Candace; Gagne, Joshua J; Leonard, Charles E; Hennessy, Sean; Meyer, Tamra; Archdeacon, Patrick; Chen, Chih-Ying; Panozzo, Catherine A; Toh, Sengwee; Katcoff, Hannah; Woodworth, Tiffany; Iyer, Aarthi; Axtman, Sophia; Chrischilles, Elizabeth A

    2018-03-13

The Food and Drug Administration's Sentinel System developed parameterized, reusable analytic programs for evaluation of medical product safety. Research on outpatient antibiotic exposures and Clostridium difficile infection (CDI) with non-user reference groups led us to expect a higher rate of CDI among outpatient clindamycin users vs penicillin users. We evaluated the ability of the Cohort Identification and Descriptive Analysis and Propensity Score Matching tools to identify a higher rate of CDI among clindamycin users. We matched new users of outpatient dispensings of oral clindamycin or penicillin from 13 Data Partners 1:1 on propensity score and followed them for up to 60 days for development of CDI. We used Cox proportional hazards regression stratified by Data Partner and matched pair to compare CDI incidence. Propensity score models at 3 Data Partners had convergence warnings and a limited range of predicted values. We excluded these Data Partners despite adequate covariate balance after matching. From the 10 Data Partners where these models converged without warnings, we identified 807 919 new clindamycin users and 8 815 441 new penicillin users eligible for the analysis. The stratified analysis of 807 769 matched pairs included 840 events among clindamycin users and 290 among penicillin users (hazard ratio 2.90, 95% confidence interval 2.53, 3.31). This evaluation produced an expected result and identified several potential enhancements to the Propensity Score Matching tool. This study has important limitations. CDI risk may have been related to factors other than the inherent properties of the drugs, such as duration of use or subsequent exposures. Copyright © 2018 John Wiley & Sons, Ltd.
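
    The Sentinel tools themselves are purpose-built analytic programs; purely as an illustration of the 1:1 propensity-score matching step they perform, here is a toy greedy nearest-neighbor matcher with a caliper, on hypothetical scores (the identifiers, scores, and caliper are all invented):

```python
def greedy_match(treated, controls, caliper=0.05):
    """1:1 greedy nearest-neighbor match on propensity score.

    treated/controls: {id: propensity score}. Each control is used at
    most once; a treated unit is dropped if no control lies within the
    caliper. Returns a list of (treated_id, control_id) pairs.
    """
    pairs, used = [], set()
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: kv[1]):
        best, best_gap = None, caliper
        for c_id, c_ps in controls.items():
            gap = abs(t_ps - c_ps)
            if c_id not in used and gap <= best_gap:
                best, best_gap = c_id, gap
        if best is not None:
            used.add(best)
            pairs.append((t_id, best))
    return pairs

# Hypothetical propensity scores for new users of each drug
clinda = {"t1": 0.30, "t2": 0.55}
penicillin = {"c1": 0.31, "c2": 0.54, "c3": 0.90}
pairs = greedy_match(clinda, penicillin)
```

    The matched pairs would then feed a pair-stratified Cox model, as in the analysis above; that step is omitted here.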

  15. Alternative community structures in a kelp-urchin community: A qualitative modeling approach

    USGS Publications Warehouse

    Montano-Moctezuma, G.; Li, H.W.; Rossignol, P.A.

    2007-01-01

    Shifts in interaction patterns within a community may result from periodic disturbances and climate. The question arises as to the extent and significance of these shifting patterns. Using a novel approach to link qualitative mathematical models and field data, namely using the inverse matrix to identify the community matrix, we reconstructed community networks from kelp forests off the Oregon Coast. We simulated all ecologically plausible interactions among community members, selected the models whose outcomes match field observations, and identified highly frequent links to characterize the community network from a particular site. We tested all possible biologically reasonable community networks through qualitative simulations, selected those that matched patterns observed in the field, and further reduced the set of possibilities by retaining those that were stable. We found that a community can be represented by a set of alternative structures, or scenarios. From 11,943,936 simulated models, 0.23% matched the field observations; moreover, only 0.006%, or 748 models, were highly reliable in their predictions and met conditions for stability. Predator-prey interactions as well as non-predatory relationships were consistently found in most of the 748 models. These highly frequent connections were useful to characterize the community network in the study site. We suggest that alternative networks provide the community with a buffer to disturbance, allowing it to continuously reorganize to adapt to a variable environment. This is possible due to the fluctuating capacities of foraging species to consume alternate resources. This suggestion is sustained by our results, which indicate that none of the models that matched field observations were fully connected. This plasticity may contribute to the persistence of these communities. 
We propose that qualitative simulations represent a powerful technique to raise new hypotheses concerning community dynamics and to reconstruct guidelines that may govern community patterns. © 2007 Elsevier B.V. All rights reserved.
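
    The enumerate-then-filter-for-stability step described above can be sketched at toy scale. For a two-species community matrix, local stability reduces to the Routh-Hurwitz conditions (negative trace, positive determinant); the sign patterns below are illustrative, not the study's kelp-urchin network:

```python
from itertools import product

def stable_2x2(m):
    """Routh-Hurwitz criterion for a 2x2 community matrix:
    locally stable iff trace < 0 and determinant > 0."""
    (a11, a12), (a21, a22) = m
    return (a11 + a22) < 0 and (a11 * a22 - a12 * a21) > 0

# Enumerate qualitative sign structures for two species:
# self-damping diagonals (-1), off-diagonal signs free in {-1, 0, +1}.
stable = []
for a12, a21 in product((-1, 0, 1), repeat=2):
    m = ((-1, a12), (a21, -1))
    if stable_2x2(m):
        stable.append(m)
```

    Of the nine sign structures, only the mutualism (+,+) and competition (-,-) patterns fail the determinant condition here; the predator-prey pattern ((-1, -1), (1, -1)) survives. The study applies the same logic at much larger scale (11,943,936 candidate structures) and additionally filters on agreement with field observations.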

  16. Long-term effects of child abuse and neglect on emotion processing in adulthood.

    PubMed

    Young, Joanna Cahall; Widom, Cathy Spatz

    2014-08-01

To determine whether child maltreatment has a long-term impact on emotion processing abilities in adulthood and whether IQ, psychopathology, or psychopathy mediate the relationship between childhood maltreatment and emotion processing in adulthood. Using a prospective cohort design, children (ages 0-11) with documented cases of abuse and neglect during 1967-1971 were matched with non-maltreated children and followed up into adulthood. Potential mediators (IQ, Post-Traumatic Stress [PTSD], Generalized Anxiety [GAD], Dysthymia, and Major Depressive [MDD] Disorders, and psychopathy) were assessed in young adulthood with standardized assessment techniques. In middle adulthood (Mage=47), the International Affective Picture System was used to measure emotion processing. Structural equation modeling was used to test mediation models. Individuals with a history of childhood maltreatment were less accurate in emotion processing overall and in processing positive and neutral pictures than matched controls. Childhood physical abuse predicted less accuracy in recognizing neutral pictures, and childhood sexual abuse and neglect predicted less accuracy in recognizing positive pictures. MDD, GAD, and IQ predicted overall picture recognition accuracy. However, of the mediators examined, only IQ acted to mediate the relationship between child maltreatment and emotion processing deficits. Although research has focused on emotion processing in maltreated children, these new findings show an impact of child abuse and neglect on emotion processing in middle adulthood. Research and interventions aimed at improving emotional processing deficiencies in abused and neglected children should consider the role of IQ. Copyright © 2014 Elsevier Ltd. All rights reserved.
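
    The study tests mediation with full structural equation models; a heavily stripped-down sketch of the underlying product-of-coefficients idea (the a-path slope times the b-path slope gives the indirect effect) is shown below on invented numbers, with X standing in for maltreatment severity, M for IQ, and Y for recognition accuracy:

```python
def slope(x, y):
    """OLS slope of y regressed on a single predictor x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

# Hypothetical scores, not the study's data
X = [0, 1, 2, 3, 4]          # maltreatment severity
M = [100, 97, 95, 92, 90]    # mediator (IQ) falls with severity: a path
Y = [80, 78, 76, 74, 73]     # accuracy tracks IQ: b path

a = slope(X, M)              # negative: maltreatment -> lower IQ
b = slope(M, Y)              # positive: higher IQ -> higher accuracy
indirect = a * b             # product-of-coefficients indirect effect
```

    A real mediation analysis would also estimate the direct path controlling for the mediator and bootstrap a confidence interval for the indirect effect; this sketch shows only the sign logic.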

  17. Using a reactive transport model to elucidate differences between laboratory and field dissolution rates in regolith

    NASA Astrophysics Data System (ADS)

    Moore, Joel; Lichtner, Peter C.; White, Art F.; Brantley, Susan L.

    2012-09-01

The reactive transport model FLOTRAN was used to forward-model weathering profiles developed on granitic outwash alluvium over 40-3000 ka from the Merced, California (USA) chronosequence as well as deep granitic regolith developed over 800 ka near Davis Run, Virginia (USA). Baseline model predictions that used laboratory rate constants (km), measured fluid flow velocities (v), and BET volumetric surface areas for the parent material (AB,mo) were not consistent with measured profiles of plagioclase, potassium feldspar, and quartz. Reaction fronts predicted by the baseline model are deeper and thinner than those observed, consistent with faster rates of reaction in the model. Reaction front depth in the model depended mostly upon saturated versus unsaturated hydrologic flow conditions, rate constants controlling precipitation of secondary minerals, and the average fluid flow velocity (va). Unsaturated hydrologic flow conditions (relatively open with respect to CO2(g)) resulted in the prediction of deeper reaction fronts and significant differences in the separation between plagioclase and potassium feldspar reaction fronts compared to saturated hydrologic flow (relatively closed with respect to CO2(g)). Under saturated or unsaturated flow conditions, the rate constant that controls precipitation rates of secondary minerals must be reduced relative to laboratory rate constants to match observed reaction front depths and measured pore water chemistry. Additionally, to match the observed reaction front depths, va was set lower than the measured value, v, for three of the four profiles. The reaction front gradients in mineralogy and pore fluid chemistry could only be modeled accurately by adjusting values of the product kmAB,mo. By assuming km values were constrained by laboratory data, field observations were modeled successfully with TST-like rate equations by dividing measured values of AB,mo by factors from 50 to 1700.
Alternately, with sigmoidal or Al-inhibition rate models, this adjustment factor ranges from 5 to 170. Best-fit models of the wetter, hydrologically saturated Davis Run profile required a smaller adjustment to AB,mo than the drier, hydrologically unsaturated Merced profiles. We attributed the large adjustments in va and AB,mo required for the Merced models to more complex hydrologic flow that decreased the reactive surface area in contact with bulk flow water, e.g., dead-end pore spaces containing fluids that are near or at chemical equilibrium. Thus, rate models from the laboratory can successfully predict weathering over millions of years, but work is needed to understand what controls the relationship between reactive surface area and hydrologic flow.
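
    The adjustment described above amounts to dividing the lab-measured surface area inside a TST-style rate law. A minimal sketch with entirely hypothetical parameter values (not the calibrated FLOTRAN inputs), using a mid-range adjustment factor from the paper's 50-1700 span:

```python
def tst_rate(km, A, Q, K, adjustment=1.0):
    """TST-style dissolution rate: r = km * (A / adjustment) * (1 - Q/K).

    `adjustment` divides the lab BET surface area to mimic the reduced
    reactive area actually in contact with bulk flow water in the field.
    """
    return km * (A / adjustment) * (1.0 - Q / K)

# Hypothetical values only: rate constant (mol/m2/s), surface area (m2),
# ion activity product, and equilibrium constant
km, A, Q, K = 1e-12, 1e3, 1e-9, 1e-8

lab = tst_rate(km, A, Q, K)                      # unadjusted lab prediction
field = tst_rate(km, A, Q, K, adjustment=500.0)  # mid-range field adjustment
```

    The ratio of the two rates equals the adjustment factor, which is the sense in which laboratory rate laws over-predict field weathering unless the effective surface area is reduced.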

  18. Pore-scale simulation of microbial growth using a genome-scale metabolic model: Implications for Darcy-scale reactive transport

    NASA Astrophysics Data System (ADS)

    Tartakovsky, G. D.; Tartakovsky, A. M.; Scheibe, T. D.; Fang, Y.; Mahadevan, R.; Lovley, D. R.

    2013-09-01

Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacterium Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport are governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reaction rates using empirically-derived rate formulations such as Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations.
We were able to fit an equivalent Monod model, by modifying the reaction stoichiometry and biomass yield coefficient, that could effectively match results of the genome-scale simulation of microbial behaviors under excess nutrient conditions, but predictions of the fitted Monod model deviated from those of the genome-scale model under conditions in which one or more nutrients were limiting. The fitted Monod kinetic model was also applied at the Darcy scale; that is, to simulate average reaction processes at the scale of the entire pore-scale model domain. As we expected, even under excess nutrient conditions for which the Monod and genome-scale models predicted equal reaction rates at the pore scale, the Monod model over-predicted the rates of biomass growth and iron and acetate utilization when applied at the Darcy scale. This discrepancy is caused by an inherent assumption of perfect mixing over the Darcy-scale domain, which is clearly violated in the pore-scale models. These results help to explain the need to modify the flux constraint parameters in order to match observations in previous applications of the genome-scale model at larger scales. These results also motivate further investigation of quantitative multi-scale relationships between fundamental behavior at the pore scale (where genome-scale models are appropriately applied) and observed behavior at larger scales (where predictions of reactive transport phenomena are needed).
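
    The Monod-type formulation being compared against the genome-scale model has the familiar saturating form; with two substrates (electron donor and electron acceptor) it is usually written as a dual-Monod product. A minimal sketch with hypothetical parameters (not the values fitted in this study):

```python
def dual_monod_rate(vmax, s_donor, k_donor, s_acceptor, k_acceptor):
    """Dual-Monod rate: vmax * S1/(K1+S1) * S2/(K2+S2),
    here S1 = acetate (donor) and S2 = Fe(III) (acceptor)."""
    return (vmax
            * (s_donor / (k_donor + s_donor))
            * (s_acceptor / (k_acceptor + s_acceptor)))

# Hypothetical parameters and concentrations, arbitrary units
vmax, Ka, Kf = 1.0, 0.1, 0.5

excess = dual_monod_rate(vmax, 10.0, Ka, 10.0, Kf)    # both nutrients abundant
limited = dual_monod_rate(vmax, 0.01, Ka, 10.0, Kf)   # acetate-limited
```

    Under excess-nutrient conditions the rate approaches vmax, which is the regime in which the fitted Monod model matched the genome-scale simulation; under nutrient limitation the two approaches diverge, as the abstract describes.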

  19. Pore-scale simulation of microbial growth using a genome-scale metabolic model: Implications for Darcy-scale reactive transport

    NASA Astrophysics Data System (ADS)

    Scheibe, T. D.; Tartakovsky, G.; Tartakovsky, A. M.; Fang, Y.; Mahadevan, R.; Lovley, D. R.

    2012-12-01

Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacterium Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport are governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reaction rates using empirically-derived rate formulations such as Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations.
We were able to fit an equivalent Monod model, by modifying the reaction stoichiometry and biomass yield coefficient, that could effectively match results of the genome-scale simulation of microbial behaviors under excess nutrient conditions, but predictions of the fitted Monod model deviated from those of the genome-scale model under conditions in which one or more nutrients were limiting. The fitted Monod kinetic model was also applied at the Darcy scale; that is, to simulate average reaction processes at the scale of the entire pore-scale model domain. As we expected, even under excess nutrient conditions for which the Monod and genome-scale models predicted equal reaction rates at the pore scale, the Monod model over-predicted the rates of biomass growth and iron and acetate utilization when applied at the Darcy scale. This discrepancy is caused by an inherent assumption of perfect mixing over the Darcy-scale domain, which is clearly violated in the pore-scale models. These results help to explain the need to modify the flux constraint parameters in order to match observations in previous applications of the genome-scale model at larger scales. These results also motivate further investigation of quantitative multi-scale relationships between fundamental behavior at the pore scale (where genome-scale models are appropriately applied) and observed behavior at larger scales (where predictions of reactive transport phenomena are needed).

  20. Pore-scale simulation of microbial growth using a genome-scale metabolic model: Implications for Darcy-scale reactive transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, Guzel D.; Tartakovsky, Alexandre M.; Scheibe, Timothy D.

    2013-09-07

Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacterium Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport are governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reaction rates using empirically-derived rate formulations such as Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations. 
We were able to fit an equivalent Monod model, by modifying the reaction stoichiometry and biomass yield coefficient, that could effectively match results of the genome-scale simulation of microbial behaviors under excess nutrient conditions, but predictions of the fitted Monod model deviated from those of the genome-scale model under conditions in which one or more nutrients were limiting. The fitted Monod kinetic model was also applied at the Darcy scale; that is, to simulate average reaction processes at the scale of the entire pore-scale model domain. As we expected, even under excess nutrient conditions for which the Monod and genome-scale models predicted equal reaction rates at the pore scale, the Monod model over-predicted the rates of biomass growth and iron and acetate utilization when applied at the Darcy scale. This discrepancy is caused by an inherent assumption of perfect mixing over the Darcy-scale domain, which is clearly violated in the pore-scale models. These results help to explain the need to modify the flux constraint parameters in order to match observations in previous applications of the genome-scale model at larger scales. These results also motivate further investigation of quantitative multi-scale relationships between fundamental behavior at the pore scale (where genome-scale models are appropriately applied) and observed behavior at larger scales (where predictions of reactive transport phenomena are needed).
