Sample records for making quantitative predictions

  1. Quantitative prediction of drug side effects based on drug-related features.

    PubMed

    Niu, Yanqing; Zhang, Wen

    2017-09-01

    Unexpected side effects of drugs are a great concern in drug development, and their identification is an important task. Machine learning methods have recently been proposed to predict the presence or absence of side effects of interest for drugs, but it is difficult to make accurate predictions for all of them. In this paper, we transform the side effect profiles of drugs into quantitative scores by summing their side effects with weights. These quantitative scores may measure the danger posed by drugs and thus help to compare the risk of different drugs. Here, we attempt to predict the quantitative scores of drugs, which we call quantitative prediction. Specifically, we explore a variety of drug-related features and evaluate their discriminative power for the quantitative prediction. We then consider several feature combination strategies (direct combination and average-scoring ensemble combination) to integrate three informative features: chemical substructures, targets, and treatment indications. Finally, the average-scoring ensemble model, which produces the better performance, is used as the final quantitative prediction model. Since the weights for side effects are empirical values, we randomly generate different weights in the simulation experiments. The experimental results show that the quantitative method is robust to different weights and produces satisfying results. Although other state-of-the-art methods cannot make quantitative predictions directly, their prediction results can be transformed into quantitative scores. By indirect comparison, the proposed method produces much better results than benchmark methods in the quantitative prediction. In conclusion, the proposed method is promising for the quantitative prediction of side effects and may work cooperatively with existing state-of-the-art methods to reveal the dangers of drugs.
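
    A minimal sketch of the scoring and averaging idea described above (the side-effect profile, weights, and per-feature predictions below are invented placeholders, not the authors' data or implementation):

    ```python
    import numpy as np

    # Hypothetical drug: presence/absence of five side effects and assumed empirical weights.
    profile = np.array([1, 0, 1, 1, 0])
    weights = np.array([0.9, 0.2, 0.5, 0.7, 0.1])

    # Quantitative score = weighted sum of the drug's side effects.
    score = float(weights @ profile)

    # Average-scoring ensemble: average the scores predicted by feature-specific models
    # (e.g., from chemical substructures, targets, treatment indications); placeholder values.
    per_feature_predictions = [1.8, 2.3, 2.0]
    ensemble_score = float(np.mean(per_feature_predictions))

    print(score, ensemble_score)
    ```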

  2. Evaluating Rapid Models for High-Throughput Exposure Forecasting (SOT)

    EPA Science Inventory

    High throughput exposure screening models can provide quantitative predictions for thousands of chemicals; however, these predictions must be systematically evaluated for predictive ability. Without the capability to make quantitative, albeit uncertain, forecasts of exposure, the ...

  3. How to make predictions about future infectious disease risks

    PubMed Central

    Woolhouse, Mark

    2011-01-01

    Formal, quantitative approaches are now widely used to make predictions about the likelihood of an infectious disease outbreak, how the disease will spread, and how to control it. Several well-established methodologies are available, including risk factor analysis, risk modelling and dynamic modelling. Even so, predictive modelling is very much the ‘art of the possible’, which tends to drive research effort towards some areas and away from others which may be at least as important. Building on the undoubted success of quantitative modelling of the epidemiology and control of human and animal diseases such as AIDS, influenza, foot-and-mouth disease and BSE, attention needs to be paid to developing a more holistic framework that captures the role of the underlying drivers of disease risks, from demography and behaviour to land use and climate change. At the same time, there is still considerable room for improvement in how quantitative analyses and their outputs are communicated to policy makers and other stakeholders. A starting point would be generally accepted guidelines for ‘good practice’ for the development and the use of predictive models. PMID:21624924

  4. A Crowdsourcing Approach to Developing and Assessing Prediction Algorithms for AML Prognosis

    PubMed Central

    Noren, David P.; Long, Byron L.; Norel, Raquel; Rrhissorrakrai, Kahn; Hess, Kenneth; Hu, Chenyue Wendy; Bisberg, Alex J.; Schultz, Andre; Engquist, Erik; Liu, Li; Lin, Xihui; Chen, Gregory M.; Xie, Honglei; Hunter, Geoffrey A. M.; Norman, Thea; Friend, Stephen H.; Stolovitzky, Gustavo; Kornblau, Steven; Qutub, Amina A.

    2016-01-01

    Acute Myeloid Leukemia (AML) is a fatal hematological cancer. The genetic abnormalities underlying AML are extremely heterogeneous among patients, making prognosis and treatment selection very difficult. While clinical proteomics data has the potential to improve prognosis accuracy, thus far, the quantitative means to do so have yet to be developed. Here we report the results and insights gained from the DREAM 9 Acute Myeloid Leukemia Outcome Prediction Challenge (AML-OPC), a crowdsourcing effort designed to promote the development of quantitative methods for AML prognosis prediction. We identify the most accurate and robust models in predicting patient response to therapy, remission duration, and overall survival. We further investigate patient response to therapy, a clinically actionable prediction, and find that patients who are classified as resistant to therapy are harder to predict than responsive patients across the 31 models submitted to the challenge. The top two performing models, which held a high sensitivity to these patients, substantially utilized the proteomics data to make predictions. Using these models, we also identify which signaling proteins were useful in predicting patient therapeutic response. PMID:27351836

  5. Hydrodynamic predictions for 5.44 TeV Xe+Xe collisions

    NASA Astrophysics Data System (ADS)

    Giacalone, Giuliano; Noronha-Hostler, Jacquelyn; Luzum, Matthew; Ollitrault, Jean-Yves

    2018-03-01

    We argue that relativistic hydrodynamics is able to make robust predictions for soft particle production in Xe+Xe collisions at the CERN Large Hadron Collider (LHC). The change of system size from Pb+Pb to Xe+Xe provides a unique opportunity to test the scaling laws inherent to fluid dynamics. Using event-by-event hydrodynamic simulations, we make quantitative predictions for several observables: mean transverse momentum, anisotropic flow coefficients, and their fluctuations. Results are shown as a function of collision centrality.

  6. Conditional Toxicity Value (CTV) Predictor: An In Silico Approach for Generating Quantitative Risk Estimates for Chemicals.

    PubMed

    Wignall, Jessica A; Muratov, Eugene; Sedykh, Alexander; Guyton, Kathryn Z; Tropsha, Alexander; Rusyn, Ivan; Chiu, Weihsueh A

    2018-05-01

    Human health assessments synthesize human, animal, and mechanistic data to produce toxicity values that are key inputs to risk-based decision making. Traditional assessments are data-, time-, and resource-intensive, and they cannot be developed for most environmental chemicals owing to a lack of appropriate data. As recommended by the National Research Council, we propose a solution for predicting toxicity values for data-poor chemicals through development of quantitative structure-activity relationship (QSAR) models. We used a comprehensive database of chemicals with existing regulatory toxicity values from U.S. federal and state agencies to develop quantitative QSAR models. We compared QSAR-based model predictions to those based on high-throughput screening (HTS) assays. QSAR models for noncancer threshold-based values and cancer slope factors had cross-validation-based Q² of 0.25-0.45, mean model errors of 0.70-1.11 log10 units, and applicability domains covering >80% of environmental chemicals. Toxicity values predicted from QSAR models developed in this study were more accurate and precise than those based on HTS assays or mean-based predictions. A publicly accessible web interface to make predictions for any chemical of interest is available at http://toxvalue.org. An in silico tool that can predict toxicity values with an uncertainty of an order of magnitude or less can be used to quickly and quantitatively assess risks of environmental chemicals when traditional toxicity data or human health assessments are unavailable. This tool can fill a critical gap in the risk assessment and management of data-poor chemicals. https://doi.org/10.1289/EHP2998.
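
    For reference, the cross-validated Q² and the RMSE (in log10 units) quoted above can be computed from out-of-fold predictions as in this small sketch (toy numbers, not the study's data):

    ```python
    import numpy as np

    def q2_and_rmse(y_obs, y_cv_pred):
        """Cross-validated Q^2 and RMSE in the units of y (e.g., log10 toxicity values).

        y_cv_pred should hold out-of-fold predictions, e.g., from k-fold cross-validation.
        """
        y_obs = np.asarray(y_obs, dtype=float)
        y_cv_pred = np.asarray(y_cv_pred, dtype=float)
        press = np.sum((y_obs - y_cv_pred) ** 2)       # predictive residual sum of squares
        tss = np.sum((y_obs - y_obs.mean()) ** 2)      # total sum of squares
        q2 = 1.0 - press / tss
        rmse = np.sqrt(press / len(y_obs))
        return q2, rmse

    # Toy illustration with made-up log10 toxicity values
    y = [1.2, 2.5, 0.8, 3.1, 1.9]
    y_hat = [1.5, 2.2, 1.1, 2.7, 2.0]
    print(q2_and_rmse(y, y_hat))
    ```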

  7. Predicting quantitative and qualitative values of recreation participation

    Treesearch

    Shafer, Elwood L., Jr.; Moeller, George

    1971-01-01

    If future recreation consumption and associated intangible values can be predicted, the problem of rapid decision making in recreation-resource management can be reduced, and the problems of implementing those decisions can be anticipated. Management and research responsibilities for meeting recreation demand are discussed, and proved methods for forecasting recreation...

  8. Linking short-term responses to ecologically-relevant outcomes

    EPA Pesticide Factsheets

    Opportunity to participate in the conduct of collaborative integrative lab, field and modelling efforts to characterize molecular-to-organismal level responses and make quantitative testable predictions of population level outcomes

  9. A Bayesian network to predict coastal vulnerability to sea level rise

    USGS Publications Warehouse

    Gutierrez, B.T.; Plant, N.G.; Thieler, E.R.

    2011-01-01

    Sea level rise during the 21st century will have a wide range of effects on coastal environments, human development, and infrastructure in coastal areas. The broad range of complex factors influencing coastal systems contributes to large uncertainties in predicting long-term sea level rise impacts. Here we explore and demonstrate the capabilities of a Bayesian network (BN) to predict long-term shoreline change associated with sea level rise and make quantitative assessments of prediction uncertainty. A BN is used to define relationships between driving forces, geologic constraints, and coastal response for the U.S. Atlantic coast that include observations of local rates of relative sea level rise, wave height, tide range, geomorphic classification, coastal slope, and shoreline change rate. The BN is used to make probabilistic predictions of shoreline retreat in response to different future sea level rise rates. Results demonstrate that the probability of shoreline retreat increases with higher rates of sea level rise. Where more specific information is included, the probability of shoreline change increases in a number of cases, indicating more confident predictions. A hindcast evaluation of the BN indicates that the network correctly predicts 71% of the cases. Evaluation of the results using Brier skill and log likelihood ratio scores indicates that the network provides shoreline change predictions that are better than the prior probability. Shoreline change outcomes indicating stability (-1 to 1 m/yr) were not well predicted. We find that BNs can assimilate important factors contributing to coastal change in response to sea level rise and can make quantitative, probabilistic predictions that can be applied to coastal management decisions. Copyright © 2011 by the American Geophysical Union.
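
    The Brier-skill evaluation mentioned above compares the model's probabilistic predictions against a reference forecast such as the prior probability; a minimal sketch (made-up probabilities and outcomes):

    ```python
    import numpy as np

    def brier_skill_score(p_pred, outcomes, p_reference):
        """Brier skill of probabilistic predictions relative to a reference forecast
        (e.g., the prior probability of shoreline retreat). Positive values mean the
        model beats the reference."""
        p_pred = np.asarray(p_pred, dtype=float)
        outcomes = np.asarray(outcomes, dtype=float)        # 1 = retreat observed, 0 = not
        p_reference = np.asarray(p_reference, dtype=float)
        bs_model = np.mean((p_pred - outcomes) ** 2)
        bs_ref = np.mean((p_reference - outcomes) ** 2)
        return 1.0 - bs_model / bs_ref

    # Toy illustration (invented values)
    print(brier_skill_score([0.8, 0.3, 0.9, 0.2], [1, 0, 1, 1], [0.5, 0.5, 0.5, 0.5]))
    ```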

  10. Developing a New Quantitative Account of Backward Masking

    ERIC Educational Resources Information Center

    Francis, Gregory

    2003-01-01

    A new general explanation for U-shaped backward masking is analyzed and found to predict shifts in the interstimulus interval (ISI) that produces strongest masking. This predicted shift is then compared to six sets of masking data. The resulting comparisons force the general explanation to make certain assumptions to account for the data. In this…

  11. Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes

    PubMed Central

    Zhang, Hong; Pei, Yun

    2016-01-01

    Quantitative prediction of construction noise is crucial to evaluate construction plans to help make decisions to address noise levels. Considering limitations of existing methods for measuring or predicting the construction noise and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting the construction noise in terms of equivalent continuous level. The noise-calculating models regarding synchronization, propagation and equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes to provision of a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions. PMID:27529266
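
    For context, the equivalent continuous (energy-average) sound level over a period $T$ that the simulation targets is conventionally defined as

    $$L_{\mathrm{eq},T} = 10\,\log_{10}\!\left(\frac{1}{T}\int_{0}^{T} 10^{L(t)/10}\,\mathrm{d}t\right),$$

    where $L(t)$ is the instantaneous sound level in dB; the discrete-event simulation described above supplies the time-varying noise contributions that enter this average.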

  12. Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes.

    PubMed

    Zhang, Hong; Pei, Yun

    2016-08-12

    Quantitative prediction of construction noise is crucial to evaluate construction plans to help make decisions to address noise levels. Considering limitations of existing methods for measuring or predicting the construction noise and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting the construction noise in terms of equivalent continuous level. The noise-calculating models regarding synchronization, propagation and equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes to provision of a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions.

  13. Toward a Theoretical Model of Decision-Making and Resistance to Change among Higher Education Online Course Designers

    ERIC Educational Resources Information Center

    Dodd, Bucky J.

    2013-01-01

    Online course design is an emerging practice in higher education, yet few theoretical models currently exist to explain or predict how the diffusion of innovations occurs in this space. This study used a descriptive, quantitative survey research design to examine theoretical relationships between decision-making style and resistance to change…

  14. Quantitative PET/CT scanner performance characterization based upon the society of nuclear medicine and molecular imaging clinical trials network oncology clinical simulator phantom.

    PubMed

    Sunderland, John J; Christian, Paul E

    2015-01-01

    The Clinical Trials Network (CTN) of the Society of Nuclear Medicine and Molecular Imaging (SNMMI) operates a PET/CT phantom imaging program using the CTN's oncology clinical simulator phantom, designed to validate scanners at sites that wish to participate in oncology clinical trials. Since its inception in 2008, the CTN has collected 406 well-characterized phantom datasets from 237 scanners at 170 imaging sites covering the spectrum of commercially available PET/CT systems. The combined and collated phantom data describe a global profile of quantitative performance and variability of PET/CT data used in both clinical practice and clinical trials. Individual sites filled and imaged the CTN oncology PET phantom according to detailed instructions. Standard clinical reconstructions were requested and submitted. The phantom itself contains uniform regions suitable for scanner calibration assessment, lung fields, and 6 hot spheric lesions with diameters ranging from 7 to 20 mm at a 4:1 contrast ratio with primary background. The CTN Phantom Imaging Core evaluated the quality of the phantom fill and imaging and measured background standardized uptake values to assess scanner calibration and maximum standardized uptake values of all 6 lesions to review quantitative performance. Scanner make-and-model-specific measurements were pooled and then subdivided by reconstruction to create scanner-specific quantitative profiles. Different makes and models of scanners predictably demonstrated different quantitative performance profiles including, in some cases, small calibration bias. Differences in site-specific reconstruction parameters increased the quantitative variability among similar scanners, with postreconstruction smoothing filters being the most influential parameter. Quantitative assessment of this intrascanner variability over this large collection of phantom data gives, for the first time, estimates of reconstruction variance introduced into trials from allowing trial sites to use their preferred reconstruction methodologies. Predictably, time-of-flight-enabled scanners exhibited less size-based partial-volume bias than non-time-of-flight scanners. The CTN scanner validation experience over the past 5 y has generated a rich, well-curated phantom dataset from which PET/CT make-and-model and reconstruction-dependent quantitative behaviors were characterized for the purposes of understanding and estimating scanner-based variances in clinical trials. These results should make it possible to identify and recommend make-and-model-specific reconstruction strategies to minimize measurement variability in cancer clinical trials. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
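
    For reference, the standardized uptake values used throughout this characterization are computed in the usual way (assuming a tissue density of about 1 g/mL):

    $$\mathrm{SUV} = \frac{C_{t}\ [\mathrm{kBq/mL}]}{A_{\mathrm{inj}}\ [\mathrm{kBq}]\,/\,m\ [\mathrm{g}]},$$

    where $C_t$ is the decay-corrected activity concentration in the region of interest, $A_{\mathrm{inj}}$ the injected activity, and $m$ the patient mass (or, for the phantom, the fill mass).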

  15. Applications of Microfluidics in Quantitative Biology.

    PubMed

    Bai, Yang; Gao, Meng; Wen, Lingling; He, Caiyun; Chen, Yuan; Liu, Chenli; Fu, Xiongfei; Huang, Shuqiang

    2018-05-01

    Quantitative biology is dedicated to taking advantage of quantitative reasoning and advanced engineering technologies to make biology more predictable. Microfluidics, as an emerging technique, provides new approaches to precisely control fluidic conditions on small scales and collect data in high-throughput and quantitative manners. In this review, the authors present the relevant applications of microfluidics to quantitative biology based on two major categories (channel-based microfluidics and droplet-based microfluidics), and their typical features. We also envision some other microfluidic techniques that may not be employed in quantitative biology right now, but have great potential in the near future. © 2017 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences. Biotechnology Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  16. Cognitive Predictors of Achievement Growth in Mathematics: A Five Year Longitudinal Study

    PubMed Central

    Geary, David C.

    2011-01-01

    The study's goal was to identify the beginning of first grade quantitative competencies that predict mathematics achievement start point and growth through fifth grade. Measures of number, counting, and arithmetic competencies were administered in early first grade and used to predict mathematics achievement through fifth (n = 177), while controlling for intelligence, working memory, and processing speed. Multilevel models revealed intelligence, processing speed, and the central executive component of working memory predicted achievement or achievement growth in mathematics and, as a contrast domain, word reading. The phonological loop was uniquely predictive of word reading and the visuospatial sketch pad of mathematics. Early fluency in processing and manipulating numerical set size and Arabic numerals, accurate use of sophisticated counting procedures for solving addition problems, and accuracy in making placements on a mathematical number line were uniquely predictive of mathematics achievement. Use of memory-based processes to solve addition problems predicted mathematics and reading achievement but in different ways. The results identify the early quantitative competencies that uniquely contribute to mathematics learning. PMID:21942667

  17. Predicting Team Performance through Human Behavioral Sensing and Quantitative Workflow Instrumentation

    DTIC Science & Technology

    2016-07-27

    make risk-informed decisions during serious games. Statistical models of intra-game performance were developed to determine whether behaviors in ... specific facets of the gameplay workflow were predictive of analytical performance and game outcomes. A study of over seventy instrumented teams revealed ... more accurate game decisions. Keywords: Humatics · Serious Games · Human-System Interaction · Instrumentation · Teamwork · Communication Analysis

  18. Testing process predictions of models of risky choice: a quantitative model comparison approach

    PubMed Central

    Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard

    2013-01-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472
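
    A schematic sketch of the two contrasted model classes for two-outcome gain gambles (a simplified reading of the priority heuristic, not the authors' implementation; the gamble values are invented):

    ```python
    def priority_heuristic(gamble_a, gamble_b, max_outcome):
        """Simplified priority heuristic for two-outcome gain gambles: compare minimum
        gains, then probabilities of the minimum gains, then maximum gains; stop at the
        first reason whose difference exceeds the aspiration level. Gambles are dicts
        with keys 'min', 'p_min', 'max'. Illustrative only."""
        aspiration = 0.1 * max_outcome          # 1/10 of the maximum gain
        if abs(gamble_a['min'] - gamble_b['min']) >= aspiration:
            return 'A' if gamble_a['min'] > gamble_b['min'] else 'B'
        if abs(gamble_a['p_min'] - gamble_b['p_min']) >= 0.1:
            return 'A' if gamble_a['p_min'] < gamble_b['p_min'] else 'B'
        return 'A' if gamble_a['max'] > gamble_b['max'] else 'B'

    def expected_value_choice(gamble_a, gamble_b):
        """Contrast case: a bare expectation model that trades off probabilities and outcomes."""
        ev = lambda g: g['p_min'] * g['min'] + (1 - g['p_min']) * g['max']
        return 'A' if ev(gamble_a) > ev(gamble_b) else 'B'

    a = {'min': 0, 'p_min': 0.2, 'max': 100}
    b = {'min': 30, 'p_min': 0.5, 'max': 60}
    print(priority_heuristic(a, b, max_outcome=100), expected_value_choice(a, b))  # B vs. A
    ```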

  19. Extensions and evaluations of a general quantitative theory of forest structure and dynamics

    PubMed Central

    Enquist, Brian J.; West, Geoffrey B.; Brown, James H.

    2009-01-01

    Here, we present the second part of a quantitative theory for the structure and dynamics of forests under demographic and resource steady state. The theory is based on individual-level allometric scaling relations for how trees use resources, fill space, and grow. These scale up to determine emergent properties of diverse forests, including size–frequency distributions, spacing relations, canopy configurations, mortality rates, population dynamics, successional dynamics, and resource flux rates. The theory uniquely makes quantitative predictions for both stand-level scaling exponents and normalizations. We evaluate these predictions by compiling and analyzing macroecological datasets from several tropical forests. The close match between theoretical predictions and data suggests that forests are organized by a set of very general scaling rules. Our mechanistic theory is based on allometric scaling relations, is complementary to “demographic theory,” but is fundamentally different in approach. It provides a quantitative baseline for understanding deviations from predictions due to other factors, including disturbance, variation in branching architecture, asymmetric competition, resource limitation, and other sources of mortality, which are not included in the deliberately simplified theory. The theory should apply to a wide range of forests despite large differences in abiotic environment, species diversity, and taxonomic and functional composition. PMID:19363161

  20. Tissue microarrays and quantitative tissue-based image analysis as a tool for oncology biomarker and diagnostic development.

    PubMed

    Dolled-Filhart, Marisa P; Gustavson, Mark D

    2012-11-01

    Translational oncology has been improved by using tissue microarrays (TMAs), which facilitate biomarker analysis of large cohorts on a single slide. This has allowed for rapid analysis and validation of potential biomarkers for prognostic and predictive value, as well as for evaluation of biomarker prevalence. Coupled with quantitative analysis of immunohistochemical (IHC) staining, objective and standardized biomarker data from tumor samples can further advance companion diagnostic approaches for the identification of drug-responsive or resistant patient subpopulations. This review covers the advantages, disadvantages and applications of TMAs for biomarker research. Research literature and reviews of TMAs and quantitative image analysis methodology have been surveyed for this review (with an AQUA® analysis focus). Applications such as multi-marker diagnostic development and pathway-based biomarker subpopulation analyses are described. Tissue microarrays are a useful tool for biomarker analyses including prevalence surveys, disease progression assessment and addressing potential prognostic or predictive value. By combining quantitative image analysis with TMAs, analyses will be more objective and reproducible, allowing for more robust IHC-based diagnostic test development. Quantitative multi-biomarker IHC diagnostic tests that can predict drug response will allow for greater success of clinical trials for targeted therapies and provide more personalized clinical decision making.

  1. Making predictions of mangrove deforestation: a comparison of two methods in Kenya.

    PubMed

    Rideout, Alasdair J R; Joshi, Neha P; Viergever, Karin M; Huxham, Mark; Briers, Robert A

    2013-11-01

    Deforestation of mangroves is of global concern given their importance for carbon storage, biogeochemical cycling and the provision of other ecosystem services, but the links between rates of loss and potential drivers or risk factors are rarely evaluated. Here, we identified key drivers of mangrove loss in Kenya and compared two different approaches to predicting risk. Risk factors tested included various possible predictors of anthropogenic deforestation, related to population, suitability for land use change and accessibility. Two approaches were taken to modelling risk; a quantitative statistical approach and a qualitative categorical ranking approach. A quantitative model linking rates of loss to risk factors was constructed based on generalized least squares regression and using mangrove loss data from 1992 to 2000. Population density, soil type and proximity to roads were the most important predictors. In order to validate this model it was used to generate a map of losses of Kenyan mangroves predicted to have occurred between 2000 and 2010. The qualitative categorical model was constructed using data from the same selection of variables, with the coincidence of different risk factors in particular mangrove areas used in an additive manner to create a relative risk index which was then mapped. Quantitative predictions of loss were significantly correlated with the actual loss of mangroves between 2000 and 2010 and the categorical risk index values were also highly correlated with the quantitative predictions. Hence, in this case the relatively simple categorical modelling approach was of similar predictive value to the more complex quantitative model of mangrove deforestation. The advantages and disadvantages of each approach are discussed, and the implications for mangroves are outlined. © 2013 Blackwell Publishing Ltd.

  2. Quantitative computed tomography for the prediction of pulmonary function after lung cancer surgery: a simple method using simulation software.

    PubMed

    Ueda, Kazuhiro; Tanaka, Toshiki; Li, Tao-Sheng; Tanaka, Nobuyuki; Hamano, Kimikazu

    2009-03-01

    The prediction of pulmonary functional reserve is mandatory in therapeutic decision-making for patients with resectable lung cancer, especially those with underlying lung disease. Volumetric analysis in combination with densitometric analysis of the affected lung lobe or segment with quantitative computed tomography (CT) helps to identify residual pulmonary function, although the utility of this modality needs investigation. The subjects of this prospective study were 30 patients with resectable lung cancer. A three-dimensional CT lung model was created with voxels representing normal lung attenuation (-600 to -910 Hounsfield units). Residual pulmonary function was predicted by drawing a boundary line between the lung to be preserved and that to be resected, directly on the lung model. The predicted values were correlated with the postoperative measured values. The predicted and measured values corresponded well (r=0.89, p<0.001). Although the predicted values corresponded with values predicted by simple calculation using a segment-counting method (r=0.98), there were two outliers whose pulmonary functional reserves were predicted more accurately by CT than by segment counting. The measured pulmonary functional reserves were significantly higher than the predicted values in patients with extensive emphysematous areas (<-910 Hounsfield units), but not in patients with chronic obstructive pulmonary disease. Quantitative CT yielded accurate prediction of functional reserve after lung cancer surgery and helped to identify patients whose functional reserves are likely to be underestimated. Hence, this modality should be utilized for patients with marginal pulmonary function.
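
    For comparison, the segment-counting estimate referred to above is conventionally computed as (commonly assuming 19 functional bronchopulmonary segments)

    $$\mathrm{FEV}_{1,\mathrm{ppo}} = \mathrm{FEV}_{1,\mathrm{preop}} \times \frac{19 - n_{\mathrm{resected}}}{19},$$

    whereas the quantitative CT approach replaces this equal-weight segment assumption with the measured volume of normally attenuating (-600 to -910 Hounsfield units) lung in the region to be preserved.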

  3. Microstructure and rheology of thermoreversible nanoparticle gels.

    PubMed

    Ramakrishnan, S; Zukoski, C F

    2006-08-29

    Naïve mode coupling theory is applied to particles interacting with short-range Yukawa attractions. Model results for the location of the gel line and the modulus of the resulting gels are reduced to algebraic equations capturing the effects of the range and strength of attraction. This model is then applied to thermoreversible gels composed of octadecyl silica particles suspended in decalin. The application of the model to the experimental system requires linking the experimental variable controlling strength of attraction, temperature, to the model strength of attraction. With this link, the model predicts temperature and volume fraction dependencies of gelation and modulus with five parameters: particle size, particle volume fraction, overlap volume of surface hairs, and theta temperature. In comparing model predictions with experimental results, we first observe that in these thermal gels there is no evidence of clustering as has been reported in depletion gels. One consequence of this observation is that there are no additional adjustable parameters required to make quantitative comparisons between experimental results and model predictions. Our results indicate that the naïve mode coupling approach taken here in conjunction with a model linking temperature to strength of attraction provides a robust approach for making quantitative predictions of gel mechanical properties. Extension of model predictions to additional experimental systems requires linking experimental variables to the Yukawa strength and range of attraction.
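
    One common dimensionless form of the hard-core attractive Yukawa pair potential used in such models (the notation is illustrative; the paper's exact parameterization may differ) is

    $$\beta u(r) = \begin{cases} \infty, & r < \sigma,\\[4pt] -K\,\dfrac{\exp\!\left[-z\,(r-\sigma)/\sigma\right]}{r/\sigma}, & r \ge \sigma, \end{cases}$$

    where $\sigma$ is the particle diameter, $K$ the strength of attraction at contact, and $z$ its inverse range; the mapping described above amounts to linking the experimental temperature to $K$.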

  4. Are power calculations useful? A multicentre neuroimaging study

    PubMed Central

    Suckling, John; Henty, Julian; Ecker, Christine; Deoni, Sean C; Lombardo, Michael V; Baron-Cohen, Simon; Jezzard, Peter; Barnes, Anna; Chakrabarti, Bhismadev; Ooi, Cinly; Lai, Meng-Chuan; Williams, Steven C; Murphy, Declan GM; Bullmore, Edward

    2014-01-01

    There are now many reports of imaging experiments with small cohorts of typical participants that precede large-scale, often multicentre studies of psychiatric and neurological disorders. Data from these calibration experiments are sufficient to make estimates of statistical power and predictions of sample size and minimum observable effect sizes. In this technical note, we suggest how previously reported voxel-based power calculations can support decision making in the design, execution and analysis of cross-sectional multicentre imaging studies. The choice of MRI acquisition sequence, distribution of recruitment across acquisition centres, and changes to the registration method applied during data analysis are considered as examples. The consequences of modification are explored in quantitative terms by assessing the impact on sample size for a fixed effect size and detectable effect size for a fixed sample size. The calibration experiment dataset used for illustration was a precursor to the now complete Medical Research Council Autism Imaging Multicentre Study (MRC-AIMS). Validation of the voxel-based power calculations is made by comparing the predicted values from the calibration experiment with those observed in MRC-AIMS. The effect of non-linear mappings during image registration to a standard stereotactic space on the prediction is explored with reference to the amount of local deformation. In summary, power calculations offer a validated, quantitative means of making informed choices on important factors that influence the outcome of studies that consume significant resources. PMID:24644267
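
    The generic trade-off the authors exploit (sample size for a fixed detectable effect, or detectable effect for a fixed sample size) can be illustrated with the standard two-sample normal approximation, a much-simplified stand-in for the voxel-based calculations in the paper:

    ```python
    from scipy.stats import norm

    def n_per_group(effect_size_d, alpha=0.05, power=0.8):
        """Approximate sample size per group for a two-sample comparison of means
        (normal approximation), given a standardized effect size (Cohen's d)."""
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        return 2 * (z_alpha + z_beta) ** 2 / effect_size_d ** 2

    def detectable_effect(n_fixed, alpha=0.05, power=0.8):
        """Inverse question: minimum detectable standardized effect for a fixed per-group n."""
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        return ((2 * (z_alpha + z_beta) ** 2) / n_fixed) ** 0.5

    print(n_per_group(0.5))        # ~63 participants per group for d = 0.5
    print(detectable_effect(64))   # ~0.5 detectable effect with 64 per group
    ```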

  5. Demystifying Multitask Deep Neural Networks for Quantitative Structure-Activity Relationships.

    PubMed

    Xu, Yuting; Ma, Junshui; Liaw, Andy; Sheridan, Robert P; Svetnik, Vladimir

    2017-10-23

    Deep neural networks (DNNs) are complex computational models that have found great success in many artificial intelligence applications, such as computer vision1,2 and natural language processing.3,4 In the past four years, DNNs have also generated promising results for quantitative structure-activity relationship (QSAR) tasks.5,6 Previous work showed that DNNs can routinely make better predictions than traditional methods, such as random forests, on a diverse collection of QSAR data sets. It was also found that multitask DNN models-those trained on and predicting multiple QSAR properties simultaneously-outperform DNNs trained separately on the individual data sets in many, but not all, tasks. To date there has been no satisfactory explanation of why the QSAR of one task embedded in a multitask DNN can borrow information from other unrelated QSAR tasks. Thus, using multitask DNNs in a way that consistently provides a predictive advantage becomes a challenge. In this work, we explored why multitask DNNs make a difference in predictive performance. Our results show that during prediction a multitask DNN does borrow "signal" from molecules with similar structures in the training sets of the other tasks. However, whether this borrowing leads to better or worse predictive performance depends on whether the activities are correlated. On the basis of this, we have developed a strategy to use multitask DNNs that incorporate prior domain knowledge to select training sets with correlated activities, and we demonstrate its effectiveness on several examples.

  6. Models of volcanic eruption hazards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wohletz, K.H.

    1992-01-01

    Volcanic eruptions pose an ever present but poorly constrained hazard to life and property for geothermal installations in volcanic areas. Because eruptions occur sporadically and may limit field access, quantitative and systematic field studies of eruptions are difficult to complete. Circumventing this difficulty, laboratory models and numerical simulations are pivotal in building our understanding of eruptions. For example, the results of fuel-coolant interaction experiments show that magma-water interaction controls many eruption styles. Applying these results, increasing numbers of field studies now document and interpret the role of external water eruptions. Similarly, numerical simulations solve the fundamental physics of high-speed fluid flow and give quantitative predictions that elucidate the complexities of pyroclastic flows and surges. A primary goal of these models is to guide geologists in searching for critical field relationships and making their interpretations. Coupled with field work, modeling is beginning to allow more quantitative and predictive volcanic hazard assessments.

  7. Models of volcanic eruption hazards

    NASA Astrophysics Data System (ADS)

    Wohletz, K. H.

    Volcanic eruptions pose an ever present but poorly constrained hazard to life and property for geothermal installations in volcanic areas. Because eruptions occur sporadically and may limit field access, quantitative and systematic field studies of eruptions are difficult to complete. Circumventing this difficulty, laboratory models and numerical simulations are pivotal in building our understanding of eruptions. For example, the results of fuel-coolant interaction experiments show that magma-water interaction controls many eruption styles. Applying these results, increasing numbers of field studies now document and interpret the role of external water eruptions. Similarly, numerical simulations solve the fundamental physics of high-speed fluid flow and give quantitative predictions that elucidate the complexities of pyroclastic flows and surges. A primary goal of these models is to guide geologists in searching for critical field relationships and making their interpretations. Coupled with field work, modeling is beginning to allow more quantitative and predictive volcanic hazard assessments.

  8. Health Impacts of Increased Physical Activity from Changes in Transportation Infrastructure: Quantitative Estimates for Three Communities

    PubMed Central

    2015-01-01

    Recently, two quantitative tools have emerged for predicting the health impacts of projects that change population physical activity: the Health Economic Assessment Tool (HEAT) and Dynamic Modeling for Health Impact Assessment (DYNAMO-HIA). HEAT has been used to support health impact assessments of transportation infrastructure projects, but DYNAMO-HIA has not been previously employed for this purpose nor have the two tools been compared. To demonstrate the use of DYNAMO-HIA for supporting health impact assessments of transportation infrastructure projects, we employed the model in three communities (urban, suburban, and rural) in North Carolina. We also compared DYNAMO-HIA and HEAT predictions in the urban community. Using DYNAMO-HIA, we estimated benefit-cost ratios of 20.2 (95% C.I.: 8.7–30.6), 0.6 (0.3–0.9), and 4.7 (2.1–7.1) for the urban, suburban, and rural projects, respectively. For a 40-year time period, the HEAT predictions of deaths avoided by the urban infrastructure project were three times as high as DYNAMO-HIA's predictions due to HEAT's inability to account for changing population health characteristics over time. Quantitative health impact assessment coupled with economic valuation is a powerful tool for integrating health considerations into transportation decision-making. However, to avoid overestimating benefits, such quantitative HIAs should use dynamic, rather than static, approaches. PMID:26504832

  9. Health Impacts of Increased Physical Activity from Changes in Transportation Infrastructure: Quantitative Estimates for Three Communities.

    PubMed

    Mansfield, Theodore J; MacDonald Gibson, Jacqueline

    2015-01-01

    Recently, two quantitative tools have emerged for predicting the health impacts of projects that change population physical activity: the Health Economic Assessment Tool (HEAT) and Dynamic Modeling for Health Impact Assessment (DYNAMO-HIA). HEAT has been used to support health impact assessments of transportation infrastructure projects, but DYNAMO-HIA has not been previously employed for this purpose nor have the two tools been compared. To demonstrate the use of DYNAMO-HIA for supporting health impact assessments of transportation infrastructure projects, we employed the model in three communities (urban, suburban, and rural) in North Carolina. We also compared DYNAMO-HIA and HEAT predictions in the urban community. Using DYNAMO-HIA, we estimated benefit-cost ratios of 20.2 (95% C.I.: 8.7-30.6), 0.6 (0.3-0.9), and 4.7 (2.1-7.1) for the urban, suburban, and rural projects, respectively. For a 40-year time period, the HEAT predictions of deaths avoided by the urban infrastructure project were three times as high as DYNAMO-HIA's predictions due to HEAT's inability to account for changing population health characteristics over time. Quantitative health impact assessment coupled with economic valuation is a powerful tool for integrating health considerations into transportation decision-making. However, to avoid overestimating benefits, such quantitative HIAs should use dynamic, rather than static, approaches.

  10. Thinking Fast and Slow about Causality: Response to Palinkas

    ERIC Educational Resources Information Center

    Marsh, Jeanne C.

    2014-01-01

    Larry Palinkas advances the developing science of social work by providing an explanation of how social science research methods, both qualitative and quantitative, can improve our capacity to draw causal inferences. Understanding causal relations and making causal inferences--with the promise of being able to predict and control outcomes--is…

  11. Quantitative AOP linking aromatase inhibition to impaired reproduction: A case study in predictive ecotoxicology

    EPA Science Inventory

    The adverse outcome pathway (AOP) framework is intended to help support greater use of mechanistic toxicology data as a basis for risk assessment and/or regulatory decision-making. While there have been clear advances in the ability to rapidly generate mechanistically-oriented da...

  12. Quantitating Organoleptic Volatile Phenols in Smoke-Exposed Vitis vinifera Berries.

    PubMed

    Noestheden, Matthew; Thiessen, Katelyn; Dennis, Eric G; Tiet, Ben; Zandberg, Wesley F

    2017-09-27

    Accurate methods for quantitating volatile phenols (i.e., guaiacol, syringol, 4-ethylphenol, etc.) in smoke-exposed Vitis vinifera berries prior to fermentation are needed to predict the likelihood of perceptible smoke taint following vinification. Reported here is a complete, cross-validated analytical workflow to accurately quantitate free and glycosidically bound volatile phenols in smoke-exposed berries using liquid-liquid extraction, acid-mediated hydrolysis, and gas chromatography-tandem mass spectrometry. The reported workflow addresses critical gaps in existing methods for volatile phenols that impact quantitative accuracy, most notably the effect of injection port temperature and the variability in acid-mediated hydrolytic procedures currently used. Addressing these deficiencies will help the wine industry make accurate, informed decisions when producing wines from smoke-exposed berries.

  13. The Quantitative Science of Evaluating Imaging Evidence.

    PubMed

    Genders, Tessa S S; Ferket, Bart S; Hunink, M G Myriam

    2017-03-01

    Cardiovascular diagnostic imaging tests are increasingly used in everyday clinical practice, but are often imperfect, just like any other diagnostic test. The performance of a cardiovascular diagnostic imaging test is usually expressed in terms of sensitivity and specificity compared with the reference standard (gold standard) for diagnosing the disease. However, evidence-based application of a diagnostic test also requires knowledge about the pre-test probability of disease, the benefit of making a correct diagnosis, the harm caused by false-positive imaging test results, and potential adverse effects of performing the test itself. To assist in clinical decision making regarding appropriate use of cardiovascular diagnostic imaging tests, we reviewed quantitative concepts related to diagnostic performance (e.g., sensitivity, specificity, predictive values, likelihood ratios), as well as possible biases and solutions in diagnostic performance studies, Bayesian principles, and the threshold approach to decision making. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
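
    A minimal sketch of the Bayesian updating reviewed above, combining pre-test probability with a likelihood ratio derived from sensitivity and specificity (the numbers are illustrative):

    ```python
    def post_test_probability(pre_test_prob, sensitivity, specificity, positive_result=True):
        """Bayesian update of disease probability from a test result using likelihood ratios."""
        lr = sensitivity / (1 - specificity) if positive_result else (1 - sensitivity) / specificity
        pre_odds = pre_test_prob / (1 - pre_test_prob)
        post_odds = pre_odds * lr
        return post_odds / (1 + post_odds)

    # E.g., 30% pre-test probability, test with 90% sensitivity and 80% specificity:
    print(post_test_probability(0.30, 0.90, 0.80, positive_result=True))   # ~0.66
    print(post_test_probability(0.30, 0.90, 0.80, positive_result=False))  # ~0.05
    ```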

  14. Investigation of the Thermomechanical Response of Shape Memory Alloy Hybrid Composite Beams

    NASA Technical Reports Server (NTRS)

    Davis, Brian A.

    2005-01-01

    Previous work at NASA Langley Research Center (LaRC) involved fabrication and testing of composite beams with embedded, pre-strained shape memory alloy (SMA) ribbons. That study also provided comparison of experimental results with numerical predictions from a research code making use of a new thermoelastic model for shape memory alloy hybrid composite (SMAHC) structures. The previous work showed qualitative validation of the numerical model. However, deficiencies in the experimental-numerical correlation were noted and hypotheses for the discrepancies were given for further investigation. The goal of this work is to refine the experimental measurement and numerical modeling approaches in order to better understand the discrepancies, improve the correlation between prediction and measurement, and provide rigorous quantitative validation of the numerical model. Thermal buckling, post-buckling, and random responses to thermal and inertial (base acceleration) loads are studied. Excellent agreement is achieved between the predicted and measured results, thereby quantitatively validating the numerical tool.

  15. The Incremental Value of Subjective and Quantitative Assessment of 18F-FDG PET for the Prediction of Pathologic Complete Response to Preoperative Chemoradiotherapy in Esophageal Cancer.

    PubMed

    van Rossum, Peter S N; Fried, David V; Zhang, Lifei; Hofstetter, Wayne L; van Vulpen, Marco; Meijer, Gert J; Court, Laurence E; Lin, Steven H

    2016-05-01

    A reliable prediction of a pathologic complete response (pathCR) to chemoradiotherapy before surgery for esophageal cancer would enable investigators to study the feasibility and outcome of an organ-preserving strategy after chemoradiotherapy. So far no clinical parameters or diagnostic studies are able to accurately predict which patients will achieve a pathCR. The aim of this study was to determine whether subjective and quantitative assessment of baseline and postchemoradiation (18)F-FDG PET can improve the accuracy of predicting pathCR to preoperative chemoradiotherapy in esophageal cancer beyond clinical predictors. This retrospective study was approved by the institutional review board, and the need for written informed consent was waived. Clinical parameters along with subjective and quantitative parameters from baseline and postchemoradiation (18)F-FDG PET were derived from 217 esophageal adenocarcinoma patients who underwent chemoradiotherapy followed by surgery. The associations between these parameters and pathCR were studied in univariable and multivariable logistic regression analysis. Four prediction models were constructed and internally validated using bootstrapping to study the incremental predictive values of subjective assessment of (18)F-FDG PET, conventional quantitative metabolic features, and comprehensive (18)F-FDG PET texture/geometry features, respectively. The clinical benefit of (18)F-FDG PET was determined using decision-curve analysis. A pathCR was found in 59 (27%) patients. A clinical prediction model (corrected c-index, 0.67) was improved by adding (18)F-FDG PET-based subjective assessment of response (corrected c-index, 0.72). This latter model was slightly improved by the addition of 1 conventional quantitative metabolic feature only (i.e., postchemoradiation total lesion glycolysis; corrected c-index, 0.73), and even more by subsequently adding 4 comprehensive (18)F-FDG PET texture/geometry features (corrected c-index, 0.77). However, at a decision threshold of 0.9 or higher, representing a clinically relevant predictive value for pathCR at which one may be willing to omit surgery, there was no clear incremental value. Subjective and quantitative assessment of (18)F-FDG PET provides statistical incremental value for predicting pathCR after preoperative chemoradiotherapy in esophageal cancer. However, the discriminatory improvement beyond clinical predictors does not translate into a clinically relevant benefit that could change decision making. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
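
    The decision-curve analysis mentioned above evaluates a model at a chosen decision threshold $p_t$ (here 0.9, the point at which one might consider omitting surgery) by its net benefit, conventionally

    $$\mathrm{NB}(p_t) = \frac{TP(p_t)}{N} - \frac{FP(p_t)}{N}\cdot\frac{p_t}{1-p_t},$$

    where $TP$ and $FP$ count patients correctly and incorrectly classified as complete responders at that threshold; as reported above, the incremental c-index gains did not translate into a clear benefit at thresholds of 0.9 or higher.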

  16. Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

    2000-01-01

    Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
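
    For context, under an equal-variance Gaussian signal-detection model the accuracy of localizing a target among $M$ candidate locations is commonly written as

    $$P_c = \int_{-\infty}^{\infty} \phi(t - d')\,\Phi(t)^{M-1}\,\mathrm{d}t,$$

    where $\phi$ and $\Phi$ are the standard normal density and cumulative distribution functions and $d'$ is the target's detectability; the extension described above provides analogous analytic expressions for the Guided Search model with its three free parameters.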

  17. Use of QSARs in international decision-making frameworks to predict ecologic effects and environmental fate of chemical substances.

    PubMed Central

    Cronin, Mark T D; Walker, John D; Jaworska, Joanna S; Comber, Michael H I; Watts, Christopher D; Worth, Andrew P

    2003-01-01

    This article is a review of the use, by regulatory agencies and authorities, of quantitative structure-activity relationships (QSARs) to predict ecologic effects and environmental fate of chemicals. For many years, the U.S. Environmental Protection Agency has been the most prominent regulatory agency using QSARs to predict the ecologic effects and environmental fate of chemicals. However, as increasing numbers of standard QSAR methods are developed and validated to predict ecologic effects and environmental fate of chemicals, it is anticipated that more regulatory agencies and authorities will find them to be acceptable alternatives to chemical testing. PMID:12896861

  18. Predicting long-term performance of engineered geologic carbon dioxide storage systems to inform decisions amidst uncertainty

    NASA Astrophysics Data System (ADS)

    Pawar, R.

    2016-12-01

    Risk assessment and risk management of engineered geologic CO2 storage systems is an area of active investigation. The potential geologic CO2 storage systems currently under consideration are inherently heterogeneous and have limited to no characterization data. Effective risk management decisions to ensure safe, long-term CO2 storage require assessing and quantifying risks while taking into account the uncertainties in a storage site's characteristics. The key decisions are typically related to definition of the area of review, effective monitoring strategy and monitoring duration, potential of leakage and associated impacts, etc. A quantitative methodology for predicting a sequestration site's long-term performance is critical for making the key decisions necessary for successful deployment of commercial-scale geologic storage projects, which will require quantitative assessments of potential long-term liabilities. An integrated assessment modeling (IAM) paradigm which treats a geologic CO2 storage site as a system made up of various linked subsystems can be used to predict long-term performance. The subsystems include the storage reservoir, seals, potential leakage pathways (such as wellbores, natural fractures/faults) and receptors (such as shallow groundwater aquifers). CO2 movement within each of the subsystems and the resulting interactions are captured through reduced order models (ROMs). The ROMs capture the complex physical/chemical interactions resulting from CO2 movement but are computationally extremely efficient. The computational efficiency allows for performing the Monte Carlo simulations necessary for quantitative probabilistic risk assessment. We have used the IAM to predict long-term performance of geologic CO2 sequestration systems and to answer questions related to the probability of leakage of CO2 through wellbores, the impact of CO2/brine leakage into shallow aquifers, etc. Answers to such questions are critical in making key risk management decisions. A systematic uncertainty quantification approach can be used to understand how uncertain parameters associated with different subsystems (e.g., reservoir permeability, wellbore cement permeability, wellbore density, etc.) impact the overall site performance predictions.
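
    A toy illustration of the Monte Carlo step described above, propagating uncertain site parameters through a stand-in reduced-order model to a leakage-probability estimate (all distributions, parameter names, and the relation itself are invented for demonstration, not the system model used in the work):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Illustrative uncertain site parameters
    n_legacy_wells = rng.poisson(lam=2.0, size=n)         # wells intersecting the plume
    cement_quality = rng.beta(a=2.0, b=5.0, size=n)       # 0 = perfect seal, 1 = fully degraded
    plume_radius_km = rng.normal(loc=3.0, scale=0.5, size=n)

    # Toy reduced-order model: leaked fraction of stored CO2 over the simulated period.
    leaked_fraction = 1e-4 * n_legacy_wells * cement_quality * plume_radius_km

    # Probability of exceeding an (arbitrary) performance threshold
    prob_exceed = np.mean(leaked_fraction > 5e-4)
    print(f"P(leaked fraction > 5e-4) ~ {prob_exceed:.2f}")
    ```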

  19. Laboratory measurements of the millimeter-wave spectra of calcium isocyanide

    NASA Astrophysics Data System (ADS)

    Steimle, Timothy C.; Saito, Shuji; Takano, Shuro

    1993-06-01

    The ground state of CaNC is presently characterized by mm-wave spectroscopy, using a standard Hamiltonian linear molecule model to analyze the spectrum. The resulting spectroscopic parameters were used to predict the transition frequencies and Einstein A-coefficients, which should make possible a quantitative astrophysical search for CaNC.

  20. Quantitative Assessment of Thermodynamic Constraints on the Solution Space of Genome-Scale Metabolic Models

    PubMed Central

    Hamilton, Joshua J.; Dwivedi, Vivek; Reed, Jennifer L.

    2013-01-01

    Constraint-based methods provide powerful computational techniques to allow understanding and prediction of cellular behavior. These methods rely on physiochemical constraints to eliminate infeasible behaviors from the space of available behaviors. One such constraint is thermodynamic feasibility, the requirement that intracellular flux distributions obey the laws of thermodynamics. The past decade has seen several constraint-based methods that interpret this constraint in different ways, including those that are limited to small networks, rely on predefined reaction directions, and/or neglect the relationship between reaction free energies and metabolite concentrations. In this work, we utilize one such approach, thermodynamics-based metabolic flux analysis (TMFA), to make genome-scale, quantitative predictions about metabolite concentrations and reaction free energies in the absence of prior knowledge of reaction directions, while accounting for uncertainties in thermodynamic estimates. We applied TMFA to a genome-scale network reconstruction of Escherichia coli and examined the effect of thermodynamic constraints on the flux space. We also assessed the predictive performance of TMFA against gene essentiality and quantitative metabolomics data, under both aerobic and anaerobic, and optimal and suboptimal growth conditions. Based on these results, we propose that TMFA is a useful tool for validating phenotypes and generating hypotheses, and that additional types of data and constraints can improve predictions of metabolite concentrations. PMID:23870272
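
    The thermodynamic feasibility constraint at the core of TMFA couples each reaction's direction to its transformed Gibbs energy; schematically,

    $$\Delta_r G'_i = \Delta_r G'^{\circ}_i + RT \sum_j s_{ij}\,\ln c_j, \qquad v_i > 0 \;\Rightarrow\; \Delta_r G'_i < 0,$$

    where $s_{ij}$ are stoichiometric coefficients and $c_j$ metabolite concentrations (strictly, activities); as noted above, uncertainty in the estimated standard transformed Gibbs energies is carried through the analysis rather than ignored.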

  1. A Physiologically Based Pharmacokinetic Model for Pregnant Women to Predict the Pharmacokinetics of Drugs Metabolized Via Several Enzymatic Pathways.

    PubMed

    Dallmann, André; Ince, Ibrahim; Coboeken, Katrin; Eissing, Thomas; Hempel, Georg

    2017-09-18

    Physiologically based pharmacokinetic modeling is considered a valuable tool for predicting pharmacokinetic changes in pregnancy to subsequently guide in-vivo pharmacokinetic trials in pregnant women. The objective of this study was to extend and verify a previously developed physiologically based pharmacokinetic model for pregnant women for the prediction of pharmacokinetics of drugs metabolized via several cytochrome P450 enzymes. Quantitative information on gestation-specific changes in enzyme activity available in the literature was incorporated in a pregnancy physiologically based pharmacokinetic model and the pharmacokinetics of eight drugs metabolized via one or multiple cytochrome P450 enzymes was predicted. The tested drugs were caffeine, midazolam, nifedipine, metoprolol, ondansetron, granisetron, diazepam, and metronidazole. Pharmacokinetic predictions were evaluated by comparison with in-vivo pharmacokinetic data obtained from the literature. The pregnancy physiologically based pharmacokinetic model successfully predicted the pharmacokinetics of all tested drugs. The observed pregnancy-induced pharmacokinetic changes were qualitatively and quantitatively reasonably well predicted for all drugs. Ninety-seven percent of the mean plasma concentrations predicted in pregnant women fell within a twofold error range and 63% within a 1.25-fold error range. For all drugs, the predicted area under the concentration-time curve was within a 1.25-fold error range. The presented pregnancy physiologically based pharmacokinetic model can quantitatively predict the pharmacokinetics of drugs that are metabolized via one or multiple cytochrome P450 enzymes by integrating prior knowledge of the pregnancy-related effect on these enzymes. This pregnancy physiologically based pharmacokinetic model may thus be used to identify potential exposure changes in pregnant women a priori and to eventually support informed decision making when clinical trials are designed in this special population.
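
    The fold-error criteria quoted above (predictions within twofold or 1.25-fold of observations) can be computed as in this small sketch (toy concentrations, not the study's data):

    ```python
    import numpy as np

    def fold_error_summary(predicted, observed):
        """Fraction of predictions within 2-fold and within 1.25-fold of the observations."""
        predicted = np.asarray(predicted, dtype=float)
        observed = np.asarray(observed, dtype=float)
        fold = np.maximum(predicted / observed, observed / predicted)
        return np.mean(fold <= 2.0), np.mean(fold <= 1.25)

    pred = [1.0, 2.4, 0.3, 5.0]
    obs = [1.1, 2.0, 0.5, 4.6]
    print(fold_error_summary(pred, obs))   # (fraction within 2-fold, fraction within 1.25-fold)
    ```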

  2. Use of QSARs in international decision-making frameworks to predict health effects of chemical substances.

    PubMed Central

    Cronin, Mark T D; Jaworska, Joanna S; Walker, John D; Comber, Michael H I; Watts, Christopher D; Worth, Andrew P

    2003-01-01

    This article is a review of the use of quantitative (and qualitative) structure-activity relationships (QSARs and SARs) by regulatory agencies and authorities to predict acute toxicity, mutagenicity, carcinogenicity, and other health effects. A number of SAR and QSAR applications, by regulatory agencies and authorities, are reviewed. These include the use of simple QSAR analyses, as well as the use of multivariate QSARs, and a number of different expert system approaches. PMID:12896862

  3. Assessing deep and shallow learning methods for quantitative prediction of acute chemical toxicity.

    PubMed

    Liu, Ruifeng; Madore, Michael; Glover, Kyle P; Feasel, Michael G; Wallqvist, Anders

    2018-05-02

    Animal-based methods for assessing chemical toxicity are struggling to meet testing demands. In silico approaches, including machine-learning methods, are promising alternatives. Recently, deep neural networks (DNNs) were evaluated and reported to outperform other machine-learning methods for quantitative structure-activity relationship modeling of molecular properties. However, most of the reported performance evaluations relied on global performance metrics, such as the root mean squared error (RMSE) between the predicted and experimental values of all samples, without considering the impact of sample distribution across the activity spectrum. Here, we carried out an in-depth analysis of DNN performance for quantitative prediction of acute chemical toxicity using several datasets. We found that the overall performance of DNN models on datasets of up to 30,000 compounds was similar to that of random forest (RF) models, as measured by the RMSE and correlation coefficients between the predicted and experimental results. However, our detailed analyses demonstrated that global performance metrics are inappropriate for datasets with a highly uneven sample distribution, because they show a strong bias for the most populous compounds along the toxicity spectrum. For highly toxic compounds, DNN and RF models trained on all samples performed much worse than the global performance metrics indicated. Surprisingly, our variable nearest neighbor method, which utilizes only structurally similar compounds to make predictions, performed reasonably well, suggesting that information of close near neighbors in the training sets is a key determinant of acute toxicity predictions.
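
    A minimal sketch (not from the cited study) of why a single global RMSE can mislead for unevenly distributed toxicity data: the synthetic values below concentrate most compounds in a moderate range, and a per-bin RMSE exposes the larger error on the sparse, highly toxic tail that the global figure hides. All values and bin edges are invented.

    ```python
    import numpy as np

    def rmse(y_true, y_pred):
        return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

    def binned_rmse(y_true, y_pred, edges):
        """RMSE within each activity bin, so sparsely populated (e.g. highly
        toxic) ranges are not swamped by the most populous compounds."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        out = {}
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (y_true >= lo) & (y_true < hi)
            if mask.any():
                out[(lo, hi)] = rmse(y_true[mask], y_pred[mask])
        return out

    rng = np.random.default_rng(0)
    # Hypothetical -log(LD50)-style values: many moderate, few highly toxic compounds.
    y = np.concatenate([rng.normal(2.5, 0.5, 950), rng.normal(5.5, 0.5, 50)])
    # Simulated predictions with an extra bias on the rare, highly toxic tail.
    yhat = y + rng.normal(0, 0.4, y.size) + np.where(y > 4.5, -1.0, 0.0)

    print("global RMSE :", rmse(y, yhat))
    print("per-bin RMSE:", binned_rmse(y, yhat, edges=[0, 4.5, 8]))
    ```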

  4. Decision-making, sensitivity to reward, and attrition in weight-management

    PubMed Central

    Koritzky, Gilly; Dieterle, Camille; Rice, Chantelle; Jordan, Katie; Bechara, Antoine

    2014-01-01

    Objective: Attrition is a common problem in weight-management. Understanding the risk factors for attrition should enhance professionals’ ability to increase completion rates and improve health outcomes for more individuals. We propose a model that draws upon neuropsychological knowledge on reward-sensitivity in obesity and overeating to predict attrition. Design & Methods: 52 participants in a weight-management program completed a complex decision-making task. Decision-making characteristics – including sensitivity to reward – were further estimated using a quantitative model. Impulsivity and risk-taking measures were also administered. Results: Consistent with the hypothesis that sensitivity to reward predicted attrition, program dropouts had higher sensitivity to reward than completers (p < 0.03). No differences were observed between completers and dropouts in initial BMI, age, employment status, or the number of prior weight-loss attempts (p ≥ 0.07). Completers had a slightly higher education level than dropouts, but its inclusion in the model did not increase predictive power. Impulsivity, delay of gratification, and risk-taking did not predict attrition, either. Conclusions: Findings link attrition in weight-management to the neural mechanisms associated with reward-seeking and related influences on decision-making. Individual differences in the magnitude of response elicited by rewards may account for the relative difficulty experienced by dieters in adhering to treatment. PMID:24771588

  5. Using multi-species occupancy models in structured decision making on managed lands

    USGS Publications Warehouse

    Sauer, John R.; Blank, Peter J.; Zipkin, Elise F.; Fallon, Jane E.; Fallon, Frederick W.

    2013-01-01

    Land managers must balance the needs of a variety of species when manipulating habitats. Structured decision making provides a systematic means of defining choices and choosing among alternative management options; implementation of a structured decision requires quantitative approaches to predicting consequences of management on the relevant species. Multi-species occupancy models provide a convenient framework for making structured decisions when the management objective is focused on a collection of species. These models use replicate survey data that are often collected on managed lands. Occupancy can be modeled for each species as a function of habitat and other environmental features, and Bayesian methods allow for estimation and prediction of collective responses of groups of species to alternative scenarios of habitat management. We provide an example of this approach using data from breeding bird surveys conducted in 2008 at the Patuxent Research Refuge in Laurel, Maryland, evaluating the effects of eliminating meadow and wetland habitats on scrub-successional and woodland-breeding bird species using summed total occupancy of species as an objective function. Removal of meadows and wetlands decreased value of an objective function based on scrub-successional species by 23.3% (95% CI: 20.3–26.5), but caused only a 2% (0.5, 3.5) increase in value of an objective function based on woodland species, documenting differential effects of elimination of meadows and wetlands on these groups of breeding birds. This approach provides a useful quantitative tool for managers interested in structured decision making.
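
    As a rough illustration of the kind of objective function described, the sketch below sums logistic occupancy probabilities across a handful of made-up species under two habitat scenarios. The species, coefficients, and habitat covariates are entirely hypothetical, and the actual analysis is a Bayesian hierarchical model fit to replicate survey data rather than this plug-in calculation.

    ```python
    import numpy as np

    def occupancy_prob(beta0, beta_habitat, habitat):
        """Logistic occupancy model for one species: psi = inverse-logit(b0 + b.x)."""
        return 1.0 / (1.0 + np.exp(-(beta0 + np.dot(beta_habitat, habitat))))

    # Hypothetical per-species intercepts and responses to [meadow, wetland] cover.
    species_params = {
        "scrub_sp_1":    (-0.5, np.array([1.2, 0.3])),
        "scrub_sp_2":    (-1.0, np.array([0.9, 0.8])),
        "woodland_sp_1": ( 0.4, np.array([-0.1, 0.0])),
    }

    def summed_occupancy(habitat):
        """Objective function: summed occupancy probability across species."""
        return sum(occupancy_prob(b0, bh, habitat) for b0, bh in species_params.values())

    current = np.array([0.3, 0.2])   # proportions of meadow and wetland habitat
    removed = np.array([0.0, 0.0])   # scenario: meadows and wetlands eliminated

    print("current scenario :", summed_occupancy(current))
    print("habitats removed :", summed_occupancy(removed))
    ```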

  6. A Research Methodology for Studying What Makes Some Problems Difficult to Solve

    ERIC Educational Resources Information Center

    Gulacar, Ozcan; Fynewever, Herb

    2010-01-01

    We present a quantitative model for predicting the level of difficulty subjects will experience with specific problems. The model explicitly accounts for the number of subproblems a problem can be broken into and the difficultly of each subproblem. Although the model builds on previously published models, it is uniquely suited for blending with…

  7. Conformal Regression for Quantitative Structure-Activity Relationship Modeling-Quantifying Prediction Uncertainty.

    PubMed

    Svensson, Fredrik; Aniceto, Natalia; Norinder, Ulf; Cortes-Ciriano, Isidro; Spjuth, Ola; Carlsson, Lars; Bender, Andreas

    2018-05-29

    Making predictions with an associated confidence is highly desirable as it facilitates decision making and resource prioritization. Conformal regression is a machine learning framework that allows the user to define the required confidence and delivers predictions that are guaranteed to be correct to the selected extent. In this study, we apply conformal regression to model molecular properties and bioactivity values and investigate different ways to scale the resultant prediction intervals to create as efficient (i.e., narrow) regressors as possible. Different algorithms to estimate the prediction uncertainty were used to normalize the prediction ranges, and the different approaches were evaluated on 29 publicly available data sets. Our results show that the most efficient conformal regressors are obtained when using the natural exponential of the ensemble standard deviation from the underlying random forest to scale the prediction intervals, but other approaches were almost as efficient. This approach afforded an average prediction range of 1.65 pIC50 units at the 80% confidence level when applied to bioactivity modeling. The choice of nonconformity function has a pronounced impact on the average prediction range with a difference of close to one log unit in bioactivity between the tightest and widest prediction range. Overall, conformal regression is a robust approach to generate bioactivity predictions with associated confidence.
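
    The following is a minimal split-conformal regression sketch in the spirit of the approach described: nonconformity scores on a calibration set are normalized by the exponential of the per-sample standard deviation across random-forest trees, and the resulting quantile scales the prediction intervals. The data are synthetic and details such as the finite-sample quantile correction are omitted, so this is an illustration rather than the authors' implementation.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(600, 5))
    y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + rng.normal(0, 0.2, 600)  # synthetic "activity"

    X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

    def ensemble_std(model, X):
        """Per-sample standard deviation of the individual tree predictions."""
        per_tree = np.stack([tree.predict(X) for tree in model.estimators_])
        return per_tree.std(axis=0)

    # Normalized nonconformity scores on the calibration set: |error| / exp(std).
    sigma_cal = np.exp(ensemble_std(rf, X_cal))
    alpha_cal = np.abs(y_cal - rf.predict(X_cal)) / sigma_cal

    confidence = 0.80
    q = np.quantile(alpha_cal, confidence)   # calibration quantile at 80% confidence

    # Prediction intervals for new samples: y_hat +/- q * exp(std).
    X_new = rng.uniform(-3, 3, size=(5, 5))
    y_hat = rf.predict(X_new)
    half_width = q * np.exp(ensemble_std(rf, X_new))
    for yh, hw in zip(y_hat, half_width):
        print(f"{yh:.2f} +/- {hw:.2f}")
    ```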

  8. Age-related quantitative and qualitative changes in decision making ability.

    PubMed

    Isella, Valeria; Mapelli, Cristina; Morielli, Nadia; Pelati, Oriana; Franceschi, Massimo; Appollonio, Ildebrando Marco

    2008-01-01

    The "frontal aging hypothesis" predicts that brain senescence affects predominantly the prefrontal regions. Preliminary evidence has recently been gathered in favour of an age-related change in a typically frontal process, i.e. decision making, using the Iowa Gambling Task (IGT), but overall findings have been conflicting. Following the traditional scoring method, coupled with a qualitative analysis, in the present study we compared IGT performance of 40 young (mean age: 27.9+/-4.7) and 40 old (mean age: 65.4+/-8.6) healthy adults and of 18 patients affected by frontal lobe dementia of mild severity (mean age: 65.1+/-7.4, mean MMSE score: 24.1+/-3.9). Quantitative findings support the notion that decision making ability declines with age; moreover, it approximates the impairment observed in executive dysfunction due to neurodegeneration. Results of the qualitative analysis did not reach statistical significance for the motivational and learning decision making components considered, but approached significance for the attentional component for elderly versus young normals, suggesting a possible decrease in the ability to maintain sustained attention during complex and prolonged tasks as the putative deficit underlying impaired decision making in normal aging.

  9. True Numerical Cognition in the Wild.

    PubMed

    Piantadosi, Steven T; Cantlon, Jessica F

    2017-04-01

    Cognitive and neural research over the past few decades has produced sophisticated models of the representations and algorithms underlying numerical reasoning in humans and other animals. These models make precise predictions for how humans and other animals should behave when faced with quantitative decisions, yet primarily have been tested only in laboratory tasks. We used data from wild baboons' troop movements recently reported by Strandburg-Peshkin, Farine, Couzin, and Crofoot (2015) to compare a variety of models of quantitative decision making. We found that the decisions made by these naturally behaving wild animals rely specifically on numerical representations that have key homologies with the psychophysics of human number representations. These findings provide important new data on the types of problems human numerical cognition was designed to solve and constitute the first robust evidence of true numerical reasoning in wild animals.

  10. Context influences on TALE–DNA binding revealed by quantitative profiling

    PubMed Central

    Rogers, Julia M.; Barrera, Luis A.; Reyon, Deepak; Sander, Jeffry D.; Kellis, Manolis; Joung, J Keith; Bulyk, Martha L.

    2015-01-01

    Transcription activator-like effector (TALE) proteins recognize DNA using a seemingly simple DNA-binding code, which makes them attractive for use in genome engineering technologies that require precise targeting. Although this code is used successfully to design TALEs to target specific sequences, off-target binding has been observed and is difficult to predict. Here we explore TALE–DNA interactions comprehensively by quantitatively assaying the DNA-binding specificities of 21 representative TALEs to ∼5,000–20,000 unique DNA sequences per protein using custom-designed protein-binding microarrays (PBMs). We find that protein context features exert significant influences on binding. Thus, the canonical recognition code does not fully capture the complexity of TALE–DNA binding. We used the PBM data to develop a computational model, Specificity Inference For TAL-Effector Design (SIFTED), to predict the DNA-binding specificity of any TALE. We provide SIFTED as a publicly available web tool that predicts potential genomic off-target sites for improved TALE design. PMID:26067805

  11. Context influences on TALE-DNA binding revealed by quantitative profiling.

    PubMed

    Rogers, Julia M; Barrera, Luis A; Reyon, Deepak; Sander, Jeffry D; Kellis, Manolis; Joung, J Keith; Bulyk, Martha L

    2015-06-11

    Transcription activator-like effector (TALE) proteins recognize DNA using a seemingly simple DNA-binding code, which makes them attractive for use in genome engineering technologies that require precise targeting. Although this code is used successfully to design TALEs to target specific sequences, off-target binding has been observed and is difficult to predict. Here we explore TALE-DNA interactions comprehensively by quantitatively assaying the DNA-binding specificities of 21 representative TALEs to ∼5,000-20,000 unique DNA sequences per protein using custom-designed protein-binding microarrays (PBMs). We find that protein context features exert significant influences on binding. Thus, the canonical recognition code does not fully capture the complexity of TALE-DNA binding. We used the PBM data to develop a computational model, Specificity Inference For TAL-Effector Design (SIFTED), to predict the DNA-binding specificity of any TALE. We provide SIFTED as a publicly available web tool that predicts potential genomic off-target sites for improved TALE design.

  12. Sender–receiver systems and applying information theory for quantitative synthetic biology

    PubMed Central

    Barcena Menendez, Diego; Senthivel, Vivek Raj; Isalan, Mark

    2015-01-01

    Sender–receiver (S–R) systems abound in biology, with communication systems sending information in various forms. Information theory provides a quantitative basis for analysing these processes and is being applied to study natural genetic, enzymatic and neural networks. Recent advances in synthetic biology are providing us with a wealth of artificial S–R systems, giving us quantitative control over networks with a finite number of well-characterised components. Combining the two approaches can help to predict how to maximise signalling robustness, and will allow us to make increasingly complex biological computers. Ultimately, pushing the boundaries of synthetic biology will require moving beyond engineering the flow of information and towards building more sophisticated circuits that interpret biological meaning. PMID:25282688
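
    A small, self-contained example of the basic information-theoretic quantity such analyses rest on: the mutual information between a sender state and a receiver readout, computed from a joint probability table. The two-state channel below is invented for illustration.

    ```python
    import numpy as np

    def mutual_information(joint):
        """I(S;R) in bits from a joint probability table p(s, r)."""
        joint = np.asarray(joint, dtype=float)
        joint = joint / joint.sum()
        ps = joint.sum(axis=1, keepdims=True)   # marginal over sender states
        pr = joint.sum(axis=0, keepdims=True)   # marginal over receiver states
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = joint * np.log2(joint / (ps * pr))
        return float(np.nansum(terms))          # 0 * log(0) terms are dropped

    # Hypothetical two-state inducer (sender) read out by a noisy reporter (receiver).
    p_sr = [[0.40, 0.10],
            [0.05, 0.45]]
    print(f"I(S;R) = {mutual_information(p_sr):.3f} bits")
    ```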

  13. Quantitative assessment of thermodynamic constraints on the solution space of genome-scale metabolic models.

    PubMed

    Hamilton, Joshua J; Dwivedi, Vivek; Reed, Jennifer L

    2013-07-16

    Constraint-based methods provide powerful computational techniques to allow understanding and prediction of cellular behavior. These methods rely on physiochemical constraints to eliminate infeasible behaviors from the space of available behaviors. One such constraint is thermodynamic feasibility, the requirement that intracellular flux distributions obey the laws of thermodynamics. The past decade has seen several constraint-based methods that interpret this constraint in different ways, including those that are limited to small networks, rely on predefined reaction directions, and/or neglect the relationship between reaction free energies and metabolite concentrations. In this work, we utilize one such approach, thermodynamics-based metabolic flux analysis (TMFA), to make genome-scale, quantitative predictions about metabolite concentrations and reaction free energies in the absence of prior knowledge of reaction directions, while accounting for uncertainties in thermodynamic estimates. We applied TMFA to a genome-scale network reconstruction of Escherichia coli and examined the effect of thermodynamic constraints on the flux space. We also assessed the predictive performance of TMFA against gene essentiality and quantitative metabolomics data, under both aerobic and anaerobic, and optimal and suboptimal growth conditions. Based on these results, we propose that TMFA is a useful tool for validating phenotypes and generating hypotheses, and that additional types of data and constraints can improve predictions of metabolite concentrations. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.
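
    The core thermodynamic constraint TMFA enforces can be written as ΔG' = ΔG'° + RT·Σ s_i ln c_i, with forward flux allowed only when ΔG' < 0. The sketch below evaluates this for a single made-up reaction with placeholder standard energies and concentrations; the published method embeds such constraints, with uncertainty bounds, in a genome-scale mixed-integer optimization rather than checking reactions one at a time.

    ```python
    import numpy as np

    R = 8.314e-3   # gas constant, kJ/(mol*K)
    T = 298.15     # temperature, K

    def delta_g_prime(dg0_prime, stoich, conc):
        """Transformed reaction Gibbs energy: dG' = dG'0 + RT * sum(s_i * ln c_i).
        stoich and conc map metabolite -> stoichiometric coefficient / molar concentration."""
        ln_q = sum(s * np.log(conc[m]) for m, s in stoich.items())
        return dg0_prime + R * T * ln_q

    # Hypothetical reaction A + B -> C with placeholder estimates.
    stoich = {"A": -1, "B": -1, "C": 1}
    conc = {"A": 1e-3, "B": 5e-4, "C": 1e-5}   # mol/L
    dg = delta_g_prime(dg0_prime=5.0, stoich=stoich, conc=conc)

    print(f"dG' = {dg:.2f} kJ/mol ->",
          "forward flux feasible" if dg < 0 else "forward flux infeasible")
    ```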

  14. Novel images extraction model using improved delay vector variance feature extraction and multi-kernel neural network for EEG detection and prediction.

    PubMed

    Ge, Jing; Zhang, Guoping

    2015-01-01

    Advanced intelligent methodologies could help detect and predict diseases from EEG signals in cases where manual analysis is inefficient or unavailable, for instance, epileptic seizure detection and prediction. This is because the diversity and evolution of epileptic seizures make it very difficult to detect and identify the underlying disease. Fortunately, the determinism and nonlinearity in a time series can characterize state changes. A literature review indicates that Delay Vector Variance (DVV) can examine nonlinearity to gain insight into EEG signals, but very limited work has been done on a quantitative DVV approach. Hence, the outcomes of quantitative DVV should be evaluated to detect epileptic seizures. The objective was to develop a new epileptic seizure detection method based on quantitative DVV. The new method employed an improved delay vector variance (IDVV) to extract the nonlinearity value as a distinct feature. A multi-kernel function strategy was then proposed in the extreme learning machine (ELM) network to provide precise disease detection and prediction. The nonlinearity feature was more sensitive than energy and entropy. Overall accuracies of 87.5% for recognition and 75.0% for forecasting were achieved. The proposed IDVV and multi-kernel ELM based method was feasible and effective for epileptic EEG detection and has importance for practical applications.

  15. Midwest Structural Sciences Center, 2006-2013

    DTIC Science & Technology

    2013-09-01

    ...also be used for making predictions of future flights. ...methods were developed to provide validation quality data for coupled high temperature and acoustic loading environments, and to quantitatively study...

  16. Talking and Learning Physics: Predicting Future Grades from Network Measures and Force Concept Inventory Pretest Scores

    ERIC Educational Resources Information Center

    Bruun, Jesper; Brewe, Eric

    2013-01-01

    The role of student interactions in learning situations is a foundation of sociocultural learning theory, and social network analysis can be used to quantify student relations. We discuss how self-reported student interactions can be viewed as processes of meaning making and use this to understand how quantitative measures that describe the…

  17. Numeracy of multiple sclerosis patients: A comparison of patients from the PERCEPT study to a German probabilistic sample.

    PubMed

    Gaissmaier, Wolfgang; Giese, Helge; Galesic, Mirta; Garcia-Retamero, Rocio; Kasper, Juergen; Kleiter, Ingo; Meuth, Sven G; Köpke, Sascha; Heesen, Christoph

    2018-01-01

    A shared decision-making approach is suggested for multiple sclerosis (MS) patients. To properly evaluate benefits and risks of different treatment options accordingly, MS patients require sufficient numeracy - the ability to understand quantitative information. It is unknown whether MS affects numeracy. Therefore, we investigated whether patients' numeracy was impaired compared to a probabilistic national sample. As part of the larger prospective, observational, multicenter study PERCEPT, we assessed numeracy for a clinical study sample of German MS patients (N=725) with a standard test and compared them to a German probabilistic sample (N=1001), controlling for age, sex, and education. Within patients, we assessed whether disease variables (disease duration, disability, annual relapse rate, cognitive impairment) predicted numeracy beyond these demographics. MS patients showed a comparable level of numeracy as the probabilistic national sample (68.9% vs. 68.5% correct answers, P=0.831). In both samples, numeracy was higher for men and the highly educated. Disease variables did not predict numeracy beyond demographics within patients, and predictability was generally low. This sample of MS patients understood quantitative information on the same level as the general population. There is no reason to withhold quantitative information from MS patients. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. [Quantitative estimation of vegetation cover and management factor in USLE and RUSLE models by using remote sensing data: a review].

    PubMed

    Wu, Chang-Guang; Li, Sheng; Ren, Hua-Dong; Yao, Xiao-Hua; Huang, Zi-Jie

    2012-06-01

    Soil loss prediction models such as the universal soil loss equation (USLE) and the revised universal soil loss equation (RUSLE) are useful tools for risk assessment of soil erosion and planning of soil conservation at the regional scale. A rational estimation of the vegetation cover and management factor, among the most important parameters in USLE and RUSLE, is particularly important for the accurate prediction of soil erosion. The traditional estimation based on field survey and measurement is time-consuming, laborious, and costly, and cannot rapidly extract the vegetation cover and management factor at macro-scale. In recent years, the development of remote sensing technology has provided both data and methods for the estimation of the vegetation cover and management factor over broad geographic areas. This paper summarizes research findings on the quantitative estimation of the vegetation cover and management factor using remote sensing data and analyzes the advantages and disadvantages of various methods, with the aim of providing a reference for further research and for quantitative estimation of the vegetation cover and management factor at large scales.
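
    As an illustration of the remote-sensing estimates reviewed here, the sketch below computes NDVI from red and near-infrared reflectance and converts it to a cover-management (C) factor with an exponential relationship of the van der Knijff type, C = exp(-α·NDVI/(β - NDVI)), commonly quoted with α ≈ 2 and β ≈ 1. The reflectance values are placeholders and the coefficients should be treated as indicative rather than calibrated.

    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalized difference vegetation index from NIR and red reflectance."""
        nir, red = np.asarray(nir, float), np.asarray(red, float)
        return (nir - red) / (nir + red)

    def c_factor(ndvi_vals, alpha=2.0, beta=1.0):
        """Exponential NDVI-to-C-factor relationship (van der Knijff-type form)."""
        return np.exp(-alpha * ndvi_vals / (beta - ndvi_vals))

    nir = np.array([0.45, 0.60, 0.30])   # placeholder band reflectances
    red = np.array([0.20, 0.10, 0.25])
    v = ndvi(nir, red)
    print("NDVI:", np.round(v, 2))
    print("C   :", np.round(c_factor(v), 3))
    ```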

  19. Parameterizing the Supernova Engine and Its Effect on Remnants and Basic Yields

    NASA Astrophysics Data System (ADS)

    Fryer, Chris L.; Andrews, Sydney; Even, Wesley; Heger, Alex; Safi-Harb, Samar

    2018-03-01

    Core-collapse supernova science is now entering an era in which engine models are beginning to make both qualitative and, in some cases, quantitative predictions. Although the evidence in support of the convective engine for core-collapse supernova continues to grow, it is difficult to place quantitative constraints on this engine. Some studies have made specific predictions for the remnant distribution from the convective engine, but the results differ between different groups. Here we use a broad parameterization for the supernova engine to understand the differences between distinct studies. With this broader set of models, we place error bars on the remnant mass and basic yields from the uncertainties in the explosive engine. We find that, even with only three progenitors and a narrow range of explosion energies, we can produce a wide range of remnant masses and nucleosynthetic yields.

  20. Quantitative radiomics studies for tissue characterization: a review of technology and methodological procedures.

    PubMed

    Larue, Ruben T H M; Defraene, Gilles; De Ruysscher, Dirk; Lambin, Philippe; van Elmpt, Wouter

    2017-02-01

    Quantitative analysis of tumour characteristics based on medical imaging is an emerging field of research. In recent years, quantitative imaging features derived from CT, positron emission tomography and MR scans were shown to be of added value in the prediction of outcome parameters in oncology, in what is called the radiomics field. However, results might be difficult to compare owing to a lack of standardized methodologies to conduct quantitative image analyses. In this review, we aim to present an overview of the current challenges, technical routines and protocols that are involved in quantitative imaging studies. The first issue that should be overcome is the dependency of several features on the scan acquisition and image reconstruction parameters. Adopting consistent methods in the subsequent target segmentation step is equally crucial. To further establish robust quantitative image analyses, standardization or at least calibration of imaging features based on different feature extraction settings is required, especially for texture- and filter-based features. Several open-source and commercial software packages to perform feature extraction are currently available, all with slightly different functionalities, which makes benchmarking quite challenging. The number of imaging features calculated is typically larger than the number of patients studied, which emphasizes the importance of proper feature selection and prediction model-building routines to prevent overfitting. Even though many of these challenges still need to be addressed before quantitative imaging can be brought into daily clinical practice, radiomics is expected to be a critical component for the integration of image-derived information to personalize treatment in the future.
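
    One concrete way to implement the feature-selection and model-building discipline the review calls for is to keep selection inside a cross-validation pipeline, so held-out patients never influence which features are chosen. The sketch below uses synthetic data with far more features than patients; it illustrates the principle and is not a radiomics workflow from the cited study.

    ```python
    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    n_patients, n_features = 80, 500           # more features than patients
    X = rng.normal(size=(n_patients, n_features))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n_patients) > 0).astype(int)

    # Feature selection lives inside the pipeline, so it is refit in each CV fold
    # and never sees the held-out patients (avoids optimistic bias / overfitting).
    pipe = Pipeline([
        ("scale", StandardScaler()),
        ("select", SelectKBest(f_classif, k=10)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    scores = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
    print("cross-validated AUC: %.2f +/- %.2f" % (scores.mean(), scores.std()))
    ```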

  1. Earthquake prediction evaluation standards applied to the VAN Method

    NASA Astrophysics Data System (ADS)

    Jackson, David D.

    Earthquake prediction research must meet certain standards before it can be suitably evaluated for potential application in decision making. For methods that result in a binary (on or off) alarm condition, requirements include (1) a quantitative description of observables that trigger an alarm, (2) a quantitative description, including ranges of time, location, and magnitude, of the predicted earthquakes, (3) documented evidence of all previous alarms, (4) a complete list of predicted earthquakes, (5) a complete list of unpredicted earthquakes. The VAN technique [Varotsos and Lazaridou, 1991; Varotsos et al., 1996] has not yet been stated as a testable hypothesis. It fails criteria (1) and (2) so it is not ready to be evaluated properly. Although telegrams were transmitted in advance of claimed successes, these telegrams did not fully specify the predicted events, and all of the published statistical evaluations involve many subjective ex post facto decisions. Lacking a statistically demonstrated relationship to earthquakes, a candidate prediction technique should satisfy several plausibility criteria, including: (1) a reasonable relationship between the location of the candidate precursor and that of the predicted earthquake, (2) some demonstration that the candidate precursory observations are related to stress, strain, or other quantities related to earthquakes, and (3) the existence of co-seismic as well as pre-seismic variations of the candidate precursor. The VAN technique meets none of these criteria.
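
    A toy sketch of the bookkeeping the listed evaluation criteria require for a binary alarm method: given documented alarm windows and a complete earthquake list, tally predicted earthquakes, unpredicted earthquakes, and false alarms. The windows and event times below are invented.

    ```python
    # Hypothetical alarm windows (start_day, end_day) and earthquake days.
    alarms = [(10, 20), (45, 55), (90, 100)]
    quakes = [12, 60, 95, 130]

    def evaluate(alarms, quakes):
        def covered(q):
            return any(lo <= q <= hi for lo, hi in alarms)
        hits = [q for q in quakes if covered(q)]
        misses = [q for q in quakes if not covered(q)]
        false_alarms = [(lo, hi) for lo, hi in alarms
                        if not any(lo <= q <= hi for q in quakes)]
        return hits, misses, false_alarms

    hits, misses, false_alarms = evaluate(alarms, quakes)
    print("predicted earthquakes  :", hits)
    print("unpredicted earthquakes:", misses)
    print("false alarms           :", false_alarms)
    ```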

  2. Modifications to the steady-state 41-node thermoregulatory model including validation of the respiratory and diffusional water loss equations

    NASA Technical Reports Server (NTRS)

    1974-01-01

    After the simplified version of the 41-Node Stolwijk Metabolic Man Model was implemented on the Sigma 3 and UNIVAC 1110 computers in batch mode, it became desirable to make certain revisions. First, the availability of time-sharing terminals makes it possible to provide the capability and flexibility of conversational interaction between user and model. Second, recent physiological studies show the need to revise certain parameter values contained in the model. Third, it was desired to make quantitative and accurate predictions of evaporative water loss for humans in an orbiting space station. The results of the first phase of this effort are reported.

  3. An ensemble model of QSAR tools for regulatory risk assessment.

    PubMed

    Pradeep, Prachi; Povinelli, Richard J; White, Shannon; Merrill, Stephen J

    2016-01-01

    Quantitative structure activity relationships (QSARs) are theoretical models that relate a quantitative measure of chemical structure to a physical property or a biological effect. QSAR predictions can be used for chemical risk assessment for protection of human and environmental health, which makes them interesting to regulators, especially in the absence of experimental data. For compatibility with regulatory use, QSAR models should be transparent, reproducible and optimized to minimize the number of false negatives. In silico QSAR tools are gaining wide acceptance as a faster alternative to otherwise time-consuming clinical and animal testing methods. However, different QSAR tools often make conflicting predictions for a given chemical and may also vary in their predictive performance across different chemical datasets. In a regulatory context, conflicting predictions raise interpretation, validation and adequacy concerns. To address these concerns, ensemble learning techniques in the machine learning paradigm can be used to integrate predictions from multiple tools. By leveraging various underlying QSAR algorithms and training datasets, the resulting consensus prediction should yield better overall predictive ability. We present a novel ensemble QSAR model using Bayesian classification. The model allows for varying a cut-off parameter that allows for a selection in the desirable trade-off between model sensitivity and specificity. The predictive performance of the ensemble model is compared with four in silico tools (Toxtree, Lazar, OECD Toolbox, and Danish QSAR) to predict carcinogenicity for a dataset of air toxins (332 chemicals) and a subset of the gold carcinogenic potency database (480 chemicals). Leave-one-out cross validation results show that the ensemble model achieves the best trade-off between sensitivity and specificity (accuracy: 83.8 % and 80.4 %, and balanced accuracy: 80.6 % and 80.8 %) and highest inter-rater agreement [kappa ( κ ): 0.63 and 0.62] for both the datasets. The ROC curves demonstrate the utility of the cut-off feature in the predictive ability of the ensemble model. This feature provides an additional control to the regulators in grading a chemical based on the severity of the toxic endpoint under study.
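
    A minimal sketch of one way binary predictions from several QSAR tools can be combined in a naive-Bayes fashion and thresholded with an adjustable cut-off. The per-tool sensitivities and specificities are invented and do not describe Toxtree, Lazar, the OECD Toolbox, or the Danish QSAR, and the published model's exact Bayesian formulation may differ.

    ```python
    import numpy as np

    # Hypothetical per-tool sensitivity and specificity (NOT the real tools' values).
    tools = {
        "tool_A": {"sens": 0.75, "spec": 0.70},
        "tool_B": {"sens": 0.65, "spec": 0.80},
        "tool_C": {"sens": 0.80, "spec": 0.60},
    }

    def posterior_positive(predictions, prior=0.5):
        """Naive-Bayes combination of binary tool calls (1 = predicted carcinogen)."""
        log_odds = np.log(prior / (1 - prior))
        for name, pred in predictions.items():
            sens, spec = tools[name]["sens"], tools[name]["spec"]
            if pred == 1:
                log_odds += np.log(sens / (1 - spec))   # likelihood ratio of a positive call
            else:
                log_odds += np.log((1 - sens) / spec)   # likelihood ratio of a negative call
        return 1.0 / (1.0 + np.exp(-log_odds))

    calls = {"tool_A": 1, "tool_B": 0, "tool_C": 1}
    p = posterior_positive(calls)

    cutoff = 0.3   # lower cut-off -> higher sensitivity, lower specificity
    print(f"P(carcinogen | calls) = {p:.2f} -> {'positive' if p >= cutoff else 'negative'}")
    ```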

  4. An ensemble model of QSAR tools for regulatory risk assessment

    DOE PAGES

    Pradeep, Prachi; Povinelli, Richard J.; White, Shannon; ...

    2016-09-22

    Quantitative structure activity relationships (QSARs) are theoretical models that relate a quantitative measure of chemical structure to a physical property or a biological effect. QSAR predictions can be used for chemical risk assessment for protection of human and environmental health, which makes them interesting to regulators, especially in the absence of experimental data. For compatibility with regulatory use, QSAR models should be transparent, reproducible and optimized to minimize the number of false negatives. In silico QSAR tools are gaining wide acceptance as a faster alternative to otherwise time-consuming clinical and animal testing methods. However, different QSAR tools often make conflicting predictions for a given chemical and may also vary in their predictive performance across different chemical datasets. In a regulatory context, conflicting predictions raise interpretation, validation and adequacy concerns. To address these concerns, ensemble learning techniques in the machine learning paradigm can be used to integrate predictions from multiple tools. By leveraging various underlying QSAR algorithms and training datasets, the resulting consensus prediction should yield better overall predictive ability. We present a novel ensemble QSAR model using Bayesian classification. The model allows for varying a cut-off parameter that allows for a selection in the desirable trade-off between model sensitivity and specificity. The predictive performance of the ensemble model is compared with four in silico tools (Toxtree, Lazar, OECD Toolbox, and Danish QSAR) to predict carcinogenicity for a dataset of air toxins (332 chemicals) and a subset of the gold carcinogenic potency database (480 chemicals). Leave-one-out cross validation results show that the ensemble model achieves the best trade-off between sensitivity and specificity (accuracy: 83.8 % and 80.4 %, and balanced accuracy: 80.6 % and 80.8 %) and highest inter-rater agreement [kappa (κ): 0.63 and 0.62] for both the datasets. The ROC curves demonstrate the utility of the cut-off feature in the predictive ability of the ensemble model. In conclusion, this feature provides an additional control to the regulators in grading a chemical based on the severity of the toxic endpoint under study.

  5. A probabilistic, distributed, recursive mechanism for decision-making in the brain

    PubMed Central

    Gurney, Kevin N.

    2018-01-01

    Decision formation recruits many brain regions, but the procedure they jointly execute is unknown. Here we characterize its essential composition, using as a framework a novel recursive Bayesian algorithm that makes decisions based on spike-trains with the statistics of those in sensory cortex (MT). Using it to simulate the random-dot-motion task, we demonstrate it quantitatively replicates the choice behaviour of monkeys, whilst predicting losses of otherwise usable information from MT. Its architecture maps to the recurrent cortico-basal-ganglia-thalamo-cortical loops, whose components are all implicated in decision-making. We show that the dynamics of its mapped computations match those of neural activity in the sensorimotor cortex and striatum during decisions, and forecast those of basal ganglia output and thalamus. This also predicts which aspects of neural dynamics are and are not part of inference. Our single-equation algorithm is probabilistic, distributed, recursive, and parallel. Its success at capturing anatomy, behaviour, and electrophysiology suggests that the mechanism implemented by the brain has these same characteristics. PMID:29614077

  6. LOX/Hydrogen Coaxial Injector Atomization Test Program

    NASA Technical Reports Server (NTRS)

    Zaller, M.

    1990-01-01

    Quantitative information about the atomization of injector sprays is needed to improve the accuracy of computational models that predict the performance and stability margin of liquid propellant rocket engines. To obtain this data, a facility for the study of spray atomization is being established at NASA-Lewis to determine the drop size and velocity distributions occurring in vaporizing liquid sprays at supercritical pressures. Hardware configuration and test conditions are selected to make the cold flow simulant testing correspond as closely as possible to conditions in liquid oxygen (LOX)/gaseous H2 rocket engines. Drop size correlations from the literature, developed for liquid/gas coaxial injector geometries, are used to make drop size predictions for LOX/H2 coaxial injectors. The mean drop size predictions for a single element coaxial injector range from 0.1 to 2000 microns, emphasizing the need for additional studies of the atomization process in LOX/H2 engines. Selection of cold flow simulants, measurement techniques, and hardware for LOX/H2 atomization simulations are discussed.

  7. A Bayesian network model for predicting type 2 diabetes risk based on electronic health records

    NASA Astrophysics Data System (ADS)

    Xie, Jiang; Liu, Yan; Zeng, Xu; Zhang, Wu; Mei, Zhen

    2017-07-01

    An extensive, in-depth study of diabetes risk factors (DBRF) is of crucial importance to prevent (or reduce) the chance of suffering from type 2 diabetes (T2D). Accumulation of electronic health records (EHRs) makes it possible to build nonlinear relationships between risk factors and diabetes. However, current DBRF research mainly focuses on qualitative analyses, and inconsistency among physical examination items means that risk factors are likely to be lost, which motivates the study of novel machine learning approaches for risk model development. In this paper, we use Bayesian networks (BNs) to analyze the relationship between physical examination information and T2D, and to quantify the link between risk factors and T2D. Furthermore, with the quantitative analyses of DBRF, we adopt EHRs and propose a machine learning approach based on BNs to predict the risk of T2D. The experiments demonstrate that our approach can lead to better predictive performance than the classical risk model.
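
    A toy discrete example of the kind of probabilistic query a Bayesian network over examination variables supports, here computing the risk of T2D given BMI by summing out age. The structure, categories, and probabilities are invented for illustration and are not the network learned in the cited study.

    ```python
    import numpy as np

    # Toy discrete network: Age -> T2D <- BMI (structure and numbers are illustrative).
    p_age = np.array([0.7, 0.3])              # P(Age): [<50, >=50]
    # P(T2D = 1 | Age, BMI), indexed [age, bmi] with BMI categories [normal, high].
    p_t2d = np.array([[0.04, 0.10],
                      [0.10, 0.25]])

    def risk_given_bmi(bmi_idx):
        """P(T2D = 1 | BMI) by summing out Age (Age and BMI treated as independent here)."""
        return float(np.sum(p_age * p_t2d[:, bmi_idx]))

    print("P(T2D | normal BMI) = %.3f" % risk_given_bmi(0))
    print("P(T2D | high BMI)   = %.3f" % risk_given_bmi(1))
    ```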

  8. Qualitative and quantitative comparison of geostatistical techniques of porosity prediction from the seismic and logging data: a case study from the Blackfoot Field, Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Maurya, S. P.; Singh, K. H.; Singh, N. P.

    2018-05-01

    In the present study, three recently developed geostatistical methods (single-attribute analysis, multi-attribute analysis, and a probabilistic neural network algorithm) have been used to predict porosity in the inter-well region of the Blackfoot field, Alberta, Canada, an offshore oil field. These techniques make use of seismic attributes generated by model based inversion and colored inversion techniques. The principal objective of the study is to find a suitable combination of seismic inversion and geostatistical techniques to predict porosity and to identify prospective zones in the 3D seismic volume. The porosity estimated from these geostatistical approaches is corroborated with the well log porosity. The results suggest that all three implemented geostatistical methods are efficient and reliable for predicting porosity, but the multi-attribute and probabilistic neural network analyses provide more accurate and higher resolution porosity sections. A low impedance (6000-8000 m/s g/cc) and high porosity (> 15%) zone is interpreted from the inverted impedance and porosity sections, respectively, in the 1060-1075 ms time interval and is characterized as a reservoir. The qualitative and quantitative results demonstrate that, of all the employed geostatistical methods, the probabilistic neural network along with model based inversion is the most efficient for predicting porosity in the inter-well region.

  9. Probabilistic modeling of discourse-aware sentence processing.

    PubMed

    Dubey, Amit; Keller, Frank; Sturt, Patrick

    2013-07-01

    Probabilistic models of sentence comprehension are increasingly relevant to questions concerning human language processing. However, such models are often limited to syntactic factors. This restriction is unrealistic in light of experimental results suggesting interactions between syntax and other forms of linguistic information in human sentence processing. To address this limitation, this article introduces two sentence processing models that augment a syntactic component with information about discourse co-reference. The novel combination of probabilistic syntactic components with co-reference classifiers permits them to more closely mimic human behavior than existing models. The first model uses a deep model of linguistics, based in part on probabilistic logic, allowing it to make qualitative predictions on experimental data; the second model uses shallow processing to make quantitative predictions on a broad-coverage reading-time corpus. Copyright © 2013 Cognitive Science Society, Inc.

  10. Binaural signal detection - Equalization and cancellation theory.

    NASA Technical Reports Server (NTRS)

    Durlach, N. I.

    1972-01-01

    The improvement in masked-signal detection afforded by two ears (i.e., binaural unmasking) is explained on the basis of a descriptive model of the processing of binaural stimuli by a system consisting of two bandpass filters, an equalization and cancellation mechanism, and a decision device. The main ideas of the model are initially explained, and a general equation is derived for the purpose of making quantitative predictions. Comparisons are then made between various special cases of this equation and experimental data. Failures of the preliminary model in predicting the data are considered, and possible revisions are discussed.

  11. Global Quantitative Modeling of Chromatin Factor Interactions

    PubMed Central

    Zhou, Jian; Troyanskaya, Olga G.

    2014-01-01

    Chromatin is the driver of gene regulation, yet understanding the molecular interactions underlying chromatin factor combinatorial patterns (or the “chromatin codes”) remains a fundamental challenge in chromatin biology. Here we developed a global modeling framework that leverages chromatin profiling data to produce a systems-level view of the macromolecular complex of chromatin. Our model utilizes maximum entropy modeling with regularization-based structure learning to statistically dissect dependencies between chromatin factors and produce an accurate probability distribution of the chromatin code. Our unsupervised quantitative model, trained on genome-wide chromatin profiles of 73 histone marks and chromatin proteins from modENCODE, enabled various data-driven inferences about chromatin profiles and interactions. We provided a highly accurate predictor of chromatin factor pairwise interactions validated by known experimental evidence, and for the first time enabled higher-order interaction prediction. Our predictions can thus help guide future experimental studies. The model can also serve as an inference engine for predicting unknown chromatin profiles — we demonstrated that with this approach we can leverage data from well-characterized cell types to help understand less-studied cell types or conditions. PMID:24675896

  12. Towards a quantitative description of tunneling conductance of superconductors: Application to LiFeAs

    DOE PAGES

    Kreisel, A.; Nelson, R.; Berlijn, T.; ...

    2016-12-27

    Since the discovery of iron-based superconductors, a number of theories have been put forward to explain the qualitative origin of pairing, but there have been few attempts to make quantitative, material-specific comparisons to experimental results. The spin-fluctuation theory of electronic pairing, based on first-principles electronic structure calculations, makes predictions for the superconducting gap. Within the same framework, the surface wave functions may also be calculated, allowing, e.g., for detailed comparisons between theoretical results and measured scanning tunneling topographs and spectra. We present such a comparison between theory and experiment on the Fe-based superconductor LiFeAs. Our results for the homogeneous surface as well as impurity states are presented as a benchmark test of the theory. For the homogeneous system, we argue that the maxima of topographic image intensity may be located at positions above either the As or Li atoms, depending on tip height and the setpoint current of the measurement. We further report the experimental observation of transitions between As- and Li-registered lattices as functions of both tip height and setpoint bias, in agreement with this prediction. Next, we give a detailed comparison between the simulated scanning tunneling microscopy images of transition-metal defects with experiment. Finally, we discuss possible extensions of the current framework to obtain a theory with true predictive power for scanning tunneling microscopy in Fe-based systems.

  13. Towards a quantitative description of tunneling conductance of superconductors: Application to LiFeAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreisel, A.; Nelson, R.; Berlijn, T.

    Since the discovery of iron-based superconductors, a number of theories have been put forward to explain the qualitative origin of pairing, but there have been few attempts to make quantitative, material-specific comparisons to experimental results. The spin-fluctuation theory of electronic pairing, based on first-principles electronic structure calculations, makes predictions for the superconducting gap. Within the same framework, the surface wave functions may also be calculated, allowing, e.g., for detailed comparisons between theoretical results and measured scanning tunneling topographs and spectra. We present such a comparison between theory and experiment on the Fe-based superconductor LiFeAs. Our results for the homogeneous surface as well as impurity states are presented as a benchmark test of the theory. For the homogeneous system, we argue that the maxima of topographic image intensity may be located at positions above either the As or Li atoms, depending on tip height and the setpoint current of the measurement. We further report the experimental observation of transitions between As- and Li-registered lattices as functions of both tip height and setpoint bias, in agreement with this prediction. Next, we give a detailed comparison between the simulated scanning tunneling microscopy images of transition-metal defects with experiment. Finally, we discuss possible extensions of the current framework to obtain a theory with true predictive power for scanning tunneling microscopy in Fe-based systems.

  14. Evaluation of an ensemble of genetic models for prediction of a quantitative trait.

    PubMed

    Milton, Jacqueline N; Steinberg, Martin H; Sebastiani, Paola

    2014-01-01

    Many genetic markers have been shown to be associated with common quantitative traits in genome-wide association studies. Typically these associated genetic markers have small to modest effect sizes and individually they explain only a small amount of the variability of the phenotype. In order to build a genetic prediction model without fitting a multiple linear regression model with possibly hundreds of genetic markers as predictors, researchers often summarize the joint effect of risk alleles into a genetic score that is used as a covariate in the genetic prediction model. However, the prediction accuracy can be highly variable and selecting the optimal number of markers to be included in the genetic score is challenging. In this manuscript we present a strategy to build an ensemble of genetic prediction models from data and we show that the ensemble-based method makes the challenge of choosing the number of genetic markers more amenable. Using simulated data with varying heritability and number of genetic markers, we compare the predictive accuracy and inclusion of true positive and false positive markers of a single genetic prediction model and our proposed ensemble method. The results show that the ensemble of genetic models tends to include a larger number of genetic variants than a single genetic model and it is more likely to include all of the true genetic markers. This increased sensitivity is obtained at the price of a lower specificity that appears to minimally affect the predictive accuracy of the ensemble.
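
    The weighted genetic score described can be sketched as a sum of risk-allele counts multiplied by per-marker effect sizes, used as a single covariate in a regression model. In the simulation below the weights are the simulated effects themselves; in practice they would come from published GWAS estimates, and the ensemble procedure of the cited study is not reproduced here.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(3)
    n, m = 500, 40                                   # individuals, genetic markers
    genotypes = rng.binomial(2, 0.3, size=(n, m))    # risk-allele counts (0, 1, 2)
    effects = rng.normal(0, 0.1, m)                  # simulated per-marker effect sizes
    phenotype = genotypes @ effects + rng.normal(0, 1, n)

    # Weighted genetic score: sum over markers of (allele count * effect size).
    # In real applications the weights are GWAS effect estimates, not known truths.
    score = genotypes @ effects

    model = LinearRegression().fit(score.reshape(-1, 1), phenotype)
    r2 = model.score(score.reshape(-1, 1), phenotype)
    print("variance explained by the genetic score (R^2): %.2f" % r2)
    ```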

  15. Landslide hazard assessment: recent trends and techniques.

    PubMed

    Pardeshi, Sudhakar D; Autade, Sumant E; Pardeshi, Suchitra S

    2013-01-01

    Landslide hazard assessment is an important step towards landslide hazard and risk management. There are several methods of Landslide Hazard Zonation (LHZ), viz. heuristic, semi-quantitative, quantitative, probabilistic and multi-criteria decision making processes. However, no single method is universally accepted for effective assessment of landslide hazards. In recent years, several attempts have been made to apply different methods of LHZ and to compare results in order to find the best suited model. This paper presents a review of research on landslide hazard mapping published in recent years. Advanced multivariate techniques have proved effective in the spatial prediction of landslides with a high degree of accuracy. Physical process based models also perform well in LHZ mapping, even in areas with a poor database. Multi-criteria decision making approaches also play a significant role in determining the relative importance of landslide causative factors in the slope instability process. Remote Sensing and Geographical Information Systems (GIS) are powerful tools for assessing landslide hazards and have been used extensively in landslide research over the last decade. Aerial photographs and high resolution satellite data are useful in detecting, mapping and monitoring landslide processes. GIS based LHZ models help not only to map and monitor landslides but also to predict future slope failures. Advancements in geospatial technologies have opened the door to detailed and accurate assessment of landslide hazards.

  16. Comparing models for quantitative risk assessment: an application to the European Registry of foreign body injuries in children.

    PubMed

    Berchialla, Paola; Scarinzi, Cecilia; Snidero, Silvia; Gregori, Dario

    2016-08-01

    Risk assessment is the systematic study of decisions subject to uncertain consequences. Increasing interest has focused on modeling techniques such as Bayesian Networks because of their capability of (1) combining, in a probabilistic framework, different types of evidence including both expert judgments and objective data; (2) overturning previous beliefs in the light of new information being received; and (3) making predictions even with incomplete data. In this work, we propose a comparison among Bayesian Networks and other classical Quantitative Risk Assessment techniques such as Neural Networks, Classification Trees, Random Forests and Logistic Regression models. Hybrid approaches, combining both Classification Trees and Bayesian Networks, were also considered. Among Bayesian Networks, a clear distinction is made between a purely data-driven approach and a combination of expert knowledge with objective data. The aim of this paper is to evaluate which of these models can best be applied, in the framework of Quantitative Risk Assessment, to assess the safety of children who are exposed to the risk of inhalation/insertion/aspiration of consumer products. The issue of preventing injuries in children is of paramount importance, in particular where product design is involved: quantifying the risk associated with product characteristics can be of great usefulness in informing product safety design regulation. Data from the European Registry of Foreign Body Injuries formed the starting evidence for risk assessment. Results showed that Bayesian Networks offered both ease of interpretability and accuracy in making predictions, even if simpler models like logistic regression still performed well. © The Author(s) 2013.

  17. Cortical and Hippocampal Correlates of Deliberation during Model-Based Decisions for Rewards in Humans

    PubMed Central

    Bornstein, Aaron M.; Daw, Nathaniel D.

    2013-01-01

    How do we use our memories of the past to guide decisions we've never had to make before? Although extensive work describes how the brain learns to repeat rewarded actions, decisions can also be influenced by associations between stimuli or events not directly involving reward — such as when planning routes using a cognitive map or chess moves using predicted countermoves — and these sorts of associations are critical when deciding among novel options. This process is known as model-based decision making. While the learning of environmental relations that might support model-based decisions is well studied, and separately this sort of information has been inferred to impact decisions, there is little evidence concerning the full cycle by which such associations are acquired and drive choices. Of particular interest is whether decisions are directly supported by the same mnemonic systems characterized for relational learning more generally, or instead rely on other, specialized representations. Here, building on our previous work, which isolated dual representations underlying sequential predictive learning, we directly demonstrate that one such representation, encoded by the hippocampal memory system and adjacent cortical structures, supports goal-directed decisions. Using interleaved learning and decision tasks, we monitor predictive learning directly and also trace its influence on decisions for reward. We quantitatively compare the learning processes underlying multiple behavioral and fMRI observables using computational model fits. Across both tasks, a quantitatively consistent learning process explains reaction times, choices, and both expectation- and surprise-related neural activity. The same hippocampal and ventral stream regions engaged in anticipating stimuli during learning are also engaged in proportion to the difficulty of decisions. These results support a role for predictive associations learned by the hippocampal memory system to be recalled during choice formation. PMID:24339770

  18. Recommendations for evaluation of computational methods

    NASA Astrophysics Data System (ADS)

    Jain, Ajay N.; Nicholls, Anthony

    2008-03-01

    The field of computational chemistry, particularly as applied to drug design, has become increasingly important in terms of the practical application of predictive modeling to pharmaceutical research and development. Tools for exploiting protein structures or sets of ligands known to bind particular targets can be used for binding-mode prediction, virtual screening, and prediction of activity. A serious weakness within the field is a lack of standards with respect to quantitative evaluation of methods, data set preparation, and data set sharing. Our goal should be to report new methods or comparative evaluations of methods in a manner that supports decision making for practical applications. Here we propose a modest beginning, with recommendations for requirements on statistical reporting, requirements for data sharing, and best practices for benchmark preparation and usage.

  19. The Bragg Reflection Polarimeter On the Gravity and Extreme Magnetism Small Explorer Mission

    NASA Astrophysics Data System (ADS)

    Allured, Ryan; Griffiths, S.; Daly, R.; Prieskorn, Z.; Marlowe, H.; Kaaret, P.; GEMS Team

    2011-09-01

    The strong gravity associated with black holes warps the spacetime outside of the event horizon, and it is predicted that this will leave characteristic signatures on the polarization of X-ray emission originating in the accretion disk. The Gravity and Extreme Magnetism Small Explorer (GEMS) mission will be the first observatory with the capability to make polarization measurements with enough sensitivity to quantitatively test this prediction. Students at the University of Iowa are currently working on the development of the Bragg Reflection Polarimeter (BRP), a soft X-ray polarimeter sensitive at 500 eV, that is the student experiment on GEMS. The BRP will complement the main experiment by making a polarization measurement from accreting black holes below the main energy band (2-10 keV). This measurement will constrain the inclination of the accretion disk and tighten measurements of black hole spin.

  20. Aquatic effects assessment: needs and tools.

    PubMed

    Marchini, Silvia

    2002-01-01

    In the assessment of the adverse effects that pollutants can produce on exposed ecosystems, different approaches can be followed depending on the quality and quantity of information available; their advantages and limits are discussed here with reference to the aquatic compartment. When experimental data are lacking, a predictive approach can be pursued by making use of validated quantitative structure-activity relationships (QSARs), which provide reliable ecotoxicity estimates only if appropriate models are applied. The experimental approach is central to any environmental hazard assessment procedure, although many uncertainties underlying the extrapolation from a limited set of single-species laboratory data to the complexity of the ecosystem (e.g., the limitations of common summary statistics, the variability of species sensitivity, the need to consider alterations at higher levels of integration) make the task difficult. When adequate toxicity information is available, the statistical extrapolation approach can be used to predict environmentally compatible concentrations.

  1. An integrated, ethically driven environmental model of clinical decision making in emergency settings.

    PubMed

    Wolf, Lisa

    2013-02-01

    The aim was to explore the relationship between multiple variables within a model of critical thinking and moral reasoning. The design was a quantitative descriptive correlational study using a purposive sample of 200 emergency nurses. Measured variables were accuracy in clinical decision-making, moral reasoning, perceived care environment, and demographics. Analysis was by bivariate correlation using Pearson's product-moment correlation coefficients, chi-square, and multiple linear regression analysis. The elements identified in the integrated ethically-driven environmental model of clinical decision-making (IEDEM-CD) correctly depict moral reasoning and environment of care as factors significantly affecting accuracy in decision-making. The integrated, ethically driven environmental model of clinical decision making is a framework useful for predicting clinical decision making accuracy for emergency nurses in practice, with further implications in education, research and policy. The model also provides a diagnostic and therapeutic framework for identifying and remediating individual and environmental challenges to accurate clinical decision making. © 2012, The Author. International Journal of Nursing Knowledge © 2012, NANDA International.

  2. Predicting autism at birth.

    PubMed

    Steinman, Gary

    2013-07-01

    The amounts of at least three biochemical factors are more often abnormal in autistic people than in neurologically normal ones. They include insulin-like growth factor, anti-myelin basic protein, and serotonin. This may explain why processes initiated in utero that hinder normal neurogenesis, especially myelination, continue after delivery. Quantitation of these parameters may make possible the calculation of an autism index, anticipating at birth which children will ultimately develop overt autism. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Theoretical Models for Aircraft Availability: Classical Approach to Identification of Trends, Seasonality, and System Constraints in the Development of Realized Models

    DTIC Science & Technology

    2004-03-01

    ... predicting future events (Heizer and Render, 1999). Forecasting techniques fall into two major categories, qualitative and quantitative methods ... of the past data used to make the forecast (Heizer et al., 1999). Explanatory forecasting models assume that the variable being forecasted ... Cited sources in the excerpt include "C-17 Globemaster III," www.globalsecurity.org/military/systems/aircraft/c-17-history.htm (2003), and Heizer, Jay, and Barry Render.

  4. Predicting Ideological Prejudice

    PubMed Central

    Brandt, Mark J.

    2017-01-01

    A major shortcoming of current models of ideological prejudice is that although they can anticipate the direction of the association between participants’ ideology and their prejudice against a range of target groups, they cannot predict the size of this association. I developed and tested models that can make specific size predictions for this association. A quantitative model that used the perceived ideology of the target group as the primary predictor of the ideology-prejudice relationship was developed with a representative sample of Americans (N = 4,940) and tested against models using the perceived status of and choice to belong to the target group as predictors. In four studies (total N = 2,093), ideology-prejudice associations were estimated, and these observed estimates were compared with the models’ predictions. The model that was based only on perceived ideology was the most parsimonious with the smallest errors. PMID:28394693

  5. Time-to-contact estimation of accelerated stimuli is based on first-order information.

    PubMed

    Benguigui, Nicolas; Ripoll, Hubert; Broderick, Michael P

    2003-12-01

    The goal of this study was to test whether 1st-order information, which does not account for acceleration, is used (a) to estimate the time to contact (TTC) of an accelerated stimulus after the occlusion of a final part of its trajectory and (b) to indirectly intercept an accelerated stimulus with a thrown projectile. Both tasks require the production of an action on the basis of predictive information acquired before the arrival of the stimulus at the target and allow the experimenter to make quantitative predictions about the participants' use (or nonuse) of 1st-order information. The results show that participants do not use information about acceleration and that they commit errors that rely quantitatively on 1st-order information even when acceleration is psychophysically detectable. In the indirect interceptive task, action is planned about 200 ms before the initiation of the movement, at which time the 1st-order TTC attains a critical value. ((c) 2003 APA, all rights reserved)
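
    For reference, the distinction between first-order information and the true arrival time can be written out explicitly; the symbols below (distance d, closing velocity v, acceleration a) are generic and not taken from the paper.

    ```latex
    % First-order estimate, which ignores acceleration:
    \mathrm{TTC}_{1} = \frac{d}{v}
    % Actual time to contact, from d = v\,t + \tfrac{1}{2} a t^{2}:
    \mathrm{TTC} = \frac{-v + \sqrt{v^{2} + 2 a d}}{a} \qquad (a \neq 0)
    ```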

  6. Mass and Environment as Drivers of Galaxy Evolution: Simplicity and its Consequences

    NASA Astrophysics Data System (ADS)

    Peng, Yingjie

    2012-01-01

    At first sight the galaxy population appears to be composed of infinitely complex types and properties; however, when large samples of galaxies are studied, it appears that the vast majority of galaxies just follow simple scaling relations and similar evolutionary modes, while the outliers represent a small minority. The underlying simplicities of the interrelationships among stellar mass, star formation rate and environment are seen in SDSS and zCOSMOS. We demonstrate that the differential effects of mass and environment are completely separable out to z ~ 1, indicating that two distinct physical processes are operating, namely "mass quenching" and "environment quenching". These two simple quenching processes, plus some additional quenching due to merging, then naturally produce the Schechter form of the galaxy stellar mass functions and make quantitative predictions for the inter-relationships between the Schechter parameters of star-forming and passive galaxies in different environments. All of these detailed quantitative relationships are indeed seen, to very high precision, in SDSS, lending strong support to our simple empirically-based model. The model also offers qualitative explanations for the "anti-hierarchical" age-mass relation and the alpha-enrichment patterns for passive galaxies and makes some other testable predictions, such as the mass function of the population of transitory objects that are in the process of being quenched, the galaxy major- and minor-merger rates, the galaxy stellar mass assembly history, the star formation history, and so on. Although still purely phenomenological, the model makes clear what the evolutionary characteristics of the relevant physical processes must in fact be.
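
    For context, the Schechter form of the stellar mass function referred to above is, in standard notation (not reproduced from the paper):

    ```latex
    \phi(M)\,\mathrm{d}M \;=\; \phi^{*}
    \left(\frac{M}{M^{*}}\right)^{\alpha}
    \exp\!\left(-\frac{M}{M^{*}}\right)
    \frac{\mathrm{d}M}{M^{*}}
    ```

    where \phi^{*} sets the normalization, M^{*} the characteristic mass, and \alpha the low-mass slope.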

  7. Solar Cycle Predictions

    NASA Technical Reports Server (NTRS)

    Pesnell, William Dean

    2012-01-01

    Solar cycle predictions are needed to plan long-term space missions, just as weather predictions are needed to plan the launch. Fleets of satellites circle the Earth collecting many types of science data, protecting astronauts, and relaying information. All of these satellites are sensitive at some level to solar cycle effects. Predictions of drag on LEO spacecraft are among the most important. Launching a satellite with less propellant can mean a higher orbit, but unanticipated solar activity and increased drag can make that a Pyrrhic victory as you consume the reduced propellant load more rapidly. Energetic events at the Sun can produce crippling radiation storms that endanger all assets in space. Solar cycle predictions also anticipate the shortwave emissions that cause degradation of solar panels. Testing solar dynamo theories by quantitative predictions of what will happen in 5-20 years is the next arena for solar cycle predictions. A summary and analysis of 75 predictions of the amplitude of the upcoming Solar Cycle 24 is presented. The current state of solar cycle predictions, and some thoughts on how those predictions could be made more accurate in the future, are also discussed.

  8. Development and Assessment of Modules to Integrate Quantitative Skills in Introductory Biology Courses

    PubMed Central

    Hoffman, Kathleen; Leupen, Sarah; Dowell, Kathy; Kephart, Kerrie; Leips, Jeff

    2016-01-01

    Redesigning undergraduate biology courses to integrate quantitative reasoning and skill development is critical to prepare students for careers in modern medicine and scientific research. In this paper, we report on the development, implementation, and assessment of stand-alone modules that integrate quantitative reasoning into introductory biology courses. Modules are designed to improve skills in quantitative numeracy, interpreting data sets using visual tools, and making inferences about biological phenomena using mathematical/statistical models. We also examine demographic/background data that predict student improvement in these skills through exposure to these modules. We carried out pre/postassessment tests across four semesters and used student interviews in one semester to examine how students at different levels approached quantitative problems. We found that students improved in all skills in most semesters, although there was variation in the degree of improvement among skills from semester to semester. One demographic variable, transfer status, stood out as a major predictor of the degree to which students improved (transfer students achieved much lower gains every semester, despite the fact that pretest scores in each focus area were similar between transfer and nontransfer students). We propose that increased exposure to quantitative skill development in biology courses is effective at building competency in quantitative reasoning. PMID:27146161

  9. Quantitative prediction of perceptual decisions during near-threshold fear detection

    NASA Astrophysics Data System (ADS)

    Pessoa, Luiz; Padmala, Srikanth

    2005-04-01

    A fundamental goal of cognitive neuroscience is to explain how mental decisions originate from basic neural mechanisms. The goal of the present study was to investigate the neural correlates of perceptual decisions in the context of emotional perception. To probe this question, we investigated how fluctuations in functional MRI (fMRI) signals were correlated with behavioral choice during a near-threshold fear detection task. fMRI signals predicted behavioral choice independently of stimulus properties and task accuracy in a network of brain regions linked to emotional processing: posterior cingulate cortex, medial prefrontal cortex, right inferior frontal gyrus, and left insula. We quantified the link between fMRI signals and behavioral choice in a whole-brain analysis by determining choice probabilities by means of signal-detection theory methods. Our results demonstrate that voxel-wise fMRI signals can reliably predict behavioral choice in a quantitative fashion (choice probabilities ranged from 0.63 to 0.78) at levels comparable to neuronal data. We suggest that the conscious decision that a fearful face has been seen is represented across a network of interconnected brain regions that prepare the organism to appropriately handle emotionally challenging stimuli and that regulate the associated emotional response. Keywords: decision making, emotion, functional MRI.
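
    A minimal sketch of how a voxel-wise choice probability of the kind reported here (0.63 to 0.78) can be computed from trial-by-trial fMRI amplitudes sorted by behavioral choice, using the standard ROC-area (rank) estimator; the data and variable names are illustrative, not from the study.

    ```python
    import numpy as np

    def choice_probability(signals_choice_a, signals_choice_b):
        """ROC-area (Mann-Whitney) estimate of choice probability.

        Returns P(random sample from 'a' trials > random sample from 'b' trials),
        counting ties as 0.5; a value of 0.5 means the signal carries no choice
        information.
        """
        a = np.asarray(signals_choice_a, dtype=float)
        b = np.asarray(signals_choice_b, dtype=float)
        greater = (a[:, None] > b[None, :]).sum()
        ties = (a[:, None] == b[None, :]).sum()
        return (greater + 0.5 * ties) / (a.size * b.size)

    # toy usage: "seen" trials carry slightly higher BOLD amplitudes than "not seen"
    rng = np.random.default_rng(1)
    seen = rng.normal(0.4, 1.0, size=60)      # trials where a fearful face was reported
    not_seen = rng.normal(0.0, 1.0, size=60)  # trials where it was not reported
    print(round(choice_probability(seen, not_seen), 2))
    ```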

  10. Quantitative DNA Methylation Analysis Identifies a Single CpG Dinucleotide Important for ZAP-70 Expression and Predictive of Prognosis in Chronic Lymphocytic Leukemia

    PubMed Central

    Claus, Rainer; Lucas, David M.; Stilgenbauer, Stephan; Ruppert, Amy S.; Yu, Lianbo; Zucknick, Manuela; Mertens, Daniel; Bühler, Andreas; Oakes, Christopher C.; Larson, Richard A.; Kay, Neil E.; Jelinek, Diane F.; Kipps, Thomas J.; Rassenti, Laura Z.; Gribben, John G.; Döhner, Hartmut; Heerema, Nyla A.; Marcucci, Guido; Plass, Christoph; Byrd, John C.

    2012-01-01

    Purpose Increased ZAP-70 expression predicts poor prognosis in chronic lymphocytic leukemia (CLL). Current methods for accurately measuring ZAP-70 expression are problematic, preventing widespread application of these tests in clinical decision making. We therefore used comprehensive DNA methylation profiling of the ZAP-70 regulatory region to identify sites important for transcriptional control. Patients and Methods High-resolution quantitative DNA methylation analysis of the entire ZAP-70 gene regulatory regions was conducted on 247 samples from patients with CLL from four independent clinical studies. Results Through this comprehensive analysis, we identified a small area in the 5′ regulatory region of ZAP-70 that showed large variability in methylation in CLL samples but was universally methylated in normal B cells. High correlation with mRNA and protein expression, as well as activity in promoter reporter assays, revealed that within this differentially methylated region, a single CpG dinucleotide and neighboring nucleotides are particularly important in ZAP-70 transcriptional regulation. Furthermore, by using clustering approaches, we identified a prognostic role for this site in four independent data sets of patients with CLL using time to treatment, progression-free survival, and overall survival as clinical end points. Conclusion Comprehensive quantitative DNA methylation analysis of the ZAP-70 gene in CLL identified important regions responsible for transcriptional regulation. In addition, loss of methylation at a specific single CpG dinucleotide in the ZAP-70 5′ regulatory sequence is a highly predictive and reproducible biomarker of poor prognosis in this disease. This work demonstrates the feasibility of using quantitative specific ZAP-70 methylation analysis as a relevant clinically applicable prognostic test in CLL. PMID:22564988

  11. Quantitative biology: where modern biology meets physical sciences.

    PubMed

    Shekhar, Shashank; Zhu, Lian; Mazutis, Linas; Sgro, Allyson E; Fai, Thomas G; Podolski, Marija

    2014-11-05

    Quantitative methods and approaches have been playing an increasingly important role in cell biology in recent years. They involve making accurate measurements to test a predefined hypothesis in order to compare experimental data with predictions generated by theoretical models, an approach that has benefited physicists for decades. Building quantitative models in experimental biology not only has led to discoveries of counterintuitive phenomena but has also opened up novel research directions. To make the biological sciences more quantitative, we believe a two-pronged approach needs to be taken. First, graduate training needs to be revamped to ensure biology students are adequately trained in physical and mathematical sciences and vice versa. Second, students of both the biological and the physical sciences need to be provided adequate opportunities for hands-on engagement with the methods and approaches necessary to be able to work at the intersection of the biological and physical sciences. We present the annual Physiology Course organized at the Marine Biological Laboratory (Woods Hole, MA) as a case study for a hands-on training program that gives young scientists the opportunity not only to acquire the tools of quantitative biology but also to develop the necessary thought processes that will enable them to bridge the gap between these disciplines. © 2014 Shekhar, Zhu, Mazutis, Sgro, Fai, and Podolski. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  12. Probabilistic prediction of barrier-island response to hurricanes

    USGS Publications Warehouse

    Plant, Nathaniel G.; Stockdon, Hilary F.

    2012-01-01

    Prediction of barrier-island response to hurricane attack is important for assessing the vulnerability of communities, infrastructure, habitat, and recreational assets to the impacts of storm surge, waves, and erosion. We have demonstrated that a conceptual model intended to make qualitative predictions of the type of beach response to storms (e.g., beach erosion, dune erosion, dune overwash, inundation) can be reformulated in a Bayesian network to make quantitative predictions of the morphologic response. In an application of this approach at Santa Rosa Island, FL, predicted dune-crest elevation changes in response to Hurricane Ivan explained about 20% to 30% of the observed variance. An extended Bayesian network based on the original conceptual model, which included dune elevations, storm surge, and swash, but with the addition of beach and dune widths as input variables, showed improved skill compared to the original model, explaining 70% of dune elevation change variance and about 60% of dune and shoreline position change variance. This probabilistic approach accurately represented prediction uncertainty (measured with the log likelihood ratio), and it outperformed the baseline prediction (i.e., the prior distribution based on the observations). Finally, sensitivity studies demonstrated that degrading the resolution of the Bayesian network or removing data from the calibration process reduced the skill of the predictions by 30% to 40%. The reduction in skill did not change conclusions regarding the relative importance of the input variables, and the extended model's skill always outperformed the original model.
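
    The following toy example (with made-up probabilities, not the calibrated network from the study) illustrates the basic mechanics of predicting a morphologic response from a small discrete Bayesian network by marginalizing over unobserved parent nodes.

    ```python
    # Toy discrete Bayesian network in the spirit of the storm-response model:
    # storm surge and dune height are parents of the morphologic response regime.
    # All probabilities below are invented for illustration only.
    p_surge = {"low": 0.7, "high": 0.3}
    p_response = {  # P(response | surge, dune)
        ("low", "low"):   {"beach_erosion": 0.6, "dune_erosion": 0.3, "overwash": 0.1},
        ("low", "high"):  {"beach_erosion": 0.8, "dune_erosion": 0.15, "overwash": 0.05},
        ("high", "low"):  {"beach_erosion": 0.1, "dune_erosion": 0.4, "overwash": 0.5},
        ("high", "high"): {"beach_erosion": 0.3, "dune_erosion": 0.5, "overwash": 0.2},
    }

    def predict_response(dune_obs):
        """P(response | observed dune height), summing over the unobserved surge node."""
        out = {}
        for surge, p_s in p_surge.items():
            for response, p_r in p_response[(surge, dune_obs)].items():
                out[response] = out.get(response, 0.0) + p_s * p_r
        return out

    print(predict_response("low"))   # predictive distribution when dunes are low
    ```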

  13. Symposium (International) (4th) on DETONATION Held at White Oak, Maryland on 12-15 October 1965.

    DTIC Science & Technology

    1965-10-15

    ... with much more success than by the various small ... Kury et al. and earlier by Wilkins et al. (UCRL-7797). The theoretical adiabatic exponent was ... The ability to accelerate metal makes it possible to measure brisance quantitatively, and clarifies effects on confining metals ... ima in the adiabatic exponent versus volume plots of Kury et al. ... variable covolume equations of state predict that the adiabatic exponent should thereafter decrease (essentially ...

  14. FMRI Is a Valid Noninvasive Alternative to Wada Testing

    PubMed Central

    Binder, Jeffrey R.

    2010-01-01

    Partial removal of the anterior temporal lobe (ATL) is a highly effective surgical treatment for intractable temporal lobe epilepsy, yet roughly half of patients who undergo left ATL resection show decline in language or verbal memory function postoperatively. Two recent studies demonstrate that preoperative fMRI can predict postoperative naming and verbal memory changes in such patients. Most importantly, fMRI significantly improves the accuracy of prediction relative to other noninvasive measures used alone. Addition of language and memory lateralization data from the intracarotid amobarbital (Wada) test did not improve prediction accuracy in these studies. Thus, fMRI provides patients and practitioners with a safe, non-invasive, and well-validated tool for making better-informed decisions regarding elective surgery based on a quantitative assessment of cognitive risk. PMID:20850386

  15. Extension to Higher Mass Numbers of an Improved Knockout-Ablation-Coalescence Model for Secondary Neutron and Light Ion Production in Cosmic Ray Interactions

    NASA Astrophysics Data System (ADS)

    Indi Sriprisan, Sirikul; Townsend, Lawrence; Cucinotta, Francis A.; Miller, Thomas M.

    Purpose: An analytical knockout-ablation-coalescence model capable of making quantitative predictions of the neutron spectra from high-energy nucleon-nucleus and nucleus-nucleus collisions is being developed for use in space radiation protection studies. The FORTRAN computer code that implements this model is called UBERNSPEC. The knockout or abrasion stage of the model is based on Glauber multiple scattering theory. The ablation part of the model uses the classical evaporation model of Weisskopf-Ewing. In earlier work, the knockout-ablation model has been extended to incorporate important coalescence effects into the formalism. Recently, alpha coalescence has been incorporated, adding the ability to predict light ion spectra with the coalescence model. The earlier versions were limited to nuclei with mass numbers less than 69. In this work, the UBERNSPEC code has been extended to make predictions of secondary neutron and light ion production from the interactions of heavy charged particles with higher mass numbers (as large as 238). The predictions are compared with published measurements of neutron spectra and light ion energy spectra for a variety of collision pairs. Furthermore, the predicted spectra from this work are compared with the predictions from the recently developed heavy ion event generator incorporated in the Monte Carlo radiation transport code HETC-HEDS.

  16. Multivariate reference technique for quantitative analysis of fiber-optic tissue Raman spectroscopy.

    PubMed

    Bergholt, Mads Sylvest; Duraipandian, Shiyamala; Zheng, Wei; Huang, Zhiwei

    2013-12-03

    We report a novel method making use of multivariate fused silica and sapphire reference Raman signals generated from a ball-lens fiber-optic Raman probe for quantitative analysis of in vivo tissue Raman measurements in real time. Partial least-squares (PLS) regression modeling is applied to extract the characteristic internal reference Raman signals (e.g., shoulder of the prominent fused silica boson peak (~130 cm(-1)); distinct sapphire ball-lens peaks (380, 417, 646, and 751 cm(-1))) from the ball-lens fiber-optic Raman probe for quantitative analysis of fiber-optic Raman spectroscopy. To evaluate the analytical value of this novel multivariate reference technique, a rapid Raman spectroscopy system coupled with a ball-lens fiber-optic Raman probe is used for in vivo oral tissue Raman measurements (n = 25 subjects) under 785 nm laser excitation powers ranging from 5 to 65 mW. An accurate linear relationship (R(2) = 0.981) with a root-mean-square error of cross validation (RMSECV) of 2.5 mW can be obtained for predicting the laser excitation power changes based on a leave-one-subject-out cross-validation, which is superior to the normal univariate reference method (RMSE = 6.2 mW). A root-mean-square error of prediction (RMSEP) of 2.4 mW (R(2) = 0.985) can also be achieved for laser power prediction in real time when the multivariate method is applied independently to the five new subjects (n = 166 spectra). We further apply the multivariate reference technique for quantitative analysis of gelatin tissue phantoms, which gives rise to an RMSEP of ~2.0% (R(2) = 0.998) independent of laser excitation power variations. This work demonstrates that the multivariate reference technique can be advantageously used to monitor and correct the variations of laser excitation power and fiber coupling efficiency in situ for standardizing the tissue Raman intensity to realize quantitative analysis of tissue Raman measurements in vivo, which is particularly appealing in challenging Raman endoscopic applications.
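
    A minimal sketch of the PLS step described above, assuming spectra, laser powers, and subject labels are available as arrays; the synthetic data and scikit-learn pipeline below are illustrative, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict

    # Placeholder data: spectra whose overall intensity scales with laser power.
    rng = np.random.default_rng(0)
    n_spectra, n_wavenumbers = 120, 300
    power = rng.uniform(5, 65, size=n_spectra)                    # mW
    X = power[:, None] * rng.normal(1.0, 0.05, (n_spectra, n_wavenumbers))
    subject = rng.integers(0, 25, size=n_spectra)                 # subject labels

    # Leave-one-subject-out cross-validated prediction of laser power with PLS.
    pls = PLSRegression(n_components=3)
    pred = cross_val_predict(pls, X, power, cv=LeaveOneGroupOut(), groups=subject)
    rmsecv = np.sqrt(np.mean((pred.ravel() - power) ** 2))
    print(f"RMSECV = {rmsecv:.2f} mW")
    ```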

  17. Predicting Ki67% expression from DCE-MR images of breast tumors using textural kinetic features in tumor habitats

    NASA Astrophysics Data System (ADS)

    Chaudhury, Baishali; Zhou, Mu; Farhidzadeh, Hamidreza; Goldgof, Dmitry B.; Hall, Lawrence O.; Gatenby, Robert A.; Gillies, Robert J.; Weinfurtner, Robert J.; Drukteinis, Jennifer S.

    2016-03-01

    The use of Ki67% expression, a cell proliferation marker, as a predictive and prognostic factor has been widely studied in the literature. Yet its usefulness is limited due to inconsistent cut off scores for Ki67% expression, subjective differences in its assessment in various studies, and spatial variation in expression, which makes it difficult to reproduce as a reliable independent prognostic factor. Previous studies have shown that there are significant spatial variations in Ki67% expression, which may limit its clinical prognostic utility after core biopsy. These variations are most evident when examining the periphery of the tumor vs. the core. To date, prediction of Ki67% expression from quantitative image analysis of DCE-MRI is very limited. This work presents a novel computer aided diagnosis framework to use textural kinetics to (i) predict the ratio of periphery Ki67% expression to core Ki67% expression, and (ii) predict Ki67% expression from individual tumor habitats. The pilot cohort consists of T1 weighted fat saturated DCE-MR images from 17 patients. Support vector regression with a radial basis function was used for predicting the Ki67% expression and ratios. The initial results show that texture features from individual tumor habitats are more predictive of the Ki67% expression ratio and spatial Ki67% expression than features from the whole tumor. The Ki67% expression ratio could be predicted with a root mean square error (RMSE) of 1.67%. Quantitative image analysis of DCE-MRI using textural kinetic habitats has the potential to be used as a non-invasive method for predicting the Ki67% expression and its periphery-to-core ratio, thus more accurately reporting high Ki67% expression for patient prognosis.
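
    The regression step can be sketched as follows, assuming per-habitat texture features and Ki67% ratios are available as arrays; the synthetic data, scikit-learn SVR settings, and leave-one-out evaluation below are illustrative rather than the study's actual pipeline.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    # Placeholder data: 17 patients x 12 textural kinetic features, and a
    # synthetic periphery-to-core Ki67% ratio as the regression target.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(17, 12))
    y = 1.0 + 0.3 * X[:, 0] + rng.normal(scale=0.1, size=17)

    # RBF-kernel support vector regression with feature standardization,
    # evaluated by leave-one-out cross-validation.
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
    pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
    rmse = np.sqrt(np.mean((pred - y) ** 2))
    print(f"leave-one-out RMSE = {rmse:.3f}")
    ```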

  18. Dual Processes in Decision Making and Developmental Neuroscience: A Fuzzy-Trace Model.

    PubMed

    Reyna, Valerie F; Brainerd, Charles J

    2011-09-01

    From Piaget to the present, traditional and dual-process theories have predicted improvement in reasoning from childhood to adulthood, and improvement has been observed. However, developmental reversals (reasoning biases that emerge with development) have also been observed in a growing list of paradigms. We explain how fuzzy-trace theory predicts both improvement and developmental reversals in reasoning and decision making. Drawing on research on logical and quantitative reasoning, as well as on risky decision making in the laboratory and in life, we illustrate how the same small set of theoretical principles apply to typical neurodevelopment, encompassing childhood, adolescence, and adulthood, and to neurological conditions such as autism and Alzheimer's disease. For example, framing effects (shifts in risk preference when the same decisions are phrased in terms of gains versus losses) emerge in early adolescence as gist-based intuition develops. In autistic individuals, who rely less on gist-based intuition and more on verbatim-based analysis, framing biases are attenuated (i.e., they outperform typically developing control subjects). In adults, simple manipulations based on fuzzy-trace theory can make framing effects appear and disappear depending on whether gist-based intuition or verbatim-based analysis is induced. These theoretical principles are summarized and integrated in a new mathematical model that specifies how dual modes of reasoning combine to produce predictable variability in performance. In particular, we show how the most popular and extensively studied model of decision making, prospect theory, can be derived from fuzzy-trace theory by combining analytical (verbatim-based) and intuitive (gist-based) processes.
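
    For reference, the standard prospect-theory form mentioned above is, in common notation (Kahneman and Tversky's value function, not reproduced from the paper):

    ```latex
    V \;=\; \sum_{i} \pi(p_{i})\, v(x_{i}), \qquad
    v(x) \;=\;
    \begin{cases}
    x^{\alpha}, & x \ge 0\\[2pt]
    -\lambda\,(-x)^{\beta}, & x < 0
    \end{cases}
    ```

    where \pi is a probability weighting function and \lambda > 1 captures loss aversion; the paper's contribution is to show how this form can be derived by combining gist-based (intuitive) and verbatim-based (analytical) components.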

  19. Development of Naphthalene PLIF for Making Quantitative Measurements of Ablation Products Transport in Supersonic Flows

    NASA Astrophysics Data System (ADS)

    Combs, Christopher; Clemens, Noel

    2014-11-01

    Ablation is a multi-physics process involving heat and mass transfer and codes aiming to predict ablation are in need of experimental data pertaining to the turbulent transport of ablation products for validation. Low-temperature sublimating ablators such as naphthalene can be used to create a limited physics problem and simulate ablation at relatively low temperature conditions. At The University of Texas at Austin, a technique is being developed that uses planar laser-induced fluorescence (PLIF) of naphthalene to visualize the transport of ablation products in a supersonic flow. In the current work, naphthalene PLIF will be used to make quantitative measurements of the concentration of ablation products in a Mach 5 turbulent boundary layer. For this technique to be used for quantitative research in supersonic wind tunnel facilities, the fluorescence properties of naphthalene must first be investigated over a wide range of state conditions and excitation wavelengths. The resulting calibration of naphthalene fluorescence will be applied to the PLIF images of ablation from a boundary layer plug, yielding 2-D fields of naphthalene mole fraction. These images may help provide data necessary to validate computational models of ablative thermal protection systems for reentry vehicles. Work supported by NASA Space Technology Research Fellowship Program under grant NNX11AN55H.
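
    Conceptually, once the fluorescence calibration is in hand, converting a PLIF image to mole fraction reduces to dividing out the laser sheet energy and the calibration factor; the sketch below is a schematic of that step only, and the function and parameter names are assumptions rather than the authors' code.

    ```python
    import numpy as np

    def mole_fraction(plif_image, laser_sheet_energy, calib_factor):
        """chi = S_f / (E_laser * C(T, P, lambda)); inputs may be per-pixel or scalar."""
        return plif_image / (laser_sheet_energy * calib_factor)

    signal = np.random.default_rng(0).uniform(50, 200, size=(4, 4))   # toy counts
    chi = mole_fraction(signal, laser_sheet_energy=1.0, calib_factor=1.5e3)
    print(chi.round(3))
    ```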

  20. Making quantitative morphological variation from basic developmental processes: where are we? The case of the Drosophila wing

    PubMed Central

    Alexis, Matamoro-Vidal; Isaac, Salazar-Ciudad; David, Houle

    2015-01-01

    One of the aims of evolutionary developmental biology is to discover the developmental origins of morphological variation. The discipline has mainly focused on qualitative morphological differences (e.g., presence or absence of a structure) between species. Studies addressing subtle, quantitative variation are less common. The Drosophila wing is a model for the study of development and evolution, making it suitable to investigate the developmental mechanisms underlying the subtle quantitative morphological variation observed in nature. Previous reviews have focused on the processes involved in wing differentiation, patterning and growth. Here, we investigate what is known about how the wing achieves its final shape, and what variation in development is capable of generating the variation in wing shape observed in nature. Three major developmental stages need to be considered: larval development, pupariation, and pupal development. The major cellular processes involved in the determination of tissue size and shape are cell proliferation, cell death, oriented cell division and oriented cell intercalation. We review how variation in temporal and spatial distribution of growth and transcription factors affects these cellular mechanisms, which in turn affects wing shape. We then discuss which aspects of the wing morphological variation are predictable on the basis of these mechanisms. PMID:25619644

  1. Forecasting seasonal outbreaks of influenza.

    PubMed

    Shaman, Jeffrey; Karspeck, Alicia

    2012-12-11

    Influenza recurs seasonally in temperate regions of the world; however, our ability to predict the timing, duration, and magnitude of local seasonal outbreaks of influenza remains limited. Here we develop a framework for initializing real-time forecasts of seasonal influenza outbreaks, using a data assimilation technique commonly applied in numerical weather prediction. The availability of real-time, web-based estimates of local influenza infection rates makes this type of quantitative forecasting possible. Retrospective ensemble forecasts are generated on a weekly basis following assimilation of these web-based estimates for the 2003-2008 influenza seasons in New York City. The findings indicate that real-time skillful predictions of peak timing can be made more than 7 wk in advance of the actual peak. In addition, confidence in those predictions can be inferred from the spread of the forecast ensemble. This work represents an initial step in the development of a statistically rigorous system for real-time forecast of seasonal influenza.
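
    A minimal sketch (not the authors' system) of the assimilation idea: an ensemble of SIR-type model states is nudged toward a weekly web-based infection estimate with a stochastic ensemble Kalman update and then propagated forward; the model structure, observation error, and parameter values below are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 1e5          # population size (assumed for the toy example)
    n_ens = 100      # ensemble size

    def sir_step(S, I, beta, gamma=1.0 / 5.0, dt=1.0):
        """Advance a simple SIR model one day for every ensemble member."""
        new_inf = beta * S * I / N * dt
        new_rec = gamma * I * dt
        return S - new_inf, I + new_inf - new_rec

    def enkf_update(obs_equiv, state, y_obs, obs_var):
        """Stochastic ensemble Kalman update of one variable against one observation."""
        gain = np.cov(state, obs_equiv)[0, 1] / (np.var(obs_equiv, ddof=1) + obs_var)
        perturbed_obs = y_obs + rng.normal(0.0, np.sqrt(obs_var), state.size)
        return state + gain * (perturbed_obs - obs_equiv)

    # ensemble of states and transmission parameters
    S = np.full(n_ens, 0.9 * N)
    I = rng.uniform(10, 500, n_ens)
    beta = rng.uniform(0.3, 0.8, n_ens)

    obs_infected = 300.0                      # weekly web-based estimate (toy value)
    for _ in range(7):                        # propagate the ensemble one week
        S, I = sir_step(S, I, beta)
    obs_equiv = I.copy()                      # model equivalent of the observation
    I = enkf_update(obs_equiv, I, obs_infected, obs_var=50.0 ** 2)
    beta = enkf_update(obs_equiv, beta, obs_infected, obs_var=50.0 ** 2)
    print(f"posterior mean I = {I.mean():.0f}, beta = {beta.mean():.2f}")
    ```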

  2. Forecasting seasonal outbreaks of influenza

    PubMed Central

    Shaman, Jeffrey; Karspeck, Alicia

    2012-01-01

    Influenza recurs seasonally in temperate regions of the world; however, our ability to predict the timing, duration, and magnitude of local seasonal outbreaks of influenza remains limited. Here we develop a framework for initializing real-time forecasts of seasonal influenza outbreaks, using a data assimilation technique commonly applied in numerical weather prediction. The availability of real-time, web-based estimates of local influenza infection rates makes this type of quantitative forecasting possible. Retrospective ensemble forecasts are generated on a weekly basis following assimilation of these web-based estimates for the 2003–2008 influenza seasons in New York City. The findings indicate that real-time skillful predictions of peak timing can be made more than 7 wk in advance of the actual peak. In addition, confidence in those predictions can be inferred from the spread of the forecast ensemble. This work represents an initial step in the development of a statistically rigorous system for real-time forecast of seasonal influenza. PMID:23184969

  3. Quantitative Adverse Outcome Pathways and Their ...

    EPA Pesticide Factsheets

    A quantitative adverse outcome pathway (qAOP) consists of one or more biologically based, computational models describing key event relationships linking a molecular initiating event (MIE) to an adverse outcome. A qAOP provides quantitative, dose–response, and time-course predictions that can support regulatory decision-making. Herein we describe several facets of qAOPs, including (a) motivation for development, (b) technical considerations, (c) evaluation of confidence, and (d) potential applications. The qAOP used as an illustrative example for these points describes the linkage between inhibition of cytochrome P450 19A aromatase (the MIE) and population-level decreases in the fathead minnow (FHM; Pimephales promelas). The qAOP consists of three linked computational models for the following: (a) the hypothalamic-pituitary-gonadal axis in female FHMs, where aromatase inhibition decreases the conversion of testosterone to 17β-estradiol (E2), thereby reducing E2-dependent vitellogenin (VTG; egg yolk protein precursor) synthesis, (b) VTG-dependent egg development and spawning (fecundity), and (c) fecundity-dependent population trajectory. While development of the example qAOP was based on experiments with FHMs exposed to the aromatase inhibitor fadrozole, we also show how a toxic equivalence (TEQ) calculation allows use of the qAOP to predict effects of another, untested aromatase inhibitor, iprodione. While qAOP development can be resource-intensive, the quan
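
    The toxic-equivalence step mentioned above can be illustrated in a few lines; the relative potency factor and concentrations below are invented placeholders, not values from the study.

    ```python
    # Relative potency factors and exposure concentrations are hypothetical.
    relative_potency = {"fadrozole": 1.0, "iprodione": 0.01}   # potency vs. the reference inhibitor
    exposure_conc = {"fadrozole": 0.0, "iprodione": 5.0}       # ug/L (hypothetical exposure)

    # Toxic equivalence: express the mixture as a fadrozole-equivalent concentration.
    teq = sum(exposure_conc[c] * relative_potency[c] for c in exposure_conc)
    print(f"fadrozole-equivalent concentration: {teq:.2f} ug/L")
    # The equivalent concentration can then be run through the fitted qAOP chain
    # (HPG axis -> vitellogenin -> fecundity -> population trajectory).
    ```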

  4. Iterative near-term ecological forecasting: Needs, opportunities, and challenges

    USGS Publications Warehouse

    Dietze, Michael C.; Fox, Andrew; Beck-Johnson, Lindsay; Betancourt, Julio L.; Hooten, Mevin B.; Jarnevich, Catherine S.; Keitt, Timothy H.; Kenney, Melissa A.; Laney, Christine M.; Larsen, Laurel G.; Loescher, Henry W.; Lunch, Claire K.; Pijanowski, Bryan; Randerson, James T.; Read, Emily; Tredennick, Andrew T.; Vargas, Rodrigo; Weathers, Kathleen C.; White, Ethan P.

    2018-01-01

    Two foundational questions about sustainability are “How are ecosystems and the services they provide going to change in the future?” and “How do human decisions affect these trajectories?” Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  5. Iterative near-term ecological forecasting: Needs, opportunities, and challenges.

    PubMed

    Dietze, Michael C; Fox, Andrew; Beck-Johnson, Lindsay M; Betancourt, Julio L; Hooten, Mevin B; Jarnevich, Catherine S; Keitt, Timothy H; Kenney, Melissa A; Laney, Christine M; Larsen, Laurel G; Loescher, Henry W; Lunch, Claire K; Pijanowski, Bryan C; Randerson, James T; Read, Emily K; Tredennick, Andrew T; Vargas, Rodrigo; Weathers, Kathleen C; White, Ethan P

    2018-02-13

    Two foundational questions about sustainability are "How are ecosystems and the services they provide going to change in the future?" and "How do human decisions affect these trajectories?" Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  6. A predictive framework to understand forest responses to global change.

    PubMed

    McMahon, Sean M; Dietze, Michael C; Hersh, Michelle H; Moran, Emily V; Clark, James S

    2009-04-01

    Forests are one of Earth's critical biomes. They have been shown to respond strongly to many of the drivers that are predicted to change natural systems over this century, including climate, introduced species, and other anthropogenic influences. Predicting how different tree species might respond to this complex of forces remains a daunting challenge for forest ecologists. Yet shifts in species composition and abundance can radically influence hydrological and atmospheric systems, plant and animal ranges, and human populations, making this challenge an important one to address. Forest ecologists have gathered a great deal of data over the past decades and are now using novel quantitative and computational tools to translate those data into predictions about the fate of forests. Here, after a brief review of the threats to forests over the next century, one of the more promising approaches to making ecological predictions is described: using hierarchical Bayesian methods to model forest demography and simulating future forests from those models. This approach captures complex processes, such as seed dispersal and mortality, and incorporates uncertainty due to unknown mechanisms, data problems, and parameter uncertainty. After describing the approach, an example simulating drought in a southeastern forest is offered. Finally, there is a discussion of how this approach and others need to be cast within a framework of prediction that strives to answer the important questions posed to environmental scientists, but does so with respect for the challenges inherent in predicting the future of a complex biological system.
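
    A generic hierarchical specification of the sort described (illustrative only; the demographic models in the paper are richer), here for annual tree mortality with species-level partial pooling:

    ```latex
    y_{ij} \sim \mathrm{Bernoulli}(p_{ij}), \qquad
    \operatorname{logit}(p_{ij}) = \alpha_{s(i)} + \beta_{s(i)}\, x_{ij},
    \qquad
    \alpha_{s} \sim \mathcal{N}(\mu_{\alpha}, \sigma_{\alpha}^{2}), \quad
    \beta_{s} \sim \mathcal{N}(\mu_{\beta}, \sigma_{\beta}^{2})
    ```

    Here y_{ij} indicates whether tree i dies in year j, x_{ij} is a drought covariate, and s(i) maps trees to species; drawing species-level intercepts and slopes from shared hyperdistributions is what lets parameter uncertainty propagate into the forward simulations.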

  7. Development of quantitative exposure data for a pooled exposure-response analysis of 10 silica cohorts.

    PubMed

    Mannetje, Andrea 't; Steenland, Kyle; Checkoway, Harvey; Koskela, Riitta-Sisko; Koponen, Matti; Attfield, Michael; Chen, Jingqiong; Hnizdo, Eva; DeKlerk, Nicholas; Dosemeci, Mustafa

    2002-08-01

    Comprehensive quantitative silica exposure estimates over time, measured in the same units across a number of cohorts, would make possible a pooled exposure-response analysis for lung cancer. Such an analysis would help clarify the continuing controversy regarding whether silica causes lung cancer. Existing quantitative exposure data for 10 silica-exposed cohorts were retrieved from the original investigators. Occupation- and time-specific exposure estimates were either adopted/adapted or developed for each cohort, and converted to milligram per cubic meter (mg/m(3)) respirable crystalline silica. Quantitative exposure assignments were typically based on a large number (thousands) of raw measurements, or otherwise consisted of exposure estimates by experts (for two cohorts). Median exposure level of the cohorts ranged between 0.04 and 0.59 mg/m(3) respirable crystalline silica. Exposure estimates were partially validated via their successful prediction of silicosis in these cohorts. Existing data were successfully adopted or modified to create comparable quantitative exposure estimates over time for 10 silica-exposed cohorts, permitting a pooled exposure-response analysis. The difficulties encountered in deriving common exposure estimates across cohorts are discussed. Copyright 2002 Wiley-Liss, Inc.

  8. Predictors of Shared Decision Making and Level of Agreement between Consumers and Providers in Psychiatric Care

    PubMed Central

    Fukui, Sadaaki; Salyers, Michelle P.; Matthias, Marianne S.; Collins, Linda; Thompson, John; Coffman, Melinda; Torrey, William C.

    2014-01-01

    The purpose of this study was to quantitatively examine elements of shared decision making (SDM), and to establish empirical evidence for factors correlated with SDM and the level of agreement between consumer and provider in psychiatric care. Transcripts containing 128 audio-recorded medication check-up visits with eight providers at three community mental health centers were rated using the Shared Decision Making scale, adapted from Braddock’s Informed Decision Making Scale (Braddock et al., 1997; 1999; 2008). Multilevel regression analyses revealed that greater consumer activity in the session and greater decision complexity significantly predicted the SDM score. The best predictor of agreement between consumer and provider was “exploration of consumer preference,” with a four-fold increase in full agreement when consumer preferences were discussed more completely. Enhancing active consumer participation, particularly by incorporating consumer preferences in the decision making process appears to be an important factor in SDM. PMID:23299226

  9. Silkworm cocoons inspire models for random fiber and particulate composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Fujia; Porter, David; Vollrath, Fritz

    The bioengineering design principles evolved in silkworm cocoons make them ideal natural prototypes and models for structural composites. Cocoons depend for their stiffness and strength on the connectivity of bonding between their constituent materials of silk fibers and sericin binder. Strain-activated mechanisms for loss of bonding connectivity in cocoons can be translated directly into a surprisingly simple yet universal set of physically realistic as well as predictive quantitative structure-property relations for a wide range of technologically important fiber and particulate composite materials.

  10. Silkworm cocoons inspire models for random fiber and particulate composites

    NASA Astrophysics Data System (ADS)

    Chen, Fujia; Porter, David; Vollrath, Fritz

    2010-10-01

    The bioengineering design principles evolved in silkworm cocoons make them ideal natural prototypes and models for structural composites. Cocoons depend for their stiffness and strength on the connectivity of bonding between their constituent materials of silk fibers and sericin binder. Strain-activated mechanisms for loss of bonding connectivity in cocoons can be translated directly into a surprisingly simple yet universal set of physically realistic as well as predictive quantitative structure-property relations for a wide range of technologically important fiber and particulate composite materials.

  11. ACL Return to Sport Guidelines and Criteria.

    PubMed

    Davies, George J; McCarty, Eric; Provencher, Matthew; Manske, Robert C

    2017-09-01

    Because of the epidemiological incidence of anterior cruciate ligament (ACL) injuries, the high reinjury rates that occur when returning to sports, the actual number of patients that return to the same premorbid level of competition, the high incidence of osteoarthritis at 5-10-year follow-ups, and the effects on the long-term health of the knee and the quality of life for the patient, individualizing the return to sports after ACL reconstruction (ACL-R) is critical. However, one of the challenging but unsolved dilemmas is what criteria and clinical decision making should be used to return an athlete back to sports following an ACL-R. This article describes an example of a functional testing algorithm (FTA) as one method for clinical decision making based on quantitative and qualitative testing and assessment utilized to make informed decisions to return an athlete to their sports safely and without compromised performance. The methods were a review of the best current evidence to support an FTA. In order to evaluate all the complicated domains of the clinical decision making for individualizing the return to sports after ACL-R, numerous assessments need to be performed, including the biopsychosocial concepts, impairment testing, strength and power testing, functional testing, and patient-reported outcomes (PROs). The optimum criteria to use for individualizing the return to sports after ACL-R remain elusive. However, since this decision needs to be made on a regular basis with the safety and performance factors of the patient involved, this FTA provides one method of making these decisions quantitatively and qualitatively. Admittedly, there is no predictive validity of this system, but it does provide practical guidelines to facilitate the clinical decision making process for return to sports. The clinical decision to return an athlete back into competition has significant implications ranging from the safety of the athlete, to performance factors, and to actual litigation issues. Using a multifactorial FTA, such as the one described, provides quantitative and qualitative criteria for making an informed decision in the best interests of the athlete.

  12. Difficult Decisions: A Qualitative Exploration of the Statistical Decision Making Process from the Perspectives of Psychology Students and Academics

    PubMed Central

    Allen, Peter J.; Dorozenko, Kate P.; Roberts, Lynne D.

    2016-01-01

    Quantitative research methods are essential to the development of professional competence in psychology. They are also an area of weakness for many students. In particular, students are known to struggle with the skill of selecting quantitative analytical strategies appropriate for common research questions, hypotheses and data types. To begin understanding this apparent deficit, we presented nine psychology undergraduates (who had all completed at least one quantitative methods course) with brief research vignettes, and asked them to explicate the process they would follow to identify an appropriate statistical technique for each. Thematic analysis revealed that all participants found this task challenging, and even those who had completed several research methods courses struggled to articulate how they would approach the vignettes on more than a very superficial and intuitive level. While some students recognized that there is a systematic decision making process that can be followed, none could describe it clearly or completely. We then presented the same vignettes to 10 psychology academics with particular expertise in conducting research and/or research methods instruction. Predictably, these “experts” were able to describe a far more systematic, comprehensive, flexible, and nuanced approach to statistical decision making, which begins early in the research process, and pays consideration to multiple contextual factors. They were sensitive to the challenges that students experience when making statistical decisions, which they attributed partially to how research methods and statistics are commonly taught. This sensitivity was reflected in their pedagogic practices. When asked to consider the format and features of an aid that could facilitate the statistical decision making process, both groups expressed a preference for an accessible, comprehensive and reputable resource that follows a basic decision tree logic. For the academics in particular, this aid should function as a teaching tool, which engages the user with each choice-point in the decision making process, rather than simply providing an “answer.” Based on these findings, we offer suggestions for tools and strategies that could be deployed in the research methods classroom to facilitate and strengthen students' statistical decision making abilities. PMID:26909064

  13. Difficult Decisions: A Qualitative Exploration of the Statistical Decision Making Process from the Perspectives of Psychology Students and Academics.

    PubMed

    Allen, Peter J; Dorozenko, Kate P; Roberts, Lynne D

    2016-01-01

    Quantitative research methods are essential to the development of professional competence in psychology. They are also an area of weakness for many students. In particular, students are known to struggle with the skill of selecting quantitative analytical strategies appropriate for common research questions, hypotheses and data types. To begin understanding this apparent deficit, we presented nine psychology undergraduates (who had all completed at least one quantitative methods course) with brief research vignettes, and asked them to explicate the process they would follow to identify an appropriate statistical technique for each. Thematic analysis revealed that all participants found this task challenging, and even those who had completed several research methods courses struggled to articulate how they would approach the vignettes on more than a very superficial and intuitive level. While some students recognized that there is a systematic decision making process that can be followed, none could describe it clearly or completely. We then presented the same vignettes to 10 psychology academics with particular expertise in conducting research and/or research methods instruction. Predictably, these "experts" were able to describe a far more systematic, comprehensive, flexible, and nuanced approach to statistical decision making, which begins early in the research process, and pays consideration to multiple contextual factors. They were sensitive to the challenges that students experience when making statistical decisions, which they attributed partially to how research methods and statistics are commonly taught. This sensitivity was reflected in their pedagogic practices. When asked to consider the format and features of an aid that could facilitate the statistical decision making process, both groups expressed a preference for an accessible, comprehensive and reputable resource that follows a basic decision tree logic. For the academics in particular, this aid should function as a teaching tool, which engages the user with each choice-point in the decision making process, rather than simply providing an "answer." Based on these findings, we offer suggestions for tools and strategies that could be deployed in the research methods classroom to facilitate and strengthen students' statistical decision making abilities.

  14. Prediction of acute mammalian toxicity using QSAR methods: a case study of sulfur mustard and its breakdown products.

    PubMed

    Ruiz, Patricia; Begluitti, Gino; Tincher, Terry; Wheeler, John; Mumtaz, Moiz

    2012-07-27

    Predicting toxicity quantitatively, using Quantitative Structure Activity Relationships (QSAR), has matured over recent years to the point that the predictions can be used to help identify missing comparison values in a substance's database. In this manuscript we investigate using the lethal dose that kills fifty percent of a test population (LD₅₀) for determining relative toxicity of a number of substances. In general, the smaller the LD₅₀ value, the more toxic the chemical, and the larger the LD₅₀ value, the lower the toxicity. When systemic toxicity and other specific toxicity data are unavailable for the chemical(s) of interest, during emergency responses, LD₅₀ values may be employed to determine the relative toxicity of a series of chemicals. In the present study, a group of chemical warfare agents and their breakdown products have been evaluated using four available rat oral QSAR LD₅₀ models. The QSAR analysis shows that the breakdown products of Sulfur Mustard (HD) are predicted to be less toxic than the parent compound, as well as less toxic than other breakdown products with known toxicities. The QSAR-estimated LD₅₀ values for the breakdown products ranged from 299 mg/kg to 5,764 mg/kg. This evaluation allows for the ranking and toxicity estimation of compounds for which little toxicity information existed, thus leading to better risk decision making in the field.
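
    A generic QSAR workflow of this kind can be sketched as follows; the descriptor set, regressor, SMILES strings, and LD₅₀ values below are placeholders for illustration and are not the four rat oral LD₅₀ models used in the paper.

    ```python
    import numpy as np
    from rdkit import Chem
    from rdkit.Chem import Descriptors
    from sklearn.ensemble import RandomForestRegressor

    # Illustrative training set: a few SMILES with rough rat oral LD50 values (mg/kg).
    train_smiles = ["CCO", "c1ccccc1", "CC(=O)O", "CCN(CC)CC"]
    train_ld50 = [7060.0, 930.0, 3310.0, 460.0]

    def descriptors(smiles):
        """Compute a small set of RDKit descriptors for one molecule."""
        mol = Chem.MolFromSmiles(smiles)
        return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
                Descriptors.TPSA(mol), Descriptors.NumHDonors(mol)]

    X = np.array([descriptors(s) for s in train_smiles])
    y = np.log10(train_ld50)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    query = "OCCSCCO"   # thiodiglycol, a known HD breakdown product
    print(10 ** model.predict([descriptors(query)])[0], "mg/kg (toy estimate)")
    ```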

  15. A Monte-Carlo game theoretic approach for Multi-Criteria Decision Making under uncertainty

    NASA Astrophysics Data System (ADS)

    Madani, Kaveh; Lund, Jay R.

    2011-05-01

    Game theory provides a useful framework for studying Multi-Criteria Decision Making problems. This paper suggests modeling Multi-Criteria Decision Making problems as strategic games and solving them using non-cooperative game theory concepts. The suggested method can be used to prescribe non-dominated solutions and also can be used as a method to predict the outcome of a decision making problem. Non-cooperative stability definitions for solving the games allow consideration of non-cooperative behaviors, which are often neglected by other methods that assume perfect cooperation among decision makers. To deal with the uncertainty in input variables, a Monte-Carlo Game Theory (MCGT) approach is suggested which maps the stochastic problem into many deterministic strategic games. The games are solved using non-cooperative stability definitions, and the results include possible effects of uncertainty in input variables on outcomes. The method can handle multi-criteria multi-decision-maker problems with uncertainty. The suggested method does not require criteria weighting, development of a compound decision objective, or accurate quantitative (cardinal) information, as it simplifies the decision analysis by solving problems based on qualitative (ordinal) information, reducing the computational burden substantially. The MCGT method is applied to analyze California's Sacramento-San Joaquin Delta problem. The suggested method provides insights, identifies non-dominated alternatives, and predicts likely decision outcomes.
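
    The core Monte-Carlo game-theory loop can be sketched as follows (a toy two-decision-maker, three-alternative setup with randomly sampled payoffs, not the Sacramento-San Joaquin Delta application): each sampled game is solved for pure-strategy Nash equilibria and the stability frequency of each outcome is tallied.

    ```python
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    n_alternatives = 3
    counts = np.zeros((n_alternatives, n_alternatives))

    def pure_nash(payoff_a, payoff_b):
        """Return all pure-strategy Nash equilibria of a bimatrix game."""
        eq = []
        for i, j in itertools.product(range(payoff_a.shape[0]), range(payoff_a.shape[1])):
            best_for_a = payoff_a[i, j] >= payoff_a[:, j].max()   # row player cannot improve
            best_for_b = payoff_b[i, j] >= payoff_b[i, :].max()   # column player cannot improve
            if best_for_a and best_for_b:
                eq.append((i, j))
        return eq

    for _ in range(5000):                       # Monte Carlo over payoff uncertainty
        A = rng.normal(size=(n_alternatives, n_alternatives))   # decision maker 1 payoffs
        B = rng.normal(size=(n_alternatives, n_alternatives))   # decision maker 2 payoffs
        for i, j in pure_nash(A, B):
            counts[i, j] += 1

    print(counts / counts.sum())                # relative frequency of stable outcomes
    ```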

  16. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    PubMed

    Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessment of difficulty in fingerprint comparisons.
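
    As a rough illustration of the regression step described above, the sketch below fits an ordinary least-squares model of rated difficulty on a few quantitative image measures. The metrics, coefficients, and data are hypothetical placeholders, not values from the study.

```python
# Sketch: multiple regression of comparison difficulty on image quality and
# information-quantity metrics, using simulated print-pair data.
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 200

# Hypothetical image metrics for each print pair.
contrast = rng.uniform(0.1, 1.0, n_pairs)
intensity = rng.uniform(0.2, 0.9, n_pairs)
area_cm2 = rng.uniform(0.5, 4.0, n_pairs)

# Hypothetical difficulty ratings: harder when contrast and area are low.
difficulty = 5.0 - 2.0 * contrast - 0.8 * area_cm2 + rng.normal(0, 0.3, n_pairs)

# Design matrix with an intercept column, solved by ordinary least squares.
X = np.column_stack([np.ones(n_pairs), contrast, intensity, area_cm2])
beta, *_ = np.linalg.lstsq(X, difficulty, rcond=None)

predicted = X @ beta
r_squared = 1 - np.sum((difficulty - predicted) ** 2) / np.sum(
    (difficulty - difficulty.mean()) ** 2)
print("coefficients:", np.round(beta, 3), " R^2:", round(r_squared, 3))
```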

  17. Modeling the reactivities of hydroxyl radical and ozone towards atmospheric organic chemicals using quantitative structure-reactivity relationship approaches.

    PubMed

    Gupta, Shikha; Basant, Nikita; Mohan, Dinesh; Singh, Kunwar P

    2016-07-01

    The persistence and the removal of organic chemicals from the atmosphere are largely determined by their reactions with the OH radical and O3. Experimental determinations of the kinetic rate constants of OH and O3 with a large number of chemicals are tedious and resource intensive and development of computational approaches has widely been advocated. Recently, ensemble machine learning (EML) methods have emerged as unbiased tools to establish relationship between independent and dependent variables having a nonlinear dependence. In this study, EML-based, temperature-dependent quantitative structure-reactivity relationship (QSRR) models have been developed for predicting the kinetic rate constants for OH (kOH) and O3 (kO3) reactions with diverse chemicals. Structural diversity of chemicals was evaluated using a Tanimoto similarity index. The generalization and prediction abilities of the constructed models were established through rigorous internal and external validation performed employing statistical checks. In test data, the EML QSRR models yielded correlation (R²) of ≥0.91 between the measured and the predicted reactivities. The applicability domains of the constructed models were determined using methods based on descriptors range, Euclidean distance, leverage, and standardization approaches. The prediction accuracies for the higher reactivity compounds were relatively better than those of the low reactivity compounds. Proposed EML QSRR models performed well and outperformed the previous reports. The proposed QSRR models can make predictions of rate constants at different temperatures. The proposed models can be useful tools in predicting the reactivities of chemicals towards OH radical and O3 in the atmosphere.
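
    A minimal sketch of an ensemble-learning QSRR regressor is given below, assuming synthetic molecular descriptors, an inverse-temperature term for the temperature dependence, and gradient boosting as the ensemble method; the published models, descriptors, and data are not reproduced here.

```python
# Sketch: ensemble-learning QSRR regression of log10 OH rate constants on
# molecular descriptors plus 1/T, with an external test split.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 500
descriptors = rng.normal(size=(n, 5))          # placeholder structural descriptors
inv_T = 1.0 / rng.uniform(250.0, 320.0, n)     # 1/T term for temperature dependence
X = np.column_stack([descriptors, inv_T])

# Hypothetical Arrhenius-like response used only to exercise the pipeline.
log_kOH = -11.0 + descriptors[:, 0] - 800.0 * inv_T + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, log_kOH, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3, random_state=0)
model.fit(X_tr, y_tr)
print("external R^2:", round(r2_score(y_te, model.predict(X_te)), 3))
```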

  18. Assessment of quantitative structure-activity relationship of toxicity prediction models for Korean chemical substance control legislation

    PubMed Central

    Kim, Kwang-Yon; Shin, Seong Eun; No, Kyoung Tai

    2015-01-01

    Objectives For successful adoption of legislation controlling the registration and assessment of chemical substances, it is important to obtain sufficient toxicological experimental evidence and other related information. It is also essential to obtain a sufficient number of predicted risk and toxicity results. In particular, methods for predicting the toxicities of chemical substances during acquisition of the required data ultimately become an economical way of dealing with new substances in the future. Although the need for such methods is gradually increasing, the required information about their reliability and applicability range has not been systematically provided. Methods There are various representative environmental and human toxicity models based on quantitative structure-activity relationships (QSAR). Here, we secured 10 representative QSAR-based prediction models, and their associated information, that can make predictions about substances expected to be regulated. We used the models to predict and confirm the usability of the information expected to be collected and submitted according to the legislation. After collecting and evaluating each predictive model and the relevant data, we prepared methods for quantifying the scientific validity and reliability, which are essential conditions for using predictive models. Results We calculated predicted values for the models. Furthermore, we deduced and compared the adequacies of the models using the Alternative Non-Testing method assessed for Registration, Evaluation, Authorization, and Restriction of Chemical Substances scoring system, and deduced the applicability domains for each model. Additionally, we calculated and compared the inclusion rates of substances expected to be regulated, to confirm applicability. Conclusions We evaluated and compared the data, adequacy, and applicability of our selected QSAR-based toxicity prediction models, and included them in a database. Based on these data, we aimed to construct a system that can be used with predicted toxicity results. Furthermore, by presenting the suitability of individual predicted results, we aimed to provide a foundation that could be used in actual assessments and regulations. PMID:26206368

  19. Offspring Size and Reproductive Allocation in Harvester Ants.

    PubMed

    Wiernasz, Diane C; Cole, Blaine J

    2018-01-01

    A fundamental decision that an organism must make is how to allocate resources to offspring, with respect to both size and number. The two major theoretical approaches to this problem, optimal offspring size and optimistic brood size models, make different predictions that may be reconciled by including how offspring fitness is related to size. We extended the reasoning of Trivers and Willard (1973) to derive a general model of how parents should allocate additional resources with respect to the number of males and females produced, and among individuals of each sex, based on the fitness payoffs of each. We then predicted how harvester ant colonies should invest additional resources and tested three hypotheses derived from our model, using data from 3 years of food supplementation bracketed by 6 years without food addition. All major results were predicted by our model: food supplementation increased the number of reproductives produced. Male, but not female, size increased with food addition; the greatest increases in male size occurred in colonies that made small females. We discuss how use of a fitness landscape improves quantitative predictions about allocation decisions. When parents can invest differentially in offspring of different types, the best strategy will depend on parental state as well as the effect of investment on offspring fitness.

  20. In silico environmental chemical science: properties and processes from statistical and computational modelling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tratnyek, Paul G.; Bylaska, Eric J.; Weber, Eric J.

    2017-01-01

    Quantitative structure–activity relationships (QSARs) have long been used in the environmental sciences. More recently, molecular modeling and chemoinformatic methods have become widespread. These methods have the potential to expand and accelerate advances in environmental chemistry because they complement observational and experimental data with “in silico” results and analysis. The opportunities and challenges that arise at the intersection between statistical and theoretical in silico methods are most apparent in the context of properties that determine the environmental fate and effects of chemical contaminants (degradation rate constants, partition coefficients, toxicities, etc.). The main example of this is the calibration of QSARs using descriptor variable data calculated from molecular modeling, which can make QSARs more useful for predicting property data that are unavailable, but also can make them more powerful tools for diagnosis of fate determining pathways and mechanisms. Emerging opportunities for “in silico environmental chemical science” are to move beyond the calculation of specific chemical properties using statistical models and toward more fully in silico models, prediction of transformation pathways and products, incorporation of environmental factors into model predictions, integration of databases and predictive models into more comprehensive and efficient tools for exposure assessment, and extending the applicability of all the above from chemicals to biologicals and materials.

  1. A statistical framework for applying RNA profiling to chemical hazard detection.

    PubMed

    Kostich, Mitchell S

    2017-12-01

    Use of 'omics technologies in environmental science is expanding. However, application is mostly restricted to characterizing molecular steps leading from toxicant interaction with molecular receptors to apical endpoints in laboratory species. Use in environmental decision-making is limited, due to difficulty in elucidating mechanisms in sufficient detail to make quantitative outcome predictions in any single species or in extending predictions to aquatic communities. Here we introduce a mechanism-agnostic statistical approach, supplementing mechanistic investigation by allowing probabilistic outcome prediction even when understanding of molecular pathways is limited, and facilitating extrapolation from results in laboratory test species to predictions about aquatic communities. We use concepts familiar to environmental managers, supplemented with techniques employed for clinical interpretation of 'omics-based biomedical tests. We describe the framework in step-wise fashion, beginning with single test replicates of a single RNA variant, then extending to multi-gene RNA profiling, collections of test replicates, and integration of complementary data. In order to simplify the presentation, we focus on using RNA profiling for distinguishing presence versus absence of chemical hazards, but the principles discussed can be extended to other types of 'omics measurements, multi-class problems, and regression. We include a supplemental file demonstrating many of the concepts using the open source R statistical package. Published by Elsevier Ltd.
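
    The sketch below illustrates the kind of probabilistic, mechanism-agnostic classifier the framework above describes, using simulated multi-gene RNA profiles and logistic regression; it is not the supplemental R code, and the data, gene effects, and decision threshold are assumptions.

```python
# Sketch: turn a multi-gene RNA profile into an estimated probability that a
# chemical hazard is present, using simulated expression data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_samples, n_genes = 300, 40

exposed = rng.integers(0, 2, n_samples)                            # 1 = hazard present
effect = rng.normal(0, 1, n_genes) * (rng.random(n_genes) < 0.2)   # a few responsive genes
expression = rng.normal(0, 1, (n_samples, n_genes)) + np.outer(exposed, effect)

X_tr, X_te, y_tr, y_te = train_test_split(expression, exposed, test_size=0.3,
                                          random_state=0, stratify=exposed)
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X_tr, y_tr)

# Probabilistic output supports decision thresholds chosen by the manager.
p_hazard = clf.predict_proba(X_te)[:, 1]
print("mean P(hazard | exposed):", round(p_hazard[y_te == 1].mean(), 2))
print("mean P(hazard | control):", round(p_hazard[y_te == 0].mean(), 2))
```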

  2. Genetic models of homosexuality: generating testable predictions

    PubMed Central

    Gavrilets, Sergey; Rice, William R

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism. PMID:17015344

  3. Adversity magnifies the importance of social information in decision-making.

    PubMed

    Pérez-Escudero, Alfonso; de Polavieja, Gonzalo G

    2017-11-01

    Decision-making theories explain animal behaviour, including human behaviour, as a response to estimations about the environment. In the case of collective behaviour, they have given quantitative predictions of how animals follow the majority option. However, they have so far failed to explain that in some species and contexts social cohesion increases when conditions become more adverse (i.e. individuals choose the majority option with higher probability when the estimated quality of all available options decreases). We have found that this failure is due to modelling simplifications that aided analysis, like low levels of stochasticity or the assumption that only one choice is the correct one. We provide a more general but simple geometric framework to describe optimal or suboptimal decisions in collectives that gives insight into three different mechanisms behind this effect. The three mechanisms have in common that the private information acts as a gain factor to social information: a decrease in the privately estimated quality of all available options increases the impact of social information, even when social information itself remains unchanged. This increase in the importance of social information makes it more likely that agents will follow the majority option. We show that these results quantitatively explain collective behaviour in fish and experiments of social influence in humans. © 2017 The Authors.

  4. Molecule kernels: a descriptor- and alignment-free quantitative structure-activity relationship approach.

    PubMed

    Mohr, Johannes A; Jain, Brijnesh J; Obermayer, Klaus

    2008-09-01

    Quantitative structure activity relationship (QSAR) analysis is traditionally based on extracting a set of molecular descriptors and using them to build a predictive model. In this work, we propose a QSAR approach based directly on the similarity between the 3D structures of a set of molecules, measured by a so-called molecule kernel, which is independent of the spatial prealignment of the compounds. Predictors can be built using the molecule kernel in conjunction with the potential support vector machine (P-SVM), a recently proposed machine learning method for dyadic data. The resulting models make direct use of the structural similarities between the compounds in the test set and a subset of the training set and do not require explicit descriptor construction. We evaluated the predictive performance of the proposed method on one classification and four regression QSAR datasets and compared its results to the results reported in the literature for several state-of-the-art descriptor-based and 3D QSAR approaches. In this comparison, the proposed molecule kernel method performed better than the other QSAR methods.
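
    The sketch below conveys the precomputed-kernel idea with standard tools: an epsilon-SVR with kernel="precomputed" stands in for the potential SVM, and a toy Gaussian similarity on atom coordinates stands in for the authors' alignment-free molecule kernel. Both substitutions are assumptions for illustration only.

```python
# Sketch: kernel-based QSAR regression with a precomputed similarity matrix.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)

def toy_molecule_kernel(mol_a, mol_b, gamma=0.5):
    """Hypothetical similarity between two small point clouds (atom coordinates)."""
    d = np.linalg.norm(mol_a.mean(axis=0) - mol_b.mean(axis=0))
    return np.exp(-gamma * d ** 2)

# Toy "molecules": random 3D point clouds with varying numbers of atoms.
molecules = [rng.normal(size=(rng.integers(5, 12), 3)) for _ in range(60)]
activity = np.array([m.mean() + rng.normal(0, 0.05) for m in molecules])  # toy target

train, test = molecules[:45], molecules[45:]
K_train = np.array([[toy_molecule_kernel(a, b) for b in train] for a in train])
K_test = np.array([[toy_molecule_kernel(a, b) for b in train] for a in test])

model = SVR(kernel="precomputed", C=10.0, epsilon=0.01).fit(K_train, activity[:45])
print("predicted activities:", np.round(model.predict(K_test)[:5], 3))
```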

  5. Anomalous chiral transport in heavy ion collisions from Anomalous-Viscous Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Shi, Shuzhe; Jiang, Yin; Lilleskov, Elias; Liao, Jinfeng

    2018-07-01

    Chiral anomaly is a fundamental aspect of quantum theories with chiral fermions. How such a microscopic anomaly manifests itself in a macroscopic many-body system with chiral fermions is a highly nontrivial question that has recently attracted significant interest. As it turns out, unusual transport currents can be induced by chiral anomaly under suitable conditions in such systems, with the notable example of the Chiral Magnetic Effect (CME) where a vector current (e.g. electric current) is generated along an external magnetic field. Many efforts have been made to search for the CME in heavy ion collisions, by measuring the charge separation effect induced by the CME transport. A crucial challenge in this effort is the quantitative prediction of the CME signal. In this paper, we develop the Anomalous-Viscous Fluid Dynamics (AVFD) framework, which implements the anomalous fluid dynamics to describe the evolution of fermion currents in the QGP, on top of the neutral bulk background described by the VISH2+1 hydrodynamic simulations for heavy ion collisions. With this new tool, we quantitatively and systematically investigate the dependence of the CME signal on a series of theoretical inputs and associated uncertainties. With realistic estimates of initial conditions and magnetic field lifetime, the predicted CME signal is quantitatively consistent with measured charge separation data in 200 GeV Au-Au collisions. Based on the analysis of Au-Au collisions, we further make predictions for the CME observable to be measured in the planned isobaric (Ru-Ru vs. Zr-Zr) collision experiment, which could provide a decisive test of the CME in heavy ion collisions.

  6. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    PubMed Central

    Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimization technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and with backpropagation combined with a K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help reduce the risk of making a bad decision in the decision-making process. PMID:26977450

  7. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network.

    PubMed

    Falat, Lukas; Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimization technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and with backpropagation combined with a K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help reduce the risk of making a bad decision in the decision-making process.
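
    A minimal sketch of the hybrid idea, under simplifying assumptions, is given below: a synthetic exchange-rate-like series, RBF features built from K-means centres, a ridge read-out in place of the trained RBF output layer, and a moving average of past residuals as the error-correction term; the genetic-algorithm parameter optimization is omitted.

```python
# Sketch: RBF features from K-means centres, a ridge read-out, and a moving
# average of past residuals as an error-correction term. Synthetic data only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
rate = np.cumsum(rng.normal(0, 0.002, 600)) + 1.3   # synthetic exchange-rate-like series

def lagged(series, n_lags=5):
    """Rows of n_lags past values; target is the next value."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

X, y = lagged(rate)
centres = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X).cluster_centers_
width = np.median(np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2))

def rbf_features(X):
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    return np.exp(-(d / width) ** 2)

head = Ridge(alpha=1e-3).fit(rbf_features(X[:-50]), y[:-50])   # hold out last 50 steps
pred = head.predict(rbf_features(X))

# Error correction: add the mean of the previous 5 residuals to each forecast.
residual = y - pred
correction = np.zeros_like(pred)
correction[5:] = np.convolve(residual, np.ones(5) / 5, mode="valid")[:-1]
print("last 3 corrected one-step forecasts:", np.round((pred + correction)[-3:], 4))
```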

  8. A quantitative evaluation of ethylene production in the recombinant cyanobacterium Synechocystis sp. PCC 6803 harboring the ethylene-forming enzyme by membrane inlet mass spectrometry.

    PubMed

    Zavřel, Tomáš; Knoop, Henning; Steuer, Ralf; Jones, Patrik R; Červený, Jan; Trtílek, Martin

    2016-02-01

    Predictions of the world's future energy consumption and of global climate change make it desirable to identify new technologies to replace or augment fossil fuels with environmentally sustainable alternatives. One appealing sustainable energy concept is harvesting solar energy via photosynthesis coupled to conversion of CO2 into chemical feedstock and fuel. In this work, the production of ethylene, the most widely used petrochemical and one produced exclusively from fossil fuels, is studied in the model cyanobacterium Synechocystis sp. PCC 6803. A novel instrumentation setup for quantitative monitoring of ethylene production, combining a flat-panel photobioreactor with a membrane-inlet mass spectrometer, is introduced. Carbon partitioning is estimated using a quantitative model of cyanobacterial metabolism. The results show that ethylene is produced under a wide range of light intensities, with an optimum at modest irradiances. The results allow production conditions to be optimized in a highly controlled setup. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Reclamation of mined lands in the western coal region

    USGS Publications Warehouse

    Narten, Perry F.; Litner, S.F.; Allingham, J.W.; Foster, Lee; Larsen, D.M.; McWreath, H.C.

    1983-01-01

    In 1978, a group of scientists from several Federal agencies examined reclamation work at 22 coal mines in seven western States. The results of these examinations were not used to derive quantitative predictions of the outcome of reclamation work but rather to determine the general requirements for revegetation success. Locally, reclamation efforts are affected by climate, especially precipitation; the landform of the restored surface; the nature of the overburden material; the nature of the surface soil; and the natural ecological system. The goals of reclamation efforts are now broader than ever. Regulations call not only for reducing the steepness of the final surface and establishing a cover of mostly perennial native vegetation, but for restoring the land for specific land uses, achieving diversity both in types of plants and in number of species, and reintroduction of biological and ecological processes. If specific sites are monitored over a long enough period of time, quantitative predictions of success for individual mines may be possible, and such predictions can be included in environmental impact statements to help in the decision-making process. The results of this study indicate that current reclamation objectives can be met when the reclamation effort is designed on the basis of site-specific needs and when existing technology is used.

  10. Dual Processes in Decision Making and Developmental Neuroscience: A Fuzzy-Trace Model

    PubMed Central

    Reyna, Valerie F.; Brainerd, Charles J.

    2011-01-01

    From Piaget to the present, traditional and dual-process theories have predicted improvement in reasoning from childhood to adulthood, and improvement has been observed. However, developmental reversals—that reasoning biases emerge with development—have also been observed in a growing list of paradigms. We explain how fuzzy-trace theory predicts both improvement and developmental reversals in reasoning and decision making. Drawing on research on logical and quantitative reasoning, as well as on risky decision making in the laboratory and in life, we illustrate how the same small set of theoretical principles apply to typical neurodevelopment, encompassing childhood, adolescence, and adulthood, and to neurological conditions such as autism and Alzheimer's disease. For example, framing effects—that risk preferences shift when the same decisions are phrased in terms of gains versus losses—emerge in early adolescence as gist-based intuition develops. In autistic individuals, who rely less on gist-based intuition and more on verbatim-based analysis, framing biases are attenuated (i.e., they outperform typically developing control subjects). In adults, simple manipulations based on fuzzy-trace theory can make framing effects appear and disappear depending on whether gist-based intuition or verbatim-based analysis is induced. These theoretical principles are summarized and integrated in a new mathematical model that specifies how dual modes of reasoning combine to produce predictable variability in performance. In particular, we show how the most popular and extensively studied model of decision making—prospect theory—can be derived from fuzzy-trace theory by combining analytical (verbatim-based) and intuitive (gist-based) processes. PMID:22096268

  11. Using demography and movement behavior to predict range expansion of the southern sea otter.

    USGS Publications Warehouse

    Tinker, M.T.; Doak, D.F.; Estes, J.A.

    2008-01-01

    In addition to forecasting population growth, basic demographic data combined with movement data provide a means for predicting rates of range expansion. Quantitative models of range expansion have rarely been applied to large vertebrates, although such tools could be useful for restoration and management of many threatened but recovering populations. Using the southern sea otter (Enhydra lutris nereis) as a case study, we utilized integro-difference equations in combination with a stage-structured projection matrix that incorporated spatial variation in dispersal and demography to make forecasts of population recovery and range recolonization. In addition to these basic predictions, we emphasize how to make these modeling predictions useful in a management context through the inclusion of parameter uncertainty and sensitivity analysis. Our models resulted in hind-cast (1989–2003) predictions of net population growth and range expansion that closely matched observed patterns. We next made projections of future range expansion and population growth, incorporating uncertainty in all model parameters, and explored the sensitivity of model predictions to variation in spatially explicit survival and dispersal rates. The predicted rate of southward range expansion (median = 5.2 km/yr) was sensitive to both dispersal and survival rates; elasticity analysis indicated that changes in adult survival would have the greatest potential effect on the rate of range expansion, while perturbation analysis showed that variation in subadult dispersal contributed most to variance in model predictions. Variation in survival and dispersal of females at the south end of the range contributed most of the variance in predicted southward range expansion. Our approach provides guidance for the acquisition of further data and a means of forecasting the consequence of specific management actions. Similar methods could aid in the management of other recovering populations.
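
    As a rough sketch of the modelling approach (a stage-structured projection matrix combined with a dispersal kernel), the code below projects a hypothetical three-stage population along a one-dimensional coastline. The matrix entries, kernel width, and initial condition are invented for illustration and are not the sea otter parameterization.

```python
# Sketch: a three-stage projection matrix applied in each coastal cell, then a
# Gaussian dispersal kernel applied to the dispersing stage, in the spirit of
# an integrodifference model of range expansion. All numbers are invented.
import numpy as np

n_cells, n_years = 60, 15
A = np.array([[0.00, 0.10, 0.45],     # fecundities (offspring by stage)
              [0.55, 0.00, 0.00],     # pup -> subadult survival
              [0.00, 0.70, 0.92]])    # subadult -> adult and adult survival

x = np.arange(-15, 16)                 # dispersal distances in cells
kernel = np.exp(-x**2 / (2 * 4.0**2))  # Gaussian kernel, sd = 4 cells
kernel /= kernel.sum()

pop = np.zeros((3, n_cells))
pop[:, n_cells // 2] = [5.0, 5.0, 20.0]    # founder colony mid-range

for _ in range(n_years):
    pop = A @ pop                                      # local demography
    pop[1] = np.convolve(pop[1], kernel, mode="same")  # subadults disperse

occupied = np.flatnonzero(pop.sum(axis=0) > 1.0)
print("occupied cells after projection:", int(occupied.min()), "to", int(occupied.max()))
```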

  12. The salt marsh vegetation spread dynamics simulation and prediction based on conditions optimized CA

    NASA Astrophysics Data System (ADS)

    Guan, Yujuan; Zhang, Liquan

    2006-10-01

    The biodiversity conservation and management of salt marsh vegetation rely on processing its spatial information. To date, most attention has focused on classification surveys and qualitative descriptions of dynamics based on interpreted remote sensing images, rather than on quantitatively simulating and predicting those dynamics, which is of greater importance for managing and planning salt marsh vegetation. In this paper, our aim is to build a large-scale dynamic model and to provide a virtual laboratory in which researchers can run it according to their requirements. First, the characteristics of cellular automata were analyzed, leading to the conclusion that a CA model must be extended geographically under varying space-time conditions for its results to match the facts accurately. Based on the conventional cellular automata model, the authors introduced several new conditions to optimize it for simulating the vegetation objectively, such as elevation, growth speed, invading ability, variation, and inheritance. In this way the CA cells and remote sensing image pixels, the cell neighbors and pixel neighbors, and the cell rules and the nature of the plants were unified, respectively. JiuDuanSha was taken as the test site; it mainly holds Phragmites australis (P. australis), Scirpus mariqueter (S. mariqueter), and Spartina alterniflora (S. alterniflora) communities. The paper explores the process of simulating and predicting changes in these salt marsh vegetation communities with the conditions optimized CA (COCA) model, and examines the links among data, statistical models, and ecological predictions. This study exploits the potential of applying the conditions optimized CA model technique to solve this problem.
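
    The sketch below shows one way a conditions-optimized cellular automaton of this kind could be set up, with invented elevation ranges, invasion probabilities, and two plant communities; it illustrates the rule structure, not the model used in the study.

```python
# Sketch of a conditions-optimized CA: each cell is bare mudflat or one of two
# communities, and colonization depends on elevation, neighbour occupancy, and
# a species-specific invasion ability. All parameter values are invented.
import numpy as np

rng = np.random.default_rng(0)
BARE, SCIRPUS, SPARTINA = 0, 1, 2
size = 80
elevation = np.clip(rng.normal(2.0, 0.5, (size, size)), 0.5, 3.5)   # metres, hypothetical
state = np.full((size, size), BARE)
state[40:45, 40:45] = SCIRPUS
state[10:15, 10:15] = SPARTINA

invade_prob = {SCIRPUS: 0.15, SPARTINA: 0.30}       # hypothetical invasion abilities
elev_range = {SCIRPUS: (1.8, 2.8), SPARTINA: (1.5, 3.2)}

def step(state):
    new = state.copy()
    for i in range(size):
        for j in range(size):
            if state[i, j] != BARE:
                continue
            nb = state[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]   # Moore neighbourhood
            for sp in (SPARTINA, SCIRPUS):                          # stronger invader first
                lo, hi = elev_range[sp]
                if (nb == sp).any() and lo <= elevation[i, j] <= hi \
                        and rng.random() < invade_prob[sp]:
                    new[i, j] = sp
                    break
    return new

for year in range(10):
    state = step(state)
print("cover fractions:", {name: round(float((state == code).mean()), 3)
                           for name, code in [("scirpus", SCIRPUS), ("spartina", SPARTINA)]})
```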

  13. Physical mechanism of mind changes and tradeoffs among speed, accuracy, and energy cost in brain decision making: Landscape, flux, and path perspectives

    NASA Astrophysics Data System (ADS)

    Han, Yan; Kun, Zhang; Jin, Wang

    2016-07-01

    Cognitive behaviors are determined by underlying neural networks. Many brain functions, such as learning and memory, have been successfully described by attractor dynamics. For decision making in the brain, a quantitative description of global attractor landscapes has not yet been completely given. Here, we developed a theoretical framework to quantify the landscape associated with the steady state probability distributions and associated steady state curl flux, measuring the degree of non-equilibrium through the degree of detailed balance breaking for decision making. We quantified the decision-making processes with optimal paths from the undecided attractor states to the decided attractor states, which are identified as basins of attractions, on the landscape. Both landscape and flux determine the kinetic paths and speed. The kinetics and global stability of decision making are explored by quantifying the landscape topography through the barrier heights and the mean first passage time. Our theoretical predictions are in agreement with experimental observations: more errors occur under time pressure. We quantitatively explored two mechanisms of the speed-accuracy tradeoff with speed emphasis and further uncovered the tradeoffs among speed, accuracy, and energy cost. Our results imply that there is an optimal balance among speed, accuracy, and the energy cost in decision making. We uncovered the possible mechanisms of changes of mind and how mind changes improve performance in decision processes. Our landscape approach can help facilitate an understanding of the underlying physical mechanisms of cognitive processes and identify the key factors in the corresponding neural networks. Project supported by the National Natural Science Foundation of China (Grant Nos. 21190040, 91430217, and 11305176).

  14. Multialternative drift-diffusion model predicts the relationship between visual fixations and choice in value-based decisions.

    PubMed

    Krajbich, Ian; Rangel, Antonio

    2011-08-16

    How do we make decisions when confronted with several alternatives (e.g., on a supermarket shelf)? Previous work has shown that accumulator models, such as the drift-diffusion model, can provide accurate descriptions of the psychometric data for binary value-based choices, and that the choice process is guided by visual attention. However, the computational processes used to make choices in more complicated situations involving three or more options are unknown. We propose a model of trinary value-based choice that generalizes what is known about binary choice, and test it using an eye-tracking experiment. We find that the model provides a quantitatively accurate description of the relationship between choice, reaction time, and visual fixation data using the same parameters that were estimated in previous work on binary choice. Our findings suggest that the brain uses similar computational processes to make binary and trinary choices.
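
    A minimal sketch of an attention-weighted accumulator for three options is given below. The drift rule, fixation schedule, and parameter values are illustrative simplifications, not the fitted model from the paper.

```python
# Sketch of an attention-weighted accumulator for three options: the fixated
# item's value enters at full weight, unattended items are discounted, and the
# first accumulator to reach the threshold determines the choice.
import numpy as np

def simulate_trial(values, d=0.01, theta=0.3, sigma=0.05, threshold=1.0, seed=None):
    rng = np.random.default_rng(seed)
    E = np.zeros(3)                      # evidence for the three items
    fixated = int(rng.integers(3))
    t = 0
    while True:
        t += 1
        weights = np.full(3, theta)
        weights[fixated] = 1.0           # attended item weighted fully
        weighted = weights * values
        E += d * (weighted - weighted.mean()) + rng.normal(0, sigma, 3)
        if E.max() >= threshold:
            return int(E.argmax()), t
        if t % 30 == 0:                  # switch fixation every ~30 time steps
            fixated = int(rng.integers(3))

values = np.array([3.0, 2.0, 1.0])       # hypothetical liking ratings
results = [simulate_trial(values, seed=s) for s in range(500)]
choices = np.array([c for c, _ in results])
print("choice frequencies:", np.bincount(choices, minlength=3) / len(choices))
print("mean decision time (steps):", round(float(np.mean([t for _, t in results])), 1))
```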

  15. Commutability of Cytomegalovirus WHO International Standard in Different Matrices

    PubMed Central

    Jones, Sara; Webb, Erika M.; Barry, Catherine P.; Choi, Won S.; Abravaya, Klara B.; Schneider, George J.

    2016-01-01

    Commutability of quantitative standards allows patient results to be compared across molecular diagnostic methods and laboratories. This is critical to establishing quantitative thresholds for use in clinical decision-making. A matrix effect associated with the 1st cytomegalovirus (CMV) WHO international standard (IS) was identified using the Abbott RealTime CMV assay. A commutability study was performed to compare the CMV WHO IS and patient specimens diluted in plasma and whole blood. Patient specimens showed similar CMV DNA quantitation values regardless of the diluent or extraction procedure used. The CMV WHO IS, on the other hand, exhibited a matrix effect. The CMV concentration reported for the WHO IS diluted in plasma was within the 95% prediction interval established with patient samples. In contrast, the reported DNA concentration of the CMV WHO IS diluted in whole blood was reduced approximately 0.4 log copies/ml, and values fell outside the 95% prediction interval. Calibrating the assay by using the CMV WHO IS diluted in whole blood would introduce a bias for CMV whole-blood quantitation; samples would be reported as having higher measured concentrations, by approximately 0.4 log IU/ml. Based on the commutability study with patient samples, the RealTime CMV assay was standardized based on the CMV WHO IS diluted in plasma. A revision of the instructions for use of the CMV WHO IS should be considered to alert users of the potential impact from the diluent matrix. The identification of a matrix effect with the CMV WHO IS underscores the importance of assessing commutability of the IS in order to achieve consistent results across methods. PMID:27030491

  16. Mechanistic and quantitative insight into cell surface targeted molecular imaging agent design.

    PubMed

    Zhang, Liang; Bhatnagar, Sumit; Deschenes, Emily; Thurber, Greg M

    2016-05-05

    Molecular imaging agent design involves simultaneously optimizing multiple probe properties. While several desired characteristics are straightforward, including high affinity and low non-specific background signal, in practice there are quantitative trade-offs between these properties. These include plasma clearance, where fast clearance lowers background signal but can reduce target uptake, and binding, where high affinity compounds sometimes suffer from lower stability or increased non-specific interactions. Further complicating probe development, many of the optimal parameters vary depending on both target tissue and imaging agent properties, making empirical approaches or previous experience difficult to translate. Here, we focus on low molecular weight compounds targeting extracellular receptors, which have some of the highest contrast values for imaging agents. We use a mechanistic approach to provide a quantitative framework for weighing trade-offs between molecules. Our results show that specific target uptake is well-described by quantitative simulations for a variety of targeting agents, whereas non-specific background signal is more difficult to predict. Two in vitro experimental methods for estimating background signal in vivo are compared - non-specific cellular uptake and plasma protein binding. Together, these data provide a quantitative method to guide probe design and focus animal work for more cost-effective and time-efficient development of molecular imaging agents.

  17. A Quantitative ADME-base Tool for Exploring Human ...

    EPA Pesticide Factsheets

    Exposure to a wide range of chemicals through our daily habits and routines is ubiquitous and largely unavoidable within modern society. The potential for human exposure, however, has not been quantified for the vast majority of chemicals with wide commercial use. Creative advances in exposure science are needed to support efficient and effective evaluation and management of chemical risks, particularly for chemicals in consumer products. The U.S. Environmental Protection Agency Office of Research and Development is developing, or collaborating in the development of, scientifically-defensible methods for making quantitative or semi-quantitative exposure predictions. The Exposure Prioritization (Ex Priori) model is a simplified, quantitative visual dashboard that provides a rank-ordered internalized dose metric to simultaneously explore exposures across chemical space (not chemical by chemical). Diverse data streams are integrated within the interface such that different exposure scenarios for “individual,” “population,” or “professional” time-use profiles can be interchanged to tailor exposure and quantitatively explore multi-chemical signatures of exposure, internalized dose (uptake), body burden, and elimination. Ex Priori has been designed as an adaptable systems framework that synthesizes knowledge from various domains and is amenable to new knowledge/information. As such, it algorithmically captures the totality of exposure across pathways. It

  18. Quantitative Imaging in Cancer Clinical Trials

    PubMed Central

    Yankeelov, Thomas E.; Mankoff, David A.; Schwartz, Lawrence H.; Lieberman, Frank S.; Buatti, John M.; Mountz, James M.; Erickson, Bradley J.; Fennessy, Fiona M.M.; Huang, Wei; Kalpathy-Cramer, Jayashree; Wahl, Richard L.; Linden, Hannah M.; Kinahan, Paul; Zhao, Binsheng; Hylton, Nola M.; Gillies, Robert J.; Clarke, Laurence; Nordstrom, Robert; Rubin, Daniel L.

    2015-01-01

    As anti-cancer therapies designed to target specific molecular pathways have been developed, it has become critical to develop methods to assess the response induced by such agents. While traditional, anatomic CT and MRI exams are useful in many settings, there is increasing evidence that these methods cannot answer the fundamental biological and physiological questions essential for assessment and, eventually, prediction of treatment response in the clinical trial setting, especially in the critical period soon after treatment is initiated. To optimally apply advances in quantitative imaging methods to trials of targeted cancer therapy, new infrastructure improvements are needed that incorporate these emerging techniques into the settings where they are most likely to have impact. In this review, we first elucidate the needs for therapeutic response assessment in the era of molecularly targeted therapy and describe how quantitative imaging can most effectively provide scientifically and clinically relevant data. We then describe the tools and methods required to apply quantitative imaging and provide concrete examples of work making these advances practically available for routine application in clinical trials. We conclude by proposing strategies to surmount barriers to wider incorporation of these quantitative imaging methods into clinical trials and, eventually, clinical practice. Our goal is to encourage and guide the oncology community to deploy standardized quantitative imaging techniques in clinical trials to further personalize care for cancer patients, and to provide a more efficient path for the development of improved targeted therapies. PMID:26773162

  19. Predicting the size of individual and group differences on speeded cognitive tasks.

    PubMed

    Chen, Jing; Hale, Sandra; Myerson, Joel

    2007-06-01

    An a priori test of the difference engine model (Myerson, Hale, Zheng, Jenkins, & Widaman, 2003) was conducted using a large, diverse sample of individuals who performed three speeded verbal tasks and three speeded visuospatial tasks. Results demonstrated that, as predicted by the model, the group standard deviation (SD) on any task was proportional to the amount of processing required by that task. Both individual performances as well as those of fast and slow subgroups could be accurately predicted by the model using no free parameters, just an individual or subgroup's mean z-score and the values of theoretical constructs estimated from fits to the group SDs. Taken together, these results are consistent with post hoc analyses reported by Myerson et al. and provide even stronger supporting evidence. In particular, the ability to make quantitative predictions without using any free parameters provides the clearest demonstration to date of the power of an analytic approach on the basis of the difference engine.

  20. Evaluation of IKTS Transparent Polycrystalline Magnesium Aluminate Spinel (MgAl2O4) for Armor and Infrared Dome/Window Applications

    DTIC Science & Technology

    2013-03-01

    interacted with (15). 4.3.3 Experimental Procedure Two MgAl2O4 spinel samples with nominal 0.6- and 1.6-μm mean grain sizes were tested using advanced...unable to make specific quantitative predictions at this time. Due to the nature of the experimental process, this technique is suitable only for...Information From Spherical Indentation; ARL-TR-4229; U.S. Army Research Laboratory: Aberdeen Proving Ground, MD, 2007. 24. ASTM E112. Standard Test

  1. Influence of study goals on study design and execution.

    PubMed

    Kirklin, J W; Blackstone, E H; Naftel, D C; Turner, M E

    1997-12-01

    From the viewpoint of a clinician who makes recommendations to patients about choosing from the multiple possible management schemes, quantitative information derived from statistical analyses of observational studies is useful. Although random assignment of therapy is optimal, appropriately performed studies in which therapy has been nonrandomly "assigned" are considered acceptable, albeit occasionally with limitations in inferences. The analyses are considered most useful when they generate multivariable equations suitable for predicting time-related outcomes in individual patients. Graphic presentations improve communication with patients and facilitate truly informed consent.

  2. Testing neoclassical and turbulent effects on poloidal rotation in the core of DIII-D

    DOE PAGES

    Chrystal, Colin; Burrell, Keith H.; Grierson, Brian A.; ...

    2014-07-09

    Experimental tests of ion poloidal rotation theories have been performed on DIII-D using a novel impurity poloidal rotation diagnostic. These tests show significant disagreements with theoretical predictions in various conditions, including L-mode plasmas with internal transport barriers (ITB), H-mode plasmas, and QH-mode plasmas. The theories tested include standard neoclassical theory, turbulence driven Reynolds stress, and fast-ion friction on the thermal ions. Poloidal rotation is observed to spin up at the formation of an ITB and makes a significant contribution to the measurement of the E × B shear that forms the ITB. In ITB cases, neoclassical theory agrees quantitatively with the experimental measurements only in the steep gradient region. Significant quantitative disagreement with neoclassical predictions is seen in the cores of ITB, QH-, and H-mode plasmas, demonstrating that neoclassical theory is an incomplete description of poloidal rotation. The addition of turbulence driven Reynolds stress does not remedy this disagreement; linear stability calculations and Doppler backscattering measurements show that disagreement increases as turbulence levels decline. Furthermore, the effect of fast-ion friction, by itself, does not lead to improved agreement; in QH-mode plasmas, neoclassical predictions are closest to experimental results in plasmas with the largest fast ion friction. Finally, predictions from a new model that combines all three effects show somewhat better agreement in the H-mode case, but discrepancies well outside the experimental error bars remain.

  3. Measurement and prediction of the thermomechanical response of shape memory alloy hybrid composite beams

    NASA Astrophysics Data System (ADS)

    Davis, Brian; Turner, Travis L.; Seelecke, Stefan

    2005-05-01

    Previous work at NASA Langley Research Center (LaRC) involved fabrication and testing of composite beams with embedded, pre-strained shape memory alloy (SMA) ribbons within the beam structures. That study also provided comparison of experimental results with numerical predictions from a research code making use of a new thermoelastic model for shape memory alloy hybrid composite (SMAHC) structures. The previous work showed qualitative validation of the numerical model. However, deficiencies in the experimental-numerical correlation were noted and hypotheses for the discrepancies were given for further investigation. The goal of this work is to refine the experimental measurement and numerical modeling approaches in order to better understand the discrepancies, improve the correlation between prediction and measurement, and provide rigorous quantitative validation of the numerical analysis/design tool. The experimental investigation is refined by a more thorough test procedure and incorporation of higher fidelity measurements such as infrared thermography and projection moire interferometry. The numerical results are produced by a recently commercialized version of the constitutive model as implemented in ABAQUS and are refined by incorporation of additional measured parameters such as geometric imperfection. Thermal buckling, post-buckling, and random responses to thermal and inertial (base acceleration) loads are studied. The results demonstrate the effectiveness of SMAHC structures in controlling static and dynamic responses by adaptive stiffening. Excellent agreement is achieved between the predicted and measured results of the static and dynamic thermomechanical response, thereby providing quantitative validation of the numerical tool.

  4. Measurement and Prediction of the Thermomechanical Response of Shape Memory Alloy Hybrid Composite Beams

    NASA Technical Reports Server (NTRS)

    Davis, Brian; Turner, Travis L.; Seelecke, Stefan

    2005-01-01

    Previous work at NASA Langley Research Center (LaRC) involved fabrication and testing of composite beams with embedded, pre-strained shape memory alloy (SMA) ribbons within the beam structures. That study also provided comparison of experimental results with numerical predictions from a research code making use of a new thermoelastic model for shape memory alloy hybrid composite (SMAHC) structures. The previous work showed qualitative validation of the numerical model. However, deficiencies in the experimental-numerical correlation were noted and hypotheses for the discrepancies were given for further investigation. The goal of this work is to refine the experimental measurement and numerical modeling approaches in order to better understand the discrepancies, improve the correlation between prediction and measurement, and provide rigorous quantitative validation of the numerical analysis/design tool. The experimental investigation is refined by a more thorough test procedure and incorporation of higher fidelity measurements such as infrared thermography and projection moire interferometry. The numerical results are produced by a recently commercialized version of the constitutive model as implemented in ABAQUS and are refined by incorporation of additional measured parameters such as geometric imperfection. Thermal buckling, post-buckling, and random responses to thermal and inertial (base acceleration) loads are studied. The results demonstrate the effectiveness of SMAHC structures in controlling static and dynamic responses by adaptive stiffening. Excellent agreement is achieved between the predicted and measured results of the static and dynamic thermomechanical response, thereby providing quantitative validation of the numerical tool.

  5. An integrative formal model of motivation and decision making: The MGPM*.

    PubMed

    Ballard, Timothy; Yeo, Gillian; Loft, Shayne; Vancouver, Jeffrey B; Neal, Andrew

    2016-09-01

    We develop and test an integrative formal model of motivation and decision making. The model, referred to as the extended multiple-goal pursuit model (MGPM*), is an integration of the multiple-goal pursuit model (Vancouver, Weinhardt, & Schmidt, 2010) and decision field theory (Busemeyer & Townsend, 1993). Simulations of the model generated predictions regarding the effects of goal type (approach vs. avoidance), risk, and time sensitivity on prioritization. We tested these predictions in an experiment in which participants pursued different combinations of approach and avoidance goals under different levels of risk. The empirical results were consistent with the predictions of the MGPM*. Specifically, participants pursuing 1 approach and 1 avoidance goal shifted priority from the approach to the avoidance goal over time. Among participants pursuing 2 approach goals, those with low time sensitivity prioritized the goal with the larger discrepancy, whereas those with high time sensitivity prioritized the goal with the smaller discrepancy. Participants pursuing 2 avoidance goals generally prioritized the goal with the smaller discrepancy. Finally, all of these effects became weaker as the level of risk increased. We used quantitative model comparison to show that the MGPM* explained the data better than the original multiple-goal pursuit model, and that the major extensions from the original model were justified. The MGPM* represents a step forward in the development of a general theory of decision making during multiple-goal pursuit. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Quantitative precipitation forecasts in the Alps - an assessment from the Forecast Demonstration Project MAP D-PHASE

    NASA Astrophysics Data System (ADS)

    Ament, F.; Weusthoff, T.; Arpagaus, M.; Rotach, M.

    2009-04-01

    The main aim of the WWRP Forecast Demonstration Project MAP D-PHASE is to demonstrate the ability of today's models to forecast heavy precipitation and flood events in the Alpine region. To this end, an end-to-end, real-time forecasting system was installed and operated during the D-PHASE Operations Period from June to November 2007. Part of this system were 30 numerical weather prediction models (deterministic as well as ensemble systems) operated by weather services and research institutes, which issue alerts if predicted precipitation accumulations exceed critical thresholds. In addition to the real-time alerts, all relevant model fields of these simulations are stored in a central data archive. This comprehensive data set allows a detailed assessment of today's quantitative precipitation forecast (QPF) performance in the Alpine region. We will present results of QPF verification against Swiss radar and rain gauge data, both from a qualitative point of view, in terms of alerts, and from a quantitative perspective, in terms of precipitation rate. Various influencing factors such as lead time, accumulation time, selection of warning thresholds, and bias corrections will be discussed. In addition to traditional verification of area-average precipitation amounts, the ability of the models to predict the correct precipitation statistics without requiring a point-to-point match will be described using modern fuzzy verification techniques. Both analyses reveal significant advantages of deep-convection-resolving models compared to coarser models with parameterized convection. An intercomparison of the model forecasts themselves reveals remarkably high variability between different models and makes it worthwhile to evaluate the potential of a multi-model ensemble. Various multi-model ensemble strategies will be tested by combining D-PHASE models into virtual ensemble systems.
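
    As an illustration of the fuzzy verification idea mentioned above, the sketch below computes the fractions skill score (FSS), which compares neighbourhood exceedance fractions of forecast and observed precipitation rather than requiring point-to-point matches. The fields, threshold, and neighbourhood size are synthetic placeholders.

```python
# Sketch: fractions skill score (FSS) for a forecast/observation pair of
# precipitation fields, using an integral-image box average.
import numpy as np

def neighbourhood_fraction(binary_field, half_width):
    """Fraction of exceedance points in a (2*half_width+1)^2 box around each pixel."""
    padded = np.pad(binary_field.astype(float), half_width, mode="constant")
    n = 2 * half_width + 1
    csum = padded.cumsum(axis=0).cumsum(axis=1)
    csum = np.pad(csum, ((1, 0), (1, 0)), mode="constant")
    total = csum[n:, n:] - csum[:-n, n:] - csum[n:, :-n] + csum[:-n, :-n]
    return total / n**2

def fss(forecast, observed, threshold=1.0, half_width=5):
    f = neighbourhood_fraction(forecast >= threshold, half_width)
    o = neighbourhood_fraction(observed >= threshold, half_width)
    mse = np.mean((f - o) ** 2)
    mse_ref = np.mean(f ** 2) + np.mean(o ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

rng = np.random.default_rng(0)
obs = rng.gamma(0.4, 2.0, (100, 100))                            # synthetic rain field (mm/h)
fcst = np.roll(obs, 4, axis=1) + rng.normal(0, 0.2, obs.shape)   # displaced forecast
print("FSS (r=5 px, thr=1 mm/h):", round(fss(fcst, obs), 3))
```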

  7. X-Ray and UV Photoelectron Spectroscopy | Materials Science | NREL

    Science.gov Websites

    Example application: analysis of a backsheet material, showing excellent quantitative agreement between measured and predicted peak area ratios. Subtle differences in polymer functionality are assessed by deviations from stoichiometry. Elemental analysis uses quantitative identification

  8. Weather Prediction Center (WPC) Home Page

    Science.gov Websites

    Forecast grids, quantitative precipitation products, and winter weather outlook probabilities are available from the NWS Weather Prediction Center, along with short-range and medium-range products and the Day 1-3 Quantitative Precipitation Forecast Discussion.

  9. Decision making model for Foreign Object Debris/Damage (FOD) elimination in aeronautics using quantitative modeling approach

    NASA Astrophysics Data System (ADS)

    Lafon, Jose J.

    Foreign Object Debris/Damage (FOD) has been a costly issue for commercial and military aircraft manufacturers at their production lines every day. FOD can put the lives of pilots, passengers, and other crews at high risk. FOD refers to any type of foreign object, particle, debris, or agent in the manufacturing environment which could contaminate or damage the product or otherwise undermine quality standards. FOD is currently addressed with prevention programs, elimination techniques, designation of FOD areas, controlled access to FOD areas, restrictions on personal items entering designated areas, tool accountability, etc. These efforts have not shown a significant reduction in FOD occurrence in the manufacturing processes. This research presents a decision making model approach based on a logistic regression predictive model previously developed by other researchers. With a general idea of the FOD expected, elimination plans can be put in place to start eradicating the problem, minimizing the cost and time spent on the prediction, detection, and/or removal of FOD.
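
    The sketch below illustrates how a logistic-regression-based FOD risk estimate could feed a simple decision rule; the predictors, coefficients, data, and 20% action threshold are entirely hypothetical and are not taken from the cited predictive model.

```python
# Sketch: estimate the probability of a FOD finding per work area and shift,
# then flag high-risk areas for priority inspection. Simulated data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
traffic = rng.poisson(20, n)            # personnel movements per shift (hypothetical)
tool_checks = rng.integers(0, 2, n)     # 1 = tool accountability audit performed
night_shift = rng.integers(0, 2, n)

# Simulated ground truth: FOD is more likely with traffic, less likely with audits.
logit = -3.0 + 0.08 * traffic - 1.0 * tool_checks + 0.5 * night_shift
fod_found = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([traffic, tool_checks, night_shift])
model = LogisticRegression(max_iter=1000).fit(X, fod_found)

# Decision rule: inspect first the areas whose predicted FOD risk exceeds 20%.
p = model.predict_proba(X)[:, 1]
print("areas flagged for priority inspection:", int((p > 0.2).sum()), "of", n)
```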

  10. Social cycling and conditional responses in the Rock-Paper-Scissors game

    PubMed Central

    Wang, Zhijian; Xu, Bin; Zhou, Hai-Jun

    2014-01-01

    How humans make decisions in non-cooperative strategic interactions is a big question. For the fundamental Rock-Paper-Scissors (RPS) model game system, classic Nash equilibrium (NE) theory predicts that players randomize completely their action choices to avoid being exploited, while evolutionary game theory of bounded rationality in general predicts persistent cyclic motions, especially in finite populations. However as empirical studies have been relatively sparse, it is still a controversial issue as to which theoretical framework is more appropriate to describe decision-making of human subjects. Here we observe population-level persistent cyclic motions in a laboratory experiment of the discrete-time iterated RPS game under the traditional random pairwise-matching protocol. This collective behavior contradicts with the NE theory but is quantitatively explained, without any adjustable parameter, by a microscopic model of win-lose-tie conditional response. Theoretical calculations suggest that if all players adopt the same optimized conditional response strategy, their accumulated payoff will be much higher than the reference value of the NE mixed strategy. Our work demonstrates the feasibility of understanding human competition behaviors from the angle of non-equilibrium statistical physics. PMID:25060115
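
    The sketch below simulates the win-lose-tie conditional response idea under random pairwise matching with invented response probabilities; it illustrates the mechanism rather than reproducing the fitted microscopic model.

```python
# Sketch: Rock-Paper-Scissors agents under random pairwise matching, each
# choosing its next action from win/tie/lose-conditioned probabilities of
# repeating (shift 0), upgrading (+1), or downgrading (+2) its last action.
import numpy as np

rng = np.random.default_rng(1)
BEATS = {0: 2, 1: 0, 2: 1}                     # rock beats scissors, etc.

# P(shift by 0, +1, +2 mod 3 | last outcome); values are hypothetical.
cond_response = {"win":  [0.5, 0.3, 0.2],
                 "tie":  [0.4, 0.3, 0.3],
                 "lose": [0.2, 0.5, 0.3]}

n_players, n_rounds = 6, 5000
actions = rng.integers(0, 3, n_players)
action_counts = []

for _ in range(n_rounds):
    order = rng.permutation(n_players)
    new_actions = actions.copy()
    for a, b in zip(order[0::2], order[1::2]):          # random pairwise matching
        for me, opp in ((a, b), (b, a)):
            if actions[me] == actions[opp]:
                outcome = "tie"
            elif BEATS[actions[me]] == actions[opp]:
                outcome = "win"
            else:
                outcome = "lose"
            shift = rng.choice(3, p=cond_response[outcome])
            new_actions[me] = (actions[me] + shift) % 3
    actions = new_actions
    action_counts.append(np.bincount(actions, minlength=3))

print("mean action counts per round:", np.mean(action_counts, axis=0))
```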

  11. Mechanical critical phenomena and the elastic response of fiber networks

    NASA Astrophysics Data System (ADS)

    Mackintosh, Fred

    The mechanics of cells and tissues are largely governed by scaffolds of filamentous proteins that make up the cytoskeleton, as well as extracellular matrices. Evidence is emerging that such networks can exhibit rich mechanical phase behavior. A classic example of a mechanical phase transition was identified by Maxwell for macroscopic engineering structures: networks of struts or springs exhibit a continuous, second-order phase transition at the isostatic point, where the number of constraints imposed by connectivity just equals the number of mechanical degrees of freedom. We present recent theoretical predictions and experimental evidence for mechanical phase transitions in both synthetic and biopolymer networks. We show, in particular, excellent quantitative agreement between the mechanics of collagen matrices and the predictions of a strain-controlled phase transition in sub-isostatic networks.

  12. Intrinsic Atomic Orbitals: An Unbiased Bridge between Quantum Theory and Chemical Concepts.

    PubMed

    Knizia, Gerald

    2013-11-12

    Modern quantum chemistry can make quantitative predictions on an immense array of chemical systems. However, the interpretation of those predictions is often complicated by the complex wave function expansions used. Here we show that an exceptionally simple algebraic construction allows for defining atomic core and valence orbitals, polarized by the molecular environment, which can exactly represent self-consistent field wave functions. This construction provides an unbiased and direct connection between quantum chemistry and empirical chemical concepts, and can be used, for example, to calculate the nature of bonding in molecules, in chemical terms, from first principles. In particular, we find consistency with electronegativities (χ), C 1s core-level shifts, resonance substituent parameters (σR), Lewis structures, and oxidation states of transition-metal complexes.

  13. Obtaining big data of vegetation using artificial neural network

    NASA Astrophysics Data System (ADS)

    Ise, T.; Minagawa, M.; Onishi, M.

    2017-12-01

    Obtaining appropriate datasets is one of the key factors in carrying out predictive studies of ecosystems. Recently, neural network applications such as deep learning have successfully overcome difficulties in data acquisition and added large datasets to predictive science; deep learning is, for example, very powerful at identifying and counting people, cars, and similar objects. For vegetation science, however, deep learning has not been widely used. Unlike animals, plants grow modularly: the number of leaves and stems an individual plant possesses is not predetermined but changes flexibly with environmental conditions. This differs from, say, the standard model of a human face, which has a predetermined number of parts such as two eyes and one mouth, and this characteristic of plants can make object identification difficult. In this study, a simple but effective technique was used to overcome the difficulty of visually identifying plants, making automated classification of plant types and quantitative analyses possible. For instance, when our method was applied to classifying bryophytes, one of the most difficult plant types for computer vision because of their amorphous shapes, the identification model typically achieved over 90% success. With this technology it may be possible to obtain big data on plant type, size, density, and more from satellite and/or drone imagery in a quantitative manner, which will allow progress in predictive biogeosciences.

  14. How we learn to make decisions: rapid propagation of reinforcement learning prediction errors in humans.

    PubMed

    Krigolson, Olav E; Hassall, Cameron D; Handy, Todd C

    2014-03-01

    Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors-discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. E., Pierce, L. J., Holroyd, C. B., & Tanaka, J. W. Learning to become an expert: Reinforcement learning and the acquisition of perceptual expertise. Journal of Cognitive Neuroscience, 21, 1833-1840, 2009; Bayer, H. M., & Glimcher, P. W. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129-141, 2005; O'Doherty, J. P. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769-776, 2004; Holroyd, C. B., & Coles, M. G. H. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679-709, 2002]. Here, we used the brain ERP technique to demonstrate that not only do rewards elicit a neural response akin to a prediction error but also that this signal rapidly diminished and propagated to the time of choice presentation with learning. Specifically, in a simple, learnable gambling task, we show that novel rewards elicited a feedback error-related negativity that rapidly decreased in amplitude with learning. Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component that has a similar timing and topography as the feedback error-related negativity that increased in amplitude with learning. The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward prediction errors and the changes in amplitude of these prediction errors at the time of choice presentation and reward delivery. Our results provide further support that the computations that underlie human learning and decision-making follow reinforcement learning principles.
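
    A minimal delta-rule sketch can make the reported pattern concrete: with learning, prediction errors at feedback shrink while the value signal available at choice grows. The learning rate and task probabilities below are illustrative assumptions, not the authors' fitted model.

```python
# A minimal delta-rule sketch of the kind of reward-prediction-error model the
# authors describe (learning rate and task structure here are illustrative).
import numpy as np

rng = np.random.default_rng(2)
alpha = 0.2                      # learning rate (hypothetical)
p_reward = {"A": 0.8, "B": 0.2}  # a simple learnable two-option gamble
V = {"A": 0.5, "B": 0.5}         # initial value estimates

feedback_pe, choice_value = [], []
for trial in range(200):
    choice = max(V, key=V.get) if rng.random() > 0.1 else rng.choice(list(V))
    reward = float(rng.random() < p_reward[choice])
    delta = reward - V[choice]            # prediction error at feedback
    feedback_pe.append(abs(delta))
    V[choice] += alpha * delta
    choice_value.append(V[choice])        # value signal carried to the time of choice

# With learning, feedback prediction errors shrink while the value available
# at choice grows, mirroring the ERP pattern reported in the abstract.
print("early |PE| =", np.mean(feedback_pe[:20]), " late |PE| =", np.mean(feedback_pe[-20:]))
```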

  15. Revealing interaction mode between HIV-1 protease and mannitol analog inhibitor.

    PubMed

    Yan, Guan-Wen; Chen, Yue; Li, Yixue; Chen, Hai-Feng

    2012-06-01

    HIV-1 protease plays a key role in the HIV-1 replication cycle, controlling the maturation of HIV viruses into infectious virions, and it has become an important target for anti-HIV-1 drug development. Here, we used molecular dynamics simulation to study the binding mode between mannitol derivatives and HIV-1 protease. The results suggest that the most active compound (M35) forms more stable hydrogen bonds and more stable native contacts than the less active one (M17), and that these mannitol derivatives may share a similar interaction mode with HIV-1 protease. 3D-QSAR was then used to construct quantitative structure-activity models. The cross-validated q(2) values are 0.728 and 0.611 for CoMFA and CoMSIA, respectively, and the non-cross-validated r(2) values are 0.973 and 0.950. Nine test-set compounds validate the model. The results show that this model possesses better prediction ability than previous work. This model can be used to design new chemical entities and to make quantitative predictions of bioactivity for HIV-1 protease inhibitors before resorting to in vitro and in vivo experiments. © 2012 John Wiley & Sons A/S.
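
    For readers unfamiliar with the q(2) statistic quoted above, the sketch below computes a leave-one-out cross-validated q(2) and a non-cross-validated r(2) for a PLS model on a synthetic descriptor matrix standing in for CoMFA/CoMSIA fields; it is not the authors' dataset or software.

```python
# Sketch of the leave-one-out cross-validated q^2 statistic used to judge
# 3D-QSAR models (synthetic descriptors stand in for CoMFA/CoMSIA fields).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 50))                                # 30 compounds, 50 field descriptors
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=30)    # pIC50-like activity

press = 0.0
for train, test in LeaveOneOut().split(X):
    pls = PLSRegression(n_components=3).fit(X[train], y[train])
    pred = pls.predict(X[test]).ravel()[0]
    press += (y[test][0] - pred) ** 2                        # predictive residual sum of squares

ss = float(((y - y.mean()) ** 2).sum())
q2 = 1.0 - press / ss                                        # cross-validated q^2
r2 = PLSRegression(n_components=3).fit(X, y).score(X, y)     # non-cross-validated r^2
print(f"q2 = {q2:.3f}, r2 = {r2:.3f}")
```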

  16. Microdose clinical trial: quantitative determination of nicardipine and prediction of metabolites in human plasma.

    PubMed

    Yamane, Naoe; Takami, Tomonori; Tozuka, Zenzaburo; Sugiyama, Yuichi; Yamazaki, Akira; Kumagai, Yuji

    2009-01-01

    A sample-treatment procedure and a highly sensitive liquid chromatography/tandem mass spectrometry (LC/MS/MS) method for the quantitative determination of nicardipine in human plasma were developed for a microdose clinical trial with nicardipine, a non-radioisotope-labeled drug. The calibration curve was linear in the range of 1-500 pg/mL using 1 mL of plasma. Analytical method validation for the clinical dose, for which the calibration curve was linear in the range of 0.2-100 ng/mL using 20 microL of plasma, was also conducted. Each method was successfully applied to determinations in plasma using LC/MS/MS after administration of a microdose (100 microg) and a clinical dose (20 mg) to each of six healthy volunteers. We also tested new approaches for searching for metabolites in plasma after microdosing. In vitro metabolites of nicardipine were characterized using linear ion trap-Fourier transform ion cyclotron resonance mass spectrometry (LIT-FTICRMS), and the nine metabolites predicted to be present in plasma were analyzed using LC/MS/MS. Metabolite analysis by LC/MS/MS therefore shows strong potential for use in microdose clinical trials with non-radioisotope-labeled drugs.

  17. Predicting dire outcomes of patients with community acquired pneumonia.

    PubMed

    Cooper, Gregory F; Abraham, Vijoy; Aliferis, Constantin F; Aronis, John M; Buchanan, Bruce G; Caruana, Richard; Fine, Michael J; Janosky, Janine E; Livingston, Gary; Mitchell, Tom; Monti, Stefano; Spirtes, Peter

    2005-10-01

    Community-acquired pneumonia (CAP) is an important clinical condition with regard to patient mortality, patient morbidity, and healthcare resource utilization. The assessment of the likely clinical course of a CAP patient can significantly influence decision making about whether to treat the patient as an inpatient or as an outpatient. That decision can in turn influence resource utilization, as well as patient well being. Predicting dire outcomes, such as mortality or severe clinical complications, is a particularly important component in assessing the clinical course of patients. We used a training set of 1601 CAP patient cases to construct 11 statistical and machine-learning models that predict dire outcomes. We evaluated the resulting models on 686 additional CAP-patient cases. The primary goal was not to compare these learning algorithms as a study end point; rather, it was to develop the best model possible to predict dire outcomes. A special version of an artificial neural network (NN) model predicted dire outcomes the best. Using the 686 test cases, we estimated the expected healthcare quality and cost impact of applying the NN model in practice. The particular, quantitative results of this analysis are based on a number of assumptions that we make explicit; they will require further study and validation. Nonetheless, the general implication of the analysis seems robust, namely, that even small improvements in predictive performance for prevalent and costly diseases, such as CAP, are likely to result in significant improvements in the quality and efficiency of healthcare delivery. Therefore, seeking models with the highest possible level of predictive performance is important. Consequently, seeking ever better machine-learning and statistical modeling methods is of great practical significance.
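
    A hedged sketch of the general workflow, not the study's actual network, features, or cost figures: train a small neural network on labeled CAP cases, score held-out cases, and estimate the cost impact of triaging on the predicted risk.

```python
# Sketch of the general idea (not the study's model or cost analysis): fit a
# small neural network to labeled cases, then estimate the expected cost of a
# risk-threshold triage policy. All numbers below are hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(2287, 20))                                           # cohort-sized synthetic data
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2287) > 2.0).astype(int)   # "dire outcome" label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=1601, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X_tr, y_tr)
risk = clf.predict_proba(X_te)[:, 1]

# Hypothetical policy: admit patients whose predicted risk exceeds a threshold.
COST_INPATIENT, COST_OUTPATIENT, COST_MISSED_DIRE = 7000.0, 1500.0, 50000.0
admit = risk > 0.3
expected_cost = np.where(admit, COST_INPATIENT,
                         COST_OUTPATIENT + risk * COST_MISSED_DIRE).sum()
print(f"admitted {admit.mean():.0%} of test cases, expected cost ${expected_cost:,.0f}")
```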

  18. Information model of trainee characteristics with definition of stochastic behavior of dynamic system

    NASA Astrophysics Data System (ADS)

    Sumin, V. I.; Smolentseva, T. E.; Belokurov, S. V.; Lankin, O. V.

    2018-03-01

    This work analyzes how trainee characteristics form and subsequently change. The characteristics were obtained by testing trainees on each section of material in the chosen discipline, and the test results were used as input to a dynamic system. A region of control actions, consisting of elements of the dynamic system, is then formed. The limit of deterministic predictability of element trajectories in the dynamical system, based on local or global attractors, is identified, and the dimension of the system's phase space is determined, which allows the parameters of the initial system to be estimated. From time series of observations it is possible to determine the predictability interval of all parameters and thus describe the behavior of the system at discrete times. The measure of predictability is then the sum of the positive Lyapunov exponents, which provides a quantitative measure for all elements of the system. The components of an algorithm for determining the correlation dimension of the attractor from known initial experimental values of the variables are described. The resulting algorithm makes it possible to study experimentally the dynamics of changes in a trainee's parameters under initial uncertainty.
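
    The correlation-dimension estimate mentioned above is typically obtained with the Grassberger-Procaccia algorithm. The sketch below applies it to a synthetic chaotic series as a stand-in for an observed time series of trainee parameters; the embedding settings and the test signal are assumptions for illustration.

```python
# Sketch of a Grassberger-Procaccia correlation-dimension estimate, applied
# here to a synthetic logistic-map series rather than real trainee data.
import numpy as np

def delay_embed(x, dim, tau):
    """Build delay vectors [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def correlation_dimension(x, dim=3, tau=1, n_radii=12):
    Y = delay_embed(np.asarray(x, float), dim, tau)
    d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    d = d[np.triu_indices_from(d, k=1)]                    # unique pairwise distances
    radii = np.geomspace(d[d > 0].min(), d.max(), n_radii)
    C = np.array([(d < r).mean() for r in radii])          # correlation sum C(r)
    mask = C > 0
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(C[mask]), 1)
    return slope                                           # C(r) ~ r^D  =>  D = slope

# Chaotic logistic map as a stand-in for an observed parameter series.
x = [0.4]
for _ in range(1500):
    x.append(3.9 * x[-1] * (1 - x[-1]))
print("estimated correlation dimension:", round(correlation_dimension(x[200:]), 2))
```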

  19. Principles of effective communication with patients who have intellectual disability among primary care physicians.

    PubMed

    Werner, S; Yalon-Chamovitz, S; Tenne Rinde, M; Heymann, A D

    2017-07-01

    This study examined physicians' implementation of effective communication principles with patients with intellectual disabilities (ID) and its predictors. Focus groups helped construct a quantitative questionnaire, completed by 440 physicians, that examined use of effective communication principles, attitudes toward individuals with ID, subjective knowledge, and number of patients with ID. Greater subjective knowledge of ID and more patients with ID increased use of effective communication principles. Providing the knowledge that allows patients to make their own medical decisions was predicted by having more patients with ID, weaker agreement that treating this population is undesirable, less negative affect, and a stronger perception that treating this group is part of the physician's role. Effective preparation of patients with ID for treatment was predicted by a stronger perception that treating this group is part of the physician's role, a weaker perception of this field as undesirable, and a stronger perception of these individuals as unable to make their own choices. Simplification of information was predicted by a stronger perception that treating this group is part of the physician's role and by more negative affect. Greater familiarity may enhance care for these patients, so exposure to patients with ID should be increased within training. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Quantitative fetal fibronectin and cervical length to predict preterm birth in asymptomatic women with previous cervical surgery.

    PubMed

    Vandermolen, Brooke I; Hezelgrave, Natasha L; Smout, Elizabeth M; Abbott, Danielle S; Seed, Paul T; Shennan, Andrew H

    2016-10-01

    Quantitative fetal fibronectin testing has demonstrated accuracy for prediction of spontaneous preterm birth in asymptomatic women with a history of preterm birth. Predictive accuracy in women with previous cervical surgery (a potentially different risk mechanism) is not known. We sought to compare the predictive accuracy of cervicovaginal fluid quantitative fetal fibronectin and cervical length testing in asymptomatic women with previous cervical surgery to that in women with 1 previous preterm birth. We conducted a prospective blinded secondary analysis of a larger observational study of cervicovaginal fluid quantitative fetal fibronectin concentration in asymptomatic women measured with a Hologic 10Q system (Hologic, Marlborough, MA). Prediction of spontaneous preterm birth (<30, <34, and <37 weeks) with cervicovaginal fluid quantitative fetal fibronectin concentration in primiparous women who had undergone at least 1 invasive cervical procedure (n = 473) was compared with prediction in women who had previous spontaneous preterm birth, preterm prelabor rupture of membranes, or late miscarriage (n = 821). Relationship with cervical length was explored. The rate of spontaneous preterm birth <34 weeks in the cervical surgery group was 3% compared with 9% in previous spontaneous preterm birth group. Receiver operating characteristic curves comparing quantitative fetal fibronectin for prediction at all 3 gestational end points were comparable between the cervical surgery and previous spontaneous preterm birth groups (34 weeks: area under the curve, 0.78 [95% confidence interval 0.64-0.93] vs 0.71 [95% confidence interval 0.64-0.78]; P = .39). Prediction of spontaneous preterm birth using cervical length compared with quantitative fetal fibronectin for prediction of preterm birth <34 weeks of gestation offered similar prediction (area under the curve, 0.88 [95% confidence interval 0.79-0.96] vs 0.77 [95% confidence interval 0.62-0.92], P = .12 in the cervical surgery group; and 0.77 [95% confidence interval 0.70-0.84] vs 0.74 [95% confidence interval 0.67-0.81], P = .32 in the previous spontaneous preterm birth group). Prediction of spontaneous preterm birth using cervicovaginal fluid quantitative fetal fibronectin in asymptomatic women with cervical surgery is valid, and has comparative accuracy to that in women with a history of spontaneous preterm birth. Copyright © 2016 Elsevier Inc. All rights reserved.
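
    As a small illustration of the kind of discrimination statistic reported above, the sketch below computes an AUC with a bootstrap confidence interval for a quantitative fFN concentration against a binary preterm-birth outcome, using synthetic data rather than the study cohort.

```python
# Sketch of an ROC AUC with a bootstrap confidence interval for a quantitative
# biomarker against a binary outcome (synthetic data, illustrative only).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 473
ffn = rng.lognormal(mean=2.0, sigma=1.0, size=n)                        # quantitative fFN, ng/mL
preterm = (np.log(ffn) + rng.normal(scale=1.5, size=n) > 4.0).astype(int)

auc = roc_auc_score(preterm, ffn)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    if preterm[idx].min() == preterm[idx].max():
        continue                                                        # need both classes
    boot.append(roc_auc_score(preterm[idx], ffn[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```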

  1. Oxidative DNA damage background estimated by a system model of base excision repair

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokhansanj, B A; Wilson, III, D M

    Human DNA can be damaged by natural metabolism through free radical production. It has been suggested that the equilibrium between innate damage and cellular DNA repair results in an oxidative DNA damage background that potentially contributes to disease and aging. Efforts to quantitatively characterize the human oxidative DNA damage background level based on measuring 8-oxoguanine lesions as a biomarker have led to estimates varying over 3-4 orders of magnitude, depending on the method of measurement. We applied a previously developed and validated quantitative pathway model of human DNA base excision repair, integrating experimentally determined endogenous damage rates and model parameters from multiple sources. Our estimates of at most 100 8-oxoguanine lesions per cell are consistent with the low end of data from biochemical and cell biology experiments, a result robust to model limitations and parameter variation. Our results show the power of quantitative system modeling to interpret composite experimental data and make biologically and physiologically relevant predictions for complex human DNA repair pathway mechanisms and capacity.
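
    A toy steady-state calculation conveys the damage-repair balance that the full pathway model formalizes: lesions form at a constant endogenous rate and are removed by first-order repair, so the background settles at k_damage/k_repair. The rate constants below are placeholders, not the paper's fitted parameters.

```python
# Toy sketch of a damage/repair balance: dL/dt = k_damage - k_repair * L,
# which settles at the steady state k_damage / k_repair. The rate constants
# are hypothetical placeholders, not the paper's parameters.
from scipy.integrate import solve_ivp

k_damage = 1000.0     # hypothetical 8-oxoG lesions formed per cell per hour
k_repair = 20.0       # hypothetical first-order BER removal rate, per hour

sol = solve_ivp(lambda t, L: k_damage - k_repair * L, (0.0, 1.0), [0.0])
print("steady-state background ~", k_damage / k_repair, "lesions/cell")
print("level after 1 h of simulation:", round(float(sol.y[0, -1]), 1))
```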

  2. Requirements for the formal representation of pathophysiology mechanisms by clinicians

    PubMed Central

    Helvensteijn, M.; Kokash, N.; Martorelli, I.; Sarwar, D.; Islam, S.; Grenon, P.; Hunter, P.

    2016-01-01

    Knowledge of multiscale mechanisms in pathophysiology is the bedrock of clinical practice. If quantitative methods, predicting patient-specific behaviour of these pathophysiology mechanisms, are to be brought to bear on clinical decision-making, the Human Physiome community and Clinical community must share a common computational blueprint for pathophysiology mechanisms. A number of obstacles stand in the way of this sharing—not least the technical and operational challenges that must be overcome to ensure that (i) the explicit biological meanings of the Physiome's quantitative methods to represent mechanisms are open to articulation, verification and study by clinicians, and that (ii) clinicians are given the tools and training to explicitly express disease manifestations in direct contribution to modelling. To this end, the Physiome and Clinical communities must co-develop a common computational toolkit, based on this blueprint, to bridge the representation of knowledge of pathophysiology mechanisms (a) that is implicitly depicted in electronic health records and the literature, with (b) that found in mathematical models explicitly describing mechanisms. In particular, this paper makes use of a step-wise description of a specific disease mechanism as a means to elicit the requirements of representing pathophysiological meaning explicitly. The computational blueprint developed from these requirements addresses the Clinical community goals to (i) organize and manage healthcare resources in terms of relevant disease-related knowledge of mechanisms and (ii) train the next generation of physicians in the application of quantitative methods relevant to their research and practice. PMID:27051514

  3. Quantitative self-assembly prediction yields targeted nanomedicines

    NASA Astrophysics Data System (ADS)

    Shamay, Yosi; Shah, Janki; Işık, Mehtap; Mizrachi, Aviram; Leibold, Josef; Tschaharganeh, Darjus F.; Roxbury, Daniel; Budhathoki-Uprety, Januka; Nawaly, Karla; Sugarman, James L.; Baut, Emily; Neiman, Michelle R.; Dacek, Megan; Ganesh, Kripa S.; Johnson, Darren C.; Sridharan, Ramya; Chu, Karen L.; Rajasekhar, Vinagolu K.; Lowe, Scott W.; Chodera, John D.; Heller, Daniel A.

    2018-02-01

    Development of targeted nanoparticle drug carriers often requires complex synthetic schemes involving both supramolecular self-assembly and chemical modification. These processes are generally difficult to predict, execute, and control. We describe herein a targeted drug delivery system that is accurately and quantitatively predicted to self-assemble into nanoparticles based on the molecular structures of precursor molecules, which are the drugs themselves. The drugs assemble with the aid of sulfated indocyanines into particles with ultrahigh drug loadings of up to 90%. We devised quantitative structure-nanoparticle assembly prediction (QSNAP) models to identify and validate electrotopological molecular descriptors as highly predictive indicators of nano-assembly and nanoparticle size. The resulting nanoparticles selectively targeted kinase inhibitors to caveolin-1-expressing human colon cancer and autochthonous liver cancer models to yield striking therapeutic effects while avoiding pERK inhibition in healthy skin. This finding enables the computational design of nanomedicines based on quantitative models for drug payload selection.

  4. Zooming in on neutrino oscillations with DUNE

    NASA Astrophysics Data System (ADS)

    Srivastava, Rahul; Ternes, Christoph A.; Tórtola, Mariam; Valle, José W. F.

    2018-05-01

    We examine the capabilities of the DUNE experiment as a probe of the neutrino mixing paradigm. Taking the current status of neutrino oscillations and the design specifications of DUNE, we determine the experiment's potential to probe the structure of neutrino mixing and CP violation. We focus on the poorly determined parameters θ23 and δCP and consider both two and seven years of run. We take various benchmarks as our true values, such as the current preferred values of θ23 and δCP, as well as several theory-motivated choices. We determine quantitatively DUNE's potential to perform a precision measurement of θ23, as well as to test the CP violation hypothesis in a model-independent way. We find that, after running for seven years, DUNE will make a substantial step in the precise determination of these parameters, bringing to quantitative test the predictions of various theories of neutrino mixing.

  5. Vocal development in a Waddington landscape

    PubMed Central

    Teramoto, Yayoi; Takahashi, Daniel Y; Holmes, Philip; Ghazanfar, Asif A

    2017-01-01

    Vocal development is the adaptive coordination of the vocal apparatus, muscles, the nervous system, and social interaction. Here, we use a quantitative framework based on optimal control theory and Waddington’s landscape metaphor to provide an integrated view of this process. With a biomechanical model of the marmoset monkey vocal apparatus and behavioral developmental data, we show that only the combination of the developing vocal tract, vocal apparatus muscles and nervous system can fully account for the patterns of vocal development. Together, these elements influence the shape of the monkeys’ vocal developmental landscape, tilting, rotating or shifting it in different ways. We can thus use this framework to make quantitative predictions regarding how interfering factors or experimental perturbations can change the landscape within a species, or to explain comparative differences in vocal development across species. DOI: http://dx.doi.org/10.7554/eLife.20782.001 PMID:28092262

  6. Hierarchy of non-glucose sugars in Escherichia coli.

    PubMed

    Aidelberg, Guy; Towbin, Benjamin D; Rothschild, Daphna; Dekel, Erez; Bren, Anat; Alon, Uri

    2014-12-24

    Understanding how cells make decisions, and why they make the decisions they make, is of fundamental interest in systems biology. To address this, we study the decisions made by E. coli on which genes to express when presented with two different sugars. It is well-known that glucose, E. coli's preferred carbon source, represses the uptake of other sugars by means of global and gene-specific mechanisms. However, less is known about the utilization of glucose-free sugar mixtures which are found in the natural environment of E. coli and in biotechnology. Here, we combine experiment and theory to map the choices of E. coli among 6 different non-glucose carbon sources. We used robotic assays and fluorescence reporter strains to make precise measurements of promoter activity and growth rate in all pairs of these sugars. We find that the sugars can be ranked in a hierarchy: in a mixture of a higher and a lower sugar, the lower sugar system shows reduced promoter activity. The hierarchy corresponds to the growth rate supported by each sugar- the faster the growth rate, the higher the sugar on the hierarchy. The hierarchy is 'soft' in the sense that the lower sugar promoters are not completely repressed. Measurement of the activity of the master regulator CRP-cAMP shows that the hierarchy can be quantitatively explained based on differential activation of the promoters by CRP-cAMP. Comparing sugar system activation as a function of time in sugar pair mixtures at sub-saturating concentrations, we find cases of sequential activation, and also cases of simultaneous expression of both systems. Such simultaneous expression is not predicted by simple models of growth rate optimization, which predict only sequential activation. We extend these models by suggesting multi-objective optimization for both growing rapidly now and preparing the cell for future growth on the poorer sugar. We find a defined hierarchy of sugar utilization, which can be quantitatively explained by differential activation by the master regulator cAMP-CRP. The present approach can be used to understand cell decisions when presented with mixtures of conditions.

  7. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    PubMed

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    To address the evaluation and decision-making problem of human-machine interface layout design for a cabin, an operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension: the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort, after which operating comfort can be predicted quantitatively. The prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has good prediction performance, and can improve design efficiency.
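
    A minimal sketch of the two-stage structure described above: factor analysis reduces 16 joint angles to 4 comfort factors, and a fitted regressor maps the factors to a comfort score. A generic gradient-boosted regressor stands in for the paper's GEP symbolic regression, and the posture data are synthetic.

```python
# Sketch of the two-stage structure (dimension reduction, then a fitted
# comfort function). A generic gradient-boosted regressor substitutes for the
# paper's GEP symbolic regression; the joint-angle data are synthetic.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(6)
angles = rng.uniform(0, 120, size=(22, 16))          # 22 evaluated postures, 16 joint angles
comfort = 10 - 0.05 * np.abs(angles - 60).mean(axis=1) + rng.normal(scale=0.3, size=22)

factors = FactorAnalysis(n_components=4, random_state=0).fit_transform(angles)
model = GradientBoostingRegressor(random_state=0).fit(factors, comfort)
print("training R^2 on the 4 comfort factors:", round(model.score(factors, comfort), 3))
```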

  8. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    PubMed Central

    Wang, Guohua; Chen, Bo

    2015-01-01

    To address the evaluation and decision-making problem of human-machine interface layout design for a cabin, an operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension: the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort, after which operating comfort can be predicted quantitatively. The prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has good prediction performance, and can improve design efficiency. PMID:26448740

  9. Cognitive niches: an ecological model of strategy selection.

    PubMed

    Marewski, Julian N; Schooler, Lael J

    2011-07-01

    How do people select among different strategies to accomplish a given task? Across disciplines, the strategy selection problem represents a major challenge. We propose a quantitative model that predicts how selection emerges through the interplay among strategies, cognitive capacities, and the environment. This interplay carves out for each strategy a cognitive niche, that is, a limited number of situations in which the strategy can be applied, simplifying strategy selection. To illustrate our proposal, we consider selection in the context of 2 theories: the simple heuristics framework and the ACT-R (adaptive control of thought-rational) architecture of cognition. From the heuristics framework, we adopt the thesis that people make decisions by selecting from a repertoire of simple decision strategies that exploit regularities in the environment and draw on cognitive capacities, such as memory and time perception. ACT-R provides a quantitative theory of how these capacities adapt to the environment. In 14 simulations and 10 experiments, we consider the choice between strategies that operate on the accessibility of memories and those that depend on elaborate knowledge about the world. Based on Internet statistics, our model quantitatively predicts people's familiarity with and knowledge of real-world objects, the distributional characteristics of the associated speed of memory retrieval, and the cognitive niches of classic decision strategies, including those of the fluency, recognition, integration, lexicographic, and sequential-sampling heuristics. In doing so, the model specifies when people will be able to apply different strategies and how accurate, fast, and effortless people's decisions will be.

  10. Development of quantitative screen for 1550 chemicals with GC-MS.

    PubMed

    Bergmann, Alan J; Points, Gary L; Scott, Richard P; Wilson, Glenn; Anderson, Kim A

    2018-05-01

    With hundreds of thousands of chemicals in the environment, effective monitoring requires high-throughput analytical techniques. This paper presents a quantitative screening method for 1550 chemicals based on statistical modeling of responses with identification and integration performed using deconvolution reporting software. The method was evaluated with representative environmental samples. We tested biological extracts, low-density polyethylene, and silicone passive sampling devices spiked with known concentrations of 196 representative chemicals. A multiple linear regression (R² = 0.80) was developed with molecular weight, logP, polar surface area, and fractional ion abundance to predict chemical responses within a factor of 2.5. Linearity beyond the calibration had R² > 0.97 for three orders of magnitude. Median limits of quantitation were estimated to be 201 pg/μL (1.9× standard deviation). The number of detected chemicals and the accuracy of quantitation were similar for environmental samples and standard solutions. To our knowledge, this is the most precise method for the largest number of semi-volatile organic chemicals lacking authentic standards. Accessible instrumentation and software make this method cost effective in quantifying a large, customizable list of chemicals. When paired with silicone wristband passive samplers, this quantitative screen will be very useful for epidemiology where binning of concentrations is common. Graphical abstract: A multiple linear regression of chemical responses measured with GC-MS allowed quantitation of 1550 chemicals in samples such as silicone wristbands.
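
    The regression-based response surrogate can be sketched as follows: regress log instrument response on molecular weight, logP, polar surface area, and fractional ion abundance, then check what fraction of predictions fall within a factor of 2.5 of the measured response. The data and coefficients below are synthetic placeholders, not the published calibration.

```python
# Sketch of a response surrogate: ordinary least squares on four molecular
# descriptors, with a factor-of-2.5 accuracy check. Synthetic data only.
import numpy as np

rng = np.random.default_rng(7)
n = 196
X = np.column_stack([rng.uniform(100, 500, n),      # molecular weight
                     rng.uniform(-1, 8, n),         # logP
                     rng.uniform(0, 150, n),        # polar surface area
                     rng.uniform(0.05, 1.0, n)])    # fractional ion abundance
log_resp = (2.0 + 0.002 * X[:, 0] + 0.15 * X[:, 1] - 0.004 * X[:, 2]
            + 1.0 * X[:, 3] + rng.normal(scale=0.3, size=n))

A = np.column_stack([np.ones(n), X])                # add intercept column
coef, *_ = np.linalg.lstsq(A, log_resp, rcond=None)
pred = A @ coef

fold_error = np.exp(np.abs(pred - log_resp))        # multiplicative error of predicted response
print("fraction within a factor of 2.5:", np.mean(fold_error <= 2.5))
```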

  11. Forensic Comparison and Matching of Fingerprints: Using Quantitative Image Measures for Estimating Error Rates through Understanding and Predicting Difficulty

    PubMed Central

    Kellman, Philip J.; Mnookin, Jennifer L.; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E.

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessment of difficulty in fingerprint comparisons. PMID:24788812

  12. Energy Education: The Quantitative Voice

    NASA Astrophysics Data System (ADS)

    Wolfson, Richard

    2010-02-01

    A serious study of energy use and its consequences has to be quantitative. It makes little sense to push your favorite renewable energy source if it can't provide enough energy to make a dent in humankind's prodigious energy consumption. Conversely, it makes no sense to dismiss alternatives---solar in particular---that supply Earth with energy at some 10,000 times our human energy consumption rate. But being quantitative---especially with nonscience students or the general public---is a delicate business. This talk draws on the speaker's experience presenting energy issues to diverse audiences through single lectures, entire courses, and a textbook. The emphasis is on developing a quick, "back-of-the-envelope" approach to quantitative understanding of energy issues.

  13. Modeling and parameterization of photoelectrons emitted in condensed matter by linearly polarized synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Jablonski, A.

    2018-01-01

    Growing availability of synchrotron facilities stimulates an interest in quantitative applications of hard X-ray photoemission spectroscopy (HAXPES) using linearly polarized radiation. An advantage of this approach is the possibility of continuous variation of radiation energy that makes it possible to control the sampling depth for a measurement. Quantitative applications are based on accurate and reliable theory relating the measured spectral features to needed characteristics of the surface region of solids. A major complication in the case of polarized radiation is an involved structure of the photoemission cross-section for hard X-rays. In the present work, details of the relevant formalism are described and algorithms implementing this formalism for different experimental configurations are proposed. The photoelectron signal intensity may be considerably affected by variation in the positioning of the polarization vector with respect to the surface plane. This information is critical for any quantitative application of HAXPES by polarized X-rays. Different quantitative applications based on photoelectrons with energies up to 10 keV are considered here: (i) determination of surface composition, (ii) estimation of sampling depth, and (iii) measurements of an overlayer thickness. Parameters facilitating these applications (mean escape depths, information depths, effective attenuation lengths) were calculated for a number of photoelectron lines in four elemental solids (Si, Cu, Ag and Au) in different experimental configurations and locations of the polarization vector. One of the considered configurations, with polarization vector located in a plane perpendicular to the surface, was recommended for quantitative applications of HAXPES. In this configuration, it was found that the considered parameters vary weakly in the range of photoelectron emission angles from normal emission to about 50° with respect to the surface normal. The averaged values of the mean escape depth and effective attenuation length were approximated with accurate predictive formulas. The predicted effective attenuation lengths were compared with published values; major discrepancies observed can be ascribed to a possibility of discontinuous structure of the deposited overlayer.

  14. Utilization of Integrated Assessment Modeling for determining geologic CO2 storage security

    NASA Astrophysics Data System (ADS)

    Pawar, R.

    2017-12-01

    Geologic storage of carbon dioxide (CO2) has been extensively studied as a potential technology to mitigate the atmospheric concentration of CO2. Multiple international research and development efforts, large-scale demonstrations, and commercial projects are helping advance the technology. One critical area of active investigation is the prediction of long-term CO2 storage security and risks. A quantitative methodology for predicting a storage site's long-term performance is critical for making the key decisions necessary for successful deployment of commercial-scale projects, which will require quantitative assessments of potential long-term liabilities. These predictions are challenging because they require simulating CO2 and in-situ fluid movements, as well as their interactions, through the primary storage reservoir, potential leakage pathways (such as wellbores and faults), and shallow resources such as groundwater aquifers, while accounting for the inherent variability and uncertainties at geologic sites. This talk provides an overview of an approach based on integrated assessment modeling (IAM) to predict the long-term performance of a geologic storage site, including the storage reservoir, potential leakage pathways, and shallow groundwater aquifers. The approach uses reduced-order models (ROMs) that capture the complex physical and chemical interactions resulting from CO2 movement while remaining computationally very efficient. Applicability of the approach is demonstrated through examples focused on key storage security questions: What is the probability of CO2 leakage from a storage reservoir? How does storage security vary across geologic environments and operational conditions? How do site parameter variability and uncertainties affect storage security?
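
    A conceptual sketch of the IAM idea, under entirely hypothetical physics and parameters: fit a cheap reduced-order surrogate to a handful of expensive simulator runs, then propagate parameter uncertainty through the surrogate by Monte Carlo to estimate a leakage probability.

```python
# Conceptual sketch only: a Gaussian-process surrogate stands in for a
# reduced-order model of an expensive simulator, and Monte Carlo over
# uncertain site parameters yields a leakage probability. The "simulator"
# and all parameters are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(8)

def expensive_simulator(perm, well_density):
    """Stand-in for a full physics model: returns leaked CO2 fraction."""
    return 1e-3 * perm * well_density + 1e-4 * rng.normal()

# Train the surrogate on a small design of "expensive" runs.
design = rng.uniform([0.1, 0.0], [10.0, 5.0], size=(40, 2))   # (permeability, wells per km^2)
leak = np.array([expensive_simulator(p, w) for p, w in design])
rom = GaussianProcessRegressor(normalize_y=True).fit(design, leak)

# Cheap Monte Carlo over uncertain site parameters using the surrogate.
samples = np.column_stack([rng.lognormal(0.5, 0.6, 100000),
                           rng.uniform(0.0, 5.0, 100000)])
p_leak = (rom.predict(samples) > 0.01).mean()                 # P(leaked fraction > 1%)
print("estimated probability of exceeding the leakage threshold:", p_leak)
```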

  15. WPC Quantitative Precipitation Forecasts - Day 1

    Science.gov Websites

    Day 1 quantitative precipitation forecast products from the Weather Prediction Center, 5830 University Research Court, College Park, Maryland 20740.

  16. Molecular engineering of colloidal liquid crystals using DNA origami

    NASA Astrophysics Data System (ADS)

    Siavashpouri, Mahsa; Wachauf, Christian; Zakhary, Mark; Praetorius, Florian; Dietz, Hendrik; Dogic, Zvonimir

    Understanding the microscopic origin of the cholesteric phase remains a foundational yet unresolved problem in the field of liquid crystals. The lack of an experimental model system that allows systematic control of the microscopic chiral structure has made this problem difficult to investigate for many years. Here, using DNA origami technology, we systematically vary the chirality of colloidal particles with molecular precision and establish a quantitative relationship between the microscopic structure of the particles and the macroscopic cholesteric pitch. Our study presents a new methodology for predicting the bulk behavior of diverse phases from the microscopic architectures of the constituent molecules.

  17. Fuel Property Determination of Biodiesel-Diesel Blends By Terahertz Spectrum

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Zhao, Kun; Bao, Rima

    2012-05-01

    The frequency-dependent absorption characteristics of biodiesel and its blends with conventional diesel fuel were investigated in the spectral range 0.2-1.5 THz by terahertz time-domain spectroscopy (THz-TDS). The absorption coefficient increased regularly with biodiesel content. A nonlinear multivariate model correlating the cetane number and solidifying point of biodiesel blends with the absorption coefficient was established, making quantitative analysis of fuel properties simple. The results show that cetane number and solidifying point can be predicted by THz-TDS technology and indicate promising prospects for practical application.

  18. Quantifying Disease Progression in Amyotrophic Lateral Sclerosis

    PubMed Central

    Simon, Neil G; Turner, Martin R; Vucic, Steve; Al-Chalabi, Ammar; Shefner, Jeremy; Lomen-Hoerth, Catherine; Kiernan, Matthew C

    2014-01-01

    Amyotrophic lateral sclerosis (ALS) exhibits characteristic variability of onset and rate of disease progression, with inherent clinical heterogeneity making disease quantitation difficult. Recent advances in understanding pathogenic mechanisms linked to the development of ALS impose an increasing need to develop strategies to predict and more objectively measure disease progression. This review explores phenotypic and genetic determinants of disease progression in ALS, and examines established and evolving biomarkers that may contribute to robust measurement in longitudinal clinical studies. With targeted neuroprotective strategies on the horizon, developing efficiencies in clinical trial design may facilitate timely entry of novel treatments into the clinic. PMID:25223628

  19. Analysis and Modeling of Ground Operations at Hub Airports

    NASA Technical Reports Server (NTRS)

    Atkins, Stephen (Technical Monitor); Andersson, Kari; Carr, Francis; Feron, Eric; Hall, William D.

    2000-01-01

    Building simple and accurate models of hub airports can considerably help one understand airport dynamics, and may provide quantitative estimates of operational airport improvements. In this paper, three models are proposed to capture the dynamics of busy hub airport operations. Two simple queuing models are introduced to capture the taxi-out and taxi-in processes. An integer programming model aimed at representing airline decision-making attempts to capture the dynamics of the aircraft turnaround process. These models can be applied for predictive purposes. They may also be used to evaluate control strategies for improving overall airport efficiency.
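
    A minimal sketch of the kind of taxi-out queue such models capture: aircraft push back at random times and a single runway serves them first-in, first-out. The arrival and service rates below are illustrative, not calibrated to any airport or to the paper's data.

```python
# Minimal single-server FIFO taxi-out queue sketch (illustrative rates only,
# not the paper's calibrated models).
import numpy as np

rng = np.random.default_rng(9)
n = 500
pushback = np.cumsum(rng.exponential(scale=90.0, size=n))    # seconds between pushbacks
service = rng.exponential(scale=75.0, size=n)                # runway occupancy per departure

start = np.empty(n)
finish = np.empty(n)
for i in range(n):
    start[i] = pushback[i] if i == 0 else max(pushback[i], finish[i - 1])
    finish[i] = start[i] + service[i]

taxi_out = finish - pushback
print(f"mean taxi-out {taxi_out.mean()/60:.1f} min, 95th percentile "
      f"{np.percentile(taxi_out, 95)/60:.1f} min")
```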

  20. Patterns, Probabilities, and People: Making Sense of Quantitative Change in Complex Systems

    ERIC Educational Resources Information Center

    Wilkerson-Jerde, Michelle Hoda; Wilensky, Uri J.

    2015-01-01

    The learning sciences community has made significant progress in understanding how people think and learn about complex systems. But less is known about how people make sense of the quantitative patterns and mathematical formalisms often used to study these systems. In this article, we make a case for attending to and supporting connections…

  1. Assessment of cognitive bias in decision-making and leadership styles among critical care nurses: a mixed methods study.

    PubMed

    Lean Keng, Soon; AlQudah, Hani Nawaf Ibrahim

    2017-02-01

    The aim was to raise awareness of critical care nurses' cognitive bias in decision-making, its relationship with leadership styles, and its impact on care delivery. The relationship between critical care nurses' decision-making and leadership styles in hospitals has been widely studied, but the influence of cognitive bias on decision-making and leadership styles in critical care environments remains poorly understood, particularly in Jordan. The study used a two-phase mixed-methods sequential explanatory design with grounded theory, set in the critical care unit of Prince Hamza Hospital, Jordan. Participants were recruited by convenience sampling in Phase 1 (quantitative, n = 96) and purposive sampling in Phase 2 (qualitative, n = 20). A pilot-tested quantitative survey of 96 critical care nurses was conducted in 2012, followed by qualitative in-depth interviews, informed by the quantitative results, with 20 critical care nurses in 2013. Quantitative data were analyzed with descriptive statistics and simple linear regression; qualitative data were analyzed thematically (constant comparison). Quantitatively, correlations were found between rationality and cognitive bias, rationality and task-oriented leadership styles, cognitive bias and democratic communication styles, and cognitive bias and task-oriented leadership styles. Qualitatively, 'being competent', 'organizational structures', 'feeling self-confident' and 'being supported' in the work environment were identified as key factors influencing critical care nurses' cognitive bias in decision-making and leadership styles. Cognitive bias in decision-making and leadership styles had a two-way impact (strengthening and weakening) on critical care nurses' practice performance. There is a need to heighten critical care nurses' awareness of cognitive bias in decision-making and leadership styles and its impact, and to develop organization-level strategies to increase unbiased decision-making. © 2016 John Wiley & Sons Ltd.

  2. QR-STEM: Energy and Environment as a Context for Improving QR and STEM Understandings of 6-12 Grade Teachers II. The Quantitative Reasoning

    NASA Astrophysics Data System (ADS)

    Mayes, R.; Lyford, M. E.; Myers, J. D.

    2009-12-01

    The Quantitative Reasoning in STEM (QR STEM) project is a state level Mathematics and Science Partnership Project (MSP) with a focus on the mathematics and statistics that underlies the understanding of complex global scientific issues. This session is a companion session to the QR STEM: The Science presentation. The focus of this session is the quantitative reasoning aspects of the project. As students move from understandings that range from local to global in perspective on issues of energy and environment, there is a significant increase in the need for mathematical and statistical conceptual understanding. These understandings must be accessible to the students within the scientific context, requiring the special understandings that are endemic within quantitative reasoning. The QR STEM project brings together interdisciplinary teams of higher education faculty and middle/high school teachers to explore complex problems in energy and environment. The disciplines include life sciences, physics, chemistry, earth science, statistics, and mathematics. These interdisciplinary teams develop open ended performance tasks to implement in the classroom, based on scientific concepts that underpin energy and environment. Quantitative reasoning is broken down into three components: Quantitative Literacy, Quantitative Interpretation, and Quantitative Modeling. Quantitative Literacy is composed of arithmetic concepts such as proportional reasoning, numeracy, and descriptive statistics. Quantitative Interpretation includes algebraic and geometric concepts that underlie the ability to interpret a model of natural phenomena which is provided for the student. This model may be a table, graph, or equation from which the student is to make predictions or identify trends, or from which they would use statistics to explore correlations or patterns in data. Quantitative modeling is the ability to develop the model from data, including the ability to test hypothesis using statistical procedures. We use the term model very broadly, so it includes visual models such as box models, as well as best fit equation models and hypothesis testing. One of the powerful outcomes of the project is the conversation which takes place between science teachers and mathematics teachers. First they realize that though they are teaching concepts that cross their disciplines, the barrier of scientific language within their subjects restricts students from applying the concepts across subjects. Second the mathematics teachers discover the context of science as a means of providing real world situations that engage students in the utility of mathematics as a tool for solving problems. Third the science teachers discover the barrier to understanding science that is presented by poor quantitative reasoning ability. Finally the students are engaged in exploring energy and environment in a manner which exposes the importance of seeing a problem from multiple interdisciplinary perspectives. The outcome is a democratic citizen capable of making informed decisions, and perhaps a future scientist.

  3. Cancer imaging phenomics toolkit: quantitative imaging analytics for precision diagnostics and predictive modeling of clinical outcome.

    PubMed

    Davatzikos, Christos; Rathore, Saima; Bakas, Spyridon; Pati, Sarthak; Bergman, Mark; Kalarot, Ratheesh; Sridharan, Patmaa; Gastounioti, Aimilia; Jahani, Nariman; Cohen, Eric; Akbari, Hamed; Tunc, Birkan; Doshi, Jimit; Parker, Drew; Hsieh, Michael; Sotiras, Aristeidis; Li, Hongming; Ou, Yangming; Doot, Robert K; Bilello, Michel; Fan, Yong; Shinohara, Russell T; Yushkevich, Paul; Verma, Ragini; Kontos, Despina

    2018-01-01

    The growth of multiparametric imaging protocols has paved the way for quantitative imaging phenotypes that predict treatment response and clinical outcome, reflect underlying cancer molecular characteristics and spatiotemporal heterogeneity, and can guide personalized treatment planning. This growth has underlined the need for efficient quantitative analytics to derive high-dimensional imaging signatures of diagnostic and predictive value in this emerging era of integrated precision diagnostics. This paper presents cancer imaging phenomics toolkit (CaPTk), a new and dynamically growing software platform for analysis of radiographic images of cancer, currently focusing on brain, breast, and lung cancer. CaPTk leverages the value of quantitative imaging analytics along with machine learning to derive phenotypic imaging signatures, based on two-level functionality. First, image analysis algorithms are used to extract comprehensive panels of diverse and complementary features, such as multiparametric intensity histogram distributions, texture, shape, kinetics, connectomics, and spatial patterns. At the second level, these quantitative imaging signatures are fed into multivariate machine learning models to produce diagnostic, prognostic, and predictive biomarkers. Results from clinical studies in three areas are shown: (i) computational neuro-oncology of brain gliomas for precision diagnostics, prediction of outcome, and treatment planning; (ii) prediction of treatment response for breast and lung cancer, and (iii) risk assessment for breast cancer.
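
    A generic sketch of the two-level pattern described, deliberately not using the CaPTk API: simple quantitative features are extracted from each image, and the feature panel is fed to a multivariate classifier. The images, features, and outcome labels are synthetic.

```python
# Generic two-level sketch (this is NOT the CaPTk API): level 1 extracts a
# small quantitative feature panel per image; level 2 feeds the panel to a
# multivariate classifier. Images and outcome labels are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)

def extract_features(img):
    """Level 1: simple intensity and gradient (texture-like) descriptors."""
    gx, gy = np.gradient(img.astype(float))
    return [img.mean(), img.std(), np.percentile(img, 90),
            np.abs(gx).mean() + np.abs(gy).mean()]

# Synthetic "lesions": one class is brighter and more heterogeneous.
classes = rng.integers(0, 2, size=120)
images = [rng.normal(loc=c, scale=1 + c, size=(32, 32)) for c in classes]
labels = np.array([int(img.mean() > 0.5) for img in images])   # stand-in outcome
X = np.array([extract_features(img) for img in images])

# Level 2: multivariate machine-learning model producing the imaging signature.
clf = make_pipeline(StandardScaler(), SVC())
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```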

  4. Nondestructive X-ray diffraction measurement of warpage in silicon dies embedded in integrated circuit packages.

    PubMed

    Tanner, B K; Danilewsky, A N; Vijayaraghavan, R K; Cowley, A; McNally, P J

    2017-04-01

    Transmission X-ray diffraction imaging in both monochromatic and white beam section mode has been used to measure quantitatively the displacement and warpage stress in encapsulated silicon devices. The displacement dependence with position on the die was found to agree well with that predicted from a simple model of warpage stress. For uQFN microcontrollers, glued only at the corners, the measured misorientation contours are consistent with those predicted using finite element analysis. The absolute displacement, measured along a line through the die centre, was comparable to that reported independently by high-resolution X-ray diffraction and optical interferometry of similar samples. It is demonstrated that the precision is greater than the spread of values found in randomly selected batches of commercial devices, making the techniques viable for industrial inspection purposes.

  5. Brain and cognitive-behavioural development after asphyxia at term birth.

    PubMed

    de Haan, Michelle; Wyatt, John S; Roth, Simon; Vargha-Khadem, Faraneh; Gadian, David; Mishkin, Mortimer

    2006-07-01

    Perinatal asphyxia occurs in approximately 1-6 per 1000 live full-term births. Different patterns of brain damage can result, though the relation of these patterns to long-term cognitive-behavioural outcome remains under investigation. The hippocampus is one brain region that can be damaged (typically not in isolation), and this site of damage has been implicated in two different long-term outcomes, cognitive memory impairment and the psychiatric disorder schizophrenia. Factors in addition to the acute episode of asphyxia likely contribute to these specific outcomes, making prediction difficult. Future studies that better document long-term cognitive-behavioural outcome, quantitatively identify patterns of brain injury over development and consider additional variables that may modulate the impact of asphyxia on cognitive and behavioural function will forward the goals of predicting long-term outcome and understanding the mechanisms by which it unfolds.

  6. Predicting the impact of land management decisions on overland flow generation: Implications for cesium migration in forested Fukushima watersheds

    NASA Astrophysics Data System (ADS)

    Siirila-Woodburn, Erica R.; Steefel, Carl I.; Williams, Kenneth H.; Birkholzer, Jens T.

    2018-03-01

    The effects of land use and land cover (LULC) change on environmental systems across the land surface's "critical zone" are highly uncertain, often making prediction and risk management decision difficult. In a series of numerical experiments with an integrated hydrologic model, overland flow generation is quantified for both present day and forest thinning scenarios. A typhoon storm event in a watershed near the Fukushima Dai-ichi Nuclear Power Plant is used as an example application in which the interplay between LULC change and overland flow generation is important given that sediment-bound radionuclides may cause secondary contamination via surface water transport. Results illustrate the nonlinearity of the integrated system spanning from the deep groundwater to the atmosphere, and provide quantitative tools when determining the tradeoffs of different risk-mitigation strategies.

  7. Laser-Induced Fluorescence Measurements and Modeling of Nitric Oxide in Counterflow Diffusion Flames

    NASA Technical Reports Server (NTRS)

    Ravikrishna, Rayavarapu V.

    2000-01-01

    The feasibility of making quantitative nonintrusive NO concentration ([NO]) measurements in nonpremixed flames has been assessed by obtaining laser-induced fluorescence (LIF) measurements of [NO] in counterflow diffusion flames at atmospheric and higher pressures. Comparisons at atmospheric pressure between laser-saturated fluorescence (LSF) and linear LIF measurements in four diluted ethane-air counterflow diffusion flames with strain rates from 5 to 48/s yielded excellent agreement from fuel-lean to moderately fuel-rich conditions, thus indicating the utility of a model-based quenching correction technique, which was then extended to higher pressures. Quantitative LIF measurements of [NO] in three diluted methane-air counterflow diffusion flames with strain rates from 5 to 35/s were compared with OPPDIF model predictions using the GRI (version 2.11) chemical kinetic mechanism. The comparisons revealed that the GRI mechanism underpredicts prompt-NO by 30-50% at atmospheric pressure. Based on these measurements, a modified reaction rate coefficient for the prompt-NO initiation reaction was proposed which causes the predictions to match experimental data. Temperature measurements using thin filament pyrometry (TFP) in conjunction with a new calibration method utilizing a near-adiabatic H2-air Hencken burner gave very good comparisons with model predictions in these counterflow diffusion flames. Quantitative LIF measurements of [NO] were also obtained in four methane-air counterflow partially-premixed flames with fuel-side equivalence ratios (phi(sub B)) of 1.45, 1.6, 1.8 and 2.0. The measurements were in excellent agreement with model predictions when accounting for radiative heat loss. Spatial separation between regions dominated by the prompt and thermal NO mechanisms was observed in the phi(sub B) = 1.45 flame. The modified rate coefficient proposed earlier for the prompt-NO initiation reaction improved agreement between code predictions and measurements in the region where prompt-NO dominates. Finally, LIF measurements of NO were obtained in counterflow diffusion flames at 2 to 5 atm. Comparisons between [NO] measurements and predictions show that the GRI mechanism underpredicts prompt-NO by a factor of two to three at all pressures. In general, the results indicate a need for refinement of the CH chemistry, especially the pressure-dependent CH formation and destruction reactions.

  8. Parents' and Physicians' Perceptions of Children's Participation in Decision-making in Paediatric Oncology: A Quantitative Study.

    PubMed

    Rost, Michael; Wangmo, Tenzin; Niggli, Felix; Hartmann, Karin; Hengartner, Heinz; Ansari, Marc; Brazzola, Pierluigi; Rischewski, Johannes; Beck-Popovic, Maja; Kühne, Thomas; Elger, Bernice S

    2017-12-01

    The goal is to present how shared decision-making in paediatric oncology occurs from the viewpoints of parents and physicians. Eight Swiss Pediatric Oncology Group centres participated in this prospective study. The sample comprised a parent and physician of the minor patient (<18 years). Surveys were statistically analysed by comparing physicians' and parents' perspectives and by evaluating factors associated with children's actual involvement. Perspectives of ninety-one parents and twenty physicians were obtained for 151 children. Results indicate that for six aspects of information provision examined, parents' and physicians' perceptions differed. Moreover, parents felt that the children were more competent to understand diagnosis and prognosis, assessed the disease of the children as worse, and reported higher satisfaction with decision-making on the part of the children. A patient's age and gender predicted involvement. Older children and girls were more likely to be involved. In the decision-making process, parents held a less active role than they actually wanted. Physicians should take measures to ensure that provided information is understood correctly. Furthermore, they should work towards creating awareness for systematic differences between parents and physicians with respect to the perception of the child, the disease, and shared decision-making.

  9. Student Experiments on the Effects of Dam Removal on the Elwha River

    NASA Astrophysics Data System (ADS)

    Sandland, T. O.; Grack Nelson, A. L.

    2006-12-01

    The National Center for Earth Surface Dynamics (NCED) is an NSF funded Science and Technology Center devoted to developing a quantitative, predictive science of the ecological and physical processes that define and shape rivers and river networks. The Science Museum of Minnesota's (SMM) Earthscapes River Restoration classes provide k-12 students, teachers, and the public opportunities to explore NCED concepts and, like NCED scientists, move from a qualitative to a quantitative-based understanding of river systems. During a series of classes, students work with an experimental model of the Elwha River in Washington State to gain an understanding of the processes that define and shape river systems. Currently, two large dams on the Elwha are scheduled for removal to restore salmon habitat. Students design different dam removal scenarios to test and make qualitative observations describing and comparing how the modeled system evolves over time. In a following session, after discussing the ambiguity of the previous session's qualitative data, student research teams conduct a quantitative experiment to collect detailed measurements of the system. Finally, students interpret, critique, and compare the data the groups collected and ultimately develop and advocate a recommendation for the "ideal" dam removal scenario. SMM is currently conducting a formative evaluation of River Restoration classes to improve their educational effectiveness and guide development of an educator's manual. As of August 2006, pre- and post-surveys have been administered to 167 students to gauge student learning and engagement. The surveys have found the program successful in teaching students why scientists use river models and what processes and phenomena are at work in river systems. Most notable is the increase in student awareness of sediment in river systems. A post-visit survey was also administered to 20 teachers who used the models in their classrooms. This survey provided feedback about teachers' experience with the program and will help inform the development of a future educator's manual. All teachers found the program to be effective at providing opportunities for students to make qualitative observations and most (95%) found the program effective at providing students opportunities to make quantitative measurements. A full summary of evaluation results will be shared at the meeting.

  10. Integration of biological data by kernels on graph nodes allows prediction of new genes involved in mitotic chromosome condensation

    PubMed Central

    Hériché, Jean-Karim; Lees, Jon G.; Morilla, Ian; Walter, Thomas; Petrova, Boryana; Roberti, M. Julia; Hossain, M. Julius; Adler, Priit; Fernández, José M.; Krallinger, Martin; Haering, Christian H.; Vilo, Jaak; Valencia, Alfonso; Ranea, Juan A.; Orengo, Christine; Ellenberg, Jan

    2014-01-01

    The advent of genome-wide RNA interference (RNAi)–based screens puts us in the position to identify genes for all functions human cells carry out. However, for many functions, assay complexity and cost make genome-scale knockdown experiments impossible. Methods to predict genes required for cell functions are therefore needed to focus RNAi screens from the whole genome on the most likely candidates. Although different bioinformatics tools for gene function prediction exist, they lack experimental validation and are therefore rarely used by experimentalists. To address this, we developed an effective computational gene selection strategy that represents public data about genes as graphs and then analyzes these graphs using kernels on graph nodes to predict functional relationships. To demonstrate its performance, we predicted human genes required for a poorly understood cellular function—mitotic chromosome condensation—and experimentally validated the top 100 candidates with a focused RNAi screen by automated microscopy. Quantitative analysis of the images demonstrated that the candidates were indeed strongly enriched in condensation genes, including the discovery of several new factors. By combining bioinformatics prediction with experimental validation, our study shows that kernels on graph nodes are powerful tools to integrate public biological data and predict genes involved in cellular functions of interest. PMID:24943848
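
    As a hedged illustration of the general idea (not the authors' pipeline), the sketch below builds a toy gene graph, computes a diffusion kernel from its Laplacian, and ranks unlabelled genes by kernel similarity to a hypothetical seed set; the gene names, edges, and kernel parameter are placeholders.

    ```python
    # Minimal sketch of ranking genes with a diffusion kernel on a graph.
    # The graph, gene list, seed set, and kernel parameter are hypothetical.
    import numpy as np
    from scipy.linalg import expm

    genes = ["SMC2", "SMC4", "NCAPD2", "geneX", "geneY", "geneZ"]
    edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5)]  # undirected edges

    n = len(genes)
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0

    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    K = expm(-0.5 * L)                      # diffusion kernel, beta = 0.5

    seeds = [0, 1, 2]                       # genes already known to be involved
    candidates = [3, 4, 5]
    scores = {genes[c]: K[c, seeds].sum() for c in candidates}
    for gene, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{gene}: {score:.3f}")
    ```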

  11. Predicting Multicomponent Adsorption Isotherms in Open-Metal Site Materials Using Force Field Calculations Based on Energy Decomposed Density Functional Theory.

    PubMed

    Heinen, Jurn; Burtch, Nicholas C; Walton, Krista S; Fonseca Guerra, Célia; Dubbeldam, David

    2016-12-12

    For the design of adsorptive-separation units, knowledge is required of the multicomponent adsorption behavior. Ideal adsorbed solution theory (IAST) breaks down for olefin adsorption in open-metal site (OMS) materials due to non-ideal donor-acceptor interactions. Using a density-functional-theory-based energy decomposition scheme, we develop a physically justifiable classical force field that incorporates the missing orbital interactions using an appropriate functional form. Our first-principles derived force field shows greatly improved quantitative agreement with the inflection points, initial uptake, saturation capacity, and enthalpies of adsorption obtained from our in-house adsorption experiments. While IAST fails to make accurate predictions, our improved force field model is able to correctly predict the multicomponent behavior. Our approach is also transferable to other OMS structures, allowing the accurate study of their separation performances for olefins/paraffins and further mixtures involving complex donor-acceptor interactions. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Reward rate optimization in two-alternative decision making: empirical tests of theoretical predictions.

    PubMed

    Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D

    2009-12-01

    The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response tasks. However, little is known about how participants settle on particular tradeoffs. One possibility is that they select SATs that maximize a subjective rate of reward earned for performance. For the DDM, there exist unique, reward-rate-maximizing values for its threshold and starting point parameters in free-response tasks that reward correct responses (R. Bogacz, E. Brown, J. Moehlis, P. Holmes, & J. D. Cohen, 2006). These optimal values vary as a function of response-stimulus interval, prior stimulus probability, and relative reward magnitude for correct responses. We tested the resulting quantitative predictions regarding response time, accuracy, and response bias under these task manipulations and found that grouped data conformed well to the predictions of an optimally parameterized DDM.
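
    As an illustration of the quantitative predictions being tested, the sketch below evaluates the reward rate of an unbiased DDM over a range of thresholds using the standard closed-form expressions for error rate and mean decision time; the parameter values are arbitrary, not fitted to the study's data.

    ```python
    # Sketch: reward rate of an unbiased drift-diffusion model as a function of
    # its decision threshold, using the closed-form error rate and mean decision
    # time for drift A, noise c, and symmetric thresholds +/-z.
    # Parameter values are illustrative, not taken from the study.
    import numpy as np

    A, c = 0.2, 1.0          # drift rate and noise standard deviation
    T0, RSI = 0.3, 1.0       # non-decision time and response-stimulus interval (s)

    z = np.linspace(0.05, 3.0, 300)                  # candidate thresholds
    ER = 1.0 / (1.0 + np.exp(2 * A * z / c**2))      # error rate
    DT = (z / A) * np.tanh(A * z / c**2)             # mean decision time
    RR = (1 - ER) / (DT + T0 + RSI)                  # rewards per second (correct only)

    best = z[np.argmax(RR)]
    print(f"reward-rate-maximizing threshold: z = {best:.2f}")
    ```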

  13. Plastics in the Ocean: Engaging Students in Core Competencies Through Issues-Based Activities in the Science Classroom.

    NASA Astrophysics Data System (ADS)

    Fergusson-Kolmes, L. A.

    2016-02-01

    Plastic pollution in the ocean is a critical issue. The high profile of this issue in the popular media makes it an opportune vehicle for promoting deeper understanding of the topic while also advancing student learning in the core competency areas identified in the NSF's Vision and Change document: integration of the process of science, quantitative reasoning, modeling and simulation, and an understanding of the relationship between science and society. This is a challenging task in an introductory non-majors class where the students may have very limited math skills and no prior science background. In this case activities are described that ask students to use an understanding of density to make predictions and test them as they consider the fate of different kinds of plastics in the marine environment. A comparison of the results from different sampling regimes introduces students to the difficulties of carrying out scientific investigations in the complex marine environment as well as building quantitative literacy skills. Activities that call on students to make connections between global issues of plastic pollution and personal actions include extraction of microplastic from personal care products, inventories of local plastic-recycling options and estimations of contributions to the waste stream on an individual level. This combination of hands-on-activities in an accessible context serves to help students appreciate the immediacy of the threat of plastic pollution and calls them to reflect on possible solutions.
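
    As an illustrative aside, the float-or-sink prediction the students make reduces to a direct density comparison; the polymer densities below are approximate handbook values, and seawater is taken as roughly 1.03 g/cm3.

    ```python
    # Predict whether common plastics float or sink in seawater by comparing
    # approximate polymer densities (g/cm3) with a nominal seawater density.
    SEAWATER = 1.03  # g/cm3, approximate

    plastic_density = {
        "polypropylene (PP)":        0.90,
        "low-density polyethylene":  0.92,
        "high-density polyethylene": 0.95,
        "polystyrene (PS)":          1.05,
        "PET":                       1.38,
        "PVC":                       1.40,
    }

    for plastic, rho in plastic_density.items():
        behaviour = "floats" if rho < SEAWATER else "sinks"
        print(f"{plastic:28s} {rho:.2f} g/cm3 -> {behaviour}")
    ```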

  14. Quantitative imaging biomarkers: the application of advanced image processing and analysis to clinical and preclinical decision making.

    PubMed

    Prescott, Jeffrey William

    2013-02-01

    The importance of medical imaging for clinical decision making has been steadily increasing over the last four decades. Recently, there has also been an emphasis on medical imaging for preclinical decision making, i.e., for use in pharamaceutical and medical device development. There is also a drive towards quantification of imaging findings by using quantitative imaging biomarkers, which can improve sensitivity, specificity, accuracy and reproducibility of imaged characteristics used for diagnostic and therapeutic decisions. An important component of the discovery, characterization, validation and application of quantitative imaging biomarkers is the extraction of information and meaning from images through image processing and subsequent analysis. However, many advanced image processing and analysis methods are not applied directly to questions of clinical interest, i.e., for diagnostic and therapeutic decision making, which is a consideration that should be closely linked to the development of such algorithms. This article is meant to address these concerns. First, quantitative imaging biomarkers are introduced by providing definitions and concepts. Then, potential applications of advanced image processing and analysis to areas of quantitative imaging biomarker research are described; specifically, research into osteoarthritis (OA), Alzheimer's disease (AD) and cancer is presented. Then, challenges in quantitative imaging biomarker research are discussed. Finally, a conceptual framework for integrating clinical and preclinical considerations into the development of quantitative imaging biomarkers and their computer-assisted methods of extraction is presented.

  15. Predicting Hybrid Performances for Quality Traits through Genomic-Assisted Approaches in Central European Wheat

    PubMed Central

    Liu, Guozheng; Zhao, Yusheng; Gowda, Manje; Longin, C. Friedrich H.; Reif, Jochen C.; Mette, Michael F.

    2016-01-01

    Bread-making quality traits are central targets for wheat breeding. The objectives of our study were to (1) examine the presence of major effect QTLs for quality traits in a Central European elite wheat population, (2) explore the optimal strategy for predicting the hybrid performance for wheat quality traits, and (3) investigate the effects of marker density and the composition and size of the training population on the accuracy of prediction of hybrid performance. In total 135 inbred lines of Central European bread wheat (Triticum aestivum L.) and 1,604 hybrids derived from them were evaluated for seven quality traits in up to six environments. The 135 parental lines were genotyped using a 90k single-nucleotide polymorphism array. Genome-wide association mapping initially suggested the presence of several quantitative trait loci (QTLs), but cross-validation rather indicated the absence of major effect QTLs for all quality traits except for 1000-kernel weight. Genomic selection substantially outperformed marker-assisted selection in predicting hybrid performance. A resampling study revealed that increasing the effective population size in the estimation set of hybrids is important for boosting the accuracy of prediction for an unrelated test population. PMID:27383841
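
    As a hedged illustration of genomic prediction in general (not the study's model), the sketch below fits a ridge regression to simulated SNP genotypes and reports prediction accuracy on held-out lines.

    ```python
    # Illustrative genomic prediction with ridge regression (RR-BLUP-like):
    # simulated SNP genotypes and phenotypes, accuracy measured by correlation.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_lines, n_snps = 300, 2000
    X = rng.integers(0, 3, size=(n_lines, n_snps)).astype(float)   # 0/1/2 genotypes
    true_effects = rng.normal(0, 0.05, n_snps)
    y = X @ true_effects + rng.normal(0, 1.0, n_lines)             # phenotype

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = Ridge(alpha=100.0).fit(X_tr, y_tr)
    acc = np.corrcoef(model.predict(X_te), y_te)[0, 1]
    print(f"prediction accuracy (r): {acc:.2f}")
    ```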

  16. Pitfalls and Precautions When Using Predicted Failure Data for Quantitative Analysis of Safety Risk for Human Rated Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hatfield, Glen S.; Hark, Frank; Stott, James

    2016-01-01

    Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account risks attributable to manufacturing, assembly, and process controls. These sources often dominate component level reliability or risk of failure probability. While consequences of failure is often understood in assessing risk, using predicted values in a risk model to estimate the probability of occurrence will likely underestimate the risk. Managers and decision makers often use the probability of occurrence in determining whether to accept the risk or require a design modification. Due to the absence of system level test and operational data inherent in aerospace applications, the actual risk threshold for acceptance may not be appropriately characterized for decision making purposes. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.
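
    As a hedged numerical aside (the rates and exposure time are invented, not handbook values), the sketch below shows how an optimistic predicted failure rate understates the probability of failure under a constant-failure-rate model, P = 1 - exp(-lambda*t).

    ```python
    # Illustrative only: how an optimistic predicted failure rate understates risk.
    # Failure rates and exposure time are hypothetical, not from any handbook.
    import math

    def prob_failure(lam_per_hour, hours):
        """P(at least one failure) for a constant failure rate over an exposure time."""
        return 1.0 - math.exp(-lam_per_hour * hours)

    predicted_rate = 1.0e-6   # per hour, component-level handbook prediction
    realized_rate = 5.0e-6    # per hour, including process/assembly contributions
    mission_hours = 500.0

    print(f"predicted P(fail): {prob_failure(predicted_rate, mission_hours):.2e}")
    print(f"realized  P(fail): {prob_failure(realized_rate, mission_hours):.2e}")
    ```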

  17. A Market-Basket Approach to Predict the Acute Aquatic Toxicity of Munitions and Energetic Materials.

    PubMed

    Burgoon, Lyle D

    2016-06-01

    An ongoing challenge in chemical production, including the production of insensitive munitions and energetics, is the ability to make predictions about potential environmental hazards early in the process. To address this challenge, a quantitative structure activity relationship model was developed to predict acute fathead minnow toxicity of insensitive munitions and energetic materials. Computational predictive toxicology models like this one may be used to identify and prioritize environmentally safer materials early in their development. The developed model is based on the Apriori market-basket/frequent itemset mining approach to identify probabilistic prediction rules using chemical atom-pairs and the lethality data for 57 compounds from a fathead minnow acute toxicity assay. Lethality data were discretized into four categories based on the Globally Harmonized System of Classification and Labelling of Chemicals. Apriori identified toxicophores for categories two and three. The model classified 32 of the 57 compounds correctly, with a fivefold cross-validation classification rate of 74 %. A structure-based surrogate approach classified the remaining 25 chemicals correctly at 48 %. This result is unsurprising as these 25 chemicals were fairly unique within the larger set.
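
    As a hedged sketch of the frequent-itemset step (the compounds, atom pairs, and support threshold are invented for illustration), support counting over itemsets of chemical "items" looks like this:

    ```python
    # Minimal frequent-itemset sketch: count the support of atom-pair combinations
    # across a toy set of compounds. Items and compounds are hypothetical.
    from itertools import combinations
    from collections import Counter

    compounds = [
        {"C-N", "N-O", "C-C"},      # each set = atom pairs present in one compound
        {"C-N", "N-O"},
        {"C-C", "C-O"},
        {"C-N", "N-O", "C-O"},
    ]
    min_support = 0.5  # fraction of compounds an itemset must appear in

    support = Counter()
    for items in compounds:
        for k in (1, 2):
            for itemset in combinations(sorted(items), k):
                support[itemset] += 1

    frequent = {s: c / len(compounds) for s, c in support.items()
                if c / len(compounds) >= min_support}
    for itemset, sup in sorted(frequent.items(), key=lambda kv: -kv[1]):
        print(itemset, f"support={sup:.2f}")
    ```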

  18. Bayes' theorem and quantitative risk assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaplan, S.

    1994-12-31

    This paper argues that for a quantitative risk analysis (QRA) to be useful for public and private decision making, and for rallying the support necessary to implement those decisions, it is necessary that the QRA results be "trustable." Trustable means that the results are based solidly and logically on all the relevant evidence available. This, in turn, means that the quantitative results must be derived from the evidence using Bayes' theorem. Thus, it argues that analysts should strive to make their QRAs more clearly and explicitly Bayesian, and in this way make them more "evidence dependent" than "personality dependent."
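
    For reference, the update being advocated is Bayes' theorem applied to the evidence E bearing on a risk parameter such as a failure frequency; the generic statement below is standard, not a formula quoted from the paper.

    ```latex
    % Bayes' theorem as used in quantitative risk assessment: the posterior state
    % of knowledge about a parameter (e.g., a failure frequency \lambda) given
    % evidence E is the likelihood of E times the prior, normalized.
    p(\lambda \mid E) \;=\; \frac{p(E \mid \lambda)\, p(\lambda)}{\int p(E \mid \lambda')\, p(\lambda')\, d\lambda'}
    ```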

  19. Serum Albumin and Body Weight as Biomarkers for the Antemortem Identification of Bone and Gastrointestinal Disease in the Common Marmoset

    PubMed Central

    Baxter, Victoria K.; Shaw, Gillian C.; Sotuyo, Nathaniel P.; Carlson, Cathy S.; Olson, Erik J.; Zink, M. Christine; Mankowski, Joseph L.; Adams, Robert J.

    2013-01-01

    The increasing use of the common marmoset (Callithrix jacchus) in research makes it important to diagnose spontaneous disease that may confound experimental studies. Bone disease and gastrointestinal disease are two major causes of morbidity and mortality in captive marmosets, but currently no effective antemortem tests are available to identify affected animals prior to the terminal stage of disease. In this study we propose that bone disease and gastrointestinal disease are associated disease entities in marmosets and aim to establish the efficacy of several economical antemortem tests in identifying and predicting disease. Tissues from marmosets were examined to define affected animals and unaffected controls. Complete blood count, serum chemistry values, body weight, quantitative radiographs, and tissue-specific biochemical markers were evaluated as candidate biomarkers for disease. Bone and gastrointestinal disease were associated, with marmosets being over seven times more likely to have either concurrent bone and gastrointestinal disease or neither disease as opposed to lesions in only one organ system. When used in tandem, serum albumin <3.5 g/dL and body weight <325 g identified 100% of the marmosets affected with concurrent bone and gastrointestinal disease. Progressive body weight loss of 0.05% of peak body weight per day predicted which marmosets would develop disease prior to the terminal stage. Bone tissue-specific tests, such as quantitative analysis of radiographs and serum parathyroid hormone levels, were effective for distinguishing between marmosets with bone disease and those without. These results provide an avenue for making informed decisions regarding the removal of affected marmosets from studies in a timely manner, preserving the integrity of research results. PMID:24324827
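
    The tandem screening rule is simple enough to restate directly; the sketch below encodes the published thresholds as functions, with invented function and argument names.

    ```python
    # Restates the antemortem screening thresholds reported in the abstract:
    # serum albumin < 3.5 g/dL AND body weight < 325 g flags concurrent disease;
    # losing >= 0.05% of peak body weight per day predicts impending disease.
    def flag_concurrent_disease(albumin_g_dl: float, weight_g: float) -> bool:
        return albumin_g_dl < 3.5 and weight_g < 325.0

    def flag_progressive_loss(peak_weight_g: float, daily_loss_g: float) -> bool:
        return daily_loss_g / peak_weight_g >= 0.0005   # 0.05% of peak weight/day

    print(flag_concurrent_disease(albumin_g_dl=3.2, weight_g=310.0))     # True
    print(flag_progressive_loss(peak_weight_g=400.0, daily_loss_g=0.3))  # True (0.075%/day)
    ```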

  20. Environmental determinants of tropical forest and savanna distribution: A quantitative model evaluation and its implication

    NASA Astrophysics Data System (ADS)

    Zeng, Zhenzhong; Chen, Anping; Piao, Shilong; Rabin, Sam; Shen, Zehao

    2014-07-01

    The distributions of tropical ecosystems are rapidly being altered by climate change and anthropogenic activities. One possible trend—the loss of tropical forests and replacement by savannas—could result in significant shifts in ecosystem services and biodiversity loss. However, the influence and the relative importance of environmental factors in regulating the distribution of tropical forest and savanna biomes are still poorly understood, which makes it difficult to predict future tropical forest and savanna distributions in the context of climate change. Here we use boosted regression trees to quantitatively evaluate the importance of environmental predictors—mainly climatic, edaphic, and fire factors—for the tropical forest-savanna distribution at a mesoscale across the tropics (between 15°N and 35°S). Our results demonstrate that climate alone can explain most of the distribution of tropical forest and savanna at the scale considered; dry season average precipitation is the single most important determinant across tropical Asia-Australia, Africa, and South America. Given the strong tendency of increased seasonality and decreased dry season precipitation predicted by global climate models, we estimate that about 28% of what is now tropical forest would likely be lost to savanna by the late 21st century under the future scenario considered. This study highlights the importance of climate seasonality and interannual variability in predicting the distribution of tropical forest and savanna, supporting the climate as the primary driver in the savanna biogeography.
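
    As a hedged illustration of the boosted-regression-tree approach (simulated predictors, not the study's climate, soil, and fire data), a scikit-learn version looks like this:

    ```python
    # Illustrative boosted-tree classification of forest vs. savanna from
    # environmental predictors, with relative variable importances.
    # Data are simulated placeholders, not the study's observations.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(1)
    n = 1000
    dry_season_precip = rng.uniform(0, 150, n)     # mm/month
    soil_sand_frac = rng.uniform(0, 1, n)
    fire_frequency = rng.uniform(0, 1, n)          # fires per year
    X = np.column_stack([dry_season_precip, soil_sand_frac, fire_frequency])
    is_forest = (dry_season_precip + rng.normal(0, 20, n) > 60).astype(int)

    model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
    model.fit(X, is_forest)
    for name, imp in zip(["dry-season precip", "soil sand fraction", "fire frequency"],
                         model.feature_importances_):
        print(f"{name:20s} importance = {imp:.2f}")
    ```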

  1. Distinction between Externally vs. Internally Guided Decision-Making: Operational Differences, Meta-Analytical Comparisons and Their Theoretical Implications

    PubMed Central

    Nakao, Takashi; Ohira, Hideki; Northoff, Georg

    2012-01-01

    Most experimental studies of decision-making have specifically examined situations in which a single less-predictable correct answer exists (externally guided decision-making under uncertainty). Along with such externally guided decision-making, there are instances of decision-making in which no correct answer based on external circumstances is available for the subject (internally guided decision-making). Such decisions are usually made in the context of moral decision-making as well as in preference judgment, where the answer depends on the subject’s own, i.e., internal, preferences rather than on external, i.e., circumstantial, criteria. The neuronal and psychological mechanisms that allow guidance of decisions based on more internally oriented criteria in the absence of external ones remain unclear. This study was undertaken to compare decision-making of these two kinds empirically and theoretically. First, we reviewed studies of decision-making to clarify experimental–operational differences between externally guided and internally guided decision-making. Second, using multi-level kernel density analysis, a whole-brain-based quantitative meta-analysis of neuroimaging studies was performed. Our meta-analysis revealed that the neural network used predominantly for internally guided decision-making differs from that for externally guided decision-making under uncertainty. This result suggests that studying only externally guided decision-making under uncertainty is insufficient to account for decision-making processes in the brain. Finally, based on the review and results of the meta-analysis, we discuss the differences and relations between decision-making of these two types in terms of their operational, neuronal, and theoretical characteristics. PMID:22403525

  2. WHAT MAKES A GOOD PEDIATRIC TRANSPLANT LUNG: INSIGHTS FROM IN VIVO LUNG MORPHOMETRY WITH HYPERPOLARIZED 3HE MRI

    PubMed Central

    Fishman, Emily F.; Quirk, James D.; Sweet, Stuart C.; Woods, Jason C.; Gierada, David S.; Conradi, Mark S.; Siegel, Marilyn J.; Yablonskiy, Dmitriy A.

    2016-01-01

    Background: Obtaining information on transplanted lung microstructure is an important part of the current care for monitoring transplant recipients. However, until now this information was available only from invasive lung biopsy. The objective of this study was to evaluate the use of an innovative non-invasive technique - in vivo lung morphometry with hyperpolarized 3He MRI - to characterize lung microstructure in the pediatric lung transplant population. This technique yields quantitative measurements of acinar airway (alveolar duct and sac) parameters, such as acinar airway radii and alveolar depth. Methods: Six pediatric lung transplant recipients with cystic fibrosis underwent in vivo lung morphometry MRI, pulmonary function testing, and quantitative CT. Results: We found a strong correlation between lung lifespan and alveolar depth - patients with shallower alveoli were likely to have a negative outcome sooner than those with larger alveolar depth. Combining morphometric results with CT, we also determined mean alveolar wall thickness and found substantial increases in this parameter in some patients that negatively correlated with DLCO. Conclusion: In vivo lung morphometry uniquely provides previously unavailable information on lung microstructure that may be predictive of a negative outcome and has the potential to aid in lung selection for transplantation. PMID:28120553

  3. Multi-centre diagnostic classification of individual structural neuroimaging scans from patients with major depressive disorder.

    PubMed

    Mwangi, Benson; Ebmeier, Klaus P; Matthews, Keith; Steele, J Douglas

    2012-05-01

    Quantitative abnormalities of brain structure in patients with major depressive disorder have been reported at a group level for decades. However, these structural differences appear subtle in comparison with conventional radiologically defined abnormalities, with considerable inter-subject variability. Consequently, it has not been possible to readily identify scans from patients with major depressive disorder at an individual level. Recently, machine learning techniques such as relevance vector machines and support vector machines have been applied to predictive classification of individual scans with variable success. Here we describe a novel hybrid method, which combines machine learning with feature selection and characterization, with the latter aimed at maximizing the accuracy of machine learning prediction. The method was tested using a multi-centre dataset of T(1)-weighted 'structural' scans. A total of 62 patients with major depressive disorder and matched controls were recruited from referred secondary care clinical populations in Aberdeen and Edinburgh, UK. The generalization ability and predictive accuracy of the classifiers was tested using data left out of the training process. High prediction accuracy was achieved (~90%). While feature selection was important for maximizing high predictive accuracy with machine learning, feature characterization contributed only a modest improvement to relevance vector machine-based prediction (~5%). Notably, while the only information provided for training the classifiers was T(1)-weighted scans plus a categorical label (major depressive disorder versus controls), both relevance vector machine and support vector machine 'weighting factors' (used for making predictions) correlated strongly with subjective ratings of illness severity. These results indicate that machine learning techniques have the potential to inform clinical practice and research, as they can make accurate predictions about brain scan data from individual subjects. Furthermore, machine learning weighting factors may reflect an objective biomarker of major depressive disorder illness severity, based on abnormalities of brain structure.
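
    As a much-simplified stand-in for the hybrid feature-selection-plus-classifier method (simulated features, not imaging data; the group difference is injected artificially), a cross-validated scikit-learn pipeline looks like this:

    ```python
    # Simplified stand-in for feature selection + SVM classification of
    # patient vs. control scans; features are simulated, not real imaging data.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    n_subjects, n_features = 124, 5000
    X = rng.normal(size=(n_subjects, n_features))
    y = np.repeat([0, 1], n_subjects // 2)          # 0 = control, 1 = patient
    X[y == 1, :50] += 0.6                           # weak group difference in 50 features

    clf = make_pipeline(StandardScaler(),
                        SelectKBest(f_classif, k=100),
                        SVC(kernel="linear"))
    scores = cross_val_score(clf, X, y, cv=10)      # selection happens inside each fold
    print(f"cross-validated accuracy: {scores.mean():.2f}")
    ```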

  4. Evaluation of a statistics-based Ames mutagenicity QSAR model and interpretation of the results obtained.

    PubMed

    Barber, Chris; Cayley, Alex; Hanser, Thierry; Harding, Alex; Heghes, Crina; Vessey, Jonathan D; Werner, Stephane; Weiner, Sandy K; Wichard, Joerg; Giddings, Amanda; Glowienke, Susanne; Parenty, Alexis; Brigo, Alessandro; Spirkl, Hans-Peter; Amberg, Alexander; Kemper, Ray; Greene, Nigel

    2016-04-01

    The relative wealth of bacterial mutagenicity data available in the public literature means that in silico quantitative/qualitative structure activity relationship (QSAR) systems can readily be built for this endpoint. A good means of evaluating the performance of such systems is to use private unpublished data sets, which generally represent a more distinct chemical space than publicly available test sets and, as a result, provide a greater challenge to the model. However, raw performance metrics should not be the only factor considered when judging this type of software since expert interpretation of the results obtained may allow for further improvements in predictivity. Enough information should be provided by a QSAR to allow the user to make general, scientifically-based arguments in order to assess and overrule predictions when necessary. With all this in mind, we sought to validate the performance of the statistics-based in vitro bacterial mutagenicity prediction system Sarah Nexus (version 1.1) against private test data sets supplied by nine different pharmaceutical companies. The results of these evaluations were then analysed in order to identify findings presented by the model which would be useful for the user to take into consideration when interpreting the results and making their final decision about the mutagenic potential of a given compound. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Flory-Stockmayer analysis on reprocessable polymer networks

    NASA Astrophysics Data System (ADS)

    Li, Lingqiao; Chen, Xi; Jin, Kailong; Torkelson, John

    Reprocessable polymer networks can undergo structural rearrangement through dynamic chemistries under proper conditions, making them promising candidates for recyclable crosslinked materials, e.g. tires. Research in this field has focused on various chemistries; however, it has lacked an essential physical theory explaining the relationship between the abundance of dynamic linkages and reprocessability. Based on the classical Flory-Stockmayer analysis of network gelation, we developed a similar analysis of reprocessable polymer networks to quantitatively predict the critical condition for reprocessability. Our theory indicates that it is unnecessary for all bonds to be dynamic to make the resulting network reprocessable: as long as there is no percolated permanent network in the system, the material can fully rearrange. To experimentally validate our theory, we used a thiol-epoxy network model system with various dynamic linkage compositions. The stress relaxation behavior of the resulting materials supports our theoretical prediction: only 50% of the linkages between crosslinks need to be dynamic for a tri-arm network to be reprocessable. Therefore, this analysis provides the first fundamental theoretical platform for designing and evaluating reprocessable polymer networks. We thank the McCormick Research Catalyst Award Fund and ISEN cluster fellowship (L. L.) for funding support.
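
    For context, the 50% figure is consistent with the classical Flory-Stockmayer gel-point condition under the simplest assumptions (ideal random crosslinking, no intramolecular loops); the statement below is a generic restatement, not the authors' derivation.

    ```latex
    % Flory--Stockmayer gel point for an f-functional network: a percolated network
    % forms when the bond conversion exceeds p_c = 1/(f-1). If only a fraction x of
    % the linkages is permanent (non-dynamic), the permanent sub-network does not
    % percolate, and the material remains reprocessable, as long as x < 1/(f-1).
    p_c = \frac{1}{f-1}, \qquad
    x_{\text{permanent}} < \frac{1}{f-1}
    \;\xrightarrow{\;f=3\;}\;
    x_{\text{permanent}} < \tfrac{1}{2}
    ```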

  6. Using Data Independent Acquisition (DIA) to Model High-responding Peptides for Targeted Proteomics Experiments*

    PubMed Central

    Searle, Brian C.; Egertson, Jarrett D.; Bollinger, James G.; Stergachis, Andrew B.; MacCoss, Michael J.

    2015-01-01

    Targeted mass spectrometry is an essential tool for detecting quantitative changes in low abundant proteins throughout the proteome. Although selected reaction monitoring (SRM) is the preferred method for quantifying peptides in complex samples, the process of designing SRM assays is laborious. Peptides have widely varying signal responses dictated by sequence-specific physiochemical properties; one major challenge is in selecting representative peptides to target as a proxy for protein abundance. Here we present PREGO, a software tool that predicts high-responding peptides for SRM experiments. PREGO predicts peptide responses with an artificial neural network trained using 11 minimally redundant, maximally relevant properties. Crucial to its success, PREGO is trained using fragment ion intensities of equimolar synthetic peptides extracted from data independent acquisition experiments. Because of similarities in instrumentation and the nature of data collection, relative peptide responses from data independent acquisition experiments are a suitable substitute for SRM experiments because they both make quantitative measurements from integrated fragment ion chromatograms. Using an SRM experiment containing 12,973 peptides from 724 synthetic proteins, PREGO exhibits a 40–85% improvement over previously published approaches at selecting high-responding peptides. These results also represent a dramatic improvement over the rules-based peptide selection approaches commonly used in the literature. PMID:26100116

  7. A comparison of major petroleum life cycle models

    EPA Pesticide Factsheets

    Many organizations have attempted to develop an accurate well-to-pump life cycle model of petroleum products in order to inform decision makers of the consequences of its use. Our paper studies five of these models, demonstrating the differences in their predictions and attempting to evaluate their data quality. Carbon dioxide well-to-pump emissions for gasoline showed a variation of 35 %, and other pollutants such as ammonia and particulate matter varied up to 100 %. Differences in allocation do not appear to explain differences in predictions. Effects of these deviations on well-to-wheels passenger vehicle and truck transportation life cycle models may be minimal for effects such as global warming potential (6 % spread), but for respiratory effects of criteria pollutants (41 % spread) and other impact categories, they can be significant. A data quality assessment of the models’ documentation revealed real differences between models in temporal and geographic representativeness, completeness, as well as transparency. Stakeholders may need to consider carefully the tradeoffs inherent when selecting a model to conduct life cycle assessments for systems that make heavy use of petroleum products. This is a qualitative and quantitative comparison of petroleum LCA models intended for an expert audience interested in better understanding the data quality of existing petroleum life cycle models and the quantitative differences between these models.

  8. Meeting the Sustainable Development Goals leads to lower world population growth

    PubMed Central

    Abel, Guy J.; Barakat, Bilal; KC, Samir; Lutz, Wolfgang

    2016-01-01

    Here we show the extent to which the expected world population growth could be lowered by successfully implementing the recently agreed-upon Sustainable Development Goals (SDGs). The SDGs include specific quantitative targets on mortality, reproductive health, and education for all girls by 2030, measures that will directly and indirectly affect future demographic trends. Based on a multidimensional model of population dynamics that stratifies national populations by age, sex, and level of education with educational fertility and mortality differentials, we translate these goals into SDG population scenarios, resulting in population sizes between 8.2 and 8.7 billion in 2100. Because these results lie outside the 95% prediction range given by the 2015 United Nations probabilistic population projections, we complement the study with sensitivity analyses of these projections that suggest that those prediction intervals are too narrow because of uncertainty in baseline data, conservative assumptions on correlations, and the possibility of new policies influencing these trends. Although the analysis presented here rests on several assumptions about the implementation of the SDGs and the persistence of educational, fertility, and mortality differentials, it quantitatively illustrates the view that demography is not destiny and that policies can make a decisive difference. In particular, advances in female education and reproductive health can contribute greatly to reducing world population growth. PMID:27911797

  9. Meeting the Sustainable Development Goals leads to lower world population growth.

    PubMed

    Abel, Guy J; Barakat, Bilal; Kc, Samir; Lutz, Wolfgang

    2016-12-13

    Here we show the extent to which the expected world population growth could be lowered by successfully implementing the recently agreed-upon Sustainable Development Goals (SDGs). The SDGs include specific quantitative targets on mortality, reproductive health, and education for all girls by 2030, measures that will directly and indirectly affect future demographic trends. Based on a multidimensional model of population dynamics that stratifies national populations by age, sex, and level of education with educational fertility and mortality differentials, we translate these goals into SDG population scenarios, resulting in population sizes between 8.2 and 8.7 billion in 2100. Because these results lie outside the 95% prediction range given by the 2015 United Nations probabilistic population projections, we complement the study with sensitivity analyses of these projections that suggest that those prediction intervals are too narrow because of uncertainty in baseline data, conservative assumptions on correlations, and the possibility of new policies influencing these trends. Although the analysis presented here rests on several assumptions about the implementation of the SDGs and the persistence of educational, fertility, and mortality differentials, it quantitatively illustrates the view that demography is not destiny and that policies can make a decisive difference. In particular, advances in female education and reproductive health can contribute greatly to reducing world population growth.

  10. Development and Assessment of Modules to Integrate Quantitative Skills in Introductory Biology Courses.

    PubMed

    Hoffman, Kathleen; Leupen, Sarah; Dowell, Kathy; Kephart, Kerrie; Leips, Jeff

    2016-01-01

    Redesigning undergraduate biology courses to integrate quantitative reasoning and skill development is critical to prepare students for careers in modern medicine and scientific research. In this paper, we report on the development, implementation, and assessment of stand-alone modules that integrate quantitative reasoning into introductory biology courses. Modules are designed to improve skills in quantitative numeracy, interpreting data sets using visual tools, and making inferences about biological phenomena using mathematical/statistical models. We also examine demographic/background data that predict student improvement in these skills through exposure to these modules. We carried out pre/postassessment tests across four semesters and used student interviews in one semester to examine how students at different levels approached quantitative problems. We found that students improved in all skills in most semesters, although there was variation in the degree of improvement among skills from semester to semester. One demographic variable, transfer status, stood out as a major predictor of the degree to which students improved (transfer students achieved much lower gains every semester, despite the fact that pretest scores in each focus area were similar between transfer and nontransfer students). We propose that increased exposure to quantitative skill development in biology courses is effective at building competency in quantitative reasoning. © 2016 K. Hoffman, S. Leupen, et al. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  11. Essential Set of Molecular Descriptors for ADME Prediction in Drug and Environmental Chemical Space

    EPA Science Inventory

    Historically, the disciplines of pharmacology and toxicology have embraced quantitative structure-activity relationships (QSAR) and quantitative structure-property relationships (QSPR) to predict ADME properties or biological activities of untested chemicals. The question arises ...

  12. Procalcitonin as a biomarker for severe Plasmodium falciparum disease: a critical appraisal of a semi-quantitative point-of-care test in a cohort of travellers with imported malaria.

    PubMed

    Hesselink, Dennis A; Burgerhart, Jan-Steven; Bosmans-Timmerarends, Hanna; Petit, Pieter; van Genderen, Perry J J

    2009-09-01

    Imported malaria occurs as a relatively rare event in developed countries. As a consequence, most clinicians have little experience in making clinical assessments of disease severity and decisions regarding the need for parenteral therapy or high-level monitoring. In this study, the diagnostic accuracy of procalcitonin (PCT) for severe Plasmodium falciparum disease was assessed in a cohort of 100 consecutive travellers with various species of imported malaria. In all patients, PCT was measured on admission with a semi-quantitative 'point-of-care' test. Patients with severe P. falciparum malaria had significantly higher median PCT levels on admission as compared with patients with uncomplicated P. falciparum disease. In addition, PCT levels in patients with non-falciparum malaria were also higher compared with patients with non-severe falciparum malaria but lower compared with severe P. falciparum malaria. At a cut-off point of 10 ng/mL, PCT had a sensitivity of 0.67 and a specificity of 0.94 for severe falciparum disease. However, at lower cut-off points the specificity and positive predictive value were rather poor although the sensitivity and negative predictive value remained high. Potential drawbacks in the interpretation of elevated PCT levels on admission may be caused by infections with non-falciparum species and by concomitant bacterial infections. Semi-quantitative determination of PCT on admission is of limited use in the initial clinical assessment of disease severity in travellers with imported malaria, especially in settings with limited experience with the treatment of malaria.
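
    For reference, the accuracy metrics involved follow from a standard 2x2 table; the counts below are hypothetical, chosen only so that the high cut-off roughly reproduces the reported sensitivity and specificity and the low cut-off illustrates the poorer specificity and positive predictive value.

    ```python
    # Standard diagnostic-accuracy metrics from a 2x2 table. The counts below are
    # hypothetical and only illustrate the cut-off trade-off described above.
    def diagnostics(tp, fp, fn, tn):
        return {
            "sensitivity": round(tp / (tp + fn), 2),
            "specificity": round(tn / (tn + fp), 2),
            "PPV": round(tp / (tp + fp), 2),
            "NPV": round(tn / (tn + fn), 2),
        }

    # hypothetical counts at a high cut-off (fewer positives called)
    print("high cut-off:", diagnostics(tp=8, fp=5, fn=4, tn=83))
    # hypothetical counts at a low cut-off (more positives called)
    print("low cut-off: ", diagnostics(tp=11, fp=35, fn=1, tn=53))
    ```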

  13. Prediction of trabecular bone qualitative properties using scanning quantitative ultrasound

    PubMed Central

    Qin, Yi-Xian; Lin, Wei; Mittra, Erik; Xia, Yi; Cheng, Jiqi; Judex, Stefan; Rubin, Clint; Müller, Ralph

    2012-01-01

    Microgravity-induced bone loss represents a critical health problem in astronauts, particularly in the weight-supporting skeleton, where it leads to osteopenia and increased fracture risk. The lack of a suitable evaluation modality makes it difficult to monitor skeletal status during long-term space missions and increases the potential risk of complications. Such disuse osteopenia and osteoporosis compromise trabecular bone density as well as architectural and mechanical properties. While X-ray based imaging would not be practical in space, quantitative ultrasound may provide advantages for characterizing bone density and strength through wave propagation in complex trabecular structure. This study used a scanning confocal acoustic diagnostic and navigation system (SCAN) to evaluate trabecular bone quality in 60 cubic trabecular samples harvested from adult sheep. Ultrasound-image-based SCAN measurements of structural and strength properties were validated by μCT and compressive mechanical testing. The results indicated a moderately strong negative correlation between broadband ultrasonic attenuation (BUA) and μCT-determined bone volume fraction (BV/TV, R2=0.53). Strong correlations were observed between ultrasound velocity (UV) and bone's mechanical strength and structural parameters, i.e., bulk Young's modulus (R2=0.67) and BV/TV (R2=0.85). The predictions for bone density and mechanical strength were significantly improved by using a linear combination of both BUA and UV, yielding R2=0.92 for BV/TV and R2=0.71 for bulk Young's modulus. These results imply that quantitative ultrasound can characterize trabecular structural and mechanical properties through measurements of particular ultrasound parameters, and can potentially provide an excellent estimation of bone's structural integrity. PMID:23976803
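
    As a hedged illustration (simulated values, not the sheep data), the "linear combination of BUA and UV" is an ordinary two-predictor linear regression:

    ```python
    # Two-predictor linear regression combining BUA and UV to predict BV/TV,
    # as a stand-in for the combined model described above (simulated data).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(3)
    n = 60
    uv = rng.normal(1600, 80, n)     # ultrasound velocity, m/s
    bua = rng.normal(20, 5, n)       # broadband ultrasonic attenuation, dB/MHz
    bvtv = 0.0008 * uv - 0.004 * bua - 1.0 + rng.normal(0, 0.02, n)

    X = np.column_stack([bua, uv])
    model = LinearRegression().fit(X, bvtv)
    print(f"R^2 of combined BUA + UV model: {model.score(X, bvtv):.2f}")
    ```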

  14. Prediction of trabecular bone qualitative properties using scanning quantitative ultrasound

    NASA Astrophysics Data System (ADS)

    Qin, Yi-Xian; Lin, Wei; Mittra, Erik; Xia, Yi; Cheng, Jiqi; Judex, Stefan; Rubin, Clint; Müller, Ralph

    2013-11-01

    Microgravity-induced bone loss represents a critical health problem in astronauts, particularly in the weight-supporting skeleton, where it leads to osteopenia and increased fracture risk. The lack of a suitable evaluation modality makes it difficult to monitor skeletal status during long-term space missions and increases the potential risk of complications. Such disuse osteopenia and osteoporosis compromise trabecular bone density as well as architectural and mechanical properties. While X-ray based imaging would not be practical in space, quantitative ultrasound may provide advantages for characterizing bone density and strength through wave propagation in complex trabecular structure. This study used a scanning confocal acoustic diagnostic and navigation system (SCAN) to evaluate trabecular bone quality in 60 cubic trabecular samples harvested from adult sheep. Ultrasound-image-based SCAN measurements of structural and strength properties were validated by μCT and compressive mechanical testing. The results indicated a moderately strong negative correlation between broadband ultrasonic attenuation (BUA) and μCT-determined bone volume fraction (BV/TV, R2=0.53). Strong correlations were observed between ultrasound velocity (UV) and bone's mechanical strength and structural parameters, i.e., bulk Young's modulus (R2=0.67) and BV/TV (R2=0.85). The predictions for bone density and mechanical strength were significantly improved by using a linear combination of both BUA and UV, yielding R2=0.92 for BV/TV and R2=0.71 for bulk Young's modulus. These results imply that quantitative ultrasound can characterize trabecular structural and mechanical properties through measurements of particular ultrasound parameters, and can potentially provide an excellent estimation of bone's structural integrity.

  15. Use of the MagNA Pure LC Automated Nucleic Acid Extraction System followed by Real-Time Reverse Transcription-PCR for Ultrasensitive Quantitation of Hepatitis C Virus RNA

    PubMed Central

    Cook, Linda; Ng, Ka-Wing; Bagabag, Arthur; Corey, Lawrence; Jerome, Keith R.

    2004-01-01

    Hepatitis C virus (HCV) infection is an increasing health problem worldwide. Quantitative assays for HCV viral load are valuable in predicting response to therapy and for following treatment efficacy. Unfortunately, most quantitative tests for HCV RNA are limited by poor sensitivity. We have developed a convenient, highly sensitive real-time reverse transcription-PCR assay for HCV RNA. The assay amplifies a portion of the 5′ untranslated region of HCV, which is then quantitated using the TaqMan 7700 detection system. Extraction of viral RNA for our assay is fully automated with the MagNA Pure LC extraction system (Roche). Our assay has a 100% detection rate for samples containing 50 IU of HCV RNA/ml and is linear up to viral loads of at least 109 IU/ml. The assay detects genotypes 1a, 2a, and 3a with equal efficiency. Quantitative results by our assay correlate well with HCV viral load as determined by the Bayer VERSANT HCV RNA 3.0 bDNA assay. In clinical use, our assay is highly reproducible, with high and low control specimens showing a coefficient of variation for the logarithmic result of 2.8 and 7.0%, respectively. The combination of reproducibility, extreme sensitivity, and ease of performance makes this assay an attractive option for routine HCV viral load testing. PMID:15365000
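
    As a generic aside on how quantitation of this kind works (the calibration values are illustrative, not this assay's data), viral load is read off a standard curve of threshold cycle against log10 input copies, with amplification efficiency estimated from the slope.

    ```python
    # Generic real-time PCR standard curve: fit Ct vs. log10(copies/mL), estimate
    # amplification efficiency from the slope, and quantify an unknown sample.
    # The calibration values below are illustrative, not this assay's data.
    import numpy as np

    log10_copies = np.array([2, 3, 4, 5, 6, 7], dtype=float)
    ct = np.array([33.1, 29.7, 26.3, 22.9, 19.5, 16.1])   # hypothetical Ct values

    slope, intercept = np.polyfit(log10_copies, ct, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0               # ~1.0 means 100% efficient

    ct_unknown = 24.6
    log10_unknown = (ct_unknown - intercept) / slope
    print(f"slope={slope:.2f}, efficiency={efficiency:.2f}, "
          f"unknown sample: 10^{log10_unknown:.2f} copies/mL")
    ```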

  16. Uniting Cheminformatics and Chemical Theory To Predict the Intrinsic Aqueous Solubility of Crystalline Druglike Molecules

    PubMed Central

    2014-01-01

    We present four models of solution free-energy prediction for druglike molecules utilizing cheminformatics descriptors and theoretically calculated thermodynamic values. We make predictions of solution free energy using physics-based theory alone and using machine learning/quantitative structure–property relationship (QSPR) models. We also develop machine learning models where the theoretical energies and cheminformatics descriptors are used as combined input. These models are used to predict solvation free energy. While direct theoretical calculation does not give accurate results in this approach, machine learning is able to give predictions with a root mean squared error (RMSE) of ∼1.1 log S units in a 10-fold cross-validation for our Drug-Like-Solubility-100 (DLS-100) dataset of 100 druglike molecules. We find that a model built using energy terms from our theoretical methodology as descriptors is marginally less predictive than one built on Chemistry Development Kit (CDK) descriptors. Combining both sets of descriptors allows a further but very modest improvement in the predictions. However, in some cases, this is a statistically significant enhancement. These results suggest that there is little complementarity between the chemical information provided by these two sets of descriptors, despite their different sources and methods of calculation. Our machine learning models are also able to predict the well-known Solubility Challenge dataset with an RMSE value of 0.9–1.0 log S units. PMID:24564264

  17. Genome-Assisted Prediction of Quantitative Traits Using the R Package sommer.

    PubMed

    Covarrubias-Pazaran, Giovanny

    2016-01-01

    Most traits of agronomic importance are quantitative in nature, and genetic markers have been used for decades to dissect such traits. Recently, genomic selection has earned attention as next generation sequencing technologies became feasible for major and minor crops. Mixed models have become a key tool for fitting genomic selection models, but most current genomic selection software can only include a single variance component other than the error, making hybrid prediction using additive, dominance and epistatic effects unfeasible for species displaying heterotic effects. Moreover, Likelihood-based software for fitting mixed models with multiple random effects that allows the user to specify the variance-covariance structure of random effects has not been fully exploited. A new open-source R package called sommer is presented to facilitate the use of mixed models for genomic selection and hybrid prediction purposes using more than one variance component and allowing specification of covariance structures. The use of sommer for genomic prediction is demonstrated through several examples using maize and wheat genotypic and phenotypic data. At its core, the program contains three algorithms for estimating variance components: Average information (AI), Expectation-Maximization (EM) and Efficient Mixed Model Association (EMMA). Kernels for calculating the additive, dominance and epistatic relationship matrices are included, along with other useful functions for genomic analysis. Results from sommer were comparable to other software, but the analysis was faster than Bayesian counterparts in the magnitude of hours to days. In addition, ability to deal with missing data, combined with greater flexibility and speed than other REML-based software was achieved by putting together some of the most efficient algorithms to fit models in a gentle environment such as R.
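
    For context, the model class being fitted can be written as a standard multi-random-effect mixed model; the generic form below (not code or notation from the package) shows where the user-specified covariance structures enter.

    ```latex
    % Mixed model underlying genomic prediction: fixed effects \beta, one or more
    % random effects u_k with user-specified covariance structures K_k (e.g., the
    % additive, dominance, or epistatic relationship matrices), and residual e.
    y = X\beta + \sum_k Z_k u_k + e, \qquad
    u_k \sim N\!\left(0, \sigma^2_{u_k} K_k\right), \qquad
    e \sim N\!\left(0, \sigma^2_e I\right)
    ```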

  18. The smell of environmental change: Using floral scent to explain shifts in pollinator attraction1

    PubMed Central

    Burkle, Laura A.; Runyon, Justin B.

    2017-01-01

    As diverse environmental changes continue to influence the structure and function of plant–pollinator interactions across spatial and temporal scales, we will need to enlist numerous approaches to understand these changes. Quantitative examination of floral volatile organic compounds (VOCs) is one approach that is gaining popularity, and recent work suggests that floral VOCs hold substantial promise for better understanding and predicting the effects of environmental change on plant–pollinator interactions. Until recently, few ecologists were employing chemical approaches to investigate mechanisms by which components of environmental change may disrupt these essential mutualisms. In an attempt to make these approaches more accessible, we summarize the main field, laboratory, and statistical methods involved in capturing, quantifying, and analyzing floral VOCs in the context of changing environments. We also highlight some outstanding questions that we consider to be highly relevant to making progress in this field. PMID:28690928

  19. Carbothermic Synthesis of 820 μm UN Kernels: Literature Review, Thermodynamics, Analysis, and Related Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindemer, Terrence; Voit, Stewart L; Silva, Chinthaka M

    2014-01-01

    The U.S. Department of Energy is considering a new nuclear fuel that would be less susceptible to ruptures during a loss-of-coolant accident. The fuel would consist of tristructural isotropic coated particles with large, dense uranium nitride (UN) kernels. This effort explores many factors involved in using gel-derived uranium oxide-carbon microspheres to make large UN kernels. Analysis of recent studies with sufficient experimental details is provided. Extensive thermodynamic calculations are used to predict carbon monoxide and other pressures for several different reactions that may be involved in conversion of uranium oxides and carbides to UN. Experimentally, the method for making the gel-derived microspheres is described. These were used in a microbalance with an attached mass spectrometer to determine details of carbothermic conversion in argon, nitrogen, or vacuum. A quantitative model is derived from experiments for vacuum conversion to a uranium oxide-carbide kernel.

  20. PREDICTING TOXICOLOGICAL ENDPOINTS OF CHEMICALS USING QUANTITATIVE STRUCTURE-ACTIVITY RELATIONSHIPS (QSARS)

    EPA Science Inventory

    Quantitative structure-activity relationships (QSARs) are being developed to predict the toxicological endpoints for untested chemicals similar in structure to chemicals that have known experimental toxicological data. Based on a very large number of predetermined descriptors, a...

  1. Single Cell Genomics: Approaches and Utility in Immunology

    PubMed Central

    Neu, Karlynn E; Tang, Qingming; Wilson, Patrick C; Khan, Aly A

    2017-01-01

    Single cell genomics offers powerful tools for studying lymphocytes, which make it possible to observe rare and intermediate cell states that cannot be resolved at the population-level. Advances in computer science and single cell sequencing technology have created a data-driven revolution in immunology. The challenge for immunologists is to harness computing and turn an avalanche of quantitative data into meaningful discovery of immunological principles, predictive models, and strategies for therapeutics. Here, we review the current literature on computational analysis of single cell RNA-seq data and discuss underlying assumptions, methods, and applications in immunology, and highlight important directions for future research. PMID:28094102

  2. Using CTX Image Features to Predict HiRISE-Equivalent Rock Density

    NASA Technical Reports Server (NTRS)

    Serrano, Navid; Huertas, Andres; McGuire, Patrick; Mayer, David; Ardvidson, Raymond

    2010-01-01

    Methods have been developed to quantitatively assess rock hazards at candidate landing sites with the aid of images from the HiRISE camera onboard NASA s Mars Reconnaissance Orbiter. HiRISE is able to resolve rocks as small as 1-m in diameter. Some sites of interest do not have adequate coverage with the highest resolution sensors and there is a need to infer relevant information (like site safety or underlying geomorphology). The proposed approach would make it possible to obtain rock density estimates at a level close to or equal to those obtained from high-resolution sensors where individual rocks are discernable.

  3. A symmetrical subtraction combined with interpolated values for eliminating scattering from fluorescence EEM data

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Liu, Xiaofei; Wang, Yutian

    2016-08-01

    Parallel factor analysis is a widely used method for extracting qualitative and quantitative information about an analyte of interest from fluorescence excitation-emission matrix (EEM) data containing unknown components. Scattering of large amplitude, however, distorts the results of parallel factor analysis, and although many methods for eliminating scattering have been proposed, each has its own advantages and disadvantages. Here, the combination of symmetrical subtraction and interpolated values is discussed; the combination refers both to combining results and to combining methods. Nine methods were used for comparison. The results show that the combined approach yields better concentration predictions for all components.
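    As a concrete illustration of the interpolation side of this approach, the sketch below masks the first-order Rayleigh scatter band (where emission wavelength is close to the excitation wavelength) and fills it by interpolating along each emission spectrum. This is a minimal NumPy sketch on a synthetic EEM; the band width, the synthetic data, and the function name are illustrative assumptions, and the symmetrical-subtraction half of the method is not reproduced here.

```python
import numpy as np

def remove_scatter_by_interpolation(eem, ex_wl, em_wl, band=15.0):
    """Replace the first-order Rayleigh scatter band (emission ~ excitation)
    of an excitation-emission matrix with values interpolated along each
    emission spectrum. `eem` has shape (n_ex, n_em)."""
    cleaned = eem.astype(float).copy()
    for i, ex in enumerate(ex_wl):
        # mask emission wavelengths within +/- `band` nm of the excitation line
        mask = np.abs(em_wl - ex) <= band
        if mask.any() and (~mask).any():
            cleaned[i, mask] = np.interp(em_wl[mask], em_wl[~mask], cleaned[i, ~mask])
    return cleaned

# toy example: 20 excitation x 50 emission wavelengths
ex_wl = np.linspace(250, 440, 20)
em_wl = np.linspace(260, 600, 50)
eem = np.random.rand(20, 50)
cleaned = remove_scatter_by_interpolation(eem, ex_wl, em_wl)
```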

  4. Observation of the immune response of cells and tissue through multimodal label-free microscopy

    NASA Astrophysics Data System (ADS)

    Pavillon, Nicolas; Smith, Nicholas I.

    2017-02-01

    We present applications of a label-free approach to assess the immune response based on the combination of interferometric microscopy and Raman spectroscopy, which makes it possible to simultaneously acquire morphological and molecular information of live cells. We employ this approach to derive statistical models for predicting the activation state of macrophage cells based both on morphological parameters extracted from the high-throughput full-field quantitative phase imaging, and on the molecular content information acquired through Raman spectroscopy. We also employ a system for 3D imaging based on coherence gating, enabling specific targeting of the Raman channel to structures of interest within tissue.

  5. Precise determination of the refractive index of suspended particles: light transmission as a function of refractive index mismatch

    NASA Astrophysics Data System (ADS)

    McClymer, J. P.

    2016-08-01

    Many fluids appear white because refractive index differences lead to multiple scattering. In this paper, we use safe, low-cost commercial index matching fluids to quantitatively study light transmission as a function of index mismatch, reduce multiple scattering to allow single scattering probes, and to precisely determine the index of refraction of suspended material. The transmission profile is compared with Rayleigh-Gans and Mie theory predictions. The procedure is accessible as a student laboratory project, while providing advantages over other standard methods of measuring the refractive index of an unknown nanoparticle, making it valuable to researchers.

  6. Neurobiological and memory models of risky decision making in adolescents versus young adults.

    PubMed

    Reyna, Valerie F; Estrada, Steven M; DeMarinis, Jessica A; Myers, Regina M; Stanisz, Janine M; Mills, Britain A

    2011-09-01

    Predictions of fuzzy-trace theory and neurobiological approaches are examined regarding risk taking in a classic decision-making task--the framing task--as well as in the context of real-life risk taking. We report the 1st study of framing effects in adolescents versus adults, varying risk and reward, and relate choices to individual differences, sexual behavior, and behavioral intentions. As predicted by fuzzy-trace theory, adolescents modulated risk taking according to risk and reward. Adults showed standard framing, reflecting greater emphasis on gist-based (qualitative) reasoning, but adolescents displayed reverse framing when potential gains for risk taking were high, reflecting greater emphasis on verbatim-based (quantitative) reasoning. Reverse framing signals a different way of thinking compared with standard framing (reverse framing also differs from simply choosing the risky option). Measures of verbatim- and gist-based reasoning about risk, sensation seeking, behavioral activation, and inhibition were used to extract dimensions of risk proneness: Sensation seeking increased and then decreased, whereas inhibition increased from early adolescence to young adulthood, predicted by neurobiological theories. Two additional dimensions, verbatim- and gist-based reasoning about risk, loaded separately and predicted unique variance in risk taking. Importantly, framing responses predicted real-life risk taking. Reasoning was the most consistent predictor of real-life risk taking: (a) Intentions to have sex, sexual behavior, and number of partners decreased when gist-based reasoning was triggered by retrieval cues in questions about perceived risk, whereas (b) intentions to have sex and number of partners increased when verbatim-based reasoning was triggered by different retrieval cues in questions about perceived risk. (c) 2011 APA, all rights reserved.

  7. Iterative Refinement of a Binding Pocket Model: Active Computational Steering of Lead Optimization

    PubMed Central

    2012-01-01

    Computational approaches for binding affinity prediction are most frequently demonstrated through cross-validation within a series of molecules or through performance shown on a blinded test set. Here, we show how such a system performs in an iterative, temporal lead optimization exercise. A series of gyrase inhibitors with known synthetic order formed the set of molecules that could be selected for “synthesis.” Beginning with a small number of molecules, based only on structures and activities, a model was constructed. Compound selection was done computationally, each time making five selections based on confident predictions of high activity and five selections based on a quantitative measure of three-dimensional structural novelty. Compound selection was followed by model refinement using the new data. Iterative computational candidate selection produced rapid improvements in selected compound activity, and incorporation of explicitly novel compounds uncovered much more diverse active inhibitors than strategies lacking active novelty selection. PMID:23046104
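    The selection step described above (five picks by confident high-activity prediction, five by structural novelty) can be illustrated with a generic active-learning sketch. This is not the authors' binding-pocket model: it assumes a random-forest surrogate, hypothetical descriptor matrices, and uses distance in descriptor space as a stand-in for the paper's three-dimensional structural novelty measure.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def select_batch(X_pool, X_known, y_known, n_active=5, n_novel=5):
    """One selection round: pick compounds predicted to be most active, plus
    compounds most distant (in descriptor space) from everything already made."""
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_known, y_known)
    preds = model.predict(X_pool)
    active_idx = np.argsort(preds)[::-1][:n_active]

    # novelty: distance to the nearest already-known compound
    dists = np.linalg.norm(X_pool[:, None, :] - X_known[None, :, :], axis=2)
    novelty = dists.min(axis=1)
    novelty[active_idx] = -np.inf          # avoid double-selection
    novel_idx = np.argsort(novelty)[::-1][:n_novel]
    return np.concatenate([active_idx, novel_idx])

# toy usage with random descriptors
rng = np.random.default_rng(0)
X_pool, X_known = rng.random((200, 10)), rng.random((12, 10))
y_known = rng.random(12)
batch = select_batch(X_pool, X_known, y_known)
```

    In each subsequent round, the measured activities of the selected compounds would be appended to X_known and y_known before refitting, mirroring the iterative refinement described above.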

  8. Light nuclei of even mass number in the Skyrme model

    NASA Astrophysics Data System (ADS)

    Battye, R. A.; Manton, N. S.; Sutcliffe, P. M.; Wood, S. W.

    2009-09-01

    We consider the semiclassical rigid-body quantization of Skyrmion solutions of mass numbers B=4,6,8,10, and 12. We determine the allowed quantum states for each Skyrmion and find that they often match the observed states of nuclei. The spin and isospin inertia tensors of these Skyrmions are accurately calculated for the first time and are used to determine the excitation energies of the quantum states. We calculate the energy level splittings, using a suitably chosen parameter set for each mass number. We find good qualitative and encouraging quantitative agreement with experiment. In particular, the rotational bands of beryllium-8 and carbon-12, along with isospin 1 triplets and isospin 2 quintets, are especially well reproduced. We also predict the existence of states that have not yet been observed and make predictions for the unknown quantum numbers of some observed states.
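    For orientation, the excitation energies in such rigid-body quantizations reduce, in the simplified isotropic case, to rigid-rotor formulas in spin J and isospin I. The actual calculation uses the full spin and isospin inertia tensors, including cross terms, so the expression below is only a schematic sketch.

```latex
% Schematic rigid-rotor estimate with isotropic spin and isospin moments of
% inertia U_J and U_I; M_B denotes the classical Skyrmion mass for mass number B.
E_{J,I} \;\simeq\; M_B \;+\; \frac{\hbar^{2}}{2U_J}\,J(J+1) \;+\; \frac{\hbar^{2}}{2U_I}\,I(I+1)
```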

  9. EFS: an ensemble feature selection tool implemented as R-package and web-application.

    PubMed

    Neumann, Ursula; Genze, Nikita; Heider, Dominik

    2017-01-01

    Feature selection methods aim at identifying a subset of features that improve the prediction performance of subsequent classification models and thereby also simplify their interpretability. Preceding studies demonstrated that single feature selection methods can have specific biases, whereas an ensemble feature selection has the advantage to alleviate and compensate for these biases. The software EFS (Ensemble Feature Selection) makes use of multiple feature selection methods and combines their normalized outputs to a quantitative ensemble importance. Currently, eight different feature selection methods have been integrated in EFS, which can be used separately or combined in an ensemble. EFS identifies relevant features while compensating specific biases of single methods due to an ensemble approach. Thereby, EFS can improve the prediction accuracy and interpretability in subsequent binary classification models. EFS can be downloaded as an R-package from CRAN or used via a web application at http://EFS.heiderlab.de.
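    EFS itself is distributed as an R package, but the underlying idea (normalize the outputs of several feature selectors to a common scale and combine them) is easy to sketch. The Python example below is a hypothetical illustration with three selectors on simulated data, not the EFS implementation; the choice of selectors and the simple averaging are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression

def normalize(scores):
    """Rescale importance scores to [0, 1] so different methods are comparable."""
    scores = np.asarray(scores, dtype=float)
    rng = scores.max() - scores.min()
    return (scores - scores.min()) / rng if rng > 0 else np.zeros_like(scores)

X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)

importances = [
    normalize(RandomForestClassifier(random_state=0).fit(X, y).feature_importances_),
    normalize(mutual_info_classif(X, y, random_state=0)),
    normalize(np.abs(LogisticRegression(max_iter=1000).fit(X, y).coef_).ravel()),
]
ensemble_importance = np.mean(importances, axis=0)   # quantitative ensemble score
top_features = np.argsort(ensemble_importance)[::-1][:5]
```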

  10. Incorporating learning goals about modeling into an upper-division physics laboratory experiment

    NASA Astrophysics Data System (ADS)

    Zwickl, Benjamin M.; Finkelstein, Noah; Lewandowski, H. J.

    2014-09-01

    Implementing a laboratory activity involves a complex interplay among learning goals, available resources, feedback about the existing course, best practices for teaching, and an overall philosophy about teaching labs. Building on our previous work, which described a process of transforming an entire lab course, we now turn our attention to how an individual lab activity on the polarization of light was redesigned to include a renewed emphasis on one broad learning goal: modeling. By using this common optics lab as a concrete case study of a broadly applicable approach, we highlight many aspects of the activity development and show how modeling is used to integrate sophisticated conceptual and quantitative reasoning into the experimental process through the various aspects of modeling: constructing models, making predictions, interpreting data, comparing measurements with predictions, and refining models. One significant outcome is a natural way to integrate an analysis and discussion of systematic error into a lab activity.

  11. Anatomy of scientific evolution.

    PubMed

    Yun, Jinhyuk; Kim, Pan-Jun; Jeong, Hawoong

    2015-01-01

    The quest for historically impactful science and technology provides invaluable insight into the innovation dynamics of human society, yet many studies are limited to qualitative and small-scale approaches. Here, we investigate scientific evolution through systematic analysis of a massive corpus of digitized English texts between 1800 and 2008. Our analysis reveals great predictability for long-prevailing scientific concepts based on the levels of their prior usage. Interestingly, once a threshold of early adoption rates is passed even slightly, scientific concepts can exhibit sudden leaps in their eventual lifetimes. We developed a mechanistic model to account for such results, indicating that slowly-but-commonly adopted science and technology surprisingly tend to have higher innate strength than fast-and-commonly adopted ones. The model prediction for disciplines other than science was also well verified. Our approach sheds light on unbiased and quantitative analysis of scientific evolution in society, and may provide a useful basis for policy-making.

  12. Probing the geometry of copper and silver adatoms on magnetite: quantitative experiment versus theory

    PubMed Central

    Meier, Matthias; Jakub, Zdeněk; Balajka, Jan; Hulva, Jan; Bliem, Roland; Thakur, Pardeep K.; Lee, Tien-Lin; Franchini, Cesare; Schmid, Michael; Diebold, Ulrike; Allegretti, Francesco; Parkinson, Gareth S.

    2018-01-01

    Accurately modelling the structure of a catalyst is a fundamental prerequisite for correctly predicting reaction pathways, but a lack of clear experimental benchmarks makes it difficult to determine the optimal theoretical approach. Here, we utilize the normal incidence X-ray standing wave (NIXSW) technique to precisely determine the three dimensional geometry of Ag1 and Cu1 adatoms on Fe3O4(001). Both adatoms occupy bulk-continuation cation sites, but with a markedly different height above the surface (0.43 ± 0.03 Å (Cu1) and 0.96 ± 0.03 Å (Ag1)). HSE-based calculations accurately predict the experimental geometry, but the more common PBE + U and PBEsol + U approaches perform poorly. PMID:29334395

  13. Fish gotta swim, Birds gotta fly, I gotta do Feynmann Graphs 'til I die: A continuum Theory of Flocking

    NASA Astrophysics Data System (ADS)

    Toner, John; Tu, Yu-Hai

    2002-05-01

    We have developed a new continuum dynamical model for the collective motion of large "flocks" of biological organisms (e.g., flocks of birds, schools of fish, herds of wildebeest, hordes of bacteria, slime molds, etc.) . This model does for flocks what the Navier-Stokes equation does for fluids. The model predicts that, unlike simple fluids, flocks show huge fluctuation effects in spatial dimensions d < 4 that radically change their behavior. In d=2, it is only these effects that make it possible for the flock to move coherently at all. This explains why a million wildebeest can march together across the Serengeti plain, despite the fact that a million physicists gathered on the same plane could NOT all POINT in the same direction. Detailed quantitative predictions of this theory agree beautifully with computer simulations of flock motion.

  14. Using convolutional neural networks to explore the microbiome.

    PubMed

    Reiman, Derek; Metwally, Ahmed; Dai, Yang

    2017-07-01

    The microbiome has been shown to have an impact on the development of various diseases in the host. Being able to make an accurate prediction of the phenotype of a genomic sample based on its microbial taxonomic abundance profile is an important problem for personalized medicine. In this paper, we examine the potential of using a deep learning framework, a convolutional neural network (CNN), for such a prediction. To facilitate the CNN learning, we explore the structure of abundance profiles by creating the phylogenetic tree and by designing a scheme to embed the tree into a matrix that retains the spatial relationship of nodes in the tree and their quantitative characteristics. The proposed CNN framework is highly accurate, achieving 99.47% accuracy in an evaluation on a dataset of 1967 samples spanning three phenotypes. Our results demonstrate the feasibility and promise of CNNs for classifying sample phenotypes.
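    A minimal sketch of the modelling idea, assuming the tree-embedding step has already produced one matrix per sample; the 16 x 64 input shape, class count, and layer sizes below are arbitrary placeholders, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class AbundanceCNN(nn.Module):
    """Minimal CNN over a tree-embedded abundance matrix (1 channel, H x W)."""
    def __init__(self, n_classes=3, h=16, w=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * (h // 4) * (w // 4), n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = AbundanceCNN()
dummy = torch.randn(4, 1, 16, 64)                       # batch of 4 embedded profiles
logits = model(dummy)                                   # shape (4, 3)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 2, 0]))
```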

  15. Anatomy of Scientific Evolution

    PubMed Central

    Yun, Jinhyuk; Kim, Pan-Jun; Jeong, Hawoong

    2015-01-01

    The quest for historically impactful science and technology provides invaluable insight into the innovation dynamics of human society, yet many studies are limited to qualitative and small-scale approaches. Here, we investigate scientific evolution through systematic analysis of a massive corpus of digitized English texts between 1800 and 2008. Our analysis reveals great predictability for long-prevailing scientific concepts based on the levels of their prior usage. Interestingly, once a threshold of early adoption rates is passed even slightly, scientific concepts can exhibit sudden leaps in their eventual lifetimes. We developed a mechanistic model to account for such results, indicating that slowly-but-commonly adopted science and technology surprisingly tend to have higher innate strength than fast-and-commonly adopted ones. The model prediction for disciplines other than science was also well verified. Our approach sheds light on unbiased and quantitative analysis of scientific evolution in society, and may provide a useful basis for policy-making. PMID:25671617

  16. The four principles: Can they be measured and do they predict ethical decision making?

    PubMed Central

    2012-01-01

    Background The four principles of Beauchamp and Childress - autonomy, non-maleficence, beneficence and justice - have been extremely influential in the field of medical ethics, and are fundamental for understanding the current approach to ethical assessment in health care. This study tests whether these principles can be quantitatively measured on an individual level, and then subsequently if they are used in the decision making process when individuals are faced with ethical dilemmas. Methods The Analytic Hierarchy Process was used as a tool for the measurement of the principles. Four scenarios, which involved conflicts between the medical ethical principles, were presented to participants who then made judgments about the ethicality of the action in the scenario, and their intentions to act in the same manner if they were in the situation. Results Individual preferences for these medical ethical principles can be measured using the Analytic Hierarchy Process. This technique provides a useful tool in which to highlight individual medical ethical values. On average, individuals have a significant preference for non-maleficence over the other principles, however, and perhaps counter-intuitively, this preference does not seem to relate to applied ethical judgements in specific ethical dilemmas. Conclusions People state they value these medical ethical principles but they do not actually seem to use them directly in the decision making process. The reasons for this are explained through the lack of a behavioural model to account for the relevant situational factors not captured by the principles. The limitations of the principles in predicting ethical decision making are discussed. PMID:22606995

  17. The four principles: can they be measured and do they predict ethical decision making?

    PubMed

    Page, Katie

    2012-05-20

    The four principles of Beauchamp and Childress--autonomy, non-maleficence, beneficence and justice--have been extremely influential in the field of medical ethics, and are fundamental for understanding the current approach to ethical assessment in health care. This study tests whether these principles can be quantitatively measured on an individual level, and then subsequently if they are used in the decision making process when individuals are faced with ethical dilemmas. The Analytic Hierarchy Process was used as a tool for the measurement of the principles. Four scenarios, which involved conflicts between the medical ethical principles, were presented to participants who then made judgments about the ethicality of the action in the scenario, and their intentions to act in the same manner if they were in the situation. Individual preferences for these medical ethical principles can be measured using the Analytic Hierarchy Process. This technique provides a useful tool in which to highlight individual medical ethical values. On average, individuals have a significant preference for non-maleficence over the other principles, however, and perhaps counter-intuitively, this preference does not seem to relate to applied ethical judgements in specific ethical dilemmas. People state they value these medical ethical principles but they do not actually seem to use them directly in the decision making process. The reasons for this are explained through the lack of a behavioural model to account for the relevant situational factors not captured by the principles. The limitations of the principles in predicting ethical decision making are discussed.
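    The measurement step in both versions of this study rests on the Analytic Hierarchy Process, in which individual weights for the four principles are recovered from a pairwise comparison matrix. The sketch below shows the standard principal-eigenvector calculation together with Saaty's consistency ratio; the example comparison matrix is hypothetical and not taken from the study.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a pairwise comparison matrix via the principal
    eigenvector, plus Saaty's consistency ratio."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)   # consistency index
    cr = ci / 0.90                         # Saaty's random index for n = 4 (about 0.90)
    return w, cr

# hypothetical comparisons; order: autonomy, non-maleficence, beneficence, justice
A = np.array([[1,   1/3, 1,   2],
              [3,   1,   3,   4],
              [1,   1/3, 1,   2],
              [1/2, 1/4, 1/2, 1]])
weights, cr = ahp_weights(A)
```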

  18. Correlation of admissions statistics to graduate student success in medical physics

    PubMed Central

    McSpadden, Erin; Rakowski, Joseph; Nalichowski, Adrian; Yudelev, Mark; Snyder, Michael

    2014-01-01

    The purpose of this work is to develop metrics for evaluation of medical physics graduate student performance, assess relationships between success and other quantifiable factors, and determine whether graduate student performance can be accurately predicted by admissions statistics. A cohort of 108 medical physics graduate students from a single institution were rated for performance after matriculation based on final scores in specific courses, first year graduate Grade Point Average (GPA), performance on the program exit exam, performance in oral review sessions, and faculty rating. Admissions statistics including matriculating program (MS vs. PhD); undergraduate degree type, GPA, and country; graduate degree; general and subject GRE scores; traditional vs. nontraditional status; and ranking by admissions committee were evaluated for potential correlation with the performance metrics. GRE verbal and quantitative scores were correlated with higher scores in the most difficult courses in the program and with the program exit exam; however, the GRE section most correlated with overall faculty rating was the analytical writing section. Students with undergraduate degrees in engineering had a higher faculty rating than those from other disciplines and faculty rating was strongly correlated with undergraduate country. Undergraduate GPA was not statistically correlated with any success metrics investigated in this study. However, the high degree of selection on GPA and quantitative GRE scores during the admissions process results in relatively narrow ranges for these quantities. As such, these results do not necessarily imply that one should not strongly consider traditional metrics, such as undergraduate GPA and quantitative GRE score, during the admissions process. They suggest that once applicants have been initially filtered by these metrics, additional selection should be performed via the other metrics shown here to be correlated with success. The parameters used to make admissions decisions for our program are accurate in predicting student success, as illustrated by the very strong statistical correlation between admissions rank and course average, first year graduate GPA, and faculty rating (p<0.002). Overall, this study indicates that an undergraduate degree in physics should not be considered a fundamental requirement for entry into our program and that within the relatively narrow range of undergraduate GPA and quantitative GRE scores of those admitted into our program, additional variations in these metrics are not important predictors of success. While the high degree of selection on particular statistics involved in the admissions process, along with the relatively small sample size, makes it difficult to draw concrete conclusions about the meaning of correlations here, these results suggest that success in medical physics is based on more than quantitative capabilities. Specifically, they indicate that analytical and communication skills play a major role in student success in our program, as well as predicted future success by program faculty members. Finally, this study confirms that our current admissions process is effective in identifying candidates who will be successful in our program and are expected to be successful after graduation, and provides additional insight useful in improving our admissions selection process. PACS number: 01.40.‐d PMID:24423842

  19. Quantitative Decision Making.

    ERIC Educational Resources Information Center

    Baldwin, Grover H.

    The use of quantitative decision making tools provides the decision maker with a range of alternatives among which to decide, permits acceptance and use of the optimal solution, and decreases risk. Training line administrators in the use of these tools can help school business officials obtain reliable information upon which to base district…

  20. Prospective Mathematics Teachers' Sense Making of Polynomial Multiplication and Factorization Modeled with Algebra Tiles

    ERIC Educational Resources Information Center

    Caglayan, Günhan

    2013-01-01

    This study is about prospective secondary mathematics teachers' understanding and sense making of representational quantities generated by algebra tiles, the quantitative units (linear vs. areal) inherent in the nature of these quantities, and the quantitative addition and multiplication operations--referent preserving versus referent…

  1. Quantitative evaluation of haze formation of koji and progression of internal haze by drying of koji during koji making.

    PubMed

    Ito, Kazunari; Gomi, Katsuya; Kariyama, Masahiro; Miyake, Tsuyoshi

    2017-07-01

    The construction of an experimental system that can mimic koji making in the manufacturing setting of a sake brewery is initially required for the quantitative evaluation of mycelia grown on/in koji pellets (haze formation). Koji making with rice was investigated with a solid-state fermentation (SSF) system using a non-airflow box (NAB), which produced uniform conditions in the culture substrate with high reproducibility and allowed for the control of favorable conditions in the substrate during culture. The SSF system using NAB accurately reproduced koji making in a manufacturing setting. To evaluate haze formation during koji making, surfaces and cross sections of koji pellets obtained from koji making tests were observed using a digital microscope. Image analysis was used to distinguish between haze and non-haze sections of koji pellets, enabling the evaluation of haze formation in a batch by measuring the haze rate of a specific number of koji pellets. This method allowed us to obtain continuous and quantitative data on the time course of haze formation. Moreover, drying koji during the late stage of koji making was revealed to cause further penetration of mycelia into koji pellets (internal haze). The koji making test with the SSF system using NAB and quantitative evaluation of haze formation in a batch by image analysis is a useful method for understanding the relations between haze formation and koji making conditions. Copyright © 2017 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
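    The batch-level haze rate described above is, at its core, a segmentation-and-counting measurement. The sketch below illustrates one way such a rate could be computed from grayscale cross-section images using a global Otsu threshold; the paper's own image-analysis pipeline is not reproduced, and the assumption that haze pixels form the brighter class is purely illustrative.

```python
import numpy as np
from skimage.filters import threshold_otsu

def haze_rate(cross_section_gray):
    """Fraction of pixels in a koji-pellet cross-section image classified as
    haze (mycelium-penetrated) by a global Otsu threshold."""
    img = np.asarray(cross_section_gray, dtype=float)
    t = threshold_otsu(img)
    haze = img > t                     # assumption: haze appears brighter
    return float(haze.mean())

def batch_haze_rate(images):
    """Average haze rate over a batch of pellet images."""
    return float(np.mean([haze_rate(im) for im in images]))

# toy usage on a synthetic image
rate = haze_rate(np.random.rand(256, 256))
```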

  2. Effect of quantum nuclear motion on hydrogen bonding

    NASA Astrophysics Data System (ADS)

    McKenzie, Ross H.; Bekker, Christiaan; Athokpam, Bijyalaxmi; Ramesh, Sai G.

    2014-05-01

    This work considers how the properties of hydrogen bonded complexes, X-H⋯Y, are modified by the quantum motion of the shared proton. Using a simple two-diabatic state model Hamiltonian, the analysis of the symmetric case, where the donor (X) and acceptor (Y) have the same proton affinity, is carried out. For quantitative comparisons, a parametrization specific to the O-H⋯O complexes is used. The vibrational energy levels of the one-dimensional ground state adiabatic potential of the model are used to make quantitative comparisons with a vast body of condensed phase data, spanning a donor-acceptor separation (R) range of about 2.4 - 3.0 Å, i.e., from strong to weak hydrogen bonds. The position of the proton (which determines the X-H bond length) and its longitudinal vibrational frequency, along with the isotope effects in both are described quantitatively. An analysis of the secondary geometric isotope effect, using a simple extension of the two-state model, yields an improved agreement of the predicted variation with R of frequency isotope effects. The role of bending modes is also considered: their quantum effects compete with those of the stretching mode for weak to moderate H-bond strengths. In spite of the economy in the parametrization of the model used, it offers key insights into the defining features of H-bonds, and semi-quantitatively captures several trends.
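    For readers unfamiliar with two-diabatic-state models, the ground-state adiabatic potential referred to above is obtained by diagonalizing a 2 x 2 Hamiltonian; a schematic form (not the paper's specific O-H⋯O parametrization) is shown below.

```latex
% Diabatic proton potentials V_X(r), V_Y(r) for the proton bound to donor X or
% acceptor Y, coupled by \Delta(R), which depends on the donor-acceptor
% separation R; the lower sign gives the ground-state adiabatic surface whose
% vibrational levels are computed.
V_{\pm}(r;R) \;=\; \frac{V_X(r)+V_Y(r)}{2} \;\pm\; \sqrt{\left(\frac{V_X(r)-V_Y(r)}{2}\right)^{2} + \Delta(R)^{2}}
```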

  3. Quantitative metal magnetic memory reliability modeling for welded joints

    NASA Astrophysics Data System (ADS)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to inspect welded joints. However, load levels, the environmental magnetic field, and measurement noise make MMM data dispersive and complicate quantitative evaluation. To promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens were tested along longitudinal and horizontal lines with a TSC-2M-8 instrument during tensile fatigue experiments, and X-ray testing was carried out synchronously to verify the MMM results. MMM testing was found to detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of the MMM data, the statistical behaviour of K_vs was investigated and found to follow a Gaussian distribution, making K_vs a suitable MMM parameter for establishing a reliability model of welded joints. Finally, a quantitative MMM reliability model is presented, based on an improved stress-strength interference theory. The reliability degree R gradually decreases with decreasing residual life ratio T, and the maximal error between the predicted reliability degree R_1 and the verified reliability degree R_2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
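    For context, the classical stress-strength interference result that such reliability models build on is the probability that strength exceeds stress; for independent Gaussian strength S and stress L it takes the closed form below (the paper's improved, MMM-specific version is not reproduced here).

```latex
% Classical stress-strength interference with independent Gaussian strength
% S ~ N(\mu_S, \sigma_S^2) and stress L ~ N(\mu_L, \sigma_L^2);
% \Phi is the standard normal CDF.
R \;=\; P(S > L) \;=\; \Phi\!\left(\frac{\mu_S - \mu_L}{\sqrt{\sigma_S^{2} + \sigma_L^{2}}}\right)
```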

  4. "An integrative formal model of motivation and decision making: The MGPM*": Correction to Ballard et al. (2016).

    PubMed

    2017-02-01

    Reports an error in "An integrative formal model of motivation and decision making: The MGPM*" by Timothy Ballard, Gillian Yeo, Shayne Loft, Jeffrey B. Vancouver and Andrew Neal ( Journal of Applied Psychology , 2016[Sep], Vol 101[9], 1240-1265). Equation A3 contained an error. This correct equation is provided in the erratum. (The following abstract of the original article appeared in record 2016-28692-001.) We develop and test an integrative formal model of motivation and decision making. The model, referred to as the extended multiple-goal pursuit model (MGPM*), is an integration of the multiple-goal pursuit model (Vancouver, Weinhardt, & Schmidt, 2010) and decision field theory (Busemeyer & Townsend, 1993). Simulations of the model generated predictions regarding the effects of goal type (approach vs. avoidance), risk, and time sensitivity on prioritization. We tested these predictions in an experiment in which participants pursued different combinations of approach and avoidance goals under different levels of risk. The empirical results were consistent with the predictions of the MGPM*. Specifically, participants pursuing 1 approach and 1 avoidance goal shifted priority from the approach to the avoidance goal over time. Among participants pursuing 2 approach goals, those with low time sensitivity prioritized the goal with the larger discrepancy, whereas those with high time sensitivity prioritized the goal with the smaller discrepancy. Participants pursuing 2 avoidance goals generally prioritized the goal with the smaller discrepancy. Finally, all of these effects became weaker as the level of risk increased. We used quantitative model comparison to show that the MGPM* explained the data better than the original multiple-goal pursuit model, and that the major extensions from the original model were justified. The MGPM* represents a step forward in the development of a general theory of decision making during multiple-goal pursuit. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. Hyperspectral Imaging and SPA-LDA Quantitative Analysis for Detection of Colon Cancer Tissue

    NASA Astrophysics Data System (ADS)

    Yuan, X.; Zhang, D.; Wang, Ch.; Dai, B.; Zhao, M.; Li, B.

    2018-05-01

    Hyperspectral imaging (HSI) has been demonstrated to provide a rapid, precise, and noninvasive method for cancer detection. However, because HSI generates large amounts of data, quantitative analysis is often necessary to distill information useful for distinguishing cancerous from normal tissue. To demonstrate that HSI with our proposed algorithm can make this distinction, we built a Vis-NIR HSI setup and made many spectral images of colon tissues, and then used a successive projection algorithm (SPA) to analyze the hyperspectral image data of the tissues. This was used to build an identification model based on linear discriminant analysis (LDA) using the relative reflectance values of the effective wavelengths. Other tissues were used as a prediction set to verify the reliability of the identification model. The results suggest that Vis-NIR hyperspectral images, together with the spectroscopic classification method, provide a new approach for reliable and safe diagnosis of colon cancer and could lead to advances in cancer diagnosis generally.
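    The classification stage of this pipeline, LDA on the reflectance values at SPA-selected wavelengths, can be sketched as follows. The spectra, labels, and wavelength indices are simulated placeholders, and the successive projection algorithm itself is not implemented; only the downstream discriminant step is shown.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# hypothetical data: rows = tissue spectra, columns = wavelength bands;
# `selected` stands in for the effective wavelengths chosen by SPA
rng = np.random.default_rng(0)
X = rng.random((120, 256))                 # 120 spectra, 256 bands
y = rng.integers(0, 2, 120)                # 0 = normal, 1 = cancerous
selected = [12, 47, 88, 130, 201]          # stand-in for SPA output

X_sel = X[:, selected]
X_train, X_test, y_train, y_test = train_test_split(X_sel, y, test_size=0.3, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
accuracy = lda.score(X_test, y_test)       # prediction-set check of the model
```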

  6. Statistical Mechanics of the US Supreme Court

    NASA Astrophysics Data System (ADS)

    Lee, Edward D.; Broedersz, Chase P.; Bialek, William

    2015-07-01

    We build simple models for the distribution of voting patterns in a group, using the Supreme Court of the United States as an example. The maximum entropy model consistent with the observed pairwise correlations among justices' votes, an Ising spin glass, agrees quantitatively with the data. While all correlations (perhaps surprisingly) are positive, the effective pairwise interactions in the spin glass model have both signs, recovering the intuition that ideologically opposite justices negatively influence one another. Despite the competing interactions, a strong tendency toward unanimity emerges from the model, organizing the voting patterns in a relatively simple "energy landscape." Besides unanimity, other energy minima in this landscape, or maxima in probability, correspond to prototypical voting states, such as the ideological split or a tightly correlated, conservative core. The model correctly predicts the correlation of justices with the majority and gives us a measure of their influence on the majority decision. These results suggest that simple models, grounded in statistical physics, can capture essential features of collective decision making quantitatively, even in a complex political context.
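    The model class referred to here is the pairwise maximum-entropy (Ising) distribution over the nine votes, with fields and couplings fit so that the model reproduces the observed single-justice means and pairwise correlations; schematically:

```latex
% Pairwise maximum-entropy (Ising) model for votes s_i = \pm 1 of the nine
% justices; the fields h_i and couplings J_{ij} are chosen so the model matches
% the observed means and pairwise correlations, and Z normalizes the distribution.
P(s_1,\dots,s_9) \;=\; \frac{1}{Z}\,
\exp\!\Big(\sum_{i} h_i\, s_i \;+\; \sum_{i<j} J_{ij}\, s_i\, s_j\Big)
```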

  7. Exploratory of society

    NASA Astrophysics Data System (ADS)

    Cederman, L.-E.; Conte, R.; Helbing, D.; Nowak, A.; Schweitzer, F.; Vespignani, A.

    2012-11-01

    A huge flow of quantitative social, demographic and behavioral data is becoming available that traces the activities and interactions of individuals, social patterns, transportation infrastructures and travel fluxes. This has caused, together with innovative computational techniques and methods for modeling social actions in hybrid (natural and artificial) societies, a qualitative change in the ways we model socio-technical systems. For the first time, society can be studied in a comprehensive fashion that addresses social and behavioral complexity. In other words we are in the position to envision the development of large data and computational cyber infrastructure defining an exploratory of society that provides quantitative anticipatory, explanatory and scenario analysis capabilities ranging from emerging infectious disease to conflict and crime surges. The goal of the exploratory of society is to provide the basic infrastructure embedding the framework of tools and knowledge needed for the design of forecast/anticipatory/crisis management approaches to socio technical systems, supporting future decision making procedures by accelerating the scientific cycle that goes from data generation to predictions.

  8. How infants' reaches reveal principles of sensorimotor decision making

    NASA Astrophysics Data System (ADS)

    Dineva, Evelina; Schöner, Gregor

    2018-01-01

    In Piaget's classical A-not-B-task, infants repeatedly make a sensorimotor decision to reach to one of two cued targets. Perseverative errors are induced by switching the cue from A to B, while spontaneous errors are unsolicited reaches to B when only A is cued. We argue that theoretical accounts of sensorimotor decision-making fail to address how motor decisions leave a memory trace that may impact future sensorimotor decisions. Instead, in extant neural models, perseveration is caused solely by the history of stimulation. We present a neural dynamic model of sensorimotor decision-making within the framework of Dynamic Field Theory, in which a dynamic instability amplifies fluctuations in neural activation into macroscopic, stable neural activation states that leave memory traces. The model predicts perseveration, but also a tendency to repeat spontaneous errors. To test the account, we pool data from several A-not-B experiments. A conditional probabilities analysis accounts quantitatively how motor decisions depend on the history of reaching. The results provide evidence for the interdependence among subsequent reaching decisions that is explained by the model, showing that by amplifying small differences in activation and affecting learning, decisions have consequences beyond the individual behavioural act.
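    Dynamic Field Theory models of this kind are typically built on an Amari-type neural field equation with a memory trace; a generic form (not the specific equations or parameters of this model) is shown below.

```latex
% Generic dynamic neural field equation: u(x,t) is activation over a movement
% parameter x (e.g., reach direction), h the resting level, S(x,t) external
% input, w a local-excitation/lateral-inhibition kernel, g a sigmoid, and
% u_mem a slowly evolving memory trace of past decisions.
\tau\,\dot{u}(x,t) \;=\; -u(x,t) + h + S(x,t)
  + \int w(x-x')\, g\big(u(x',t)\big)\, dx' + u_{\mathrm{mem}}(x,t)
```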

  9. The Effects of Emotive Reasoning on Secondary School Students' Decision-Making in the Context of Socioscientific Issues

    NASA Astrophysics Data System (ADS)

    Powell, Wardell A.

    The discrepancy between what students are being taught within K-12 science classrooms and what they experience in the real world has been well documented. This study sought to explore the ways a high school biology curriculum, which integrates socioscientific issues, impacts students' emotive reasoning and their ability to evaluate evidence, make informed decisions on contemporary scientific dilemmas, and integrate scientific content knowledge in their reasoning on SSI. Both quantitative and qualitative methods were used to examine differences within and between an SSI treatment group and a comparison group as well as individual differences among students' responses over a semester of high school biology. Results indicated students used emotions largely to evaluate evidence and make decisions on contentious scientific dilemmas. In addition, the results showed students used newly gained scientific content knowledge to make logical predictions on contentious scientific issues. Statistical significance was found between groups of students in regard to their interest in the use of embryonic stem cell treatments to restore rats' vision, as well as students' abilities to evaluate evidence. Theoretical implications regarding the use of SSI in the classroom are presented.

  10. National Centers for Environmental Prediction

    Science.gov Websites

    Web resource listing NCEP ensemble products and data sources, including probabilistic forecasts of quantitative precipitation (PQPF) from the North American Ensemble Forecast System, with charts of probabilities for 24-hour precipitation amounts.

  11. An optimized color transformation for the analysis of digital images of hematoxylin & eosin stained slides.

    PubMed

    Zarella, Mark D; Breen, David E; Plagov, Andrei; Garcia, Fernando U

    2015-01-01

    Hematoxylin and eosin (H&E) staining is ubiquitous in pathology practice and research. As digital pathology has evolved, the reliance on quantitative methods that make use of H&E images has similarly expanded. For example, cell counting and nuclear morphometry rely on the accurate demarcation of nuclei from other structures and each other. One of the major obstacles to quantitative analysis of H&E images is the high degree of variability observed between different samples and different laboratories. In an effort to characterize this variability, as well as to provide a substrate that can potentially mitigate this factor in quantitative image analysis, we developed a technique to project H&E images into an optimized space more appropriate for many image analysis procedures. We used a decision tree-based support vector machine learning algorithm to classify 44 H&E stained whole slide images of resected breast tumors according to the histological structures that are present. This procedure takes an H&E image as an input and produces a classification map of the image that predicts the likelihood of a pixel belonging to any one of a set of user-defined structures (e.g., cytoplasm, stroma). By reducing these maps into their constituent pixels in color space, an optimal reference vector is obtained for each structure, which identifies the color attributes that maximally distinguish one structure from other elements in the image. We show that tissue structures can be identified using this semi-automated technique. By comparing structure centroids across different images, we obtained a quantitative depiction of H&E variability for each structure. This measurement can potentially be utilized in the laboratory to help calibrate daily staining or identify troublesome slides. Moreover, by aligning reference vectors derived from this technique, images can be transformed in a way that standardizes their color properties and makes them more amenable to image processing.

  12. Application of stakeholder-based and modelling approaches for supporting robust adaptation decision making under future climatic uncertainty and changing urban-agricultural water demand

    NASA Astrophysics Data System (ADS)

    Bhave, Ajay; Dessai, Suraje; Conway, Declan; Stainforth, David

    2016-04-01

    Deep uncertainty in future climate change and socio-economic conditions necessitates the use of assess-risk-of-policy approaches over predict-then-act approaches for adaptation decision making. Robust Decision Making (RDM) approaches embody this principle and help evaluate the ability of adaptation options to satisfy stakeholder preferences under wide-ranging future conditions. This study involves the simultaneous application of two RDM approaches, qualitative and quantitative, in the Cauvery River Basin in Karnataka (population ~23 million), India. The study aims to (a) determine robust water resources adaptation options for the 2030s and 2050s and (b) compare the usefulness of a qualitative stakeholder-driven approach with a quantitative modelling approach. To develop a large set of future scenarios, a combination of climatic and socio-economic narratives was used. Climatic narratives were developed using structured expert elicitation with a group of experts on the Indian Summer Monsoon. Socio-economic narratives were developed to reflect potential future urban and agricultural water demand. In the qualitative RDM approach, a stakeholder workshop helped elicit key vulnerabilities, water resources adaptation options and performance criteria for evaluating options. During a second workshop, stakeholders discussed and evaluated adaptation options against the performance criteria for a large number of scenarios of climatic and socio-economic change in the basin. In the quantitative RDM approach, a Water Evaluation And Planning (WEAP) model was forced by precipitation and evapotranspiration data consistent with the climatic narratives, together with water demand data based on the socio-economic narratives. We find that, compared to business-as-usual conditions, options addressing urban water demand satisfy the performance criteria across scenarios and provide co-benefits like energy savings and reduction in groundwater depletion, while options reducing agricultural water demand significantly affect downstream water availability. Water demand options demonstrate potential to improve environmental flow conditions and satisfy legal water supply requirements for downstream riparian states. On the other hand, currently planned large-scale infrastructural projects demonstrate reduced value in certain scenarios, illustrating the impacts of lock-in effects of large-scale infrastructure. From a methodological perspective, we find that while the stakeholder-driven approach revealed robust options in a resource-light manner and helped initiate much-needed interaction amongst stakeholders, the modelling approach provides complementary quantitative information. The study reveals robust adaptation options for this important basin and provides a strong methodological basis for carrying out future studies that support adaptation decision making.

  13. Decision-Making in Multiple Sclerosis Patients: A Systematic Review.

    PubMed

    Neuhaus, Mireille; Calabrese, Pasquale; Annoni, Jean-Marie

    2018-01-01

    Multiple sclerosis (MS) is frequently associated with cognitive and behavioural deficits. A growing number of studies suggest an impact of MS on decision-making abilities. The aim of this systematic review was to assess if (1) performance of MS patients in decision-making tasks was consistently different from controls and (2) whether this modification was associated with cognitive dysfunction and emotional alterations. The search was conducted on Pubmed/Medline database. 12 studies evaluating the difference between MS patients and healthy controls using validated decision-making tasks were included. Outcomes considered were quantitative (net scores) and qualitative measurements (deliberation time and learning from feedback). Quantitative and qualitative decision-making impairment in MS was present in 64.7% of measurements. Patients were equally impaired in tasks for decision-making under risk and ambiguity. A correlation to other cognitive functions was present in 50% of cases, with the highest associations in the domains of processing speed and attentional capacity. In MS patients, qualitative and quantitative modifications may be present in any kind of decision-making task and can appear independently of other cognitive measures. Since decision-making abilities have a significant impact on everyday life, this cognitive aspect has an influential importance in various MS-related treatment settings.

  14. Predicting Children's Reading and Mathematics Achievement from Early Quantitative Knowledge and Domain-General Cognitive Abilities

    PubMed Central

    Chu, Felicia W.; vanMarle, Kristy; Geary, David C.

    2016-01-01

    One hundred children (44 boys) participated in a 3-year longitudinal study of the development of basic quantitative competencies and the relation between these competencies and later mathematics and reading achievement. The children's preliteracy knowledge, intelligence, executive functions, and parental educational background were also assessed. The quantitative tasks assessed a broad range of symbolic and nonsymbolic knowledge and were administered four times across 2 years of preschool. Mathematics achievement was assessed at the end of each of 2 years of preschool, and mathematics and word reading achievement were assessed at the end of kindergarten. Our goals were to determine how domain-general abilities contribute to growth in children's quantitative knowledge and to determine how domain-general and domain-specific abilities contribute to children's preschool mathematics achievement and kindergarten mathematics and reading achievement. We first identified four core quantitative competencies (e.g., knowledge of the cardinal value of number words) that predict later mathematics achievement. The domain-general abilities were then used to predict growth in these competencies across 2 years of preschool, and the combination of domain-general abilities, preliteracy skills, and core quantitative competencies were used to predict mathematics achievement across preschool and mathematics and word reading achievement at the end of kindergarten. Both intelligence and executive functions predicted growth in the four quantitative competencies, especially across the first year of preschool. A combination of domain-general and domain-specific competencies predicted preschoolers' mathematics achievement, with a trend for domain-specific skills to be more strongly related to achievement at the beginning of preschool than at the end of preschool. Preschool preliteracy skills, sensitivity to the relative quantities of collections of objects, and cardinal knowledge predicted reading and mathematics achievement at the end of kindergarten. Preliteracy skills were more strongly related to word reading, whereas sensitivity to relative quantity was more strongly related to mathematics achievement. The overall results indicate that a combination of domain-general and domain-specific abilities contribute to development of children's early mathematics and reading achievement. PMID:27252675

  15. Predicting Children's Reading and Mathematics Achievement from Early Quantitative Knowledge and Domain-General Cognitive Abilities.

    PubMed

    Chu, Felicia W; vanMarle, Kristy; Geary, David C

    2016-01-01

    One hundred children (44 boys) participated in a 3-year longitudinal study of the development of basic quantitative competencies and the relation between these competencies and later mathematics and reading achievement. The children's preliteracy knowledge, intelligence, executive functions, and parental educational background were also assessed. The quantitative tasks assessed a broad range of symbolic and nonsymbolic knowledge and were administered four times across 2 years of preschool. Mathematics achievement was assessed at the end of each of 2 years of preschool, and mathematics and word reading achievement were assessed at the end of kindergarten. Our goals were to determine how domain-general abilities contribute to growth in children's quantitative knowledge and to determine how domain-general and domain-specific abilities contribute to children's preschool mathematics achievement and kindergarten mathematics and reading achievement. We first identified four core quantitative competencies (e.g., knowledge of the cardinal value of number words) that predict later mathematics achievement. The domain-general abilities were then used to predict growth in these competencies across 2 years of preschool, and the combination of domain-general abilities, preliteracy skills, and core quantitative competencies were used to predict mathematics achievement across preschool and mathematics and word reading achievement at the end of kindergarten. Both intelligence and executive functions predicted growth in the four quantitative competencies, especially across the first year of preschool. A combination of domain-general and domain-specific competencies predicted preschoolers' mathematics achievement, with a trend for domain-specific skills to be more strongly related to achievement at the beginning of preschool than at the end of preschool. Preschool preliteracy skills, sensitivity to the relative quantities of collections of objects, and cardinal knowledge predicted reading and mathematics achievement at the end of kindergarten. Preliteracy skills were more strongly related to word reading, whereas sensitivity to relative quantity was more strongly related to mathematics achievement. The overall results indicate that a combination of domain-general and domain-specific abilities contribute to development of children's early mathematics and reading achievement.

  16. Quantitative imaging features of pretreatment CT predict volumetric response to chemotherapy in patients with colorectal liver metastases.

    PubMed

    Creasy, John M; Midya, Abhishek; Chakraborty, Jayasree; Adams, Lauryn B; Gomes, Camilla; Gonen, Mithat; Seastedt, Kenneth P; Sutton, Elizabeth J; Cercek, Andrea; Kemeny, Nancy E; Shia, Jinru; Balachandran, Vinod P; Kingham, T Peter; Allen, Peter J; DeMatteo, Ronald P; Jarnagin, William R; D'Angelica, Michael I; Do, Richard K G; Simpson, Amber L

    2018-06-19

    This study investigates whether quantitative image analysis of pretreatment CT scans can predict volumetric response to chemotherapy for patients with colorectal liver metastases (CRLM). Patients treated with chemotherapy for CRLM (hepatic artery infusion (HAI) combined with systemic or systemic alone) were included in the study. Patients were imaged at baseline and approximately 8 weeks after treatment. Response was measured as the percentage change in tumour volume from baseline. Quantitative imaging features were derived from the index hepatic tumour on pretreatment CT, and features statistically significant on univariate analysis were included in a linear regression model to predict volumetric response. The regression model was constructed from 70% of the data, while 30% were reserved for testing. Test data were input into the trained model. Model performance was evaluated with mean absolute prediction error (MAPE) and R². Clinicopathologic factors were assessed for correlation with response. 157 patients were included, split into training (n = 110) and validation (n = 47) sets. MAPE from the multivariate linear regression model was 16.5% (R² = 0.774) and 21.5% in the training and validation sets, respectively. Stratified by HAI utilisation, MAPE in the validation set was 19.6% for HAI and 25.1% for systemic chemotherapy alone. Clinical factors associated with differences in median tumour response were treatment strategy, systemic chemotherapy regimen, age and KRAS mutation status (p < 0.05). Quantitative imaging features extracted from pretreatment CT are promising predictors of volumetric response to chemotherapy in patients with CRLM. Pretreatment predictors of response have the potential to better select patients for specific therapies. • Colorectal liver metastases (CRLM) are downsized with chemotherapy, but predicting which patients will respond to chemotherapy is currently not possible. • Heterogeneity and enhancement patterns of CRLM can be measured with quantitative imaging. • A prediction model was constructed that predicts volumetric response with roughly 20% error, suggesting that quantitative imaging holds promise for better selecting patients for specific treatments.
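    The modelling pipeline described above (70/30 split, multivariate linear regression, MAPE and R² as performance measures) can be sketched generically. The feature matrix and response values below are simulated placeholders, and the univariate feature-screening step is omitted.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# hypothetical: rows = patients, columns = pretreatment CT imaging features;
# y = percentage change in tumour volume after chemotherapy
rng = np.random.default_rng(1)
X = rng.normal(size=(157, 12))
y = rng.normal(loc=-30, scale=20, size=157)

X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, random_state=1)
model = LinearRegression().fit(X_train, y_train)
pred = model.predict(X_test)
mape = np.mean(np.abs(pred - y_test))   # mean absolute prediction error (response is already a %)
r2_train = model.score(X_train, y_train)
```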

  17. A community resource benchmarking predictions of peptide binding to MHC-I molecules.

    PubMed

    Peters, Bjoern; Bui, Huynh-Hoa; Frankild, Sune; Nielson, Morten; Lundegaard, Claus; Kostem, Emrah; Basch, Derek; Lamberth, Kasper; Harndahl, Mikkel; Fleri, Ward; Wilson, Stephen S; Sidney, John; Lund, Ole; Buus, Soren; Sette, Alessandro

    2006-06-09

    Recognition of peptides bound to major histocompatibility complex (MHC) class I molecules by T lymphocytes is an essential part of immune surveillance. Each MHC allele has a characteristic peptide binding preference, which can be captured in prediction algorithms, allowing for the rapid scan of entire pathogen proteomes for peptides likely to bind MHC. Here we make public a large set of 48,828 quantitative peptide-binding affinity measurements relating to 48 different mouse, human, macaque, and chimpanzee MHC class I alleles. We use these data to establish a set of benchmark predictions with one neural network method and two matrix-based prediction methods extensively utilized in our groups. In general, the neural network outperforms the matrix-based predictions mainly due to its ability to generalize even on a small amount of data. We also retrieved predictions from tools publicly available on the internet. While differences in the data used to generate these predictions hamper direct comparisons, we do conclude that tools based on combinatorial peptide libraries perform remarkably well. The transparent prediction evaluation on this dataset provides tool developers with a benchmark for comparison of newly developed prediction methods. In addition, to generate and evaluate our own prediction methods, we have established an easily extensible web-based prediction framework that allows automated side-by-side comparisons of prediction methods implemented by experts. This is an advance over the current practice of tool developers having to generate reference predictions themselves, which can lead to underestimating the performance of prediction methods they are not as familiar with as their own. The overall goal of this effort is to provide a transparent prediction evaluation allowing bioinformaticians to identify promising features of prediction methods and providing guidance to immunologists regarding the reliability of prediction tools.
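    As a toy illustration of the matrix-based class of predictors benchmarked here, the sketch below scores a 9-mer peptide with a position-specific scoring matrix: the predicted binding score is the sum of position-wise contributions. The matrix is random here; real matrix methods (and the neural networks they are compared with) are trained on the quantitative affinity measurements.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {a: i for i, a in enumerate(AA)}

def pssm_score(peptide, matrix):
    """Matrix-based binding score: sum of position-specific contributions.
    `matrix` has shape (peptide_length, 20); higher = predicted stronger binder."""
    return sum(matrix[pos, AA_INDEX[aa]] for pos, aa in enumerate(peptide))

# toy 9-mer scoring matrix (in practice fitted to measured affinities)
rng = np.random.default_rng(2)
matrix = rng.normal(size=(9, 20))
print(pssm_score("SIINFEKLV", matrix))
```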

  18. Predictions of dispersion and deposition of fallout from nuclear testing using the NOAA-HYSPLIT meteorological model.

    PubMed

    Moroz, Brian E; Beck, Harold L; Bouville, André; Simon, Steven L

    2010-08-01

    The NOAA Hybrid Single-Particle Lagrangian Integrated Trajectory Model (HYSPLIT) was evaluated as a research tool to simulate the dispersion and deposition of radioactive fallout from nuclear tests. Model-based estimates of fallout can be valuable for use in the reconstruction of past exposures from nuclear testing, particularly where little historical fallout monitoring data are available. The ability to make reliable predictions about fallout deposition could also have significant importance for nuclear events in the future. We evaluated the accuracy of the HYSPLIT-predicted geographic patterns of deposition by comparing those predictions against known deposition patterns following specific nuclear tests with an emphasis on nuclear weapons tests conducted in the Marshall Islands. We evaluated the ability of the computer code to quantitatively predict the proportion of fallout particles of specific sizes deposited at specific locations as well as their time of transport. In our simulations of fallout from past nuclear tests, historical meteorological data were used from a reanalysis conducted jointly by the National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR). We used a systematic approach in testing the HYSPLIT model by simulating the release of a range of particle sizes from a range of altitudes and evaluating the number and location of particles deposited. Our findings suggest that the quantity and quality of meteorological data are the most important factors for accurate fallout predictions and that, when satisfactory meteorological input data are used, HYSPLIT can produce relatively accurate deposition patterns and fallout arrival times. Furthermore, when no other measurement data are available, HYSPLIT can be used to indicate whether or not fallout might have occurred at a given location and provide, at minimum, crude quantitative estimates of the magnitude of the deposited activity. A variety of simulations of the deposition of fallout from atmospheric nuclear tests conducted in the Marshall Islands (mid-Pacific), at the Nevada Test Site (U.S.), and at the Semipalatinsk Nuclear Test Site (Kazakhstan) were performed. The results of the Marshall Islands simulations were used in a limited fashion to support the dose reconstruction described in companion papers within this volume.

  19. PREDICTIONS OF DISPERSION AND DEPOSITION OF FALLOUT FROM NUCLEAR TESTING USING THE NOAA-HYSPLIT METEOROLOGICAL MODEL

    PubMed Central

    Moroz, Brian E.; Beck, Harold L.; Bouville, André; Simon, Steven L.

    2013-01-01

    The NOAA Hybrid Single-Particle Lagrangian Integrated Trajectory Model (HYSPLIT) was evaluated as a research tool to simulate the dispersion and deposition of radioactive fallout from nuclear tests. Model-based estimates of fallout can be valuable for use in the reconstruction of past exposures from nuclear testing, particularly where little historical fallout monitoring data are available. The ability to make reliable predictions about fallout deposition could also have significant importance for nuclear events in the future. We evaluated the accuracy of the HYSPLIT-predicted geographic patterns of deposition by comparing those predictions against known deposition patterns following specific nuclear tests with an emphasis on nuclear weapons tests conducted in the Marshall Islands. We evaluated the ability of the computer code to quantitatively predict the proportion of fallout particles of specific sizes deposited at specific locations as well as their time of transport. In our simulations of fallout from past nuclear tests, historical meteorological data were used from a reanalysis conducted jointly by the National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR). We used a systematic approach in testing the HYSPLIT model by simulating the release of a range of particle sizes from a range of altitudes and evaluating the number and location of particles deposited. Our findings suggest that the quantity and quality of meteorological data are the most important factors for accurate fallout predictions and that, when satisfactory meteorological input data are used, HYSPLIT can produce relatively accurate deposition patterns and fallout arrival times. Furthermore, when no other measurement data are available, HYSPLIT can be used to indicate whether or not fallout might have occurred at a given location and provide, at minimum, crude quantitative estimates of the magnitude of the deposited activity. A variety of simulations of the deposition of fallout from atmospheric nuclear tests conducted in the Marshall Islands, at the Nevada Test Site (USA), and at the Semipalatinsk Nuclear Test Site (Kazakhstan) were performed using reanalysis data composed of historic meteorological observations. The results of the Marshall Islands simulations were used in a limited fashion to support the dose reconstruction described in companion papers within this volume. PMID:20622555

  20. Quantitative Methods for Administrative Decision Making in Junior Colleges.

    ERIC Educational Resources Information Center

    Gold, Benjamin Knox

    With the rapid increase in number and size of junior colleges, administrators must take advantage of the decision-making tools already used in business and industry. This study investigated how these quantitative techniques could be applied to junior college problems. A survey of 195 California junior college administrators found that the problems…

  1. Searching for an Accurate Marker-Based Prediction of an Individual Quantitative Trait in Molecular Plant Breeding

    PubMed Central

    Fu, Yong-Bi; Yang, Mo-Hua; Zeng, Fangqin; Biligetu, Bill

    2017-01-01

    Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but the trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also carried out a theoretical analysis of the effectiveness of these markers in trait-specific prediction, and verified the reasoning through computer simulation. In the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. Continuous search for better alternatives is encouraged to enhance marker-based predictions for an individual quantitative trait in molecular plant breeding. PMID:28729875

  2. Accelerated discovery of metallic glasses through iteration of machine learning and high-throughput experiments

    PubMed Central

    Wolverton, Christopher; Hattrick-Simpers, Jason; Mehta, Apurva

    2018-01-01

    With more than a hundred elements in the periodic table, a large number of potential new materials exist to address the technological and societal challenges we face today; however, without some guidance, searching through this vast combinatorial space is frustratingly slow and expensive, especially for materials strongly influenced by processing. We train a machine learning (ML) model on previously reported observations, parameters from physiochemical theories, and make it synthesis method–dependent to guide high-throughput (HiTp) experiments to find a new system of metallic glasses in the Co-V-Zr ternary. Experimental observations are in good agreement with the predictions of the model, but there are quantitative discrepancies in the precise compositions predicted. We use these discrepancies to retrain the ML model. The refined model has significantly improved accuracy not only for the Co-V-Zr system but also across all other available validation data. We then use the refined model to guide the discovery of metallic glasses in two additional previously unreported ternaries. Although our approach of iterative use of ML and HiTp experiments has guided us to rapid discovery of three new glass-forming systems, it has also provided us with a quantitatively accurate, synthesis method–sensitive predictor for metallic glasses that improves performance with use and thus promises to greatly accelerate discovery of many new metallic glasses. We believe that this discovery paradigm is applicable to a wider range of materials and should prove equally powerful for other materials and properties that are synthesis path–dependent and that current physiochemical theories find challenging to predict. PMID:29662953
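
    A minimal sketch of the iterate-between-model-and-experiment loop described above, with a random forest standing in for the paper's ML model and a synthetic rule standing in for the high-throughput measurements; the compositions, the hidden rule, and the batch sizes are invented so the loop has something to learn.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(1)

        def run_experiment(compositions):
            """Stand-in for a HiTp measurement: 1 = glass former, 0 = crystalline.
            The hidden rule is invented purely so the loop has something to learn."""
            return (np.abs(compositions[:, 0] - 0.4)
                    + np.abs(compositions[:, 1] - 0.3) < 0.25).astype(int)

        # Seed data playing the role of previously reported observations
        # (assumed to contain both outcomes).
        X = rng.random((80, 3))
        X /= X.sum(axis=1, keepdims=True)          # ternary compositions summing to 1
        y = run_experiment(X)

        model = RandomForestClassifier(n_estimators=200, random_state=0)
        for iteration in range(3):
            model.fit(X, y)
            candidates = rng.random((500, 3))
            candidates /= candidates.sum(axis=1, keepdims=True)
            # "Synthesize" the compositions the model is most confident about ...
            picks = candidates[np.argsort(model.predict_proba(candidates)[:, 1])[-20:]]
            # ... and retrain on the measured outcomes, discrepancies included.
            X = np.vstack([X, picks])
            y = np.concatenate([y, run_experiment(picks)])
            print(f"iteration {iteration}: training set now {len(X)} alloys")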

  3. Rapid Analysis of Deoxynivalenol in Durum Wheat by FT-NIR Spectroscopy

    PubMed Central

    De Girolamo, Annalisa; Cervellieri, Salvatore; Visconti, Angelo; Pascale, Michelangelo

    2014-01-01

    Fourier-transform-near infrared (FT-NIR) spectroscopy has been used to develop quantitative and classification models for the prediction of deoxynivalenol (DON) levels in durum wheat samples. Partial least-squares (PLS) regression analysis was used to determine DON in wheat samples in the range of <50–16,000 µg/kg DON. The model displayed a large root mean square error of prediction value (1,977 µg/kg) as compared to the EU maximum limit for DON in unprocessed durum wheat (i.e., 1,750 µg/kg), thus making the PLS approach unsuitable for quantitative prediction of DON in durum wheat. Linear discriminant analysis (LDA) was successfully used to differentiate wheat samples based on their DON content. A first approach used LDA to group wheat samples into three classes: A (DON ≤ 1,000 µg/kg), B (1,000 < DON ≤ 2,500 µg/kg), and C (DON > 2,500 µg/kg) (LDA I). A second approach was used to discriminate highly contaminated wheat samples based on three different cut-off limits, namely 1,000 (LDA II), 1,200 (LDA III) and 1,400 µg/kg DON (LDA IV). The overall classification and false compliant rates for the three models were 75%–90% and 3%–7%, respectively, with model LDA IV using a cut-off of 1,400 µg/kg fulfilling the requirement of the European official guidelines for screening methods. These findings confirmed the suitability of FT-NIR to screen a large number of wheat samples for DON contamination and to verify the compliance with EU regulation. PMID:25384107
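
    For readers unfamiliar with the two chemometric tools named above, the sketch below shows the generic scikit-learn pattern for a PLS calibration and an LDA cut-off classifier on synthetic spectra. It is not the authors' calibration; the spectra are random stand-ins and only the 1,400 µg/kg cut-off is echoed from the abstract.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(2)
        # Synthetic stand-ins: 200 "spectra" of 500 wavelengths, DON levels in ug/kg.
        don = rng.uniform(50, 16000, size=200)
        spectra = rng.normal(size=(200, 500)) + np.outer(don / 16000.0,
                                                         np.linspace(0.0, 1.0, 500))

        # Quantitative model: PLS regression of DON on the spectra.
        pls = PLSRegression(n_components=10).fit(spectra, don)
        rmse = float(np.sqrt(np.mean((pls.predict(spectra).ravel() - don) ** 2)))
        print(f"apparent calibration RMSE: {rmse:.0f} ug/kg")

        # Classification model: LDA against a single cut-off (1,400 ug/kg, as in LDA IV).
        labels = (don > 1400).astype(int)
        lda = LinearDiscriminantAnalysis().fit(spectra, labels)
        print(f"apparent classification rate: {lda.score(spectra, labels):.2f}")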

  4. Rapid analysis of deoxynivalenol in durum wheat by FT-NIR spectroscopy.

    PubMed

    De Girolamo, Annalisa; Cervellieri, Salvatore; Visconti, Angelo; Pascale, Michelangelo

    2014-11-06

    Fourier-transform-near infrared (FT-NIR) spectroscopy has been used to develop quantitative and classification models for the prediction of deoxynivalenol (DON) levels in durum wheat samples. Partial least-squares (PLS) regression analysis was used to determine DON in wheat samples in the range of <50-16,000 µg/kg DON. The model displayed a large root mean square error of prediction value (1,977 µg/kg) as compared to the EU maximum limit for DON in unprocessed durum wheat (i.e., 1,750 µg/kg), thus making the PLS approach unsuitable for quantitative prediction of DON in durum wheat. Linear discriminant analysis (LDA) was successfully used to differentiate wheat samples based on their DON content. A first approach used LDA to group wheat samples into three classes: A (DON ≤ 1,000 µg/kg), B (1,000 < DON ≤ 2,500 µg/kg), and C (DON > 2,500 µg/kg) (LDA I). A second approach was used to discriminate highly contaminated wheat samples based on three different cut-off limits, namely 1,000 (LDA II), 1,200 (LDA III) and 1,400 µg/kg DON (LDA IV). The overall classification and false compliant rates for the three models were 75%-90% and 3%-7%, respectively, with model LDA IV using a cut-off of 1,400 µg/kg fulfilling the requirement of the European official guidelines for screening methods. These findings confirmed the suitability of FT-NIR to screen a large number of wheat samples for DON contamination and to verify the compliance with EU regulation.

  5. Accelerated discovery of metallic glasses through iteration of machine learning and high-throughput experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Fang; Ward, Logan; Williams, Travis

    With more than a hundred elements in the periodic table, a large number of potential new materials exist to address the technological and societal challenges we face today; however, without some guidance, searching through this vast combinatorial space is frustratingly slow and expensive, especially for materials strongly influenced by processing. We train a machine learning (ML) model on previously reported observations, parameters from physiochemical theories, and make it synthesis method–dependent to guide high-throughput (HiTp) experiments to find a new system of metallic glasses in the Co-V-Zr ternary. Experimental observations are in good agreement with the predictions of the model, but there are quantitative discrepancies in the precise compositions predicted. We use these discrepancies to retrain the ML model. The refined model has significantly improved accuracy not only for the Co-V-Zr system but also across all other available validation data. We then use the refined model to guide the discovery of metallic glasses in two additional previously unreported ternaries. Although our approach of iterative use of ML and HiTp experiments has guided us to rapid discovery of three new glass-forming systems, it has also provided us with a quantitatively accurate, synthesis method–sensitive predictor for metallic glasses that improves performance with use and thus promises to greatly accelerate discovery of many new metallic glasses. We believe that this discovery paradigm is applicable to a wider range of materials and should prove equally powerful for other materials and properties that are synthesis path–dependent and that current physiochemical theories find challenging to predict.

  6. Accelerated discovery of metallic glasses through iteration of machine learning and high-throughput experiments

    DOE PAGES

    Ren, Fang; Ward, Logan; Williams, Travis; ...

    2018-04-01

    With more than a hundred elements in the periodic table, a large number of potential new materials exist to address the technological and societal challenges we face today; however, without some guidance, searching through this vast combinatorial space is frustratingly slow and expensive, especially for materials strongly influenced by processing. We train a machine learning (ML) model on previously reported observations, parameters from physiochemical theories, and make it synthesis method–dependent to guide high-throughput (HiTp) experiments to find a new system of metallic glasses in the Co-V-Zr ternary. Experimental observations are in good agreement with the predictions of the model, but there are quantitative discrepancies in the precise compositions predicted. We use these discrepancies to retrain the ML model. The refined model has significantly improved accuracy not only for the Co-V-Zr system but also across all other available validation data. We then use the refined model to guide the discovery of metallic glasses in two additional previously unreported ternaries. Although our approach of iterative use of ML and HiTp experiments has guided us to rapid discovery of three new glass-forming systems, it has also provided us with a quantitatively accurate, synthesis method–sensitive predictor for metallic glasses that improves performance with use and thus promises to greatly accelerate discovery of many new metallic glasses. We believe that this discovery paradigm is applicable to a wider range of materials and should prove equally powerful for other materials and properties that are synthesis path–dependent and that current physiochemical theories find challenging to predict.

  7. Quantitative prediction of phase transformations in silicon during nanoindentation

    NASA Astrophysics Data System (ADS)

    Zhang, Liangchi; Basak, Animesh

    2013-08-01

    This paper establishes the first quantitative relationship between the phases transformed in silicon and the shape characteristics of nanoindentation curves. Based on an integrated analysis using TEM and the unit cell properties of the phases, the volumes of the phases that emerge in a nanoindentation are formulated as a function of the pop-out size and the depth of the nanoindentation impression. This simple formula enables a fast, accurate and quantitative prediction of the phases in a nanoindentation cycle, which was not possible before.

  8. Quantitative AOP-based predictions for two aromatase inhibitors evaluating the influence of bioaccumulation on prediction accuracy

    EPA Science Inventory

    The adverse outcome pathway (AOP) framework can be used to support the use of mechanistic toxicology data as a basis for risk assessment. For certain risk contexts this includes defining, quantitative linkages between the molecular initiating event (MIE) and subsequent key events...

  9. Studying Biology to Understand Risk: Dosimetry Models and Quantitative Adverse Outcome Pathways

    EPA Science Inventory

    Confidence in the quantitative prediction of risk is increased when the prediction is based to as great an extent as possible on the relevant biological factors that constitute the pathway from exposure to adverse outcome. With the first examples now over 40 years old, physiologi...

  10. DOSIMETRY MODELING OF INHALED FORMALDEHYDE: BINNING NASAL FLUX PREDICTIONS FOR QUANTITATIVE RISK ASSESSMENT

    EPA Science Inventory

    Dosimetry Modeling of Inhaled Formaldehyde: Binning Nasal Flux Predictions for Quantitative Risk Assessment. Kimbell, J.S., Overton, J.H., Subramaniam, R.P., Schlosser, P.M., Morgan, K.T., Conolly, R.B., and Miller, F.J. (2001). Toxicol. Sci. 000, 000:000.

    Interspecies e...

  11. Toxicity challenges in environmental chemicals: Prediction of human plasma protein binding through quantitative structure-activity relationship (QSAR) models

    EPA Science Inventory

    The present study explores the merit of utilizing available pharmaceutical data to construct a quantitative structure-activity relationship (QSAR) for prediction of the fraction of a chemical unbound to plasma protein (Fub) in environmentally relevant compounds. Independent model...

  12. Predicting Player Position for Talent Identification in Association Football

    NASA Astrophysics Data System (ADS)

    Razali, Nazim; Mustapha, Aida; Yatim, Faiz Ahmad; Aziz, Ruhaya Ab

    2017-08-01

    This paper is set to introduce a new framework from the perspective of Computer Science for identifying talents in the sport of football based on the players’ individual qualities: physical, mental, and technical. The combination of qualities as assessed by coaches is then used to predict the players’ position in a match that suits the player the best in a particular team formation. Evaluation of the proposed framework is two-fold: quantitatively via classification experiments to predict player position, and qualitatively via a Talent Identification Site developed to achieve the same goal. Results from the classification experiments using Bayesian Networks, Decision Trees, and K-Nearest Neighbor have shown an average of 98% accuracy, which will promote consistency in decision-making through elimination of personal bias in team selection. The positive reviews on the Football Identification Site based on user acceptance evaluation also indicate that the framework is sufficient to serve as the basis of developing an intelligent team management system in different sports, whereby growth and performance of sport players can be monitored and identified.
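
    A minimal sketch of the classification-experiment side of the framework (decision tree and k-nearest neighbor only; Bayesian networks are omitted), run on invented coach ratings and an invented labelling rule, purely to show the experimental pattern rather than reproduce the reported 98% accuracy.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(3)
        # Hypothetical coach ratings per player: [physical, mental, technical], 0-10.
        X = rng.uniform(0, 10, size=(300, 3))
        # Invented labelling rule so the classifiers have a signal to learn:
        # 0 = defender, 1 = midfielder, 2 = forward.
        y = np.select([X[:, 0] > X[:, 2] + 1, X[:, 2] > X[:, 0] + 1], [0, 2], default=1)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        for clf in (DecisionTreeClassifier(random_state=0),
                    KNeighborsClassifier(n_neighbors=5)):
            acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
            print(type(clf).__name__, f"accuracy: {acc:.2f}")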

  13. Toward a science of tumor forecasting for clinical oncology

    DOE PAGES

    Yankeelov, Thomas E.; Quaranta, Vito; Evans, Katherine J.; ...

    2015-03-15

    We propose that the quantitative cancer biology community make a concerted effort to apply lessons from weather forecasting to develop an analogous methodology for predicting and evaluating tumor growth and treatment response. Currently, the time course of tumor response is not predicted; instead, response is only assessed post hoc by physical examination or imaging methods. This fundamental practice within clinical oncology limits optimization of a treatment regimen for an individual patient, as well as the ability to determine in real time whether the choice was in fact appropriate. This is especially frustrating at a time when a panoply of molecularly targeted therapies is available, and precision genetic or proteomic analyses of tumors are an established reality. By learning from the methods of weather and climate modeling, we submit that the forecasting power of biophysical and biomathematical modeling can be harnessed to hasten the arrival of a field of predictive oncology. Furthermore, with a successful methodology toward tumor forecasting, it should be possible to integrate large tumor-specific datasets of varied types and effectively defeat cancer one patient at a time.
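
    As a deliberately simple illustration of the forecasting paradigm being proposed (not a model from the paper), the sketch below fits a logistic growth curve to early tumor-volume measurements and forecasts the later ones; the data are synthetic and the parameter values are placeholders.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, v0, k, r):
            """Logistic growth: initial volume v0, carrying capacity k, rate r."""
            return k / (1.0 + (k / v0 - 1.0) * np.exp(-r * t))

        rng = np.random.default_rng(4)
        t_obs = np.arange(0.0, 30.0, 3.0)                    # measurement days
        v_true = logistic(t_obs, 50.0, 2000.0, 0.2)          # "true" volumes, mm^3
        v_obs = v_true * (1.0 + 0.05 * rng.standard_normal(t_obs.size))

        # Calibrate on the first seven measurements, then forecast the remaining three.
        params, _ = curve_fit(logistic, t_obs[:7], v_obs[:7],
                              p0=[50.0, 3000.0, 0.1], maxfev=10000)
        forecast = logistic(t_obs[7:], *params)
        print("forecast:", np.round(forecast, 0))
        print("observed:", np.round(v_obs[7:], 0))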

  14. Dynamics of embryonic stem cell differentiation inferred from single-cell transcriptomics show a series of transitions through discrete cell states

    PubMed Central

    Jang, Sumin; Choubey, Sandeep; Furchtgott, Leon; Zou, Ling-Nan; Doyle, Adele; Menon, Vilas; Loew, Ethan B; Krostag, Anne-Rachel; Martinez, Refugio A; Madisen, Linda; Levi, Boaz P; Ramanathan, Sharad

    2017-01-01

    The complexity of gene regulatory networks that lead multipotent cells to acquire different cell fates makes a quantitative understanding of differentiation challenging. Using a statistical framework to analyze single-cell transcriptomics data, we infer the gene expression dynamics of early mouse embryonic stem (mES) cell differentiation, uncovering discrete transitions across nine cell states. We validate the predicted transitions across discrete states using flow cytometry. Moreover, using live-cell microscopy, we show that individual cells undergo abrupt transitions from a naïve to primed pluripotent state. Using the inferred discrete cell states to build a probabilistic model for the underlying gene regulatory network, we further predict and experimentally verify that these states have unique response to perturbations, thus defining them functionally. Our study provides a framework to infer the dynamics of differentiation from single cell transcriptomics data and to build predictive models of the gene regulatory networks that drive the sequence of cell fate decisions during development. DOI: http://dx.doi.org/10.7554/eLife.20487.001 PMID:28296635

  15. Information-theoretic approach to interactive learning

    NASA Astrophysics Data System (ADS)

    Still, S.

    2009-01-01

    The principles of statistical mechanics and information theory play an important role in learning and have inspired both theory and the design of numerous machine learning algorithms. The new aspect in this paper is a focus on integrating feedback from the learner. A quantitative approach to interactive learning and adaptive behavior is proposed, integrating model- and decision-making into one theoretical framework. This paper follows simple principles by requiring that the observer's world model and action policy should result in maximal predictive power at minimal complexity. Classes of optimal action policies and of optimal models are derived from an objective function that reflects this trade-off between prediction and complexity. The resulting optimal models then summarize, at different levels of abstraction, the process's causal organization in the presence of the learner's actions. A fundamental consequence of the proposed principle is that the learner's optimal action policies balance exploration and control as an emerging property. Interestingly, the explorative component is present in the absence of policy randomness, i.e. in the optimal deterministic behavior. This is a direct result of requiring maximal predictive power in the presence of feedback.

  16. Linear Hyperfine Tuning of Donor Spins in Silicon Using Hydrostatic Strain

    NASA Astrophysics Data System (ADS)

    Mansir, J.; Conti, P.; Zeng, Z.; Pla, J. J.; Bertet, P.; Swift, M. W.; Van de Walle, C. G.; Thewalt, M. L. W.; Sklenard, B.; Niquet, Y. M.; Morton, J. J. L.

    2018-04-01

    We experimentally study the coupling of group V donor spins in silicon to mechanical strain, and measure strain-induced frequency shifts that are linear in strain, in contrast to the quadratic dependence predicted by the valley repopulation model (VRM), and therefore orders of magnitude greater than that predicted by the VRM for small strains |ε| < 10^-5. Through both tight-binding and first-principles calculations we find that these shifts arise from a linear tuning of the donor hyperfine interaction term by the hydrostatic component of strain and achieve semiquantitative agreement with the experimental values. Our results provide a framework for making quantitative predictions of donor spins in silicon nanostructures, such as those being used to develop silicon-based quantum processors and memories. The strong spin-strain coupling we measure (up to 150 GHz per strain, for Bi donors in Si) offers a method for donor spin tuning—shifting Bi donor electron spins by over a linewidth with a hydrostatic strain of order 10^-6—as well as opportunities for coupling to mechanical resonators.

  17. A lymphocyte spatial distribution graph-based method for automated classification of recurrence risk on lung cancer images

    NASA Astrophysics Data System (ADS)

    García-Arteaga, Juan D.; Corredor, Germán; Wang, Xiangxue; Velcheti, Vamsidhar; Madabhushi, Anant; Romero, Eduardo

    2017-11-01

    Tumor infiltration by lymphocytes (TIL) occurs when various classes of white blood cells migrate from the blood stream toward the tumor, infiltrating it. The presence of TIL is predictive of the patient's response to therapy. In this paper, we show how the automatic detection of lymphocytes in digital H and E histopathological images and the quantitative evaluation of the global lymphocyte configuration, evaluated through global features extracted from non-parametric graphs, constructed from the lymphocytes' detected positions, can be correlated to the patient's outcome in early-stage non-small cell lung cancer (NSCLC). The method was assessed on a tissue microarray cohort composed of 63 NSCLC cases. From the evaluated graphs, minimum spanning trees and K-nn showed the highest predictive ability, yielding F1 scores of 0.75 and 0.72 and accuracies of 0.67 and 0.69, respectively. The predictive power of the proposed methodology indicates that graphs may be used to develop objective measures of the infiltration grade of tumors, which can, in turn, be used by pathologists to improve the decision making and treatment planning processes.
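
    The graph-feature step can be sketched as follows: build a minimum spanning tree and a k-nearest-neighbor graph over detected lymphocyte coordinates and summarize their edge lengths. The coordinates below are synthetic, the choice of k and of summary statistics is illustrative, and no claim is made that these are the exact features used in the paper.

        import numpy as np
        from scipy.spatial import distance_matrix
        from scipy.sparse.csgraph import minimum_spanning_tree
        from sklearn.neighbors import kneighbors_graph

        rng = np.random.default_rng(5)
        # Stand-in for detected lymphocyte centroids in one tissue spot (pixels).
        points = rng.uniform(0, 1000, size=(150, 2))

        # Minimum spanning tree over the complete pairwise-distance graph.
        mst = minimum_spanning_tree(distance_matrix(points, points))
        print("MST edge-length mean/std:", mst.data.mean(), mst.data.std())

        # k-nearest-neighbour graph (k = 5), summarized the same way.
        knn = kneighbors_graph(points, n_neighbors=5, mode="distance")
        print("k-NN edge-length mean/std:", knn.data.mean(), knn.data.std())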

  18. Towards a Science of Tumor Forecasting for Clinical Oncology

    PubMed Central

    Yankeelov, Thomas E.; Quaranta, Vito; Evans, Katherine J.; Rericha, Erin C.

    2015-01-01

    We propose that the quantitative cancer biology community make a concerted effort to apply lessons from weather forecasting to develop an analogous methodology for predicting and evaluating tumor growth and treatment response. Currently, the time course of tumor response is not predicted; instead, response is only assessed post hoc by physical exam or imaging methods. This fundamental practice within clinical oncology limits optimization of a treatment regimen for an individual patient, as well as the ability to determine in real time whether the choice was in fact appropriate. This is especially frustrating at a time when a panoply of molecularly targeted therapies is available, and precision genetic or proteomic analyses of tumors are an established reality. By learning from the methods of weather and climate modeling, we submit that the forecasting power of biophysical and biomathematical modeling can be harnessed to hasten the arrival of a field of predictive oncology. With a successful methodology towards tumor forecasting, it should be possible to integrate large tumor-specific datasets of varied types, and effectively defeat cancer one patient at a time. PMID:25592148

  19. Toward a science of tumor forecasting for clinical oncology.

    PubMed

    Yankeelov, Thomas E; Quaranta, Vito; Evans, Katherine J; Rericha, Erin C

    2015-03-15

    We propose that the quantitative cancer biology community make a concerted effort to apply lessons from weather forecasting to develop an analogous methodology for predicting and evaluating tumor growth and treatment response. Currently, the time course of tumor response is not predicted; instead, response is only assessed post hoc by physical examination or imaging methods. This fundamental practice within clinical oncology limits optimization of a treatment regimen for an individual patient, as well as the ability to determine in real time whether the choice was in fact appropriate. This is especially frustrating at a time when a panoply of molecularly targeted therapies is available, and precision genetic or proteomic analyses of tumors are an established reality. By learning from the methods of weather and climate modeling, we submit that the forecasting power of biophysical and biomathematical modeling can be harnessed to hasten the arrival of a field of predictive oncology. With a successful methodology toward tumor forecasting, it should be possible to integrate large tumor-specific datasets of varied types and effectively defeat cancer one patient at a time. ©2015 American Association for Cancer Research.

  20. Spin-lattice relaxation and the calculation of gain, pump power, and noise temperature in ruby

    NASA Technical Reports Server (NTRS)

    Lyons, J. R.

    1989-01-01

    The use of a quantitative analysis of the dominant source of relaxation in ruby spin systems to make predictions of key maser amplifier parameters is described. The spin-lattice Hamiltonian which describes the interaction of the electron spins with the thermal vibrations of the surrounding lattice is obtained from the literature. Taking into account the vibrational anisotropy of ruby, Fermi's rule is used to calculate the spin transition rates between the maser energy levels. The spin population rate equations are solved for the spin transition relaxation times, and a comparison with previous calculations is made. Predictions of ruby gain, inversion ratio, and noise temperature as a function of physical temperature are made for 8.4-GHz and 32-GHz maser pumping schemes. The theory predicts that ruby oriented at 90 deg will have approximately 50 percent higher gain in dB and slightly lower noise temperature than a 54.7-deg ruby at 32 GHz (assuming pump saturation). A specific calculation relating pump power to inversion ratio is given for a single channel of the 32-GHz reflected wave maser.

  1. PredPsych: A toolbox for predictive machine learning-based approach in experimental psychology research.

    PubMed

    Koul, Atesh; Becchio, Cristina; Cavallo, Andrea

    2017-12-12

    Recent years have seen an increased interest in machine learning-based predictive methods for analyzing quantitative behavioral data in experimental psychology. While these methods can achieve greater sensitivity than conventional univariate techniques, they still lack an established and accessible implementation. The aim of the current work was to build an open-source R toolbox - "PredPsych" - that could make these methods readily available to all psychologists. PredPsych is a user-friendly R toolbox based on machine-learning predictive algorithms. In this paper, we present the framework of PredPsych via the analysis of a recently published multiple-subject motion capture dataset. In addition, we discuss examples of possible research questions that can be addressed with the machine-learning algorithms implemented in PredPsych and cannot be easily addressed with univariate statistical analysis. We anticipate that PredPsych will be of use to researchers with limited programming experience not only in the field of psychology, but also in that of clinical neuroscience, enabling computational assessment of putative bio-behavioral markers for both prognosis and diagnosis.

  2. The NASA Short-term Prediction Research and Transition (SPoRT) Center: A Collaborative Model for Accelerating Research into Operations

    NASA Technical Reports Server (NTRS)

    Goodman, S. J.; Lapenta, W.; Jedlovec, G.; Dodge, J.; Bradshaw, T.

    2003-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center in Huntsville, Alabama was created to accelerate the infusion of NASA earth science observations, data assimilation and modeling research into NWS forecast operations and decision-making. The principal focus of experimental products is on the regional scale with an emphasis on forecast improvements on a time scale of 0-24 hours. The SPoRT Center research is aligned with the regional prediction objectives of the US Weather Research Program dealing with 0-1 day forecast issues ranging from convective initiation to 24-hr quantitative precipitation forecasting. The SPoRT Center, together with its other interagency partners, universities, and the NASA/NOAA Joint Center for Satellite Data Assimilation, provides a means and a process to effectively transition NASA Earth Science Enterprise observations and technology to National Weather Service operations and decision makers at both the global/national and regional scales. This paper describes the process for the transition of experimental products into forecast operations, current products undergoing assessment by forecasters, and plans for the future.

  3. The effect of providing feedback on the characteristics of student responses to a videotaped format for high school physics assessment

    NASA Astrophysics Data System (ADS)

    Lawrence, Michael John

    1997-12-01

    The problem of science illiteracy has been well documented. The development of critical thinking skills in science education is often sacrificed in favor of content coverage. Opportunities for critical thinking within a context of science have been recommended to promote science literacy (AAAS, 1993). One means of doing this is to have students make and explain predictions involving physical phenomena, observe feedback, and then revise the prediction. A videotaped assessment using this process served as the focus for this study. High school physics students were asked to predict and explain what would happen in situations involving optics. They were then given different feedback treatments. The purpose of this study was to: (a) examine the effect of providing feedback on the quality of responses in making both revisions and subsequent predictions, and (b) examine the relationship between content knowledge and qualitative performance. Sixty-four high-ability students were separated into three treatment groups: no feedback (NF), visual feedback (F), and teacher-explained feedback (TE). These students responded to six items on the Optics Videotape Assessment and ten optics multiple choice items from the National Physics Exam (NPE). Their teachers had previously attended a professional development institute which emphasized the practice and philosophy of assessments like the Optics Assessment. The assessment responses were categorized by two raters who used a taxonomy that ranged from simple descriptions to complete explanations. NPE performance was compared using one-way ANOVA, Optics Assessment performance was compared using a chi-square test of homogeneity, and a point-biserial correlation was done to compare qualitative and quantitative performance. The study found that students were unable to use feedback to make a significant change in the quality of their responses, whether revision or subsequent prediction. There was no correlation between content knowledge and qualitative performance. It was concluded that for students to succeed on an assessment of this type, their classroom teachers must be given the time to implement the appropriate instruction. Instruction and assessment of this nature are crucial to the development of science literacy.

  4. Satisficing in split-second decision making is characterized by strategic cue discounting.

    PubMed

    Oh, Hanna; Beck, Jeffrey M; Zhu, Pingping; Sommer, Marc A; Ferrari, Silvia; Egner, Tobias

    2016-12-01

    Much of our real-life decision making is bounded by uncertain information, limitations in cognitive resources, and a lack of time to allocate to the decision process. It is thought that humans overcome these limitations through satisficing, fast but "good-enough" heuristic decision making that prioritizes some sources of information (cues) while ignoring others. However, the decision-making strategies we adopt under uncertainty and time pressure, for example during emergencies that demand split-second choices, are presently unknown. To characterize these decision strategies quantitatively, the present study examined how people solve a novel multicue probabilistic classification task under varying time pressure, by tracking shifts in decision strategies using variational Bayesian inference. We found that under low time pressure, participants correctly weighted and integrated all available cues to arrive at near-optimal decisions. With increasingly demanding, subsecond time pressures, however, participants systematically discounted a subset of the cue information by dropping the least informative cue(s) from their decision making process. Thus, the human cognitive apparatus copes with uncertainty and severe time pressure by adopting a "drop-the-worst" cue decision making strategy that minimizes cognitive time and effort investment while preserving the consideration of the most diagnostic cue information, thus maintaining "good-enough" accuracy. This advance in our understanding of satisficing strategies could form the basis of predicting human choices in high time pressure scenarios. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
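
    The contrast between full cue integration and a drop-the-worst strategy can be illustrated with a toy simulation; the cue validities, the weighting scheme, and the task are invented and are not the stimuli or the variational Bayesian analysis used in the study.

        import numpy as np

        rng = np.random.default_rng(6)
        # Four binary cues with decreasing validity (log-odds weights, invented).
        weights = np.array([2.0, 1.2, 0.7, 0.2])

        def accuracy(weights_used, n=20000):
            """Accuracy of a weighted-evidence rule on a toy two-class task."""
            labels = rng.integers(0, 2, size=n)
            # Each cue points at the true class with probability tied to its weight.
            p_valid = 1.0 / (1.0 + np.exp(-weights))
            cues = np.where(rng.random((n, 4)) < p_valid,
                            labels[:, None], 1 - labels[:, None])
            evidence = (2 * cues - 1) @ weights_used      # signed, weighted sum
            return np.mean((evidence > 0).astype(int) == labels)

        full = accuracy(weights)
        drop_worst = accuracy(np.where(weights == weights.min(), 0.0, weights))
        print(f"all cues: {full:.3f}   drop-the-worst cue: {drop_worst:.3f}")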

  5. Validation workflow for a clinical Bayesian network model in multidisciplinary decision making in head and neck oncology treatment.

    PubMed

    Cypko, Mario A; Stoehr, Matthaeus; Kozniewski, Marcin; Druzdzel, Marek J; Dietz, Andreas; Berliner, Leonard; Lemke, Heinz U

    2017-11-01

    Oncological treatment is becoming increasingly complex, and therefore decision making in multidisciplinary teams is becoming the key activity in clinical pathways. The increased complexity is related to the number and variability of possible treatment decisions that may be relevant to a patient. In this paper, we describe validation of a multidisciplinary cancer treatment decision in the clinical domain of head and neck oncology. Probabilistic graphical models and corresponding inference algorithms, in the form of Bayesian networks (BNs), can support complex decision-making processes by providing mathematically reproducible and transparent advice. The quality of BN-based advice depends on the quality of the model. Therefore, it is vital to validate the model before it is applied in practice. For an example BN subnetwork of laryngeal cancer with 303 variables, we evaluated 66 patient records. To validate the model on this dataset, a validation workflow was applied in combination with quantitative and qualitative analyses. In the subsequent analyses, we observed four sources of imprecise predictions: incorrect data, incomplete patient data, outvoting relevant observations, and incorrect model. Finally, the four problems were solved by modifying the data and the model. The presented validation effort is related to the model complexity. For simpler models, the validation workflow is the same, although it may require fewer validation methods. The validation success is related to the model's well-founded knowledge base. The remaining laryngeal cancer model may disclose additional sources of imprecise predictions.
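
    The mechanics of BN-based advice can be shown with a deliberately tiny network of three binary variables and inference by brute-force enumeration; the structure and probabilities are invented and bear no relation to the 303-variable laryngeal cancer model.

        from itertools import product

        # Toy network: Stage -> Metastasis, and (Stage, Metastasis) -> SurgeryAdvised.
        p_stage_adv = 0.3                                    # P(stage = advanced)
        p_met = {True: 0.6, False: 0.1}                      # P(metastasis | stage)
        p_surgery = {(True, True): 0.2, (True, False): 0.7,  # P(surgery | stage, met)
                     (False, True): 0.4, (False, False): 0.9}

        def joint(stage, met, surgery):
            """Joint probability of one full assignment of the three variables."""
            p = p_stage_adv if stage else 1 - p_stage_adv
            p *= p_met[stage] if met else 1 - p_met[stage]
            p *= p_surgery[(stage, met)] if surgery else 1 - p_surgery[(stage, met)]
            return p

        # Query by enumeration: P(surgery advised | metastasis observed).
        num = sum(joint(stage, True, True) for stage in (True, False))
        den = sum(joint(stage, True, surgery)
                  for stage, surgery in product((True, False), repeat=2))
        print(f"P(surgery advised | metastasis) = {num / den:.3f}")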

  6. A Model of Human Cooperation in Social Dilemmas

    PubMed Central

    Capraro, Valerio

    2013-01-01

    Social dilemmas are situations in which collective interests are at odds with private interests: pollution, depletion of natural resources, and intergroup conflicts are at their core social dilemmas. Because of their multidisciplinarity and their importance, social dilemmas have been studied by economists, biologists, psychologists, sociologists, and political scientists. These studies typically explain the tendency to cooperate by dividing people into proself and prosocial types, or by appealing to forms of external control or, in iterated social dilemmas, to long-term strategies. But recent experiments have shown that cooperation is possible even in one-shot social dilemmas without forms of external control, and the rate of cooperation typically depends on the payoffs. This makes a predictive division between proself and prosocial people impossible and proves that people have a natural attitude toward cooperation. The key innovation of this article is in fact to postulate that humans have a natural attitude toward cooperation and consequently do not act a priori as single agents, as assumed by standard economic models, but forecast how a social dilemma would evolve if they formed coalitions and then act according to their most optimistic forecast. Formalizing this idea, we propose the first predictive model of human cooperation able to organize a number of different experimental findings that are not explained by the standard model. We also show that the model makes satisfactorily accurate quantitative predictions of population average behavior in one-shot social dilemmas. PMID:24009679

  7. Making detailed predictions makes (some) predictions worse

    NASA Astrophysics Data System (ADS)

    Kelly, Theresa F.

    In this paper, we investigate whether making detailed predictions about an event makes other predictions worse. Across 19 experiments, 10,895 participants, and 415,960 predictions about 724 professional sports games, we find that people who made detailed predictions about sporting events (e.g., how many hits each baseball team would get) made worse predictions about more general outcomes (e.g., which team would win). We rule out that this effect is caused by inattention or fatigue, thinking too hard, or a differential reliance on holistic information about the teams. Instead, we find that thinking about game-relevant details before predicting winning teams causes people to give less weight to predictive information, presumably because predicting details makes information that is relatively useless for predicting the winning team more readily accessible in memory and therefore incorporated into forecasts. Furthermore, we show that this differential use of information can be used to predict what kinds of games will and will not be susceptible to the negative effect of making detailed predictions.

  8. The role of quantitative safety evaluation in regulatory decision making of drugs.

    PubMed

    Chakravarty, Aloka G; Izem, Rima; Keeton, Stephine; Kim, Clara Y; Levenson, Mark S; Soukup, Mat

    2016-01-01

    Evaluation of safety is a critical component of drug review at the US Food and Drug Administration (FDA). Statisticians are playing an increasingly visible role in quantitative safety evaluation and regulatory decision-making. This article reviews the history and the recent events relating to quantitative drug safety evaluation at the FDA. The article then focuses on five active areas of quantitative drug safety evaluation and the role Division of Biometrics VII (DBVII) plays in these areas, namely meta-analysis for safety evaluation, large safety outcome trials, post-marketing requirements (PMRs), the Sentinel Initiative, and the evaluation of risk from extended/long-acting opioids. This article will focus chiefly on developments related to quantitative drug safety evaluation and not on the many additional developments in drug safety in general.

  9. Deep neural nets as a method for quantitative structure-activity relationships.

    PubMed

    Ma, Junshui; Sheridan, Robert P; Liaw, Andy; Dahl, George E; Svetnik, Vladimir

    2015-02-23

    Neural networks were widely used for quantitative structure-activity relationships (QSAR) in the 1990s. Because of various practical issues (e.g., slow on large problems, difficult to train, prone to overfitting, etc.), they were superseded by more robust methods like support vector machine (SVM) and random forest (RF), which arose in the early 2000s. The last 10 years have witnessed a revival of neural networks in the machine learning community thanks to new methods for preventing overfitting, more efficient training algorithms, and advancements in computer hardware. In particular, deep neural nets (DNNs), i.e., neural nets with more than one hidden layer, have found great success in many applications, such as computer vision and natural language processing. Here we show that DNNs can routinely make better prospective predictions than RF on a set of large diverse QSAR data sets that are taken from Merck's drug discovery effort. The number of adjustable parameters needed for DNNs is fairly large, but our results show that it is not necessary to optimize them for individual data sets, and a single set of recommended parameters can achieve better performance than RF for most of the data sets we studied. The usefulness of the parameters is demonstrated on additional data sets not used in the calibration. Although training DNNs is still computationally intensive, using graphical processing units (GPUs) can make this issue manageable.
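
    A generic version of the comparison described above can be sketched with scikit-learn on synthetic descriptors; this is not the Merck data, and the small multilayer network below is a stand-in rather than the paper's DNN architecture or hyperparameters.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.model_selection import train_test_split
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Synthetic stand-in for a QSAR table: 2,000 compounds x 500 descriptors.
        X, y = make_regression(n_samples=2000, n_features=500, n_informative=50,
                               noise=10.0, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        models = {
            "random forest": RandomForestRegressor(n_estimators=200, n_jobs=-1,
                                                   random_state=0),
            "multilayer net": make_pipeline(StandardScaler(),
                                            MLPRegressor(hidden_layer_sizes=(256, 128),
                                                         max_iter=300, random_state=0)),
        }
        for name, model in models.items():
            model.fit(X_tr, y_tr)
            print(name, "held-out R^2:", round(model.score(X_te, y_te), 3))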

  10. Family involvement in decision making for people with dementia in residential aged care: a systematic review of quantitative literature.

    PubMed

    Petriwskyj, Andrea; Gibson, Alexandra; Parker, Deborah; Banks, Susan; Andrews, Sharon; Robinson, Andrew

    2014-06-01

    Ensuring older adults' involvement in their care is accepted as good practice and is vital, particularly for people with dementia, whose care and treatment needs change considerably over the course of the illness. However, involving family members in decision making on people's behalf is still practically difficult for staff and family. The aim of this review was to identify and appraise the existing quantitative evidence about family involvement in decision making for people with dementia living in residential aged care. The present Joanna Briggs Institute (JBI) metasynthesis assessed studies that investigated involvement of family members in decision making for people with dementia in residential aged care settings. While quantitative and qualitative studies were included in the review, this paper presents the quantitative findings. A comprehensive search of 15 electronic databases was performed. The search was limited to papers published in English, from 1990 to 2013. Twenty-six studies were identified as being relevant; 10 were quantitative, with 1 mixed method study. Two independent reviewers assessed the studies for methodological validity and extracted the data using the JBI Meta Analysis of Statistics Assessment and Review Instrument (JBI-MAStARI). The findings were synthesized and presented in narrative form. The findings related to decisions encountered and made by family surrogates, variables associated with decisions, surrogates' perceptions of, and preferences for, their roles, as well as outcomes for people with dementia and their families. The results identified patterns within, and variables associated with, surrogate decision making, all of which highlight the complexity and variation regarding family involvement. Attention needs to be paid to supporting family members in decision making in collaboration with staff.

  11. A Quantitative Model of Expert Transcription Typing

    DTIC Science & Technology

    1993-03-08

    side of pure psychology, several researchers have argued that transcription typing is a particularly good activity for the study of human skilled...phenomenon with a quantitative METT prediction. The first, quick and dirty analysis gives a good prediction of the copy span, in fact, it is even...typing, it should be demonstrated that the mechanism of the model does not get in the way of good predictions. If situations occur where the entire

  12. Faecal Pathogen Flows and Their Public Health Risks in Urban Environments: A Proposed Approach to Inform Sanitation Planning

    PubMed Central

    Mills, Freya; Petterson, Susan; Norman, Guy

    2018-01-01

    Public health benefits are often a key political driver of urban sanitation investment in developing countries; however, pathogen flows are rarely taken systematically into account in sanitation investment choices. While several tools and approaches on sanitation and health risks have recently been developed, this research identified gaps in their ability to predict faecal pathogen flows, to relate exposure risks to the existing sanitation services, and to compare expected impacts of improvements. This paper outlines a conceptual approach that links faecal waste discharge patterns with potential pathogen exposure pathways to quantitatively compare urban sanitation improvement options. An illustrative application of the approach is presented, using a spreadsheet-based model to compare the relative effect on disability-adjusted life years of six sanitation improvement options for a hypothetical urban situation. The approach includes consideration of the persistence or removal of different pathogen classes in different environments; recognition of multiple interconnected sludge and effluent pathways, and of multiple potential sites for exposure; and use of quantitative microbial risk assessment to support prediction of relative health risks for each option. This research provides a step forward in applying current knowledge to better consider public health, alongside environmental and other objectives, in urban sanitation decision making. Further empirical research in specific locations is now required to refine the approach and address data gaps. PMID:29360775
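
    The quantitative core of the approach can be sketched as a short QMRA chain: an exposure dose feeds an exponential dose-response model, and the resulting infection risk is converted to DALYs. Every number below is a placeholder chosen for illustration, not a value from the paper.

        import numpy as np

        def annual_daly_per_person(dose_per_event, events_per_year, r=0.1,
                                   p_ill_given_inf=0.5, daly_per_case=0.01):
            """Minimal QMRA chain with an exponential dose-response model.
            All parameter values are placeholders, not measured data."""
            p_inf_event = 1.0 - np.exp(-r * dose_per_event)
            p_inf_year = 1.0 - (1.0 - p_inf_event) ** events_per_year
            return p_inf_year * p_ill_given_inf * daly_per_case

        # Compare two hypothetical sanitation options that change the exposure dose.
        baseline = annual_daly_per_person(dose_per_event=5.0, events_per_year=50)
        improved = annual_daly_per_person(dose_per_event=0.5, events_per_year=50)
        print(f"baseline: {baseline:.4f}  improved: {improved:.4f}  (DALY per person-year)")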

  13. Natural Analogues - One Way to Help Build Public Confidence in the Predicted Performance of a Mined Geologic Repository for Nuclear Waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stuckless, J. S.

    2002-02-26

    The general public needs to have a way to judge the predicted long-term performance of the potential high-level nuclear waste repository at Yucca Mountain. The applicability and reliability of mathematical models used to make this prediction are neither easily understood nor accepted by the public. Natural analogues can provide the average person with a tool to assess the predicted performance and other scientific conclusions. For example, hydrologists with the Yucca Mountain Project have predicted that most of the water moving through the unsaturated zone at Yucca Mountain, Nevada will move through the host rock and around tunnels. Thus, seepage into tunnels is predicted to be a small percentage of available infiltration. This hypothesis can be tested experimentally and with some quantitative analogues. It can also be tested qualitatively using a variety of analogues such as (1) well-preserved Paleolithic to Neolithic paintings in caves and rock shelters, (2) biological remains preserved in caves and rock shelters, and (3) artifacts and paintings preserved in man-made underground openings. These examples can be found in materials that are generally available to the non-scientific public and can demonstrate the surprising degree of preservation of fragile and easily destroyed materials for very long periods of time within the unsaturated zone.

  14. Applying quantitative benefit-risk analysis to aid regulatory decision making in diagnostic imaging: methods, challenges, and opportunities.

    PubMed

    Agapova, Maria; Devine, Emily Beth; Bresnahan, Brian W; Higashi, Mitchell K; Garrison, Louis P

    2014-09-01

    Health agencies making regulatory marketing-authorization decisions use qualitative and quantitative approaches to assess expected benefits and expected risks associated with medical interventions. There is, however, no universal standard approach that regulatory agencies consistently use to conduct benefit-risk assessment (BRA) for pharmaceuticals or medical devices, including for imaging technologies. Economics, health services research, and health outcomes research use quantitative approaches to elicit preferences of stakeholders, identify priorities, and model health conditions and health intervention effects. Challenges to BRA in medical devices are outlined, highlighting additional barriers in radiology. Three quantitative methods--multi-criteria decision analysis, health outcomes modeling and stated-choice survey--are assessed using criteria that are important in balancing benefits and risks of medical devices and imaging technologies. To be useful in regulatory BRA, quantitative methods need to: aggregate multiple benefits and risks, incorporate qualitative considerations, account for uncertainty, and make clear whose preferences/priorities are being used. Each quantitative method performs differently across these criteria and little is known about how BRA estimates and conclusions vary by approach. While no specific quantitative method is likely to be the strongest in all of the important areas, quantitative methods may have a place in BRA of medical devices and radiology. Quantitative BRA approaches have been more widely applied in medicines, with fewer BRAs in devices. Despite substantial differences in characteristics of pharmaceuticals and devices, BRA methods may be as applicable to medical devices and imaging technologies as they are to pharmaceuticals. Further research to guide the development and selection of quantitative BRA methods for medical devices and imaging technologies is needed. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
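
    Of the three methods discussed, multi-criteria decision analysis is the simplest to sketch: normalize each criterion, flip the risk criteria so that higher is always better, and take a weighted sum. The criteria, weights, and scores below are invented for illustration.

        import numpy as np

        criteria = ["diagnostic benefit", "radiation risk", "contrast risk", "cost"]
        weights = np.array([0.5, 0.2, 0.2, 0.1])          # stakeholder weights (invented)
        higher_is_better = np.array([True, False, False, False])

        # Rows: candidate imaging devices; columns: raw criterion scores (invented).
        raw = np.array([[8.0, 3.0, 1.0, 5.0],
                        [6.0, 1.0, 2.0, 3.0],
                        [9.0, 4.0, 0.5, 7.0]])

        # Normalize each criterion to 0-1 and flip the risk/cost criteria.
        norm = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))
        norm[:, ~higher_is_better] = 1.0 - norm[:, ~higher_is_better]

        scores = norm @ weights
        print("MCDA scores:", np.round(scores, 3),
              "-> preferred device:", int(scores.argmax()))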

  15. The Attentional Drift Diffusion Model of Simple Perceptual Decision-Making.

    PubMed

    Tavares, Gabriela; Perona, Pietro; Rangel, Antonio

    2017-01-01

    Perceptual decisions requiring the comparison of spatially distributed stimuli that are fixated sequentially might be influenced by fluctuations in visual attention. We used two psychophysical tasks with human subjects to investigate the extent to which visual attention influences simple perceptual choices, and to test the extent to which the attentional Drift Diffusion Model (aDDM) provides a good computational description of how attention affects the underlying decision processes. We find evidence for sizable attentional choice biases and that the aDDM provides a reasonable quantitative description of the relationship between fluctuations in visual attention, choices and reaction times. We also find that exogenous manipulations of attention induce choice biases consistent with the predictions of the model.
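
    A minimal simulation of the attention-weighted evidence accumulation that the aDDM formalizes is sketched below; the drift rate, attentional discount, noise level, and fixation process are placeholders rather than the parameters fitted in the study.

        import numpy as np

        rng = np.random.default_rng(7)

        def simulate_trial(v_left, v_right, d=0.002, theta=0.5, sigma=0.02,
                           barrier=1.0, max_steps=20000):
            """One aDDM-style trial: drift toward the attended item, with the
            unattended item's value discounted by theta."""
            rdv, t = 0.0, 0
            attending_left = rng.random() < 0.5
            while abs(rdv) < barrier and t < max_steps:
                if rng.random() < 0.005:              # occasional fixation switch
                    attending_left = not attending_left
                if attending_left:
                    rdv += d * (v_left - theta * v_right)
                else:
                    rdv -= d * (v_right - theta * v_left)
                rdv += rng.normal(0.0, sigma)
                t += 1
            return ("left" if rdv > 0 else "right"), t

        choices = [simulate_trial(3.0, 2.0)[0] for _ in range(500)]
        print("P(choose left | left slightly better):", choices.count("left") / 500)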

  16. Single-Cell Genomics: Approaches and Utility in Immunology.

    PubMed

    Neu, Karlynn E; Tang, Qingming; Wilson, Patrick C; Khan, Aly A

    2017-02-01

    Single-cell genomics offers powerful tools for studying immune cells, which make it possible to observe rare and intermediate cell states that cannot be resolved at the population level. Advances in computer science and single-cell sequencing technology have created a data-driven revolution in immunology. The challenge for immunologists is to harness computing and turn an avalanche of quantitative data into meaningful discovery of immunological principles, predictive models, and strategies for therapeutics. Here, we review the current literature on computational analysis of single-cell RNA-sequencing data and discuss underlying assumptions, methods, and applications in immunology, and highlight important directions for future research. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. A symmetrical subtraction combined with interpolated values for eliminating scattering from fluorescence EEM data.

    PubMed

    Xu, Jing; Liu, Xiaofei; Wang, Yutian

    2016-08-05

    Parallel factor analysis is a widely used method to extract qualitative and quantitative information about the analyte of interest from fluorescence excitation-emission matrices containing unknown components. Large scattering amplitudes will influence the results of parallel factor analysis. Many methods of eliminating scattering have been proposed, each with its own advantages and disadvantages. Here, the combination of symmetrical subtraction and interpolated values is discussed; the combination refers to both the combination of results and the combination of methods. Nine methods were used for comparison. The results show that combining the results gives a better concentration prediction for all the components. Copyright © 2016 Elsevier B.V. All rights reserved.
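
    The interpolation half of the approach can be sketched as replacing emission points near the Rayleigh scatter line with values interpolated from neighbouring, scatter-free emission wavelengths; the wavelength grids, the synthetic EEM, and the masked width below are illustrative assumptions.

        import numpy as np

        # Synthetic excitation-emission matrix: excitation rows, emission columns.
        ex = np.arange(250.0, 451.0, 5.0)
        em = np.arange(260.0, 601.0, 2.0)
        eem = np.exp(-((em[None, :] - 420.0) ** 2) / 2e3
                     - ((ex[:, None] - 330.0) ** 2) / 2e3)        # one fluorophore
        eem += 5.0 * np.exp(-((em[None, :] - ex[:, None]) ** 2) / 50.0)  # Rayleigh ridge

        half_width = 12.0   # nm of emission masked on each side of the scatter line
        cleaned = eem.copy()
        for i, ex_wl in enumerate(ex):
            mask = np.abs(em - ex_wl) < half_width
            if mask.any() and (~mask).sum() >= 2:
                # Replace masked points with values interpolated from untouched ones.
                cleaned[i, mask] = np.interp(em[mask], em[~mask], cleaned[i, ~mask])

        print("max intensity before/after:", float(eem.max()), float(cleaned.max()))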

  18. Soil spectral characterization

    NASA Technical Reports Server (NTRS)

    Stoner, E. R.; Baumgardner, M. F.

    1981-01-01

    The spectral characterization of soils is discussed with particular reference to the bidirectional reflectance factor as a quantitative measure of soil spectral properties, the role of soil color, soil parameters affecting soil reflectance, and field characteristics of soil reflectance. Comparisons between laboratory-measured soil spectra and Landsat MSS data have shown good agreement, especially in discriminating relative drainage conditions and organic matter levels in unvegetated soils. The capacity to measure both visible and infrared soil reflectance provides information on other soil characteristics and makes it possible to predict soil response to different management conditions. Field and laboratory soil spectral characterization helps define the extent to which intrinsic spectral information is available from soils as a consequence of their composition and field characteristics.

  19. Cesium alignment produced by pumping with unpolarized light

    NASA Astrophysics Data System (ADS)

    Shi, Yongqi; Weis, Antoine

    2018-04-01

    We demonstrate optical pumping on the four hyperfine components of the Cs D1 transition by unpolarized (UPL) resonant laser light. The evidence is based on the reduction of the absorption coefficients κ0 with increasing light power P in an uncoated Cs vapor cell with isotropic spin relaxation. For comparison we perform the same quantitative κ0(P) measurements with linearly-polarized light (LPL) and circularly-polarized light (CPL). We find that our previously published algebraic expressions give an excellent description of all experimentally recorded induced transparency signals. Based on this we can make reliable absolute predictions for the power dependence of the spin orientation and alignment produced by pumping with LPL, CPL and UPL.

  20. Modeling heat dissipation at the nanoscale: an embedding approach for chemical reaction dynamics on metal surfaces.

    PubMed

    Meyer, Jörg; Reuter, Karsten

    2014-04-25

    We present an embedding technique for metallic systems that makes it possible to model energy dissipation into substrate phonons during surface chemical reactions from first principles. The separation of chemical and elastic contributions to the interaction potential provides a quantitative description of both electronic and phononic band structure. Application to the dissociation of O2 at Pd(100) predicts translationally "hot" oxygen adsorbates as a consequence of the released adsorption energy (ca. 2.6 eV). This finding questions the instant thermalization of reaction enthalpies generally assumed in models of heterogeneous catalysis. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Quantitative model of price diffusion and market friction based on trading as a mechanistic random process.

    PubMed

    Daniels, Marcus G; Farmer, J Doyne; Gillemot, László; Iori, Giulia; Smith, Eric

    2003-03-14

    We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.

  2. Quantitative Model of Price Diffusion and Market Friction Based on Trading as a Mechanistic Random Process

    NASA Astrophysics Data System (ADS)

    Daniels, Marcus G.; Farmer, J. Doyne; Gillemot, László; Iori, Giulia; Smith, Eric

    2003-03-01

    We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.

  3. Policy impacts of ecosystem services knowledge

    PubMed Central

    Posner, Stephen M.; McKenzie, Emily; Ricketts, Taylor H.

    2016-01-01

    Research about ecosystem services (ES) often aims to generate knowledge that influences policies and institutions for conservation and human development. However, we have limited understanding of how decision-makers use ES knowledge or what factors facilitate use. Here we address this gap and report on, to our knowledge, the first quantitative analysis of the factors and conditions that explain the policy impact of ES knowledge. We analyze a global sample of cases where similar ES knowledge was generated and applied to decision-making. We first test whether attributes of ES knowledge themselves predict different measures of impact on decisions. We find that legitimacy of knowledge is more often associated with impact than either the credibility or salience of the knowledge. We also examine whether predictor variables related to the science-to-policy process and the contextual conditions of a case are significant in predicting impact. Our findings indicate that, although many factors are important, attributes of the knowledge and aspects of the science-to-policy process that enhance legitimacy best explain the impact of ES science on decision-making. Our results are consistent with both theory and previous qualitative assessments in suggesting that the attributes and perceptions of scientific knowledge and process within which knowledge is coproduced are important determinants of whether that knowledge leads to action. PMID:26831101

  4. Decision-Making in Multiple Sclerosis Patients: A Systematic Review

    PubMed Central

    2018-01-01

    Background Multiple sclerosis (MS) is frequently associated with cognitive and behavioural deficits. A growing number of studies suggest an impact of MS on decision-making abilities. The aim of this systematic review was to assess if (1) performance of MS patients in decision-making tasks was consistently different from controls and (2) whether this modification was associated with cognitive dysfunction and emotional alterations. Methods The search was conducted on Pubmed/Medline database. 12 studies evaluating the difference between MS patients and healthy controls using validated decision-making tasks were included. Outcomes considered were quantitative (net scores) and qualitative measurements (deliberation time and learning from feedback). Results Quantitative and qualitative decision-making impairment in MS was present in 64.7% of measurements. Patients were equally impaired in tasks for decision-making under risk and ambiguity. A correlation to other cognitive functions was present in 50% of cases, with the highest associations in the domains of processing speed and attentional capacity. Conclusions In MS patients, qualitative and quantitative modifications may be present in any kind of decision-making task and can appear independently of other cognitive measures. Since decision-making abilities have a significant impact on everyday life, this cognitive aspect has an influential importance in various MS-related treatment settings. PMID:29721338

  5. Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships (MCBIOS)

    EPA Science Inventory

    Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships Jie Liu1,2, Richard Judson1, Matthew T. Martin1, Huixiao Hong3, Imran Shah1 1National Center for Computational Toxicology (NCCT), US EPA, RTP, NC...

  6. Investigation of a redox-sensitive predictive model of mouse embryonic stem cells differentiation using quantitative nuclease protection assays and glutathione redox status

    EPA Science Inventory

    Investigation of a redox-sensitive predictive model of mouse embryonic stem cell differentiation via quantitative nuclease protection assays and glutathione redox status Chandler KJ,Hansen JM, Knudsen T,and Hunter ES 1. U.S. Environmental Protection Agency, Research Triangl...

  7. Early prediction of coma recovery after cardiac arrest with blinded pupillometry.

    PubMed

    Solari, Daria; Rossetti, Andrea O; Carteron, Laurent; Miroz, John-Paul; Novy, Jan; Eckert, Philippe; Oddo, Mauro

    2017-06-01

    Prognostication studies on comatose cardiac arrest (CA) patients are limited by lack of blinding, potentially causing overestimation of outcome predictors and self-fulfilling prophecy. Using a blinded approach, we analyzed the value of quantitative automated pupillometry to predict neurological recovery after CA. We examined a prospective cohort of 103 comatose adult patients who were unconscious 48 hours after CA and underwent repeated measurements of quantitative pupillary light reflex (PLR) using the Neurolight-Algiscan device. Clinical examination, electroencephalography (EEG), somatosensory evoked potentials (SSEP), and serum neuron-specific enolase were performed in parallel, as part of standard multimodal assessment. Automated pupillometry results were blinded to clinicians involved in patient care. Cerebral Performance Categories (CPC) at 1 year was the outcome endpoint. Survivors (n = 50 patients; 32 CPC 1, 16 CPC 2, 2 CPC 3) had higher quantitative PLR (median = 20 [range = 13-41] vs 11 [0-55] %, p < 0.0001) and constriction velocity (1.46 [0.85-4.63] vs 0.94 [0.16-4.97] mm/s, p < 0.0001) than nonsurvivors. At 48 hours, a quantitative PLR < 13% had 100% specificity and positive predictive value to predict poor recovery (0% false-positive rate), and provided equal performance to that of EEG and SSEP. Reduced quantitative PLR correlated with higher serum neuron-specific enolase (Spearman r = -0.52, p < 0.0001). Reduced quantitative PLR correlates with postanoxic brain injury and, when compared to standard multimodal assessment, is highly accurate in predicting long-term prognosis after CA. This is the first prognostication study to show the value of automated pupillometry using a blinded approach to minimize self-fulfilling prophecy. Ann Neurol 2017;81:804-810. © 2017 American Neurological Association.
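
    As a purely illustrative sketch of how a fixed cutoff rule of this kind can be scored, the snippet below computes specificity, positive predictive value, and false-positive rate on hypothetical arrays; the 13% cutoff is the only value taken from the abstract, and none of this is the study's analysis code.

    ```python
    import numpy as np

    def cutoff_performance(plr_percent, poor_outcome, cutoff=13.0):
        """Score the rule 'quantitative PLR < cutoff predicts poor recovery'.
        plr_percent: pupil constriction (%); poor_outcome: True if recovery was poor."""
        pred_poor = plr_percent < cutoff
        tp = np.sum(pred_poor & poor_outcome)
        fp = np.sum(pred_poor & ~poor_outcome)
        tn = np.sum(~pred_poor & ~poor_outcome)
        specificity = tn / (tn + fp)        # 1.0 means no survivor falls below the cutoff
        ppv = tp / (tp + fp) if (tp + fp) else float("nan")
        fpr = fp / (fp + tn)
        return specificity, ppv, fpr

    # Hypothetical example values, not the study data.
    plr = np.array([20.0, 15.0, 9.0, 5.0, 31.0, 11.0])
    poor = np.array([False, False, True, True, False, True])
    print(cutoff_performance(plr, poor))
    ```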

  8. Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases

    PubMed Central

    Zhang, Hongpo

    2018-01-01

    Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and the variance of predictions from shallow neural networks is large. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. The network depth is determined autonomously from the reconstruction error, and unsupervised training is combined with supervised optimization, ensuring prediction accuracy while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets in the UCI database. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively, and the variance of prediction accuracy was 5.78 and 4.46, respectively. PMID:29854369
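
    A minimal sketch of the depth-growing idea, using scikit-learn's BernoulliRBM as a stand-in for the DBN layers; the layer size, tolerance, and one-step reconstruction-error stopping rule are assumptions rather than the paper's exact criterion.

    ```python
    import numpy as np
    from sklearn.neural_network import BernoulliRBM

    def grow_dbn(X, max_layers=5, n_hidden=32, tol=1e-3, seed=0):
        """Add RBM layers greedily; stop when the one-step reconstruction error of
        the newest layer no longer improves by more than `tol` (a rough stand-in
        for the paper's reconstruction-error criterion). X should be scaled to [0, 1]."""
        layers, inputs, prev_err = [], X, np.inf
        for _ in range(max_layers):
            rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                               n_iter=20, random_state=seed).fit(inputs)
            err = np.mean((inputs - rbm.gibbs(inputs)) ** 2)   # noisy one-step reconstruction
            if prev_err - err < tol:
                break
            layers.append(rbm)
            inputs, prev_err = rbm.transform(inputs), err      # feed hidden activations upward
        return layers

    X = np.random.rand(200, 13)        # hypothetical scaled cardiovascular features
    print("layers kept:", len(grow_dbn(X)))
    ```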

  9. A generalized quantitative interpretation of dark-field contrast for highly concentrated microsphere suspensions

    PubMed Central

    Gkoumas, Spyridon; Villanueva-Perez, Pablo; Wang, Zhentian; Romano, Lucia; Abis, Matteo; Stampanoni, Marco

    2016-01-01

    In X-ray grating interferometry, dark-field contrast arises due to partial extinction of the detected interference fringes. This is also called visibility reduction and is attributed to small-angle scattering from unresolved structures in the imaged object. In recent years, analytical quantitative frameworks of dark-field contrast have been developed for highly diluted monodisperse microsphere suspensions with maximum 6% volume fraction. These frameworks assume that scattering particles are separated by large enough distances, which make any interparticle scattering interference negligible. In this paper, we start from the small-angle scattering intensity equation and, by linking Fourier and real-space, we introduce the structure factor and thus extend the analytical and experimental quantitative interpretation of dark-field contrast, for a range of suspensions with volume fractions reaching 40%. The structure factor accounts for interparticle scattering interference. Without introducing any additional fitting parameters, we successfully predict the experimental values measured at the TOMCAT beamline, Swiss Light Source. Finally, we apply this theoretical framework to an experiment probing a range of system correlation lengths by acquiring dark-field images at different energies. This proposed method has the potential to be applied in single-shot-mode using a polychromatic X-ray tube setup and a single-photon-counting energy-resolving detector. PMID:27734931
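
    A schematic statement of the extension in standard small-angle-scattering notation (the precise mapping onto the measured visibility reduction is given in the paper and is not reproduced here):

    ```latex
    % Scattered intensity of a concentrated monodisperse sphere suspension: the
    % dilute-limit term (form factor P(q)) is multiplied by a structure factor S(q)
    % that encodes interparticle scattering interference; S(q) -> 1 as the volume
    % fraction phi -> 0, recovering the earlier dilute-suspension frameworks.
    \[
      I(q) \;\propto\; N\,(\Delta\rho)^{2} V_{p}^{2}\, P(q)\, S(q),
      \qquad S(q) \xrightarrow{\;\phi \to 0\;} 1 .
    \]
    ```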

  10. Quantitative contrast-enhanced ultrasound evaluation of pathological complete response in patients with locally advanced breast cancer receiving neoadjuvant chemotherapy.

    PubMed

    Wan, Cai-Feng; Liu, Xue-Song; Wang, Lin; Zhang, Jie; Lu, Jin-Song; Li, Feng-Hua

    2018-06-01

    To clarify whether the quantitative parameters of contrast-enhanced ultrasound (CEUS) can be used to predict pathological complete response (pCR) in patients with locally advanced breast cancer receiving neoadjuvant chemotherapy (NAC). Fifty-one patients with histologically proved locally advanced breast cancer scheduled for NAC were enrolled. The quantitative data for CEUS and the tumor diameter were collected at baseline and before surgery, and compared with the pathological response. Multiple logistic regression analysis was performed to examine the quantitative CEUS parameters and the tumor diameter as predictors of pCR, and receiver operating characteristic (ROC) curve analysis was used as a summary statistic. Multiple logistic regression analysis revealed that PEAK (the maximum intensity of the time-intensity curve during bolus transit), PEAK%, TTP% (time to peak), and diameter% were significant independent predictors of pCR; the area under the ROC curve was 0.932 (Az1), and the sensitivity and specificity to predict pCR were 93.7% and 80.0%. The area under the ROC curve for the quantitative parameters was 0.927 (Az2), and the sensitivity and specificity to predict pCR were 81.2% and 94.3%. For diameter%, the area under the ROC curve was 0.786 (Az3), and the sensitivity and specificity to predict pCR were 93.8% and 54.3%. The values of Az1 and Az2 were significantly higher than that of Az3 (P = 0.027 and P = 0.034, respectively). However, there was no significant difference between Az1 and Az2 (P = 0.825). Quantitative analysis of tumor blood perfusion with CEUS is superior to diameter% for predicting pCR, and can be used as a functional technique to evaluate tumor response to NAC. Copyright © 2018. Published by Elsevier B.V.
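
    A minimal sketch of this type of analysis (multiple logistic regression on CEUS-derived parameters followed by ROC evaluation); the synthetic arrays below are placeholders, not the study data.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 51                                            # cohort size from the abstract
    peak_pct = rng.normal(-40, 20, n)                 # hypothetical % change in PEAK
    ttp_pct  = rng.normal(10, 15, n)                  # hypothetical % change in time to peak
    diam_pct = rng.normal(-30, 15, n)                 # hypothetical % change in diameter
    pcr      = (rng.random(n) < 0.3).astype(int)      # hypothetical pCR labels

    X = np.column_stack([peak_pct, ttp_pct, diam_pct])
    model = LogisticRegression(max_iter=1000).fit(X, pcr)
    auc = roc_auc_score(pcr, model.predict_proba(X)[:, 1])
    print(f"apparent AUC of the combined CEUS model: {auc:.2f}")
    ```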

  11. Quantitative prediction of solute strengthening in aluminium alloys.

    PubMed

    Leyson, Gerard Paul M; Curtin, William A; Hector, Louis G; Woodward, Christopher F

    2010-09-01

    Despite significant advances in computational materials science, a quantitative, parameter-free prediction of the mechanical properties of alloys has been difficult to achieve from first principles. Here, we present a new analytic theory that, with input from first-principles calculations, is able to predict the strengthening of aluminium by substitutional solute atoms. Solute-dislocation interaction energies in and around the dislocation core are first calculated using density functional theory and a flexible-boundary-condition method. An analytic model for the strength, or stress to move a dislocation, owing to the random field of solutes, is then presented. The theory, which has no adjustable parameters and is extendable to other metallic alloys, predicts both the energy barriers to dislocation motion and the zero-temperature flow stress, allowing for predictions of finite-temperature flow stresses. Quantitative comparisons with experimental flow stresses at temperature T=78 K are made for Al-X alloys (X=Mg, Si, Cu, Cr) and good agreement is obtained.

  12. Modelling multi-hazard hurricane damages on an urbanized coast with a Bayesian Network approach

    USGS Publications Warehouse

    van Verseveld, H.C.W.; Van Dongeren, A. R.; Plant, Nathaniel G.; Jäger, W.S.; den Heijer, C.

    2015-01-01

    Hurricane flood impacts to residential buildings in coastal zones are caused by a number of hazards, such as inundation, overflow currents, erosion, and wave attack. However, traditional hurricane damage models typically make use of stage-damage functions, where the stage is related to flooding depth only. Moreover, these models are deterministic and do not consider the large amount of uncertainty associated with both the processes themselves and with the predictions. This uncertainty becomes increasingly important when multiple hazards (flooding, wave attack, erosion, etc.) are considered simultaneously. This paper focusses on establishing relationships between observed damage and multiple hazard indicators in order to make better probabilistic predictions. The concept consists of (1) determining Local Hazard Indicators (LHIs) from a hindcasted storm with use of a nearshore morphodynamic model, XBeach, and (2) coupling these LHIs and building characteristics to the observed damages. We chose a Bayesian Network approach in order to make this coupling and used the LHIs ‘Inundation depth’, ‘Flow velocity’, ‘Wave attack’, and ‘Scour depth’ to represent flooding, current, wave impacts, and erosion related hazards. The coupled hazard model was tested against four thousand damage observations from a case site at the Rockaway Peninsula, NY, that was impacted by Hurricane Sandy in late October, 2012. The model was able to accurately distinguish ‘Minor damage’ from all other outcomes 95% of the time and could distinguish areas that were affected by the storm, but not severely damaged, 68% of the time. For the most heavily damaged buildings (‘Major Damage’ and ‘Destroyed’), projections of the expected damage underestimated the observed damage. The model demonstrated that including multiple hazards doubled the prediction skill, with Log-Likelihood Ratio test (a measure of improved accuracy and reduction in uncertainty) scores between 0.02 and 0.17 when only one hazard is considered and a score of 0.37 when multiple hazards are considered simultaneously. The LHIs with the most predictive skill were ‘Inundation depth’ and ‘Wave attack’. The Bayesian Network approach has several advantages over the market-standard stage-damage functions: the predictive capacity of multiple indicators can be combined; probabilistic predictions can be obtained, which include uncertainty; and quantitative as well as descriptive information can be used simultaneously.
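
    One way to picture the coupling step is as a conditional probability table linking discretised local hazard indicators to observed damage states. The sketch below estimates such a table from hypothetical observations; the real study uses a full Bayesian Network, which additionally handles more variables, missing evidence, and probabilistic inference.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    n = 4000                                              # order of the Sandy damage observations
    df = pd.DataFrame({
        "inundation_m": rng.gamma(2.0, 0.5, n),           # hypothetical inundation depths
        "wave_attack":  rng.random(n) < 0.25,             # hypothetical wave-impact flag
        "damage": rng.choice(["Minor", "Moderate", "Major", "Destroyed"], n,
                             p=[0.6, 0.25, 0.1, 0.05]),   # hypothetical damage states
    })
    df["inundation_bin"] = pd.cut(df["inundation_m"], [0, 0.5, 1.5, np.inf],
                                  labels=["low", "medium", "high"])

    # P(damage | inundation_bin, wave_attack): normalised counts per hazard combination.
    cpt = (df.groupby(["inundation_bin", "wave_attack"], observed=True)["damage"]
             .value_counts(normalize=True)
             .unstack(fill_value=0.0))
    print(cpt.round(2))
    ```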

  13. Investigating Children's Abilities to Count and Make Quantitative Comparisons

    ERIC Educational Resources Information Center

    Lee, Joohi; Md-Yunus, Sham'ah

    2016-01-01

    This study was designed to investigate children's abilities to count and make quantitative comparisons. In addition, this study utilized reasoning questions (i.e., how did you know?). Thirty-four preschoolers, mean age 4.5 years old, participated in the study. According to the results, 89% of the children (n = 30) were able to do rote counting and…

  14. Evaluation of chemotherapy response in ovarian cancer treatment using quantitative CT image biomarkers: a preliminary study

    NASA Astrophysics Data System (ADS)

    Qiu, Yuchen; Tan, Maxine; McMeekin, Scott; Thai, Theresa; Moore, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2015-03-01

    The purpose of this study is to identify and apply quantitative image biomarkers for early prediction of tumor response to chemotherapy among ovarian cancer patients who participated in clinical trials testing new drugs. In the experiment, we retrospectively selected 30 cases from patients who participated in Phase I clinical trials of new drugs or drug agents for ovarian cancer treatment. Each case is composed of two sets of CT images acquired pre- and post-treatment (4-6 weeks after starting treatment). A computer-aided detection (CAD) scheme was developed to extract and analyze the quantitative image features of the metastatic tumors previously tracked by the radiologists using the standard Response Evaluation Criteria in Solid Tumors (RECIST) guideline. The CAD scheme first segmented 3-D tumor volumes from the background using a hybrid tumor segmentation scheme. Then, for each segmented tumor, CAD computed three quantitative image features: the change in tumor volume, tumor CT number (density), and density variance. The feature changes were calculated between the matched tumors tracked on the CT images acquired pre- and post-treatment. Finally, CAD predicted each patient's 6-month progression-free survival (PFS) using a decision-tree based classifier. The performance of the CAD scheme was compared with the RECIST category. The results show that the CAD scheme achieved a prediction accuracy of 76.7% (23/30 cases) with a Kappa coefficient of 0.493, which is significantly higher than the performance of RECIST prediction, with a prediction accuracy and Kappa coefficient of 60% (17/30) and 0.062, respectively. This study demonstrated the feasibility of analyzing quantitative image features to improve the accuracy of early prediction of tumor response to new drugs or therapeutic methods for ovarian cancer patients.
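
    A highly simplified sketch of the prediction step, in which percentage feature changes between pre- and post-treatment values feed a decision-tree classifier for 6-month PFS; the values are synthetic and the 3-D tumor segmentation step is omitted.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    n = 30                                             # number of cases in the abstract
    pre  = rng.uniform(5, 50, (n, 3))                  # hypothetical pre-treatment volume, density, variance
    post = pre * rng.uniform(0.5, 1.2, (n, 3))         # hypothetical post-treatment values
    pct_change = 100.0 * (post - pre) / pre            # the three quantitative image features

    # Synthetic 6-month PFS labels: volume shrinkage drives outcome, plus some noise.
    pfs_6mo = ((pct_change[:, 0] < -10) ^ (rng.random(n) < 0.2)).astype(int)

    clf = DecisionTreeClassifier(max_depth=2, random_state=0)
    print("cross-validated accuracy:", cross_val_score(clf, pct_change, pfs_6mo, cv=5).mean())
    ```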

  15. Is Directivity Still Effective in a PSHA Framework?

    NASA Astrophysics Data System (ADS)

    Spagnuolo, E.; Herrero, A.; Cultrera, G.

    2008-12-01

    Source rupture parameters, like directivity, modulate the energy release, causing variations in the radiated signal amplitude. They therefore affect empirical predictive equations and, as a consequence, seismic hazard assessment. Classical probabilistic hazard evaluations, e.g. Cornell (1968), use very simple predictive equations based only on magnitude and distance, which do not account for variables describing the rupture process. Nowadays, however, a few predictive equations (e.g. Somerville 1997, Spudich and Chiou 2008) take rupture directivity into account, and a few implementations have been made in a PSHA framework (e.g. Convertito et al. 2006, Rowshandel 2006). In practice, these new empirical predictive models incorporate the rupture propagation effects quantitatively through the introduction of variables like rake, azimuth, rupture velocity and laterality. The contribution of all these variables is summarized in corrective factors derived from measuring the differences between the real data and the predicted ones. It is therefore possible to keep the older computation, making use of a simple predictive model, and to incorporate the directivity effect through the corrective factors, with each supplementary variable adding a new integral over the parametric space. The difficulty, however, lies in constraining the parameter distribution functions. We present preliminary results for ad hoc distributions (Gaussian, uniform) in order to test the impact of incorporating directivity into PSHA models. We demonstrate that incorporating directivity in PSHA by means of the new predictive equations may lead to strong percentage variations in the hazard assessment.

  16. Laboratory evolution of the migratory polymorphism in the sand cricket: combining physiology with quantitative genetics.

    PubMed

    Roff, Derek A; Fairbairn, Daphne J

    2007-01-01

    Predicting evolutionary change is the central goal of evolutionary biology because it is the primary means by which we can test evolutionary hypotheses. In this article, we analyze the pattern of evolutionary change in a laboratory population of the wing-dimorphic sand cricket Gryllus firmus resulting from relaxation of selection favoring the migratory (long-winged) morph. Based on a well-characterized trade-off between fecundity and flight capability, we predict that evolution in the laboratory environment should result in a reduction in the proportion of long-winged morphs. We also predict increased fecundity and reduced functionality and weight of the major flight muscles in long-winged females but little change in short-winged (flightless) females. Based on quantitative genetic theory, we predict that the regression equation describing the trade-off between ovary weight and weight of the major flight muscles will show a change in its intercept but not in its slope. Comparisons across generations verify all of these predictions. Further, using values of genetic parameters estimated from previous studies, we show that a quantitative genetic simulation model can account for not only the qualitative changes but also the evolutionary trajectory. These results demonstrate the power of combining quantitative genetic and physiological approaches for understanding the evolution of complex traits.

  17. New horizons in mouse immunoinformatics: reliable in silico prediction of mouse class I histocompatibility major complex peptide binding affinity.

    PubMed

    Hattotuwagama, Channa K; Guan, Pingping; Doytchinova, Irini A; Flower, Darren R

    2004-11-21

    Quantitative structure-activity relationship (QSAR) analysis is a main cornerstone of modern informatic disciplines. Predictive computational models, based on QSAR technology, of peptide-major histocompatibility complex (MHC) binding affinity have now become a vital component of modern day computational immunovaccinology. Historically, such approaches have been built around semi-qualitative, classification methods, but these are now giving way to quantitative regression methods. The additive method, an established immunoinformatics technique for the quantitative prediction of peptide-protein affinity, was used here to identify the sequence dependence of peptide binding specificity for three mouse class I MHC alleles: H2-D(b), H2-K(b) and H2-K(k). As we show, in terms of reliability the resulting models represent a significant advance on existing methods. They can be used for the accurate prediction of T-cell epitopes and are freely available online ( http://www.jenner.ac.uk/MHCPred).
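
    The additive method models affinity as a sum of position-wise amino-acid contributions. The sketch below fits such contributions by ridge regression on one-hot-encoded 9-mers; the published method uses partial least squares on curated binding data, and the peptides and affinities here are random placeholders.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
    rng = np.random.default_rng(3)

    def one_hot(peptide):
        """Encode a 9-mer as a flat vector: 9 positions x 20 amino acids."""
        vec = np.zeros(9 * 20)
        for pos, aa in enumerate(peptide):
            vec[pos * 20 + AMINO_ACIDS.index(aa)] = 1.0
        return vec

    peptides = ["".join(rng.choice(list(AMINO_ACIDS), 9)) for _ in range(200)]  # placeholders
    pic50 = rng.normal(6.0, 1.0, len(peptides))                                 # placeholder affinities

    X = np.array([one_hot(p) for p in peptides])
    model = Ridge(alpha=1.0).fit(X, pic50)      # each coefficient ~ the contribution of one
                                                # amino acid at one peptide position
    print("predicted pIC50 of first peptide:", round(model.predict(X[:1])[0], 2))
    ```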

  18. [Influence of sample surface roughness on mathematical model of NIR quantitative analysis of wood density].

    PubMed

    Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun

    2007-09-01

    Near infrared spectroscopy is widely used as a quantitative method, with multivariate regression techniques used to build the prediction models; however, the accuracy of the results is affected by many factors. In the present paper, the influence of different sample surface roughness on the mathematical model for NIR quantitative analysis of wood density was studied. The experiments showed that when the roughness of the predicted samples was consistent with that of the calibration samples, the prediction was good; otherwise, the error was much higher. The roughness-mixed model was more flexible and adaptable to different sample roughness, and its prediction ability was much better than that of the single-roughness model.

  19. Respiratory trace feature analysis for the prediction of respiratory-gated PET quantification.

    PubMed

    Wang, Shouyi; Bowen, Stephen R; Chaovalitwongse, W Art; Sandison, George A; Grabowski, Thomas J; Kinahan, Paul E

    2014-02-21

    The benefits of respiratory gating in quantitative PET/CT vary tremendously between individual patients. Respiratory pattern is among many patient-specific characteristics that are thought to play an important role in gating-induced imaging improvements. However, the quantitative relationship between patient-specific characteristics of respiratory pattern and improvements in quantitative accuracy from respiratory-gated PET/CT has not been well established. If such a relationship could be estimated, then patient-specific respiratory patterns could be used to prospectively select appropriate motion compensation during image acquisition on a per-patient basis. This study was undertaken to develop a novel statistical model that predicts quantitative changes in PET/CT imaging due to respiratory gating. Free-breathing static FDG-PET images without gating and respiratory-gated FDG-PET images were collected from 22 lung and liver cancer patients on a PET/CT scanner. PET imaging quality was quantified with peak standardized uptake value (SUV(peak)) over lesions of interest. Relative differences in SUV(peak) between static and gated PET images were calculated to indicate quantitative imaging changes due to gating. A comprehensive multidimensional extraction of the morphological and statistical characteristics of respiratory patterns was conducted, resulting in 16 features that characterize representative patterns of a single respiratory trace. The six most informative features were subsequently extracted using a stepwise feature selection approach. The multiple-regression model was trained and tested based on a leave-one-subject-out cross-validation. The predicted quantitative improvements in PET imaging achieved an accuracy higher than 90% using a criterion with a dynamic error-tolerance range for SUV(peak) values. The results of this study suggest that our prediction framework could be applied to determine which patients would likely benefit from respiratory motion compensation when clinicians quantitatively assess PET/CT for therapy target definition and response assessment.
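
    A rough sketch of the modelling pipeline described, with scikit-learn's SequentialFeatureSelector standing in for the paper's stepwise selection and LeaveOneGroupOut for leave-one-subject-out validation; the 22 x 16 feature matrix and outcome are synthetic placeholders (and, for brevity, feature selection is done once on all data rather than nested inside the cross-validation).

    ```python
    import numpy as np
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict

    rng = np.random.default_rng(4)
    n_patients, n_features = 22, 16                  # counts from the abstract
    X = rng.normal(size=(n_patients, n_features))    # hypothetical respiratory-trace features
    y = X[:, :3] @ np.array([5.0, -3.0, 2.0]) + rng.normal(0, 1, n_patients)  # synthetic % change in SUVpeak
    groups = np.arange(n_patients)                   # one trace per patient -> leave-one-subject-out

    selector = SequentialFeatureSelector(LinearRegression(), n_features_to_select=6,
                                         direction="forward").fit(X, y)
    X_sel = selector.transform(X)
    pred = cross_val_predict(LinearRegression(), X_sel, y,
                             cv=LeaveOneGroupOut(), groups=groups)
    print("leave-one-subject-out RMSE:", round(float(np.sqrt(np.mean((pred - y) ** 2))), 2))
    ```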

  20. Respiratory trace feature analysis for the prediction of respiratory-gated PET quantification

    NASA Astrophysics Data System (ADS)

    Wang, Shouyi; Bowen, Stephen R.; Chaovalitwongse, W. Art; Sandison, George A.; Grabowski, Thomas J.; Kinahan, Paul E.

    2014-02-01

    The benefits of respiratory gating in quantitative PET/CT vary tremendously between individual patients. Respiratory pattern is among many patient-specific characteristics that are thought to play an important role in gating-induced imaging improvements. However, the quantitative relationship between patient-specific characteristics of respiratory pattern and improvements in quantitative accuracy from respiratory-gated PET/CT has not been well established. If such a relationship could be estimated, then patient-specific respiratory patterns could be used to prospectively select appropriate motion compensation during image acquisition on a per-patient basis. This study was undertaken to develop a novel statistical model that predicts quantitative changes in PET/CT imaging due to respiratory gating. Free-breathing static FDG-PET images without gating and respiratory-gated FDG-PET images were collected from 22 lung and liver cancer patients on a PET/CT scanner. PET imaging quality was quantified with peak standardized uptake value (SUVpeak) over lesions of interest. Relative differences in SUVpeak between static and gated PET images were calculated to indicate quantitative imaging changes due to gating. A comprehensive multidimensional extraction of the morphological and statistical characteristics of respiratory patterns was conducted, resulting in 16 features that characterize representative patterns of a single respiratory trace. The six most informative features were subsequently extracted using a stepwise feature selection approach. The multiple-regression model was trained and tested based on a leave-one-subject-out cross-validation. The predicted quantitative improvements in PET imaging achieved an accuracy higher than 90% using a criterion with a dynamic error-tolerance range for SUVpeak values. The results of this study suggest that our prediction framework could be applied to determine which patients would likely benefit from respiratory motion compensation when clinicians quantitatively assess PET/CT for therapy target definition and response assessment.

  1. Strategic Regulatory Evaluation and Endorsement of the Hollow Fiber Tuberculosis System as a Novel Drug Development Tool.

    PubMed

    Romero, Klaus; Clay, Robert; Hanna, Debra

    2015-08-15

    The first nonclinical drug development tool (DDT) advanced by the Critical Path to TB Drug Regimens (CPTR) Initiative through a regulatory review process has been endorsed by leading global regulatory authorities. DDTs with demonstrated predictive accuracy for clinical and microbiological outcomes are needed to support decision making. Regulatory endorsement of these DDTs is critical for drug developers, as it promotes confidence in their use in Investigational New Drug and New Drug Application filings. The in vitro hollow fiber system model of tuberculosis (HFS-TB) is able to recapitulate concentration-time profiles (exposure) observed in patients for single drugs and combinations, by evaluating exposure measures for the ability to kill tuberculosis in different physiologic conditions. Monte Carlo simulations make this quantitative output useful to inform susceptibility breakpoints, dosage, and optimal combination regimens in patients, and to design nonclinical experiments in animal models. The Pre-Clinical and Clinical Sciences Working Group within CPTR executed an evidence-based evaluation of the HFS-TB for predictive accuracy. This extensive effort was enabled through the collaboration of subject matter experts representing the pharmaceutical industry, academia, product development partnerships, and regulatory authorities including the Food and Drug Administration (FDA) and the European Medicines Agency (EMA). A comprehensive analysis plan following the regulatory guidance documents for DDT qualification was developed, followed by individual discussions with the FDA and the EMA. The results from the quantitative analyses were submitted to both agencies, pursuing regulatory DDT endorsement. The EMA Qualification Opinion for the HFS-TB DDT was published 26 January 2015 (available at: http://www.ema.europa.eu/ema/index.jsp?curl=pages/regulation/document_listing/document_listing_000319.jsp). © The Author 2015. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. An attentional drift diffusion model over binary-attribute choice.

    PubMed

    Fisher, Geoffrey

    2017-11-01

    In order to make good decisions, individuals need to identify and properly integrate information about various attributes associated with a choice. Since choices are often complex and made rapidly, they are typically affected by contextual variables that are thought to influence how much attention is paid to different attributes. I propose a modification of the attentional drift-diffusion model, the binary-attribute attentional drift diffusion model (baDDM), which describes the choice process over simple binary-attribute choices and how it is affected by fluctuations in visual attention. Using an eye-tracking experiment, I find the baDDM makes accurate quantitative predictions about several key variables including choices, reaction times, and how these variables are correlated with attention to two attributes in an accept-reject decision. Furthermore, I estimate an attribute-based fixation bias that suggests attention to an attribute increases its subjective weight by 5%, while the unattended attribute's weight is decreased by 10%. Copyright © 2017 Elsevier B.V. All rights reserved.
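
    A toy simulation of the core mechanism: attention inflates the weight of the fixated attribute and discounts the unattended one while noisy evidence accumulates to a boundary. The weights, noise level, boundary, and time step are illustrative, not the fitted values; only the +5%/-10% attentional adjustment is taken from the abstract.

    ```python
    import numpy as np

    def simulate_baddm(a1, a2, attend_first, w=(1.0, 1.0), bias=(1.05, 0.90),
                       noise=0.1, boundary=1.0, dt=0.01, rng=None):
        """Accumulate evidence for accept vs. reject over a two-attribute option.
        The fixated attribute's weight is inflated (bias[0]) and the unattended
        attribute's weight is deflated (bias[1]), in the spirit of the reported
        +5% / -10% attentional effect. Returns (accept?, reaction time)."""
        rng = rng or np.random.default_rng()
        evidence, t = 0.0, 0
        while abs(evidence) < boundary and t < 100_000:
            fixate_1 = attend_first[t % len(attend_first)]     # cycle a fixation pattern
            drift = (w[0] * a1 * (bias[0] if fixate_1 else bias[1])
                     + w[1] * a2 * (bias[0] if not fixate_1 else bias[1]))
            evidence += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += 1
        return evidence > 0, t * dt

    # Example: positive attribute 1, negative attribute 2, mostly fixating attribute 1.
    choice, rt = simulate_baddm(0.6, -0.4, attend_first=[True, True, True, False])
    print(choice, round(rt, 2))
    ```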

  3. The psychology of martyrdom: making the ultimate sacrifice in the name of a cause.

    PubMed

    Bélanger, Jocelyn J; Caouette, Julie; Sharvit, Keren; Dugas, Michelle

    2014-09-01

    Martyrdom is defined as the psychological readiness to suffer and sacrifice one's life for a cause. An integrative set of 8 studies investigated the concept of martyrdom by creating a new tool to quantitatively assess individuals' propensity toward self-sacrifice. Studies 1A-1C consisted of psychometric work attesting to the scale's unidimensionality, internal consistency, and temporal stability while examining its nomological network. Studies 2A-2B focused on the scale's predictive validity, especially as it relates to extreme behaviors and suicidal terrorism. Studies 3-5 focused on the influence of self-sacrifice on automatic decision making, costly and altruistic behaviors, and morality judgments. Results involving more than 2,900 participants from different populations, including a terrorist sample, supported the proposed conceptualization of martyrdom and demonstrated its importance for a vast repertoire of cognitive, emotional, and behavioral phenomena. Implications and future directions for the psychology of terrorism are discussed. 2014 APA, all rights reserved

  4. Intrinsic fluctuations of the proton saturation momentum scale in high multiplicity p+p collisions

    DOE PAGES

    McLerran, Larry; Tribedy, Prithwish

    2015-11-02

    High multiplicity events in p+p collisions are studied using the theory of the Color Glass Condensate. Here, we show that intrinsic fluctuations of the proton saturation momentum scale are needed in addition to the sub-nucleonic color charge fluctuations to explain the very high multiplicity tail of distributions in p+p collisions. It is presumed that the origin of such intrinsic fluctuations is non-perturbative in nature. Classical Yang Mills simulations using the IP-Glasma model are performed to make quantitative estimations. Furthermore, we find that fluctuations as large as O(1) of the average values of the saturation momentum scale can lead to rare high multiplicity events seen in p+p data at RHIC and LHC energies. Using the available data on multiplicity distributions we try to constrain the distribution of the proton saturation momentum scale and make predictions for the multiplicity distribution in 13 TeV p+p collisions.

  5. The Sexual Stratification Hypothesis: Is the Decision to Arrest Influenced by the Victim/Suspect Racial/Ethnic Dyad?

    PubMed

    O'Neal, Eryn Nicole; Beckman, Laura O; Spohn, Cassia

    2016-05-24

    The sexual stratification hypothesis suggests that criminal justice responses to sexual victimization will differ depending on the victim/suspect racial/ethnic dyad. Previous research examining the sexual stratification hypothesis has primarily focused on court processes, and the small body of literature examining arrest decisions is dated. There remains substantial opportunity for testing the sexual stratification hypothesis at response stages apart from the court level (i.e., arrest). Using quantitative data on 655 sexual assault complaints that were reported to the Los Angeles County Sheriff's Department (LASD) and the Los Angeles Police Department (LAPD) in 2008, this study examines the effect of the victim/suspect racial/ethnic dyad on the decision to arrest. Findings suggest that police consider the victim/suspect racial/ethnic dyad when making arrest decisions. In addition, victim characteristics, strength of evidence indicators, and measures of case factors predict the police decision to make an arrest. © The Author(s) 2016.

  6. Can Global Weed Assemblages Be Used to Predict Future Weeds?

    PubMed Central

    Morin, Louise; Paini, Dean R.; Randall, Roderick P.

    2013-01-01

    Predicting which plant taxa are more likely to become weeds in a region presents significant challenges to both researchers and government agencies. Often it is done in a qualitative or semi-quantitative way. In this study, we explored the potential of using the quantitative self-organising map (SOM) approach to analyse global weed assemblages and estimate likelihoods of plant taxa becoming weeds before and after they have been moved to a new region. The SOM approach examines plant taxa associations by analysing where a taxon is recorded as a weed and what other taxa are recorded as weeds in those regions. The dataset analysed was extracted from a pre-existing, extensive worldwide database of plant taxa recorded as weeds or other related status and, following reformatting, included 187 regions and 6690 plant taxa. To assess the value of the SOM approach we selected Australia as a case study. We found that the key and most important limitation in using such an analytical approach lies with the dataset used. The classification of a taxon as a weed in the literature is not often based on actual data that document the economic, environmental and/or social impact of the taxon, but mostly based on human perceptions that the taxon is troublesome or simply not wanted in a particular situation. The adoption of consistent and objective criteria that incorporate a standardized approach for impact assessment of plant taxa will be necessary to develop a new global database suitable for making predictions regarding weediness using methods like SOM. It may, however, be more realistic to opt for a classification system that focuses on the invasive characteristics of plant taxa without any inference to impacts, which, to be defined, would require some level of research to avoid bias from human perceptions and value systems. PMID:23393591

  7. Monitoring with Trackers Based on Semi-Quantitative Models

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin

    1997-01-01

    In three years of NASA-sponsored research preceding this project, we successfully developed a technology for: (1) building qualitative and semi-quantitative models from libraries of model-fragments, (2) simulating these models to predict future behaviors with the guarantee that all possible behaviors are covered, (3) assimilating observations into behaviors, shrinking uncertainty so that incorrect models are eventually refuted and correct models make stronger predictions for the future. In our object-oriented framework, a tracker is an object which embodies the hypothesis that the available observation stream is consistent with a particular behavior of a particular model. The tracker maintains its own status (consistent, superseded, or refuted), and answers questions about its explanation for past observations and its predictions for the future. In the MIMIC approach to monitoring of continuous systems, a number of trackers are active in parallel, representing alternate hypotheses about the behavior of a system. This approach is motivated by the need to avoid 'system accidents' [Perrow, 1985] due to operator fixation on a single hypothesis, as for example at Three Mile Island. As we began to address these issues, we focused on three major research directions that we planned to pursue over a three-year project: (1) tractable qualitative simulation, (2) semiquantitative inference, and (3) tracking set management. Unfortunately, funding limitations made it impossible to continue past year one. Nonetheless, we made major progress in the first two of these areas. Progress in the third area was slower because the graduate student working on that aspect of the project decided to leave school and take a job in industry. I have enclosed a set of abstracts of selected papers on the work described below. Several papers that draw on the research supported during this period appeared in print after the grant period ended.

  8. Publication metrics and success on the academic job market.

    PubMed

    van Dijk, David; Manor, Ohad; Carey, Lucas B

    2014-06-02

    The number of applicants vastly outnumbers the available academic faculty positions. What makes a successful academic job market candidate is the subject of much current discussion [1-4]. Yet, so far there has been no quantitative analysis of who becomes a principal investigator (PI). We here use a machine-learning approach to predict who becomes a PI, based on data from over 25,000 scientists in PubMed. We show that success in academia is predictable. It depends on the number of publications, the impact factor (IF) of the journals in which those papers are published, and the number of papers that receive more citations than average for the journal in which they were published (citations/IF). However, both the scientist's gender and the rank of their university are also of importance, suggesting that non-publication features play a statistically significant role in the academic hiring process. Our model (www.pipredictor.com) allows anyone to calculate their likelihood of becoming a PI. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Wind-Driven Formation of Ice Bridges in Straits.

    PubMed

    Rallabandi, Bhargav; Zheng, Zhong; Winton, Michael; Stone, Howard A

    2017-03-24

    Ice bridges are static structures composed of tightly packed sea ice that can form during the course of its flow through a narrow strait. Despite their important role in local ecology and climate, the formation and breakup of ice bridges is not well understood and has proved difficult to predict. Using long-wave approximations and a continuum description of sea ice dynamics, we develop a one-dimensional theory for the wind-driven formation of ice bridges in narrow straits, which is verified against direct numerical simulations. We show that for a given wind stress and minimum and maximum channel widths, a steady-state ice bridge can only form beyond a critical value of the thickness and the compactness of the ice field. The theory also makes quantitative predictions for ice fluxes, which are particularly useful to estimate the ice export associated with the breakup of ice bridges. We note that similar ideas are applicable to dense granular flows in confined geometries.

  10. The magnetospheric electric field and convective processes as diagnostics of the IMF and solar wind

    NASA Technical Reports Server (NTRS)

    Kaye, S. M.

    1979-01-01

    Indirect measurements of the convection field, as well as direct measurements of the ionospheric electric field, provide a means to at least monitor solar wind processes quantitatively. For instance, asymmetries in the ionospheric electric field and ionospheric Hall currents over the polar cap reflect the solar wind sector polarity. A stronger electric field, and thus convective flow, is found on the side of the polar cap where the y component of the IMF is parallel to the y component of the geomagnetic field. Additionally, the magnitude of the electric field and convective flow increases with southward B_z and/or solar wind velocity, and thus may indicate the arrival at Earth of an interaction region in the solar wind. It is apparent that processes associated with the convection electric field may be used to predict large scale features in the solar wind; however, with present empirical knowledge it is not possible to make quantitative predictions of individual solar wind or IMF parameters.

  11. Choice theories: What are they good for?☆

    PubMed Central

    Johnson, Eric J.

    2013-01-01

    Simonson et al. present an ambitious sketch of an integrative theory of context. Provoked by this thoughtful proposal, I discuss what the function of theories of choice should be in the coming decades. Traditionally, choice models and theory have attempted to predict choices as a function of the attributes of options. I argue that to be truly useful, they need to generate specific and quantitative predictions of the effect of the choice environment upon choice probability. To do this, we need to focus on rigorously modeling and measuring the underlying processes causing these effects, and I use the Simonson et al. proposal to provide some examples. I also present some examples from research in decision-making and decision neuroscience, and argue that models that fail, and fail spectacularly, are particularly useful. I close with a challenge: How would consumer researchers aid the design of real-world choice environments such as the health exchanges under the Patient Protection and Affordable Care Act? PMID:23794793

  12. Contamination of packaged food by substances migrating from a direct-contact plastic layer: Assessment using a generic quantitative household scale methodology.

    PubMed

    Vitrac, Olivier; Challe, Blandine; Leblanc, Jean-Charles; Feigenbaum, Alexandre

    2007-01-01

    The contamination risk in 12 packaged foods by substances released from the plastic contact layer has been evaluated using a novel modeling technique, which predicts the migration that accounts for (i) possible variations in the time of contact between foodstuffs and packaging and (ii) uncertainty in physico-chemical parameters used to predict migration. Contamination data, which are subject to variability and uncertainty, are derived through a stochastic resolution of transport equations, which control the migration into food. Distributions of contact times between packaging materials and foodstuffs were reconstructed from the volumes and frequencies of purchases of a given panel of 6422 households, making assumptions about household storage behaviour. The risk of contamination of the packaged foods was estimated for styrene (a monomer found in polystyrene yogurt pots) and 2,6-di-tert-butyl-4-hydroxytoluene (a representative of the widely used phenolic antioxidants). The results are analysed and discussed regarding sensitivity of the model to the set parameters and chosen assumptions.
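
    The stochastic element can be pictured as Monte Carlo sampling of the contact time and of an uncertain diffusion coefficient inside a simple short-time migration approximation for a semi-infinite polymer layer. This classical approximation and all the distributions and numbers below are stand-ins for the paper's full transport model and household purchase data.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_draws = 10_000

    # Uncertain inputs (both hypothetical): migrant diffusion coefficient in the polymer
    # and storage/contact time reconstructed from purchase behaviour.
    D = rng.lognormal(mean=np.log(1e-14), sigma=1.0, size=n_draws)     # m^2/s
    t = rng.gamma(shape=2.0, scale=10 * 24 * 3600, size=n_draws)       # seconds (~weeks)

    c_p0 = 200.0          # initial migrant concentration in the polymer, mg/kg (assumed)
    rho_p = 1050.0        # polymer density, kg/m^3
    area_per_kg = 6.0     # contact area per kg of food, m^2/kg (assumed geometry)

    # Short-time migration from a semi-infinite layer into well-mixed food (Crank):
    # mass per contact area m/A = 2 * c_p0 * rho_p * sqrt(D * t / pi);
    # this overestimates migration at long times, when the layer becomes depleted.
    migrated_mg_per_kg_food = 2 * c_p0 * rho_p * np.sqrt(D * t / np.pi) * area_per_kg

    print("median and 95th percentile of migration (mg/kg food):",
          np.percentile(migrated_mg_per_kg_food, [50, 95]).round(3))
    ```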

  13. Frustrated spin chains in strong magnetic field: Dilute two-component Bose gas regime

    NASA Astrophysics Data System (ADS)

    Kolezhuk, A. K.; Heidrich-Meisner, F.; Greschner, S.; Vekua, T.

    2012-02-01

    We study the ground state of frustrated spin-S chains in a strong magnetic field in the immediate vicinity of saturation. In strongly frustrated chains, the magnon dispersion has two degenerate minima at inequivalent momenta ±Q, and just below the saturation field the system can be effectively represented as a dilute one-dimensional lattice gas of two species of bosons that correspond to magnons with momenta around ±Q. We present a theory of effective interactions in such a dilute magnon gas that allows us to make quantitative predictions for arbitrary values of the spin. With the help of this method, we are able to establish the magnetic phase diagram of frustrated chains close to saturation and study phase transitions between several nontrivial states, including a two-component Luttinger liquid, a vector chiral phase, and phases with bound magnons. We study those phase transitions numerically and find a good agreement with our analytical predictions.

  14. Novel risk predictor for thrombus deposition in abdominal aortic aneurysms

    NASA Astrophysics Data System (ADS)

    Nestola, M. G. C.; Gizzi, A.; Cherubini, C.; Filippi, S.; Succi, S.

    2015-10-01

    The identification of the basic mechanisms responsible for cardiovascular diseases stands as one of the most challenging problems in modern medical research, involving mechanisms that span a broad spectrum of space and time scales. Major implications for clinical practice and pre-emptive medicine concern the onset and development of intraluminal thrombus, for which effective clinical therapies require synthetic risk predictors/indicators capable of informing real-time decision-making protocols. In the present contribution, two novel hemodynamic synthetic indicators, based on a three-band decomposition (TBD) of the shear stress signal, are introduced. Extensive fluid-structure computer simulations of patient-specific scenarios confirm the enhanced risk-prediction capabilities of the TBD indicators. In particular, they permit a quantitative and accurate localization of the most likely thrombus deposition in realistic aortic geometries, where previous indicators would predict healthy operation. The proposed methodology is also shown to provide additional information and discrimination criteria on other factors of major clinical relevance, such as the size of the aneurysm.

  15. Applying quantitative adiposity feature analysis models to predict benefit of bevacizumab-based chemotherapy in ovarian cancer patients

    NASA Astrophysics Data System (ADS)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2016-03-01

    How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without bevacizumab-based chemotherapy, using multivariate statistical models built on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment visceral fat areas (VFA) and subcutaneous fat areas (SFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were each applied to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that, for all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association for either PFS or OS in the group of patients not receiving maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.

  16. Structured decision making for managing pneumonia epizootics in bighorn sheep

    USGS Publications Warehouse

    Sells, Sarah N.; Mitchell, Michael S.; Edwards, Victoria L.; Gude, Justin A.; Anderson, Neil J.

    2016-01-01

    Good decision-making is essential to conserving wildlife populations. Although there may be multiple ways to address a problem, perfect solutions rarely exist. Managers are therefore tasked with identifying decisions that will best achieve desired outcomes. Structured decision making (SDM) is a method of decision analysis used to identify the most effective, efficient, and realistic decisions while accounting for values and priorities of the decision maker. The stepwise process includes identifying the management problem, defining objectives for solving the problem, developing alternative approaches to achieve the objectives, and formally evaluating which alternative is most likely to accomplish the objectives. The SDM process can be more effective than informal decision-making because it provides a transparent way to quantitatively evaluate decisions for addressing multiple management objectives while incorporating science, uncertainty, and risk tolerance. To illustrate the application of this process to a management need, we present an SDM-based decision tool developed to identify optimal decisions for proactively managing risk of pneumonia epizootics in bighorn sheep (Ovis canadensis) in Montana. Pneumonia epizootics are a major challenge for managers due to long-term impacts to herds, epistemic uncertainty in timing and location of future epizootics, and consequent difficulty knowing how or when to manage risk. The decision tool facilitates analysis of alternative decisions for how to manage herds based on predictions from a risk model, herd-specific objectives, and predicted costs and benefits of each alternative. Decision analyses for 2 example herds revealed that meeting management objectives necessitates specific approaches unique to each herd. The analyses showed how and under what circumstances the alternatives are optimal compared to other approaches and current management. Managers can be confident that these decisions are effective, efficient, and realistic because they explicitly account for important considerations managers implicitly weigh when making decisions, including competing management objectives, uncertainty in potential outcomes, and risk tolerance.

  17. Use of machine learning methods to reduce predictive error of groundwater models.

    PubMed

    Xu, Tianfang; Valocchi, Albert J; Choi, Jaesik; Amir, Eyal

    2014-01-01

    Quantitative analyses of groundwater flow and transport typically rely on a physically-based model, which is inherently subject to error. Errors in model structure, parameter and data lead to both random and systematic error even in the output of a calibrated model. We develop complementary data-driven models (DDMs) to reduce the predictive error of physically-based groundwater models. Two machine learning techniques, the instance-based weighting and support vector regression, are used to build the DDMs. This approach is illustrated using two real-world case studies of the Republican River Compact Administration model and the Spokane Valley-Rathdrum Prairie model. The two groundwater models have different hydrogeologic settings, parameterization, and calibration methods. In the first case study, cluster analysis is introduced for data preprocessing to make the DDMs more robust and computationally efficient. The DDMs reduce the root-mean-square error (RMSE) of the temporal, spatial, and spatiotemporal prediction of piezometric head of the groundwater model by 82%, 60%, and 48%, respectively. In the second case study, the DDMs reduce the RMSE of the temporal prediction of piezometric head of the groundwater model by 77%. It is further demonstrated that the effectiveness of the DDMs depends on the existence and extent of the structure in the error of the physically-based model. © 2013, National GroundWater Association.
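
    The error-correction idea can be sketched by fitting a support vector regression to the residuals of a (here, mock) physically-based model and adding the predicted residual back to new simulations; the toy "physical model", features, and hyperparameters below are placeholders.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(6)
    n = 500
    X = rng.normal(size=(n, 4))                     # hypothetical inputs (pumping, recharge, stage, ...)
    true_head = 100 + 3 * X[:, 0] - 2 * X[:, 1] + np.sin(3 * X[:, 2])
    model_head = 100 + 3 * X[:, 0] - 2 * X[:, 1]    # mock physically-based model: misses one process
    residual = true_head - model_head               # structured model error to be learned

    X_tr, X_te, r_tr, r_te, m_tr, m_te, y_tr, y_te = train_test_split(
        X, residual, model_head, true_head, test_size=0.3, random_state=0)

    ddm = SVR(C=10.0, epsilon=0.01).fit(X_tr, r_tr)     # data-driven model of the error
    corrected = m_te + ddm.predict(X_te)                # physically-based prediction + learned correction

    rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
    print("RMSE before/after correction:",
          round(rmse(m_te, y_te), 3), round(rmse(corrected, y_te), 3))
    ```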

  18. Benefits of statistical molecular design, covariance analysis, and reference models in QSAR: a case study on acetylcholinesterase

    NASA Astrophysics Data System (ADS)

    Andersson, C. David; Hillgren, J. Mikael; Lindgren, Cecilia; Qian, Weixing; Akfur, Christine; Berg, Lotta; Ekström, Fredrik; Linusson, Anna

    2015-03-01

    Scientific disciplines such as medicinal and environmental chemistry, pharmacology, and toxicology deal with questions related to the effects that small organic compounds exert on biological targets and the compounds' physicochemical properties responsible for these effects. A common strategy in this endeavor is to establish structure-activity relationships (SARs). The aim of this work was to illustrate the benefits of performing a statistical molecular design (SMD) and proper statistical analysis of the molecules' properties before SAR and quantitative structure-activity relationship (QSAR) analysis. Our SMD followed by synthesis yielded a set of inhibitors of the enzyme acetylcholinesterase (AChE) that had very few inherent dependencies between the substructures in the molecules. If such dependencies exist, they cause severe errors in SAR interpretation and predictions by QSAR models, and leave a set of molecules less suitable for future decision-making. In our study, SAR and QSAR models could show which molecular substructures and physicochemical features were advantageous for AChE inhibition. Finally, the QSAR model was used for the prediction of the inhibition of AChE by an external prediction set of molecules. The accuracy of these predictions was assessed by statistical significance tests and by comparisons to simple but relevant reference models.

  19. Internal exposure dynamics drive the Adverse Outcome Pathways of synthetic glucocorticoids in fish

    NASA Astrophysics Data System (ADS)

    Margiotta-Casaluci, Luigi; Owen, Stewart F.; Huerta, Belinda; Rodríguez-Mozaz, Sara; Kugathas, Subramanian; Barceló, Damià; Rand-Weaver, Mariann; Sumpter, John P.

    2016-02-01

    The Adverse Outcome Pathway (AOP) framework represents a valuable conceptual tool to systematically integrate existing toxicological knowledge from a mechanistic perspective to facilitate predictions of chemical-induced effects across species. However, its application for decision-making requires the transition from qualitative to quantitative AOP (qAOP). Here we used a fish model and the synthetic glucocorticoid beclomethasone dipropionate (BDP) to investigate the role of chemical-specific properties, pharmacokinetics, and internal exposure dynamics in the development of qAOPs. We generated a qAOP network based on drug plasma concentrations and focused on immunodepression, skin androgenisation, disruption of gluconeogenesis and reproductive performance. We showed that internal exposure dynamics and chemical-specific properties influence the development of qAOPs and their predictive power. Comparing the effects of two different glucocorticoids, we highlight how relatively similar in vitro hazard-based indicators can lead to different in vivo risk. This discrepancy can be predicted by their different uptake potential, pharmacokinetic (PK) and pharmacodynamic (PD) profiles. We recommend that the development phase of qAOPs should include the application of species-specific uptake and physiologically-based PK/PD models. This integration will significantly enhance the predictive power, enabling a more accurate assessment of the risk and the reliable transferability of qAOPs across chemicals.

  20. Multi-scale modeling of diffusion-controlled reactions in polymers: renormalisation of reactivity parameters.

    PubMed

    Everaers, Ralf; Rosa, Angelo

    2012-01-07

    The quantitative description of polymeric systems requires hierarchical modeling schemes, which bridge the gap between the atomic scale, relevant to chemical or biomolecular reactions, and the macromolecular scale, where the longest relaxation modes occur. Here, we use the formalism for diffusion-controlled reactions in polymers developed by Wilemski, Fixman, and Doi to discuss the renormalisation of the reactivity parameters in polymer models with varying spatial resolution. In particular, we show that the adjustments are independent of chain length. As a consequence, it is possible to match reaction times between descriptions with different resolution for relatively short reference chains and to use the coarse-grained model to make quantitative predictions for longer chains. We illustrate our results by a detailed discussion of the classical problem of chain cyclization in the Rouse model, which offers the simplest example of a multi-scale description, if we consider differently discretized Rouse models for the same physical system. Moreover, we are able to explore different combinations of compact and non-compact diffusion in the local and large-scale dynamics by varying the embedding dimension.
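    As a worked sketch of the formalism referred to above (standard Wilemski-Fixman/Doi closure notation, not necessarily the paper's exact expressions), the mean reaction time for an intrachain contact with sink function S(R) and intrinsic reactivity q can be written as

    \[
    \tau \;\approx\; \frac{1}{q\,\langle S\rangle_{\mathrm{eq}}}
    \;+\; \int_{0}^{\infty}\!\left[\frac{\bigl\langle S(\mathbf{R}(t))\,S(\mathbf{R}(0))\bigr\rangle_{\mathrm{eq}}}{\langle S\rangle_{\mathrm{eq}}^{2}}-1\right]\mathrm{d}t .
    \]

    Only the first, reaction-controlled term contains the reactivity q, so matching reaction times between a fine-grained and a coarse-grained description amounts to renormalising q (and the capture region inside S) so that both resolutions give the same total τ; the abstract's key point is that this adjustment does not depend on chain length.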

  1. Forecasting the Environmental Impacts of New Energetic Materials

    DTIC Science & Technology

    2010-11-30

    Quantitative structure-activity relationships for chemical reductions of organic contaminants. Environmental Toxicology and Chemistry 22(8): 1733-1742. QSARs ...activity relationships [QSARs]) and the use of these properties to predict the chemical's fate with multimedia assessment models. SERDP has recently...has several parts, including the prediction of chemical properties (e.g., with quantitative structure-activity relationships [QSARs]) and the use of

  2. Biomechanical Model for Computing Deformations for Whole-Body Image Registration: A Meshless Approach

    PubMed Central

    Li, Mao; Miller, Karol; Joldes, Grand Roman; Kikinis, Ron; Wittek, Adam

    2016-01-01

    Patient-specific biomechanical models have been advocated as a tool for predicting deformations of soft body organs/tissue for medical image registration (aligning two sets of images) when differences between the images are large. However, complex and irregular geometry of the body organs makes generation of patient-specific biomechanical models very time consuming. Meshless discretisation has been proposed to solve this challenge. However, applications so far have been limited to 2-D models and computing single organ deformations. In this study, 3-D comprehensive patient-specific non-linear biomechanical models implemented using Meshless Total Lagrangian Explicit Dynamics (MTLED) algorithms are applied to predict a 3-D deformation field for whole-body image registration. Unlike a conventional approach which requires dividing (segmenting) the image into non-overlapping constituents representing different organs/tissues, the mechanical properties are assigned using the Fuzzy C-Means (FCM) algorithm without the image segmentation. Verification indicates that the deformations predicted using the proposed meshless approach are for practical purposes the same as those obtained using the previously validated finite element models. To quantitatively evaluate the accuracy of the predicted deformations, we determined the spatial misalignment between the registered (i.e. source images warped using the predicted deformations) and target images by computing the edge-based Hausdorff distance. The Hausdorff distance-based evaluation determines that our meshless models led to successful registration of the vast majority of the image features. PMID:26791945
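    To make the segmentation-free property assignment concrete, here is a hedged sketch (not the authors' MTLED implementation; the intensities and moduli are invented): a plain fuzzy c-means clustering of image intensities yields membership weights, and each integration point receives a membership-weighted material property rather than a hard segmentation label.

```python
# Hedged sketch: assign material properties from image intensities via fuzzy
# c-means, so each point gets a membership-weighted property, not a hard label.
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=100, tol=1e-6, seed=0):
    """Plain 1-D fuzzy c-means on intensities x; returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)              # memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        u_new /= u_new.sum(axis=1, keepdims=True)
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return centers, u

# Hypothetical image intensities sampled at meshless integration points.
rng = np.random.default_rng(1)
intensities = np.concatenate([rng.normal(mu, 8, 400) for mu in (40, 110, 200)])
centers, memberships = fuzzy_c_means(intensities, n_clusters=3)

# Hypothetical shear moduli (Pa), assigned to clusters in order of increasing intensity.
order = np.argsort(centers)
class_modulus = np.array([3.0e3, 1.0e4, 3.0e4])[np.argsort(order)]
point_modulus = memberships @ class_modulus        # membership-weighted property
print(point_modulus[:5])
```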

  3. Impact of root growth and root hydraulic conductance on water availability of young walnut trees

    NASA Astrophysics Data System (ADS)

    Jerszurki, Daniela; Couvreur, Valentin; Hopmans, Jan W.; Silva, Lucas C. R.; Shackel, Kenneth A.; de Souza, Jorge L. M.

    2015-04-01

    Walnut (Juglans regia L.) is a tree species of high economic importance in the Central Valley of California. This crop has particularly high water requirements, which makes it highly dependent on irrigation. The context of decreasing water availability in the state calls for efficient water management practices, which requires improving our understanding of the relationship between water application and walnut water availability. In addition to the soil's hydraulic conductivity, two plant properties are thought to control the supply of water from the bulk soil to the canopy: (i) root distribution and (ii) plant hydraulic conductance. Even though these properties are clearly linked to crop water requirements, their quantitative relation remains unclear. The aim of this study is to quantitatively explain walnut water requirements under water deficit from continuous measurements of its water consumption, soil and stem water potential, root growth and root system hydraulic conductance. For that purpose, a greenhouse experiment was conducted for a two-month period. Young walnut trees were planted in transparent cylindrical pots, equipped with: (i) rhizotron tubes, which allowed for non-invasive monitoring of root growth, (ii) pressure transducer tensiometers for soil water potential, (iii) psychrometers attached to non-transpiring leaves for stem water potential, and (iv) weighing scales for plant transpiration. Treatments consisted of different irrigation rates: 100%, 75% and 50% of potential crop evapotranspiration. Plant responses were compared to predictions from three simple process-based soil-plant-atmosphere models of water flow: (i) a hydraulic model of stomatal regulation based on stem water potential and vapor pressure deficit, (ii) a model of plant hydraulics predicting stem water potential from the water potential at soil-root interfaces, and (iii) a model of soil water depletion predicting the water potential drop between the bulk soil and soil-root interfaces. These models were combined with a global optimization algorithm to obtain parameters that best fit the observed soil-plant-atmosphere water dynamics. Finally, relations between root system conductance and growth, as well as water access strategies, were quantitatively analyzed.
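    The three process-based components listed above can be summarised with a simple Ohm's-law analogy; the following is a hedged sketch in my own notation, not the authors' exact formulation:

    \[
    T \;=\; K_{\mathrm{plant}}\left(\Psi_{\mathrm{sr}}-\Psi_{\mathrm{stem}}\right),
    \qquad
    T \;\approx\; K_{\mathrm{soil}}\left(\Psi_{\mathrm{bulk}}-\Psi_{\mathrm{sr}}\right),
    \]

    where T is transpiration, Ψ_bulk, Ψ_sr and Ψ_stem are the bulk-soil, soil-root-interface and stem water potentials, and the conductances K_soil and K_plant are among the parameters adjusted by the global optimization; the stomatal-regulation component then closes the loop by reducing T as Ψ_stem falls and vapor pressure deficit rises.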

  4. The transparency, reliability and utility of tropical rainforest land-use and land-cover change models.

    PubMed

    Rosa, Isabel M D; Ahmed, Sadia E; Ewers, Robert M

    2014-06-01

    Land-use and land-cover (LULC) change is one of the largest drivers of biodiversity loss and carbon emissions globally. We use the tropical rainforests of the Amazon, the Congo basin and South-East Asia as a case study to investigate spatial predictive models of LULC change. Current predictions differ in their modelling approaches, are highly variable and often poorly validated. We carried out a quantitative review of 48 modelling methodologies, considering model spatio-temporal scales, inputs, calibration and validation methods. In addition, we requested model outputs from each of the models reviewed and carried out a quantitative assessment of model performance for tropical LULC predictions in the Brazilian Amazon. We highlight existing shortfalls in the discipline and uncover three key points that need addressing to improve the transparency, reliability and utility of tropical LULC change models: (1) a lack of openness with regard to describing and making available the model inputs and model code; (2) the difficulties of conducting appropriate model validations; and (3) the difficulty that users of tropical LULC models face in obtaining the model predictions to help inform their own analyses and policy decisions. We further draw comparisons between tropical LULC change models in the tropics and the modelling approaches and paradigms in other disciplines, and suggest that recent changes in the climate change and species distribution modelling communities may provide a pathway that tropical LULC change modellers may emulate to further improve the discipline. Climate change models have exerted considerable influence over public perceptions of climate change and now impact policy decisions at all political levels. We suggest that tropical LULC change models have an equally high potential to influence public opinion and impact the development of land-use policies based on plausible future scenarios, but, to do that reliably may require further improvements in the discipline. © 2014 John Wiley & Sons Ltd.

  5. Confocal Raman Microscopy for pH-Gradient Preconcentration and Quantitative Analyte Detection in Optically Trapped Phospholipid Vesicles.

    PubMed

    Hardcastle, Chris D; Harris, Joel M

    2015-08-04

    The ability of a vesicle membrane to preserve a pH gradient, while allowing for diffusion of neutral molecules across the phospholipid bilayer, can provide the isolation and preconcentration of ionizable compounds within the vesicle interior. In this work, confocal Raman microscopy is used to observe (in situ) the pH-gradient preconcentration of compounds into individual optically trapped vesicles that provide sub-femtoliter collectors for small-volume samples. The concentration of analyte accumulated in the vesicle interior is determined relative to a perchlorate-ion internal standard, preloaded into the vesicle along with a high-concentration buffer. As a guide to the experiments, a model for the transfer of analyte into the vesicle based on acid-base equilibria is developed to predict the concentration enrichment as a function of source-phase pH and analyte concentration. To test the concept, the accumulation of benzyldimethylamine (BDMA) was measured within individual 1 μm phospholipid vesicles having a stable initial pH that is 7 units lower than the source phase. For low analyte concentrations in the source phase (100 nM), a concentration enrichment into the vesicle interior of (5.2 ± 0.4) × 10(5) was observed, in agreement with the model predictions. Detection of BDMA from a 25 nM source-phase sample was demonstrated, a noteworthy result for an unenhanced Raman scattering measurement. The developed model accurately predicts the falloff of enrichment (and measurement sensitivity) at higher analyte concentrations, where the transfer of greater amounts of BDMA into the vesicle titrates the internal buffer and decreases the pH gradient. The predictable calibration response over 4 orders of magnitude in source-phase concentration makes it suitable for quantitative analysis of ionizable compounds from small-volume samples. The kinetics of analyte accumulation are relatively fast (∼15 min) and are consistent with the rate of transfer of a polar aromatic molecule across a gel-phase phospholipid membrane.
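    The acid-base equilibrium model described above has a familiar closed form for a weak base; the following is a hedged sketch of the standard ion-trapping relation, not necessarily the exact expression used in the paper:

    \[
    E \;=\; \frac{C_{\mathrm{in}}}{C_{\mathrm{out}}}
    \;=\; \frac{1+10^{\,\mathrm{p}K_{a}-\mathrm{pH}_{\mathrm{in}}}}{1+10^{\,\mathrm{p}K_{a}-\mathrm{pH}_{\mathrm{out}}}},
    \]

    which assumes that only the neutral form crosses the bilayer and reaches equilibrium. With the vesicle interior 7 pH units below the source phase, E can approach several orders of magnitude, and it falls off once the accumulated base titrates the internal buffer and shrinks the gradient, consistent with the high-concentration behaviour noted above.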

  6. Prediction of Environmental Impact of High-Energy Materials with Atomistic Computer Simulations

    DTIC Science & Technology

    2010-11-01

    from a training set of compounds. Other methods include Quantitative Structure-Activity Relationship (QSAR) and Quantitative Structure-Property...the development of QSPR/QSAR models, in contrast to boiling points and critical parameters derived from empirical correlations, to improve...Quadratic Configuration Interaction Singles Doubles; QSAR, Quantitative Structure-Activity Relationship; QSPR, Quantitative Structure-Property

  7. Quantitative Adverse Outcome Pathways and Their Application to Predictive Toxicology

    EPA Science Inventory

    A quantitative adverse outcome pathway (qAOP) consists of one or more biologically based, computational models describing key event relationships linking a molecular initiating event (MIE) to an adverse outcome. A qAOP provides quantitative, dose–response, and time-course p...

  8. Using Uncertainty Quantification to Guide Development and Improvements of a Regional-Scale Model of the Coastal Lowlands Aquifer System Spanning Texas, Louisiana, Mississippi, Alabama and Florida

    NASA Astrophysics Data System (ADS)

    Foster, L. K.; Clark, B. R.; Duncan, L. L.; Tebo, D. T.; White, J.

    2017-12-01

    Several historical groundwater models exist within the Coastal Lowlands Aquifer System (CLAS), which spans the Gulf Coastal Plain in Texas, Louisiana, Mississippi, Alabama, and Florida. The largest of these models, called the Gulf Coast Regional Aquifer System Analysis (RASA) model, has been brought into a new framework using the Newton formulation for MODFLOW-2005 (MODFLOW-NWT) and serves as the starting point of a new investigation underway by the U.S. Geological Survey to improve understanding of the CLAS and provide predictions of future groundwater availability within an uncertainty quantification (UQ) framework. The use of an UQ framework will not only provide estimates of water-level observation worth, hydraulic parameter uncertainty, boundary-condition uncertainty, and uncertainty of future potential predictions, but it will also guide the model development process. Traditionally, model development proceeds from dataset construction to the process of deterministic history matching, followed by deterministic predictions using the model. This investigation will combine the use of UQ with existing historical models of the study area to assess in a quantitative framework the effect model package and property improvements have on the ability to represent past-system states, as well as the effect on the model's ability to make certain predictions of water levels, water budgets, and base-flow estimates. Estimates of hydraulic property information and boundary conditions from the existing models and literature, forming the prior, will be used to make initial estimates of model forecasts and their corresponding uncertainty, along with an uncalibrated groundwater model run within an unconstrained Monte Carlo analysis. First-Order Second-Moment (FOSM) analysis will also be used to investigate parameter and predictive uncertainty, and guide next steps in model development prior to rigorous history matching by using PEST++ parameter estimation code.
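    For readers unfamiliar with FOSM, the first-order, second-moment propagation referred to above is usually written as follows; this is a hedged sketch of the standard Schur-complement form used in PEST++-style workflows, not the project's specific implementation:

    \[
    \overline{\mathbf{C}}(\boldsymbol{\theta})
    =\mathbf{C}(\boldsymbol{\theta})
    -\mathbf{C}(\boldsymbol{\theta})\,\mathbf{J}^{\mathsf{T}}
    \left[\mathbf{J}\,\mathbf{C}(\boldsymbol{\theta})\,\mathbf{J}^{\mathsf{T}}+\mathbf{C}(\boldsymbol{\varepsilon})\right]^{-1}
    \mathbf{J}\,\mathbf{C}(\boldsymbol{\theta}),
    \qquad
    \sigma_{s}^{2}=\mathbf{z}^{\mathsf{T}}\,\overline{\mathbf{C}}(\boldsymbol{\theta})\,\mathbf{z},
    \]

    where C(θ) is the prior parameter covariance, J the Jacobian of simulated observations with respect to parameters, C(ε) the observation-noise covariance, and z the sensitivity of a forecast to the parameters. Comparing the forecast variance σ_s² with and without a given observation group is what supports the data-worth statements above.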

  9. "When does making detailed predictions make predictions worse?": Correction to Kelly and Simmons (2016).

    PubMed

    2016-10-01

    Reports an error in "When Does Making Detailed Predictions Make Predictions Worse" by Theresa F. Kelly and Joseph P. Simmons ( Journal of Experimental Psychology: General , Advanced Online Publication, Aug 8, 2016, np). In the article, the symbols in Figure 2 were inadvertently altered in production. All versions of this article have been corrected. (The following abstract of the original article appeared in record 2016-37952-001.) In this article, we investigate whether making detailed predictions about an event worsens other predictions of the event. Across 19 experiments, 10,896 participants, and 407,045 predictions about 724 professional sports games, we find that people who made detailed predictions about sporting events (e.g., how many hits each baseball team would get) made worse predictions about more general outcomes (e.g., which team would win). We rule out that this effect is caused by inattention or fatigue, thinking too hard, or a differential reliance on holistic information about the teams. Instead, we find that thinking about game-relevant details before predicting winning teams causes people to give less weight to predictive information, presumably because predicting details makes useless or redundant information more accessible and thus more likely to be incorporated into forecasts. Furthermore, we show that this differential use of information can be used to predict what kinds of events will and will not be susceptible to the negative effect of making detailed predictions. PsycINFO Database Record (c) 2016 APA, all rights reserved

  10. Response time distributions in rapid chess: a large-scale decision making experiment.

    PubMed

    Sigman, Mariano; Etchemendy, Pablo; Slezak, Diego Fernández; Cecchi, Guillermo A

    2010-01-01

    Rapid chess provides an unparalleled laboratory to understand decision making in a natural environment. In a chess game, players choose consecutively around 40 moves in a finite time budget. The goodness of each choice can be determined quantitatively since current chess algorithms estimate precisely the value of a position. Web-based chess produces vast amounts of data, millions of decisions per day, incommensurable with traditional psychological experiments. We generated a database of response times (RTs) and position value in rapid chess games. We measured robust emergent statistical observables: (1) RT distributions are long-tailed and show qualitatively distinct forms at different stages of the game, (2) RTs of successive moves are highly correlated both for intra- and inter-player moves. These findings have theoretical implications since they deny two basic assumptions of sequential decision making algorithms: RTs are not stationary and cannot be generated by a state function. Our results also have practical implications. First, we characterized the capacity of blunders and score fluctuations to predict a player's strength, which is still an open problem in chess software. Second, we show that the winning likelihood can be reliably estimated from a weighted combination of remaining times and position evaluation.
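    As a hedged illustration of the final point (synthetic data, not the authors' chess database), a simple logistic model estimates winning probability from a weighted combination of the remaining-time difference and the engine evaluation of the position.

```python
# Hedged illustration: estimate the probability of winning from remaining clock
# time and engine position evaluation with a weighted (logistic) combination.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
time_adv = rng.normal(0.0, 60.0, n)     # my clock minus opponent's, seconds
eval_adv = rng.normal(0.0, 1.5, n)      # engine evaluation, pawns, + = better for me

# Synthetic "truth": both advantages raise the chance of winning.
p_win = 1.0 / (1.0 + np.exp(-(0.015 * time_adv + 1.2 * eval_adv)))
won = rng.random(n) < p_win

X = np.column_stack([time_adv, eval_adv])
model = LogisticRegression().fit(X[:4000], won[:4000])
print("held-out AUC:", round(roc_auc_score(won[4000:], model.predict_proba(X[4000:])[:, 1]), 3))
print("fitted weights (time, eval):", model.coef_[0])
```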

  11. A Design Pattern for Decentralised Decision Making

    PubMed Central

    Valentini, Gabriele; Fernández-Oto, Cristian; Dorigo, Marco

    2015-01-01

    The engineering of large-scale decentralised systems requires sound methodologies to guarantee the attainment of the desired macroscopic system-level behaviour given the microscopic individual-level implementation. While a general-purpose methodology is currently out of reach, specific solutions can be given to broad classes of problems by means of well-conceived design patterns. We propose a design pattern for collective decision making grounded on experimental/theoretical studies of the nest-site selection behaviour observed in honeybee swarms (Apis mellifera). The way in which honeybee swarms arrive at consensus is fairly well-understood at the macroscopic level. We provide formal guidelines for the microscopic implementation of collective decisions to quantitatively match the macroscopic predictions. We discuss implementation strategies based on both homogeneous and heterogeneous multiagent systems, and we provide means to deal with spatial and topological factors that have a bearing on the micro-macro link. Finally, we exploit the design pattern in two case studies that showcase the viability of the approach. Besides engineering, such a design pattern can prove useful for a deeper understanding of decision making in natural systems thanks to the inclusion of individual heterogeneities and spatial factors, which are often disregarded in theoretical modelling. PMID:26496359

  12. Response Time Distributions in Rapid Chess: A Large-Scale Decision Making Experiment

    PubMed Central

    Sigman, Mariano; Etchemendy, Pablo; Slezak, Diego Fernández; Cecchi, Guillermo A.

    2010-01-01

    Rapid chess provides an unparalleled laboratory to understand decision making in a natural environment. In a chess game, players choose consecutively around 40 moves in a finite time budget. The goodness of each choice can be determined quantitatively since current chess algorithms estimate precisely the value of a position. Web-based chess produces vast amounts of data, millions of decisions per day, incommensurable with traditional psychological experiments. We generated a database of response times (RTs) and position value in rapid chess games. We measured robust emergent statistical observables: (1) RT distributions are long-tailed and show qualitatively distinct forms at different stages of the game, (2) RTs of successive moves are highly correlated both for intra- and inter-player moves. These findings have theoretical implications since they deny two basic assumptions of sequential decision making algorithms: RTs are not stationary and cannot be generated by a state function. Our results also have practical implications. First, we characterized the capacity of blunders and score fluctuations to predict a player's strength, which is still an open problem in chess software. Second, we show that the winning likelihood can be reliably estimated from a weighted combination of remaining times and position evaluation. PMID:21031032

  13. The Separation and Quantitation of Peptides with and without Oxidation of Methionine and Deamidation of Asparagine Using Hydrophilic Interaction Liquid Chromatography with Mass Spectrometry (HILIC-MS)

    NASA Astrophysics Data System (ADS)

    Badgett, Majors J.; Boyes, Barry; Orlando, Ron

    2017-05-01

    Peptides with deamidated asparagine residues and oxidized methionine residues are often not resolved sufficiently to allow quantitation of their native and modified forms using reversed phase (RP) chromatography. The accurate quantitation of these modifications is vital in protein biotherapeutic analysis because they can affect a protein's function, activity, and stability. We demonstrate here that hydrophilic interaction liquid chromatography (HILIC) adequately and predictably separates peptides with these modifications from their native counterparts. Furthermore, coefficients describing the extent of the hydrophilicity of these modifications have been derived and were incorporated into a previously made peptide retention prediction model that is capable of predicting the retention times of peptides with and without these modifications.
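    The additive retention-coefficient idea can be sketched as follows; all coefficient values are invented placeholders rather than the published ones, and the peptide is hypothetical.

```python
# Hedged sketch of an additive retention-prediction model of the kind described
# above: a peptide's HILIC retention is estimated as a sum of per-residue
# hydrophilicity coefficients plus extra terms for deamidation and methionine
# oxidation. All coefficient values below are invented placeholders.
HYPOTHETICAL_RESIDUE_COEFF = {
    "A": 0.2, "C": 0.0, "D": 1.1, "E": 1.0, "F": -0.9, "G": 0.4, "H": 0.9,
    "I": -1.0, "K": 1.3, "L": -1.0, "M": -0.5, "N": 0.9, "P": 0.1, "Q": 0.8,
    "R": 1.4, "S": 0.7, "T": 0.5, "V": -0.7, "W": -1.1, "Y": -0.3,
}
HYPOTHETICAL_MOD_COEFF = {"deamidation": 0.35, "met_oxidation": 0.60}
INTERCEPT = 2.0  # placeholder, minutes

def predict_retention(sequence, deamidated_sites=0, oxidized_met=0):
    """Predict HILIC retention time (min) for a peptide and its modified forms."""
    base = INTERCEPT + sum(HYPOTHETICAL_RESIDUE_COEFF[aa] for aa in sequence)
    shift = (deamidated_sites * HYPOTHETICAL_MOD_COEFF["deamidation"]
             + oxidized_met * HYPOTHETICAL_MOD_COEFF["met_oxidation"])
    return base + shift

native = predict_retention("GNMSTVLAK")
modified = predict_retention("GNMSTVLAK", deamidated_sites=1, oxidized_met=1)
print(f"native: {native:.2f} min, deamidated+oxidized: {modified:.2f} min")
```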

  14. The Separation and Quantitation of Peptides with and without Oxidation of Methionine and Deamidation of Asparagine Using Hydrophilic Interaction Liquid Chromatography with Mass Spectrometry (HILIC-MS).

    PubMed

    Badgett, Majors J; Boyes, Barry; Orlando, Ron

    2017-05-01

    Peptides with deamidated asparagine residues and oxidized methionine residues are often not resolved sufficiently to allow quantitation of their native and modified forms using reversed phase (RP) chromatography. The accurate quantitation of these modifications is vital in protein biotherapeutic analysis because they can affect a protein's function, activity, and stability. We demonstrate here that hydrophilic interaction liquid chromatography (HILIC) adequately and predictably separates peptides with these modifications from their native counterparts. Furthermore, coefficients describing the extent of the hydrophilicity of these modifications have been derived and were incorporated into a previously made peptide retention prediction model that is capable of predicting the retention times of peptides with and without these modifications. Graphical Abstract.

  15. Template CoMFA Generates Single 3D-QSAR Models that, for Twelve of Twelve Biological Targets, Predict All ChEMBL-Tabulated Affinities

    PubMed Central

    Cramer, Richard D.

    2015-01-01

    The possible applicability of the new template CoMFA methodology to the prediction of unknown biological affinities was explored. For twelve selected targets, all ChEMBL binding affinities were used as training and/or prediction sets, making these 3D-QSAR models the most structurally diverse and among the largest ever. For six of the targets, X-ray crystallographic structures provided the aligned templates required as input (BACE, cdk1, chk2, carbonic anhydrase-II, factor Xa, PTP1B). For all targets including the other six (hERG, cyp3A4 binding, endocrine receptor, COX2, D2, and GABAa), six modeling protocols applied to only three familiar ligands provided six alternate sets of aligned templates. The statistical qualities of the six or seven models thus resulting for each individual target were remarkably similar. Also, perhaps unexpectedly, the standard deviations of the errors of cross-validation predictions accompanying model derivations were indistinguishable from the standard deviations of the errors of truly prospective predictions. These standard deviations of prediction ranged from 0.70 to 1.14 log units and averaged 0.89 (8x in concentration units) over the twelve targets, representing an average reduction of almost 50% in uncertainty, compared to the null hypothesis of “predicting” an unknown affinity to be the average of known affinities. These errors of prediction are similar to those from Tanimoto coefficients of fragment occurrence frequencies, the predominant approach to side effect prediction, which template CoMFA can augment by identifying additional active structural classes, by improving Tanimoto-only predictions, by yielding quantitative predictions of potency, and by providing interpretable guidance for avoiding or enhancing any specific target response. PMID:26065424

  16. Development and Measurement of Preschoolers' Quantitative Knowledge

    ERIC Educational Resources Information Center

    Geary, David C.

    2015-01-01

    The collection of studies in this special issue make an important contribution to our understanding and measurement of the core cognitive and noncognitive factors that influence children's emerging quantitative competencies. The studies also illustrate how the field has matured, from a time when the quantitative competencies of infants and young…

  17. A Framework to Determine New System Requirements Under Design Parameter and Demand Uncertainties

    DTIC Science & Technology

    2015-04-30

    relegates quantitative complexities of decision-making to the method and designates trade-space exploration to the practitioner. We demonstrate the approach...play a critical role in determining new system requirements. Scope and Method of Approach: The early stages of the design process have substantial

  18. Teaching physics and understanding infrared thermal imaging

    NASA Astrophysics Data System (ADS)

    Vollmer, Michael; Möllmann, Klaus-Peter

    2017-08-01

    Infrared thermal imaging is a very rapidly evolving field. The latest trend is small smartphone IR camera accessories, which are making infrared imaging a widespread and well-known consumer product. Applications range from medical diagnostics through building inspections and industrial predictive maintenance to visualization in the natural sciences. Infrared cameras allow not only qualitative imaging and visualization but also quantitative measurement of the surface temperatures of objects. On the one hand, they are a particularly suitable tool for teaching optics, radiation physics, and many selected topics in different fields of physics; on the other hand, there is an increasing need for engineers and physicists who understand these complex, state-of-the-art photonics systems. Students must therefore also learn and understand the physics underlying these systems.

  19. Computer-Aided Drug Design in Epigenetics

    NASA Astrophysics Data System (ADS)

    Lu, Wenchao; Zhang, Rukang; Jiang, Hao; Zhang, Huimin; Luo, Cheng

    2018-03-01

    Epigenetic dysfunction has been widely implicated in several diseases, especially cancers, which highlights the therapeutic potential of chemical interventions in this field. With the rapid development of computational methodologies and high-performance computational resources, computer-aided drug design has emerged as a promising strategy to speed up epigenetic drug discovery. Herein, we give a brief overview of the major computational methods reported in the literature, including druggability prediction, virtual screening, homology modeling, scaffold hopping, pharmacophore modeling, molecular dynamics simulations, quantum chemistry calculations, and 3D quantitative structure-activity relationships, that have been successfully applied in the design and discovery of epi-drugs and epi-probes. Finally, we discuss major limitations of current virtual drug design strategies in epigenetic drug discovery and future directions in this field.

  20. Abrasion-ablation model for neutron production in heavy ion reactions

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Wilson, John W.; Townsend, Lawrence W.

    1995-01-01

    In heavy ion reactions, neutron production at forward angles is observed to occur with a Gaussian shape that is centered near the beam energy and extends to energies well above that of the beam. This paper presents an abrasion-ablation model for making quantitative predictions of the neutron spectrum. To describe neutrons produced from the abrasion step of the reaction where the projectile and target overlap, the authors use the Glauber model and include effects of final-state interactions. They then use the prefragment mass distribution from abrasion with a statistical evaporation model to estimate the neutron spectrum resulting from ablation. Measurements of neutron production from Ne and Nb beams are compared with calculations, and good agreement is found.
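    As a hedged illustration of the spectral shape described above (symbols mine, not the paper's), the forward-angle abrasion component can be pictured as

    \[
    \frac{\mathrm{d}N}{\mathrm{d}E}\;\propto\;\exp\!\left[-\frac{\left(E-E_{0}\right)^{2}}{2\sigma^{2}}\right],
    \]

    with E_0 near the beam energy per nucleon and a width σ large enough that an appreciable tail extends above the beam energy; the evaporation (ablation) stage then adds a lower-energy contribution on top of this peak.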

  1. Computer-Aided Drug Design in Epigenetics

    PubMed Central

    Lu, Wenchao; Zhang, Rukang; Jiang, Hao; Zhang, Huimin; Luo, Cheng

    2018-01-01

    Epigenetic dysfunction has been widely implicated in several diseases, especially cancers, which highlights the therapeutic potential of chemical interventions in this field. With the rapid development of computational methodologies and high-performance computational resources, computer-aided drug design has emerged as a promising strategy to speed up epigenetic drug discovery. Herein, we give a brief overview of the major computational methods reported in the literature, including druggability prediction, virtual screening, homology modeling, scaffold hopping, pharmacophore modeling, molecular dynamics simulations, quantum chemistry calculations, and 3D quantitative structure-activity relationships, that have been successfully applied in the design and discovery of epi-drugs and epi-probes. Finally, we discuss major limitations of current virtual drug design strategies in epigenetic drug discovery and future directions in this field. PMID:29594101

  2. The predictive value of quantitative fibronectin testing in combination with cervical length measurement in symptomatic women.

    PubMed

    Bruijn, Merel M C; Kamphuis, Esme I; Hoesli, Irene M; Martinez de Tejada, Begoña; Loccufier, Anne R; Kühnert, Maritta; Helmer, Hanns; Franz, Marie; Porath, Martina M; Oudijk, Martijn A; Jacquemyn, Yves; Schulzke, Sven M; Vetter, Grit; Hoste, Griet; Vis, Jolande Y; Kok, Marjolein; Mol, Ben W J; van Baaren, Gert-Jan

    2016-12-01

    The combination of the qualitative fetal fibronectin test and cervical length measurement has a high negative predictive value for preterm birth within 7 days; however, positive prediction is poor. A new bedside quantitative fetal fibronectin test showed potential additional value over the conventional qualitative test, but there is limited evidence on the combination with cervical length measurement. The purpose of this study was to compare quantitative fetal fibronectin and qualitative fetal fibronectin testing in the prediction of spontaneous preterm birth within 7 days in symptomatic women who undergo cervical length measurement. We performed a European multicenter cohort study in 10 perinatal centers in 5 countries. Women between 24 and 34 weeks of gestation with signs of active labor and intact membranes underwent quantitative fibronectin testing and cervical length measurement. We assessed the risk of preterm birth within 7 days in predefined strata based on fibronectin concentration and cervical length. Of 455 women who were included in the study, 48 women (11%) delivered within 7 days. A combination of cervical length and qualitative fibronectin resulted in the identification of 246 women who were at low risk: 164 women with a cervix between 15 and 30 mm and a negative fibronectin test (<50 ng/mL; preterm birth rate, 2%) and 82 women with a cervix at >30 mm (preterm birth rate, 2%). Use of quantitative fibronectin alone resulted in a predicted risk of preterm birth within 7 days that ranged from 2% in the group with the lowest fibronectin level (<10 ng/mL) to 38% in the group with the highest fibronectin level (>500 ng/mL), with similar accuracy as that of the combination of cervical length and qualitative fibronectin. Combining cervical length and quantitative fibronectin resulted in the identification of an additional 19 women at low risk (preterm birth rate, 5%), using a threshold of 10 ng/mL in women with a cervix at <15 mm, and 6 women at high risk (preterm birth rate, 33%) using a threshold of >500 ng/mL in women with a cervix at >30 mm. In women with threatened preterm birth, quantitative fibronectin testing alone performs equal to the combination of cervical length and qualitative fibronectin. Possibly, the combination of quantitative fibronectin testing and cervical length increases this predictive capacity. Cost-effectiveness analysis and the availability of these tests in a local setting should determine the final choice. Copyright © 2016 Elsevier Inc. All rights reserved.
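    The decision strata reported above can be written down directly as a small classification rule; the sketch below simply encodes those thresholds for illustration and is not a validated clinical tool (the "intermediate" and "high" labels outside the reported strata are my own).

```python
# Hedged sketch that encodes the risk strata reported in the abstract above
# (purely illustrative; not a validated clinical decision tool).
def risk_stratum(cervical_length_mm: float, fibronectin_ng_ml: float) -> str:
    """Classify risk of spontaneous preterm birth within 7 days."""
    if cervical_length_mm > 30:
        # Long cervix: low risk overall; very high fibronectin flags a high-risk subgroup.
        return "high" if fibronectin_ng_ml > 500 else "low"
    if cervical_length_mm >= 15:
        # Mid-length cervix: qualitative-negative fibronectin (<50 ng/mL) identifies low risk.
        return "low" if fibronectin_ng_ml < 50 else "intermediate"
    # Short cervix (<15 mm): only very low quantitative fibronectin reclassifies as low risk.
    return "low" if fibronectin_ng_ml < 10 else "high"

for cl, fn in [(35, 20), (35, 600), (22, 30), (22, 120), (10, 5), (10, 80)]:
    print(f"cervix {cl} mm, fFN {fn} ng/mL -> {risk_stratum(cl, fn)} risk")
```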

  3. Harnessing quantitative genetics and genomics for understanding and improving complex traits in crops

    USDA-ARS?s Scientific Manuscript database

    Classical quantitative genetics aids crop improvement by providing the means to estimate heritability, genetic correlations, and predicted responses to various selection schemes. Genomics has the potential to aid quantitative genetics and applied crop improvement programs via large-scale, high-thro...

  4. Comparative study of contrast-enhanced ultrasound qualitative and quantitative analysis for identifying benign and malignant breast tumor lumps.

    PubMed

    Liu, Jian; Gao, Yun-Hua; Li, Ding-Dong; Gao, Yan-Chun; Hou, Ling-Mi; Xie, Ting

    2014-01-01

    To compare the value of contrast-enhanced ultrasound (CEUS) qualitative and quantitative analysis in the identification of breast tumor lumps. Qualitative and quantitative indicators of CEUS for 73 cases of breast tumor lumps were retrospectively analyzed by univariate and multivariate approaches. Logistic regression was applied and ROC curves were drawn for evaluation and comparison. The CEUS qualitative indicator-generated regression equation contained three indicators, namely enhanced homogeneity, diameter line expansion and peak intensity grading, which demonstrated prediction accuracy for benign and malignant breast tumor lumps of 91.8%; the quantitative indicator-generated regression equation only contained one indicator, namely the relative peak intensity, and its prediction accuracy was 61.5%. The corresponding areas under the ROC curve for qualitative and quantitative analyses were 91.3% and 75.7%, respectively, which exhibited a statistically significant difference by the Z test (P<0.05). The ability of CEUS qualitative analysis to identify breast tumor lumps is better than with quantitative analysis.

  5. Integrating a quantitative risk appraisal in a health impact assessment: analysis of the novel smoke-free policy in Hungary.

    PubMed

    Ádám, Balázs; Molnár, Ágnes; Gulis, Gabriel; Ádány, Róza

    2013-04-01

    Although the quantification of health outcomes in a health impact assessment (HIA) is scarce in practice, it is preferred by policymakers, as it assists various aspects of the decision-making process. This article provides an example of integrating a quantitative risk appraisal in an HIA performed for the recently adopted Hungarian anti-smoking policy which introduced a smoking ban in closed public places, workplaces and public transport vehicles, and is one of the most effective measures to decrease smoking-related ill health. A comprehensive, prospective HIA was conducted to map the full impact chain of the proposal. Causal pathways were prioritized in a transparent process with special attention given to those pathways for which measures of disease burden could be calculated for the baseline and predicted future scenarios. The proposal was found to decrease the prevalence of active and passive smoking and result in a considerably positive effect on several diseases, among which lung cancer, chronic pulmonary diseases, coronary heart diseases and stroke have the greatest importance. The health gain calculated for the quantifiable health outcomes is close to 1700 deaths postponed and 16,000 life years saved annually in Hungary. The provision of smoke-free public places has an unambiguously positive impact on the health of the public, especially in a country with a high burden of smoking-related diseases. The study described offers a practical example of applying quantification in an HIA, thereby promoting its incorporation into political decision making.
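    Burden estimates of this kind typically rest on a potential impact fraction; the following is a hedged sketch of the standard formula, not necessarily the exact calculation used in this HIA:

    \[
    \mathrm{PIF}=\frac{\sum_{i} P_{i}\,\mathrm{RR}_{i}-\sum_{i} P_{i}'\,\mathrm{RR}_{i}}{\sum_{i} P_{i}\,\mathrm{RR}_{i}},
    \qquad
    \Delta D = \mathrm{PIF}\times D_{0},
    \]

    where P_i and P_i' are the baseline and post-policy prevalences of exposure category i (e.g., active smoker, passive smoker, non-smoker), RR_i is the relative risk of the outcome in that category, and D_0 is the baseline number of deaths (or life years lost) from that outcome; summing ΔD over the quantified diseases gives totals of the kind reported above.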

  6. Tsunami Catalogues for the Eastern Mediterranean - Revisited.

    NASA Astrophysics Data System (ADS)

    Ambraseys, N.; Synolakis, C. E.

    2008-12-01

    We critically examine catalogues of tsunamis in the Eastern Mediterranean published in the last decade by reference to the original sources (see Ambraseys, 2008). Such catalogues have been widely used in the aftermath of the 2004 Boxing Day tsunami for probabilistic hazard analysis, even to make projections for a ten-year time frame. On occasion, such predictions have caused panic and have reduced the credibility of the scientific community in making hazard assessments. We correct classification and other spurious errors in earlier catalogues and posit a new list. We conclude that for some historic events, any assignment of magnitude, even on a six-point intensity scale, is inappropriate due to lack of information. Further, we assert that any tsunami catalogue, including ours, can only be used in conjunction with sedimentologic evidence to quantitatively infer the return period of larger events. Statistical analyses correlating numbers of tsunami events derived solely from catalogues with their inferred or imagined intensities are meaningless, at least when focusing on specific locales where only a handful of tsunamis are known to have been historically reported. Quantitative hazard assessments based on scenario events of historic tsunamis for which, at best, only the size and approximate location of the parent earthquake are known should be undertaken with extreme caution and only with the benefit of geologic studies to enhance understanding of the local tectonics. Ambraseys N. (2008) Earthquakes in the Eastern Mediterranean and the Middle East: multidisciplinary study of 2000 years of seismicity, Cambridge Univ. Press, Cambridge (ISBN 9780521872928).

  7. A systematic review of quantitative burn wound microbiology in the management of burns patients.

    PubMed

    Halstead, Fenella D; Lee, Kwang Chear; Kwei, Johnny; Dretzke, Janine; Oppenheim, Beryl A; Moiemen, Naiem S

    2018-02-01

    The early diagnosis of infection or sepsis in burns are important for patient care. Globally, a large number of burn centres advocate quantitative cultures of wound biopsies for patient management, since there is assumed to be a direct link between the bioburden of a burn wound and the risk of microbial invasion. Given the conflicting study findings in this area, a systematic review was warranted. Bibliographic databases were searched with no language restrictions to August 2015. Study selection, data extraction and risk of bias assessment were performed in duplicate using pre-defined criteria. Substantial heterogeneity precluded quantitative synthesis, and findings were described narratively, sub-grouped by clinical question. Twenty six laboratory and/or clinical studies were included. Substantial heterogeneity hampered comparisons across studies and interpretation of findings. Limited evidence suggests that (i) more than one quantitative microbiology sample is required to obtain reliable estimates of bacterial load; (ii) biopsies are more sensitive than swabs in diagnosing or predicting sepsis; (iii) high bacterial loads may predict worse clinical outcomes, and (iv) both quantitative and semi-quantitative culture reports need to be interpreted with caution and in the context of other clinical risk factors. The evidence base for the utility and reliability of quantitative microbiology for diagnosing or predicting clinical outcomes in burns patients is limited and often poorly reported. Consequently future research is warranted. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  8. Forests and Soil Erosion across Europe

    NASA Astrophysics Data System (ADS)

    Bathurst, J. C.

    2012-04-01

    Land use and climate change threaten the ability of Europe's forests to provide a vital service in limiting soil erosion, e.g. from forest fires and landslides. However, our ability to define the threat and to propose mitigation measures suffers from two deficiencies concerning the forest/erosion interface: 1) While there have been a considerable number of field studies of the relationship between forest cover and erosion in different parts of Europe, the data sets are scattered among research groups and a range of literature outlets. There is no comprehensive overview of the forest/erosion interface at the European scale, essential for considering regional variations and investigating the effects of future changes in land use and climate. 2) Compared with forest/water studies, we have a poorer quantitative appreciation of forest/erosion interactions. In the forest/water area it is possible to make quantitative statements such as that a 20% change in forest cover across a river catchment is needed for the effect on annual water yield to be measurable or that a forested catchment in upland UK has an annual water yield around 15% lower than an otherwise comparable grassland catchment. Comparable statements are not yet possible for forest/erosion interactions and there are uncertainties in the mathematical representation of forest/erosion interactions which limit our ability to make predictions, for example of the impact of forest loss in a given area. This presentation therefore considers the next step in improving our predictive capability. It proposes the integration of existing research and data to construct the "big picture" across Europe, i.e. erosion rates and sediment yields associated with forest cover and its loss in a range of erosion regimes (e.g. post-forest fire erosion or post-logging landslides). This would provide a basis for generalizations at the European scale. However, such an overview would not form a predictive capability. Therefore it is also necessary to identify a range of predictive methods, from empirical guidelines to computer models, which can be recommended for applications such as extrapolating from the local to the regional scale and for planning mitigation strategies. Such developments could help improve efficiency in the integrated management of forest, soil and water resources, benefit local engineering projects ranging from hazard mitigation plans to road culvert design, contribute to the implementation of the EU Water Framework Directive, form a more objective basis for cost/benefit analysis of proposed management actions and help in putting a value on forest services.

  9. Estimation of equivalence ratio distribution in diesel spray using a computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Suzuki, Yasumasa; Tsujimura, Taku; Kusaka, Jin

    2014-08-01

    It is important to understand the mechanisms of mixing and atomization in a diesel spray. Computational prediction of the mixing behavior and internal structure of a diesel spray is also expected to deepen understanding of the spray itself and to support development of diesel engines and fuel-injection equipment. In this study, we predicted the formation of a diesel fuel spray with a 3D-CFD code and validated the application by comparison with experimental measurements of spray behavior and of the equivalence ratio visualized by Rayleigh-scatter imaging under several ambient, injection, and fuel conditions. With suitably chosen constants in the KH-RT breakup model, the code predicts the liquid spray penetration length quantitatively under various injection, ambient, and fuel conditions. On the other hand, the changes in vapor penetration and in the fuel mass fraction and equivalence ratio distributions with injection and ambient conditions are not captured as well quantitatively. The 3D-CFD code used in this study also overpredicts the spray cone angle and the entrainment of ambient gas; prediction accuracy could therefore be improved by refining the fuel droplet breakup and evaporation models and by predicting the spray cone angle quantitatively.
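    For reference, the equivalence ratio mapped in these measurements and simulations is defined in the standard way as

    \[
    \phi=\frac{\left(m_{\mathrm{fuel}}/m_{\mathrm{air}}\right)}{\left(m_{\mathrm{fuel}}/m_{\mathrm{air}}\right)_{\mathrm{stoich}}},
    \]

    so φ = 1 corresponds to a stoichiometric mixture, φ > 1 to fuel-rich and φ < 1 to fuel-lean regions; comparing the predicted and Rayleigh-imaged φ fields is the basis of the validation described above.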

  10. A hybrid expectation maximisation and MCMC sampling algorithm to implement Bayesian mixture model based genomic prediction and QTL mapping.

    PubMed

    Wang, Tingting; Chen, Yi-Ping Phoebe; Bowman, Phil J; Goddard, Michael E; Hayes, Ben J

    2016-09-21

    Bayesian mixture models in which the effects of SNPs are assumed to come from normal distributions with different variances are attractive for simultaneous genomic prediction and QTL mapping. These models are usually implemented with Markov chain Monte Carlo (MCMC) sampling, which requires long compute times with large genomic data sets. Here, we present an efficient approach (termed HyB_BR), which is a hybrid of an Expectation-Maximisation algorithm followed by a limited number of MCMC iterations without the requirement for burn-in. To test prediction accuracy from HyB_BR, dairy cattle and human disease trait data were used. In the dairy cattle data, there were four quantitative traits (milk volume, protein kg, fat% in milk and fertility) measured in 16,214 cattle from two breeds genotyped for 632,002 SNPs. Validation of genomic predictions was in a subset of cattle either from the reference set or in animals from a third breed that were not in the reference set. In all cases, HyB_BR gave almost identical accuracies to Bayesian mixture models implemented with full MCMC; however, computational time was reduced to as little as 1/17 of that required by full MCMC. The SNPs with high posterior probability of a non-zero effect were also very similar between full MCMC and HyB_BR, with several known genes affecting milk production in this category, as well as some novel genes. HyB_BR was also applied to seven human diseases with 4890 individuals genotyped for around 300 K SNPs in a case/control design, from the Wellcome Trust Case Control Consortium (WTCCC). In this data set, the results demonstrated again that HyB_BR performed as well as Bayesian mixture models with full MCMC for genomic predictions and genetic architecture inference while reducing the computational time from 45 h with full MCMC to 3 h with HyB_BR. The results for quantitative traits in cattle and disease in humans demonstrate that HyB_BR can perform as well as Bayesian mixture models implemented with full MCMC in terms of prediction accuracy, but up to 17 times faster than the full MCMC implementations. The HyB_BR algorithm makes simultaneous genomic prediction, QTL mapping and inference of genetic architecture feasible in large genomic data sets.

  11. DNA sequence-dependent mechanics and protein-assisted bending in repressor-mediated loop formation

    PubMed Central

    Boedicker, James Q.; Garcia, Hernan G.; Johnson, Stephanie; Phillips, Rob

    2014-01-01

    As the chief informational molecule of life, DNA is subject to extensive physical manipulations. The energy required to deform double-helical DNA depends on sequence, and this mechanical code of DNA influences gene regulation, such as through nucleosome positioning. Here we examine the sequence-dependent flexibility of DNA in bacterial transcription factor-mediated looping, a context for which the role of sequence remains poorly understood. Using a suite of synthetic constructs repressed by the Lac repressor and two well-known sequences that show large flexibility differences in vitro, we make precise statistical mechanical predictions as to how DNA sequence influences loop formation and test these predictions using in vivo transcription and in vitro single-molecule assays. Surprisingly, sequence-dependent flexibility does not affect in vivo gene regulation. By theoretically and experimentally quantifying the relative contributions of sequence and the DNA-bending protein HU to DNA mechanical properties, we reveal that bending by HU dominates DNA mechanics and masks intrinsic sequence-dependent flexibility. Such a quantitative understanding of how mechanical regulatory information is encoded in the genome will be a key step towards a predictive understanding of gene regulation at single-base pair resolution. PMID:24231252

  12. Advancing research on animal-transported subsidies by integrating animal movement and ecosystem modelling.

    PubMed

    Earl, Julia E; Zollner, Patrick A

    2017-09-01

    Connections between ecosystems via animals (active subsidies) support ecosystem services and contribute to numerous ecological effects. Thus, the ability to predict the spatial distribution of active subsidies would be useful for ecology and conservation. Previous work modelling active subsidies focused on implicit space or static distributions, which treat passive and active subsidies similarly. Active subsidies are fundamentally different from passive subsidies, because animals can respond to the process of subsidy deposition and ecosystem changes caused by subsidy deposition. We propose addressing this disparity by integrating animal movement and ecosystem ecology to advance active subsidy investigations, make more accurate predictions of subsidy spatial distributions, and enable a mechanistic understanding of subsidy spatial distributions. We review selected quantitative techniques that could be used to accomplish integration and lead to novel insights. The ultimate objective for these types of studies is predictions of subsidy spatial distributions from characteristics of the subsidy and the movement strategy employed by animals that transport subsidies. These advances will be critical in informing the management of ecosystem services, species conservation and ecosystem degradation related to active subsidies. © 2017 The Authors. Journal of Animal Ecology © 2017 British Ecological Society.

  13. Height and body mass influence on human body outlines: a quantitative approach using an elliptic Fourier analysis.

    PubMed

    Courtiol, Alexandre; Ferdy, Jean Baptiste; Godelle, Bernard; Raymond, Michel; Claude, Julien

    2010-05-01

    Many studies use representations of human body outlines to study how individual characteristics, such as height and body mass, affect perception of body shape. These typically involve reality-based stimuli (e.g., pictures) or manipulated stimuli (e.g., drawings). These two classes of stimuli have important drawbacks that limit result interpretations. Realistic stimuli vary in terms of traits that are correlated, which makes it impossible to assess the effect of a single trait independently. In addition, manipulated stimuli usually do not represent realistic morphologies. We describe and examine a method based on elliptic Fourier descriptors to automatically predict and represent body outlines for a given set of predictive variables (e.g., sex, height, and body mass). We first estimate whether these predictive variables are significantly related to human outlines. We find that height and body mass significantly influence body shape. Unlike height, the effect of body mass on shape differs between sexes. Then, we show that we can easily build a regression model that creates hypothetical outlines for an arbitrary set of covariates. These statistically computed outlines are quite realistic and may be used as stimuli in future studies.
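    As a hedged sketch of the overall workflow (synthetic outlines; a complex Fourier parameterisation stands in for full elliptic Fourier descriptors; not the authors' code), outline coefficients are regressed on covariates and a hypothetical outline is reconstructed for new covariate values.

```python
# Hedged sketch: describe closed outlines with Fourier coefficients, regress the
# coefficients on covariates, and reconstruct a hypothetical outline for new
# covariate values. Data are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_points, n_harmonics, n_subjects = 128, 8, 200
theta = np.linspace(0, 2 * np.pi, n_points, endpoint=False)

def synthetic_outline(height, mass):
    """Toy closed outline whose shape changes smoothly with height and mass."""
    r = (1.0 + 0.002 * (height - 170) * np.cos(theta) ** 2
         + 0.01 * (mass - 70) * np.abs(np.sin(theta)))
    return r * np.cos(theta) + 1j * r * np.sin(theta)   # x + iy

heights = rng.normal(170, 10, n_subjects)
masses = rng.normal(70, 12, n_subjects)
coeffs = np.array([np.fft.fft(synthetic_outline(h, m))[:n_harmonics]
                   for h, m in zip(heights, masses)])

# Regress real and imaginary parts of each harmonic on the covariates.
X = np.column_stack([heights, masses])
Y = np.column_stack([coeffs.real, coeffs.imag])
model = LinearRegression().fit(X, Y)

# Reconstruct a hypothetical (low-pass) outline for chosen covariate values.
pred = model.predict([[180.0, 85.0]])[0]
c = pred[:n_harmonics] + 1j * pred[n_harmonics:]
full = np.zeros(n_points, dtype=complex)
full[:n_harmonics] = c
outline = np.fft.ifft(full)          # x = outline.real, y = outline.imag
print("reconstructed outline points:", outline.real[:3], outline.imag[:3])
```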

  14. Two modes of motion of the alligator lizard cochlea: Measurements and model predictions

    NASA Astrophysics Data System (ADS)

    Aranyosi, A. J.; Freeman, Dennis M.

    2005-09-01

    Measurements of motion of an in vitro preparation of the alligator lizard basilar papilla in response to sound demonstrate elliptical trajectories. These trajectories are consistent with the presence of both a translational and rotational mode of motion. The translational mode is independent of frequency, and the rotational mode has a displacement peak near 5 kHz. These measurements can be explained by a simple mechanical system in which the basilar papilla is supported asymmetrically on the basilar membrane. In a quantitative model, the translational admittance is compliant while the rotational admittance is second order. Best-fit model parameters are consistent with estimates based on anatomy and predict that fluid flow across hair bundles is a primary source of viscous damping. The model predicts that the rotational mode contributes to the high-frequency slopes of auditory nerve fiber tuning curves, providing a physical explanation for a low-pass filter required in models of this cochlea. The combination of modes makes the sensitivity of hair bundles more uniform with radial position than that which would result from pure rotation. A mechanical analogy with the organ of Corti suggests that these two modes of motion may also be present in the mammalian cochlea.
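    A hedged sketch of the kind of two-mode description used here (symbols mine, not necessarily the paper's parameterisation): the displacement of the papilla at radial position x combines a frequency-independent translation with a resonant rotation,

    \[
    \xi(x,\omega)\;=\;C_{t}\,F(\omega)\;+\;x\,\frac{\tau(\omega)}{K_{r}-\omega^{2}I+i\omega B},
    \]

    where C_t is the translational compliance, F and τ are the driving force and torque, and K_r, I and B are the rotational stiffness, inertia and damping whose resonance near 5 kHz produces the displacement peak and the extra high-frequency roll-off noted above. Because the two terms depend differently on x, their combination also makes hair-bundle sensitivity more uniform across the radial direction than pure rotation would.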

  15. Universality and predictability in molecular quantitative genetics.

    PubMed

    Nourmohammad, Armita; Held, Torsten; Lässig, Michael

    2013-12-01

    Molecular traits, such as gene expression levels or protein binding affinities, are increasingly accessible to quantitative measurement by modern high-throughput techniques. Such traits measure molecular functions and, from an evolutionary point of view, are important as targets of natural selection. We review recent developments in evolutionary theory and experiments that are expected to become building blocks of a quantitative genetics of molecular traits. We focus on universal evolutionary characteristics: these are largely independent of a trait's genetic basis, which is often at least partially unknown. We show that universal measurements can be used to infer selection on a quantitative trait, which determines its evolutionary mode of conservation or adaptation. Furthermore, universality is closely linked to predictability of trait evolution across lineages. We argue that universal trait statistics extends over a range of cellular scales and opens new avenues of quantitative evolutionary systems biology. Copyright © 2013. Published by Elsevier Ltd.

  16. Dispositional mindfulness and employment status as predictors of resilience in third year nursing students: a quantitative study.

    PubMed

    Chamberlain, Diane; Williams, Allison; Stanley, David; Mellor, Peter; Cross, Wendy; Siegloff, Lesley

    2016-10-01

    Nursing students will graduate into stressful workplace environments and resilience is an essential acquired ability for surviving the workplace. Few studies have explored the relationship between resilience and the degree of innate dispositional mindfulness, compassion, compassion fatigue and burnout in nursing students, including those who find themselves in the position of needing to work in addition to their academic responsibilities. This paper investigates the predictors of resilience, including dispositional mindfulness and employment status of third year nursing students from three Australian universities. Participants were 240 undergraduate, third year, nursing students. Participants completed a resilience measure (Connor-Davidson Resilience Scale, CD-RISC), measures of dispositional mindfulness (Cognitive and Affective Mindfulness Scale Revised, CAMS-R) and professional quality of life (The Professional Quality of Life Scale version 5, PROQOL5), such as compassion satisfaction, compassion fatigue and burnout. An observational quantitative successive independent samples survey design was employed. A stepwise linear regression was used to evaluate the extent to which the predictive variables were each related to resilience. The predictive model explained 57% of the variance in resilience. Dispositional mindfulness subset acceptance made the strongest contribution, followed by the expectation of a graduate nurse transition programme acceptance, with dispositional mindfulness total score and employment greater than 20 hours per week making the smallest contribution. This was a resilient group of nursing students who rated high with dispositional mindfulness and exhibited hopeful and positive aspirations for obtaining a position in a competitive graduate nurse transition programme after graduation.

  17. Unraveling additive from nonadditive effects using genomic relationship matrices.

    PubMed

    Muñoz, Patricio R; Resende, Marcio F R; Gezan, Salvador A; Resende, Marcos Deon Vilela; de Los Campos, Gustavo; Kirst, Matias; Huber, Dudley; Peter, Gary F

    2014-12-01

    The application of quantitative genetics in plant and animal breeding has largely focused on additive models, which may also capture dominance and epistatic effects. Partitioning genetic variance into its additive and nonadditive components using pedigree-based models (pedigree-based best linear unbiased prediction, P-BLUP) is difficult with most commonly available family structures. However, the availability of dense panels of molecular markers makes possible the use of additive- and dominance-realized genomic relationships for the estimation of variance components and the prediction of genetic values (G-BLUP). We evaluated height data from a multifamily population of the tree species Pinus taeda with a systematic series of models accounting for additive, dominance, and first-order epistatic interactions (additive by additive, dominance by dominance, and additive by dominance), using either pedigree- or marker-based information. We show that, compared with the pedigree, use of realized genomic relationships in marker-based models yields a substantially more precise separation of additive and nonadditive components of genetic variance. We conclude that the marker-based relationship matrices in a model including additive and nonadditive effects performed better, improving breeding value prediction. Moreover, our results suggest that, for tree height in this population, the additive and nonadditive components of genetic variance are similar in magnitude. This novel result improves our current understanding of the genetic control and architecture of a quantitative trait and should be considered when developing breeding strategies. Copyright © 2014 by the Genetics Society of America.
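
    For readers unfamiliar with the marker-based relationships referred to here, the additive genomic relationship matrix is commonly computed from a 0/1/2 allele-count matrix following VanRaden (2008); a minimal sketch with simulated genotypes (not the Pinus taeda data) is:

```python
import numpy as np

def vanraden_G(M):
    """Additive genomic relationship matrix from an (individuals x markers)
    matrix M of minor-allele counts coded 0/1/2 (VanRaden 2008)."""
    p = M.mean(axis=0) / 2.0                   # estimated allele frequencies
    Z = M - 2.0 * p                            # centre each marker by twice its frequency
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(200, 5000))       # simulated genotypes for 200 individuals
G = vanraden_G(M)                              # used in place of the pedigree A matrix
```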

  18. Topology and the universe

    NASA Astrophysics Data System (ADS)

    Gott, J. Richard, III

    1998-09-01

    Topology may play an important role in cosmology in several different ways. First, Einstein's field equations tell us about the local geometry of the universe but not about its topology. Therefore, the universe may be multiply connected. Inflation predicts that the fluctuations that made clusters and groups of galaxies arose from random quantum fluctuations in the early universe. These should be Gaussian random phase. This can be tested by quantitatively measuring the topology of large-scale structure in the universe using the genus statistic. If the original fluctuations were Gaussian random phase then the structure we see today should have a spongelike topology. A number of studies by our group and others have shown that this is indeed the case. Future tests using the Sloan Digital Sky Survey should be possible. Microwave background fluctuations should also exhibit a characteristic symmetric pattern of hot and cold spots. The COBE data are consistent with this pattern and the MAP and PLANCK satellites should provide a definitive test. If the original inflationary state was metastable then it should decay by making an infinite number of open inflationary bubble universes. This model makes a specific prediction for the power spectrum of fluctuations in the microwave background which can be checked by the MAP and PLANCK satellites. Finally, Gott and Li have proposed how a multiply connected cosmology with an early epoch of closed timelike curves might allow the universe to be its own mother.

  19. Modified subaperture tool influence functions of a flat-pitch polisher with reverse-calculated material removal rate.

    PubMed

    Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen

    2014-04-10

    Numerical simulation of subaperture tool influence functions (TIF) is widely known as a critical procedure in computer-controlled optical surfacing. However, it may lack practicability in engineering because the emulation TIF (e-TIF) has some discrepancy with the practical TIF (p-TIF), and the removal rate could not be predicted by simulations. Prior to the polishing of a formal workpiece, opticians have to conduct TIF spot experiments on another sample to confirm the p-TIF with a quantitative removal rate, which is difficult and time-consuming for sequential polishing runs with different tools. This work is dedicated to applying these e-TIFs into practical engineering by making improvements from two aspects: (1) modifies the pressure distribution model of a flat-pitch polisher by finite element analysis and least square fitting methods to make the removal shape of e-TIFs closer to p-TIFs (less than 5% relative deviation validated by experiments); (2) predicts the removal rate of e-TIFs by reverse calculating the material removal volume of a pre-polishing run to the formal workpiece (relative deviations of peak and volume removal rate were validated to be less than 5%). This can omit TIF spot experiments for the particular flat-pitch tool employed and promote the direct usage of e-TIFs in the optimization of a dwell time map, which can largely save on cost and increase fabrication efficiency.

  20. Confronting uncertainty in flood damage predictions

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Merz, Bruno

    2015-04-01

    Reliable flood damage models are a prerequisite for the practical usefulness of the model results. Oftentimes, traditional uni-variate damage models as for instance depth-damage curves fail to reproduce the variability of observed flood damage. Innovative multi-variate probabilistic modelling approaches are promising to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks. For model evaluation we use empirical damage data which are available from computer aided telephone interviews that were respectively compiled after the floods in 2002, 2005 and 2006, in the Elbe and Danube catchments in Germany. We carry out a split sample test by sub-setting the damage records. One sub-set is used to derive the models and the remaining records are used to evaluate the predictive performance of the model. Further we stratify the sample according to catchments which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out on the scale of the individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error) as well as in terms of reliability which is represented by the proportion of the number of observations that fall within the 95-quantile and 5-quantile predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves a very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty which is crucial to assess the reliability of model predictions and improves the usefulness of model results.
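
    The split-sample evaluation described here (mean bias, mean absolute error, and coverage of the 5%-95% ensemble predictive interval) can be sketched with bagged regression trees in scikit-learn; the predictor table and damage values below are simulated placeholders, not the Elbe/Danube survey data.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 5))                       # e.g. water depth, duration, precaution
y = np.clip(0.3 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 0.1, 800), 0, None)  # relative damage

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=200,
                         random_state=0).fit(X_tr, y_tr)

# Per-tree predictions give an empirical predictive distribution for each building.
tree_preds = np.column_stack([t.predict(X_te) for t in model.estimators_])
point = tree_preds.mean(axis=1)
lo, hi = np.percentile(tree_preds, [5, 95], axis=1)

bias = np.mean(point - y_te)                        # systematic deviation
mae = np.mean(np.abs(point - y_te))                 # precision
coverage = np.mean((y_te >= lo) & (y_te <= hi))     # reliability of the predictive interval
print(bias, mae, coverage)
```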

  1. Three-dimensional structural modelling and calculation of electrostatic potentials of HLA Bw4 and Bw6 epitopes to explain the molecular basis for alloantibody binding: toward predicting HLA antigenicity and immunogenicity.

    PubMed

    Mallon, Dermot H; Bradley, J Andrew; Winn, Peter J; Taylor, Craig J; Kosmoliaptsis, Vasilis

    2015-02-01

    We have previously shown that qualitative assessment of surface electrostatic potential of HLA class I molecules helps explain serological patterns of alloantibody binding. We have now used a novel computational approach to quantitate differences in surface electrostatic potential of HLA B-cell epitopes and applied this to explain HLA Bw4 and Bw6 antigenicity. Protein structure models of HLA class I alleles expressing either the Bw4 or Bw6 epitope (defined by sequence motifs at positions 77 to 83) were generated using comparative structure prediction. The electrostatic potential in 3-dimensional space encompassing the Bw4/Bw6 epitope was computed by solving the Poisson-Boltzmann equation and quantitatively compared in a pairwise, all-versus-all fashion to produce distance matrices that cluster epitopes with similar electrostatics properties. Quantitative comparison of surface electrostatic potential at the carboxyl terminal of the α1-helix of HLA class I alleles, corresponding to amino acid sequence motif 77 to 83, produced clustering of HLA molecules in 3 principal groups according to Bw4 or Bw6 epitope expression. Remarkably, quantitative differences in electrostatic potential reflected known patterns of serological reactivity better than Bw4/Bw6 amino acid sequence motifs. Quantitative assessment of epitope electrostatic potential allowed the impact of known amino acid substitutions (HLA-B*07:02 R79G, R82L, G83R) that are critical for antibody binding to be predicted. We describe a novel approach for quantitating differences in HLA B-cell epitope electrostatic potential. Proof of principle is provided that this approach enables better assessment of HLA epitope antigenicity than amino acid sequence data alone, and it may allow prediction of HLA immunogenicity.
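
    The all-versus-all comparison and clustering step can be reproduced schematically: flatten each allele's computed potential over the epitope region, build a pairwise distance matrix, and cluster it hierarchically. The potential grids and allele labels below are random placeholders standing in for Poisson-Boltzmann output.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
alleles = [f"allele_{i:02d}" for i in range(12)]         # hypothetical labels
potentials = rng.normal(size=(12, 4000))                 # one flattened potential grid per allele

d = pdist(potentials, metric="euclidean")                # all-versus-all distances
Z = linkage(d, method="average")                         # hierarchical clustering
groups = fcluster(Z, t=3, criterion="maxclust")          # e.g. three principal groups
print(dict(zip(alleles, groups)))
```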

  2. Electroencephalography and quantitative electroencephalography in mild traumatic brain injury.

    PubMed

    Haneef, Zulfi; Levin, Harvey S; Frost, James D; Mizrahi, Eli M

    2013-04-15

    Mild traumatic brain injury (mTBI) causes brain injury resulting in electrophysiologic abnormalities visible in electroencephalography (EEG) recordings. Quantitative EEG (qEEG) makes use of quantitative techniques to analyze EEG characteristics such as frequency, amplitude, coherence, power, phase, and symmetry over time independently or in combination. QEEG has been evaluated for its use in making a diagnosis of mTBI and assessing prognosis, including the likelihood of progressing to the postconcussive syndrome (PCS) phase. We review the EEG and qEEG changes of mTBI described in the literature. An attempt is made to separate the findings seen during the acute, subacute, and chronic phases after mTBI. Brief mention is also made of the neurobiological correlates of qEEG using neuroimaging techniques or in histopathology. Although the literature indicates the promise of qEEG in making a diagnosis and indicating prognosis of mTBI, further study is needed to corroborate and refine these methods.
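
    As a concrete example of the quantitative features qEEG derives from a raw trace, relative band power can be computed with a Welch spectrum; the sampling rate, band edges, and simulated signal below are generic choices rather than a clinical protocol.

```python
import numpy as np
from scipy.signal import welch

fs = 256.0                                    # sampling rate (Hz)
rng = np.random.default_rng(0)
eeg = rng.normal(size=int(60 * fs))           # one minute of a single simulated channel

f, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))   # Welch spectrum with 4-second segments

def band_power(f, psd, lo, hi):
    sel = (f >= lo) & (f < hi)
    return psd[sel].sum()                     # bin width cancels in the relative ratio

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
total = band_power(f, psd, 1, 30)
rel_power = {name: band_power(f, psd, lo, hi) / total for name, (lo, hi) in bands.items()}
print(rel_power)                              # relative power per frequency band
```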

  3. Electroencephalography and Quantitative Electroencephalography in Mild Traumatic Brain Injury

    PubMed Central

    Levin, Harvey S.; Frost, James D.; Mizrahi, Eli M.

    2013-01-01

    Mild traumatic brain injury (mTBI) causes brain injury resulting in electrophysiologic abnormalities visible in electroencephalography (EEG) recordings. Quantitative EEG (qEEG) makes use of quantitative techniques to analyze EEG characteristics such as frequency, amplitude, coherence, power, phase, and symmetry over time independently or in combination. QEEG has been evaluated for its use in making a diagnosis of mTBI and assessing prognosis, including the likelihood of progressing to the postconcussive syndrome (PCS) phase. We review the EEG and qEEG changes of mTBI described in the literature. An attempt is made to separate the findings seen during the acute, subacute, and chronic phases after mTBI. Brief mention is also made of the neurobiological correlates of qEEG using neuroimaging techniques or in histopathology. Although the literature indicates the promise of qEEG in making a diagnosis and indicating prognosis of mTBI, further study is needed to corroborate and refine these methods. PMID:23249295

  4. Nanoparticle surface characterization and clustering through concentration-dependent surface adsorption modeling.

    PubMed

    Chen, Ran; Zhang, Yuntao; Sahneh, Faryad Darabi; Scoglio, Caterina M; Wohlleben, Wendel; Haase, Andrea; Monteiro-Riviere, Nancy A; Riviere, Jim E

    2014-09-23

    Quantitative characterization of nanoparticle interactions with their surrounding environment is vital for safe nanotechnological development and standardization. A recent quantitative measure, the biological surface adsorption index (BSAI), has demonstrated promising applications in nanomaterial surface characterization and biological/environmental prediction. This paper further advances the approach beyond the application of five descriptors in the original BSAI to address the concentration dependence of the descriptors, enabling better prediction of the adsorption profile and more accurate categorization of nanomaterials based on their surface properties. Statistical analysis on the obtained adsorption data was performed based on three different models: the original BSAI, a concentration-dependent polynomial model, and an infinite dilution model. These advancements in BSAI modeling showed a promising development in the application of quantitative predictive modeling in biological applications, nanomedicine, and environmental safety assessment of nanomaterials.

  5. Quantitative Methods Intervention: What Do the Students Want?

    ERIC Educational Resources Information Center

    Frankland, Lianne; Harrison, Jacqui

    2016-01-01

    The shortage of social science graduates with competent quantitative skills jeopardises the competitive UK economy, public policy making effectiveness and the status the UK has as a world leader in higher education and research (British Academy for Humanities and Social Sciences, 2012). There is a growing demand for quantitative skills across all…

  6. 3D-quantitative structure-activity relationship studies on benzothiadiazepine hydroxamates as inhibitors of tumor necrosis factor-alpha converting enzyme.

    PubMed

    Murumkar, Prashant R; Giridhar, Rajani; Yadav, Mange Ram

    2008-04-01

    A set of 29 benzothiadiazepine hydroxamates having selective tumor necrosis factor-alpha converting enzyme inhibitory activity were used to compare the quality and predictive power of 3D-quantitative structure-activity relationship, comparative molecular field analysis, and comparative molecular similarity indices models for the atom-based, centroid/atom-based, data-based, and docked conformer-based alignment. Removal of two outliers from the initial training set of molecules improved the predictivity of models. Among the 3D-quantitative structure-activity relationship models developed using the above four alignments, the database alignment provided the optimal predictive comparative molecular field analysis model for the training set with cross-validated r(2) (q(2)) = 0.510, non-cross-validated r(2) = 0.972, standard error of estimates (s) = 0.098, and F = 215.44 and the optimal comparative molecular similarity indices model with cross-validated r(2) (q(2)) = 0.556, non-cross-validated r(2) = 0.946, standard error of estimates (s) = 0.163, and F = 99.785. These models also showed the best test set prediction for six compounds with predictive r(2) values of 0.460 and 0.535, respectively. The contour maps obtained from 3D-quantitative structure-activity relationship studies were appraised for activity trends for the molecules analyzed. The comparative molecular similarity indices models exhibited good external predictivity as compared with that of comparative molecular field analysis models. The data generated from the present study helped us to further design and report some novel and potent tumor necrosis factor-alpha converting enzyme inhibitors.
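
    The cross-validated r(2) (q(2)) reported for these models has a simple definition, q(2) = 1 - PRESS/SS, which can be computed for any regression model by leave-one-out prediction; a generic sketch with a PLS model on stand-in data (not the benzothiadiazepine descriptors) is:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(27, 120))                 # e.g. field/similarity descriptors per molecule
y = rng.normal(size=27)                        # activity values (placeholder)

model = PLSRegression(n_components=5)
y_loo = cross_val_predict(model, X, y, cv=LeaveOneOut()).ravel()  # leave-one-out predictions

press = np.sum((y - y_loo) ** 2)               # predictive residual sum of squares
ss = np.sum((y - y.mean()) ** 2)
q2 = 1.0 - press / ss                          # cross-validated r^2
print(round(q2, 3))
```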

  7. Building gene expression signatures indicative of transcription factor activation to predict AOP modulation

    EPA Science Inventory

    Building gene expression signatures indicative of transcription factor activation to predict AOP modulation Adverse outcome pathways (AOPs) are a framework for predicting quantitative relationships between molecular initiatin...

  8. The Dopamine Prediction Error: Contributions to Associative Models of Reward Learning

    PubMed Central

    Nasser, Helen M.; Calu, Donna J.; Schoenbaum, Geoffrey; Sharpe, Melissa J.

    2017-01-01

    Phasic activity of midbrain dopamine neurons is currently thought to encapsulate the prediction-error signal described in Sutton and Barto’s (1981) model-free reinforcement learning algorithm. This phasic signal is thought to contain information about the quantitative value of reward, which transfers to the reward-predictive cue after learning. This is argued to endow the reward-predictive cue with the value inherent in the reward, motivating behavior toward cues signaling the presence of reward. Yet theoretical and empirical research has implicated prediction-error signaling in learning that extends far beyond a transfer of quantitative value to a reward-predictive cue. Here, we review the research which demonstrates the complexity of how dopaminergic prediction errors facilitate learning. After briefly discussing the literature demonstrating that phasic dopaminergic signals can act in the manner described by Sutton and Barto (1981), we consider how these signals may also influence attentional processing across multiple attentional systems in distinct brain circuits. Then, we discuss how prediction errors encode and promote the development of context-specific associations between cues and rewards. Finally, we consider recent evidence that shows dopaminergic activity contains information about causal relationships between cues and rewards that reflect information garnered from rich associative models of the world that can be adapted in the absence of direct experience. In discussing this research we hope to support the expansion of how dopaminergic prediction errors are thought to contribute to the learning process beyond the traditional concept of transferring quantitative value. PMID:28275359
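
    A standard way to illustrate the prediction-error signal discussed here is the temporal-difference value update used in model-free reinforcement learning; the learning rate, discount factor, and toy cue-reward sequence below are arbitrary illustrative choices, not a model of the reviewed experiments.

```python
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.95):
    """One temporal-difference update; V is a dict of state values."""
    delta = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)   # prediction error
    V[s] = V.get(s, 0.0) + alpha * delta                     # value moves toward the target
    return delta

# A cue ("light") repeatedly followed by reward: value (and hence the error)
# migrates from the reward to the reward-predictive cue over trials.
V = {}
for _ in range(200):
    td_update(V, "light", r=0.0, s_next="reward_state")
    td_update(V, "reward_state", r=1.0, s_next="terminal")
print(V)   # the cue acquires roughly the discounted reward value
```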

  9. State-of-the-art radiological techniques improve the assessment of postoperative lung function in patients with non-small cell lung cancer.

    PubMed

    Ohno, Yoshiharu; Koyama, Hisanobu; Nogami, Munenobu; Takenaka, Daisuke; Onishi, Yumiko; Matsumoto, Keiko; Matsumoto, Sumiaki; Maniwa, Yoshimasa; Yoshimura, Masahiro; Nishimura, Yoshihiro; Sugimura, Kazuro

    2011-01-01

    The purpose of this study was to compare predictive capabilities for postoperative lung function in non-small cell lung cancer (NSCLC) patients of the state-of-the-art radiological methods including perfusion MRI, quantitative CT and SPECT/CT with that of anatomical method (i.e. qualitative CT) and traditional nuclear medicine methods such as planar imaging and SPECT. Perfusion MRI, CT, nuclear medicine study and measurements of %FEV(1) before and after lung resection were performed for 229 NSCLC patients (125 men and 104 women). For perfusion MRI, postoperative %FEV(1) (po%FEV(1)) was predicted from semi-quantitatively assessed blood volumes within total and resected lungs, for quantitative CT, it was predicted from the functional lung volumes within total and resected lungs, for qualitative CT, from the number of segments of total and resected lungs, and for nuclear medicine studies, from uptakes within total and resected lungs. All SPECTs were automatically co-registered with CTs for preparation of SPECT/CTs. Predicted po%FEV(1)s were then correlated with actual po%FEV(1)s, which were measured %FEV(1)s after operation. The limits of agreement were also evaluated. All predicted po%FEV(1)s showed good correlation with actual po%FEV(1)s (0.83≤r≤0.88, p<0.0001). Perfusion MRI, quantitative CT and SPECT/CT demonstrated better correlation than other methods. The limits of agreement of perfusion MRI (4.4±14.2%), quantitative CT (4.7±14.2%) and SPECT/CT (5.1±14.7%) were less than those of qualitative CT (6.0±17.4%), planar imaging (5.8±18.2%), and SPECT (5.5±16.8%). State-of-the-art radiological methods can predict postoperative lung function in NSCLC patients more accurately than traditional methods. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
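
    All of the compared methods reduce to the same arithmetic: predicted postoperative %FEV1 is the preoperative value scaled by the fraction of function that remains after resection, with that fraction estimated from segment counts, functional lung volume, blood volume, or uptake. A generic sketch (the numbers are illustrative, not study data):

```python
def predicted_po_fev1(pre_fev1_pct, resected_contribution, total_contribution):
    """Predicted postoperative %FEV1.

    resected_contribution / total_contribution may be segment counts (qualitative CT),
    functional lung volume (quantitative CT), blood volume (perfusion MRI),
    or uptake (planar scintigraphy, SPECT, SPECT/CT).
    """
    remaining_fraction = 1.0 - resected_contribution / total_contribution
    return pre_fev1_pct * remaining_fraction

# Example: 70% predicted FEV1 before surgery, resecting 4 of 19 segments.
print(predicted_po_fev1(70.0, 4, 19))   # about 55.3%
```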

  10. Quantitative prediction of oral cancer risk in patients with oral leukoplakia.

    PubMed

    Liu, Yao; Li, Yicheng; Fu, Yue; Liu, Tong; Liu, Xiaoyong; Zhang, Xinyan; Fu, Jie; Guan, Xiaobing; Chen, Tong; Chen, Xiaoxin; Sun, Zheng

    2017-07-11

    Exfoliative cytology has been widely used for early diagnosis of oral squamous cell carcinoma. We have developed an oral cancer risk index using DNA index value to quantitatively assess cancer risk in patients with oral leukoplakia, but with limited success. In order to improve the performance of the risk index, we collected exfoliative cytology, histopathology, and clinical follow-up data from two independent cohorts of normal, leukoplakia and cancer subjects (training set and validation set). Peaks were defined on the basis of first derivatives with positives, and modern machine learning techniques were utilized to build statistical prediction models on the reconstructed data. Random forest was found to be the best model with high sensitivity (100%) and specificity (99.2%). Using the Peaks-Random Forest model, we constructed an index (OCRI2) as a quantitative measurement of cancer risk. Among 11 leukoplakia patients with an OCRI2 over 0.5, 4 (36.4%) developed cancer during follow-up (23 ± 20 months), whereas 3 (5.3%) of 57 leukoplakia patients with an OCRI2 less than 0.5 developed cancer (32 ± 31 months). OCRI2 is better than other methods in predicting oral squamous cell carcinoma during follow-up. In conclusion, we have developed an exfoliative cytology-based method for quantitative prediction of cancer risk in patients with oral leukoplakia.
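
    A risk index of this general type, a number in [0, 1] taken from a classifier's predicted class probability and thresholded at 0.5, can be sketched as follows; the features and labels are simulated stand-ins for the cytology-derived peak data, and this is not the published OCRI2 pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                  # peak features from exfoliative cytology (stand-in)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 300) > 1).astype(int)  # outcome label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

risk_index = rf.predict_proba(X_te)[:, 1]       # continuous risk score in [0, 1]
high_risk = risk_index > 0.5                    # patients flagged for closer follow-up
print(risk_index[:5], high_risk[:5])
```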

  11. Quantitation without Calibration: Response Profile as an Indicator of Target Amount.

    PubMed

    Debnath, Mrittika; Farace, Jessica M; Johnson, Kristopher D; Nesterova, Irina V

    2018-06-21

    Quantitative assessment of biomarkers is essential in numerous contexts from decision-making in clinical situations to food quality monitoring to interpretation of life-science research findings. However, appropriate quantitation techniques are not as widely addressed as detection methods. One of the major challenges in biomarker quantitation is the need for a calibration that correlates a measured signal to a target amount. This step complicates the methodologies and makes them less sustainable. In this work we address the issue via a new strategy: relying on the position of a response profile rather than on an absolute signal value for assessment of a target's amount. In order to enable this capability we develop a target-probe binding mechanism based on a negative cooperativity effect. A proof-of-concept example demonstrates that the model is suitable for quantitative analysis of nucleic acids over a wide concentration range. The general principles of the platform will be applicable toward a variety of biomarkers such as nucleic acids, proteins, peptides, and others.

  12. Qualification Testing Versus Quantitative Reliability Testing of PV - Gaining Confidence in a Rapidly Changing Technology: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurtz, Sarah; Repins, Ingrid L; Hacke, Peter L

    Continued growth of PV system deployment would be enhanced by quantitative, low-uncertainty predictions of the degradation and failure rates of PV modules and systems. The intended product lifetime (decades) far exceeds the product development cycle (months), limiting our ability to reduce the uncertainty of the predictions for this rapidly changing technology. Yet, business decisions (setting insurance rates, analyzing return on investment, etc.) require quantitative risk assessment. Moving toward more quantitative assessments requires consideration of many factors, including the intended application, consequence of a possible failure, variability in the manufacturing, installation, and operation, as well as uncertainty in the measured acceleration factors, which provide the basis for predictions based on accelerated tests. As the industry matures, it is useful to periodically assess the overall strategy for standards development and prioritization of research to provide a technical basis both for the standards and the analysis related to the application of those. To this end, this paper suggests a tiered approach to creating risk assessments. Recent and planned potential improvements in international standards are also summarized.

  13. A Synthesis Of Knowledge About Caregiver Decision Making Finds Gaps In Support For Those Who Care For Aging Loved Ones.

    PubMed

    Garvelink, Mirjam M; Ngangue, Patrice A G; Adekpedjou, Rheda; Diouf, Ndeye T; Goh, Larissa; Blair, Louisa; Légaré, France

    2016-04-01

    We conducted a mixed-methods knowledge synthesis to assess the effectiveness of interventions to improve caregivers' involvement in decision making with seniors, and to describe caregivers' experiences of decision making in the absence of interventions. We analyzed forty-nine qualitative, fourteen quantitative, and three mixed-methods studies. The qualitative studies indicated that caregivers had unmet needs for information, discussions of values and needs, and decision support, which led to negative sentiments after decision making. Our results indicate that there have been insufficient quantitative evaluations of interventions to involve caregivers in decision making with seniors and that the evaluations that do exist found few clinically significant effects. Elements of usual care that received positive evaluations were the availability of a decision coach and a supportive decision-making environment. Additional rigorously evaluated interventions are needed to help caregivers be more involved in decision making with seniors. Project HOPE—The People-to-People Health Foundation, Inc.

  14. Has Lean improved organizational decision making?

    PubMed

    Simons, Pascale; Benders, Jos; Bergs, Jochen; Marneffe, Wim; Vandijck, Dominique

    2016-06-13

    Purpose - Sustainable improvement is likely to be hampered by ambiguous objectives and uncertain cause-effect relations in care processes (the organization's decision-making context). Lean management can improve implementation results because it decreases ambiguity and uncertainties. But does it succeed? Many quality improvement (QI) initiatives are appropriate improvement strategies in organizational contexts characterized by low ambiguity and uncertainty. However, most care settings do not fit this context. The purpose of this paper is to investigate whether a Lean-inspired change program changed the organization's decision-making context, making it more amenable for QI initiatives. Design/methodology/approach - In 2014, 12 professionals from a Dutch radiotherapy institute were interviewed regarding their perceptions of a Lean program in their organization and the perceived ambiguous objectives and uncertain cause-effect relations in their clinical processes. A survey (25 questions), addressing the same concepts, was conducted among the interviewees in 2011 and 2014. The structured interviews were analyzed using a deductive approach. Quantitative data were analyzed using appropriate statistics. Findings - Interviewees experienced improved shared visions and the number of uncertain cause-effect relations decreased. Overall, more positive (99) than negative Lean effects (18) were expressed. The surveys revealed enhanced process predictability and standardization, and improved shared visions. Practical implications - Lean implementation has been shown to lead to greater transparency and increased shared visions. Originality/value - Lean management decreased ambiguous objectives and reduced uncertainties in clinical process cause-effect relations. Therefore, decision making benefitted from Lean increasing QI's sustainability.

  15. Impact of implementation choices on quantitative predictions of cell-based computational models

    NASA Astrophysics Data System (ADS)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.

  16. Quantitative chest computed tomography as a means of predicting exercise performance in severe emphysema.

    PubMed

    Crausman, R S; Ferguson, G; Irvin, C G; Make, B; Newell, J D

    1995-06-01

    We assessed the value of quantitative high-resolution computed tomography (CT) as a diagnostic and prognostic tool in smoking-related emphysema. We performed an inception cohort study of 14 patients referred with emphysema. The diagnosis of emphysema was based on a compatible history, physical examination, chest radiograph, CT scan of the lung, and pulmonary physiologic evaluation. As a group, those who underwent exercise testing were hyperinflated (percentage predicted total lung capacity +/- standard error of the mean = 133 +/- 9%), and there was evidence of air trapping (percentage predicted residual volume = 318 +/- 31%) and airflow limitation (forced expiratory volume in 1 sec [FEV1] = 40 +/- 7%). The exercise performance of the group was severely limited (maximum achievable workload = 43 +/- 6%) and was characterized by prominent ventilatory, gas exchange, and pulmonary vascular abnormalities. The quantitative CT index was markedly elevated in all patients (76 +/- 9; n = 14; normal < 4). There were correlations between this quantitative CT index and measures of airflow limitation (FEV1 r2 = .34, p = .09; FEV1/forced vital capacity r2 = .46, p = .04) and between maximum workload achieved (r2 = .93, p = .0001) and maximum oxygen utilization (r2 = .83, p = .0007). Quantitative chest CT assessment of disease severity is correlated with the degree of airflow limitation and exercise impairment in pulmonary emphysema.

  17. Guidelines for improving the reproducibility of quantitative multiparameter immunofluorescence measurements by laser scanning cytometry on fixed cell suspensions from human solid tumors.

    PubMed

    Shackney, Stanley; Emlet, David R; Pollice, Agnese; Smith, Charles; Brown, Kathryn; Kociban, Deborah

    2006-01-01

    Laser scanning Cytometry (LSC) is a versatile technology that makes it possible to perform multiple measurements on individual cells and correlate them cell by cell with other cellular features. It would be highly desirable to be able to perform reproducible, quantitative, correlated cell-based immunofluorescence studies on individual cells from human solid tumors. However, such studies can be challenging because of the presence of large numbers of cell aggregates and other confounding factors. Techniques have been developed to deal with cell aggregates in data sets collected by LSC. Experience has also been gained in addressing other key technical and methodological issues that can affect the reproducibility of such cell-based immunofluorescence measurements. We describe practical aspects of cell sample collection, cell fixation and staining, protocols for performing multiparameter immunofluorescence measurements by LSC, use of controls and reference samples, and approaches to data analysis that we have found useful in improving the accuracy and reproducibility of LSC data obtained in human tumor samples. We provide examples of the potential advantages of LSC in examining quantitative aspects of cell-based analysis. Improvements in the quality of cell-based multiparameter immunofluorescence measurements make it possible to extract useful information from relatively small numbers of cells. This, in turn, permits the performance of multiple multicolor panels on each tumor sample. With links among the different panels that are provided by overlapping measurements, it is possible to develop increasingly more extensive profiles of intracellular expression of multiple proteins in clinical samples of human solid tumors. Examples of such linked panels of measurements are provided. Advances in methodology can improve cell-based multiparameter immunofluorescence measurements on cell suspensions from human solid tumors by LSC for use in prognostic and predictive clinical applications. Copyright (c) 2005 Wiley-Liss, Inc.

  18. Kernel-based whole-genome prediction of complex traits: a review.

    PubMed

    Morota, Gota; Gianola, Daniel

    2014-01-01

    Prediction of genetic values has been a focus of applied quantitative genetics since the beginning of the 20th century, with renewed interest following the advent of the era of whole genome-enabled prediction. Opportunities offered by the emergence of high-dimensional genomic data fueled by post-Sanger sequencing technologies, especially molecular markers, have driven researchers to extend Ronald Fisher and Sewall Wright's models to confront new challenges. In particular, kernel methods are gaining consideration as a regression method of choice for genome-enabled prediction. Complex traits are presumably influenced by many genomic regions working in concert with others (clearly so when considering pathways), thus generating interactions. Motivated by this view, a growing number of statistical approaches based on kernels attempt to capture non-additive effects, either parametrically or non-parametrically. This review centers on whole-genome regression using kernel methods applied to a wide range of quantitative traits of agricultural importance in animals and plants. We discuss various kernel-based approaches tailored to capturing total genetic variation, with the aim of arriving at an enhanced predictive performance in the light of available genome annotation information. Connections between prediction machines born in animal breeding, statistics, and machine learning are revisited, and their empirical prediction performance is discussed. Overall, while some encouraging results have been obtained with non-parametric kernels, recovering non-additive genetic variation in a validation dataset remains a challenge in quantitative genetics.
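
    One common concrete instance of the kernel regressions reviewed here is Gaussian (RBF) kernel ridge regression on a marker matrix, closely related to RKHS prediction; a minimal sketch with simulated genotypes and phenotypes (not a specific published analysis) is:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
M = rng.integers(0, 3, size=(400, 2000)).astype(float)        # marker counts coded 0/1/2
y = M[:, :50] @ rng.normal(size=50) + rng.normal(0, 5, 400)   # simulated phenotype

M_tr, M_te, y_tr, y_te = train_test_split(M, y, random_state=0)
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=1.0 / M.shape[1])  # Gaussian kernel
model.fit(M_tr, y_tr)

corr = np.corrcoef(model.predict(M_te), y_te)[0, 1]           # predictive ability in validation set
print(round(corr, 3))
```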

  19. Prediction of MHC class II binding affinity using SMM-align, a novel stabilization matrix alignment method

    PubMed Central

    Nielsen, Morten; Lundegaard, Claus; Lund, Ole

    2007-01-01

    Background Antigen presenting cells (APCs) sample the extra cellular space and present peptides from here to T helper cells, which can be activated if the peptides are of foreign origin. The peptides are presented on the surface of the cells in complex with major histocompatibility class II (MHC II) molecules. Identification of peptides that bind MHC II molecules is thus a key step in rational vaccine design and developing methods for accurate prediction of the peptide:MHC interactions play a central role in epitope discovery. The MHC class II binding groove is open at both ends making the correct alignment of a peptide in the binding groove a crucial part of identifying the core of an MHC class II binding motif. Here, we present a novel stabilization matrix alignment method, SMM-align, that allows for direct prediction of peptide:MHC binding affinities. The predictive performance of the method is validated on a large MHC class II benchmark data set covering 14 HLA-DR (human MHC) and three mouse H2-IA alleles. Results The predictive performance of the SMM-align method was demonstrated to be superior to that of the Gibbs sampler, TEPITOPE, SVRMHC, and MHCpred methods. Cross validation between peptide data set obtained from different sources demonstrated that direct incorporation of peptide length potentially results in over-fitting of the binding prediction method. Focusing on amino terminal peptide flanking residues (PFR), we demonstrate a consistent gain in predictive performance by favoring binding registers with a minimum PFR length of two amino acids. Visualizing the binding motif as obtained by the SMM-align and TEPITOPE methods highlights a series of fundamental discrepancies between the two predicted motifs. For the DRB1*1302 allele for instance, the TEPITOPE method favors basic amino acids at most anchor positions, whereas the SMM-align method identifies a preference for hydrophobic or neutral amino acids at the anchors. Conclusion The SMM-align method was shown to outperform other state of the art MHC class II prediction methods. The method predicts quantitative peptide:MHC binding affinity values, making it ideally suited for rational epitope discovery. The method has been trained and evaluated on the, to our knowledge, largest benchmark data set publicly available and covers the nine HLA-DR supertypes suggested as well as three mouse H2-IA allele. Both the peptide benchmark data set, and SMM-align prediction method (NetMHCII) are made publicly available. PMID:17608956

  20. Prediction of MHC class II binding affinity using SMM-align, a novel stabilization matrix alignment method.

    PubMed

    Nielsen, Morten; Lundegaard, Claus; Lund, Ole

    2007-07-04

    Antigen presenting cells (APCs) sample the extra cellular space and present peptides from here to T helper cells, which can be activated if the peptides are of foreign origin. The peptides are presented on the surface of the cells in complex with major histocompatibility class II (MHC II) molecules. Identification of peptides that bind MHC II molecules is thus a key step in rational vaccine design and developing methods for accurate prediction of the peptide:MHC interactions play a central role in epitope discovery. The MHC class II binding groove is open at both ends making the correct alignment of a peptide in the binding groove a crucial part of identifying the core of an MHC class II binding motif. Here, we present a novel stabilization matrix alignment method, SMM-align, that allows for direct prediction of peptide:MHC binding affinities. The predictive performance of the method is validated on a large MHC class II benchmark data set covering 14 HLA-DR (human MHC) and three mouse H2-IA alleles. The predictive performance of the SMM-align method was demonstrated to be superior to that of the Gibbs sampler, TEPITOPE, SVRMHC, and MHCpred methods. Cross validation between peptide data set obtained from different sources demonstrated that direct incorporation of peptide length potentially results in over-fitting of the binding prediction method. Focusing on amino terminal peptide flanking residues (PFR), we demonstrate a consistent gain in predictive performance by favoring binding registers with a minimum PFR length of two amino acids. Visualizing the binding motif as obtained by the SMM-align and TEPITOPE methods highlights a series of fundamental discrepancies between the two predicted motifs. For the DRB1*1302 allele for instance, the TEPITOPE method favors basic amino acids at most anchor positions, whereas the SMM-align method identifies a preference for hydrophobic or neutral amino acids at the anchors. The SMM-align method was shown to outperform other state of the art MHC class II prediction methods. The method predicts quantitative peptide:MHC binding affinity values, making it ideally suited for rational epitope discovery. The method has been trained and evaluated on the, to our knowledge, largest benchmark data set publicly available and covers the nine HLA-DR supertypes suggested as well as three mouse H2-IA allele. Both the peptide benchmark data set, and SMM-align prediction method (NetMHCII) are made publicly available.

  1. Measuring and Modeling Behavioral Decision Dynamics in Collective Evacuation

    PubMed Central

    Carlson, Jean M.; Alderson, David L.; Stromberg, Sean P.; Bassett, Danielle S.; Craparo, Emily M.; Guiterrez-Villarreal, Francisco; Otani, Thomas

    2014-01-01

    Identifying and quantifying factors influencing human decision making remains an outstanding challenge, impacting the performance and predictability of social and technological systems. In many cases, system failures are traced to human factors including congestion, overload, miscommunication, and delays. Here we report results of a behavioral network science experiment, targeting decision making in a natural disaster. Our results quantify several key factors influencing individual evacuation decision making in a controlled laboratory setting. The experiment includes tensions between broadcast and peer-to-peer information, and contrasts the effects of temporal urgency associated with the imminence of the disaster and the effects of limited shelter capacity for evacuees. Based on empirical measurements of the cumulative rate of evacuations as a function of the instantaneous disaster likelihood, we develop a quantitative model for decision making that captures remarkably well the main features of observed collective behavior across many different scenarios. Moreover, this model captures the sensitivity of individual- and population-level decision behaviors to external pressures, and systematic deviations from the model provide meaningful estimates of variability in the collective response. Identification of robust methods for quantifying human decisions in the face of risk has implications for policy in disasters and other threat scenarios, specifically the development and testing of robust strategies for training and control of evacuations that account for human behavior and network topologies. PMID:24520331

  2. Near infrared spectroscopy as an on-line method to quantitatively determine glycogen and predict ultimate pH in pre rigor bovine M. longissimus dorsi.

    PubMed

    Lomiwes, D; Reis, M M; Wiklund, E; Young, O A; North, M

    2010-12-01

    The potential of near infrared (NIR) spectroscopy as an on-line method to quantify glycogen and predict ultimate pH (pH(u)) of pre rigor beef M. longissimus dorsi (LD) was assessed. NIR spectra (538 to 1677 nm) of pre rigor LD from steers, cows and bulls were collected early post mortem and measurements were made for pre rigor glycogen concentration and pH(u). Spectral and measured data were combined to develop models to quantify glycogen and predict the pH(u) of pre rigor LD. NIR spectra and pre rigor predicted values obtained from quantitative models were shown to be poorly correlated with glycogen and pH(u) (r(2)=0.23 and 0.20, respectively). Qualitative models developed to categorize each muscle according to their pH(u) were able to correctly categorize 42% of high pH(u) samples. Optimum qualitative and quantitative models derived from NIR spectra found low correlation between predicted values and reference measurements. Copyright © 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.

  3. Using metal-ligand binding characteristics to predict metal toxicity: quantitative ion character-activity relationships (QICARs).

    PubMed Central

    Newman, M C; McCloskey, J T; Tatara, C P

    1998-01-01

    Ecological risk assessment can be enhanced with predictive models for metal toxicity. Modeling of published data was done under the simplifying assumption that intermetal trends in toxicity reflect relative metal-ligand complex stabilities. This idea has been invoked successfully since 1904 but has yet to be applied widely in quantitative ecotoxicology. Intermetal trends in toxicity were successfully modeled with ion characteristics reflecting metal binding to ligands for a wide range of effects. Most models were useful for predictive purposes based on an F-ratio criterion and cross-validation, but anomalous predictions did occur if speciation was ignored. In general, models for metals with the same valence (i.e., divalent metals) were better than those combining mono-, di-, and trivalent metals. The softness parameter (sigma p) and the absolute value of the log of the first hydrolysis constant (|log KOH|) were especially useful in model construction. Also, delta E0 contributed substantially to several of the two-variable models. In contrast, quantitative attempts to predict metal interactions in binary mixtures based on metal-ligand complex stabilities were not successful. PMID:9860900
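
    The QICAR models themselves are ordinary regressions of a toxicity endpoint on ion characteristics such as the softness parameter sigma p and |log KOH|; a two-variable sketch in which all numeric values are invented placeholders (not the published data) is:

```python
import numpy as np

# Hypothetical table: one row per metal ion (values are placeholders).
softness = np.array([0.10, 0.13, 0.08, 0.21, 0.15, 0.30, 0.25])      # sigma_p
abs_log_koh = np.array([11.7, 10.0, 12.7, 7.7, 9.0, 4.0, 6.3])       # |log KOH|
log_toxicity = np.array([2.1, 1.8, 2.6, 0.9, 1.4, -0.2, 0.5])        # e.g. log EC50

X = np.column_stack([np.ones_like(softness), softness, abs_log_koh])  # intercept + 2 predictors
beta, *_ = np.linalg.lstsq(X, log_toxicity, rcond=None)
fitted = X @ beta
r2 = 1 - np.sum((log_toxicity - fitted) ** 2) / np.sum((log_toxicity - log_toxicity.mean()) ** 2)
print(beta, round(r2, 2))
```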

  4. What do we gain with Probabilistic Flood Loss Models?

    NASA Astrophysics Data System (ADS)

    Schroeter, K.; Kreibich, H.; Vogel, K.; Merz, B.; Lüdtke, S.

    2015-12-01

    The reliability of flood loss models is a prerequisite for their practical usefulness. Oftentimes, traditional uni-variate damage models as for instance depth-damage curves fail to reproduce the variability of observed flood damage. Innovative multi-variate probabilistic modelling approaches are promising to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks and traditional stage damage functions which are cast in a probabilistic framework. For model evaluation we use empirical damage data which are available from computer aided telephone interviews that were respectively compiled after the floods in 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split sample test by sub-setting the damage records. One sub-set is used to derive the models and the remaining records are used to evaluate the predictive performance of the model. Further we stratify the sample according to catchments which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out on the scale of the individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error) as well as in terms of reliability which is represented by the proportion of the number of observations that fall within the 95-quantile and 5-quantile predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves a very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty which is crucial to assess the reliability of model predictions and improves the usefulness of model results.

  5. Determination of quantitative trait variants by concordance via application of the a posteriori granddaughter design to the U.S. Holstein population

    USDA-ARS?s Scientific Manuscript database

    Experimental designs that exploit family information can provide substantial predictive power in quantitative trait variant discovery projects. Concordance between quantitative trait locus genotype as determined by the a posteriori granddaughter design and marker genotype was determined for 29 trai...

  6. Qualitative Versus Quantitative Social Support as a Predictor of Depression in the Elderly.

    ERIC Educational Resources Information Center

    Chwalisz, Kathleen D.; And Others

    This study examined the relationship between qualitative and quantitative indicators of social support in the prediction of depression. Quantitative indicators were examined with regard to their direct effects on depression as well as their indirect effects through their relationship to perceived social support. Subjects were 301…

  7. The Impact of Quantitative Data Provided by a Multi-spectral Digital Skin Lesion Analysis Device on Dermatologists' Decisions to Biopsy Pigmented Lesions.

    PubMed

    Farberg, Aaron S; Winkelmann, Richard R; Tucker, Natalie; White, Richard; Rigel, Darrell S

    2017-09-01

    BACKGROUND: Early diagnosis of melanoma is critical to survival. New technologies, such as a multi-spectral digital skin lesion analysis (MSDSLA) device [MelaFind, STRATA Skin Sciences, Horsham, Pennsylvania] may be useful to enhance clinician evaluation of concerning pigmented skin lesions. Previous studies evaluated the effect of only the binary output. OBJECTIVE: The objective of this study was to determine how decisions dermatologists make regarding pigmented lesion biopsies are impacted by providing both the underlying classifier score (CS) and associated probability risk provided by multi-spectral digital skin lesion analysis. This outcome was also compared against the improvement reported with the provision of only the binary output. METHODS: Dermatologists attending an educational conference evaluated 50 pigmented lesions (25 melanomas and 25 benign lesions). Participants were asked if they would biopsy the lesion based on clinical images, and were asked this question again after being shown multi-spectral digital skin lesion analysis data that included the probability graphs and classifier score. RESULTS: Data were analyzed from a total of 160 United States board-certified dermatologists. Biopsy sensitivity for melanoma improved from 76 percent following clinical evaluation to 92 percent after quantitative multi-spectral digital skin lesion analysis information was provided (p<0.0001). Specificity improved from 52 percent to 79 percent (p<0.0001). The positive predictive value increased from 61 percent to 81 percent (p<0.01) when the quantitative data were provided. Negative predictive value also increased (68% vs. 91%, p<0.01), and overall biopsy accuracy was greater with multi-spectral digital skin lesion analysis (64% vs. 86%, p<0.001). Interrater reliability improved (intraclass correlation 0.466 before, 0.559 after). CONCLUSION: Incorporating the classifier score and probability data into physician evaluation of pigmented lesions led to both increased sensitivity and specificity, thereby resulting in more accurate biopsy decisions.
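
    The reported accuracy figures follow directly from the 2x2 table of biopsy decisions against true diagnoses; for reference, a small helper that computes them (the arrays at the end are toy data, not the study's):

```python
import numpy as np

def biopsy_metrics(decision, melanoma):
    """Sensitivity, specificity, PPV, NPV and accuracy from boolean arrays of
    biopsy decisions and true melanoma status."""
    decision, melanoma = np.asarray(decision, bool), np.asarray(melanoma, bool)
    tp = np.sum(decision & melanoma)
    tn = np.sum(~decision & ~melanoma)
    fp = np.sum(decision & ~melanoma)
    fn = np.sum(~decision & melanoma)
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn),
            "accuracy": (tp + tn) / len(decision)}

print(biopsy_metrics([1, 1, 0, 1, 0, 0, 1, 0], [1, 1, 1, 0, 0, 0, 1, 0]))
```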

  8. Baseline correction combined partial least squares algorithm and its application in on-line Fourier transform infrared quantitative analysis.

    PubMed

    Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping

    2011-04-01

    In order to eliminate the lower order polynomial interferences, a new quantitative calibration algorithm "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into PLS weights selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirement of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of the moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with a RMSECV of 2.04%, w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.
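
    A plain, uncoupled version of the idea, in which a low-order polynomial baseline is fitted and subtracted from each spectrum before a conventional PLS calibration, can be sketched as below; the published BCC-PLS method instead embeds the baseline constraint inside the PLS weight selection, which this sketch does not reproduce, and the spectra and reference values are simulated.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def remove_polynomial_baseline(spectra, wavenumbers, degree=2):
    """Fit and subtract a low-order polynomial baseline from each spectrum."""
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        coeffs = np.polyfit(wavenumbers, s, degree)
        corrected[i] = s - np.polyval(coeffs, wavenumbers)
    return corrected

rng = np.random.default_rng(0)
wn = np.linspace(800, 1800, 500)                       # wavenumber axis (cm^-1)
spectra = rng.normal(size=(60, 500)) + 0.001 * wn      # spectra with a sloping baseline
conc = rng.uniform(7, 19, 60)                          # e.g. moisture content (% w/w)

X = remove_polynomial_baseline(spectra, wn)
pls = PLSRegression(n_components=6).fit(X, conc)       # conventional PLS on corrected spectra
print(pls.predict(X[:3]).ravel())
```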

  9. [Quantitative surface analysis of Pt-Co, Cu-Au and Cu-Ag alloy films by XPS and AES].

    PubMed

    Li, Lian-Zhong; Zhuo, Shang-Jun; Shen, Ru-Xiang; Qian, Rong; Gao, Jie

    2013-11-01

    To improve the accuracy of quantitative AES analysis, we combined XPS with AES and studied how to reduce the error of AES quantification. Pt-Co, Cu-Au and Cu-Ag binary alloy thin films were selected as samples, and XPS was used to correct the AES results by adjusting the Auger sensitivity factors until the quantitative results of the two techniques agreed more closely. The accuracy of AES quantification with the revised sensitivity factors was then verified on other samples with different composition ratios, and the results showed that the corrected relative sensitivity factors reduced the error of quantitative AES analysis to less than 10%. Because peak definition is difficult in integral-mode AES spectra (choosing the starting and ending points for the characteristic Auger peak area involves considerable uncertainty), we also processed the data in differential mode, performed quantification on the basis of peak-to-peak height instead of peak area, corrected the relative sensitivity factors, and again verified the accuracy on samples with different composition ratios. In this case the analytical error of quantitative AES was reduced to less than 9%. These results show that the accuracy of AES quantification can be substantially improved by using XPS to correct the Auger sensitivity factors, since matrix effects are thereby taken into account. The good consistency obtained demonstrates the feasibility of this method.
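
    The quantification underlying both XPS and AES here is the standard relative-sensitivity-factor formula, C_i = (I_i/S_i) / sum_j (I_j/S_j); a small helper with illustrative intensities and sensitivity factors (not values from the study) is:

```python
def atomic_fractions(intensities, sensitivity_factors):
    """Relative-sensitivity-factor quantification: C_i = (I_i/S_i) / sum_j (I_j/S_j)."""
    weighted = [i / s for i, s in zip(intensities, sensitivity_factors)]
    total = sum(weighted)
    return [w / total for w in weighted]

# Illustrative two-component film (e.g. a Pt-Co alloy): peak intensities and the
# (possibly corrected) sensitivity factors for the chosen transitions.
print(atomic_fractions([1500.0, 900.0], [1.2, 0.8]))   # approx. [0.53, 0.47]
```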

  10. Predicting perturbation patterns from the topology of biological networks.

    PubMed

    Santolini, Marc; Barabási, Albert-László

    2018-06-20

    High-throughput technologies, offering an unprecedented wealth of quantitative data underlying the makeup of living systems, are changing biology. Notably, the systematic mapping of the relationships between biochemical entities has fueled the rapid development of network biology, offering a suitable framework to describe disease phenotypes and predict potential drug targets. However, our ability to develop accurate dynamical models remains limited, due in part to the limited knowledge of the kinetic parameters underlying these interactions. Here, we explore the degree to which we can make reasonably accurate predictions in the absence of the kinetic parameters. We find that simple dynamically agnostic models are sufficient to recover the strength and sign of the biochemical perturbation patterns observed in 87 biological models for which the underlying kinetics are known. Surprisingly, a simple distance-based model achieves 65% accuracy. We show that this predictive power is robust to topological and kinetic parameter perturbations, and we identify key network properties that can increase the recovery rate of the true perturbation patterns up to 80%. We validate our approach using experimental data on the chemotactic pathway in bacteria, finding that a network model of perturbation spreading predicts with ∼80% accuracy the directionality of gene expression and phenotype changes in knock-out and overproduction experiments. These findings show that the steady advances in mapping out the topology of biochemical interaction networks open avenues for accurate perturbation spread modeling, with direct implications for medicine and drug development.
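
    A minimal sketch of a distance-based perturbation model in this spirit (influence assumed to decay with shortest-path distance from the perturbed node) is given below using networkx; it illustrates the idea only and is not the authors' implementation.

    ```python
    # Sketch: rank relative perturbation impact from network distance alone.
    # Impact of perturbing `source` on node v is taken to decay as 1 / (1 + d(source, v)).
    import networkx as nx

    G = nx.erdos_renyi_graph(n=30, p=0.1, seed=1)   # toy interaction network
    source = 0                                      # hypothetical perturbed node

    dist = nx.single_source_shortest_path_length(G, source)
    impact = {v: 1.0 / (1.0 + d) for v, d in dist.items()}   # unreachable nodes get no entry
    print(sorted(impact.items(), key=lambda kv: -kv[1])[:5])  # most affected nodes
    ```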

  11. Biomechanical model for computing deformations for whole-body image registration: A meshless approach.

    PubMed

    Li, Mao; Miller, Karol; Joldes, Grand Roman; Kikinis, Ron; Wittek, Adam

    2016-12-01

    Patient-specific biomechanical models have been advocated as a tool for predicting deformations of soft body organs/tissue for medical image registration (aligning two sets of images) when differences between the images are large. However, complex and irregular geometry of the body organs makes generation of patient-specific biomechanical models very time-consuming. Meshless discretisation has been proposed to solve this challenge. However, applications so far have been limited to 2D models and computing single organ deformations. In this study, 3D comprehensive patient-specific nonlinear biomechanical models implemented using meshless Total Lagrangian explicit dynamics algorithms are applied to predict a 3D deformation field for whole-body image registration. Unlike a conventional approach that requires dividing (segmenting) the image into non-overlapping constituents representing different organs/tissues, the mechanical properties are assigned using the fuzzy c-means algorithm without the image segmentation. Verification indicates that the deformations predicted using the proposed meshless approach are for practical purposes the same as those obtained using the previously validated finite element models. To quantitatively evaluate the accuracy of the predicted deformations, we determined the spatial misalignment between the registered (i.e. source images warped using the predicted deformations) and target images by computing the edge-based Hausdorff distance. The Hausdorff distance-based evaluation determines that our meshless models led to successful registration of the vast majority of the image features. Copyright © 2016 John Wiley & Sons, Ltd.
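
    The edge-based Hausdorff distance used to score registration accuracy can be computed from the two edge point clouds with SciPy; the sketch below uses synthetic points as stand-ins for edges extracted from the registered and target images.

    ```python
    # Sketch: symmetric Hausdorff distance between two edge point sets.
    # The point clouds are synthetic stand-ins for edges extracted from the
    # registered (warped source) and target images.
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    rng = np.random.default_rng(0)
    edges_registered = rng.normal(size=(500, 3))                          # (x, y, z) edge points
    edges_target = edges_registered + rng.normal(scale=0.05, size=(500, 3))

    d_ab = directed_hausdorff(edges_registered, edges_target)[0]
    d_ba = directed_hausdorff(edges_target, edges_registered)[0]
    print("Hausdorff distance:", max(d_ab, d_ba))
    ```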

  12. Quantitative Lymphoscintigraphy to Predict the Possibility of Lymphedema Development After Breast Cancer Surgery: Retrospective Clinical Study.

    PubMed

    Kim, Paul; Lee, Ju Kang; Lim, Oh Kyung; Park, Heung Kyu; Park, Ki Deok

    2017-12-01

    We investigated whether quantitative lymphoscintigraphic assessment in the early postoperative stage can predict the probability of lymphedema development in breast cancer patients. This retrospective study included 201 patients without lymphedema after unilateral breast cancer surgery. Lymphoscintigraphy was performed between 4 and 8 weeks after surgery to evaluate the lymphatic system in the early postoperative stage. Quantitative lymphoscintigraphy was performed using four methods: ratio of radiopharmaceutical clearance rate of the affected to normal hand; ratio of radioactivity of the affected to normal hand; ratio of radiopharmaceutical uptake rate of the affected to normal axilla (RUA); and ratio of radioactivity of the affected to normal axilla (RRA). During a 1-year follow-up, patients with a circumferential interlimb difference of 2 cm at any measurement location and a 200-mL interlimb volume difference were diagnosed with lymphedema. We investigated the difference in quantitative lymphoscintigraphic assessment between the non-lymphedema and lymphedema groups. Quantitative lymphoscintigraphic assessment revealed that the RUA and RRA were significantly lower in the lymphedema group than in the non-lymphedema group. After adjusting the model for all significant variables (body mass index, N-stage, T-stage, type of surgery, and type of lymph node surgery), RRA was associated with lymphedema (odds ratio = 0.14; 95% confidence interval, 0.04-0.46; p = 0.001). In patients in the early postoperative stage after unilateral breast cancer surgery, quantitative lymphoscintigraphic assessment can be used to predict the probability of developing lymphedema.
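
    The adjusted odds ratio reported for RRA comes from a multivariable logistic regression. A minimal sketch of that kind of fit is shown below using statsmodels; the data are synthetic and the covariates are reduced to two for illustration.

    ```python
    # Sketch: adjusted odds ratio for a continuous predictor via logistic regression.
    # Synthetic data; variable names mirror the study (RRA plus a covariate) but
    # all values are made up for illustration.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "RRA": rng.uniform(0.2, 2.0, n),
        "BMI": rng.normal(25, 4, n),
    })
    logit_p = -1.0 - 1.5 * df["RRA"] + 0.05 * df["BMI"]
    df["lymphedema"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    X = sm.add_constant(df[["RRA", "BMI"]])
    res = sm.Logit(df["lymphedema"], X).fit(disp=False)
    print(np.exp(res.params))      # odds ratios
    print(np.exp(res.conf_int()))  # 95% confidence intervals
    ```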

  13. [The role of endotracheal aspirate culture in the diagnosis of ventilator-associated pneumonia: a meta analysis].

    PubMed

    Wang, Fei; He, Bei

    2013-01-01

    To investigate the role of endotracheal aspirate (EA) culture in the diagnosis and antibiotic management of ventilator-associated pneumonia (VAP). We searched the CNKI, Wanfang, PubMed and EMBASE databases for studies published from January 1990 to December 2011 to find relevant literature on VAP microbiological diagnostic techniques, including EA and bronchoalveolar lavage fluid (BALF) culture. The following key words were used: ventilator-associated pneumonia, diagnosis, and adult. Meta-analysis was performed and the sensitivity and specificity of EA for VAP diagnosis were calculated. Our literature search identified 1665 potential articles, 8 of which fulfilled our selection criteria, comprising 561 patients with paired cultures. Using BALF quantitative culture as the reference standard, the sensitivity and specificity of EA were 72% and 71%. When considering quantitative culture of EA only, the sensitivity and specificity improved to 90% and 65%, while the positive and negative predictive values were 68% and 89%, respectively. However, the sensitivity and specificity of semi-quantitative culture of EA were only 50% and 80%, with a positive predictive value of 77% and a negative predictive value of 58%, respectively. EA culture had relatively poor sensitivity and specificity, although quantitative culture of EA alone could improve the sensitivity. Initiating therapy on the basis of EA quantitative culture may still result in excessive antibiotic usage. Our data suggest that EA can provide some information for clinical decisions but cannot replace the role of BALF quantitative culture in VAP diagnosis.

  14. MilQuant: a free, generic software tool for isobaric tagging-based quantitation.

    PubMed

    Zou, Xiao; Zhao, Minzhi; Shen, Hongyan; Zhao, Xuyang; Tong, Yuanpeng; Wang, Qingsong; Wei, Shicheng; Ji, Jianguo

    2012-09-18

    Isobaric tagging techniques such as iTRAQ and TMT are widely used in quantitative proteomics and especially useful for samples that demand in vitro labeling. Due to diversity in choices of MS acquisition approaches, identification algorithms, and relative abundance deduction strategies, researchers are faced with a plethora of possibilities when it comes to data analysis. However, the lack of a generic and flexible software tool often makes it cumbersome for researchers to perform the analysis entirely as desired. In this paper, we present MilQuant, an mzXML-based isobaric labeling quantitator, a pipeline of freely available programs that supports native acquisition files produced by all mass spectrometer types and collection approaches currently used in isobaric tagging-based MS data collection. Moreover, aside from effective normalization and abundance ratio deduction algorithms, MilQuant exports various intermediate results along each step of the pipeline, making it easy for researchers to customize the analysis. The functionality of MilQuant was demonstrated by four distinct datasets from different laboratories. The compatibility and extendibility of MilQuant make it a generic and flexible tool that can serve as a full solution to data analysis of isobaric tagging-based quantitation. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Preoperative Cerebral Oxygen Extraction Fraction Imaging Generated from 7T MR Quantitative Susceptibility Mapping Predicts Development of Cerebral Hyperperfusion following Carotid Endarterectomy.

    PubMed

    Nomura, J-I; Uwano, I; Sasaki, M; Kudo, K; Yamashita, F; Ito, K; Fujiwara, S; Kobayashi, M; Ogasawara, K

    2017-12-01

    Preoperative hemodynamic impairment in the affected cerebral hemisphere is associated with the development of cerebral hyperperfusion following carotid endarterectomy. Cerebral oxygen extraction fraction images generated from 7T MR quantitative susceptibility mapping correlate with oxygen extraction fraction images on positron-emission tomography. The present study aimed to determine whether preoperative oxygen extraction fraction imaging generated from 7T MR quantitative susceptibility mapping could identify patients at risk for cerebral hyperperfusion following carotid endarterectomy. Seventy-seven patients with unilateral internal carotid artery stenosis (≥70%) underwent preoperative 3D T2*-weighted imaging using a multiple dipole-inversion algorithm with a 7T MR imager. Quantitative susceptibility mapping images were then obtained, and oxygen extraction fraction maps were generated. Quantitative brain perfusion single-photon emission CT was also performed before and immediately after carotid endarterectomy. ROIs were automatically placed in the bilateral middle cerebral artery territories in all images using a 3D stereotactic ROI template, and affected-to-contralateral ratios in the ROIs were calculated on quantitative susceptibility mapping-oxygen extraction fraction images. Ten patients (13%) showed post-carotid endarterectomy hyperperfusion (cerebral blood flow increases of ≥100% compared with preoperative values in the ROIs on brain perfusion SPECT). Multivariate analysis showed that a high quantitative susceptibility mapping-oxygen extraction fraction ratio was significantly associated with the development of post-carotid endarterectomy hyperperfusion (95% confidence interval, 33.5-249.7; P = .002). Sensitivity, specificity, and positive- and negative-predictive values of the quantitative susceptibility mapping-oxygen extraction fraction ratio for the prediction of the development of post-carotid endarterectomy hyperperfusion were 90%, 84%, 45%, and 98%, respectively. Preoperative oxygen extraction fraction imaging generated from 7T MR quantitative susceptibility mapping identifies patients at risk for cerebral hyperperfusion following carotid endarterectomy. © 2017 by American Journal of Neuroradiology.

  16. Quantitative Market Research Regarding Funding of District 8 Construction Projects

    DOT National Transportation Integrated Search

    1995-05-01

    The primary objective of this quantitative research is to provide information for more effective decision making regarding the level of investment in various transportation systems in District 8. This objective was accomplished by establishing ...

  17. Validation of model predictions of pore-scale fluid distributions during two-phase flow

    NASA Astrophysics Data System (ADS)

    Bultreys, Tom; Lin, Qingyang; Gao, Ying; Raeini, Ali Q.; AlRatrout, Ahmed; Bijeljic, Branko; Blunt, Martin J.

    2018-05-01

    Pore-scale two-phase flow modeling is an important technology to study a rock's relative permeability behavior. To investigate if these models are predictive, the calculated pore-scale fluid distributions which determine the relative permeability need to be validated. In this work, we introduce a methodology to quantitatively compare models to experimental fluid distributions in flow experiments visualized with microcomputed tomography. First, we analyzed five repeated drainage-imbibition experiments on a single sample. In these experiments, the exact fluid distributions were not fully repeatable on a pore-by-pore basis, while the global properties of the fluid distribution were. Then two fractional flow experiments were used to validate a quasistatic pore network model. The model correctly predicted the fluid present in more than 75% of pores and throats in drainage and imbibition. To quantify what this means for the relevant global properties of the fluid distribution, we compare the main flow paths and the connectivity across the different pore sizes in the modeled and experimental fluid distributions. These essential topology characteristics matched well for drainage simulations, but not for imbibition. This suggests that the pore-filling rules in the network model we used need to be improved to make reliable predictions of imbibition. The presented analysis illustrates the potential of our methodology to systematically and robustly test two-phase flow models to aid in model development and calibration.

  18. Prediction of Compressional, Shear, and Stoneley Wave Velocities from Conventional Well Log Data Using a Committee Machine with Intelligent Systems

    NASA Astrophysics Data System (ADS)

    Asoodeh, Mojtaba; Bagheripour, Parisa

    2012-01-01

    Measurement of compressional, shear, and Stoneley wave velocities, carried out by dipole sonic imager (DSI) logs, provides invaluable data in geophysical interpretation, geomechanical studies and hydrocarbon reservoir characterization. The presented study proposes an improved methodology for making a quantitative formulation between conventional well logs and sonic wave velocities. First, sonic wave velocities were predicted from conventional well logs using artificial neural network, fuzzy logic, and neuro-fuzzy algorithms. Subsequently, a committee machine with intelligent systems was constructed by virtue of a hybrid genetic algorithm-pattern search technique, while the outputs of the artificial neural network, fuzzy logic and neuro-fuzzy models were used as inputs of the committee machine. It is capable of improving the accuracy of the final prediction through integrating the outputs of the aforementioned intelligent systems. The hybrid genetic algorithm-pattern search tool, embodied in the structure of the committee machine, assigns a weight factor to each individual intelligent system, indicating its involvement in the overall prediction of DSI parameters. This methodology was implemented in the Asmari formation, which is the major carbonate reservoir rock of an Iranian oil field. A group of 1,640 data points was used to construct the intelligent model, and a group of 800 data points was employed to assess the reliability of the proposed model. The results showed that the committee machine with intelligent systems performed more effectively compared with individual intelligent systems performing alone.
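
    The final combination step of such a committee machine, choosing non-negative weights for the individual predictors that minimize error on calibration data, can be sketched as below; a generic constrained optimizer stands in for the paper's hybrid genetic algorithm-pattern search technique, and all data are synthetic.

    ```python
    # Sketch: combine the predictions of several models with weights chosen to
    # minimize mean squared error on calibration data (weights sum to 1).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    y_true = rng.normal(size=100)       # measured sonic velocities (synthetic)
    preds = np.column_stack([y_true + rng.normal(scale=s, size=100) for s in (0.3, 0.5, 0.8)])

    def mse(w):
        return np.mean((preds @ w - y_true) ** 2)

    n_models = preds.shape[1]
    res = minimize(mse, x0=np.full(n_models, 1.0 / n_models),
                   bounds=[(0, 1)] * n_models,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
    print("committee weights:", res.x)
    ```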

  19. Physics and chemistry-driven artificial neural network for predicting bioactivity of peptides and proteins and their design.

    PubMed

    Huang, Ri-Bo; Du, Qi-Shi; Wei, Yu-Tuo; Pang, Zong-Wen; Wei, Hang; Chou, Kuo-Chen

    2009-02-07

    Predicting the bioactivity of peptides and proteins is an important challenge in drug development and protein engineering. In this study we introduce a novel approach, the so-called "physics and chemistry-driven artificial neural network (Phys-Chem ANN)", to deal with such a problem. Unlike the existing ANN approaches, which were designed under the inspiration of the biological neural system, the Phys-Chem ANN approach is based on the physical and chemical principles, as well as the structural features of proteins. In the Phys-Chem ANN model the "hidden layers" are no longer virtual "neurons", but real structural units of proteins and peptides. It is a hybridization approach, which combines the linear free energy concept of quantitative structure-activity relationship (QSAR) with the advanced mathematical technique of ANN. The Phys-Chem ANN approach has adopted an iterative and feedback procedure, incorporating both machine-learning and artificial intelligence capabilities. In addition to making more accurate predictions for the bioactivities of proteins and peptides than is possible with the traditional QSAR approach, the Phys-Chem ANN approach can also provide more insights about the relationship between bioactivities and the structures involved than the ANN approach does. As an example of the application of the Phys-Chem ANN approach, a predictive model for the conformational stability of human lysozyme is presented.

  20. Multi-model assessment of the impact of soil moisture initialization on mid-latitude summer predictability

    NASA Astrophysics Data System (ADS)

    Ardilouze, Constantin; Batté, L.; Bunzel, F.; Decremer, D.; Déqué, M.; Doblas-Reyes, F. J.; Douville, H.; Fereday, D.; Guemas, V.; MacLachlan, C.; Müller, W.; Prodhomme, C.

    2017-12-01

    Land surface initial conditions have been recognized as a potential source of predictability in sub-seasonal to seasonal forecast systems, at least for near-surface air temperature prediction over the mid-latitude continents. Yet, few studies have systematically explored such an influence over a sufficient hindcast period and in a multi-model framework to produce a robust quantitative assessment. Here, a dedicated set of twin experiments has been carried out with boreal summer retrospective forecasts over the 1992-2010 period performed by five different global coupled ocean-atmosphere models. The impact of a realistic versus climatological soil moisture initialization is assessed in two regions with high potential previously identified as hotspots of land-atmosphere coupling, namely the North American Great Plains and South-Eastern Europe. Over the latter region, temperature predictions show a significant improvement, especially over the Balkans. Forecast systems better simulate the warmest summers if they follow pronounced dry initial anomalies. It is hypothesized that models manage to capture a positive feedback between high temperature and low soil moisture content prone to dominate over other processes during the warmest summers in this region. Over the Great Plains, however, improving the soil moisture initialization does not lead to any robust gain of forecast quality for near-surface temperature. It is suggested that model biases prevent the forecast systems from making the most of the improved initial conditions.

  1. Elevated carbon dioxide is predicted to promote coexistence among competing species in a trait-based model

    DOE PAGES

    Ali, Ashehad A.; Medlyn, Belinda E.; Aubier, Thomas G.; ...

    2015-10-06

    Differential species responses to atmospheric CO2 concentration (Ca) could lead to quantitative changes in competition among species and community composition, with flow-on effects for ecosystem function. However, there has been little theoretical analysis of how elevated Ca (eCa) will affect plant competition, or how composition of plant communities might change. Such theoretical analysis is needed for developing testable hypotheses to frame experimental research. Here, we investigated theoretically how plant competition might change under eCa by implementing two alternative competition theories, resource use theory and resource capture theory, in a plant carbon and nitrogen cycling model. The model makes several novel predictions for the impact of eCa on plant community composition. Using resource use theory, the model predicts that eCa is unlikely to change species dominance in competition, but is likely to increase coexistence among species. Using resource capture theory, the model predicts that eCa may increase community evenness. Collectively, both theories suggest that eCa will favor coexistence and hence that species diversity should increase with eCa. Our theoretical analysis leads to a novel hypothesis for the impact of eCa on plant community composition. In this study, the hypothesis has potential to help guide the design and interpretation of eCa experiments.

  2. The effects of stabilizing and directional selection on phenotypic and genotypic variation in a population of RNA enzymes.

    PubMed

    Hayden, Eric J; Bratulic, Sinisa; Koenig, Iwo; Ferrada, Evandro; Wagner, Andreas

    2014-02-01

    The distribution of variation in a quantitative trait and its underlying distribution of genotypic diversity can both be shaped by stabilizing and directional selection. Understanding either distribution is important, because it determines a population's response to natural selection. Unfortunately, existing theory makes conflicting predictions about how selection shapes these distributions, and very little pertinent experimental evidence exists. Here we study a simple genetic system, an evolving RNA enzyme (ribozyme) in which a combination of high throughput genotyping and measurement of a biochemical phenotype allow us to address this question. We show that directional selection, compared to stabilizing selection, increases the genotypic diversity of an evolving ribozyme population. In contrast, it leaves the variance in the phenotypic trait unchanged.

  3. PEITH(Θ): perfecting experiments with information theory in Python with GPU support.

    PubMed

    Dony, Leander; Mackerodt, Jonas; Ward, Scott; Filippi, Sarah; Stumpf, Michael P H; Liepe, Juliane

    2018-04-01

    Different experiments provide differing levels of information about a biological system. This makes it difficult, a priori, to select one of them beyond mere speculation and/or belief, especially when resources are limited. With the increasing diversity of experimental approaches and general advances in quantitative systems biology, methods that inform us about the information content that a given experiment carries about the question we want to answer become crucial. PEITH(Θ) is a general purpose, Python framework for experimental design in systems biology. PEITH(Θ) uses Bayesian inference and information theory to derive which experiments are most informative for estimating all model parameters and/or performing model predictions. PEITH(Θ) is available at https://github.com/MichaelPHStumpf/Peitho. Contact: m.stumpf@imperial.ac.uk or juliane.liepe@mpibpc.mpg.de.

  4. How do tympanic-membrane perforations affect human middle-ear sound transmission?

    PubMed

    Voss, S E; Rosowski, J J; Merchant, S N; Peake, W T

    2001-01-01

    Although tympanic-membrane (TM) perforations are common sequelae of middle-ear disease, the hearing losses they cause have not been accurately determined, largely because additional pathological conditions occur in these ears. Our measurements of acoustic transmission before and after making controlled perforations in cadaver ears show that perforations cause frequency-dependent loss that: (1) is largest at low frequencies; (2) increases as perforation size increases; and (3) does not depend on perforation location. The dominant loss mechanism is the reduction in sound-pressure difference across the TM. Measurements of middle-ear air-space sound pressures show that transmission via direct acoustic stimulation of the oval and round windows is generally negligible. A quantitative model predicts the influence of middle-ear air-space volume on loss; with larger volumes, loss is smaller.

  5. Cavallo's multiplier for in situ generation of high voltage

    NASA Astrophysics Data System (ADS)

    Clayton, S. M.; Ito, T. M.; Ramsey, J. C.; Wei, W.; Blatnik, M. A.; Filippone, B. W.; Seidel, G. M.

    2018-05-01

    A classic electrostatic induction machine, Cavallo's multiplier, is suggested for in situ production of very high voltage in cryogenic environments. The device is suitable for generating a large electrostatic field under conditions of very small load current. Operation of the Cavallo multiplier is analyzed, with quantitative description in terms of mutual capacitances between electrodes in the system. A demonstration apparatus was constructed, and measured voltages are compared to predictions based on measured capacitances in the system. The simplicity of the Cavallo multiplier makes it amenable to electrostatic analysis using finite element software, and electrode shapes can be optimized to take advantage of a high dielectric strength medium such as liquid helium. A design study is presented for a Cavallo multiplier in a large-scale, cryogenic experiment to measure the neutron electric dipole moment.

  6. Effects of wind waves on horizontal array performance in shallow-water conditions

    NASA Astrophysics Data System (ADS)

    Zavol'skii, N. A.; Malekhanov, A. I.; Raevskii, M. A.; Smirnov, A. V.

    2017-09-01

    We analyze how statistical effects in the propagation of an acoustic signal, excited by a tone source in a shallow-water channel with a rough sea surface, influence the efficiency of a horizontal phased array. As the array characteristics, we consider the angular function of the array response for a given direction to the source and the coefficient of amplification of the signal-to-noise ratio (array gain). Numerical simulation was conducted for the winter hydrological conditions of the Barents Sea over a wide range of parameters determining the spatial signal coherence. The results show the main physical effects of the influence of wind waves on the array characteristics and make it possible to quantitatively predict the efficiency of a large horizontal array in realistic shallow-water channels.

  7. Biodiversity elements vulnerable to climate change in the Catskill High Peaks subecoregion (Ulster, Delaware, Sullivan, and Greene Counties, New York State).

    PubMed

    Adams, Morton S; Parisio, Steven J

    2013-09-01

    Climate change is expected to affect biodiversity elements in the Catskill High Peaks subecoregion of New York State with effects that are difficult to predict. The present communication details the species and communities of greatest conservation concern in this portion of the state and makes recommendations for monitoring the most pressing climate change-biodiversity vulnerabilities. Specifically, we present sites for monitoring representative old-growth and successional stands of red spruce/balsam fir and northern hardwood matrix forests, cliff communities, ice cave talus communities, and both minerotrophic inland poor fen and ombrotrophic perched peatlands. The proposed monitoring protocols vary among the various sites, but all are quantitative and are designed to document patterns of change. © 2013 New York Academy of Sciences.

  8. Modelling PK/QT relationships from Phase I dose-escalation trials for drug combinations and developing quantitative risk assessments of clinically relevant QT prolongations.

    PubMed

    Sinclair, Karen; Kinable, Els; Grosch, Kai; Wang, Jixian

    2016-05-01

    In current industry practice, it is difficult to assess QT effects at potential therapeutic doses based on Phase I dose-escalation trials in oncology due to data scarcity, particularly in combination trials. In this paper, we propose to use dose-concentration and concentration-QT models jointly to model the exposures and effects of multiple drugs in combination. The fitted models then can be used to make early predictions for QT prolongation to aid choosing recommended dose combinations for further investigation. The models consider potential correlation between concentrations of test drugs and potential drug-drug interactions at PK and QT levels. In addition, this approach allows for the assessment of the probability of QT prolongation exceeding given thresholds of clinical significance. The performance of this approach was examined via simulation under practical scenarios for dose-escalation trials for a combination of two drugs. The simulation results show that invaluable information of QT effects at therapeutic dose combinations can be gained by the proposed approaches. Early detection of dose combinations with substantial QT prolongation is evaluated effectively through the CIs of the predicted peak QT prolongation at each dose combination. Furthermore, the probability of QT prolongation exceeding a certain threshold is also computed to support early detection of safety signals while accounting for uncertainty associated with data from Phase I studies. While the prediction of QT effects is sensitive to the dose escalation process, the sensitivity and limited sample size should be considered when providing support to the decision-making process for further developing certain dose combinations. Copyright © 2016 John Wiley & Sons, Ltd.
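
    The exceedance probabilities described here can be illustrated with a simple Monte Carlo sketch: draw the concentration-QT slope and exposure from assumed distributions and count how often the predicted QT prolongation exceeds a clinical threshold. All parameter values below are hypothetical and the model is far simpler than the joint PK/QT models of the paper.

    ```python
    # Sketch: probability that predicted QT prolongation exceeds a clinical
    # threshold, propagating parameter uncertainty by Monte Carlo.
    # All parameter values (slope, Cmax, threshold) are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    n_draws = 10_000

    slope = rng.normal(0.01, 0.003, n_draws)            # ms per ng/mL, concentration-QT slope
    cmax = rng.lognormal(np.log(800), 0.25, n_draws)    # ng/mL at the dose combination
    delta_qt = slope * cmax                             # predicted QT prolongation (ms)

    threshold = 10.0                                    # ms, example threshold of concern
    print("P(dQT > 10 ms) =", np.mean(delta_qt > threshold))
    ```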

  9. Quantitative computed tomography versus spirometry in predicting air leak duration after major lung resection for cancer.

    PubMed

    Ueda, Kazuhiro; Kaneda, Yoshikazu; Sudo, Manabu; Mitsutaka, Jinbo; Li, Tao-Sheng; Suga, Kazuyoshi; Tanaka, Nobuyuki; Hamano, Kimikazu

    2005-11-01

    Emphysema is a well-known risk factor for developing air leak or persistent air leak after pulmonary resection. Although quantitative computed tomography (CT) and spirometry are used to diagnose emphysema, it remains controversial whether these tests are predictive of the duration of postoperative air leak. Sixty-two consecutive patients who were scheduled to undergo major lung resection for cancer were enrolled in this prospective study to define the best predictor of postoperative air leak duration. Preoperative factors analyzed included spirometric variables and area of emphysema (proportion of the low-attenuation area) that was quantified in a three-dimensional CT lung model. Chest tubes were removed the day after disappearance of the air leak, regardless of pleural drainage. Univariate and multivariate proportional hazards analyses were used to determine the influence of preoperative factors on chest tube time (air leak duration). By univariate analysis, site of resection (upper, lower), forced expiratory volume in 1 second, predicted postoperative forced expiratory volume in 1 second, and area of emphysema (< 1%, 1% to 10%, > 10%) were significant predictors of air leak duration. By multivariate analysis, site of resection and area of emphysema were the best independent determinants of air leak duration. The results were similar for patients with a smoking history (n = 40), but neither forced expiratory volume in 1 second nor predicted postoperative forced expiratory volume in 1 second were predictive of air leak duration. Quantitative CT is superior to spirometry in predicting air leak duration after major lung resection for cancer. Quantitative CT may aid in the identification of patients, particularly among those with a smoking history, requiring additional preventive procedures against air leak.
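
    The proportional hazards analysis of chest tube time can be sketched with the lifelines package as below; the data frame is synthetic, with covariate names chosen to mirror the study's predictors.

    ```python
    # Sketch: proportional hazards model of chest tube time (air leak duration).
    # Synthetic data; covariate names mirror the study but all values are made up.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 62
    df = pd.DataFrame({
        "days_to_removal": rng.exponential(4, n).round() + 1,
        "removed": 1,                                 # 1 = air leak resolved (event observed)
        "upper_lobe": rng.integers(0, 2, n),          # site of resection
        "emphysema_area_pct": rng.uniform(0, 20, n),  # low-attenuation area on CT (%)
        "fev1_pct_pred": rng.normal(85, 15, n),
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="days_to_removal", event_col="removed")
    cph.print_summary()
    ```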

  10. Extending Theory-Based Quantitative Predictions to New Health Behaviors.

    PubMed

    Brick, Leslie Ann D; Velicer, Wayne F; Redding, Colleen A; Rossi, Joseph S; Prochaska, James O

    2016-04-01

    Traditional null hypothesis significance testing suffers many limitations and is poorly adapted to theory testing. A proposed alternative approach, called Testing Theory-based Quantitative Predictions, uses effect size estimates and confidence intervals to directly test predictions based on theory. This paper replicates findings from previous smoking studies and extends the approach to diet and sun protection behaviors using baseline data from a Transtheoretical Model behavioral intervention (N = 5407). Effect size predictions were developed using two methods: (1) applying refined effect size estimates from previous smoking research or (2) using predictions developed by an expert panel. Thirteen of 15 predictions were confirmed for smoking. For diet, 7 of 14 predictions were confirmed using smoking predictions and 6 of 16 using expert panel predictions. For sun protection, 3 of 11 predictions were confirmed using smoking predictions and 5 of 19 using expert panel predictions. Expert panel predictions and smoking-based predictions poorly predicted effect sizes for diet and sun protection constructs. Future studies should aim to use previous empirical data to generate predictions whenever possible. The best results occur when there have been several iterations of predictions for a behavior, such as with smoking, demonstrating that expected values begin to converge on the population effect size. Overall, the study supports necessity in strengthening and revising theory with empirical data.
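
    The core of the approach, comparing a theory-based effect size prediction against the confidence interval of the observed effect size, can be sketched as follows; the data, the predicted value, and the use of Cohen's d (rather than the paper's specific effect size measures) are all illustrative assumptions.

    ```python
    # Sketch: test a theory-based effect size prediction by checking whether the
    # predicted value lies inside the confidence interval of the observed effect.
    # Numbers are hypothetical; the paper's effect size statistics differ in detail.
    import numpy as np

    rng = np.random.default_rng(0)
    group_a = rng.normal(0.0, 1.0, 200)     # e.g., one stage-of-change group
    group_b = rng.normal(0.5, 1.0, 200)     # e.g., another stage-of-change group

    # Observed standardized mean difference (Cohen's d) and an approximate 95% CI.
    d = (group_b.mean() - group_a.mean()) / np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
    se_d = np.sqrt(1 / 200 + 1 / 200 + d**2 / (2 * 400))
    ci = (d - 1.96 * se_d, d + 1.96 * se_d)

    predicted_d = 0.45                      # hypothetical theory-based prediction
    print(f"observed d = {d:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
    print("prediction confirmed:", ci[0] <= predicted_d <= ci[1])
    ```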

  11. Predictability and Robustness in the Manipulation of Dynamically Complex Objects

    PubMed Central

    Hasson, Christopher J.

    2017-01-01

    Manipulation of complex objects and tools is a hallmark of many activities of daily living, but how the human neuromotor control system interacts with such objects is not well understood. Even the seemingly simple task of transporting a cup of coffee without spilling creates complex interaction forces that humans need to compensate for. Predicting the behavior of an underactuated object with nonlinear fluid dynamics based on an internal model appears daunting. Hence, this research tests the hypothesis that humans learn strategies that make interactions predictable and robust to inaccuracies in neural representations of object dynamics. The task of moving a cup of coffee is modeled with a cart-and-pendulum system that is rendered in a virtual environment, where subjects interact with a virtual cup with a rolling ball inside using a robotic manipulandum. To gain insight into human control strategies, we operationalize predictability and robustness to permit quantitative theory-based assessment. Predictability is quantified by the mutual information between the applied force and the object dynamics; robustness is quantified by the energy margin away from failure. Three studies are reviewed that show how with practice subjects develop movement strategies that are predictable and robust. Alternative criteria, common for free movement, such as maximization of smoothness and minimization of force, do not account for the observed data. As manual dexterity is compromised in many individuals with neurological disorders, the experimental paradigm and its analyses are a promising platform to gain insights into neurological diseases, such as dystonia and multiple sclerosis, as well as healthy aging. PMID:28035560
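
    Predictability operationalized as mutual information between the applied force and the object's response can be estimated as in the sketch below, which uses scikit-learn's nearest-neighbor estimator on synthetic trajectories rather than the study's recorded data.

    ```python
    # Sketch: quantify predictability as mutual information between applied force
    # and the object's subsequent state, on synthetic trajectories.
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(0)
    force = rng.normal(size=2000)                                 # hand force applied to the cup
    ball_angle = 0.8 * force + rng.normal(scale=0.3, size=2000)   # object (ball) response

    mi = mutual_info_regression(force.reshape(-1, 1), ball_angle, random_state=0)
    print("estimated mutual information (nats):", mi[0])
    ```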

  12. In silico modeling to predict drug-induced phospholipidosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Sydney S.; Kim, Jae S.; Valerio, Luis G., E-mail: luis.valerio@fda.hhs.gov

    2013-06-01

    Drug-induced phospholipidosis (DIPL) is a preclinical finding during pharmaceutical drug development that has implications on the course of drug development and regulatory safety review. A principal characteristic of drugs inducing DIPL is known to be a cationic amphiphilic structure. This provides evidence for a structure-based explanation and opportunity to analyze properties and structures of drugs with the histopathologic findings for DIPL. In previous work from the FDA, in silico quantitative structure–activity relationship (QSAR) modeling using machine learning approaches has shown promise with a large dataset of drugs but included unconfirmed data as well. In this study, we report the construction and validation of a battery of complementary in silico QSAR models using the FDA's updated database on phospholipidosis, new algorithms and predictive technologies, and in particular, we address high performance with a high-confidence dataset. The results of our modeling for DIPL include rigorous external validation tests showing 80–81% concordance. Furthermore, the predictive performance characteristics include models with high sensitivity and specificity, in most cases above 80%, leading to the desired high negative and positive predictivity. These models are intended to be utilized for regulatory toxicology applied science needs in screening new drugs for DIPL. - Highlights: • New in silico models for predicting drug-induced phospholipidosis (DIPL) are described. • The training set data in the models is derived from the FDA's phospholipidosis database. • We find excellent predictivity values of the models based on external validation. • The models can support drug screening and regulatory decision-making on DIPL.

  13. Hydrological-niche models predict water plant functional group distributions in diverse wetland types.

    PubMed

    Deane, David C; Nicol, Jason M; Gehrig, Susan L; Harding, Claire; Aldridge, Kane T; Goodman, Abigail M; Brookes, Justin D

    2017-06-01

    Human use of water resources threatens environmental water supplies. If resource managers are to develop policies that avoid unacceptable ecological impacts, some means to predict ecosystem response to changes in water availability is necessary. This is difficult to achieve at spatial scales relevant for water resource management because of the high natural variability in ecosystem hydrology and ecology. Water plant functional groups classify species with similar hydrological niche preferences together, allowing a qualitative means to generalize community responses to changes in hydrology. We tested the potential of functional groups for making quantitative predictions of water plant functional group distributions across diverse wetland types over a large geographical extent. We sampled wetlands covering a broad range of hydrogeomorphic and salinity conditions in South Australia, collecting both hydrological and floristic data from 687 quadrats across 28 wetland hydrological gradients. We built hydrological-niche models for eight water plant functional groups using a range of candidate models combining different surface inundation metrics. We then tested the predictive performance of top-ranked individual and averaged models for each functional group. Cross validation showed that models achieved acceptable predictive performance, with correct classification rates in the range 0.68-0.95. Model predictions can be made at any spatial scale for which hydrological data are available and could be implemented in a geographical information system. We show the response of water plant functional groups to inundation is consistent enough across diverse wetland types to quantify the probability of hydrological impacts over regional spatial scales. © 2017 by the Ecological Society of America.

  14. Quantitative Comparison of Photothermal Heat Generation between Gold Nanospheres and Nanorods.

    PubMed

    Qin, Zhenpeng; Wang, Yiru; Randrianalisoa, Jaona; Raeesi, Vahid; Chan, Warren C W; Lipiński, Wojciech; Bischof, John C

    2016-07-21

    Gold nanoparticles (GNPs) are widely used for biomedical applications due to unique optical properties, established synthesis methods, and biological compatibility. Despite important applications of plasmonic heating in thermal therapy, imaging, and diagnostics, the lack of quantification in heat generation leads to difficulties in comparing the heating capability for new plasmonic nanostructures and predicting the therapeutic and diagnostic outcome. This study quantifies GNP heat generation by experimental measurements and theoretical predictions for gold nanospheres (GNS) and nanorods (GNR). Interestingly, the results show a GNP-type dependent agreement between experiment and theory. The measured heat generation of GNS matches well with theory, while the measured heat generation of GNR is only 30% of that predicted theoretically at peak absorption. This then leads to a surprising finding that the polydispersity, the deviation of nanoparticle size and shape from nominal value, significantly influences GNR heat generation (>70% reduction), while having a limited effect for GNS (<10% change). This work demonstrates that polydispersity is an important metric in quantitatively predicting plasmonic heat generation and provides a validated framework to quantitatively compare the heating capabilities between gold and other plasmonic nanostructures.
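
    A common first-order estimate of per-particle plasmonic heat generation is q = C_abs * I, the absorption cross-section times the incident irradiance; the sketch below applies it with hypothetical values, and the polydispersity effects discussed in the paper are not modeled.

    ```python
    # Sketch: per-particle photothermal heat generation estimated as
    # q = C_abs * I (absorption cross-section times laser irradiance).
    # The cross-section, irradiance, and particle density below are hypothetical.
    C_abs = 9.0e-15          # m^2, absorption cross-section of a gold nanoparticle
    irradiance = 1.0e4       # W/m^2, incident laser irradiance (1 W/cm^2)

    q_per_particle = C_abs * irradiance                  # W per particle
    n_particles = 1.0e15                                 # particles per m^3 of suspension
    volumetric_heating = q_per_particle * n_particles    # W/m^3
    print(q_per_particle, volumetric_heating)
    ```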

  15. Quantitative Comparison of Photothermal Heat Generation between Gold Nanospheres and Nanorods

    NASA Astrophysics Data System (ADS)

    Qin, Zhenpeng; Wang, Yiru; Randrianalisoa, Jaona; Raeesi, Vahid; Chan, Warren C. W.; Lipiński, Wojciech; Bischof, John C.

    2016-07-01

    Gold nanoparticles (GNPs) are widely used for biomedical applications due to unique optical properties, established synthesis methods, and biological compatibility. Despite important applications of plasmonic heating in thermal therapy, imaging, and diagnostics, the lack of quantification in heat generation leads to difficulties in comparing the heating capability for new plasmonic nanostructures and predicting the therapeutic and diagnostic outcome. This study quantifies GNP heat generation by experimental measurements and theoretical predictions for gold nanospheres (GNS) and nanorods (GNR). Interestingly, the results show a GNP-type dependent agreement between experiment and theory. The measured heat generation of GNS matches well with theory, while the measured heat generation of GNR is only 30% of that predicted theoretically at peak absorption. This then leads to a surprising finding that the polydispersity, the deviation of nanoparticle size and shape from nominal value, significantly influences GNR heat generation (>70% reduction), while having a limited effect for GNS (<10% change). This work demonstrates that polydispersity is an important metric in quantitatively predicting plasmonic heat generation and provides a validated framework to quantitatively compare the heating capabilities between gold and other plasmonic nanostructures.

  16. Application of pedagogy reflective in statistical methods course and practicum statistical methods

    NASA Astrophysics Data System (ADS)

    Julie, Hongki

    2017-08-01

    The courses Elementary Statistics, Statistical Methods, and Statistical Methods Practicum aim to equip Mathematics Education students with descriptive and inferential statistics. Students' understanding of descriptive and inferential statistics is important in the Mathematics Education Department, especially for those whose final projects involve quantitative research. In quantitative research, students are required to present and describe quantitative data in an appropriate manner, to draw conclusions from their quantitative data, and to relate the independent and dependent variables defined in their research. In practice, when students carried out final projects involving quantitative research, it was not rare to find mistakes in drawing conclusions and in choosing the hypothesis-testing procedure, so that they obtained incorrect conclusions; this is a serious error for anyone doing quantitative research. Several outcomes were gained from the implementation of reflective pedagogy in the teaching and learning process of the Statistical Methods and Statistical Methods Practicum courses, namely: 1. Twenty-two students passed the course and one student did not. 2. The highest grade achieved was A, obtained by 18 students. 3. According to all students, the course developed their critical stance and built a sense of caring for each other through the learning process. 4. All students agreed that, through the learning process they underwent in the course, they could build care for one another.

  17. An exploratory study of relational, persuasive, and nonverbal communication in requests for tissue donation.

    PubMed

    Siminoff, Laura A; Traino, Heather M; Gordon, Nahida H

    2011-10-01

    This study explores the effects of tissue requesters' relational, persuasive, and nonverbal communication on families' final donation decisions. One thousand sixteen (N = 1,016) requests for tissue donation were audiotaped and analyzed using the Siminoff Communication Content and Affect Program, a computer application specifically designed to code and assist with the quantitative analysis of communication data. This study supports the important role of communication strategies in health-related decision making. Families were more likely to consent to tissue donation when confirmational messages (e.g., messages that expressed validation or acceptance) or persuasive tactics such as credibility, altruism, or esteem were used during donation discussions. Consent was also more likely when family members exhibited nonverbal immediacy or disclosed private information about themselves or the patient. The results of a hierarchical log-linear regression revealed that the use of relational communication during requests directly predicted family consent. The results provide information about surrogate decision making in end-of-life situations and may be used to guide future practice in obtaining family consent to tissue donation.

  18. Microfluidic analysis of oocyte and embryo biomechanical properties to improve outcomes in assisted reproductive technologies.

    PubMed

    Yanez, Livia Z; Camarillo, David B

    2017-04-01

    Measurement of oocyte and embryo biomechanical properties has recently emerged as an exciting new approach to obtain a quantitative, objective estimate of developmental potential. However, many traditional methods for probing cell mechanical properties are time consuming, labor intensive and require expensive equipment. Microfluidic technology is currently making its way into many aspects of assisted reproductive technologies (ART), and is particularly well suited to measure embryo biomechanics due to the potential for robust, automated single-cell analysis at a low cost. This review will highlight microfluidic approaches to measure oocyte and embryo mechanics along with their ability to predict developmental potential and find practical application in the clinic. Although these new devices must be extensively validated before they can be integrated into the existing clinical workflow, they could eventually be used to constantly monitor oocyte and embryo developmental progress and enable more optimal decision making in ART. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. Semiclassical Wigner theory of photodissociation in three dimensions: Shedding light on its basis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arbelo-González, W.; CNRS, Institut des Sciences Moléculaires, UMR 5255, 33405 Talence; Université Bordeaux, Institut des Sciences Moléculaires, UMR 5255, 33405 Talence

    2015-04-07

    The semiclassical Wigner theory (SCWT) of photodissociation dynamics, initially proposed by Brown and Heller [J. Chem. Phys. 75, 186 (1981)] in order to describe state distributions in the products of direct collinear photodissociations, was recently extended to realistic three-dimensional triatomic processes of the same type [Arbelo-González et al., Phys. Chem. Chem. Phys. 15, 9994 (2013)]. The resulting approach, which takes into account rotational motions in addition to vibrational and translational ones, was applied to a triatomic-like model of methyl iodide photodissociation and its predictions were found to be in nearly quantitative agreement with rigorous quantum results, but at a much lower computational cost, thereby making SCWT a potential tool for the study of polyatomic reaction dynamics. Here, we analyse the main reasons for this agreement by means of an elementary model of fragmentation explicitly dealing with the rotational motion only. We show that our formulation of SCWT makes it a semiclassical approximation to an approximate planar quantum treatment of the dynamics, both of sufficient quality for the whole treatment to be satisfying.

  20. An Exploratory Study of Relational, Persuasive, and Nonverbal Communication in Requests for Tissue Donation

    PubMed Central

    SIMINOFF, LAURA A.; TRAINO, HEATHER M.; GORDON, NAHIDA H.

    2011-01-01

    This study explores the effects of tissue requesters’ relational, persuasive, and nonverbal communication on families’ final donation decisions. One thousand sixteen (N=1,016) requests for tissue donation were audiotaped and analyzed using the Siminoff Communication Content and Affect Program, a computer application specifically designed to code and assist with the quantitative analysis of communication data. This study supports the important role of communication strategies in health-related decision making. Families were more likely to consent to tissue donation when confirmational messages (e.g., messages that expressed validation or acceptance) or persuasive tactics such as credibility, altruism, or esteem were used during donation discussions. Consent was also more likely when family members exhibited nonverbal immediacy or disclosed private information about themselves or the patient. The results of a hierarchical log-linear regression revealed that the use of relational communication during requests directly predicted family consent. The results provide information about surrogate decision making in end-of-life situations and may be used to guide future practice in obtaining family consent to tissue donation. PMID:21512935

  1. Visualizing Uncertainty for Probabilistic Weather Forecasting based on Reforecast Analogs

    NASA Astrophysics Data System (ADS)

    Pelorosso, Leandro; Diehl, Alexandra; Matković, Krešimir; Delrieux, Claudio; Ruiz, Juan; Gröeller, M. Eduard; Bruckner, Stefan

    2016-04-01

    Numerical weather forecasts are prone to uncertainty coming from inaccuracies in the initial and boundary conditions and lack of precision in numerical models. Ensembles of forecasts partially address these problems by considering several runs of the numerical model. Each forecast is generated with different initial and boundary conditions and different model configurations [GR05]. The ensembles can be expressed as probabilistic forecasts, which have proven to be very effective in the decision-making processes [DE06]. The ensemble of forecasts represents only some of the possible future atmospheric states, usually underestimating the degree of uncertainty in the predictions [KAL03, PH06]. Hamill and Whitaker [HW06] introduced the "Reforecast Analog Regression" (RAR) technique to overcome the limitations of ensemble forecasting. This technique produces probabilistic predictions based on the analysis of historical forecasts and observations. Visual analytics provides tools for processing, visualizing, and exploring data to get new insights and discover hidden information patterns in an interactive exchange between the user and the application [KMS08]. In this work, we introduce Albero, a visual analytics solution for probabilistic weather forecasting based on the RAR technique. Albero targets at least two different types of users: "forecasters", who are meteorologists working in operational weather forecasting, and "researchers", who work in the construction of numerical prediction models. Albero is an efficient tool for analyzing precipitation forecasts, allowing forecasters to make and communicate quick decisions. Our solution facilitates the analysis of a set of probabilistic forecasts, associated statistical data, observations and uncertainty. A dashboard with small-multiples of probabilistic forecasts allows the forecasters to analyze at a glance the distribution of probabilities as a function of time, space, and magnitude. It provides the user with a more accurate measure of forecast uncertainty that could result in better decision-making. It offers different levels of abstraction to help with the recalibration of the RAR method. It also has an inspection tool that displays the selected analogs, their observations and statistical data. It gives the users access to inner parts of the method, unveiling hidden information. References [GR05] GNEITING T., RAFTERY A. E.: Weather forecasting with ensemble methods. Science 310, 5746, 248-249, 2005. [KAL03] KALNAY E.: Atmospheric modeling, data assimilation and predictability. Cambridge University Press, 2003. [PH06] PALMER T., HAGEDORN R.: Predictability of weather and climate. Cambridge University Press, 2006. [HW06] HAMILL T. M., WHITAKER J. S.: Probabilistic quantitative precipitation forecasts based on reforecast analogs: Theory and application. Monthly Weather Review 134, 11, 3209-3229, 2006. [DE06] DEITRICK S., EDSALL R.: The influence of uncertainty visualization on decision making: An empirical evaluation. Springer, 2006. [KMS08] KEIM D. A., MANSMANN F., SCHNEIDEWIND J., THOMAS J., ZIEGLER H.: Visual analytics: Scope and challenges. Springer, 2008.

  2. Know the risk, take the win: how executive functions and probability processing influence advantageous decision making under risk conditions.

    PubMed

    Brand, Matthias; Schiebener, Johannes; Pertl, Marie-Theres; Delazer, Margarete

    2014-01-01

    Recent models on decision making under risk conditions have suggested that numerical abilities are important ingredients of advantageous decision-making performance, but empirical evidence is still limited. The results of our first study show that logical reasoning and basic mental calculation capacities predict ratio processing and that ratio processing predicts decision making under risk. In the second study, logical reasoning together with executive functions predicted probability processing (numeracy and probability knowledge), and probability processing predicted decision making under risk. These findings suggest that increasing an individual's understanding of ratios and probabilities should lead to more advantageous decisions under risk conditions.

  3. Testing 40 Predictions from the Transtheoretical Model Again, with Confidence

    ERIC Educational Resources Information Center

    Velicer, Wayne F.; Brick, Leslie Ann D.; Fava, Joseph L.; Prochaska, James O.

    2013-01-01

    Testing Theory-based Quantitative Predictions (TTQP) represents an alternative to traditional Null Hypothesis Significance Testing (NHST) procedures and is more appropriate for theory testing. The theory generates explicit effect size predictions and these effect size estimates, with related confidence intervals, are used to test the predictions.…

  4. Quantitative Correlation of in Vivo Properties with in Vitro Assay Results: The in Vitro Binding of a Biotin–DNA Analogue Modifier with Streptavidin Predicts the in Vivo Avidin-Induced Clearability of the Analogue-Modified Antibody

    PubMed Central

    Dou, Shuping; Virostko, John; Greiner, Dale L.; Powers, Alvin C.; Liu, Guozheng

    2016-01-01

    Quantitative prediction of in vivo behavior using an in vitro assay would dramatically accelerate pharmaceutical development. However, studies quantitatively correlating in vivo properties with in vitro assay results are rare because of the difficulty in quantitatively understanding the in vivo behavior of an agent. We now demonstrate such a correlation as a case study based on our quantitative understanding of the in vivo chemistry. In an ongoing pretargeting project, we designed a trifunctional antibody (Ab) that concomitantly carried a biotin and a DNA analogue (hereafter termed MORF). The biotin and the MORF were fused into one structure prior to conjugation to the Ab for the concomitant attachment. Because it was known that avidin-bound Ab molecules leave the circulation rapidly, this design would theoretically allow complete clearance by avidin. The clearability of the trifunctional Ab was determined by calculating the blood MORF concentration ratio of avidin-treated Ab to non-avidin-treated Ab using mice injected with these compounds. In theory, any compromised clearability should be due to the presence of impurities. In vitro, we measured the biotinylated percentage of the Ab-reacting (MORF-biotin)⊃-NH2 modifier, by addition of streptavidin to the radiolabeled (MORF-biotin)⊃-NH2 samples and subsequent high-performance liquid chromatography (HPLC) analysis. On the basis of our previous quantitative understanding, we predicted that the clearability of the Ab would be equal to the biotinylation percentage measured via HPLC. We validated this prediction within a 3% difference. In addition to the high avidin-induced clearability of the trifunctional Ab (up to ~95%) achieved by the design, we were able to predict the required quality of the (MORF-biotin)⊃-NH2 modifier for any given in vivo clearability. This approach may greatly reduce the steps and time currently required in pharmaceutical development in the process of synthesis, chemical analysis, in vitro cell study, and in vivo validation. PMID:26103429

  5. The role of quantitative estrogen receptor status in predicting tumor response at surgery in breast cancer patients treated with neoadjuvant chemotherapy.

    PubMed

    Raphael, Jacques; Gandhi, Sonal; Li, Nim; Lu, Fang-I; Trudeau, Maureen

    2017-07-01

    Estrogen receptor (ER) negative (-) breast cancer (BC) patients have better tumor response rates than ER-positive (+) patients after neoadjuvant chemotherapy (NCT). We conducted a retrospective review using the institutional database "Biomatrix" to assess the value of quantitative ER status in predicting tumor response at surgery and to identify potential predictors of survival outcomes. Univariate followed by multivariable regression analyses were conducted to assess the association between quantitative ER and tumor response, assessed as tumor size reduction and pathologic complete response (pCR). Predictors of recurrence-free survival (RFS) were identified using a Cox proportional hazards (CPH) model. A log-rank test was used to compare RFS between groups if a significant predictor was identified. A total of 304 patients were included, with a median follow-up of 43.3 months (Q1-Q3 28.7-61.1) and a mean age of 49.7 years (SD 10.9). Quantitative ER was inversely associated with tumor size reduction and pCR (OR 0.99, 95% CI 0.99-1.00, p = 0.027 and OR 0.98, 95% CI 0.97-0.99, p < 0.0001, respectively). Cut-offs of 60% and 80% best predicted the association with tumor size reduction and pCR, respectively. pCR was an independent predictor of RFS (HR 0.17, 95% CI 0.07-0.43, p = 0.0002) in all patients. At 5 years, 93% of patients with pCR and 72% of patients with residual tumor were recurrence-free (p = 0.0012). Quantitative ER status is inversely associated with tumor response in BC patients treated with NCT, with cut-offs of 60% and 80% best predicting the association with tumor size reduction and pCR, respectively. Patients with an ER status higher than the cut-off might therefore benefit from a neoadjuvant endocrine therapy approach. Patients with pCR had better survival outcomes independently of their tumor phenotype. Further prospective studies are needed to validate the clinical utility of quantitative ER as a predictive marker of tumor response.

  6. A Quantitative Methodology to Examine the Development of Moral Judgment

    ERIC Educational Resources Information Center

    Buchanan, James P.; Thompson, Spencer K.

    1973-01-01

    Unlike Piaget's clinical procedure, the experiment's methodology allowed substantiation of the ability of children to simultaneously weigh damage and intent information when making a moral judgment. Other advantages of this quantitative methodology are also presented. (Authors)

  7. A quantitative risk-based model for reasoning over critical system properties

    NASA Technical Reports Server (NTRS)

    Feather, M. S.

    2002-01-01

    This position paper suggests the use of a quantitative risk-based model to help support reasoning and decision making that spans many critical system properties, such as security, safety, survivability, fault tolerance, and real-time performance.

  8. Quantitative magnetic resonance imaging in traumatic brain injury.

    PubMed

    Bigler, E D

    2001-04-01

    Quantitative neuroimaging has now become a well-established method for analyzing magnetic resonance imaging in traumatic brain injury (TBI). A general review of studies that have examined quantitative changes following TBI is presented. The consensus of quantitative neuroimaging studies is that most brain structures demonstrate changes in volume or surface area after injury. The patterns of atrophy are consistent with the generalized nature of brain injury and diffuse axonal injury. Various clinical caveats are provided including how quantitative neuroimaging findings can be used clinically and in predicting rehabilitation outcome. The future of quantitative neuroimaging also is discussed.

  9. Decision-making in schizophrenia: A predictive-coding perspective.

    PubMed

    Sterzer, Philipp; Voss, Martin; Schlagenhauf, Florian; Heinz, Andreas

    2018-05-31

    Dysfunctional decision-making has been implicated in the positive and negative symptoms of schizophrenia. Decision-making can be conceptualized within the framework of hierarchical predictive coding as the result of a Bayesian inference process that uses prior beliefs to infer states of the world. According to this idea, prior beliefs encoded at higher levels in the brain are fed back as predictive signals to lower levels. Whenever these predictions are violated by the incoming sensory data, a prediction error is generated and fed forward to update beliefs encoded at higher levels. Well-documented impairments in cognitive decision-making support the view that these neural inference mechanisms are altered in schizophrenia. There is also extensive evidence relating the symptoms of schizophrenia to aberrant signaling of prediction errors, especially in the domain of reward and value-based decision-making. Moreover, the idea of altered predictive coding is supported by evidence for impaired low-level sensory mechanisms and motor processes. We review behavioral and neural findings from these research areas and provide an integrated view suggesting that schizophrenia may be related to a pervasive alteration in predictive coding at multiple hierarchical levels, including cognitive and value-based decision-making processes as well as sensory and motor systems. We relate these findings to decision-making processes and propose that varying degrees of impairment in the implicated brain areas contribute to the variety of psychotic experiences. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. The prognostic value of sleep patterns in disorders of consciousness in the sub-acute phase.

    PubMed

    Arnaldi, Dario; Terzaghi, Michele; Cremascoli, Riccardo; De Carli, Fabrizio; Maggioni, Giorgio; Pistarini, Caterina; Nobili, Flavio; Moglia, Arrigo; Manni, Raffaele

    2016-02-01

    This study aimed to evaluate, through polysomnographic analysis, the prognostic value of sleep patterns, compared to other prognostic factors, in patients with disorders of consciousness (DOCs) in the sub-acute phase. Twenty-seven patients underwent 24-h polysomnography and clinical evaluation 3.5 ± 2 months after brain injury. Their clinical outcome was assessed 18.5 ± 9.9 months later. Polysomnographic recordings were evaluated using visual and quantitative indexes. A general linear model was applied to identify features able to predict clinical outcome. Clinical status at follow-up was analysed as a function of the baseline clinical status, the interval between brain injury and follow-up evaluation, patient age and gender, the aetiology of the injury, the lesion site, and visual and quantitative sleep indexes. A better clinical outcome was predicted by a visual index indicating the presence of sleep integrity (p=0.0006), a better baseline clinical status (p=0.014), and younger age (p=0.031). Addition of the quantitative sleep index strengthened the prediction. More structured sleep emerged as a valuable predictor of a positive clinical outcome in sub-acute DOC patients, even stronger than established predictors (e.g. age and baseline clinical condition). Both visual and quantitative sleep evaluation could be helpful in predicting clinical outcome in sub-acute DOCs. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  11. Towards a differentiated understanding of active travel behaviour: Using social theory to explore everyday commuting

    PubMed Central

    Guell, C.; Panter, J.; Jones, N.R.; Ogilvie, D.

    2012-01-01

    Fostering physical activity is an established public health priority for the primary prevention of a variety of chronic diseases. One promising population approach is to seek to embed physical activity in everyday lives by promoting walking and cycling to and from work (‘active commuting’) as an alternative to driving. Predominantly quantitative epidemiological studies have investigated travel behaviours, their determinants and how they may be changed towards more active choices. This study aimed to depart from narrow behavioural approaches to travel and investigate the social context of commuting with qualitative social research methods. Within a social practice theory framework, we explored how people describe their commuting experiences and make commuting decisions, and how travel behaviour is embedded in and shaped by commuters' complex social worlds. Forty-nine semi-structured interviews and eighteen photo-elicitation interviews with accompanying field notes were conducted with a subset of the Commuting and Health in Cambridge study cohort, based in the UK. The findings are discussed in terms of three particularly pertinent facets of the commuting experience. Firstly, choice and decisions are shaped by the constantly changing and fluid nature of commuters' social worlds. Secondly, participants express ambiguities in relation to their reasoning, ambitions and identities as commuters. Finally, commuting needs to be understood as an embodied and emotional practice. With this in mind, we suggest that everyday decision-making in commuting requires the tactical negotiation of these complexities. This study can help to explain the limitations of more quantitative and static models and frameworks in predicting travel behaviour and identify future research directions. PMID:22486840

  12. Quantitative anatomical analysis of facial expression using a 3D motion capture system: Application to cosmetic surgery and facial recognition technology.

    PubMed

    Lee, Jae-Gi; Jung, Su-Jin; Lee, Hyung-Jin; Seo, Jung-Hyuk; Choi, You-Jin; Bae, Hyun-Sook; Park, Jong-Tae; Kim, Hee-Jin

    2015-09-01

    The topography of the facial muscles differs between males and females and among individuals of the same gender. To explain the unique expressions that people can make, it is important to define the shapes of the muscle, their associations with the skin, and their relative functions. Three-dimensional (3D) motion-capture analysis, often used to study facial expression, was used in this study to identify characteristic skin movements in males and females when they made six representative basic expressions. The movements of 44 reflective markers (RMs) positioned on anatomical landmarks were measured. Their mean displacement was large in males [ranging from 14.31 mm (fear) to 41.15 mm (anger)], and 3.35-4.76 mm smaller in females [ranging from 9.55 mm (fear) to 37.80 mm (anger)]. The percentages of RMs involved in the ten highest mean maximum displacement values in making at least one expression were 47.6% in males and 61.9% in females. The movements of the RMs were larger in males than females but were more limited. Expanding our understanding of facial expression requires morphological studies of facial muscles and studies of related complex functionality. Conducting these together with quantitative analyses, as in the present study, will yield data valuable for medicine, dentistry, and engineering, for example, for surgical operations on facial regions, software for predicting changes in facial features and expressions after corrective surgery, and the development of face-mimicking robots. © 2015 Wiley Periodicals, Inc.

  13. Machine Learning in Medical Imaging.

    PubMed

    Giger, Maryellen L

    2018-03-01

    Advances in both imaging and computers have synergistically led to a rapid rise in the potential use of artificial intelligence in various radiological imaging tasks, such as risk assessment, detection, diagnosis, prognosis, and therapy response, as well as in multi-omics disease discovery. A brief overview of the field is given here, allowing the reader to recognize the terminology, the various subfields, and components of machine learning, as well as the clinical potential. Radiomics, an expansion of computer-aided diagnosis, has been defined as the conversion of images to minable data. The ultimate benefit of quantitative radiomics is to (1) yield predictive image-based phenotypes of disease for precision medicine or (2) yield quantitative image-based phenotypes for data mining with other -omics for discovery (ie, imaging genomics). For deep learning in radiology to succeed, note that well-annotated large data sets are needed since deep networks are complex, computer software and hardware are evolving constantly, and subtle differences in disease states are more difficult to perceive than differences in everyday objects. In the future, machine learning in radiology is expected to have a substantial clinical impact with imaging examinations being routinely obtained in clinical practice, providing an opportunity to improve decision support in medical image interpretation. The term of note is decision support, indicating that computers will augment human decision making, making it more effective and efficient. The clinical impact of having computers in the routine clinical practice may allow radiologists to further integrate their knowledge with their clinical colleagues in other medical specialties and allow for precision medicine. Copyright © 2018. Published by Elsevier Inc.

  14. A toolbox for discrete modelling of cell signalling dynamics.

    PubMed

    Paterson, Yasmin Z; Shorthouse, David; Pleijzier, Markus W; Piterman, Nir; Bendtsen, Claus; Hall, Benjamin A; Fisher, Jasmin

    2018-06-18

    In an age where the volume of data regarding biological systems exceeds our ability to analyse it, many researchers are looking towards systems biology and computational modelling to help unravel the complexities of gene and protein regulatory networks. In particular, the use of discrete modelling allows generation of signalling networks in the absence of full quantitative descriptions of systems, which are necessary for ordinary differential equation (ODE) models. In order to make such techniques more accessible to mainstream researchers, tools such as the BioModelAnalyzer (BMA) have been developed to provide a user-friendly graphical interface for discrete modelling of biological systems. Here we use the BMA to build a library of discrete target functions of known canonical molecular interactions, translated from ordinary differential equations (ODEs). We then show that these BMA target functions can be used to reconstruct complex networks, which can correctly predict many known genetic perturbations. This new library supports the accessibility ethos behind the creation of BMA, providing a toolbox for the construction of complex cell signalling models without the need for extensive experience in computer programming or mathematical modelling, and allows for construction and simulation of complex biological systems with only small amounts of quantitative data.
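
    As an illustration of the discrete-modelling idea described above, the sketch below updates a toy qualitative network with hand-written target functions in plain Python. The three-node topology, the 0-2 level range and the target functions are hypothetical placeholders; this is not the BMA tool or its translated ODE library.

    ```python
    # Minimal sketch of discrete (qualitative) network updating, in the spirit of
    # target functions used by discrete modelling tools such as BMA. The network
    # and its target functions are hypothetical illustrations only.

    def clamp(v, lo=0, hi=2):
        return max(lo, min(hi, v))

    # Target functions map current node levels (0..2) to a node's next level.
    target_functions = {
        "ligand":    lambda s: s["ligand"],                        # external input, held fixed
        "receptor":  lambda s: clamp(s["ligand"]),                 # activated by ligand
        "output":    lambda s: clamp(s["receptor"] - s["inhibitor"]),
        "inhibitor": lambda s: 1,                                  # constitutive, intermediate level
    }

    def step(state):
        """Synchronous update: every node recomputes its level from the current state."""
        return {node: f(state) for node, f in target_functions.items()}

    state = {"ligand": 2, "receptor": 0, "output": 0, "inhibitor": 0}
    for _ in range(4):
        state = step(state)
    print(state)  # settles to a steady state under these assumptions
    ```

    Repeated synchronous updates drive the toy network to a stable configuration, which is the kind of attractor such tools compare against known perturbation phenotypes.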

  15. The Insignificance of Thresholds in Environmental Impact Assessment: An Illustrative Case Study in Canada

    NASA Astrophysics Data System (ADS)

    Murray, Cathryn Clarke; Wong, Janson; Singh, Gerald G.; Mach, Megan; Lerner, Jackie; Ranieri, Bernardo; Peterson St-Laurent, Guillaume; Guimaraes, Alice; Chan, Kai M. A.

    2018-06-01

    Environmental assessment is the process that decision-makers rely on to predict, evaluate, and prevent biophysical, social, and economic impacts of potential project developments. The determination of significance in environmental assessment is central to environmental management in many nations. We reviewed ten recent environmental impact assessments from British Columbia, Canada and systematically reviewed and scored significance determination and the approaches used by assessors, the use of thresholds in significance determination, threshold exceedances, and the outcomes. Findings of significant impacts were exceedingly rare and practitioners used a combination of significance determination approaches, most commonly relying upon reasoned argumentation. Quantitative thresholds were rarely employed, with less than 10% of the valued components evaluated using thresholds. Even where quantitative thresholds for significance were exceeded, in every case practitioners used a variety of rationales to demote negative impacts to non-significance. These reasons include combinations of scale (temporal and spatial) of impacts, an already exceeded baseline, model uncertainty and/or substituting less stringent thresholds. Governments and agencies can better protect resources by requiring clear and defensible significance determinations, by making government-defined thresholds legally enforceable and accountable, and by requiring or encouraging significance determination through inclusive and collaborative approaches.

  16. Current Challenges in the First Principle Quantitative Modelling of the Lower Hybrid Current Drive in Tokamaks

    NASA Astrophysics Data System (ADS)

    Peysson, Y.; Bonoli, P. T.; Chen, J.; Garofalo, A.; Hillairet, J.; Li, M.; Qian, J.; Shiraiwa, S.; Decker, J.; Ding, B. J.; Ekedahl, A.; Goniche, M.; Zhai, X.

    2017-10-01

    The Lower Hybrid (LH) wave is widely used in existing tokamaks for tailoring the current density profile or extending the pulse duration to steady-state regimes. Its high efficiency makes it particularly attractive for a fusion reactor, and it is therefore being considered for this purpose in the ITER tokamak. Nevertheless, although the basics of the LH wave in tokamak plasmas are well known, quantitative modeling of experimental observations based on first principles remains a highly challenging exercise, despite the considerable numerical efforts achieved so far. In this context, a rigorous methodology must be applied in the simulations to identify the minimum number of physical mechanisms that must be considered to reproduce experimental shot-to-shot observations as well as scalings (density, power spectrum). Based on recent simulations carried out for the EAST, Alcator C-Mod and Tore Supra tokamaks, the state of the art in LH modeling is reviewed. The capability of fast electron bremsstrahlung, internal inductance li and LH-driven current at zero loop voltage to jointly constrain LH simulations is discussed, as well as the need for further improvements (diagnostics, codes, LH model) to achieve robust interpretative and predictive simulations.

  17. A Decision Support System for Concrete Bridge Maintenance

    NASA Astrophysics Data System (ADS)

    Rashidi, Maria; Lemass, Brett; Gibson, Peter

    2010-05-01

    The maintenance of bridges as a key element in transportation infrastructure has become a major concern for asset managers and society due to increasing traffic volumes, deterioration of existing bridges and well-publicised bridge failures. A pivotal responsibility for asset managers in charge of bridge remediation is to identify the risks and assess the consequences of remediation programs to ensure that the decisions are transparent and lead to the lowest predicted losses in recognized constraint areas. The ranking of bridge remediation treatments can be quantitatively assessed using a weighted constraint approach to structure the otherwise ill-structured phases of problem definition, conceptualization and embodiment [1]. This Decision Support System helps asset managers in making the best decision with regards to financial limitations and other dominant constraints imposed upon the problem at hand. The risk management framework in this paper deals with the development of a quantitative intelligent decision support system for bridge maintenance which has the ability to provide a source for consistent decisions through selecting appropriate remediation treatments based upon cost, service life, product durability/sustainability, client preferences, legal and environmental constraints. Model verification and validation through industry case studies is ongoing.
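
    A weighted-constraint ranking of the kind described above can be sketched in a few lines; the constraint weights, treatment options and 0-10 scores below are hypothetical placeholders, not values from the paper's case studies.

    ```python
    # Minimal sketch of weighted-constraint ranking of remediation treatments.
    # Constraint weights and option scores are hypothetical illustrations.

    weights = {"cost": 0.30, "service_life": 0.25, "durability": 0.20,
               "client_preference": 0.15, "environmental": 0.10}

    # Each option is scored 0-10 against every constraint (higher is better).
    options = {
        "patch_repair":     {"cost": 8, "service_life": 4, "durability": 5,
                             "client_preference": 6, "environmental": 7},
        "cathodic_protect": {"cost": 4, "service_life": 8, "durability": 8,
                             "client_preference": 7, "environmental": 6},
        "full_replacement": {"cost": 2, "service_life": 10, "durability": 9,
                             "client_preference": 5, "environmental": 4},
    }

    def weighted_score(scores):
        return sum(weights[c] * scores[c] for c in weights)

    ranking = sorted(options, key=lambda o: weighted_score(options[o]), reverse=True)
    for o in ranking:
        print(f"{o}: {weighted_score(options[o]):.2f}")
    ```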

  18. Statistical Mechanics of US Supreme Court

    NASA Astrophysics Data System (ADS)

    Lee, Edward; Broedersz, Chase; Bialek, William; Biophysics Theory Group Team

    2014-03-01

    We build simple models for the distribution of voting patterns in a group, using the Supreme Court of the United States as an example. The least structured, or maximum entropy, model that is consistent with the observed pairwise correlations among justices' votes is equivalent to an Ising spin glass. While all correlations (perhaps surprisingly) are positive, the effective pairwise interactions in the spin glass model have both signs, recovering some of our intuition that justices on opposite sides of the ideological spectrum should have a negative influence on one another. Despite the competing interactions, a strong tendency toward unanimity emerges from the model, and this agrees quantitatively with the data. The model shows that voting patterns are organized in a relatively simple ``energy landscape,'' correctly predicts the extent to which each justice is correlated with the majority, and gives us a measure of the influence that justices exert on one another. These results suggest that simple models, grounded in statistical physics, can capture essential features of collective decision making quantitatively, even in a complex political context. Funded by National Science Foundation Grants PHY-0957573 and CCF-0939370, WM Keck Foundation, Lewis-Sigler Fellowship, Burroughs Wellcome Fund, and Winston Foundation.
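
    The pairwise maximum-entropy construction described above is equivalent to an Ising model over the nine votes; the sketch below samples such a model with Metropolis updates. The fields and couplings are random placeholders rather than parameters inferred from actual voting records.

    ```python
    import numpy as np

    # Minimal sketch of a pairwise maximum-entropy (Ising) model of 9 voters.
    # The fields h and couplings J are random placeholders, not parameters
    # fitted to Supreme Court data.

    rng = np.random.default_rng(0)
    N = 9
    h = rng.normal(0.0, 0.1, size=N)           # individual voting biases
    J = rng.normal(0.2, 0.3, size=(N, N))      # pairwise interactions
    J = (J + J.T) / 2
    np.fill_diagonal(J, 0.0)

    def energy(s):
        """E(s) = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j, with votes s_i = +/-1."""
        return -h @ s - 0.5 * s @ J @ s

    def metropolis(n_steps=20000):
        s = rng.choice([-1, 1], size=N)
        samples = []
        for _ in range(n_steps):
            i = rng.integers(N)
            flipped = np.where(np.arange(N) == i, -s, s)
            dE = energy(flipped) - energy(s)
            if dE < 0 or rng.random() < np.exp(-dE):
                s[i] = -s[i]
            samples.append(s.copy())
        return np.array(samples)

    votes = metropolis()
    print("fraction of unanimous 'decisions':", np.mean(np.abs(votes.sum(axis=1)) == N))
    ```

    Even with modest couplings, such a model tends to produce a noticeable excess of unanimous configurations, which is the qualitative behaviour the abstract highlights.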

  19. Revealing the drug-resistant mechanism for diarylpyrimidine analogue inhibitors of HIV-1 reverse transcriptase.

    PubMed

    Zhang, Hao; Qin, Fang; Ye, Wei; Li, Zeng; Ma, Songyao; Xia, Yan; Jiang, Yi; Zhu, Jiayi; Li, Yixue; Zhang, Jian; Chen, Hai-Feng

    2011-09-01

    Diaryltriazine (DATA) and diarylpyrimidine (DAPY) are two categories of inhibitors with highly potent activity against the wild type (wt) and four principal mutant types (L100I, K103N, Y181C and Y188L) of HIV-1 reverse transcriptase (RT). We previously revealed the drug-resistant mechanism of DATA analogue inhibitors with molecular dynamics simulation and three-dimensional quantitative structure-activity relationship (3D-QSAR) methods. In this work, we investigated the drug-resistant mechanism of DAPY analogue inhibitors. It was found that DAPY analogue inhibitors form more hydrogen bonds and hydrophobic contacts with the wild type and mutants of HIV-1 RT than DATA inhibitors, which could explain why DAPY analogue inhibitors are more potent than DATA against the wild type and mutants of HIV-1 RT. Then, 3D-QSAR models were constructed for these inhibitors of the wild type and four principal mutant types of HIV-1 RT and evaluated with test-set compounds. These combined models can be used to design new chemical entities and to make quantitative predictions of the bioactivities of HIV-1 RT inhibitors before resorting to in vitro and in vivo experiments. © 2011 John Wiley & Sons A/S.

  20. An Extended View of Mars Ozone

    NASA Technical Reports Server (NTRS)

    Fast, Kelly

    2011-01-01

    We present an ongoing effort to characterize chemistry in Mars' atmosphere in multiple seasons on timescales longer than flight missions through coordinated efforts by GSFC's HIPWAC spectrometer and Mars Express SPICAM, archival measurements, and tests/application of photochemical models. The trace species ozone (O3) is an effective probe of atmospheric chemistry because it is destroyed by chemically active odd hydrogen species (HO(sub x)) that result from water vapor photolysis. Observed ozone abundance on Mars is a critical test for three-dimensional photochemistry coupled general circulation models (GCM) that make specific predictions for the spatial, diurnal, and seasonal behavior of ozone and related chemistry and climatological conditions. Coordinated measurements by HIPWAC and SPICAM quantitatively linked mission data to the 23-year GSFC ozone data record and also revealed unanticipated inter-decadal variability of same-season ozone abundances, a possible indicator of changing cloud activity (heterogeneous sink for HO(sub x)). A detailed study of long-term conditions is critical to characterizing the predictability of Mars' seasonal chemical behavior, particularly in light of the implications of and the lack of explanation for reported methane behavior.

  1. Use of artificial intelligence in the design of small peptide antibiotics effective against a broad spectrum of highly antibiotic-resistant superbugs.

    PubMed

    Cherkasov, Artem; Hilpert, Kai; Jenssen, Håvard; Fjell, Christopher D; Waldbrook, Matt; Mullaly, Sarah C; Volkmer, Rudolf; Hancock, Robert E W

    2009-01-16

    Increased multiple antibiotic resistance in the face of declining antibiotic discovery is one of society's most pressing health issues. Antimicrobial peptides represent a promising new class of antibiotics. Here we ask whether it is possible to make small broad spectrum peptides employing minimal assumptions, by capitalizing on accumulating chemical biology information. Using peptide array technology, two large random 9-amino-acid peptide libraries were iteratively created using the amino acid composition of the most active peptides. The resultant data was used together with Artificial Neural Networks, a powerful machine learning technique, to create quantitative in silico models of antibiotic activity. On the basis of random testing, these models proved remarkably effective in predicting the activity of 100,000 virtual peptides. The best peptides, representing the top quartile of predicted activities, were effective against a broad array of multidrug-resistant "Superbugs" with activities that were equal to or better than four highly used conventional antibiotics, more effective than the most advanced clinical candidate antimicrobial peptide, and protective against Staphylococcus aureus infections in animal models.
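
    A stripped-down version of the modelling step, composition descriptors feeding a small feed-forward network, might look like the sketch below. The peptide set, the toy activity rule and the network size are synthetic placeholders; the published work used peptide-array measurements and its own descriptor set.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    # Minimal sketch: learn peptide "activity" from amino-acid composition with a
    # small feed-forward network. Peptides and activities below are synthetic
    # placeholders, not the published peptide-array data.

    AA = "ACDEFGHIKLMNPQRSTVWY"
    rng = np.random.default_rng(1)

    def composition(peptide):
        """20-dimensional amino-acid composition descriptor."""
        return np.array([peptide.count(a) / len(peptide) for a in AA])

    peptides = ["".join(rng.choice(list(AA), size=9)) for _ in range(500)]
    # Toy target: activity loosely tied to cationic (K, R) and hydrophobic (W, L) content.
    y = np.array([p.count("K") + p.count("R") + 0.5 * (p.count("W") + p.count("L"))
                  for p in peptides]) + rng.normal(0, 0.3, size=500)

    X = np.array([composition(p) for p in peptides])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    model.fit(X_tr, y_tr)
    print("held-out R^2:", round(model.score(X_te, y_te), 2))
    ```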

  2. Range Process Simulation Tool

    NASA Technical Reports Server (NTRS)

    Phillips, Dave; Haas, William; Barth, Tim; Benjamin, Perakath; Graul, Michael; Bagatourova, Olga

    2005-01-01

    Range Process Simulation Tool (RPST) is a computer program that assists managers in rapidly predicting and quantitatively assessing the operational effects of proposed technological additions to, and/or upgrades of, complex facilities and engineering systems such as the Eastern Test Range. Originally designed for application to space transportation systems, RPST is also suitable for assessing effects of proposed changes in industrial facilities and large organizations. RPST follows a model-based approach that includes finite-capacity schedule analysis and discrete-event process simulation. A component-based, scalable, open architecture makes RPST easily and rapidly tailorable for diverse applications. Specific RPST functions include: (1) definition of analysis objectives and performance metrics; (2) selection of process templates from a process-template library; (3) configuration of process models for detailed simulation and schedule analysis; (4) design of operations-analysis experiments; (5) schedule and simulation-based process analysis; and (6) optimization of performance by use of genetic algorithms and simulated annealing. The main benefits afforded by RPST are provision of information that can be used to reduce costs of operation and maintenance, and the capability for affordable, accurate, and reliable prediction and exploration of the consequences of many alternative proposed decisions.
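
    The discrete-event core of such a tool can be illustrated with a minimal event-queue simulation of one finite-capacity processing station; the arrival and service rates below are placeholders, not Eastern Test Range data.

    ```python
    import heapq
    import random

    # Minimal discrete-event sketch of a single finite-capacity processing station
    # (one server, FIFO queue). Arrival and service rates are placeholder values.

    random.seed(0)
    ARRIVAL_MEAN, SERVICE_MEAN, N_JOBS = 4.0, 3.0, 2000

    events, t = [], 0.0
    for job in range(N_JOBS):                      # pre-schedule Poisson arrivals
        t += random.expovariate(1.0 / ARRIVAL_MEAN)
        heapq.heappush(events, (t, "arrival", job))

    queue, server_free, waits = [], True, {}
    while events:
        now, kind, job = heapq.heappop(events)
        if kind == "arrival":
            queue.append((now, job))
        else:                                      # departure frees the server
            server_free = True
        if server_free and queue:
            arrived, nxt = queue.pop(0)
            waits[nxt] = now - arrived
            server_free = False
            heapq.heappush(events, (now + random.expovariate(1.0 / SERVICE_MEAN),
                                    "departure", nxt))

    print("mean wait in queue:", round(sum(waits.values()) / len(waits), 2))
    ```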

  3. Energetic fluctuations in amorphous semiconducting polymers: Impact on charge-carrier mobility

    NASA Astrophysics Data System (ADS)

    Gali, Sai Manoj; D'Avino, Gabriele; Aurel, Philippe; Han, Guangchao; Yi, Yuanping; Papadopoulos, Theodoros A.; Coropceanu, Veaceslav; Brédas, Jean-Luc; Hadziioannou, Georges; Zannoni, Claudio; Muccioli, Luca

    2017-10-01

    We present a computational approach to model hole transport in an amorphous semiconducting fluorene-triphenylamine copolymer (TFB), which is based on the combination of molecular dynamics to predict the morphology of the oligomeric system and Kinetic Monte Carlo (KMC), parameterized with quantum chemistry calculations, to simulate hole transport. Carrying out a systematic comparison with available experimental results, we discuss the role that different transport parameters play in the KMC simulation and in particular the dynamic nature of positional and energetic disorder on the temperature and electric field dependence of charge mobility. It emerges that a semi-quantitative agreement with experiments is found only when the dynamic nature of the disorder is taken into account. This study establishes a clear link between microscopic quantities and macroscopic hole mobility for TFB and provides substantial evidence of the importance of incorporating fluctuations, at the molecular level, to obtain results that are in good agreement with temperature and electric field-dependent experimental mobilities. Our work makes a step forward towards the application of nanoscale theoretical schemes as a tool for predictive material screening.
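
    The kinetic Monte Carlo step can be sketched for the simplest possible case: a single carrier hopping along a one-dimensional chain with Gaussian energetic disorder. The choice of Miller-Abrahams rates and all parameter values are illustrative assumptions, not the quantum-chemical parameterization or three-dimensional morphology used for TFB.

    ```python
    import numpy as np

    # Minimal kinetic Monte Carlo sketch: one hole hopping on a 1D chain of sites
    # with Gaussian energetic disorder, using Miller-Abrahams rates.
    # All parameters are generic placeholders.

    rng = np.random.default_rng(0)
    N_SITES, SIGMA, kT = 500, 0.08, 0.025         # sites, disorder (eV), thermal energy (eV)
    NU0, ALPHA, A = 1e12, 2.0, 1.0                # attempt freq (1/s), decay (1/nm), spacing (nm)
    FIELD = 1e-2                                  # eV/nm along the chain

    E = rng.normal(0.0, SIGMA, N_SITES) - FIELD * A * np.arange(N_SITES)

    def ma_rate(dE):
        """Miller-Abrahams rate for a nearest-neighbour hop with energy change dE."""
        return NU0 * np.exp(-2 * ALPHA * A) * (np.exp(-dE / kT) if dE > 0 else 1.0)

    site, t = 0, 0.0
    while site < N_SITES - 1:
        rates = [ma_rate(E[site - 1] - E[site]) if site > 0 else 0.0,
                 ma_rate(E[site + 1] - E[site])]
        total = sum(rates)
        t += rng.exponential(1.0 / total)                 # KMC waiting time
        site += -1 if rng.random() < rates[0] / total else 1

    velocity = (N_SITES - 1) * A / t                      # nm/s
    mobility = velocity / FIELD * 1e-14                   # nm^2/(V s) -> cm^2/(V s)
    print("drift mobility estimate: %.2e cm^2/Vs" % mobility)
    ```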

  4. Accumulator and random-walk models of psychophysical discrimination: a counter-evaluation.

    PubMed

    Vickers, D; Smith, P

    1985-01-01

    In a recent assessment of models of psychophysical discrimination, Heath criticises the accumulator model for its reliance on computer simulation and qualitative evidence, and contrasts it unfavourably with a modified random-walk model, which yields exact predictions, is susceptible to critical test, and is provided with simple parameter-estimation techniques. A counter-evaluation is presented, in which the approximations employed in the modified random-walk analysis are demonstrated to be seriously inaccurate, the resulting parameter estimates to be artefactually determined, and the proposed test not critical. It is pointed out that Heath's specific application of the model is not legitimate, his data treatment inappropriate, and his hypothesis concerning confidence inconsistent with experimental results. Evidence from adaptive performance changes is presented which shows that the necessary assumptions for quantitative analysis in terms of the modified random-walk model are not satisfied, and that the model can be reconciled with data at the qualitative level only by making it virtually indistinguishable from an accumulator process. A procedure for deriving exact predictions for an accumulator process is outlined.

  5. Frequency Modulation of Transcriptional Bursting Enables Sensitive and Rapid Gene Regulation.

    PubMed

    Li, Congxin; Cesbron, François; Oehler, Michael; Brunner, Michael; Höfer, Thomas

    2018-04-25

    Gene regulation is a complex non-equilibrium process. Here, we show that quantitating the temporal regulation of key gene states (transcriptionally inactive, active, and refractory) provides a parsimonious framework for analyzing gene regulation. Our theory makes two non-intuitive predictions. First, for transcription factors (TFs) that regulate transcription burst frequency, as opposed to amplitude or duration, weak TF binding is sufficient to elicit strong transcriptional responses. Second, refractoriness of a gene after a transcription burst enables rapid responses to stimuli. We validate both predictions experimentally by exploiting the natural, optogenetic-like responsiveness of the Neurospora GATA-type TF White Collar Complex (WCC) to blue light. Further, we demonstrate that differential regulation of WCC target genes is caused by different gene activation rates, not different TF occupancy, and that these rates are tuned by both the core promoter and the distance between TF-binding site and core promoter. In total, our work demonstrates the relevance of a kinetic, non-equilibrium framework for understanding transcriptional regulation. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
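
    The three gene states named above (inactive, active, refractory) can be simulated directly with a Gillespie algorithm; a frequency-modulating TF would act on the activation rate k_on. The rate constants below are illustrative placeholders, not fitted values from the study.

    ```python
    import numpy as np

    # Minimal Gillespie sketch of a three-state gene cycle
    # (inactive -> active -> refractory -> inactive), with mRNA made only while
    # the gene is active. Rate constants are illustrative placeholders.

    rng = np.random.default_rng(0)
    k_on, k_off, k_rec = 0.5, 2.0, 0.2      # activation, inactivation, recovery (1/min)
    k_syn, k_deg = 20.0, 0.1                # mRNA synthesis and degradation (1/min)

    state, mrna, t, T_END = "inactive", 0, 0.0, 600.0
    trace = []
    while t < T_END:
        rates = {
            ("switch", "active"):     k_on  if state == "inactive"   else 0.0,
            ("switch", "refractory"): k_off if state == "active"     else 0.0,
            ("switch", "inactive"):   k_rec if state == "refractory" else 0.0,
            ("mrna", +1):             k_syn if state == "active"     else 0.0,
            ("mrna", -1):             k_deg * mrna,
        }
        total = sum(rates.values())
        t += rng.exponential(1.0 / total)
        r = rng.random() * total
        for (kind, what), rate in rates.items():
            if r < rate:
                if kind == "switch":
                    state = what
                else:
                    mrna += what
                break
            r -= rate
        trace.append((t, mrna))

    print("mean mRNA level:", round(np.mean([m for _, m in trace]), 1))
    ```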

  6. Revisiting a model of ontogenetic growth: estimating model parameters from theory and data.

    PubMed

    Moses, Melanie E; Hou, Chen; Woodruff, William H; West, Geoffrey B; Nekola, Jeffery C; Zuo, Wenyun; Brown, James H

    2008-05-01

    The ontogenetic growth model (OGM) of West et al. provides a general description of how metabolic energy is allocated between production of new biomass and maintenance of existing biomass during ontogeny. Here, we reexamine the OGM, make some minor modifications and corrections, and further evaluate its ability to account for empirical variation on rates of metabolism and biomass in vertebrates both during ontogeny and across species of varying adult body size. We show that the updated version of the model is internally consistent and is consistent with other predictions of metabolic scaling theory and empirical data. The OGM predicts not only the near universal sigmoidal form of growth curves but also the M^(1/4) scaling of the characteristic times of ontogenetic stages in addition to the curvilinear decline in growth efficiency described by Brody. Additionally, the OGM relates the M^(3/4) scaling across adults of different species to the scaling of metabolic rate across ontogeny within species. In providing a simple, quantitative description of how energy is allocated to growth, the OGM calls attention to unexplained variation, unanswered questions, and opportunities for future research.
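
    For reference, the growth equation at the heart of the OGM, in the form commonly written for West et al.'s model (the updated version discussed in the abstract may carry additional corrections), is:

    ```latex
    % Ontogenetic growth model (West et al. form); a and b are taxon-specific constants.
    \frac{dm}{dt} \;=\; a\,m^{3/4} - b\,m
                \;=\; a\,m^{3/4}\left[1 - \left(\frac{m}{M}\right)^{1/4}\right],
    \qquad M = \left(\frac{a}{b}\right)^{4}.
    % Substituting r = (m/M)^{1/4} collapses growth onto a universal sigmoidal curve:
    % 1 - r(t) = \left[1 - r(0)\right]\, e^{-a t /(4 M^{1/4})}.
    ```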

  7. Systems analysis of apoptosis protein expression allows the case-specific prediction of cell death responsiveness of melanoma cells

    PubMed Central

    Passante, E; Würstle, M L; Hellwig, C T; Leverkus, M; Rehm, M

    2013-01-01

    Many cancer entities and their associated cell line models are highly heterogeneous in their responsiveness to apoptosis inducers and, despite a detailed understanding of the underlying signaling networks, cell death susceptibility currently cannot be predicted reliably from protein expression profiles. Here, we demonstrate that an integration of quantitative apoptosis protein expression data with pathway knowledge can predict the cell death responsiveness of melanoma cell lines. By a total of 612 measurements, we determined the absolute expression (nM) of 17 core apoptosis regulators in a panel of 11 melanoma cell lines, and enriched these data with systems-level information on apoptosis pathway topology. By applying multivariate statistical analysis and multi-dimensional pattern recognition algorithms, the responsiveness of individual cell lines to tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) or dacarbazine (DTIC) could be predicted with very high accuracy (91 and 82% correct predictions), and the most effective treatment option for individual cell lines could be pre-determined in silico. In contrast, cell death responsiveness was poorly predicted when not taking knowledge on protein–protein interactions into account (55 and 36% correct predictions). We also generated mathematical predictions on whether anti-apoptotic Bcl-2 family members or x-linked inhibitor of apoptosis protein (XIAP) can be targeted to enhance TRAIL responsiveness in individual cell lines. Subsequent experiments, making use of pharmacological Bcl-2/Bcl-xL inhibition or siRNA-based XIAP depletion, confirmed the accuracy of these predictions. We therefore demonstrate that cell death responsiveness to TRAIL or DTIC can be predicted reliably in a large number of melanoma cell lines when investigating expression patterns of apoptosis regulators in the context of their network-level interplay. The capacity to predict responsiveness at the cellular level may contribute to personalizing anti-cancer treatments in the future. PMID:23933815

  8. Pharmacodynamic-pharmacokinetic integration as a guide to medicinal chemistry.

    PubMed

    Gabrielsson, Johan; Fjellström, Ola; Ulander, Johan; Rowley, Michael; Van Der Graaf, Piet H

    2011-01-01

    A primary objective of pharmacokinetic-pharmacodynamic (PKPD) reasoning is to identify key in vivo drug and system properties, enabling prediction of the magnitude and time course of drug responses under physiological and pathological conditions in animals and man. Since the pharmacological response generated by a drug is highly dependent on the actual system used to study its action, knowledge about its potency and efficacy at a given concentration or dose is insufficient to obtain a proper understanding of its pharmacodynamic profile. Hence, the output of PKPD activities extends beyond the provision of quantitative measures (models) of results, to the design of future protocols. Furthermore, because PKPD integrates DMPK (e.g. clearance) and pharmacology (e.g. potency), it provides an anchor point for compound selection, and, as such, should be viewed as an important weapon in medicinal chemistry. Here we outline key PK concepts relevant to PD, and then consider real-life experiments to illustrate the importance to the medicinal chemist of data obtained by PKPD. Useful assumptions and potential pitfalls are described, providing a holistic view of the plethora of determinants behind in vitro-in vivo correlations. By condensing complexity to simplicity, there are not only consequences for experimental design, and for the ranking and design of compounds, but it is also possible to make important predictions such as the impact of changes in drug potency and kinetics. In short, by using quantitative methods to tease apart pharmacodynamic complexities such as temporal differences and changes in plasma protein binding, it is possible to target the changes necessary for improving a compound's profile.
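
    A generic, minimal example of the PK-PD link discussed above couples a one-compartment oral PK profile to a direct Emax response; all parameter values are illustrative and unrelated to any specific compound in the article.

    ```python
    import numpy as np

    # Generic sketch of PKPD integration: a one-compartment oral PK model driving
    # a direct Emax pharmacodynamic response. All parameters are illustrative.

    dose, F, V = 100.0, 0.8, 50.0      # mg, bioavailability, L
    ka, CL = 1.2, 5.0                  # 1/h absorption, L/h clearance
    Emax, EC50 = 100.0, 0.4            # maximal effect, mg/L

    ke = CL / V                        # elimination rate constant (1/h)
    t = np.linspace(0, 24, 241)        # hours

    # Bateman equation for plasma concentration after a single oral dose.
    C = (F * dose * ka / (V * (ka - ke))) * (np.exp(-ke * t) - np.exp(-ka * t))
    E = Emax * C / (EC50 + C)          # direct (non-hysteretic) Emax response

    print("Cmax = %.2f mg/L at t = %.1f h" % (C.max(), t[C.argmax()]))
    print("peak effect = %.1f%% of Emax" % E.max())
    ```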

  9. Quantitative model of the effects of contamination and space environment on in-flight aging of thermal coatings

    NASA Astrophysics Data System (ADS)

    Vanhove, Emilie; Roussel, Jean-François; Remaury, Stéphanie; Faye, Delphine; Guigue, Pascale

    2014-09-01

    The in-orbit aging of the thermo-optical properties of thermal coatings critically impacts both spacecraft thermal balance and heating power consumption. Nevertheless, in-flight thermal coating aging is generally larger than that measured on the ground, and current knowledge does not allow reliable predictions. As a result, a large oversizing of thermal control systems is required. To address this issue, the Centre National d'Etudes Spatiales has developed a low-cost experiment, called THERME, which enables monitoring of the in-flight time-evolution of the solar absorptivity of a large variety of coatings, including commonly used coatings and new materials, by measuring their temperature. This experiment has been carried out on sun-synchronous spacecraft for more than 27 years, thus generating a very large set of telemetry measurements. The aim of this work was to develop a model able to semi-quantitatively reproduce these data with a restricted number of parameters. The underlying objectives were to better understand the contributions of the different phenomena involved and, later on, to predict thermal coating aging at end of life. The physical processes modeled include contamination deposition, UV aging of both the contamination layers and the intrinsic material, and atomic oxygen erosion. Efforts were particularly focused on the satellite leading wall, as this face is exposed to the largest variations in environmental conditions during the solar cycle. The non-monotonic time-evolution of the solar absorptivity of thermal coatings is shown to be due to a succession of contamination and contaminant erosion by atomic oxygen, phased with the solar cycle.

  10. Annual distribution of allergenic fungal spores in atmospheric particulate matter in the Eastern Mediterranean; a comparative study between ergosterol and quantitative PCR analysis

    NASA Astrophysics Data System (ADS)

    Lang-Yona, N.; Dannemiller, K.; Yamamoto, N.; Burshtein, N.; Peccia, J.; Yarden, O.; Rudich, Y.

    2012-03-01

    Airborne fungal spores are an important fraction of atmospheric particulate matter and are major causative agents of allergenic and infectious diseases. Predicting the variability and species of allergy-causing fungal spores requires detailed and reliable methods for identification and quantification. There are diverse methods for their detection in the atmosphere and in indoor environments; yet, it is important to optimize suitable methods for the characterization of fungal spores in atmospheric samples. In this study we sampled and characterized total and specific airborne fungal spores from PM10 samples collected in Rehovot, Israel over an entire year. The total fungal spore concentrations varied throughout the year, although the species variability was nearly the same. Seasonal equivalent spore concentrations analyzed by real-time quantitative-PCR-based methods were fall > winter > spring > summer. Reported concentrations based on ergosterol analysis for the same samples were fall > spring > winter > summer. Correlation between the two analytical methods was found only for the spring season. These poor associations may be due to per-spore ergosterol variations that arise from both varying production rates and molecular degradation of ergosterol. While conversion of genome copies to spore concentration is not yet straightforward, the potential for improving this conversion and the ability of qPCR to identify groups of fungi or specific species make this method preferable for environmental spore quantification. Identifying tools for establishing the relation between the presence of species and the actual ability to induce allergies is still needed in order to predict the effect on human health.

  11. Annual distribution of allergenic fungal spores in atmospheric particulate matter in the eastern mediterranean; a comparative study between ergosterol and quantitative PCR analysis

    NASA Astrophysics Data System (ADS)

    Lang-Yona, N.; Dannemiller, K.; Yamamoto, N.; Burshtein, N.; Peccia, J.; Yarden, O.; Rudich, Y.

    2011-10-01

    Airborne fungal spores are an important fraction of atmospheric particulate matter and are major causative agents of allergenic and infectious diseases. Predicting the variability and species of allergy-causing fungal spores requires detailed and reliable methods for identification and quantification. There are diverse methods for their detection in the atmosphere and in indoor environments; yet, it is important to optimize suitable methods for the characterization of fungal spores in atmospheric samples. In this study we sampled and characterized total and specific airborne fungal spores from PM10 samples collected in Rehovot, Israel over an entire year. The total fungal spore concentrations varied throughout the year, although the species variability was nearly the same. Seasonal equivalent spore concentrations analyzed by real-time quantitative-PCR-based methods were fall > winter > spring > summer. Reported concentrations based on ergosterol analysis for the same samples were fall > spring > winter > summer. Correlation between the two analytical methods was found only for the spring season. These poor associations may be due to per-spore ergosterol variations that arise from both varying production rates and molecular degradation of ergosterol. While conversion of genome copies to spore concentration is not yet straightforward, the potential for improving this conversion and the ability of qPCR to identify groups of fungi or specific species make this method preferable for environmental spore quantification. Identifying tools for establishing the relation between the presence of species and the actual ability to induce allergies is still needed in order to predict the effect on human health.

  12. Uncertain Henry's law constants compromise equilibrium partitioning calculations of atmospheric oxidation products

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Yuan, Tiange; Wood, Stephen A.; Goss, Kai-Uwe; Li, Jingyi; Ying, Qi; Wania, Frank

    2017-06-01

    Gas-particle partitioning governs the distribution, removal, and transport of organic compounds in the atmosphere and the formation of secondary organic aerosol (SOA). The large variety of atmospheric species and their wide range of properties make predicting this partitioning equilibrium challenging. Here we expand on earlier work and predict gas-organic and gas-aqueous phase partitioning coefficients for 3414 atmospherically relevant molecules using COSMOtherm, SPARC Performs Automated Reasoning in Chemistry (SPARC), and poly-parameter linear free-energy relationships. The Master Chemical Mechanism generated the structures by oxidizing primary emitted volatile organic compounds. Predictions for gas-organic phase partitioning coefficients (K_WIOM/G) by different methods are on average within 1 order of magnitude of each other, irrespective of the numbers of functional groups, except for predictions by COSMOtherm and SPARC for compounds with more than three functional groups, which have a slightly higher discrepancy. Discrepancies between predictions of gas-aqueous partitioning (K_W/G) are much larger and increase with the number of functional groups in the molecule. In particular, COSMOtherm often predicts much lower K_W/G for highly functionalized compounds than the other methods. While the quantum-chemistry-based COSMOtherm accounts for the influence of intra-molecular interactions on conformation, highly functionalized molecules likely fall outside of the applicability domain of the other techniques, which at least in part rely on empirical data for calibration. Further analysis suggests that atmospheric phase distribution calculations are sensitive to the partitioning coefficient estimation method, in particular to the estimated value of K_W/G. The large uncertainty in K_W/G predictions for highly functionalized organic compounds needs to be resolved to improve the quantitative treatment of SOA formation.

  13. Molecular Structure-Based Large-Scale Prediction of Chemical-Induced Gene Expression Changes.

    PubMed

    Liu, Ruifeng; AbdulHameed, Mohamed Diwan M; Wallqvist, Anders

    2017-09-25

    The quantitative structure-activity relationship (QSAR) approach has been used to model a wide range of chemical-induced biological responses. However, it had not been utilized to model chemical-induced genomewide gene expression changes until very recently, owing to the complexity of training and evaluating a very large number of models. To address this issue, we examined the performance of a variable nearest neighbor (v-NN) method that uses information on near neighbors conforming to the principle that similar structures have similar activities. Using a data set of gene expression signatures of 13 150 compounds derived from cell-based measurements in the NIH Library of Integrated Network-based Cellular Signatures program, we were able to make predictions for 62% of the compounds in a 10-fold cross validation test, with a correlation coefficient of 0.61 between the predicted and experimentally derived signatures-a reproducibility rivaling that of high-throughput gene expression measurements. To evaluate the utility of the predicted gene expression signatures, we compared the predicted and experimentally derived signatures in their ability to identify drugs known to cause specific liver, kidney, and heart injuries. Overall, the predicted and experimentally derived signatures had similar receiver operating characteristics, whose areas under the curve ranged from 0.71 to 0.77 and 0.70 to 0.73, respectively, across the three organ injury models. However, detailed analyses of enrichment curves indicate that signatures predicted from multiple near neighbors outperformed those derived from experiments, suggesting that averaging information from near neighbors may help improve the signal from gene expression measurements. Our results demonstrate that the v-NN method can serve as a practical approach for modeling large-scale, genomewide, chemical-induced, gene expression changes.
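
    The nearest-neighbour logic can be sketched as follows: predict a signature only when structurally similar training compounds exist, and otherwise abstain. The fingerprints, the Tanimoto distance threshold and the Gaussian weighting below are assumptions for illustration, not the exact parameterization reported in the study.

    ```python
    import numpy as np

    # Sketch of a variable nearest-neighbour (v-NN) style prediction: average the
    # known gene-expression signatures of all training compounds whose structural
    # distance falls below a threshold, weighting by similarity; abstain otherwise.
    # Fingerprints, threshold h and the weighting scheme are placeholders.

    rng = np.random.default_rng(0)
    n_train, n_bits, n_genes = 1000, 256, 50
    fp_train = rng.integers(0, 2, size=(n_train, n_bits))       # binary fingerprints
    sig_train = rng.normal(size=(n_train, n_genes))              # expression signatures

    def tanimoto_distance(a, B):
        inter = (a & B).sum(axis=1)
        union = (a | B).sum(axis=1)
        return 1.0 - inter / np.maximum(union, 1)

    def vnn_predict(fp_query, h=0.7):
        d = tanimoto_distance(fp_query, fp_train)
        mask = d <= h
        if not mask.any():
            return None                                          # abstain: no near neighbours
        w = np.exp(-(d[mask] / h) ** 2)
        return (w[:, None] * sig_train[mask]).sum(axis=0) / w.sum()

    query = rng.integers(0, 2, size=n_bits)
    pred = vnn_predict(query)
    print("predicted signature (first 5 genes):",
          None if pred is None else np.round(pred[:5], 2))
    ```

    The abstention branch is the "variable" part of the idea: coverage drops for structurally novel queries, which is consistent with the 62% prediction coverage quoted in the abstract.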

  14. Radiomics biomarkers for accurate tumor progression prediction of oropharyngeal cancer

    NASA Astrophysics Data System (ADS)

    Hadjiiski, Lubomir; Chan, Heang-Ping; Cha, Kenny H.; Srinivasan, Ashok; Wei, Jun; Zhou, Chuan; Prince, Mark; Papagerakis, Silvana

    2017-03-01

    Accurate tumor progression prediction for oropharyngeal cancers is crucial for identifying patients who would best be treated with optimized treatment and therefore minimize the risk of under- or over-treatment. An objective decision support system that can merge the available radiomics, histopathologic and molecular biomarkers in a predictive model based on statistical outcomes of previous cases and machine learning may assist clinicians in making more accurate assessment of oropharyngeal tumor progression. In this study, we evaluated the feasibility of developing individual and combined predictive models based on quantitative image analysis from radiomics, histopathology and molecular biomarkers for oropharyngeal tumor progression prediction. With IRB approval, 31, 84, and 127 patients with head and neck CT (CT-HN), tumor tissue microarrays (TMAs) and molecular biomarker expressions, respectively, were collected. For 8 of the patients all 3 types of biomarkers were available and they were sequestered in a test set. The CT-HN lesions were automatically segmented using our level sets based method. Morphological, texture and molecular based features were extracted from CT-HN and TMA images, and selected features were merged by a neural network. The classification accuracy was quantified using the area under the ROC curve (AUC). Test AUCs of 0.87, 0.74, and 0.71 were obtained with the individual predictive models based on radiomics, histopathologic, and molecular features, respectively. Combining the radiomics and molecular models increased the test AUC to 0.90. Combining all 3 models increased the test AUC further to 0.94. This preliminary study demonstrates that the individual domains of biomarkers are useful and the integrated multi-domain approach is most promising for tumor progression prediction.

  15. A general way for quantitative magnetic measurement by transmitted electrons

    NASA Astrophysics Data System (ADS)

    Song, Dongsheng; Li, Gen; Cai, Jianwang; Zhu, Jing

    2016-01-01

    The EMCD (electron magnetic circular dichroism) technique opens a new door to exploring magnetic properties with transmitted electrons. The recently developed site-specific EMCD technique makes it possible to obtain rich magnetic information from the Fe atoms sited at nonequivalent crystallographic planes in NiFe2O4; however, it places stringent demands on the crystallographic structure of the sample under test. Here, we have further improved and tested the method for quantitative site-specific magnetic measurement so that it is applicable to more complex crystallographic structures, by exploiting dynamical diffraction effects (a general routine for selecting proper diffraction conditions, use of the asymmetry of dynamical diffraction in designing the experimental geometry and the quantitative measurement, etc.), and have taken yttrium iron garnet (Y3Fe5O12, YIG), which has a more complex crystallographic structure, as an example to demonstrate its applicability. As a result, the intrinsic magnetic circular dichroism signals and the site-specific spin and orbital magnetic moments of iron are quantitatively determined. The method will further promote the development of quantitative magnetic measurement with high spatial resolution by transmitted electrons.

  16. Projecting technology change to improve space technology planning and systems management

    NASA Astrophysics Data System (ADS)

    Walk, Steven Robert

    2011-04-01

    Projecting technology performance evolution has been improving over the years. Reliable quantitative forecasting methods have been developed that project the growth, diffusion, and performance of technology in time, including projecting technology substitutions, saturation levels, and performance improvements. These forecasts can be applied at the early stages of space technology planning to better predict available future technology performance, assure the successful selection of technology, and improve technology systems management strategy. Often what is published as a technology forecast is simply scenario planning, usually made by extrapolating current trends into the future, with perhaps some subjective insight added. Typically, the accuracy of such predictions falls rapidly with distance in time. Quantitative technology forecasting (QTF), on the other hand, includes the study of historic data to identify one of or a combination of several recognized universal technology diffusion or substitution patterns. In the same manner that quantitative models of physical phenomena provide excellent predictions of system behavior, so do QTF models provide reliable technological performance trajectories. In practice, a quantitative technology forecast is completed to ascertain with confidence when the projected performance of a technology or system of technologies will occur. Such projections provide reliable time-referenced information when considering cost and performance trade-offs in maintaining, replacing, or migrating a technology, component, or system. This paper introduces various quantitative technology forecasting techniques and illustrates their practical application in space technology and technology systems management.
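
    A core QTF operation, fitting a saturating (Pearl/logistic) growth curve to historical performance data and extrapolating it, can be sketched as below; the "historical" series is synthetic, and the logistic form is only one of several recognized diffusion patterns.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Minimal sketch of quantitative technology forecasting: fit a logistic
    # (Pearl-curve) growth model to historical performance data and extrapolate.
    # The "historical" data here are synthetic placeholders.

    def logistic(t, L, k, t0):
        """Saturating growth: L is the ceiling, k the growth rate, t0 the midpoint."""
        return L / (1.0 + np.exp(-k * (t - t0)))

    rng = np.random.default_rng(0)
    years = np.arange(2000, 2021)
    true = logistic(years, L=100.0, k=0.35, t0=2012.0)
    observed = true * (1 + rng.normal(0, 0.05, years.size))     # noisy observations

    params, _ = curve_fit(logistic, years, observed, p0=[observed.max() * 2, 0.3, 2010])
    L, k, t0 = params
    print("fitted performance ceiling:", round(L, 1))
    print("forecast for 2025:", round(logistic(2025, L, k, t0), 1))
    ```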

  17. Machine learning for predicting the response of breast cancer to neoadjuvant chemotherapy

    PubMed Central

    Mani, Subramani; Chen, Yukun; Li, Xia; Arlinghaus, Lori; Chakravarthy, A Bapsi; Abramson, Vandana; Bhave, Sandeep R; Levy, Mia A; Xu, Hua; Yankeelov, Thomas E

    2013-01-01

    Objective To employ machine learning methods to predict the eventual therapeutic response of breast cancer patients after a single cycle of neoadjuvant chemotherapy (NAC). Materials and methods Quantitative dynamic contrast-enhanced MRI and diffusion-weighted MRI data were acquired on 28 patients before and after one cycle of NAC. A total of 118 semiquantitative and quantitative parameters were derived from these data and combined with 11 clinical variables. We used Bayesian logistic regression in combination with feature selection using a machine learning framework for predictive model building. Results The best predictive models using feature selection obtained an area under the curve of 0.86 and an accuracy of 0.86, with a sensitivity of 0.88 and a specificity of 0.82. Discussion With the numerous options for NAC available, development of a method to predict response early in the course of therapy is needed. Unfortunately, by the time most patients are found not to be responding, their disease may no longer be surgically resectable, and this situation could be avoided by the development of techniques to assess response earlier in the treatment regimen. The method outlined here is one possible solution to this important clinical problem. Conclusions Predictive modeling approaches based on machine learning using readily available clinical and quantitative MRI data show promise in distinguishing breast cancer responders from non-responders after the first cycle of NAC. PMID:23616206
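
    The general recipe, many imaging-derived features, few patients, feature selection, and cross-validated AUC, can be sketched as below. The synthetic data and the L1-penalized logistic model are stand-ins; the study used Bayesian logistic regression with its own feature-selection framework on DCE-MRI and DWI parameters.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    # Sketch: many features, few patients, feature selection via an L1 penalty,
    # and cross-validated AUC as the performance metric. Data are synthetic.

    rng = np.random.default_rng(0)
    n_patients, n_features = 28, 129
    X = rng.normal(size=(n_patients, n_features))
    # responders determined by a handful of "informative" features plus noise
    logit = X[:, :4] @ np.array([1.5, -1.2, 1.0, 0.8]) + rng.normal(0, 1, n_patients)
    y = (logit > 0).astype(int)

    model = make_pipeline(StandardScaler(),
                          LogisticRegression(penalty="l1", solver="liblinear", C=0.5))
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print("cross-validated AUC: %.2f +/- %.2f" % (auc.mean(), auc.std()))
    ```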

  18. Using GPS To Teach More Than Accurate Positions.

    ERIC Educational Resources Information Center

    Johnson, Marie C.; Guth, Peter L.

    2002-01-01

    Undergraduate science majors need practice in critical thinking, quantitative analysis, and judging whether their calculated answers are physically reasonable. Develops exercises using handheld Global Positioning System (GPS) receivers. Reinforces students' abilities to think quantitatively, make realistic "back of the envelope"…

  19. Strategy generalization across orientation tasks: testing a computational cognitive model.

    PubMed

    Gunzelmann, Glenn

    2008-07-08

    Humans use their spatial information processing abilities flexibly to facilitate problem solving and decision making in a variety of tasks. This article explores the question of whether a general strategy can be adapted for performing two different spatial orientation tasks by testing the predictions of a computational cognitive model. Human performance was measured on an orientation task requiring participants to identify the location of a target either on a map (find-on-map) or within an egocentric view of a space (find-in-scene). A general strategy instantiated in a computational cognitive model of the find-on-map task, based on the results from Gunzelmann and Anderson (2006), was adapted to perform both tasks and used to generate performance predictions for a new study. The qualitative fit of the model to the human data supports the view that participants were able to tailor a general strategy to the requirements of particular spatial tasks. The quantitative differences between the predictions of the model and the performance of human participants in the new experiment expose individual differences in sample populations. The model provides a means of accounting for those differences and a framework for understanding how human spatial abilities are applied to naturalistic spatial tasks that involve reasoning with maps. 2008 Cognitive Science Society, Inc.

  20. Controlling the hydration of the skin through the application of occluding barrier creams

    PubMed Central

    Sparr, Emma; Millecamps, Danielle; Isoir, Muriel; Burnier, Véronique; Larsson, Åsa; Cabane, Bernard

    2013-01-01

    The skin is a barrier membrane that separates environments with profoundly different water contents. The barrier properties are assured by the outer layer of the skin, the stratum corneum (SC), which controls the transepidermal water loss. The SC acts as a responding membrane, since its hydration and permeability vary with the boundary condition, which is the activity of water at the outer surface of the skin. We show how this boundary condition can be changed by the application of a barrier cream that makes a film with a high resistance to the transport of water. We present a quantitative model that predicts hydration and water transport in SC that is covered by such a film. We also develop an experimental method for measuring the specific resistance to water transport of films made of occluding barrier creams. Finally, we combine the theoretical model with the measured properties of the barrier creams to predict how a film of cream changes the activity of water at the outer surface of the SC. Using the known variations of SC permeability and hydration with the water activity in its environment (i.e. the relative humidity), we can thus predict how a film of barrier cream changes SC hydration. PMID:23269846

   1. Controlling the hydration of the skin through the application of occluding barrier creams.

    PubMed

    Sparr, Emma; Millecamps, Danielle; Isoir, Muriel; Burnier, Véronique; Larsson, Åsa; Cabane, Bernard

    2013-03-06

    The skin is a barrier membrane that separates environments with profoundly different water contents. The barrier properties are assured by the outer layer of the skin, the stratum corneum (SC), which controls the transepidermal water loss. The SC acts as a responding membrane, since its hydration and permeability vary with the boundary condition, which is the activity of water at the outer surface of the skin. We show how this boundary condition can be changed by the application of a barrier cream that makes a film with a high resistance to the transport of water. We present a quantitative model that predicts hydration and water transport in SC that is covered by such a film. We also develop an experimental method for measuring the specific resistance to water transport of films made of occluding barrier creams. Finally, we combine the theoretical model with the measured properties of the barrier creams to predict how a film of cream changes the activity of water at the outer surface of the SC. Using the known variations of SC permeability and hydration with the water activity in its environment (i.e. the relative humidity), we can thus predict how a film of barrier cream changes SC hydration.
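
    The occlusion effect described in these two records can be illustrated with a toy series-resistance calculation. This is a deliberate simplification: the paper's model lets the SC resistance vary with hydration, which the sketch below ignores, and the resistance values are arbitrary.

```python
# Toy series-resistance picture of occlusion (illustrative only). Water
# activity is ~1 inside the body and equal to the relative humidity outside;
# the cream film sits in series with the stratum corneum (SC).
def boundary_activity(a_inside, a_ambient, r_sc, r_film):
    """Steady-state water activity at the SC/film interface.

    r_sc, r_film: resistances to water transport of the SC and of the
    applied cream film (same arbitrary units).
    """
    flux = (a_inside - a_ambient) / (r_sc + r_film)   # transepidermal water loss
    return a_ambient + flux * r_film                  # activity "seen" by the SC surface

# No film: the SC surface sees the ambient RH of 40%.
print(boundary_activity(1.0, 0.4, r_sc=10.0, r_film=0.0))    # -> 0.40
# Highly occlusive film: the SC surface is pushed toward full hydration.
print(boundary_activity(1.0, 0.4, r_sc=10.0, r_film=100.0))  # -> ~0.95
```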

  2. Analyses of Field Test Data at the Atucha-1 Spent Fuel Pools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitaraman, S.

    A field test was conducted at the Atucha-1 spent nuclear fuel pools to validate a software package for gross defect detection that is used in conjunction with the inspection tool, the Spent Fuel Neutron Counter (SFNC). A set of measurements was taken with the SFNC, and the software predictions were compared with these data and analyzed. The data spanned a wide range of cooling times and burnup levels, leading to count rates from around twenty to several hundred per second. The current calibration in the software, which uses linear fitting, required multiple calibration factors to cover the entire range of count rates recorded. The solution was to use power-regression data fitting to normalize the predicted response and derive a single calibration factor that can be applied to the entire data set. The resulting comparisons between predicted and measured responses were generally good and provided a quantitative method of detecting missing fuel in virtually all situations. Since the current version of the software uses the linear calibration method, it would need to be updated with the new power-regression method to make it more user-friendly for real-time verification and fieldable over the range of responses that will be encountered.
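
    A hypothetical sketch of the power-regression calibration idea, fitting predicted ≈ a·measured^b on a log-log scale so that a single calibration factor spans the full range of count rates, might look as follows; the count-rate values are invented for illustration and are not field-test data.

```python
# Hypothetical power-regression calibration: fit predicted ~ a * measured**b
# via a log-log linear fit, then use it to normalize the software response.
import numpy as np

measured  = np.array([18.0, 55.0, 140.0, 390.0, 820.0])   # counts/s (illustrative)
predicted = np.array([25.0, 70.0, 160.0, 430.0, 900.0])   # software response (illustrative)

b, log_a = np.polyfit(np.log(measured), np.log(predicted), 1)
a = np.exp(log_a)
normalized = predicted / (a * measured**b)   # ~1.0 when no fuel is missing
print(a, b, normalized)
```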

  3. Foraging theory predicts predator-prey energy fluxes.

    PubMed

    Brose, U; Ehnes, R B; Rall, B C; Vucic-Pestic, O; Berlow, E L; Scheu, S

    2008-09-01

    1. In natural communities, populations are linked by feeding interactions that make up complex food webs. The stability of these complex networks is critically dependent on the distribution of energy fluxes across these feeding links. 2. In laboratory experiments with predatory beetles and spiders, we studied the allometric scaling (body-mass dependence) of metabolism and per capita consumption at the level of predator individuals and per link energy fluxes at the level of feeding links. 3. Despite clear power-law scaling of the metabolic and per capita consumption rates with predator body mass, the per link predation rates on individual prey followed hump-shaped relationships with the predator-prey body mass ratios. These results contrast with the current metabolic paradigm, and find better support in foraging theory. 4. This suggests that per link energy fluxes from prey populations to predator individuals peak at intermediate body mass ratios, and total energy fluxes from prey to predator populations decrease monotonically with predator and prey mass. Surprisingly, contrary to predictions of metabolic models, this suggests that for any prey species, the per link and total energy fluxes to its largest predators are smaller than those to predators of intermediate body size. 5. An integration of metabolic and foraging theory may enable a quantitative and predictive understanding of energy flux distributions in natural food webs.

  4. Theoretical Study of pKa Values for Trivalent Rare-Earth Metal Cations in Aqueous Solution.

    PubMed

    Yu, Donghai; Du, Ruobing; Xiao, Ji-Chang; Xu, Shengming; Rong, Chunying; Liu, Shubin

    2018-01-18

    Molecular acidity of trivalent rare-earth metal cations in aqueous solution is an important factor in the efficiency of their extraction and separation processes. In this work, the aqueous acidity of these metal ions has been quantitatively investigated using several theoretical approaches. Our computational results, expressed in terms of pKa values, agree well with the tetrad effect of trivalent rare-earth ions extensively reported in the extraction and separation of these elements. Strong linear relationships have been observed between the acidity and quantum electronic descriptors such as the molecular electrostatic potential on the acidic nucleus and the sum of the valence natural atomic orbital energies of the dissociating proton. Making use of the predicted pKa values, we have also predicted the major ionic forms of these species in aqueous environments at different pH values, which can be employed to rationalize the differing behavior of rare-earth metal cations during the extraction process. Our present results should provide needed insights not only for the qualitative understanding of the extraction and separation of yttrium and the lanthanide elements but also for the prediction of novel and more efficient rare-earth metal extractants in the future.
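
    As a worked illustration of going from a predicted pKa to the dominant species at a given pH (a standard Henderson-Hasselbalch calculation; the pKa below is an arbitrary placeholder, not one of the paper's computed values):

```python
# Worked example: fraction of the deprotonated (hydrolysed) form at a given pH
# from a pKa, via the Henderson-Hasselbalch relation. Values are illustrative.
def fraction_deprotonated(pka, ph):
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

pka = 8.0                                    # hypothetical aqua-ion acidity
for ph in (4.0, 8.0, 10.0):
    f = fraction_deprotonated(pka, ph)
    dominant = "hydrolysed form" if f > 0.5 else "fully protonated aqua ion"
    print(ph, round(f, 3), dominant)
```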

  5. Novel Application of Quantitative Single-Photon Emission Computed Tomography/Computed Tomography to Predict Early Response to Methimazole in Graves' Disease

    PubMed Central

    Kim, Hyun Joo; Bang, Ji-In; Kim, Ji-Young; Moon, Jae Hoon; So, Young

    2017-01-01

    Objective Since Graves' disease (GD) can be resistant to antithyroid drugs (ATDs), an accurate quantitative measurement of thyroid function is required for the prediction of early responses to ATD treatment. Quantitative parameters derived from a novel technology, single-photon emission computed tomography/computed tomography (SPECT/CT), were investigated for the prediction of achievement of euthyroidism after methimazole (MMI) treatment in GD. Materials and Methods A total of 36 GD patients (10 males, 26 females; mean age, 45.3 ± 13.8 years) were enrolled in this study from April 2015 to January 2016. They underwent quantitative thyroid SPECT/CT 20 minutes post-injection of 99mTc-pertechnetate (5 mCi). Associations between the time to biochemical euthyroidism after MMI treatment and %uptake, standardized uptake value (SUV), functional thyroid mass (SUVmean × thyroid volume) from the SPECT/CT, and clinical/biochemical variables were investigated. Results GD patients had a significantly greater %uptake (6.9 ± 6.4%) than historical euthyroid control patients (n = 20, 0.8 ± 0.5%, p < 0.001) scanned with the same quantitative SPECT/CT protocol. Euthyroidism was achieved in 14 patients at 156 ± 62 days after the start of MMI treatment, but 22 patients had still not achieved euthyroidism by the last follow-up time point (208 ± 80 days). In univariate Cox regression analysis, the initial MMI dose (p = 0.014), %uptake (p = 0.015), and functional thyroid mass (p = 0.016) were significant predictors of euthyroidism in response to MMI treatment. However, only %uptake remained significant in a multivariate Cox regression analysis (p = 0.034). A %uptake cutoff of 5.0% dichotomized the faster-responding from the slower-responding GD patients (p = 0.006). Conclusion The novel parameter of thyroid %uptake from quantitative SPECT/CT is a predictive indicator of an early response to MMI in GD patients. PMID:28458607
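
    A sketch of the survival-style analysis described above, with time to euthyroidism modelled by Cox regression and non-responders censored at last follow-up, could look like the following. It assumes the third-party lifelines package, and the patient data are simulated placeholders rather than the study's measurements.

```python
# Illustrative only: Cox regression of time-to-euthyroidism on SPECT/CT
# %uptake, with patients who have not responded by last follow-up censored.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 40
uptake = rng.uniform(1.0, 12.0, n)               # thyroid %uptake per patient (simulated)
days = rng.exponential(150.0 * uptake / 5.0, n)  # higher uptake -> slower simulated response
followup = 250.0
event = (days <= followup).astype(int)           # 1 = euthyroidism reached within follow-up

df = pd.DataFrame({"days": np.minimum(days, followup),
                   "euthyroid": event,
                   "uptake_pct": uptake})

cph = CoxPHFitter().fit(df, duration_col="days", event_col="euthyroid")
print(cph.summary[["coef", "p"]])                # negative coef: higher uptake, lower hazard
```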

  6. Using behavioral statistical physics to understand supply and demand

    NASA Astrophysics Data System (ADS)

    Farmer, Doyne

    2007-03-01

    We construct a quantitative theory for a proxy for supply and demand curves using methods that look and feel a lot like physics. Neoclassical economics postulates that supply and demand curves can be explained as the result of rational agents selfishly maximizing their utility, but this approach has had very little empirical success. We take quite a different approach, building supply and demand curves out of impulsive responses to not-quite-random trading fluctuations. For reasons of empirical measurability, as a good proxy for changes in supply and demand we study the aggregate price impact function R(V), giving the average logarithmic price change R as a function of the signed trading volume V. (If a trade v_i is initiated by a buyer it has a plus sign, and vice versa for sellers; the signed trading volume for a series of N successive trades is V_N(t) = Σ_{i=t}^{t+N} v_i.) We develop a "zero-intelligence" null hypothesis in which each trade v_i gives an impulsive kick f(v_i) to the price, so that the average return is R_N(t) = Σ_{i=t}^{t+N} f(v_i). Under the assumption that the v_i are IID, R(V_N) has a characteristic concave shape, becoming linear in the limit N → ∞. Under some circumstances this is universal for large N, in the sense that it is independent of the functional form of f. While this null hypothesis gives useful qualitative intuition, to make it quantitatively correct one must add two additional elements: (1) the signs of the v_i follow a long-memory process, and (2) the return R is efficient, in the sense that it is not possible to make profits with a linear prediction of the signs of the v_i. Using data from the London Stock Exchange we demonstrate that this theory works well, predicting both the magnitude and shape of R(V_N). We show that the fluctuations in R are very large and for some purposes more important than the average behavior. A computer model for the fluctuations suggests the existence of an equation of state relating the diffusion rate of prices to the flow of trading orders.
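
    The zero-intelligence null hypothesis is straightforward to simulate. The toy version below uses an arbitrary volume distribution and kick function (my choices, not the author's) to show how a concave average impact curve R(V_N) emerges from IID signed volumes.

```python
# Toy simulation of the zero-intelligence null: IID signed trade volumes v_i,
# each kicking the price by f(v_i); the average impact R(V_N) is estimated by
# binning windows of N trades by their signed volume. Parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
N = 20                                     # trades aggregated per window
n_windows = 50_000
v = rng.choice([-1.0, 1.0], size=(n_windows, N)) * rng.pareto(3.0, size=(n_windows, N))

f = np.sign(v) * np.sqrt(np.abs(v))        # concave impulse kick f(v_i)
V = v.sum(axis=1)                          # signed volume V_N over each window
R = f.sum(axis=1)                          # aggregate return under the null

edges = np.quantile(V, np.linspace(0.0, 1.0, 21))
idx = np.digitize(V, edges[1:-1])          # 20 signed-volume bins
curve = [(V[idx == k].mean(), R[idx == k].mean()) for k in range(20)]
print(curve[:2], curve[-2:])               # (V, R) pairs tracing a concave impact curve
```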

  7. Bayesian model of categorical effects in L1 and L2 speech perception

    NASA Astrophysics Data System (ADS)

    Kronrod, Yakov

    In this dissertation I present a model that captures categorical effects in both first language (L1) and second language (L2) speech perception. In L1 perception, categorical effects range from extremely strong for consonants to nearly continuous perception of vowels. I treat the problem of speech perception as a statistical inference problem, and by quantifying categoricity I obtain a unified model of both strong and weak categorical effects. In this optimal inference mechanism, the listener uses their knowledge of categories and the acoustics of the signal to infer the intended productions of the speaker. The model splits speech variability into meaningful category variance and perceptual noise variance. The ratio of these two variances, which I call Tau, directly correlates with the degree of categorical effects for a given phoneme or continuum. By fitting the model to behavioral data from different phonemes, I show how a single parametric quantitative variation can lead to the different degrees of categorical effects seen in perception experiments with different phonemes. In L2 perception, L1 categories have been shown to exert an effect on how L2 sounds are identified and how well the listener is able to discriminate them. Various models have been developed to relate the state of L1 categories with both the initial and eventual ability to process the L2. These models have largely lacked a formalized metric to measure perceptual distance, a means of making a priori predictions of behavior for a new contrast, and a way of describing non-discrete gradient effects. In the second part of my dissertation, I apply the same computational model that I used to unify L1 categorical effects to the examination of L2 perception. I show that we can use the model to make the same type of predictions as other second language acquisition (SLA) models, but also provide a quantitative framework while formalizing all measures of similarity and bias. Further, I show how, by using this model to consider L2 learners at different stages of development, we can track specific parameters of categories as they change over time, giving us a look into the actual process of L2 category development.
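
    The variance-ratio idea can be made concrete with a short posterior-mean calculation; the parameterization below is my reading of the general Gaussian-inference setup, not code from the dissertation.

```python
# Worked illustration of the variance-ratio idea: a listener hears a noisy
# signal S and infers the intended production T given a Gaussian category.
# The posterior mean shrinks S toward the category mean, and the amount of
# shrinkage is set by the ratio of category variance to noise variance
# (the "Tau" of the abstract; exact parameterization is an assumption here).
def inferred_production(signal, cat_mean, cat_var, noise_var):
    tau = cat_var / noise_var
    weight = tau / (tau + 1.0)            # = cat_var / (cat_var + noise_var)
    return weight * signal + (1 - weight) * cat_mean

# Large tau (vowel-like): percept stays close to the signal -> weak categorical effect.
print(inferred_production(2.0, cat_mean=0.0, cat_var=9.0, noise_var=1.0))   # 1.8
# Small tau (consonant-like): percept pulled to the category mean -> strong effect.
print(inferred_production(2.0, cat_mean=0.0, cat_var=0.25, noise_var=1.0))  # 0.4
```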

  8. Genomic Prediction for Quantitative Traits Is Improved by Mapping Variants to Gene Ontology Categories in Drosophila melanogaster

    PubMed Central

    Edwards, Stefan M.; Sørensen, Izel F.; Sarup, Pernille; Mackay, Trudy F. C.; Sørensen, Peter

    2016-01-01

    Predicting individual quantitative trait phenotypes from high-resolution genomic polymorphism data is important for personalized medicine in humans, plant and animal breeding, and adaptive evolution. However, this is difficult for populations of unrelated individuals when the number of causal variants is low relative to the total number of polymorphisms and causal variants individually have small effects on the traits. We hypothesized that mapping molecular polymorphisms to genomic features such as genes and their gene ontology categories could increase the accuracy of genomic prediction models. We developed a genomic feature best linear unbiased prediction (GFBLUP) model that implements this strategy and applied it to three quantitative traits (startle response, starvation resistance, and chill coma recovery) in the unrelated, sequenced inbred lines of the Drosophila melanogaster Genetic Reference Panel. Our results indicate that subsetting markers based on genomic features increases the predictive ability relative to the standard genomic best linear unbiased prediction (GBLUP) model. Both models use all markers, but GFBLUP allows differential weighting of the individual genetic marker relationships, whereas GBLUP weighs the genetic marker relationships equally. Simulation studies show that it is possible to further increase the accuracy of genomic prediction for complex traits using this model, provided the genomic features are enriched for causal variants. Our GFBLUP model using prior information on genomic features enriched for causal variants can increase the accuracy of genomic predictions in populations of unrelated individuals and provides a formal statistical framework for leveraging and evaluating information across multiple experimental studies to provide novel insights into the genetic architecture of complex traits. PMID:27235308
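
    A bare-bones GBLUP-style prediction with a standard genomic relationship matrix and equal marker weights can be sketched as follows; GFBLUP would additionally partition markers by genomic feature and give each resulting relationship matrix its own variance component. Genotypes, phenotypes, and the variance ratio are random placeholders.

```python
# Minimal GBLUP-style sketch (not the authors' GFBLUP code): build a genomic
# relationship matrix from centred marker genotypes and predict phenotypes of
# held-out lines with the standard BLUP / kernel-ridge formula.
import numpy as np

rng = np.random.default_rng(2)
n_lines, n_markers = 200, 5000
M = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)  # 0/1/2 genotype calls
Z = M - M.mean(axis=0)                                           # centre each marker
G = Z @ Z.T / n_markers                                          # genomic relationship matrix

y = rng.normal(size=n_lines)                 # placeholder phenotypes
train, test = np.arange(150), np.arange(150, 200)
lam = 1.0                                    # ratio of residual to genetic variance (assumed)

# BLUP of genetic values for the test lines:
# g_test = G[test, train] @ (G[train, train] + lam * I)^-1 @ y[train]
alpha = np.linalg.solve(G[np.ix_(train, train)] + lam * np.eye(train.size), y[train])
y_pred = G[np.ix_(test, train)] @ alpha
print(np.corrcoef(y_pred, y[test])[0, 1])    # predictive ability (near zero for random y)
```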

  9. Common-sense chemistry: The use of assumptions and heuristics in problem solving

    NASA Astrophysics Data System (ADS)

    Maeyer, Jenine Rachel

    Students experience difficulty learning and understanding chemistry at higher levels, often because of cognitive biases stemming from common sense reasoning constraints. These constraints can be divided into two categories: assumptions (beliefs held about the world around us) and heuristics (the reasoning strategies or rules used to build predictions and make decisions). A better understanding and characterization of these constraints are of central importance in the development of curriculum and teaching strategies that better support student learning in science. It was the overall goal of this thesis to investigate student reasoning in chemistry, specifically to better understand and characterize the assumptions and heuristics used by undergraduate chemistry students. To achieve this, two mixed-methods studies were conducted, each with quantitative data collected using a questionnaire and qualitative data gathered through semi-structured interviews. The first project investigated the reasoning heuristics used when ranking chemical substances based on the relative value of a physical or chemical property, while the second study characterized the assumptions and heuristics used when making predictions about the relative likelihood of different types of chemical processes. Our results revealed that heuristics for cue selection and decision-making played a significant role in the construction of answers during the interviews. Many study participants relied frequently on one or more of the following heuristics to make their decisions: recognition, representativeness, one-reason decision-making, and arbitrary trend. These heuristics allowed students to generate answers in the absence of requisite knowledge, but often led students astray. When characterizing assumptions, our results indicate that students relied on intuitive, spurious, and valid assumptions about the nature of chemical substances and processes in building their responses. In particular, many interviewees seemed to view chemical reactions as macroscopic reassembling processes where favorability was related to the perceived ease with which reactants broke apart or products formed. Students also expressed spurious chemical assumptions based on the misinterpretation and overgeneralization of periodicity and electronegativity. Our findings suggest the need to create more opportunities for college chemistry students to monitor their thinking, develop and apply analytical ways of reasoning, and evaluate the effectiveness of shortcut reasoning procedures in different contexts.

  10. Anticipated health behaviour changes and perceived control in response to disclosure of genetic risk of breast and ovarian cancer: a quantitative survey study among women in the UK

    PubMed Central

    Meisel, Susanne F; Side, Lucy; Gessler, Sue; Hann, Katie E J; Wardle, Jane; Lanceley, Anne

    2017-01-01

    Background Genetic risk assessment for breast cancer and ovarian cancer (BCOC) is expected to make major inroads into mainstream clinical practice. It is important to evaluate the potential impact on women ahead of its implementation in order to maximise health benefits, as predictive genetic testing without adequate support could lead to adverse psychological and behavioural responses to risk disclosure. Objective To examine anticipated health behaviour changes and perceived control to disclosure of genetic risk for BCOC and establish demographic and person-specific correlates of adverse anticipated responses in a population-based sample of women. Design Cross-sectional quantitative survey study carried out by the UK Office for National Statistics in January and March 2014. Setting Face-to-face computer-assisted interviews conducted by trained researchers in participants’ homes. Participants 837 women randomly chosen from households across the UK identified from the Royal Mail’s Postcode Address File. Outcome measures Anticipated health behaviour change and perceived control to disclosure of BCOC risk. Results In response to a genetic test result, most women (72%) indicated ‘I would try harder to have a healthy lifestyle’, and over half (55%) felt ‘it would give me more control over my life’. These associations were independent of demographic factors or perceived risk of BCOC in Bonferroni-corrected multivariate analyses. However, a minority of women (14%) felt ‘it isn’t worth making lifestyle changes’ and that ‘I would feel less free to make choices in my life’ (16%) in response to BCOC risk disclosure. The former belief was more likely to be held by women who were educated below university degree level (P<0.001) after adjusting for other demographic and person-specific correlates. Conclusion These findings indicate that women in the UK largely anticipate that they would engage in positive health behaviour changes in response to BCOC risk disclosure. PMID:29275340

  11. How Children Use Examples to Make Conditional Predictions

    ERIC Educational Resources Information Center

    Kalish, Charles W.

    2010-01-01

    Two experiments explored children's and adults' use of examples to make conditional predictions. In Experiment 1 adults (N = 20) but not 4-year-olds (N = 21) or 8-year-olds (N = 18) distinguished predictable from unpredictable features when features were partially correlated (e.g., necessary but not sufficient). Children did make reliable…

  12. Making Social Work Count: A Curriculum Innovation to Teach Quantitative Research Methods and Statistical Analysis to Undergraduate Social Work Students in the United Kingdom

    ERIC Educational Resources Information Center

    Teater, Barbra; Roy, Jessica; Carpenter, John; Forrester, Donald; Devaney, John; Scourfield, Jonathan

    2017-01-01

    Students in the United Kingdom (UK) are found to lack knowledge and skills in quantitative research methods. To address this gap, a quantitative research method and statistical analysis curriculum comprising 10 individual lessons was developed, piloted, and evaluated at two universities. The evaluation found that BSW students' (N = 81)…

  13. Toward a systematic exploration of nano-bio interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Xue; Liu, Fang; Liu, Yin

    Many studies of nanomaterials make non-systematic alterations of nanoparticle physicochemical properties. Given the immense size of the property space for nanomaterials, such approaches are not very useful in elucidating fundamental relationships between inherent physicochemical properties of these materials and their interactions with, and effects on, biological systems. Data driven artificial intelligence methods such as machine learning algorithms have proven highly effective in generating models with good predictivity and some degree of interpretability. They can provide a viable method of reducing or eliminating animal testing. However, careful experimental design with the modelling of the results in mind is a proven and efficient way of exploring large materials spaces. This approach, coupled with high speed automated experimental synthesis and characterization technologies now appearing, is the fastest route to developing models that regulatory bodies may find useful. We advocate greatly increased focus on systematic modification of physicochemical properties of nanoparticles combined with comprehensive biological evaluation and computational analysis. This is essential to obtain better mechanistic understanding of nano-bio interactions, and to derive quantitatively predictive and robust models for the properties of nanomaterials that have useful domains of applicability. - Highlights: • Nanomaterials studies make non-systematic alterations to nanoparticle properties. • Vast nanomaterials property spaces require systematic studies of nano-bio interactions. • Experimental design and modelling are efficient ways of exploring materials spaces. • We advocate systematic modification and computational analysis to probe nano-bio interactions.

  14. Predictive analytics and child protection: constraints and opportunities.

    PubMed

    Russell, Jesse

    2015-08-01

    This paper considers how predictive analytics might inform, assist, and improve decision making in child protection. Predictive analytics represents recent increases in data quantity and data diversity, along with advances in computing technology. While the use of data and statistical modeling is not new to child protection decision making, its use in child protection is experiencing growth, and efforts to leverage predictive analytics for better decision-making in child protection are increasing. Past experiences, constraints and opportunities are reviewed. For predictive analytics to make the most impact on child protection practice and outcomes, it must embrace established criteria of validity, equity, reliability, and usefulness. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Predictive Model of Systemic Toxicity (SOT)

    EPA Science Inventory

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  16. Quantitative PET Imaging with Novel HER3 Targeted Peptides Selected by Phage Display to Predict Androgen Independent Prostate Cancer Progression

    DTIC Science & Technology

    2017-08-01

    Award number: W81XWH-16-1-0447. Title: Quantitative PET Imaging with Novel HER3-Targeted Peptides Selected by Phage Display to Predict Androgen-Independent Prostate Cancer Progression. Report date: August 2017. Type of report: Annual. Prepared for: U.S. Army Medical Research and Materiel Command, Fort Detrick, Maryland. Introduction: The subject of this research is the design and testing of a PET imaging agent for the detection and...

  17. Quantitative somatosensory testing of the penis: optimizing the clinical neurological examination.

    PubMed

    Bleustein, Clifford B; Eckholdt, Haftan; Arezzo, Joseph C; Melman, Arnold

    2003-06-01

    Quantitative somatosensory testing, including vibration, pressure, spatial perception and thermal thresholds of the penis, has demonstrated neuropathy in patients with a history of erectile dysfunction of all etiologies. We evaluated which measurement of neurological function of the penis was best at predicting erectile dysfunction and examined the impact of location on the penis for quantitative somatosensory testing measurements. A total of 107 patients were evaluated. All patients were required to complete the erectile function domain of the International Index of Erectile Function (IIEF) questionnaire, of whom 24 had no complaints of erectile dysfunction and scored within the "normal" range on the IIEF. Patients were subsequently tested on ventral middle penile shaft, proximal dorsal midline penile shaft and glans penis (with foreskin retracted) for vibration, pressure, spatial perception, and warm and cold thermal thresholds. Mixed models repeated measures analysis of variance controlling for age, diabetes and hypertension revealed that method of measurement (quantitative somatosensory testing) was predictive of IIEF score (F = 209, df = 4,1315, p < 0.001), while site of measurement on the penis was not. To determine the best method of measurement, we used hierarchical regression, which revealed that warm temperature was the best predictor of erectile dysfunction with pseudo R² = 0.19, p < 0.0007. There was no significant improvement in predicting erectile dysfunction when another test was added. Using 37°C and greater as the warm thermal threshold yielded a sensitivity of 88.5%, specificity 70.0% and positive predictive value 85.5%. Quantitative somatosensory testing using warm thermal threshold measurements taken at the glans penis can be used alone to assess the neurological status of the penis. Warm thermal thresholds alone offer a quick, noninvasive accurate method of evaluating penile neuropathy in an office setting.

  18. Trainee and Instructor Task Quantification: Development of Quantitative Indices and a Predictive Methodology.

    ERIC Educational Resources Information Center

    Whaton, George R.; And Others

    As the first step in a program to develop quantitative techniques for prescribing the design and use of training systems, the present study attempted: to compile an initial set of quantitative indices, to determine whether these indices could be used to describe a sample of trainee tasks and differentiate among them, to develop a predictive…

  19. Label-free cell-cycle analysis by high-throughput quantitative phase time-stretch imaging flow cytometry

    NASA Astrophysics Data System (ADS)

    Mok, Aaron T. Y.; Lee, Kelvin C. M.; Wong, Kenneth K. Y.; Tsia, Kevin K.

    2018-02-01

    Biophysical properties of cells could complement and correlate with biochemical markers to characterize a multitude of cellular states. Changes in cell size, dry mass and subcellular morphology, for instance, are relevant to cell-cycle progression, which is prevalently evaluated by DNA-targeted fluorescence measurements. Quantitative-phase microscopy (QPM) is among the effective biophysical phenotyping tools that can quantify cell sizes and sub-cellular dry mass density distribution of single cells at high spatial resolution. However, the limited camera frame rate, and thus imaging throughput, makes QPM incompatible with high-throughput flow cytometry - a gold standard in multiparametric cell-based assays. Here we present a high-throughput approach for label-free analysis of the cell cycle based on quantitative-phase time-stretch imaging flow cytometry at a throughput of >10,000 cells/s. Our time-stretch QPM system enables sub-cellular resolution even at high speed, allowing us to extract a multitude (at least 24) of single-cell biophysical phenotypes (from both amplitude and phase images). Those phenotypes can be combined to track cell-cycle progression based on a t-distributed stochastic neighbor embedding (t-SNE) algorithm. Using multivariate analysis of variance (MANOVA) discriminant analysis, cell-cycle phases can also be predicted label-free with high accuracy: >90% in the G1 and G2 phases and >80% in the S phase. We anticipate that high-throughput label-free cell-cycle characterization could open new approaches for large-scale single-cell analysis, bringing new mechanistic insights into complex biological processes including disease pathogenesis.
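
    An illustrative sketch of the downstream analysis described above: a matrix of single-cell biophysical phenotypes embedded with t-SNE for visualization and a discriminant classifier used to predict cell-cycle phase. The feature values, labels, and classifier choice (scikit-learn's LDA rather than the authors' MANOVA-based analysis) are assumptions for illustration.

```python
# Sketch of the phenotype-analysis pipeline: t-SNE embedding of single-cell
# biophysical features plus a discriminant classifier for cell-cycle phase.
# Feature values and labels are random placeholders, not the authors' data.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 24))                   # 24 biophysical phenotypes per cell (placeholder)
phase = rng.choice(["G1", "S", "G2"], size=1000)  # placeholder cell-cycle labels

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
acc = cross_val_score(LinearDiscriminantAnalysis(), X, phase, cv=5)
print(embedding.shape, acc.mean())                # ~chance accuracy on random placeholders
```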

  20. A Compressed Sensing-Based Wearable Sensor Network for Quantitative Assessment of Stroke Patients

    PubMed Central

    Yu, Lei; Xiong, Daxi; Guo, Liquan; Wang, Jiping

    2016-01-01

    Clinical rehabilitation assessment is an important part of the therapy process because it is the premise for prescribing suitable rehabilitation interventions. However, the commonly used assessment scales have two drawbacks: (1) they are susceptible to subjective factors; and (2) they only have several rating levels and are influenced by a ceiling effect, making it impossible to detect further improvement in movement precisely. Meanwhile, energy constraints are a primary design consideration in wearable sensor network systems since they are often battery-operated. Traditionally, in wearable sensor network systems that follow the Shannon/Nyquist sampling theorem, large amounts of data must be sampled and transmitted. This paper proposes a novel wearable sensor network system to monitor and quantitatively assess upper limb motor function, based on compressed sensing technology. With the sparse representation model, less data is transmitted to the computer than with traditional systems. The experimental results show that the accelerometer signals of Bobath handshake and shoulder touch exercises can be compressed, and the length of the compressed signal is less than 1/3 of the raw signal length. More importantly, the reconstruction errors have no influence on the predictive accuracy of the Brunnstrom stage classification model. The results also indicate that the proposed system not only reduces the amount of data during the sampling and transmission processes, but also that the reconstructed accelerometer signals can be used for quantitative assessment without any loss of useful information. PMID:26861337
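
    The compressed-sensing idea, transmitting far fewer random projections than raw samples and reconstructing a sparse representation at the receiver, can be illustrated with a toy round trip; the DCT basis, sparsity level, and sizes below are invented and unrelated to the paper's actual encoder.

```python
# Toy compressed-sensing round trip: a segment that is sparse in the DCT
# basis is measured with a random matrix (about 1/3 of the raw length) and
# recovered with orthogonal matching pursuit. Purely illustrative.
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(4)
n = 256
coeffs = np.zeros(n)
coeffs[[3, 7, 20]] = [1.0, -0.5, 0.3]          # 3-sparse spectrum (placeholder segment)
Psi = idct(np.eye(n), axis=0, norm="ortho")    # inverse-DCT basis, signal = Psi @ coeffs
signal = Psi @ coeffs

m = n // 3                                     # compressed length < 1/3 of the raw length
Phi = rng.normal(size=(m, n)) / np.sqrt(m)     # random sensing matrix
y = Phi @ signal                               # what the wearable node would transmit

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3, fit_intercept=False)
omp.fit(Phi @ Psi, y)
recovered = Psi @ omp.coef_
print(np.max(np.abs(recovered - signal)))      # near-zero reconstruction error
```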
