Sample records for quantitative theoretical prediction

  1. Quantitative force measurements using frequency modulation atomic force microscopy—theoretical foundations

    NASA Astrophysics Data System (ADS)

    Sader, John E.; Uchihashi, Takayuki; Higgins, Michael J.; Farrell, Alan; Nakayama, Yoshikazu; Jarvis, Suzanne P.

    2005-03-01

    Use of the atomic force microscope (AFM) in quantitative force measurements inherently requires a theoretical framework enabling conversion of the observed deflection properties of the cantilever to an interaction force. In this paper, the theoretical foundations of using frequency modulation atomic force microscopy (FM-AFM) in quantitative force measurements are examined and rigorously elucidated, with consideration being given to both 'conservative' and 'dissipative' interactions. This includes a detailed discussion of the underlying assumptions involved in such quantitative force measurements, the presentation of globally valid explicit formulae for evaluation of so-called 'conservative' and 'dissipative' forces, discussion of the origin of these forces, and analysis of the applicability of FM-AFM to quantitative force measurements in liquid.
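
    A minimal numerical sketch of the kind of conversion the paper formalises, restricted to the small-oscillation-amplitude limit (not the paper's general, amplitude-dependent formulae) and using a hypothetical cantilever and a toy force law: in this limit the normalised frequency shift is Delta f / f0 = -(1/2k) dF/dz, so the conservative force follows by integrating the frequency-shift curve outward from the tip-sample distance of interest.

      import numpy as np

      # Small-amplitude limit of FM-AFM (illustrative sketch only):
      # Delta f / f0 = -(1/(2k)) dF/dz  =>  F(z) = 2k * int_z^inf (Delta f / f0) dz'
      k = 40.0      # cantilever stiffness [N/m] (hypothetical)
      f0 = 300e3    # resonance frequency [Hz] (hypothetical)

      z = np.linspace(0.3e-9, 10e-9, 2000)               # tip-sample distance [m]
      F_true = -1e-28 / z**2                              # toy attractive force law
      df = -(f0 / (2.0 * k)) * np.gradient(F_true, z)     # synthetic frequency shift [Hz]

      omega = df / f0
      F_rec = 2.0 * k * np.array([np.trapz(omega[i:], z[i:]) for i in range(len(z))])
      # Residual is ~|F(z_max)| because the integral is truncated at z_max.
      print(np.max(np.abs(F_rec - F_true)))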

  2. Quantitative self-assembly prediction yields targeted nanomedicines

    NASA Astrophysics Data System (ADS)

    Shamay, Yosi; Shah, Janki; Işık, Mehtap; Mizrachi, Aviram; Leibold, Josef; Tschaharganeh, Darjus F.; Roxbury, Daniel; Budhathoki-Uprety, Januka; Nawaly, Karla; Sugarman, James L.; Baut, Emily; Neiman, Michelle R.; Dacek, Megan; Ganesh, Kripa S.; Johnson, Darren C.; Sridharan, Ramya; Chu, Karen L.; Rajasekhar, Vinagolu K.; Lowe, Scott W.; Chodera, John D.; Heller, Daniel A.

    2018-02-01

    Development of targeted nanoparticle drug carriers often requires complex synthetic schemes involving both supramolecular self-assembly and chemical modification. These processes are generally difficult to predict, execute, and control. We describe herein a targeted drug delivery system that is accurately and quantitatively predicted to self-assemble into nanoparticles based on the molecular structures of precursor molecules, which are the drugs themselves. The drugs assemble with the aid of sulfated indocyanines into particles with ultrahigh drug loadings of up to 90%. We devised quantitative structure-nanoparticle assembly prediction (QSNAP) models to identify and validate electrotopological molecular descriptors as highly predictive indicators of nano-assembly and nanoparticle size. The resulting nanoparticles selectively targeted kinase inhibitors to caveolin-1-expressing human colon cancer and autochthonous liver cancer models to yield striking therapeutic effects while avoiding pERK inhibition in healthy skin. This finding enables the computational design of nanomedicines based on quantitative models for drug payload selection.

  3. Searching for an Accurate Marker-Based Prediction of an Individual Quantitative Trait in Molecular Plant Breeding

    PubMed Central

    Fu, Yong-Bi; Yang, Mo-Hua; Zeng, Fangqin; Biligetu, Bill

    2017-01-01

    Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but the trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed theoretical reasoning on the effectiveness of these markers in trait-specific prediction and verified the reasoning through computer simulation. In the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. A continued search for better alternatives is encouraged to enhance marker-based predictions for an individual quantitative trait in molecular plant breeding. PMID:28729875
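
    The kind of marker-based prediction and simulation discussed here can be sketched with a generic ridge-regression (rrBLUP-like) genomic prediction on simulated SNP genotypes. The comparison between all markers and a trait-associated subset is purely illustrative of why trait-specific (FAST-like) markers can raise accuracy; all sizes, effects, and penalties below are hypothetical, and this is not the authors' procedure.

      import numpy as np
      from numpy.linalg import solve

      rng = np.random.default_rng(0)
      n, p, n_qtl = 200, 1000, 20                              # individuals, SNPs, causal loci
      X = rng.binomial(2, 0.3, size=(n, p)).astype(float)      # genotypes coded 0/1/2
      beta = np.zeros(p)
      qtl = rng.choice(p, n_qtl, replace=False)
      beta[qtl] = rng.normal(0.0, 1.0, n_qtl)
      y = X @ beta + rng.normal(0.0, 2.0, n)                   # phenotype = genetic value + noise

      def ridge_predict(X_tr, y_tr, X_te, lam=10.0):
          # Ridge marker effects: solve (X'X + lam I) b = X'y, then predict X_te @ b.
          b = solve(X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1]), X_tr.T @ y_tr)
          return X_te @ b

      tr, te = slice(0, 150), slice(150, 200)
      acc_all = np.corrcoef(ridge_predict(X[tr], y[tr], X[te]), y[te])[0, 1]
      acc_sub = np.corrcoef(ridge_predict(X[tr][:, qtl], y[tr], X[te][:, qtl]), y[te])[0, 1]
      print(acc_all, acc_sub)   # the trait-associated subset typically predicts more accurately here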

  4. Quantitative prediction of drug side effects based on drug-related features.

    PubMed

    Niu, Yanqing; Zhang, Wen

    2017-09-01

    Unexpected side effects of drugs are of great concern in drug development, and the identification of side effects is an important task. Recently, machine learning methods have been proposed to predict the presence or absence of side effects of interest for drugs, but it is difficult to make accurate predictions for all of them. In this paper, we transform the side effect profiles of drugs into quantitative scores by summing up their side effects with weights. The quantitative scores may measure the dangers of drugs and thus help to compare the risk of different drugs. Here, we attempt to predict the quantitative scores of drugs, namely quantitative prediction. Specifically, we explore a variety of drug-related features and evaluate their discriminative power for quantitative prediction. Then, we consider several feature combination strategies (direct combination, average scoring ensemble combination) to integrate three informative features: chemical substructures, targets, and treatment indications. Finally, the average scoring ensemble model, which produces the better performance, is used as the final quantitative prediction model. Since weights for side effects are empirical values, we randomly generate different weights in the simulation experiments. The experimental results show that the quantitative method is robust to different weights and produces satisfying results. Although other state-of-the-art methods cannot make quantitative predictions directly, their prediction results can be transformed into quantitative scores. By indirect comparison, the proposed method produces much better results than benchmark methods in quantitative prediction. In conclusion, the proposed method is promising for the quantitative prediction of side effects and may work cooperatively with existing state-of-the-art methods to reveal the dangers of drugs.
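
    The scoring and combination steps described above can be sketched as follows. The feature blocks, weights, and dimensions are hypothetical stand-ins rather than the paper's data, and serve only to illustrate a weighted side-effect score and an average scoring ensemble of per-feature regressors.

      import numpy as np
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(1)
      n_drugs, n_se = 300, 50
      profiles = rng.binomial(1, 0.1, size=(n_drugs, n_se))    # drug x side-effect matrix
      weights = rng.uniform(0.1, 1.0, n_se)                    # empirical/random weights
      scores = profiles @ weights                              # quantitative risk score per drug

      # Stand-ins for chemical substructures, targets and treatment indications,
      # made weakly informative of the profiles so the sketch behaves sensibly.
      feats = {name: profiles @ rng.normal(size=(n_se, 32)) + rng.normal(0, 0.5, (n_drugs, 32))
               for name in ("substructure", "target", "indication")}

      tr, te = slice(0, 200), slice(200, 300)
      per_feature = [Ridge(alpha=1.0).fit(F[tr], scores[tr]).predict(F[te]) for F in feats.values()]
      ensemble = np.mean(per_feature, axis=0)                  # average scoring ensemble
      print(np.corrcoef(ensemble, scores[te])[0, 1])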

  5. Aircraft noise prediction program theoretical manual, part 1

    NASA Technical Reports Server (NTRS)

    Zorumski, W. E.

    1982-01-01

    Aircraft noise prediction theoretical methods are given. The prediction of data which affect noise generation and propagation is addressed. These data include the aircraft flight dynamics, the source noise parameters, and the propagation effects.

  6. Quantitation in chiral capillary electrophoresis: theoretical and practical considerations.

    PubMed

    D'Hulst, A; Verbeke, N

    1994-06-01

    Capillary electrophoresis (CE) represents a decisive step forward in stereoselective analysis. The present paper deals with the theoretical aspects of the quantitation of peak separation in chiral CE. Because peak shape is very different in CE compared with high-performance liquid chromatography (HPLC), the resolution factor Rs, commonly used to describe the extent of separation between enantiomers as well as unrelated compounds, is demonstrated to be of limited value for the assessment of chiral separations in CE. Instead, the conjunct use of a relative chiral separation factor (RCS) and the percent chiral separation (% CS) is advocated. An array of examples is given to illustrate this. The practical aspects of method development using maltodextrins--which have been proposed previously as a major innovation in chiral selectors applicable in CE--are documented with the stereoselective analysis of coumarinic anticoagulant drugs. The possibilities of quantitation using CE were explored under two extreme conditions. Using ibuprofen, it has been demonstrated that enantiomeric excess determinations are possible down to a 1% level of optical contamination, and that stereoselective determinations are still possible with good precision near the detection limit when the sample load is increased by very long injection times. The theoretical aspects of this possibility are addressed in the discussion.

  7. University Students' Understanding of the Concepts Empirical, Theoretical, Qualitative and Quantitative Research

    ERIC Educational Resources Information Center

    Murtonen, Mari

    2015-01-01

    University research education in many disciplines is frequently confronted by problems with students' weak level of understanding of research concepts. A mind map technique was used to investigate how students understand central methodological concepts of empirical, theoretical, qualitative and quantitative. The main hypothesis was that some…

  8. A Theoretical Model for Predicting Fracture Strength and Critical Flaw Size of the ZrB2-ZrC Composites at High Temperatures

    NASA Astrophysics Data System (ADS)

    Wang, Ruzhuan; Li, Xiaobo; Wang, Jing; Jia, Bi; Li, Weiguo

    2018-06-01

    This work presents a new theoretical model for quantitatively predicting the fracture strength and critical flaw size of ZrB2-ZrC composites at different temperatures, based on a newly proposed temperature-dependent fracture surface energy model and the Griffith criterion. The fracture model accounts for the combined effects of temperature and damage terms (surface flaws and internal flaws) without any fitting parameters. Its predictions of the fracture strength and critical flaw size of the ZrB2-ZrC composites at high temperatures agree well with experimental data. The theoretical method is then used to propose improvements to and designs of the materials. The model can be used to predict fracture strength, identify the critical flaw, and study the effects of microstructure on the fracture mechanism of ZrB2-ZrC composites at high temperatures, and could thus become a convenient, practical, and economical means of predicting fracture properties and guiding material design.
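
    For orientation, the Griffith criterion on which the model rests links fracture strength, Young's modulus, fracture surface energy, and critical flaw size; with the temperature dependence carried by E(T) and the fracture surface energy gamma_s(T), a plane-stress form reads (the paper's full model may include additional geometry and damage factors):

      \sigma_f(T) \approx \sqrt{\dfrac{2\,E(T)\,\gamma_s(T)}{\pi\,a_c}},
      \qquad
      a_c(T) \approx \dfrac{2\,E(T)\,\gamma_s(T)}{\pi\,\sigma_f^{2}(T)}.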

  9. Quantitative prediction of solute strengthening in aluminium alloys.

    PubMed

    Leyson, Gerard Paul M; Curtin, William A; Hector, Louis G; Woodward, Christopher F

    2010-09-01

    Despite significant advances in computational materials science, a quantitative, parameter-free prediction of the mechanical properties of alloys has been difficult to achieve from first principles. Here, we present a new analytic theory that, with input from first-principles calculations, is able to predict the strengthening of aluminium by substitutional solute atoms. Solute-dislocation interaction energies in and around the dislocation core are first calculated using density functional theory and a flexible-boundary-condition method. An analytic model for the strength, or stress to move a dislocation, owing to the random field of solutes, is then presented. The theory, which has no adjustable parameters and is extendable to other metallic alloys, predicts both the energy barriers to dislocation motion and the zero-temperature flow stress, allowing for predictions of finite-temperature flow stresses. Quantitative comparisons with experimental flow stresses at temperature T=78 K are made for Al-X alloys (X=Mg, Si, Cu, Cr) and good agreement is obtained.

  10. Young children's core symbolic and nonsymbolic quantitative knowledge in the prediction of later mathematics achievement.

    PubMed

    Geary, David C; vanMarle, Kristy

    2016-12-01

    At the beginning of preschool (M = 46 months of age), 197 (94 boys) children were administered tasks that assessed a suite of nonsymbolic and symbolic quantitative competencies as well as their executive functions, verbal and nonverbal intelligence, preliteracy skills, and their parents' education level. The children's mathematics achievement was assessed at the end of preschool (M = 64 months). We used a series of Bayesian and standard regression analyses to winnow this broad set of competencies down to the core subset of quantitative skills that predict later mathematics achievement, controlling other factors. This knowledge included children's fluency in reciting the counting string, their understanding of the cardinal value of number words, and recognition of Arabic numerals, as well as their sensitivity to the relative quantity of 2 collections of objects. The results inform theoretical models of the foundations of children's early quantitative development and have practical implications for the design of early interventions for children at risk for poor long-term mathematics achievement. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. Universality and predictability in molecular quantitative genetics.

    PubMed

    Nourmohammad, Armita; Held, Torsten; Lässig, Michael

    2013-12-01

    Molecular traits, such as gene expression levels or protein binding affinities, are increasingly accessible to quantitative measurement by modern high-throughput techniques. Such traits measure molecular functions and, from an evolutionary point of view, are important as targets of natural selection. We review recent developments in evolutionary theory and experiments that are expected to become building blocks of a quantitative genetics of molecular traits. We focus on universal evolutionary characteristics: these are largely independent of a trait's genetic basis, which is often at least partially unknown. We show that universal measurements can be used to infer selection on a quantitative trait, which determines its evolutionary mode of conservation or adaptation. Furthermore, universality is closely linked to predictability of trait evolution across lineages. We argue that universal trait statistics extends over a range of cellular scales and opens new avenues of quantitative evolutionary systems biology. Copyright © 2013. Published by Elsevier Ltd.

  12. Confirmation of theoretical colour predictions for layering dental composite materials.

    PubMed

    Mikhail, Sarah S; Johnston, William M

    2014-04-01

    The aim of this study is to confirm the theoretical colour predictions for single and double layers of dental composite materials on an opaque backing. Single and double layers of composite resins were fabricated, placed in optical contact with a grey backing and measured for spectral radiance. The spectral reflectance and colour were directly determined. Absorption and scattering coefficients as previously reported, the measured thickness of the single layers and the effective reflectance of the grey backing were utilized to theoretically predict the reflectance of the single layer using corrected Kubelka-Munk reflectance theory. For double layers the predicted effective reflectance of the single layer was used as the reflectance of the backing of the second layer and the thickness of the second layer was used to predict the reflectance of the double layer. Colour differences, using both the CIELAB and CIEDE2000 formulae, measured the discrepancy between each directly determined colour and its corresponding theoretical colour. The colour difference discrepancies generally ranged around the perceptibility threshold but were consistently below the respective acceptability threshold. This theory can predict the colour of layers of composite resin within acceptability limits and generally also within perceptibility limits. This theory could therefore be incorporated into computer-based optical measuring instruments that can automate the shade selections for layers of a more opaque first layer under a more translucent second layer for those clinical situations where an underlying background colour and a desirable final colour can be measured. Copyright © 2014 Elsevier Ltd. All rights reserved.
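
    A minimal sketch of the layering calculation described here, using the standard hyperbolic Kubelka-Munk solution; the paper uses a corrected form of the theory, and the coefficients below are hypothetical placeholders rather than measured values.

      import numpy as np

      def km_reflectance(K, S, X, Rg):
          # Kubelka-Munk reflectance of a layer (absorption K, scattering S,
          # thickness X) in optical contact with a backing of reflectance Rg.
          a = 1.0 + K / S
          b = np.sqrt(a * a - 1.0)
          coth = 1.0 / np.tanh(b * S * X)
          return (1.0 - Rg * (a - b * coth)) / (a - Rg + b * coth)

      K1, S1, X1 = 0.8, 4.0, 0.10    # first (more opaque) layer, per-mm units assumed
      K2, S2, X2 = 0.2, 3.0, 0.10    # second (more translucent) layer
      R_backing = 0.35               # grey backing

      R_single = km_reflectance(K1, S1, X1, R_backing)   # single layer on the backing
      R_double = km_reflectance(K2, S2, X2, R_single)    # second layer uses R_single as its backing
      print(R_single, R_double)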

  13. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    NASA Astrophysics Data System (ADS)

    Figueroa, Aldo; Meunier, Patrice; Cuevas, Sergio; Villermaux, Emmanuel; Ramos, Eduardo

    2014-01-01

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM) which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the PDFs of scalar concentration to be predicted in agreement with numerical and experimental results. This model also indicates that the scalar PDFs are asymptotically close to log-normal at late stages, except for large concentration levels, which correspond to low stretching factors.

  14. Aircraft noise prediction program theoretical manual: Rotorcraft System Noise Prediction System (ROTONET), part 4

    NASA Technical Reports Server (NTRS)

    Weir, Donald S.; Jumper, Stephen J.; Burley, Casey L.; Golub, Robert A.

    1995-01-01

    This document describes the theoretical methods used in the rotorcraft noise prediction system (ROTONET), which is a part of the NASA Aircraft Noise Prediction Program (ANOPP). The ANOPP code consists of an executive, database manager, and prediction modules for jet engine, propeller, and rotor noise. The ROTONET subsystem contains modules for the prediction of rotor airloads and performance with momentum theory and prescribed wake aerodynamics, rotor tone noise with compact chordwise and full-surface solutions to the Ffowcs-Williams-Hawkings equations, semiempirical airfoil broadband noise, and turbulence ingestion broadband noise. Flight dynamics, atmosphere propagation, and noise metric calculations are covered in NASA TM-83199, Parts 1, 2, and 3.

  15. Benchmarking B-Cell Epitope Prediction with Quantitative Dose-Response Data on Antipeptide Antibodies: Towards Novel Pharmaceutical Product Development

    PubMed Central

    Caoili, Salvador Eugenio C.

    2014-01-01

    B-cell epitope prediction can enable novel pharmaceutical product development. However, a mechanistically framed consensus has yet to emerge on benchmarking such prediction, thus presenting an opportunity to establish standards of practice that circumvent epistemic inconsistencies of casting the epitope prediction task as a binary-classification problem. As an alternative to conventional dichotomous qualitative benchmark data, quantitative dose-response data on antibody-mediated biological effects are more meaningful from an information-theoretic perspective in the sense that such effects may be expressed as probabilities (e.g., of functional inhibition by antibody) for which the Shannon information entropy (SIE) can be evaluated as a measure of informativeness. Accordingly, half-maximal biological effects (e.g., at median inhibitory concentrations of antibody) correspond to maximally informative data while undetectable and maximal biological effects correspond to minimally informative data. This applies to benchmarking B-cell epitope prediction for the design of peptide-based immunogens that elicit antipeptide antibodies with functionally relevant cross-reactivity. Presently, the Immune Epitope Database (IEDB) contains relatively few quantitative dose-response data on such cross-reactivity. Only a small fraction of these IEDB data is maximally informative, and many more of them are minimally informative (i.e., with zero SIE). Nevertheless, the numerous qualitative data in IEDB suggest how to overcome the paucity of informative benchmark data. PMID:24949474
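
    The informativeness argument can be made concrete with the binary Shannon entropy of the effect probability: it peaks at one bit for half-maximal effects (e.g., at the median inhibitory concentration) and falls to essentially zero for undetectable or maximal effects. A small sketch:

      import numpy as np

      def shannon_entropy(p):
          # Binary Shannon information entropy (bits) of an effect probability p,
          # e.g., the probability of functional inhibition by antibody.
          p = np.clip(np.asarray(p, dtype=float), 1e-12, 1.0 - 1e-12)
          return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

      for p in (0.0, 0.1, 0.5, 0.9, 1.0):
          print(p, float(shannon_entropy(p)))
      # 0.5 (half-maximal effect) -> 1 bit, maximally informative;
      # 0.0 and 1.0 (undetectable / maximal effects) -> ~0 bits.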

  16. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel

    2014-01-15

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM) which was recently introduced (P. Meunier and E. Villermaux, “The diffusive strip method for scalar mixing in two-dimensions,” J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the PDFs of scalar concentration to be predicted in agreement with numerical and experimental results. This model also indicates that the scalar PDFs are asymptotically close to log-normal at late stages, except for large concentration levels, which correspond to low stretching factors.

  17. Prediction and theoretical characterization of p-type organic semiconductor crystals for field-effect transistor applications.

    PubMed

    Atahan-Evrenk, Sule; Aspuru-Guzik, Alán

    2014-01-01

    The theoretical prediction and characterization of the solid-state structure of organic semiconductors has tremendous potential for the discovery of new high-performance materials. To date, theoretical analysis has mostly relied on the availability of crystal structures obtained through X-ray diffraction. However, the theoretical prediction of the crystal structures of organic semiconductor molecules remains a challenge. This review highlights some of the recent advances in the determination of structure-property relationships of known organic semiconductor single crystals and summarizes a few available studies on the prediction of the crystal structures of p-type organic semiconductors for transistor applications.

  18. The turbulent recirculating flow field in a coreless induction furnace. A comparison of theoretical predictions with measurements

    NASA Technical Reports Server (NTRS)

    El-Kaddah, N.; Szekely, J.

    1982-01-01

    A mathematical representation for the electromagnetic force field and the fluid flow field in a coreless induction furnace is presented. The fluid flow field was represented by writing the axisymmetric turbulent Navier-Stokes equation, containing the electromagnetic body force term. The electromagnetic body force field was calculated by using a technique of mutual inductances. The kappa-epsilon model was employed for evaluating the turbulent viscosity, and the resultant differential equations were solved numerically. Theoretically predicted velocity fields are in reasonably good agreement with the experimental measurements reported by Hunt and Moore; furthermore, the agreement regarding the turbulent intensities is essentially quantitative. These results indicate that the kappa-epsilon model provides a good engineering representation of the turbulent recirculating flows occurring in induction furnaces. At this stage it is not clear whether the discrepancies between measurements and the predictions, which were not very great in any case, are attributable either to the model or to the measurement techniques employed.
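
    For reference, the kappa-epsilon closure used here evaluates the turbulent (eddy) viscosity from the turbulent kinetic energy k and its dissipation rate epsilon as

      \mu_t = \rho\, C_\mu \, \dfrac{k^2}{\varepsilon}, \qquad C_\mu \approx 0.09,

    which is the quantity inserted into the axisymmetric turbulent Navier-Stokes equation alongside the electromagnetic body force term.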

  19. Quantitative prediction of phase transformations in silicon during nanoindentation

    NASA Astrophysics Data System (ADS)

    Zhang, Liangchi; Basak, Animesh

    2013-08-01

    This paper establishes the first quantitative relationship between the phases transformed in silicon and the shape characteristics of nanoindentation curves. Based on an integrated analysis using TEM and the unit cell properties of the phases, the volumes of the phases that emerge in a nanoindentation are formulated as a function of the pop-out size and the depth of the nanoindentation impression. This simple formula enables fast, accurate and quantitative prediction of the phases in a nanoindentation cycle, which has been impossible before.

  20. Comparison of experimental surface pressures with theoretical predictions on twin two-dimensional convergent-divergent nozzles

    NASA Technical Reports Server (NTRS)

    Carlson, J. R.; Pendergraft, O. C., Jr.; Burley, J. R., II

    1986-01-01

    A three-dimensional subsonic aerodynamic panel code (VSAERO) was used to predict the effects of upper and lower external nozzle flap geometry on the external afterbody/nozzle pressure coefficient distributions and external nozzle drag of nonaxisymmetric convergent-divergent exhaust nozzles having parallel external sidewalls installed on a generic twin-engine high performance aircraft model. Nozzle static pressure coefficient distributions along the upper and lower surfaces near the model centerline and near the outer edges (corner) of the two surfaces were calculated, and nozzle drag was predicted using these surface pressure distributions. A comparison between the theoretical predictions and experimental wind tunnel data is made to evaluate the utility of the code in calculating the flow about these types of nonaxisymmetric afterbody configurations. For free-stream Mach numbers of 0.60 and 0.90, the conditions where the flows were attached on the boattails yielded the best comparison between the theoretical predictions and the experimental data. For boattail terminal angles greater than 15 deg., the experimental data for M = 0.60 and 0.90 indicated areas of separated flow, so the theoretical predictions failed to match the experimental data. Even though calculations of regions of separated flow are within the capabilities of the theoretical method, acceptable solutions were not obtained.

  21. Theoretical Prediction of Microgravity Ignition Delay of Polymeric Fuels in Low Velocity Flows

    NASA Technical Reports Server (NTRS)

    Fernandez-Pello, A. C.; Torero, J. L.; Zhou, Y. Y.; Walther, D.; Ross, H. D.

    2001-01-01

    A new flammability apparatus and protocol, FIST (Forced Flow Ignition and Flame Spread Test), is under development. Based on the LIFT (Lateral Ignition and Flame Spread Test) protocol, FIST better reflects the environments expected in space-based facilities. The final objective of the FIST research is to provide NASA with a test methodology that complements the existing protocol and provides a more comprehensive assessment of material flammability of practical materials for space applications. Theoretical modeling, an extensive normal gravity data bank and a few validation space experiments will support the testing methodology. The objective of the work presented here is to predict the ignition delay and critical heat flux for ignition of solid fuels in microgravity at airflow velocities below those induced in normal gravity. This is achieved through the application of a previously developed numerical model of piloted ignition of solid polymeric materials exposed to an external radiant heat flux. The model predictions will provide quantitative results about ignition of practical materials in the limiting conditions expected in space facilities. Experimental data of surface temperature histories and ignition delay obtained in the KC-135 aircraft are used to determine the critical pyrolysate mass flux for ignition, and this value is subsequently used to predict the ignition delay and the critical heat flux for ignition of the material. Surface temperature and piloted ignition delay calculations for polymethylmethacrylate (PMMA) and a polypropylene/fiberglass (PP/GL) composite were conducted under both reduced and normal gravity conditions. It was found that ignition delay times are significantly shorter at velocities below those induced by natural convection.
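
    For context, a classical closed-form estimate that underlies LIFT-type piloted-ignition analyses of a thermally thick, inert solid under a constant net heat flux \dot q'' is

      t_{ig} \approx \dfrac{\pi}{4}\, k \rho c \, \dfrac{(T_{ig} - T_0)^2}{\dot q''^{\,2}},

    where k\rho c is the thermal inertia and T_{ig} the ignition (pyrolysis) temperature. The study described here instead uses a numerical pyrolysis/ignition model with a critical pyrolysate mass flux criterion, so this expression is only an orienting approximation.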

  22. Quantitative Evaluation of Performance in Interventional Neuroradiology: An Integrated Curriculum Featuring Theoretical and Practical Challenges.

    PubMed

    Ernst, Marielle; Kriston, Levente; Romero, Javier M; Frölich, Andreas M; Jansen, Olav; Fiehler, Jens; Buhk, Jan-Hendrik

    2016-01-01

    We sought to develop a standardized curriculum capable of assessing key competencies in Interventional Neuroradiology by the use of models and simulators in an objective, quantitative, and efficient way. In this evaluation we analyzed the associations between the practical experience, theoretical knowledge, and the skills lab performance of interventionalists. We evaluated the endovascular skills of 26 participants of the Advanced Course in Endovascular Interventional Neuroradiology of the European Society of Neuroradiology with a set of three tasks (aneurysm coiling and thrombectomy in a virtual simulator and placement of an intra-aneurysmal flow disruptor in a flow model). Practical experience was assessed by a survey. Participants completed a written and oral examination to evaluate theoretical knowledge. Bivariate and multivariate analyses were performed. In multivariate analysis, knowledge of materials and techniques in Interventional Neuroradiology was moderately associated with skills in aneurysm coiling and thrombectomy. Experience in mechanical thrombectomy was moderately associated with thrombectomy skills, while age was negatively associated with thrombectomy skills. We found no significant association between age, sex, or work experience and skills in aneurysm coiling. Our study gives an example of how an integrated curriculum for reasonable and cost-effective assessment of key competences of an interventional neuroradiologist could look. In addition to traditional assessment of theoretical knowledge, practical skills are measured by the use of endovascular simulators, yielding objective, quantitative, and constructive data for the evaluation of the current performance status of participants as well as the evolution of their technical competency over time.

  23. A Theoretical Model to Predict Both Horizontal Displacement and Vertical Displacement for Electromagnetic Induction-Based Deep Displacement Sensors

    PubMed Central

    Shentu, Nanying; Zhang, Hongjian; Li, Qing; Zhou, Hongliang; Tong, Renyuan; Li, Xiong

    2012-01-01

    Deep displacement observation is one basic means of landslide dynamic study and early warning monitoring and a key part of engineering geological investigation. In our previous work, we proposed a novel electromagnetic induction-based deep displacement sensor (I-type) to predict deep horizontal displacement and a theoretical model called the equation-based equivalent loop approach (EELA) to describe its sensing characteristics. However, in many landslide and related geological engineering cases, both horizontal displacement and vertical displacement vary appreciably and dynamically, so both may require monitoring. In this study, a II-type deep displacement sensor is designed by revising our I-type sensor to simultaneously monitor the deep horizontal displacement and vertical displacement variations at different depths within a sliding mass. Meanwhile, a new theoretical model called the numerical integration-based equivalent loop approach (NIELA) has been proposed to quantitatively depict II-type sensors’ mutual inductance properties with respect to predicted horizontal displacements and vertical displacements. After detailed examinations and comparative studies between the measured mutual inductance voltage, NIELA-based mutual inductance and EELA-based mutual inductance, NIELA has been verified to be an effective and quite accurate analytic model for the characterization of II-type sensors. The NIELA model is widely applicable for II-type sensors’ monitoring of all kinds of landslides and other related geohazards, with satisfactory estimation accuracy and calculation efficiency. PMID:22368467
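
    The mutual-inductance modelling at the heart of both EELA and NIELA can be illustrated, in a much-simplified form, by numerically integrating the Neumann formula for two coaxial circular loops as their axial separation (a stand-in for relative vertical displacement) changes. This generic sketch is not the paper's NIELA procedure, and all dimensions are hypothetical.

      import numpy as np

      MU0 = 4e-7 * np.pi   # vacuum permeability [H/m]

      def mutual_inductance(r1, r2, dz, n=2000):
          # Neumann formula for two coaxial circular loops (radii r1, r2, axial gap dz),
          # reduced by symmetry to a single integral over the angle difference.
          dphi = 2.0 * np.pi / n
          phi = np.arange(n) * dphi
          dist = np.sqrt(r1**2 + r2**2 - 2.0 * r1 * r2 * np.cos(phi) + dz**2)
          integral = np.sum(np.cos(phi) / dist) * dphi
          return MU0 / (4.0 * np.pi) * (2.0 * np.pi) * r1 * r2 * integral

      for dz in (0.02, 0.05, 0.10):   # axial separations [m], hypothetical
          print(dz, mutual_inductance(0.05, 0.05, dz))   # M falls off as the loops separate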

  24. A theoretical model to predict both horizontal displacement and vertical displacement for electromagnetic induction-based deep displacement sensors.

    PubMed

    Shentu, Nanying; Zhang, Hongjian; Li, Qing; Zhou, Hongliang; Tong, Renyuan; Li, Xiong

    2012-01-01

    Deep displacement observation is one basic means of landslide dynamic study and early warning monitoring and a key part of engineering geological investigation. In our previous work, we proposed a novel electromagnetic induction-based deep displacement sensor (I-type) to predict deep horizontal displacement and a theoretical model called the equation-based equivalent loop approach (EELA) to describe its sensing characteristics. However, in many landslide and related geological engineering cases, both horizontal displacement and vertical displacement vary appreciably and dynamically, so both may require monitoring. In this study, a II-type deep displacement sensor is designed by revising our I-type sensor to simultaneously monitor the deep horizontal displacement and vertical displacement variations at different depths within a sliding mass. Meanwhile, a new theoretical model called the numerical integration-based equivalent loop approach (NIELA) has been proposed to quantitatively depict II-type sensors' mutual inductance properties with respect to predicted horizontal displacements and vertical displacements. After detailed examinations and comparative studies between the measured mutual inductance voltage, NIELA-based mutual inductance and EELA-based mutual inductance, NIELA has been verified to be an effective and quite accurate analytic model for the characterization of II-type sensors. The NIELA model is widely applicable for II-type sensors' monitoring of all kinds of landslides and other related geohazards, with satisfactory estimation accuracy and calculation efficiency.

  25. Quantitative Evaluation of Performance in Interventional Neuroradiology: An Integrated Curriculum Featuring Theoretical and Practical Challenges

    PubMed Central

    Ernst, Marielle; Kriston, Levente; Romero, Javier M.; Frölich, Andreas M.; Jansen, Olav; Fiehler, Jens; Buhk, Jan-Hendrik

    2016-01-01

    Purpose We sought to develop a standardized curriculum capable of assessing key competencies in Interventional Neuroradiology by the use of models and simulators in an objective, quantitative, and efficient way. In this evaluation we analyzed the associations between the practical experience, theoretical knowledge, and the skills lab performance of interventionalists. Materials and Methods We evaluated the endovascular skills of 26 participants of the Advanced Course in Endovascular Interventional Neuroradiology of the European Society of Neuroradiology with a set of three tasks (aneurysm coiling and thrombectomy in a virtual simulator and placement of an intra-aneurysmal flow disruptor in a flow model). Practical experience was assessed by a survey. Participants completed a written and oral examination to evaluate theoretical knowledge. Bivariate and multivariate analyses were performed. Results In multivariate analysis, knowledge of materials and techniques in Interventional Neuroradiology was moderately associated with skills in aneurysm coiling and thrombectomy. Experience in mechanical thrombectomy was moderately associated with thrombectomy skills, while age was negatively associated with thrombectomy skills. We found no significant association between age, sex, or work experience and skills in aneurysm coiling. Conclusion Our study gives an example of how an integrated curriculum for reasonable and cost-effective assessment of key competences of an interventional neuroradiologist could look. In addition to traditional assessment of theoretical knowledge, practical skills are measured by the use of endovascular simulators, yielding objective, quantitative, and constructive data for the evaluation of the current performance status of participants as well as the evolution of their technical competency over time. PMID:26848840

  26. A comparison of SAR ATR performance with information theoretic predictions

    NASA Astrophysics Data System (ADS)

    Blacknell, David

    2003-09-01

    Performance assessment of automatic target detection and recognition algorithms for SAR systems (or indeed any other sensors) is essential if the military utility of the system / algorithm mix is to be quantified. This is a relatively straightforward task if extensive trials data from an existing system is used. However, a crucial requirement is to assess the potential performance of novel systems as a guide to procurement decisions. This task is no longer straightforward since a hypothetical system cannot provide experimental trials data. QinetiQ has previously developed a theoretical technique for classification algorithm performance assessment based on information theory. The purpose of the study presented here has been to validate this approach. To this end, experimental SAR imagery of targets has been collected using the QinetiQ Enhanced Surveillance Radar to allow algorithm performance assessments as a number of parameters are varied. In particular, performance comparisons can be made for (i) resolutions up to 0.1m, (ii) single channel versus polarimetric (iii) targets in the open versus targets in scrubland and (iv) use versus non-use of camouflage. The change in performance as these parameters are varied has been quantified from the experimental imagery whilst the information theoretic approach has been used to predict the expected variation of performance with parameter value. A comparison of these measured and predicted assessments has revealed the strengths and weaknesses of the theoretical technique as will be discussed in the paper.

  27. Extending Theory-Based Quantitative Predictions to New Health Behaviors.

    PubMed

    Brick, Leslie Ann D; Velicer, Wayne F; Redding, Colleen A; Rossi, Joseph S; Prochaska, James O

    2016-04-01

    Traditional null hypothesis significance testing suffers many limitations and is poorly adapted to theory testing. A proposed alternative approach, called Testing Theory-based Quantitative Predictions, uses effect size estimates and confidence intervals to directly test predictions based on theory. This paper replicates findings from previous smoking studies and extends the approach to diet and sun protection behaviors using baseline data from a Transtheoretical Model behavioral intervention (N = 5407). Effect size predictions were developed using two methods: (1) applying refined effect size estimates from previous smoking research or (2) using predictions developed by an expert panel. Thirteen of 15 predictions were confirmed for smoking. For diet, 7 of 14 predictions were confirmed using smoking predictions and 6 of 16 using expert panel predictions. For sun protection, 3 of 11 predictions were confirmed using smoking predictions and 5 of 19 using expert panel predictions. Expert panel predictions and smoking-based predictions poorly predicted effect sizes for diet and sun protection constructs. Future studies should aim to use previous empirical data to generate predictions whenever possible. The best results occur when there have been several iterations of predictions for a behavior, such as with smoking, demonstrating that expected values begin to converge on the population effect size. Overall, the study supports the necessity of strengthening and revising theory with empirical data.
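
    A minimal sketch of the testing logic summarised above, under the assumption that a theory-based prediction counts as confirmed when it falls inside the confidence interval of the observed effect size (construct names, group sizes, and the predicted value below are hypothetical):

      import numpy as np

      def cohens_d_ci(x, y, z=1.96):
          # Standardized mean difference (Cohen's d) with an approximate 95% CI.
          nx, ny = len(x), len(y)
          sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
          d = (np.mean(x) - np.mean(y)) / sp
          se = np.sqrt((nx + ny) / (nx * ny) + d**2 / (2.0 * (nx + ny)))
          return d, (d - z * se, d + z * se)

      rng = np.random.default_rng(2)
      precontemplation = rng.normal(0.0, 1.0, 120)   # hypothetical construct scores by stage
      preparation = rng.normal(0.8, 1.0, 110)

      predicted_d = 0.80                             # hypothetical theory-based prediction
      d, (lo, hi) = cohens_d_ci(preparation, precontemplation)
      print(d, (lo, hi), "confirmed" if lo <= predicted_d <= hi else "not confirmed")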

  28. Toward a Theoretical Model of Decision-Making and Resistance to Change among Higher Education Online Course Designers

    ERIC Educational Resources Information Center

    Dodd, Bucky J.

    2013-01-01

    Online course design is an emerging practice in higher education, yet few theoretical models currently exist to explain or predict how the diffusion of innovations occurs in this space. This study used a descriptive, quantitative survey research design to examine theoretical relationships between decision-making style and resistance to change…

  29. Uncertainties of predictions from parton distributions II: theoretical errors

    NASA Astrophysics Data System (ADS)

    Martin, A. D.; Roberts, R. G.; Stirling, W. J.; Thorne, R. S.

    2004-06-01

    We study the uncertainties in parton distributions, determined in global fits to deep inelastic and related hard scattering data, due to so-called theoretical errors. Amongst these, we include potential errors due to the change of perturbative order (NLO to NNLO), ln(1/x) and ln(1-x) effects, absorptive corrections and higher-twist contributions. We investigate these uncertainties both by including explicit corrections to our standard global analysis and by examining the sensitivity to changes of the x, Q 2, W 2 cuts on the data that are fitted. In this way we expose those kinematic regions where the conventional DGLAP description is inadequate. As a consequence we obtain a set of NLO, and of NNLO, conservative partons where the data are fully consistent with DGLAP evolution, but over a restricted kinematic domain. We also examine the potential effects of such issues as the choice of input parametrisation, heavy target corrections, assumptions about the strange quark sea and isospin violation. Hence we are able to compare the theoretical errors with those uncertainties due to errors on the experimental measurements, which we studied previously. We use W and Higgs boson production at the Tevatron and the LHC as explicit examples of the uncertainties arising from parton distributions. For many observables the theoretical error is dominant, but for the cross section for W production at the Tevatron both the theoretical and experimental uncertainties are small, and hence the NNLO prediction may serve as a valuable luminosity monitor.

  30. Quantitative prediction of oral cancer risk in patients with oral leukoplakia.

    PubMed

    Liu, Yao; Li, Yicheng; Fu, Yue; Liu, Tong; Liu, Xiaoyong; Zhang, Xinyan; Fu, Jie; Guan, Xiaobing; Chen, Tong; Chen, Xiaoxin; Sun, Zheng

    2017-07-11

    Exfoliative cytology has been widely used for early diagnosis of oral squamous cell carcinoma. We have developed an oral cancer risk index using DNA index value to quantitatively assess cancer risk in patients with oral leukoplakia, but with limited success. In order to improve the performance of the risk index, we collected exfoliative cytology, histopathology, and clinical follow-up data from two independent cohorts of normal, leukoplakia and cancer subjects (training set and validation set). Peaks were defined on the basis of first derivatives with positives, and modern machine learning techniques were utilized to build statistical prediction models on the reconstructed data. Random forest was found to be the best model with high sensitivity (100%) and specificity (99.2%). Using the Peaks-Random Forest model, we constructed an index (OCRI2) as a quantitative measurement of cancer risk. Among 11 leukoplakia patients with an OCRI2 over 0.5, 4 (36.4%) developed cancer during follow-up (23 ± 20 months), whereas 3 (5.3%) of 57 leukoplakia patients with an OCRI2 less than 0.5 developed cancer (32 ± 31 months). OCRI2 is better than other methods in predicting oral squamous cell carcinoma during follow-up. In conclusion, we have developed an exfoliative cytology-based method for quantitative prediction of cancer risk in patients with oral leukoplakia.
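
    A hedged sketch of the overall pipeline described here (simulated features stand in for the cytology-derived peak/DNA-index measurements; this is not the authors' trained OCRI2 model): a random forest outputs a malignancy probability that serves as a quantitative risk index, and 0.5 is used as the high-risk cutoff.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      n = 400
      feats = rng.normal(size=(n, 12))                               # simulated cytology features
      risk = 1.0 / (1.0 + np.exp(-(feats[:, 0] + 0.5 * feats[:, 1])))
      labels = rng.binomial(1, risk)                                 # 1 = malignant transformation

      X_tr, X_te, y_tr, y_te = train_test_split(feats, labels, random_state=0)
      model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

      ocri_like = model.predict_proba(X_te)[:, 1]                    # risk index in [0, 1]
      high_risk = ocri_like > 0.5
      print(high_risk.mean(), y_te[high_risk].mean() if high_risk.any() else float("nan"))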

  31. Predicting Child Abuse Potential: An Empirical Investigation of Two Theoretical Frameworks

    ERIC Educational Resources Information Center

    Begle, Angela Moreland; Dumas, Jean E.; Hanson, Rochelle F.

    2010-01-01

    This study investigated two theoretical risk models predicting child maltreatment potential: (a) Belsky's (1993) developmental-ecological model and (b) the cumulative risk model in a sample of 610 caregivers (49% African American, 46% European American; 53% single) with a child between 3 and 6 years old. Results extend the literature by using a…

  32. Quantitative Comparison of Photothermal Heat Generation between Gold Nanospheres and Nanorods.

    PubMed

    Qin, Zhenpeng; Wang, Yiru; Randrianalisoa, Jaona; Raeesi, Vahid; Chan, Warren C W; Lipiński, Wojciech; Bischof, John C

    2016-07-21

    Gold nanoparticles (GNPs) are widely used for biomedical applications due to unique optical properties, established synthesis methods, and biological compatibility. Despite important applications of plasmonic heating in thermal therapy, imaging, and diagnostics, the lack of quantification in heat generation leads to difficulties in comparing the heating capability for new plasmonic nanostructures and predicting the therapeutic and diagnostic outcome. This study quantifies GNP heat generation by experimental measurements and theoretical predictions for gold nanospheres (GNS) and nanorods (GNR). Interestingly, the results show a GNP-type dependent agreement between experiment and theory. The measured heat generation of GNS matches well with theory, while the measured heat generation of GNR is only 30% of that predicted theoretically at peak absorption. This then leads to a surprising finding that the polydispersity, the deviation of nanoparticle size and shape from nominal value, significantly influences GNR heat generation (>70% reduction), while having a limited effect for GNS (<10% change). This work demonstrates that polydispersity is an important metric in quantitatively predicting plasmonic heat generation and provides a validated framework to quantitatively compare the heating capabilities between gold and other plasmonic nanostructures.
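
    The "theoretical prediction" side of such a comparison reduces, for continuous-wave illumination, to multiplying the particle's absorption cross-section by the laser irradiance. The cross-sections below are hypothetical placeholders (in practice they would come from Mie theory or discrete-dipole calculations for the measured size distribution).

      # Per-particle heat generation under CW illumination: q = sigma_abs * I.
      sigma_abs_gns = 2.9e-15   # m^2, placeholder absorption cross-section of a nanosphere
      sigma_abs_gnr = 1.6e-14   # m^2, placeholder cross-section of a nanorod at its peak
      irradiance = 1.0e4        # W/m^2 (= 1 W/cm^2) laser irradiance

      for name, sigma in (("GNS", sigma_abs_gns), ("GNR", sigma_abs_gnr)):
          print(name, sigma * irradiance, "W per particle")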

  33. Quantitative Comparison of Photothermal Heat Generation between Gold Nanospheres and Nanorods

    NASA Astrophysics Data System (ADS)

    Qin, Zhenpeng; Wang, Yiru; Randrianalisoa, Jaona; Raeesi, Vahid; Chan, Warren C. W.; Lipiński, Wojciech; Bischof, John C.

    2016-07-01

    Gold nanoparticles (GNPs) are widely used for biomedical applications due to unique optical properties, established synthesis methods, and biological compatibility. Despite important applications of plasmonic heating in thermal therapy, imaging, and diagnostics, the lack of quantification in heat generation leads to difficulties in comparing the heating capability for new plasmonic nanostructures and predicting the therapeutic and diagnostic outcome. This study quantifies GNP heat generation by experimental measurements and theoretical predictions for gold nanospheres (GNS) and nanorods (GNR). Interestingly, the results show a GNP-type dependent agreement between experiment and theory. The measured heat generation of GNS matches well with theory, while the measured heat generation of GNR is only 30% of that predicted theoretically at peak absorption. This then leads to a surprising finding that the polydispersity, the deviation of nanoparticle size and shape from nominal value, significantly influences GNR heat generation (>70% reduction), while having a limited effect for GNS (<10% change). This work demonstrates that polydispersity is an important metric in quantitatively predicting plasmonic heat generation and provides a validated framework to quantitatively compare the heating capabilities between gold and other plasmonic nanostructures.

  34. Predicting toxicological endpoints of chemicals using quantitative structure-activity relationships (QSARs)

    EPA Science Inventory

    Quantitative structure-activity relationships (QSARs) are being developed to predict the toxicological endpoints for untested chemicals similar in structure to chemicals that have known experimental toxicological data. Based on a very large number of predetermined descriptors, a...

  35. Impact of implementation choices on quantitative predictions of cell-based computational models

    NASA Astrophysics Data System (ADS)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
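
    The time-step sensitivity highlighted above can be illustrated with a deliberately tiny toy (not an actual vertex model): the overdamped, first-order updates used in such models behave like dx/dt = -k (x - x_target), and an explicit Euler step is only stable for dt < 2/k, so too large a step distorts or destroys the predicted relaxation.

      # Toy illustration of time-step sensitivity for an overdamped update
      # (explicit Euler on dx/dt = -k (x - x_target); stable only for dt < 2/k).
      k, x_target, x0, t_end = 5.0, 1.0, 0.0, 2.0

      def relax(dt):
          x = x0
          for _ in range(int(t_end / dt)):
              x += dt * (-k * (x - x_target))
          return x

      for dt in (0.01, 0.1, 0.3, 0.5):
          print(dt, relax(dt))   # exact solution tends to 1.0; dt >= 2/k = 0.4 misbehaves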

  36. Distinguishing prognostic and predictive biomarkers: An information theoretic approach.

    PubMed

    Sechidis, Konstantinos; Papangelou, Konstantinos; Metcalfe, Paul D; Svensson, David; Weatherall, James; Brown, Gavin

    2018-05-02

    The identification of biomarkers to support decision-making is central to personalised medicine, in both clinical and research scenarios. The challenge can be seen in two halves: identifying predictive markers, which guide the development/use of tailored therapies; and identifying prognostic markers, which guide other aspects of care and clinical trial planning, i.e. prognostic markers can be considered as covariates for stratification. Mistakenly assuming a biomarker to be predictive, when it is in fact largely prognostic (and vice-versa), is highly undesirable and can result in financial, ethical and personal consequences. We present a framework for data-driven ranking of biomarkers on their prognostic/predictive strength, using a novel information theoretic method. This approach provides a natural algebra to discuss and quantify the individual predictive and prognostic strength, in a self-consistent mathematical framework. Our contribution is a novel procedure, INFO+, which naturally distinguishes the prognostic vs predictive role of each biomarker and handles higher order interactions. In a comprehensive empirical evaluation INFO+ outperforms more complex methods, most notably when noise factors dominate, and biomarkers are likely to be falsely identified as predictive, when in fact they are just strongly prognostic. Furthermore, we show that our methods can be 1-3 orders of magnitude faster than competitors, making them useful for biomarker discovery in 'big data' scenarios. Finally, we apply our methods to identify predictive biomarkers on two real clinical trials, and introduce a new graphical representation that provides greater insight into the prognostic and predictive strength of each biomarker. R implementations of the suggested methods are available at https://github.com/sechidis. Supplementary data are available at Bioinformatics online.
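
    One way to make the prognostic/predictive distinction concrete (an illustrative decomposition under simplifying assumptions, not the authors' INFO+ procedure) is to score each biomarker X by its marginal mutual information with the outcome Y as a prognostic proxy, and by the treatment-conditional excess I(X;Y|T) - I(X;Y) as a proxy for treatment-dependent, i.e. predictive, signal.

      import numpy as np
      from sklearn.metrics import mutual_info_score

      rng = np.random.default_rng(4)
      n = 5000
      T = rng.integers(0, 2, n)                       # treatment arm
      X_prog = rng.integers(0, 2, n)                  # prognostic-like biomarker
      X_pred = rng.integers(0, 2, n)                  # predictive-like biomarker
      # Outcome depends on X_prog in both arms, and on X_pred only under treatment.
      p = np.clip(0.2 + 0.4 * X_prog + 0.3 * X_pred * T, 0.0, 1.0)
      Y = rng.binomial(1, p)

      def cond_mi(x, y, t):
          # I(X;Y|T) = sum_t p(t) * I(X;Y | T=t), estimated per treatment arm.
          return sum((t == a).mean() * mutual_info_score(x[t == a], y[t == a]) for a in np.unique(t))

      for name, x in (("prognostic-like", X_prog), ("predictive-like", X_pred)):
          mi = mutual_info_score(x, Y)
          print(name, round(mi, 4), round(cond_mi(x, Y, T) - mi, 4))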

  37. Predicting Children's Reading and Mathematics Achievement from Early Quantitative Knowledge and Domain-General Cognitive Abilities

    PubMed Central

    Chu, Felicia W.; vanMarle, Kristy; Geary, David C.

    2016-01-01

    One hundred children (44 boys) participated in a 3-year longitudinal study of the development of basic quantitative competencies and the relation between these competencies and later mathematics and reading achievement. The children's preliteracy knowledge, intelligence, executive functions, and parental educational background were also assessed. The quantitative tasks assessed a broad range of symbolic and nonsymbolic knowledge and were administered four times across 2 years of preschool. Mathematics achievement was assessed at the end of each of 2 years of preschool, and mathematics and word reading achievement were assessed at the end of kindergarten. Our goals were to determine how domain-general abilities contribute to growth in children's quantitative knowledge and to determine how domain-general and domain-specific abilities contribute to children's preschool mathematics achievement and kindergarten mathematics and reading achievement. We first identified four core quantitative competencies (e.g., knowledge of the cardinal value of number words) that predict later mathematics achievement. The domain-general abilities were then used to predict growth in these competencies across 2 years of preschool, and the combination of domain-general abilities, preliteracy skills, and core quantitative competencies were used to predict mathematics achievement across preschool and mathematics and word reading achievement at the end of kindergarten. Both intelligence and executive functions predicted growth in the four quantitative competencies, especially across the first year of preschool. A combination of domain-general and domain-specific competencies predicted preschoolers' mathematics achievement, with a trend for domain-specific skills to be more strongly related to achievement at the beginning of preschool than at the end of preschool. Preschool preliteracy skills, sensitivity to the relative quantities of collections of objects, and cardinal knowledge predicted

  38. Predicting Children's Reading and Mathematics Achievement from Early Quantitative Knowledge and Domain-General Cognitive Abilities.

    PubMed

    Chu, Felicia W; vanMarle, Kristy; Geary, David C

    2016-01-01

    One hundred children (44 boys) participated in a 3-year longitudinal study of the development of basic quantitative competencies and the relation between these competencies and later mathematics and reading achievement. The children's preliteracy knowledge, intelligence, executive functions, and parental educational background were also assessed. The quantitative tasks assessed a broad range of symbolic and nonsymbolic knowledge and were administered four times across 2 years of preschool. Mathematics achievement was assessed at the end of each of 2 years of preschool, and mathematics and word reading achievement were assessed at the end of kindergarten. Our goals were to determine how domain-general abilities contribute to growth in children's quantitative knowledge and to determine how domain-general and domain-specific abilities contribute to children's preschool mathematics achievement and kindergarten mathematics and reading achievement. We first identified four core quantitative competencies (e.g., knowledge of the cardinal value of number words) that predict later mathematics achievement. The domain-general abilities were then used to predict growth in these competencies across 2 years of preschool, and the combination of domain-general abilities, preliteracy skills, and core quantitative competencies were used to predict mathematics achievement across preschool and mathematics and word reading achievement at the end of kindergarten. Both intelligence and executive functions predicted growth in the four quantitative competencies, especially across the first year of preschool. A combination of domain-general and domain-specific competencies predicted preschoolers' mathematics achievement, with a trend for domain-specific skills to be more strongly related to achievement at the beginning of preschool than at the end of preschool. Preschool preliteracy skills, sensitivity to the relative quantities of collections of objects, and cardinal knowledge predicted

  39. Can quantitative sensory testing predict responses to analgesic treatment?

    PubMed

    Grosen, K; Fischer, I W D; Olesen, A E; Drewes, A M

    2013-10-01

    The role of quantitative sensory testing (QST) in prediction of analgesic effect in humans is scarcely investigated. This updated review assesses its effectiveness in predicting analgesic effects in healthy volunteers, surgical patients and patients with chronic pain. A systematic review of English-language, peer-reviewed articles was conducted using PubMed and Embase (1980-2013). Additional studies were identified by chain searching. Search terms included 'quantitative sensory testing', 'sensory testing' and 'analgesics'. Studies on the relationship between QST and response to analgesic treatment in human adults were included. Appraisal of the methodological quality of the included studies was based on evaluative criteria for prognostic studies. Fourteen studies (including 720 individuals) met the inclusion criteria. Significant correlations were observed between responses to analgesics and several QST parameters including (1) heat pain threshold in experimental human pain, (2) electrical and heat pain thresholds, pressure pain tolerance and suprathreshold heat pain in surgical patients, and (3) electrical and heat pain threshold and conditioned pain modulation in patients with chronic pain. Heterogeneity among studies was observed, especially with regard to the application of QST and the type and use of analgesics. Although promising, the current evidence is not sufficiently robust to recommend the use of any specific QST parameter in predicting analgesic response. Future studies should focus on a range of different experimental pain modalities rather than a single static pain stimulation paradigm. © 2013 European Federation of International Association for the Study of Pain Chapters.

  20. Physics of mind: Experimental confirmations of theoretical predictions.

    PubMed

    Schoeller, Félix; Perlovsky, Leonid; Arseniev, Dmitry

    2018-02-02

    What is common among Newtonian mechanics, statistical physics, thermodynamics, quantum physics, the theory of relativity, astrophysics and the theory of superstrings? All these areas of physics have in common a methodology, which is discussed in the first few lines of the review. Is a physics of the mind possible? Is it possible to describe how a mind adapts in real time to changes in the physical world through a theory based on a few basic laws? From perception and elementary cognition to emotions and abstract ideas allowing high-level cognition and executive functioning, at nearly all levels of study, the mind shows variability and uncertainties. Is it possible to turn psychology and neuroscience into so-called "hard" sciences? This review discusses several established first principles for the description of mind and their mathematical formulations. A mathematical model of mind is derived from these principles. This model includes mechanisms of instincts, emotions, behavior, cognition, concepts, language, intuitions, and imagination. We clarify fundamental notions such as the opposition between the conscious and the unconscious, the knowledge instinct and aesthetic emotions, as well as humans' universal abilities for symbols and meaning. In particular, the review discusses at length the evolutionary and cognitive functions of aesthetic emotions and musical emotions. Several theoretical predictions are derived from the model, some of which have been experimentally confirmed. These empirical results are summarized and we introduce new theoretical developments. Several unsolved theoretical problems are proposed, as well as new experimental challenges for future research. Copyright © 2017. Published by Elsevier B.V.

  1. Theoretical prediction of airplane stability derivatives at subcritical speeds

    NASA Technical Reports Server (NTRS)

    Tulinius, J.; Clever, W.; Nieman, A.; Dunn, K.; Gaither, B.

    1973-01-01

    The theoretical development and application of an analysis for predicting the major static and rotary stability derivatives of a complete airplane are described. The analysis utilizes potential flow theory to compute the surface flow fields and pressures on any configuration that can be synthesized from arbitrary lifting bodies and nonplanar thick lifting panels. The pressures are integrated to obtain section and total configuration loads and moments due to side slip, angle of attack, pitching motion, rolling motion, yawing motion, and control surface deflection. Subcritical compressibility is accounted for by means of the Gothert similarity rule.

  2. A theoretical quantitative genetic study of negative ecological interactions and extinction times in changing environments.

    PubMed

    Jones, Adam G

    2008-04-25

    Rapid human-induced changes in the environment at local, regional and global scales appear to be contributing to population declines and extinctions, resulting in an unprecedented biodiversity crisis. Although in the short term populations can respond ecologically to environmental alterations, in the face of persistent change populations must evolve or become extinct. Existing models of evolution and extinction in changing environments focus only on single species, even though the dynamics of extinction almost certainly depend upon the nature of species interactions. Here, I use a model of quantitative trait evolution in a two-species community to show that negative ecological interactions, such as predation and competition, can produce unexpected results regarding time to extinction. Under some circumstances, negative interactions can be expected to hasten the extinction of species declining in numbers. However, under other circumstances, negative interactions can actually increase times to extinction. This effect occurs across a wide range of parameter values and can be substantial, in some cases allowing a population to persist for 40 percent longer than it would in the absence of the species interaction. This theoretical study indicates that negative species interactions can have unexpected positive effects on times to extinction. Consequently, detailed studies of selection and demographics will be necessary to predict the consequences of species interactions in changing environments for any particular ecological community.

  3. A comparison of naïve and sophisticated subject behavior with game theoretic predictions

    PubMed Central

    McCabe, Kevin A.; Smith, Vernon L.

    2000-01-01

    We use an extensive-form two-person game as the basis for two experiments designed to compare the behavior of two groups of subjects with each other and with the subgame perfect theoretical prediction in an anonymous interaction protocol. The two subject groups are undergraduates and advanced graduate students, the latter having studied economics and game theory. There is no difference in their choice behavior, and both groups depart substantially from game theoretic predictions. We also compare a subsample of the same graduate students with a typical undergraduate sample in an asset trading environment in which inexperienced undergraduates invariably produce substantial departures from the rational expectations prediction. In this way, we examine how robust the results are across two distinct anonymous interactive environments. In the constant sum trading game, the graduate students closely track the predictions of rational theory. Our interpretation is that the graduate student subjects' departure from subgame perfection to achieve cooperative outcomes in the two-person bargaining game is a consequence of a deliberate strategy and is not the result of error or inadequate learning. PMID:10725349

  4. Information-theoretic model selection for optimal prediction of stochastic dynamical systems from data

    NASA Astrophysics Data System (ADS)

    Darmon, David

    2018-03-01

    In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
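
    As a rough, hedged sketch of the general idea (not the author's nonparametric estimator), the snippet below picks a delay-embedding dimension by minimizing a held-out negative log-predictive likelihood, with the conditional density estimated by Gaussian kernel density estimates; the function names and the AR(2) toy series are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def delay_matrix(x, dim):
    """Rows of the form [x_t, ..., x_{t+dim-1}, x_{t+dim}]: a past window plus the next value."""
    n = len(x) - dim
    return np.column_stack([x[i:i + n] for i in range(dim + 1)])

def nlpl_for_dim(train, test, dim):
    """Negative log-predictive likelihood of the next value given the previous `dim` values,
    estimated as log p(past, next) - log p(past) with Gaussian KDEs fit on `train`."""
    joint = gaussian_kde(delay_matrix(train, dim).T)
    past = gaussian_kde(delay_matrix(train, dim).T[:-1])
    rows = delay_matrix(test, dim)
    log_pred = joint.logpdf(rows.T) - past.logpdf(rows[:, :-1].T)
    return -np.mean(log_pred)

# toy stochastic AR(2) series: the criterion should favour dim >= 2
rng = np.random.default_rng(0)
x = np.zeros(3000)
for t in range(2, len(x)):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal(scale=0.5)
train, test = x[:2000], x[2000:]
for dim in range(1, 5):
    print(dim, round(nlpl_for_dim(train, test, dim), 3))
```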

  5. Vortex shedding from obstacles: theoretical frequency prediction

    NASA Astrophysics Data System (ADS)

    Pier, Benoît

    2001-11-01

    The existence of self-sustained oscillations in spatially developing systems is closely related to the presence of a locally absolutely unstable region. A recent investigation of a 'synthetic wake' (a wake with no solid obstacle and no reverse flow region) has proved [Pier and Huerre, J. Fluid Mech. 435, 145 (2001)] that the observed Kármán vortex street is a nonlinear elephant global mode. The same criterion is now shown to hold for real obstacles. Local properties are derived from the unperturbed basic flow computed by enforcing a symmetry condition on the central line. Application of the theoretical criterion then yields the expected Strouhal vortex shedding frequency. The frequency predicted in this way is in excellent agreement with direct numerical simulations of the complete flow. The use of the frequency selection mechanism to control the vortex shedding will also be discussed.

  6. Cancer imaging phenomics toolkit: quantitative imaging analytics for precision diagnostics and predictive modeling of clinical outcome.

    PubMed

    Davatzikos, Christos; Rathore, Saima; Bakas, Spyridon; Pati, Sarthak; Bergman, Mark; Kalarot, Ratheesh; Sridharan, Patmaa; Gastounioti, Aimilia; Jahani, Nariman; Cohen, Eric; Akbari, Hamed; Tunc, Birkan; Doshi, Jimit; Parker, Drew; Hsieh, Michael; Sotiras, Aristeidis; Li, Hongming; Ou, Yangming; Doot, Robert K; Bilello, Michel; Fan, Yong; Shinohara, Russell T; Yushkevich, Paul; Verma, Ragini; Kontos, Despina

    2018-01-01

    The growth of multiparametric imaging protocols has paved the way for quantitative imaging phenotypes that predict treatment response and clinical outcome, reflect underlying cancer molecular characteristics and spatiotemporal heterogeneity, and can guide personalized treatment planning. This growth has underlined the need for efficient quantitative analytics to derive high-dimensional imaging signatures of diagnostic and predictive value in this emerging era of integrated precision diagnostics. This paper presents the cancer imaging phenomics toolkit (CaPTk), a new and dynamically growing software platform for analysis of radiographic images of cancer, currently focusing on brain, breast, and lung cancer. CaPTk leverages the value of quantitative imaging analytics along with machine learning to derive phenotypic imaging signatures, based on two-level functionality. First, image analysis algorithms are used to extract comprehensive panels of diverse and complementary features, such as multiparametric intensity histogram distributions, texture, shape, kinetics, connectomics, and spatial patterns. At the second level, these quantitative imaging signatures are fed into multivariate machine learning models to produce diagnostic, prognostic, and predictive biomarkers. Results from clinical studies in three areas are shown: (i) computational neuro-oncology of brain gliomas for precision diagnostics, prediction of outcome, and treatment planning; (ii) prediction of treatment response for breast and lung cancer; and (iii) risk assessment for breast cancer.

  7. Testing process predictions of models of risky choice: a quantitative model comparison approach

    PubMed Central

    Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard

    2013-01-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472

  8. Theoretical model for plasmonic photothermal response of gold nanostructures solutions

    NASA Astrophysics Data System (ADS)

    Phan, Anh D.; Nga, Do T.; Viet, Nguyen A.

    2018-03-01

    Photothermal effects of gold core-shell nanoparticles and nanorods dispersed in water are theoretically investigated using the transient bioheat equation and the extended Mie theory. Properly calculating the absorption cross section is an extremely crucial milestone to determine the elevation of solution temperature. The nanostructures are assumed to be randomly and uniformly distributed in the solution. Compared to previous experiments, our theoretical temperature increase during laser light illumination provides reasonable qualitative and quantitative agreement in various systems. This approach can be a highly reliable tool to predict photothermal effects in experimentally unexplored structures. We also validate our approach and discuss its limitations.

  9. A comparison of measured and theoretical predictions for STS ascent and entry sonic booms

    NASA Technical Reports Server (NTRS)

    Garcia, F., Jr.; Jones, J. H.; Henderson, H. R.

    1983-01-01

    Sonic boom measurements have been obtained during the flights of STS-1 through 5. During STS-1, 2, and 4, entry sonic boom measurements were obtained and ascent measurements were made on STS-5. The objectives of this measurement program were (1) to define the sonic boom characteristics of the Space Transportation System (STS), (2) to provide a realistic assessment of the validity of existing theoretical prediction techniques, and (3) to establish a level of confidence for predicting future STS configuration sonic boom environments. Detailed evaluation and reporting of the results of this program are in progress. This paper will address only the significant results, mainly those data obtained during the entry of STS-1 at Edwards Air Force Base (EAFB) and the ascent of STS-5 from Kennedy Space Center (KSC). The theoretical prediction technique employed in this analysis is the so-called Thomas program. This prediction technique is a semi-empirical method that requires definition of the near field signatures, detailed trajectory characteristics, and the prevailing meteorological characteristics as input. This analytical procedure then extrapolates the near field signatures from the flight altitude to an altitude consistent with each measurement location.

  10. Universal structural parameter to quantitatively predict metallic glass properties

    DOE PAGES

    Ding, Jun; Cheng, Yong-Qiang; Sheng, Howard; ...

    2016-12-12

    Quantitatively correlating the amorphous structure in metallic glasses (MGs) with their physical properties has been a long-sought goal. Here we introduce 'flexibility volume' as a universal indicator, to bridge the structural state the MG is in with its properties, on both atomic and macroscopic levels. The flexibility volume combines static atomic volume with dynamics information via atomic vibrations that probe local configurational space and interaction between neighbouring atoms. We demonstrate that flexibility volume is a physically appropriate parameter that can quantitatively predict the shear modulus, which is at the heart of many key properties of MGs. Moreover, the new parameter correlates strongly with atomic packing topology, and also with the activation energy for thermally activated relaxation and the propensity for stress-driven shear transformations. These correlations are expected to be robust across a very wide range of MG compositions, processing conditions and length scales.

  11. Quantitative Sensory Testing Predicts Pregabalin Efficacy in Painful Chronic Pancreatitis

    PubMed Central

    Olesen, Søren S.; Graversen, Carina; Bouwense, Stefan A. W.; van Goor, Harry; Wilder-Smith, Oliver H. G.; Drewes, Asbjørn M.

    2013-01-01

    Background A major problem in pain medicine is the lack of knowledge about which treatment suits a specific patient. We tested the ability of quantitative sensory testing to predict the analgesic effect of pregabalin and placebo in patients with chronic pancreatitis. Methods Sixty-four patients with painful chronic pancreatitis received pregabalin (150–300 mg BID) or matching placebo for three consecutive weeks. Analgesic effect was documented in a pain diary based on a visual analogue scale. Responders were defined as patients with a reduction in clinical pain score of 30% or more after three weeks of study treatment compared to baseline recordings. Prior to study medication, pain thresholds to electric skin and pressure stimulation were measured in dermatomes T10 (pancreatic area) and C5 (control area). To eliminate inter-subject differences in absolute pain thresholds an index of sensitivity between stimulation areas was determined (ratio of pain detection thresholds in pancreatic versus control area, ePDT ratio). Pain modulation was recorded by a conditioned pain modulation paradigm. A support vector machine was used to screen sensory parameters for their predictive power of pregabalin efficacy. Results The pregabalin responder group was hypersensitive to electric tetanic stimulation of the pancreatic area (ePDT ratio 1.2 (0.9–1.3)) compared to the non-responder group (ePDT ratio 1.6 (1.5–2.0)) (P = 0.001). The electrical pain detection ratio was predictive of the pregabalin effect with a classification accuracy of 83.9% (P = 0.007). The corresponding sensitivity was 87.5% and specificity was 80.0%. No other parameters were predictive of pregabalin or placebo efficacy. Conclusions The present study provides first evidence that quantitative sensory testing predicts the analgesic effect of pregabalin in patients with painful chronic pancreatitis. The method can be used to tailor pain medication based on the patient's individual sensory profile and thus
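
    The following is a minimal, hedged sketch of how a support-vector classifier could be screened for predicting responder status from an ePDT-ratio-like feature, with leave-one-out cross-validated accuracy, sensitivity and specificity; the data, group sizes and variable names are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# illustrative data: ePDT ratio (pancreatic vs. control pain detection threshold) per
# patient and a binary pregabalin-responder label -- not the study's actual values
rng = np.random.default_rng(1)
epdt_ratio = np.r_[rng.normal(1.2, 0.2, 30), rng.normal(1.6, 0.25, 34)]
responder = np.r_[np.ones(30, dtype=int), np.zeros(34, dtype=int)]
X = epdt_ratio.reshape(-1, 1)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
pred = cross_val_predict(clf, X, responder, cv=LeaveOneOut())

tp = np.sum((pred == 1) & (responder == 1))
tn = np.sum((pred == 0) & (responder == 0))
accuracy = (tp + tn) / len(responder)
sensitivity = tp / np.sum(responder == 1)
specificity = tn / np.sum(responder == 0)
print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```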

  12. Perceived Attributes Predict Course Management System Adopter Status

    ERIC Educational Resources Information Center

    Keesee, Gayla S.; Shepard, MaryFriend

    2011-01-01

    This quantitative, nonexperimental study utilized Rogers's diffusion of innovation theory as the theoretical base to determine instructors' perceptions of the attributes (relative advantage, compatibility, complexity, trialability, observability) of the course management system used in order to predict adopter status. The study used a convenience…

  13. Theoretical predictions of latitude dependencies in the solar wind

    NASA Technical Reports Server (NTRS)

    Winge, C. R., Jr.; Coleman, P. J., Jr.

    1974-01-01

    Results are presented which were obtained with the Winge-Coleman model for theoretical predictions of latitudinal dependencies in the solar wind. A first-order expansion is described which allows analysis of first-order latitudinal variations in the coronal boundary conditions and results in a second-order partial differential equation for the perturbation stream function. Latitudinal dependencies are analytically separated out in the form of Legendre polynomials and their derivative, and are reduced to the solution of radial differential equations. This analysis is shown to supply an estimate of how large the coronal variation in latitude must be to produce an 11 km/sec/deg gradient in the radial velocity of the solar wind, assuming steady-state processes.

  14. Prediction of Coronal Mass Ejections From Vector Magnetograms: Quantitative Measures as Predictors

    NASA Technical Reports Server (NTRS)

    Falconer, D. A.; Moore, R. L.; Gary, G. A.; Rose, M. Franklin (Technical Monitor)

    2001-01-01

    We derived two quantitative measures of an active region's global nonpotentiality from the region's vector magnetogram, 1) the net current (I_N), and 2) the length of the strong-shear, strong-field main neutral line (L_SS), and used these two measures in a pilot study of the CME productivity of 4 active regions. We compared the global nonpotentiality measures to the active regions' CME productivity determined from GOES and Yohkoh/SXT observations. We found that two of the active regions were highly globally nonpotential and were CME productive, while the other two active regions had little global nonpotentiality and produced no CMEs. At the Fall 2000 AGU, we reported on an expanded study (12 active regions and 17 magnetograms) in which we evaluated four quantitative global measures of an active region's magnetic field and compared these measures with the CME productivity. The four global measures (all derived from MSFC vector magnetograms) included our two previous measures (I_N and L_SS) as well as two new ones, the total magnetic flux (Φ) (a measure of an active region's size), and the normalized twist (α = μI_N/Φ). We found that the three quantitative measures of global nonpotentiality (I_N, L_SS, α) were all well correlated (greater than 99% confidence level) with an active region's CME productivity within plus or minus 2 days of the day of the magnetogram. We will now report on our findings of how good our quantitative measures are as predictors of active-region CME productivity, using only CMEs that occurred after the magnetogram. We report the preliminary skill test of these quantitative measures as predictors. We compare the CME prediction success of our quantitative measures to the CME prediction success based on an active region's past CME productivity. We examine the cases of the handful of false positives and false negatives to look for improvements to our predictors. This work is funded by NSF through the Space

  15. Theoretical Predictions of Cross-Sections of the Super-Heavy Elements

    NASA Astrophysics Data System (ADS)

    Bouriquet, B.; Kosenko, G.; Abe, Y.

    The evaluation of the residue cross-sections of reactions synthesising superheavy elements has been achieved by the combination of the two-step model for fusion and the evaporation code (KEWPIE) for survival probability. The theoretical scheme of those calculations is presented, and some encouraging results are given, together with some difficulties. With this approach, the measured excitation functions of the 1n reactions producing elements with Z=108, 110, 111 and 112 are well reproduced. Thus, the model has been used to predict the cross-sections of the reactions leading to the formation of the elements with Z=113 and Z=114.

  16. Development of Nomarski microscopy for quantitative determination of surface topography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartman, J. S.; Gordon, R. L.; Lessor, D. L.

    1979-01-01

    The use of Nomarski differential interference contrast (DIC) microscopy has been extended to provide nondestructive, quantitative analysis of a sample's surface topography. Theoretical modeling has determined the dependence of the image intensity on the microscope's optical components, the sample's optical properties, and the sample's surface orientation relative to the microscope. Results include expressions to allow the inversion of image intensity data to determine sample surface slopes. A commercial Nomarski system has been modified and characterized to allow the evaluation of the optical model. Data have been recorded with smooth, planar samples that verify the theoretical predictions.

  17. A new theoretical approach to analyze complex processes in cytoskeleton proteins.

    PubMed

    Li, Xin; Kolomeisky, Anatoly B

    2014-03-20

    Cytoskeleton proteins are filament structures that support a large number of important biological processes. These dynamic biopolymers exist in nonequilibrium conditions stimulated by hydrolysis chemical reactions in their monomers. Current theoretical methods provide a comprehensive picture of biochemical and biophysical processes in cytoskeleton proteins. However, the description is only qualitative under biologically relevant conditions because the theoretical mean-field models in use neglect correlations. We develop a new theoretical method to describe dynamic processes in cytoskeleton proteins that takes into account spatial correlations in the chemical composition of these biopolymers. Our approach is based on analysis of probabilities of different clusters of subunits. It allows us to obtain exact analytical expressions for a variety of dynamic properties of cytoskeleton filaments. By comparing theoretical predictions with Monte Carlo computer simulations, it is shown that our method provides a fully quantitative description of complex dynamic phenomena in cytoskeleton proteins under all conditions.

  18. Quantitative fetal fibronectin and cervical length to predict preterm birth in asymptomatic women with previous cervical surgery.

    PubMed

    Vandermolen, Brooke I; Hezelgrave, Natasha L; Smout, Elizabeth M; Abbott, Danielle S; Seed, Paul T; Shennan, Andrew H

    2016-10-01

    Quantitative fetal fibronectin testing has demonstrated accuracy for prediction of spontaneous preterm birth in asymptomatic women with a history of preterm birth. Predictive accuracy in women with previous cervical surgery (a potentially different risk mechanism) is not known. We sought to compare the predictive accuracy of cervicovaginal fluid quantitative fetal fibronectin and cervical length testing in asymptomatic women with previous cervical surgery to that in women with 1 previous preterm birth. We conducted a prospective blinded secondary analysis of a larger observational study of cervicovaginal fluid quantitative fetal fibronectin concentration in asymptomatic women measured with a Hologic 10Q system (Hologic, Marlborough, MA). Prediction of spontaneous preterm birth (<30, <34, and <37 weeks) with cervicovaginal fluid quantitative fetal fibronectin concentration in primiparous women who had undergone at least 1 invasive cervical procedure (n = 473) was compared with prediction in women who had previous spontaneous preterm birth, preterm prelabor rupture of membranes, or late miscarriage (n = 821). Relationship with cervical length was explored. The rate of spontaneous preterm birth <34 weeks in the cervical surgery group was 3% compared with 9% in previous spontaneous preterm birth group. Receiver operating characteristic curves comparing quantitative fetal fibronectin for prediction at all 3 gestational end points were comparable between the cervical surgery and previous spontaneous preterm birth groups (34 weeks: area under the curve, 0.78 [95% confidence interval 0.64-0.93] vs 0.71 [95% confidence interval 0.64-0.78]; P = .39). Prediction of spontaneous preterm birth using cervical length compared with quantitative fetal fibronectin for prediction of preterm birth <34 weeks of gestation offered similar prediction (area under the curve, 0.88 [95% confidence interval 0.79-0.96] vs 0.77 [95% confidence interval 0.62-0.92], P = .12 in the cervical

  19. Drift mobility of photo-electrons in organic molecular crystals: Quantitative comparison between theory and experiment

    NASA Astrophysics Data System (ADS)

    Reineker, P.; Kenkre, V. M.; Kühne, R.

    1981-08-01

    A quantitative comparison is given of a simple theoretical prediction for the drift mobility of photo-electrons in organic molecular crystals, calculated within the model of the coupled band-like and hopping motion, with experiments in naphthalene by Schein et al. and Karl et al.

  20. Vibrational algorithms for quantitative crystallographic analyses of hydroxyapatite-based biomaterials: I, theoretical foundations.

    PubMed

    Pezzotti, Giuseppe; Zhu, Wenliang; Boffelli, Marco; Adachi, Tetsuya; Ichioka, Hiroaki; Yamamoto, Toshiro; Marunaka, Yoshinori; Kanamura, Narisato

    2015-05-01

    The Raman spectroscopic method has been applied quantitatively to the analysis of local crystallographic orientation in both single-crystal hydroxyapatite and human teeth. Raman selection rules for all the vibrational modes of the hexagonal structure were expanded into explicit functions of Euler angles in space and six Raman tensor elements (RTE). A theoretical treatment has also been put forward according to the orientation distribution function (ODF) formalism, which allows one to resolve the statistical orientation patterns of the nm-sized hydroxyapatite crystallites comprised in the Raman microprobe. Closed-form solutions could be obtained for the Euler angles and their statistical distributions resolved with respect to the direction of the average texture axis. Polarized Raman spectra from single-crystalline hydroxyapatite and textured polycrystalline (tooth enamel) samples were compared, and a validation of the proposed Raman method could be obtained through confirming the agreement between RTE values obtained from different samples.

  1. Theoretical prediction of crystallization kinetics of a supercooled Lennard-Jones fluid

    NASA Astrophysics Data System (ADS)

    Gunawardana, K. G. S. H.; Song, Xueyu

    2018-05-01

    The first order curvature correction to the crystal-liquid interfacial free energy is calculated using a theoretical model based on the interfacial excess thermodynamic properties. The correction parameter (δ), which is analogous to the Tolman length at a liquid-vapor interface, is found to be 0.48 ± 0.05 for a Lennard-Jones (LJ) fluid. We show that this curvature correction is crucial in predicting the nucleation barrier when the size of the crystal nucleus is small. The thermodynamic driving force (Δμ) corresponding to available simulated nucleation conditions is also calculated by combining the simulated data with a classical density functional theory. In this paper, we show that the classical nucleation theory is capable of predicting the nucleation barrier with excellent agreement to the simulated results when the curvature correction to the interfacial free energy is accounted for.
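
    For orientation, the relations the abstract alludes to can be written compactly as below; this is a hedged summary using common classical-nucleation-theory conventions (sign conventions and the exact form of the curvature correction vary between formulations), not a reproduction of the authors' equations.

```latex
% Tolman-like first-order curvature correction to the crystal-liquid interfacial free energy
\gamma(R) \simeq \gamma_{\infty}\left(1 - \frac{2\delta}{R}\right)

% Classical nucleation theory for a spherical nucleus of radius R
% (\rho_s: number density of the crystal, \Delta\mu > 0: driving force per particle)
\Delta G(R) = -\tfrac{4}{3}\pi R^{3}\rho_s\,\Delta\mu + 4\pi R^{2}\,\gamma(R),
\qquad
R^{*} \approx \frac{2\gamma}{\rho_s\,\Delta\mu},
\qquad
\Delta G^{*} \approx \frac{16\pi\gamma^{3}}{3\,\rho_s^{2}\,\Delta\mu^{2}}
```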

  2. Predictive power of theoretical modelling of the nuclear mean field: examples of improving predictive capacities

    NASA Astrophysics Data System (ADS)

    Dedes, I.; Dudek, J.

    2018-03-01

    We examine the effects of the parametric correlations on the predictive capacities of the theoretical modelling keeping in mind the nuclear structure applications. The main purpose of this work is to illustrate the method of establishing the presence and determining the form of parametric correlations within a model as well as an algorithm of elimination by substitution (see text) of parametric correlations. We examine the effects of the elimination of the parametric correlations on the stabilisation of the model predictions further and further away from the fitting zone. It follows that the choice of the physics case and the selection of the associated model are of secondary importance in this case. Under these circumstances we give priority to the relative simplicity of the underlying mathematical algorithm, provided the model is realistic. Following such criteria, we focus specifically on an important but relatively simple case of doubly magic spherical nuclei. To profit from the algorithmic simplicity we chose working with the phenomenological spherically symmetric Woods–Saxon mean-field. We employ two variants of the underlying Hamiltonian, the traditional one involving both the central and the spin orbit potential in the Woods–Saxon form and the more advanced version with the self-consistent density-dependent spin–orbit interaction. We compare the effects of eliminating of various types of correlations and discuss the improvement of the quality of predictions (‘predictive power’) under realistic parameter adjustment conditions.

  3. Quantitative imaging features of pretreatment CT predict volumetric response to chemotherapy in patients with colorectal liver metastases.

    PubMed

    Creasy, John M; Midya, Abhishek; Chakraborty, Jayasree; Adams, Lauryn B; Gomes, Camilla; Gonen, Mithat; Seastedt, Kenneth P; Sutton, Elizabeth J; Cercek, Andrea; Kemeny, Nancy E; Shia, Jinru; Balachandran, Vinod P; Kingham, T Peter; Allen, Peter J; DeMatteo, Ronald P; Jarnagin, William R; D'Angelica, Michael I; Do, Richard K G; Simpson, Amber L

    2018-06-19

    This study investigates whether quantitative image analysis of pretreatment CT scans can predict volumetric response to chemotherapy for patients with colorectal liver metastases (CRLM). Patients treated with chemotherapy for CRLM (hepatic artery infusion (HAI) combined with systemic, or systemic alone) were included in the study. Patients were imaged at baseline and approximately 8 weeks after treatment. Response was measured as the percentage change in tumour volume from baseline. Quantitative imaging features were derived from the index hepatic tumour on pretreatment CT, and features statistically significant on univariate analysis were included in a linear regression model to predict volumetric response. The regression model was constructed from 70% of the data, while 30% were reserved for testing. Test data were input into the trained model. Model performance was evaluated with mean absolute prediction error (MAPE) and R². Clinicopathologic factors were assessed for correlation with response. 157 patients were included, split into training (n = 110) and validation (n = 47) sets. MAPE from the multivariate linear regression model was 16.5% (R² = 0.774) and 21.5% in the training and validation sets, respectively. Stratified by HAI utilisation, MAPE in the validation set was 19.6% for HAI and 25.1% for systemic chemotherapy alone. Clinical factors associated with differences in median tumour response were treatment strategy, systemic chemotherapy regimen, age and KRAS mutation status (p < 0.05). Quantitative imaging features extracted from pretreatment CT are promising predictors of volumetric response to chemotherapy in patients with CRLM. Pretreatment predictors of response have the potential to better select patients for specific therapies. • Colorectal liver metastases (CRLM) are downsized with chemotherapy but predicting the patients that will respond to chemotherapy is currently not possible. • Heterogeneity and enhancement patterns of CRLM can be
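
    A minimal sketch of the modelling pipeline described above (70/30 split, multivariate linear regression on pretreatment imaging features, mean absolute prediction error), assuming a generic feature table; the feature names and synthetic data are placeholders, not the study's CT features or patients.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# placeholder feature table: rows = patients, columns = pretreatment CT texture/intensity
# features plus the observed percentage change in tumour volume ("response")
rng = np.random.default_rng(42)
df = pd.DataFrame(rng.normal(size=(157, 5)),
                  columns=["entropy", "kurtosis", "contrast", "homogeneity", "mean_hu"])
df["response"] = 0.4 * df["entropy"] - 0.3 * df["contrast"] + rng.normal(scale=0.2, size=len(df))

X, y = df.drop(columns="response"), df["response"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = LinearRegression().fit(X_tr, y_tr)
mape_tr = np.mean(np.abs(model.predict(X_tr) - y_tr))   # mean absolute prediction error
mape_te = np.mean(np.abs(model.predict(X_te) - y_te))
print(f"train MAPE={mape_tr:.3f}  R^2={model.score(X_tr, y_tr):.3f}  test MAPE={mape_te:.3f}")
```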

  4. Assessing deep and shallow learning methods for quantitative prediction of acute chemical toxicity.

    PubMed

    Liu, Ruifeng; Madore, Michael; Glover, Kyle P; Feasel, Michael G; Wallqvist, Anders

    2018-05-02

    Animal-based methods for assessing chemical toxicity are struggling to meet testing demands. In silico approaches, including machine-learning methods, are promising alternatives. Recently, deep neural networks (DNNs) were evaluated and reported to outperform other machine-learning methods for quantitative structure-activity relationship modeling of molecular properties. However, most of the reported performance evaluations relied on global performance metrics, such as the root mean squared error (RMSE) between the predicted and experimental values of all samples, without considering the impact of sample distribution across the activity spectrum. Here, we carried out an in-depth analysis of DNN performance for quantitative prediction of acute chemical toxicity using several datasets. We found that the overall performance of DNN models on datasets of up to 30,000 compounds was similar to that of random forest (RF) models, as measured by the RMSE and correlation coefficients between the predicted and experimental results. However, our detailed analyses demonstrated that global performance metrics are inappropriate for datasets with a highly uneven sample distribution, because they show a strong bias for the most populous compounds along the toxicity spectrum. For highly toxic compounds, DNN and RF models trained on all samples performed much worse than the global performance metrics indicated. Surprisingly, our variable nearest neighbor method, which utilizes only structurally similar compounds to make predictions, performed reasonably well, suggesting that information of close near neighbors in the training sets is a key determinant of acute toxicity predictions.
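
    The central point above, that a global error metric can hide poor performance on the sparse, highly toxic end of the scale, can be illustrated with a small sketch; the random-forest regressor, synthetic descriptors and decile split below are illustrative assumptions, not the authors' datasets or models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# synthetic stand-in for an acute-toxicity dataset: most compounds sit in the middle of the
# activity range, few are highly toxic -- a global RMSE is dominated by the majority
rng = np.random.default_rng(7)
X = rng.normal(size=(5000, 20))                                  # e.g. molecular descriptors
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=5000)   # e.g. a -log(LD50)-like endpoint

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print("global RMSE:", round(rmse(pred, y_te), 3))

# stratified view: error within the most "toxic" decile vs. the rest
cut = np.quantile(y_te, 0.9)
hi = y_te >= cut
print("RMSE, top decile:", round(rmse(pred[hi], y_te[hi]), 3))
print("RMSE, remainder :", round(rmse(pred[~hi], y_te[~hi]), 3))
```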

  5. Critical Quantitative Inquiry in Context

    ERIC Educational Resources Information Center

    Stage, Frances K.; Wells, Ryan S.

    2014-01-01

    This chapter briefly traces the development of the concept of critical quantitative inquiry, provides an expanded conceptualization of the tasks of critical quantitative research, offers theoretical explanation and justification for critical research using quantitative methods, and previews the work of quantitative criticalists presented in this…

  6. Prediction of the amount of urban waste solids by applying a gray theoretical model.

    PubMed

    Li, Xiao-Ming; Zeng, Guang-Ming; Wang, Ming; Liu, Jin-Jin

    2003-01-01

    Urban waste solids are now becoming one of the most crucial environmental problems. There are several different kinds of technologies normally used for waste solids disposal, among which landfill is more favorable in China than others, especially for urban waste solids. Most of the design work up to now has been based on a rough estimation of the amount of urban waste solids without any theoretical support, which leads to a series of problems. To meet the basic information requirements for the design work, the amount of urban waste solids was predicted in this research by applying the gray theoretical model GM(1,1) through non-linear differential equation simulation. The model parameters were estimated with the least squares method (LSM) by running a MATLAB program, and the hypothesis test results show that the residuals between the predicted values and the actual values approximately comply with the normal distribution N(0, 0.21²), and the probability of the residual falling within the range (-0.17, 0.19) is more than 95%, which indicates that the model can be well used for the prediction of the amount of waste solids; this has already been verified by the latest two years of data on urban waste solids from Loudi City, China. With this model, the predicted amount of waste solids produced in Loudi City over the next 30 years is 8,049,000 tons in total.
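
    A minimal sketch of the GM(1,1) grey prediction procedure referred to above (accumulated generating operation, least-squares estimation of the development and grey-input coefficients, then differencing the fitted response); the annual series below is a placeholder, not the Loudi City data, and the function name is illustrative.

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Grey model GM(1,1): fit on the observed series x0 and forecast `horizon` steps ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background (mean) sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # least-squares estimate of a, b
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # response of the whitened equation
    return np.diff(x1_hat, prepend=0.0)[len(x0):]       # restore by differencing, keep forecasts

# placeholder annual waste-solid amounts (10^4 t), not the Loudi City series
history = [11.2, 11.9, 12.8, 13.5, 14.6, 15.4]
print(np.round(gm11_forecast(history, horizon=5), 2))
```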

  7. TheoReTS - An information system for theoretical spectra based on variational predictions from molecular potential energy and dipole moment surfaces

    NASA Astrophysics Data System (ADS)

    Rey, Michaël; Nikitin, Andrei V.; Babikov, Yurii L.; Tyuterev, Vladimir G.

    2016-09-01

    Knowledge of intensities of rovibrational transitions of various molecules and their isotopic species in wide spectral and temperature ranges is essential for the modeling of optical properties of planetary atmospheres, brown dwarfs and for other astrophysical applications. TheoReTS ("Theoretical Reims-Tomsk Spectral data") is an Internet-accessible information system devoted to ab initio based rotationally resolved spectra predictions for some relevant molecular species. All data were generated from potential energy and dipole moment surfaces computed via high-level electronic structure calculations, using variational methods for vibration-rotation energy levels and transitions. When available, empirical corrections to band centers were applied, all line intensities remaining purely ab initio. The current TheoReTS implementation contains information on four-to-six atomic molecules, including phosphine, methane, ethylene, silane, methyl-fluoride, and their isotopic species 13CH4, 12CH3D, 12CH2D2, 12CD4, 13C2H4, … . Predicted hot methane line lists up to T = 2000 K are included. The information system provides the associated software for spectra simulation, including absorption coefficient, absorption and emission cross-sections, transmittance and radiance. The simulations allow Lorentz, Gauss and Voigt line shapes. Rectangular, triangular, Lorentzian, Gaussian, sinc and sinc-squared apparatus functions can be used, with user-defined specifications for broadening parameters and spectral resolution. All information is organized as a relational database with a user-friendly graphical interface according to Model-View-Controller architectural tools. The full-featured web application is written in PHP using the Yii framework and C++ software modules. In case of very large high-temperature line lists, data compression is implemented for fast interactive spectra simulations of a quasi-continual absorption due to the large line density. Applications for the TheoReTS may
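
    As a hedged, stand-alone illustration of one ingredient mentioned above, the Voigt line shape (it does not use or reproduce the TheoReTS PHP/C++ code), the snippet below evaluates the Voigt profile via the Faddeeva function and applies it to a single line with an illustrative intensity and position.

```python
import numpy as np
from scipy.special import wofz

def voigt(nu, nu0, sigma, gamma):
    """Voigt profile: convolution of a Gaussian (std sigma) with a Lorentzian (HWHM gamma),
    evaluated with the Faddeeva function; normalised to unit area."""
    z = ((nu - nu0) + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

# toy absorption coefficient: a single line of integrated intensity S at nu0 (illustrative values)
nu = np.linspace(990.0, 1010.0, 2001)          # wavenumber grid, cm^-1
S, nu0 = 1.0e-20, 1000.0                       # line intensity and position
alpha = S * voigt(nu, nu0, sigma=0.02, gamma=0.05)
print(f"peak absorption coefficient: {alpha.max():.3e} at {nu[np.argmax(alpha)]:.2f} cm^-1")
```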

  8. DOSIMETRY MODELING OF INHALED FORMALDEHYDE: BINNING NASAL FLUX PREDICTIONS FOR QUANTITATIVE RISK ASSESSMENT

    EPA Science Inventory

    Dosimetry Modeling of Inhaled Formaldehyde: Binning Nasal Flux Predictions for Quantitative Risk Assessment. Kimbell, J.S., Overton, J.H., Subramaniam, R.P., Schlosser, P.M., Morgan, K.T., Conolly, R.B., and Miller, F.J. (2001). Toxicol. Sci. 000, 000:000.

    Interspecies e...

  9. Quantitative modelling in cognitive ergonomics: predicting signals passed at danger.

    PubMed

    Moray, Neville; Groeger, John; Stanton, Neville

    2017-02-01

    This paper shows how to combine field observations, experimental data and mathematical modelling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example, we consider a major railway accident. In 1999, a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, 'black box' data, and accident and engineering reports to construct a case history of the accident. We show how to combine field data with mathematical modelling to estimate the probability that the driver observed and identified the state of the signals, and checked their status. Our methodology can explain the SPAD ('Signal Passed At Danger'), generate recommendations about signal design and placement, and provide quantitative guidance for the design of safer railway systems' speed limits and the location of signals. Practitioner Summary: Detailed ergonomic analysis of railway signals and rail infrastructure reveals problems of signal identification at this location. A record of driver eye movements measures attention, from which a quantitative model for signal placement and permitted speeds can be derived. The paper is an example of how to combine field data, basic research and mathematical modelling to solve ergonomic design problems.

  10. Quantitative chest computed tomography as a means of predicting exercise performance in severe emphysema.

    PubMed

    Crausman, R S; Ferguson, G; Irvin, C G; Make, B; Newell, J D

    1995-06-01

    We assessed the value of quantitative high-resolution computed tomography (CT) as a diagnostic and prognostic tool in smoking-related emphysema. We performed an inception cohort study of 14 patients referred with emphysema. The diagnosis of emphysema was based on a compatible history, physical examination, chest radiograph, CT scan of the lung, and pulmonary physiologic evaluation. As a group, those who underwent exercise testing were hyperinflated (percentage predicted total lung capacity +/- standard error of the mean = 133 +/- 9%), and there was evidence of air trapping (percentage predicted residual volume = 318 +/- 31%) and airflow limitation (forced expiratory volume in 1 sec [FEV1] = 40 +/- 7%). The exercise performance of the group was severely limited (maximum achievable workload = 43 +/- 6%) and was characterized by prominent ventilatory, gas exchange, and pulmonary vascular abnormalities. The quantitative CT index was markedly elevated in all patients (76 +/- 9; n = 14; normal < 4). There were correlations between this quantitative CT index and measures of airflow limitation (FEV1 r2 = .34, p = .09; FEV1/forced vital capacity r2 = .46, p = .04) and between maximum workload achieved (r2 = .93, p = .0001) and maximum oxygen utilization (r2 = .83, p = .0007). Quantitative chest CT assessment of disease severity is correlated with the degree of airflow limitation and exercise impairment in pulmonary emphysema.
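
    The correlations quoted above are simple univariate fits; as a hedged illustration of how such r² and p values are obtained, the sketch below regresses two outcome measures on a CT index with scipy's linregress, using synthetic paired values rather than the study's 14 patients.

```python
import numpy as np
from scipy.stats import linregress

# illustrative paired measurements (not the study data): quantitative CT emphysema index
# vs. percent-predicted FEV1 and vs. percent-predicted maximum workload
rng = np.random.default_rng(3)
ct_index = rng.uniform(40, 120, size=14)
fev1_pct = 90 - 0.5 * ct_index + rng.normal(scale=8, size=14)
workload_pct = 100 - 0.7 * ct_index + rng.normal(scale=4, size=14)

for name, y in [("FEV1 %pred", fev1_pct), ("max workload %pred", workload_pct)]:
    fit = linregress(ct_index, y)
    print(f"{name}: r^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.4f}")
```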

  11. Predictive value of EEG in postanoxic encephalopathy: A quantitative model-based approach.

    PubMed

    Efthymiou, Evdokia; Renzel, Roland; Baumann, Christian R; Poryazova, Rositsa; Imbach, Lukas L

    2017-10-01

    The majority of comatose patients after cardiac arrest do not regain consciousness due to severe postanoxic encephalopathy. Early and accurate outcome prediction is therefore essential in determining further therapeutic interventions. The electroencephalogram is a standardized and commonly available tool used to estimate prognosis in postanoxic patients. The identification of pathological EEG patterns with poor prognosis relies however primarily on visual EEG scoring by experts. We introduced a model-based approach of EEG analysis (state space model) that allows for an objective and quantitative description of spectral EEG variability. We retrospectively analyzed standard EEG recordings in 83 comatose patients after cardiac arrest between 2005 and 2013 in the intensive care unit of the University Hospital Zürich. Neurological outcome was assessed one month after cardiac arrest using the Cerebral Performance Category. For a dynamic and quantitative EEG analysis, we implemented a model-based approach (state space analysis) to quantify EEG background variability independent from visual scoring of EEG epochs. Spectral variability was compared between groups and correlated with clinical outcome parameters and visual EEG patterns. Quantitative assessment of spectral EEG variability (state space velocity) revealed significant differences between patients with poor and good outcome after cardiac arrest: Lower mean velocity in temporal electrodes (T4 and T5) was significantly associated with poor prognostic outcome (p<0.005) and correlated with independently identified visual EEG patterns such as generalized periodic discharges (p<0.02). Receiver operating characteristic (ROC) analysis confirmed the predictive value of lower state space velocity for poor clinical outcome after cardiac arrest (AUC 80.8, 70% sensitivity, 15% false positive rate). Model-based quantitative EEG analysis (state space analysis) provides a novel, complementary marker for prognosis in postanoxic
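
    A minimal, hedged sketch of the kind of ROC evaluation reported above for a scalar EEG marker (here called "velocity" for the state-space velocity); the group sizes, values and threshold are synthetic placeholders, not the Zürich cohort.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# illustrative data: lower "state space velocity" over temporal channels in patients
# with poor outcome -- synthetic values, not the study's EEG measurements
rng = np.random.default_rng(5)
velocity = np.r_[rng.normal(0.8, 0.3, 40), rng.normal(1.4, 0.35, 43)]   # poor, then good outcome
poor_outcome = np.r_[np.ones(40, dtype=int), np.zeros(43, dtype=int)]

# lower velocity should indicate poor outcome, so score with the negated marker
score = -velocity
auc = roc_auc_score(poor_outcome, score)
fpr, tpr, _ = roc_curve(poor_outcome, score)

# best sensitivity achievable while keeping the false-positive rate at or below 15%
mask = fpr <= 0.15
best_sens = tpr[mask].max() if mask.any() else float("nan")
print(f"AUC = {auc:.2f}; sensitivity {best_sens:.2f} at FPR <= 0.15")
```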

  12. Quantitative Missense Variant Effect Prediction Using Large-Scale Mutagenesis Data.

    PubMed

    Gray, Vanessa E; Hause, Ronald J; Luebeck, Jens; Shendure, Jay; Fowler, Douglas M

    2018-01-24

    Large datasets describing the quantitative effects of mutations on protein function are becoming increasingly available. Here, we leverage these datasets to develop Envision, which predicts the magnitude of a missense variant's molecular effect. Envision combines 21,026 variant effect measurements from nine large-scale experimental mutagenesis datasets, a hitherto untapped training resource, with a supervised, stochastic gradient boosting learning algorithm. Envision outperforms other missense variant effect predictors both on large-scale mutagenesis data and on an independent test dataset comprising 2,312 TP53 variants whose effects were measured using a low-throughput approach. This dataset was never used for hyperparameter tuning or model training and thus serves as an independent validation set. Envision prediction accuracy is also more consistent across amino acids than other predictors. Finally, we demonstrate that Envision's performance improves as more large-scale mutagenesis data are incorporated. We precompute Envision predictions for every possible single amino acid variant in human, mouse, frog, zebrafish, fruit fly, worm, and yeast proteomes (https://envision.gs.washington.edu/). Copyright © 2017 Elsevier Inc. All rights reserved.
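
    The abstract describes a supervised, stochastic gradient boosting regressor trained on variant-level features; the sketch below shows that general recipe with scikit-learn (subsampling gives the "stochastic" part). The feature matrix, target and hyperparameters are illustrative assumptions, not the Envision training data or settings.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# placeholder variant-level features (e.g. substitution scores, conservation, structural
# context) and a quantitative functional-effect score -- not the published training set
rng = np.random.default_rng(11)
X = rng.normal(size=(21026, 8))
y = 1.0 - 0.4 * X[:, 0] + 0.2 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=len(X))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(
    n_estimators=300, learning_rate=0.05, subsample=0.7,  # subsample < 1 => stochastic boosting
    max_depth=3, random_state=0,
).fit(X_tr, y_tr)
print("held-out R^2:", round(r2_score(y_te, model.predict(X_te)), 3))
```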

  13. Theoretical study on removal rate and surface roughness in grinding a RB-SiC mirror with a fixed abrasive.

    PubMed

    Wang, Xu; Zhang, Xuejun

    2009-02-10

    This paper is based on a microinteraction principle for fabricating a RB-SiC material with a fixed abrasive. The influence of the depth formed on a RB-SiC workpiece by a diamond abrasive on the material removal rate and the surface roughness of an optical component is quantitatively discussed. A mathematical model of the material removal rate and simulation results for the surface roughness are presented. Despite some small, predictable differences between the experimental results and the theoretical anticipation, the actual removal rate matches the theoretical prediction very well. The fixed abrasive technology's characteristic of easy prediction is of great significance in the optical fabrication industry, so this brand-new fixed abrasive technology has wide application possibilities.

  14. Associated t t̄ H production at the LHC: Theoretical predictions at NLO+NNLL accuracy

    NASA Astrophysics Data System (ADS)

    Kulesza, Anna; Motyka, Leszek; Stebel, Tomasz; Theeuwes, Vincent

    2018-06-01

    We perform threshold resummation of soft gluon corrections to the total cross section and the invariant mass distribution for the process p p → t t̄ H. The resummation is carried out at next-to-next-to-leading-logarithmic (NNLL) accuracy using the direct QCD Mellin space technique in the three-particle invariant mass kinematics. After presenting analytical expressions we discuss the impact of resummation on the numerical predictions for the associated Higgs boson production with top quarks at the LHC. We find that next-to-leading-order (NLO)+NNLL resummation leads to predictions for which the central values are remarkably stable with respect to scale variation and for which theoretical uncertainties are reduced in comparison to NLO predictions.

  15. Quantitative research on critical thinking and predicting nursing students' NCLEX-RN performance.

    PubMed

    Romeo, Elizabeth M

    2010-07-01

    The concept of critical thinking has been influential in several disciplines. Both education and nursing in general have been attempting to define, teach, and measure this concept for decades. Nurse educators realize that critical thinking is the cornerstone of the objectives and goals for nursing students. The purpose of this article is to review and analyze quantitative research findings relevant to the measurement of critical thinking abilities and skills in undergraduate nursing students and the usefulness of critical thinking as a predictor of National Council Licensure Examination-Registered Nurse (NCLEX-RN) performance. The specific issues that this integrative review examined include assessment and analysis of the theoretical and operational definitions of critical thinking, theoretical frameworks used to guide the studies, instruments used to evaluate critical thinking skills and abilities, and the role of critical thinking as a predictor of NCLEX-RN outcomes. A list of key assumptions related to critical thinking was formulated. The limitations and gaps in the literature were identified, as well as the types of future research needed in this arena. Copyright 2010, SLACK Incorporated.

  16. Using metal-ligand binding characteristics to predict metal toxicity: quantitative ion character-activity relationships (QICARs).

    PubMed Central

    Newman, M C; McCloskey, J T; Tatara, C P

    1998-01-01

    Ecological risk assessment can be enhanced with predictive models for metal toxicity. Modeling of published data was done under the simplifying assumption that intermetal trends in toxicity reflect relative metal-ligand complex stabilities. This idea has been invoked successfully since 1904 but has yet to be applied widely in quantitative ecotoxicology. Intermetal trends in toxicity were successfully modeled with ion characteristics reflecting metal binding to ligands for a wide range of effects. Most models were useful for predictive purposes based on an F-ratio criterion and cross-validation, but anomalous predictions did occur if speciation was ignored. In general, models for metals with the same valence (i.e., divalent metals) were better than those combining mono-, di-, and trivalent metals. The softness parameter (σp) and the absolute value of the log of the first hydrolysis constant (|log K_OH|) were especially useful in model construction. Also, ΔE0 contributed substantially to several of the two-variable models. In contrast, quantitative attempts to predict metal interactions in binary mixtures based on metal-ligand complex stabilities were not successful. PMID:9860900
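
    As a hedged sketch of the QICAR-style regression described above (toxicity regressed on ion characteristics such as σp and |log K_OH|, checked by cross-validation), the snippet below uses leave-one-out cross-validation to compute a predictive q²; the per-metal numbers are placeholders, not the published values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# illustrative QICAR-style table: per-metal ion characteristics and a toxicity endpoint
# (e.g. a log EC50); all numbers below are placeholders, not the paper's data
softness = np.array([0.10, 0.13, 0.08, 0.06, 0.15, 0.11, 0.09, 0.12])    # sigma_p
abs_log_koh = np.array([7.5, 10.1, 9.0, 11.4, 4.0, 8.0, 9.7, 6.3])       # |log K_OH|
log_ec50 = (-1.2 + 0.3 * abs_log_koh - 12.0 * softness
            + np.random.default_rng(2).normal(0, 0.3, 8))

X = np.column_stack([softness, abs_log_koh])
pred = cross_val_predict(LinearRegression(), X, log_ec50, cv=LeaveOneOut())
press = np.sum((pred - log_ec50) ** 2)                 # predictive residual sum of squares
ss_tot = np.sum((log_ec50 - log_ec50.mean()) ** 2)
print(f"cross-validated q^2 = {1 - press / ss_tot:.2f}")
```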

  17. Applying quantitative adiposity feature analysis models to predict benefit of bevacizumab-based chemotherapy in ovarian cancer patients

    NASA Astrophysics Data System (ADS)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2016-03-01

    How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatments. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without receiving bevacizumab-based chemotherapy using multivariate statistical models built on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied, respectively, to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that, using all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association with either PFS or OS in the group of patients not receiving maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.

  18. Predicting phenolic acid absorption in Caco-2 cells: a theoretical permeability model and mechanistic study.

    PubMed

    Farrell, Tracy L; Poquet, Laure; Dew, Tristan P; Barber, Stuart; Williamson, Gary

    2012-02-01

    There is a considerable need to rationalize the membrane permeability and mechanism of transport for potential nutraceuticals. The aim of this investigation was to develop a theoretical permeability equation, based on a reported descriptive absorption model, enabling calculation of the transcellular component of absorption across Caco-2 monolayers. Published data for Caco-2 permeability of 30 drugs transported by the transcellular route were correlated with the descriptors 1-octanol/water distribution coefficient (log D, pH 7.4) and size, based on molecular mass. Nonlinear regression analysis was used to derive a set of model parameters a', β', and b' with an integrated molecular mass function. The new theoretical transcellular permeability (TTP) model obtained a good fit of the published data (R² = 0.93) and predicted reasonably well (R² = 0.86) the experimental apparent permeability coefficient (P(app)) for nine non-training set compounds reportedly transported by the transcellular route. For the first time, the TTP model was used to predict the absorption characteristics of six phenolic acids, and this original investigation was supported by in vitro Caco-2 cell mechanistic studies, which suggested that deviation of the P(app) value from the predicted transcellular permeability (P(app)(trans)) may be attributed to involvement of active uptake, efflux transporters, or paracellular flux.
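
    The paper names fitted parameters a', β', and b' but the abstract does not give the functional form, so the sketch below only illustrates the general idea of fitting a transcellular permeability model to log D and molecular mass by nonlinear regression. The sigmoidal form, the scipy-based fit, and all data values are assumptions, not the published TTP equation.

```python
import numpy as np
from scipy.optimize import curve_fit

def papp_model(X, a, beta, b):
    """Assumed illustrative form: sigmoid in log D, attenuated by a molecular-mass term."""
    logD, mw = X
    return a / (1.0 + np.exp(-beta * (logD - b))) / np.sqrt(mw)

# Hypothetical training data: distribution coefficient, molecular mass, apparent permeability (cm/s).
logD = np.array([-1.0, 0.2, 1.1, 2.3, 3.0, 0.8, 1.7])
mw   = np.array([180.0, 250.0, 300.0, 220.0, 350.0, 150.0, 275.0])
papp = np.array([2e-6, 8e-6, 2.5e-5, 4.0e-5, 3.2e-5, 1.5e-5, 3.0e-5])

popt, _ = curve_fit(papp_model, (logD, mw), papp, p0=[1e-4, 1.0, 1.0], maxfev=10000)
print("fitted parameters (a, beta, b):", popt)
```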

  19. Comparison of Theoretically Predicted Electromagnetic Heavy Ion Cross Sections with CERN SPS and RHIC Data

    NASA Astrophysics Data System (ADS)

    Baltz, Anthony J.

    2002-10-01

    Theoretical predictions for a number of electromagnetically induced reactions have been compared with available ultrarelativistic heavy ion data. Calculations for three atomic processes have been confronted with CERN SPS data. Theoretically predicted rates are in good agreement with data[1] for bound-electron positron pairs and for ionization of single-electron heavy ions. Furthermore, the exact solution of the semi-classical Dirac equation in the ultrarelativistic limit reproduces the perturbative scaling result seen in data[2] for continuum pairs (i.e. cross sections go as Z_1^2 Z_2^2). In the area of electromagnetically induced nuclear and hadronic physics, mutual Coulomb dissociation predictions are in good agreement with RHIC Zero Degree Calorimeter measurements[3], and calculations of coherent vector meson production accompanied by mutual Coulomb dissociation[4] are in good agreement with RHIC STAR data[5]. [1] H. F. Krause et al., Phys. Rev. Lett. 80, 1190 (1998). [2] C. R. Vane et al., Phys. Rev. A 56, 3682 (1997). [3] Mickey Chiu et al., Phys. Rev. Lett. 89, 012302 (2002). [4] Anthony J. Baltz, Spencer R. Klein, and Joakim Nystrand, Phys. Rev. Lett. 89, 012301 (2002). [5] C. Adler et al., STAR Collaboration, arXiv:nucl-ex/0206004.

  20. Prediction of Coronal Mass Ejections from Vector Magnetograms: Quantitative Measures as Predictors

    NASA Astrophysics Data System (ADS)

    Falconer, D. A.; Moore, R. L.; Gary, G. A.

    2001-05-01

    In a pilot study of 4 active regions (Falconer, D.A. 2001, JGR, in press), we derived two quantitative measures of an active region's global nonpotentiality from the region's vector magnetogram, 1) the net current (IN), and 2) the length of the strong-shear, strong-field main neutral line (LSS), and used these two measures as gauges of the CME productivity of the active regions. We compared the global nonpotentiality measures to the active regions' CME productivity determined from GOES and Yohkoh/SXT observations. We found that two of the active regions were highly globally nonpotential and were CME productive, while the other two active regions had little global nonpotentiality and produced no CMEs. At the Fall 2000 AGU (Falconer, Moore, & Gary, 2000, EOS 81, 48 F998), we reported on an expanded study (12 active regions and 17 magnetograms) in which we evaluated four quantitative global measures of an active region's magnetic field and compared these measures with the CME productivity. The four global measures (all derived from MSFC vector magnetograms) included our two previous measures (IN and LSS) as well as two new ones, the total magnetic flux (Φ) (a measure of an active region's size), and the normalized twist (α = μIN/Φ). We found that the three measures of global nonpotentiality (IN, LSS, α) were all well correlated (>99% confidence level) with an active region's CME productivity within ±2 days of the day of the magnetogram. We will now report on our findings of how good our quantitative measures are as predictors of active-region CME productivity, using only CMEs that occurred after the magnetogram. We report the preliminary skill test of these quantitative measures as predictors. We compare the CME prediction success of our quantitative measures to the CME prediction success based on an active region's past CME productivity. We examine the cases of the handful of false positives and false negatives to look for improvements to our predictors. This work is

  1. The predictive value of quantitative fibronectin testing in combination with cervical length measurement in symptomatic women.

    PubMed

    Bruijn, Merel M C; Kamphuis, Esme I; Hoesli, Irene M; Martinez de Tejada, Begoña; Loccufier, Anne R; Kühnert, Maritta; Helmer, Hanns; Franz, Marie; Porath, Martina M; Oudijk, Martijn A; Jacquemyn, Yves; Schulzke, Sven M; Vetter, Grit; Hoste, Griet; Vis, Jolande Y; Kok, Marjolein; Mol, Ben W J; van Baaren, Gert-Jan

    2016-12-01

    The combination of the qualitative fetal fibronectin test and cervical length measurement has a high negative predictive value for preterm birth within 7 days; however, positive prediction is poor. A new bedside quantitative fetal fibronectin test showed potential additional value over the conventional qualitative test, but there is limited evidence on the combination with cervical length measurement. The purpose of this study was to compare quantitative fetal fibronectin and qualitative fetal fibronectin testing in the prediction of spontaneous preterm birth within 7 days in symptomatic women who undergo cervical length measurement. We performed a European multicenter cohort study in 10 perinatal centers in 5 countries. Women between 24 and 34 weeks of gestation with signs of active labor and intact membranes underwent quantitative fibronectin testing and cervical length measurement. We assessed the risk of preterm birth within 7 days in predefined strata based on fibronectin concentration and cervical length. Of 455 women who were included in the study, 48 women (11%) delivered within 7 days. A combination of cervical length and qualitative fibronectin resulted in the identification of 246 women who were at low risk: 164 women with a cervix between 15 and 30 mm and a negative fibronectin test (<50 ng/mL; preterm birth rate, 2%) and 82 women with a cervix at >30 mm (preterm birth rate, 2%). Use of quantitative fibronectin alone resulted in a predicted risk of preterm birth within 7 days that ranged from 2% in the group with the lowest fibronectin level (<10 ng/mL) to 38% in the group with the highest fibronectin level (>500 ng/mL), with similar accuracy as that of the combination of cervical length and qualitative fibronectin. Combining cervical length and quantitative fibronectin resulted in the identification of an additional 19 women at low risk (preterm birth rate, 5%), using a threshold of 10 ng/mL in women with a cervix at <15 mm, and 6 women at high risk

  2. Quantitative Theoretical and Conceptual Framework Use in Agricultural Education Research

    ERIC Educational Resources Information Center

    Kitchel, Tracy; Ball, Anna L.

    2014-01-01

    The purpose of this philosophical paper was to articulate the disciplinary tenets for consideration when using theory in agricultural education quantitative research. The paper clarified terminology around the concept of theory in social sciences and introduced inaccuracies of theory use in agricultural education quantitative research. Finally,…

  3. Quantitative structure-retention relationship models for the prediction of the reversed-phase HPLC gradient retention based on the heuristic method and support vector machine.

    PubMed

    Du, Hongying; Wang, Jie; Yao, Xiaojun; Hu, Zhide

    2009-01-01

    The heuristic method (HM) and support vector machine (SVM) were used to construct quantitative structure-retention relationship models for a series of compounds to predict the gradient retention times of reversed-phase high-performance liquid chromatography (HPLC) on three different columns. The aims of this investigation were to predict the retention times of structurally diverse compounds, to identify the main properties of the three columns, and to shed light on the theory of the separation process. In our method, we correlated the retention times of many structurally diverse analytes on three columns (Symmetry C18, Chromolith, and SG-MIX) with their representative molecular descriptors, calculated from the molecular structures alone. HM was used to select the most important molecular descriptors and build linear regression models. Furthermore, non-linear regression models were built using the SVM method; the performance of the SVM models was better than that of the HM models, and the prediction results were in good agreement with the experimental values. This work gives some insight into the factors that are likely to govern the gradient retention process on the three investigated HPLC columns and could in turn guide practical experiments.
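
    To make the SVM step of such a QSRR workflow concrete, the sketch below fits a support vector regression of retention time on a few molecular descriptors. It is not the authors' code: the scikit-learn implementation, the descriptor choices, and all numbers are placeholder assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical descriptor matrix: e.g., logP-like, molecular mass, polarity-type descriptor.
X = np.array([
    [2.1, 250.0, 0.35],
    [0.8, 180.0, 0.60],
    [3.5, 320.0, 0.20],
    [1.5, 210.0, 0.45],
    [2.9, 280.0, 0.30],
    [0.2, 150.0, 0.75],
])
t_r = np.array([14.2, 7.9, 21.5, 10.8, 17.6, 5.1])  # gradient retention times (min), hypothetical

# Scale descriptors, then fit a nonlinear (RBF-kernel) support vector regression.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.2))
model.fit(X, t_r)
print("predicted retention times:", model.predict(X))
```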

  4. Quantitative computed tomography versus spirometry in predicting air leak duration after major lung resection for cancer.

    PubMed

    Ueda, Kazuhiro; Kaneda, Yoshikazu; Sudo, Manabu; Mitsutaka, Jinbo; Li, Tao-Sheng; Suga, Kazuyoshi; Tanaka, Nobuyuki; Hamano, Kimikazu

    2005-11-01

    Emphysema is a well-known risk factor for developing air leak or persistent air leak after pulmonary resection. Although quantitative computed tomography (CT) and spirometry are used to diagnose emphysema, it remains controversial whether these tests are predictive of the duration of postoperative air leak. Sixty-two consecutive patients who were scheduled to undergo major lung resection for cancer were enrolled in this prospective study to define the best predictor of postoperative air leak duration. Preoperative factors analyzed included spirometric variables and area of emphysema (proportion of the low-attenuation area) that was quantified in a three-dimensional CT lung model. Chest tubes were removed the day after disappearance of the air leak, regardless of pleural drainage. Univariate and multivariate proportional hazards analyses were used to determine the influence of preoperative factors on chest tube time (air leak duration). By univariate analysis, site of resection (upper, lower), forced expiratory volume in 1 second, predicted postoperative forced expiratory volume in 1 second, and area of emphysema (< 1%, 1% to 10%, > 10%) were significant predictors of air leak duration. By multivariate analysis, site of resection and area of emphysema were the best independent determinants of air leak duration. The results were similar for patients with a smoking history (n = 40), but neither forced expiratory volume in 1 second nor predicted postoperative forced expiratory volume in 1 second were predictive of air leak duration. Quantitative CT is superior to spirometry in predicting air leak duration after major lung resection for cancer. Quantitative CT may aid in the identification of patients, particularly among those with a smoking history, requiring additional preventive procedures against air leak.

  5. Quantitative analysis of diffusion tensor orientation: theoretical framework.

    PubMed

    Wu, Yu-Chien; Field, Aaron S; Chung, Moo K; Badie, Benham; Alexander, Andrew L

    2004-11-01

    Diffusion-tensor MRI (DT-MRI) yields information about the magnitude, anisotropy, and orientation of water diffusion of brain tissues. Although white matter tractography and eigenvector color maps provide visually appealing displays of white matter tract organization, they do not easily lend themselves to quantitative and statistical analysis. In this study, a set of visual and quantitative tools for the investigation of tensor orientations in the human brain was developed. Visual tools included rose diagrams, which are spherical coordinate histograms of the major eigenvector directions, and 3D scatterplots of the major eigenvector angles. A scatter matrix of major eigenvector directions was used to describe the distribution of major eigenvectors in a defined anatomic region. A measure of eigenvector dispersion was developed to describe the degree of eigenvector coherence in the selected region. These tools were used to evaluate directional organization and the interhemispheric symmetry of DT-MRI data in five healthy human brains and two patients with infiltrative diseases of the white matter tracts. In normal anatomical white matter tracts, a high degree of directional coherence and interhemispheric symmetry was observed. The infiltrative diseases appeared to alter the eigenvector properties of affected white matter tracts, showing decreased eigenvector coherence and interhemispheric symmetry. This novel approach distills the rich, 3D information available from the diffusion tensor into a form that lends itself to quantitative analysis and statistical hypothesis testing. (c) 2004 Wiley-Liss, Inc.
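
    The sketch below illustrates the scatter-matrix idea described above: summarize the major-eigenvector directions within a region of interest and derive a simple coherence/dispersion number from the eigenvalues of that matrix. The specific dispersion formula used here is an assumption for illustration and is not necessarily the measure defined in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical unit-length major eigenvectors from 200 voxels in an ROI, roughly aligned with z.
v = rng.normal(loc=[0.1, 0.1, 1.0], scale=0.15, size=(200, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)

# Scatter matrix of major eigenvector directions; the outer-product mean is sign-invariant.
S = (v[:, :, None] * v[:, None, :]).mean(axis=0)
evals = np.sort(np.linalg.eigvalsh(S))[::-1]   # eigenvalues sum to 1 for unit input vectors

lam1 = evals[0]
dispersion = (1.0 - lam1) / (2.0 / 3.0)        # 0 = perfectly coherent, ~1 = isotropic
print("largest eigenvalue:", lam1, "dispersion:", dispersion)
```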

  6. Information-theoretic approach to interactive learning

    NASA Astrophysics Data System (ADS)

    Still, S.

    2009-01-01

    The principles of statistical mechanics and information theory play an important role in learning and have inspired both theory and the design of numerous machine learning algorithms. The new aspect in this paper is a focus on integrating feedback from the learner. A quantitative approach to interactive learning and adaptive behavior is proposed, integrating model- and decision-making into one theoretical framework. This paper follows simple principles by requiring that the observer's world model and action policy should result in maximal predictive power at minimal complexity. Classes of optimal action policies and of optimal models are derived from an objective function that reflects this trade-off between prediction and complexity. The resulting optimal models then summarize, at different levels of abstraction, the process's causal organization in the presence of the learner's actions. A fundamental consequence of the proposed principle is that the learner's optimal action policies balance exploration and control as an emerging property. Interestingly, the explorative component is present in the absence of policy randomness, i.e. in the optimal deterministic behavior. This is a direct result of requiring maximal predictive power in the presence of feedback.

  7. Extensions and evaluations of a general quantitative theory of forest structure and dynamics

    PubMed Central

    Enquist, Brian J.; West, Geoffrey B.; Brown, James H.

    2009-01-01

    Here, we present the second part of a quantitative theory for the structure and dynamics of forests under demographic and resource steady state. The theory is based on individual-level allometric scaling relations for how trees use resources, fill space, and grow. These scale up to determine emergent properties of diverse forests, including size–frequency distributions, spacing relations, canopy configurations, mortality rates, population dynamics, successional dynamics, and resource flux rates. The theory uniquely makes quantitative predictions for both stand-level scaling exponents and normalizations. We evaluate these predictions by compiling and analyzing macroecological datasets from several tropical forests. The close match between theoretical predictions and data suggests that forests are organized by a set of very general scaling rules. Our mechanistic theory is based on allometric scaling relations, is complementary to “demographic theory,” but is fundamentally different in approach. It provides a quantitative baseline for understanding deviations from predictions due to other factors, including disturbance, variation in branching architecture, asymmetric competition, resource limitation, and other sources of mortality, which are not included in the deliberately simplified theory. The theory should apply to a wide range of forests despite large differences in abiotic environment, species diversity, and taxonomic and functional composition. PMID:19363161

  8. What predicts depression in cardiac patients: sociodemographic factors, disease severity or theoretical vulnerabilities?

    PubMed

    Doyle, F; McGee, H M; Conroy, R M; Delaney, M

    2011-05-01

    Depression is associated with increased cardiovascular risk in acute coronary syndrome (ACS) patients, but some argue that elevated depression is actually a marker of cardiovascular disease severity. Therefore, disease indices should better predict depression than established theoretical causes of depression (interpersonal life events, reinforcing events, cognitive distortions, type D personality). However, little theory-based research has been conducted in this area. In a cross-sectional design, ACS patients (n = 336) completed questionnaires assessing depression and psychosocial vulnerabilities. Nested logistic regression assessed the relative contribution of demographic or vulnerability factors, or disease indices or vulnerabilities to depression. In multivariate analysis, all vulnerabilities were independent significant predictors of depression (scoring above threshold on any scale, 48%). Demographic variables accounted for <1% of the variance of depression status, with vulnerabilities accounting for significantly more (pseudo R² = 0.16, χ²(change) = 150.9, df = 4, p < 0.001). Disease indices accounted for 7% of the variance in depression (pseudo R² = 0.07, χ² = 137.9, p < 0.001). However, adding the vulnerabilities increased the overall variance explained to 22% (pseudo R² = 0.22, χ² = 58.6, df = 4, p < 0.001). Theoretical vulnerabilities predicted depression status better than did either demographic or disease indices. The presence of these proximal causes of depression suggests that depression in ACS patients is not simply a result of cardiovascular disease severity.

  9. Theoretical model predictions and experimental results for a wavelength switchable Tm:YAG laser.

    PubMed

    Niu, Yanxiong; Wang, Caili; Liu, Wenwen; Niu, Haisha; Xu, Bing; Man, Da

    2014-07-01

    We present a theoretical model study of a quasi-three-level laser with particular attention given to the Tm:YAG laser. The oscillating conditions of this laser were theoretically analyzed from the point of view of the pump threshold while taking into account reabsorption loss. The laser oscillation at 2.02 μm, with its large stimulated emission cross sections, was suppressed by selecting the appropriate coating for the cavity mirrors, and an efficient laser-diode side-pumped continuous-wave Tm:YAG crystal laser operating at 2.07 μm was then realized. Experiments with the Tm:YAG laser confirmed the accuracy of the model, and the model was able to accurately predict that the high Stark sub-level within the 3H6 ground-state manifold has a low laser threshold and a long laser wavelength, which was achieved by decreasing the transmission of the output coupler.

  10. Quantitative Lymphoscintigraphy to Predict the Possibility of Lymphedema Development After Breast Cancer Surgery: Retrospective Clinical Study.

    PubMed

    Kim, Paul; Lee, Ju Kang; Lim, Oh Kyung; Park, Heung Kyu; Park, Ki Deok

    2017-12-01

    To predict the probability of lymphedema development in breast cancer patients in the early postoperative stage, we investigated the predictive ability of quantitative lymphoscintigraphic assessment. This retrospective study included 201 patients without lymphedema after unilateral breast cancer surgery. Lymphoscintigraphy was performed between 4 and 8 weeks after surgery to evaluate the lymphatic system in the early postoperative stage. Quantitative lymphoscintigraphy was performed using four methods: ratio of radiopharmaceutical clearance rate of the affected to normal hand; ratio of radioactivity of the affected to normal hand; ratio of radiopharmaceutical uptake rate of the affected to normal axilla (RUA); and ratio of radioactivity of the affected to normal axilla (RRA). During a 1-year follow-up, patients with a circumferential interlimb difference of 2 cm at any measurement location and a 200-mL interlimb volume difference were diagnosed with lymphedema. We investigated the difference in quantitative lymphoscintigraphic assessment between the non-lymphedema and lymphedema groups. Quantitative lymphoscintigraphic assessment revealed that the RUA and RRA were significantly lower in the lymphedema group than in the non-lymphedema group. After adjusting the model for all significant variables (body mass index, N-stage, T-stage, type of surgery, and type of lymph node surgery), RRA was associated with lymphedema (odds ratio=0.14; 95% confidence interval, 0.04-0.46; p=0.001). In patients in the early postoperative stage after unilateral breast cancer surgery, quantitative lymphoscintigraphic assessment can be used to predict the probability of developing lymphedema.
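
    The adjusted odds ratio quoted above comes from a logistic regression; the sketch below shows the unadjusted version of such an analysis on hypothetical data, with the odds ratio obtained by exponentiating the fitted coefficient. The statsmodels workflow and all values are assumptions, not the study's analysis.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical axillary radioactivity ratios (RRA) and lymphedema status for ten patients.
rra        = np.array([0.90, 0.70, 0.45, 0.30, 0.20, 0.60, 0.40, 0.25, 0.85, 0.50])
lymphedema = np.array([0,    0,    0,    1,    1,    1,    1,    0,    0,    1])

# Logistic regression of lymphedema on RRA; exp(coefficient) is the odds ratio per unit RRA.
X = sm.add_constant(rra)
fit = sm.Logit(lymphedema, X).fit(disp=0)
print("odds ratio per unit RRA:", np.exp(fit.params[1]))
print("95% CI for the odds ratio:", np.exp(fit.conf_int()[1]))
```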

  11. Theoretical prediction of gold vein location in deposits originated by a wall magma intrusion

    NASA Astrophysics Data System (ADS)

    Martin, Pablo; Maass-Artigas, Fernando; Cortés-Vega, Luis

    2016-05-01

    The isotherm time-evolution resulting from the intrusion of a hot dike into a cold rock is analyzed, considering the general case of non-vertical walls. This is applied to the theoretical prediction of the location of gold veins arising from the isothermal evolution. As in previous treatments, earth-surface effects are considered, and the gold veins are determined by the envelope of the isotherms. The locations of the gold veins in the El Callao mines of Venezuela are now well predicted. The new treatment is more elaborate and complex than in the case of vertical walls, performed in previous papers, but it is better suited to real cases such as El Callao, where the wall is not vertical.

  12. Post-anoxic quantitative MRI changes may predict emergence from coma and functional outcomes at discharge.

    PubMed

    Reynolds, Alexandra S; Guo, Xiaotao; Matthews, Elizabeth; Brodie, Daniel; Rabbani, Leroy E; Roh, David J; Park, Soojin; Claassen, Jan; Elkind, Mitchell S V; Zhao, Binsheng; Agarwal, Sachin

    2017-08-01

    Traditional predictors of neurological prognosis after cardiac arrest are unreliable after targeted temperature management. Absence of pupillary reflexes remains a reliable predictor of poor outcome. Diffusion-weighted imaging has emerged as a potential predictor of recovery, and here we compare imaging characteristics to the pupillary exam. We identified 69 patients who had MRIs within seven days of arrest and used a semi-automated algorithm to perform quantitative volumetric analysis of apparent diffusion coefficient (ADC) sequences at various thresholds. Areas under receiver operating characteristic curves (ROC-AUC) were estimated to compare the predictive values of quantitative MRI and the pupillary exam at days 3, 5 and 7 post-arrest, for persistence of coma and functional outcomes at discharge. Cerebral Performance Category scores of 3-4 were considered poor outcome. Excluding patients in whom life support was withdrawn, ≥2.8% diffusion restriction of the entire brain at an ADC of ≤650×10⁻⁶ mm²/s was 100% specific and 68% sensitive for failure to wake up from coma before discharge. The ROC-AUCs of ADC changes at ≤450×10⁻⁶ mm²/s and ≤650×10⁻⁶ mm²/s were significantly superior in predicting failure to wake up from coma compared to bilateral absence of pupillary reflexes. Among survivors, >0.01% diffusion restriction of the entire brain at an ADC ≤450×10⁻⁶ mm²/s was 100% specific and 46% sensitive for poor functional outcome at discharge. The ROC curve predicting poor functional outcome at ADC ≤450×10⁻⁶ mm²/s had an AUC of 0.737 (0.574-0.899, p=0.04). Post-anoxic diffusion changes measured with quantitative brain MRI may aid in predicting persistent coma and poor functional outcomes at hospital discharge. Copyright © 2017 Elsevier B.V. All rights reserved.
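
    The quantitative step described above amounts to measuring the fraction of brain voxels whose ADC falls at or below a threshold. The sketch below shows that calculation on a toy volume; it is an assumed implementation, not the authors' semi-automated algorithm, and the numbers are placeholders.

```python
import numpy as np

def restricted_fraction(adc_map, brain_mask, threshold=650e-6):
    """Percent of brain-mask voxels whose ADC (mm^2/s) is at or below `threshold`."""
    brain_voxels = adc_map[brain_mask]
    return 100.0 * np.mean(brain_voxels <= threshold)

# Toy volume: mostly normal ADC (~800 x 10^-6 mm^2/s) with a small region of restricted diffusion.
rng = np.random.default_rng(1)
adc = rng.normal(800e-6, 60e-6, size=(64, 64, 32))
adc[20:26, 20:26, 10:14] = 400e-6
mask = np.ones_like(adc, dtype=bool)

# Compare the computed fraction against a cut-off such as the 2.8% figure quoted in the abstract.
print("restricted volume fraction (%):", restricted_fraction(adc, mask))
```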

  13. Theoretical study of solvent effects on the coil-globule transition

    NASA Astrophysics Data System (ADS)

    Polson, James M.; Opps, Sheldon B.; Abou Risk, Nicholas

    2009-06-01

    The coil-globule transition of a polymer in a solvent has been studied using Monte Carlo simulations of a single chain subject to intramolecular interactions as well as a solvent-mediated effective potential. This solvation potential was calculated using several different theoretical approaches for two simple polymer/solvent models, each employing hard-sphere chains and hard-sphere solvent particles as well as attractive square-well potentials between some interaction sites. For each model, collapse is driven by variation in a parameter that changes the energy mismatch between monomers and solvent particles. The solvation potentials were calculated using two fundamentally different methodologies, each designed to predict the conformational behavior of polymers in solution: (1) the polymer reference interaction site model (PRISM) theory and (2) a many-body solvation potential (MBSP) based on scaled particle theory introduced by Grayce [J. Chem. Phys. 106, 5171 (1997)]. For the PRISM calculations, two well-studied solvation monomer-monomer pair potentials were employed, each distinguished by the closure relation used in its derivation: (i) a hypernetted-chain (HNC)-type potential and (ii) a Percus-Yevick (PY)-type potential. The theoretical predictions were each compared to results obtained from explicit-solvent discontinuous molecular dynamics simulations on the same polymer/solvent model systems [J. Chem. Phys. 125, 194904 (2006)]. In each case, the variation in the coil-globule transition properties with solvent density is mostly qualitatively correct, though the quantitative agreement between theory and simulation is typically poor. The HNC-type potential yields results that are more qualitatively consistent with simulation. The conformational behavior of the polymer upon collapse predicted by the MBSP approach is quantitatively correct for low and moderate solvent densities but is increasingly less accurate for higher densities. At high solvent densities

  14. MnNiO3 revisited with modern theoretical and experimental methods

    NASA Astrophysics Data System (ADS)

    Dzubak, Allison L.; Mitra, Chandrima; Chance, Michael; Kuhn, Stephen; Jellison, Gerald E.; Sefat, Athena S.; Krogel, Jaron T.; Reboredo, Fernando A.

    2017-11-01

    MnNiO3 is a strongly correlated transition metal oxide that has recently been investigated theoretically for its potential application as an oxygen-evolution photocatalyst. However, there is no experimental report on critical quantities such as the band gap or bulk modulus. Recent theoretical predictions with standard functionals such as LDA+U and HSE show large discrepancies in the band gaps (about 1.23 eV), depending on the nature of the functional used. Hence there is clearly a need for an accurate quantitative prediction of the band gap to gauge its utility as a photocatalyst. In this work, we present a diffusion quantum Monte Carlo study of the bulk properties of MnNiO3 and revisit the synthesis and experimental properties of the compound. We predict quasiparticle band gaps of 2.0(5) eV and 3.8(6) eV for the majority and minority spin channels, respectively, and an equilibrium volume of 92.8 Å3, which compares well to the experimental value of 94.4 Å3. A bulk modulus of 217 GPa is predicted for MnNiO3. We rationalize the difficulty for the formation of ordered ilmenite-type structure with specific sites for Ni and Mn to be potentially due to the formation of antisite defects that form during synthesis, which ultimately affects the physical properties of MnNiO3.

  15. Quantitative structure-activity relationship (QSAR) for insecticides: development of predictive in vivo insecticide activity models.

    PubMed

    Naik, P K; Singh, T; Singh, H

    2009-07-01

    Quantitative structure-activity relationship (QSAR) analyses were performed independently on data sets belonging to two groups of insecticides, namely the organophosphates and carbamates. Several types of descriptors including topological, spatial, thermodynamic, information content, lead likeness and E-state indices were used to derive quantitative relationships between insecticide activities and structural properties of chemicals. A systematic search approach based on missing value, zero value, simple correlation and multi-collinearity tests as well as the use of a genetic algorithm allowed the optimal selection of the descriptors used to generate the models. The QSAR models developed for both organophosphate and carbamate groups revealed good predictability with r(2) values of 0.949 and 0.838 as well as [image omitted] values of 0.890 and 0.765, respectively. In addition, a linear correlation was observed between the predicted and experimental LD(50) values for the test set data with r(2) of 0.871 and 0.788 for both the organophosphate and carbamate groups, indicating that the prediction accuracy of the QSAR models was acceptable. The models were also tested successfully from external validation criteria. QSAR models developed in this study should help further design of novel potent insecticides.

  16. A novel logic-based approach for quantitative toxicology prediction.

    PubMed

    Amini, Ata; Muggleton, Stephen H; Lodhi, Huma; Sternberg, Michael J E

    2007-01-01

    There is a pressing need for accurate in silico methods to predict the toxicity of molecules that are being introduced into the environment or are being developed into new pharmaceuticals. Predictive toxicology is in the realm of structure activity relationships (SAR), and many approaches have been used to derive such SAR. Previous work has shown that inductive logic programming (ILP) is a powerful approach that circumvents several major difficulties, such as molecular superposition, faced by some other SAR methods. The ILP approach reasons with chemical substructures within a relational framework and yields chemically understandable rules. Here, we report a general new approach, support vector inductive logic programming (SVILP), which extends the essentially qualitative ILP-based SAR to quantitative modeling. First, ILP is used to learn rules, the predictions of which are then used within a novel kernel to derive a support-vector generalization model. For a highly heterogeneous dataset of 576 molecules with known fathead minnow fish toxicity, the cross-validated correlation coefficients (R2CV) from a chemical descriptor method (CHEM) and SVILP are 0.52 and 0.66, respectively. The ILP, CHEM, and SVILP approaches correctly predict 55, 58, and 73%, respectively, of toxic molecules. In a set of 165 unseen molecules, the R2 values from the commercial software TOPKAT and SVILP are 0.26 and 0.57, respectively. In all calculations, SVILP showed significant improvements in comparison with the other methods. The SVILP approach has a major advantage in that it uses ILP automatically and consistently to derive rules, mostly novel, describing fragments that are toxicity alerts. The SVILP is a general machine-learning approach and has the potential of tackling many problems relevant to chemoinformatics including in silico drug design.

  17. Quantitative prediction of solvation free energy in octanol of organic compounds.

    PubMed

    Delgado, Eduardo J; Jaña, Gonzalo A

    2009-03-01

    The free energy of solvation, ΔGS0, in octanol of organic compounds is quantitatively predicted from the molecular structure. The model, involving only three molecular descriptors, is obtained by multiple linear regression analysis from a data set of 147 compounds containing diverse organic functions, namely, halogenated and non-halogenated alkanes, alkenes, alkynes, aromatics, alcohols, aldehydes, ketones, amines, ethers and esters; covering a ΔGS0 range from about -50 to 0 kJ·mol⁻¹. The model predicts the free energy of solvation with a squared correlation coefficient of 0.93 and a standard deviation, 2.4 kJ·mol⁻¹, just marginally larger than the generally accepted value of experimental uncertainty. The involved molecular descriptors have definite physical meaning corresponding to the different intermolecular interactions occurring in the bulk liquid phase. The model is validated with an external set of 36 compounds not included in the training set.
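
    The sketch below illustrates the general shape of such a three-descriptor multiple linear regression with an external validation step. The descriptors, values, and scikit-learn workflow are placeholder assumptions, not the paper's data or code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical training set: three molecular descriptors -> solvation free energy (kJ/mol).
X_train = np.array([[1.2, 0.3, 5.0], [2.5, 0.1, 7.2], [0.8, 0.6, 3.1],
                    [3.1, 0.2, 9.0], [1.9, 0.4, 6.4], [2.2, 0.5, 7.8]])
y_train = np.array([-15.5, -24.3, -13.2, -30.9, -22.0, -26.5])

# Hypothetical external set, never used for fitting.
X_ext = np.array([[1.5, 0.35, 5.5], [2.8, 0.15, 8.4], [1.0, 0.50, 4.0]])
y_ext = np.array([-18.2, -27.5, -15.4])

model = LinearRegression().fit(X_train, y_train)
print("training R^2:", model.score(X_train, y_train))
print("external-set R^2:", r2_score(y_ext, model.predict(X_ext)))
```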

  18. Theoretical prediction and atomic kinetic Monte Carlo simulations of void superlattice self-organization under irradiation.

    PubMed

    Gao, Yipeng; Zhang, Yongfeng; Schwen, Daniel; Jiang, Chao; Sun, Cheng; Gan, Jian; Bai, Xian-Ming

    2018-04-26

    Nano-structured superlattices may have novel physical properties, and irradiation is a powerful means to drive their self-organization. However, the formation mechanism of superlattices under irradiation is still open for debate. Here we use atomic kinetic Monte Carlo simulations in conjunction with a theoretical analysis to understand and predict the self-organization of nano-void superlattices under irradiation, which have been observed in various types of materials for more than 40 years but have yet to be well understood. The superlattice is found to be a result of spontaneous precipitation of voids from the matrix, a process similar to phase separation in a regular solid solution, with the symmetry dictated by anisotropic materials properties such as one-dimensional interstitial atom diffusion. This discovery challenges the widely accepted empirical rule of coherency between the superlattice and the host matrix crystal lattice. The atomic-scale perspective has enabled a new theoretical analysis to successfully predict the superlattice parameters, which are in good agreement with independent experiments. The theory developed in this work can provide guidelines for designing targeted experiments to tailor desired microstructure under irradiation. It may also be generalized to situations beyond irradiation, such as spontaneous phase separation with reaction.

  19. Accurate experimental and theoretical comparisons between superconductor-insulator-superconductor mixers showing weak and strong quantum effects

    NASA Technical Reports Server (NTRS)

    Mcgrath, W. R.; Richards, P. L.; Face, D. W.; Prober, D. E.; Lloyd, F. L.

    1988-01-01

    A systematic study of the gain and noise in superconductor-insulator-superconductor mixers employing Ta based, Nb based, and Pb-alloy based tunnel junctions was made. These junctions displayed both weak and strong quantum effects at a signal frequency of 33 GHz. The effects of energy gap sharpness and subgap current were investigated and are quantitatively related to mixer performance. Detailed comparisons are made of the mixing results with the predictions of a three-port model approximation to the Tucker theory. Mixer performance was measured with a novel test apparatus which is accurate enough to allow for the first quantitative tests of theoretical noise predictions. It is found that the three-port model of the Tucker theory underestimates the mixer noise temperature by a factor of about 2 for all of the mixers. In addition, predicted values of available mixer gain are in reasonable agreement with experiment when quantum effects are weak. However, as quantum effects become strong, the predicted available gain diverges to infinity, which is in sharp contrast to the experimental results. Predictions of coupled gain do not always show such divergences.

  20. The Adaptation of the Immigrant Second Generation in America: Theoretical Overview and Recent Evidence

    PubMed Central

    Portes, Alejandro; Fernández-Kelly, Patricia; Haller, William

    2013-01-01

    This paper summarises a research program on the new immigrant second generation initiated in the early 1990s and completed in 2006. The four field waves of the Children of Immigrants Longitudinal Study (CILS) are described and the main theoretical models emerging from it are presented and graphically summarised. After considering critical views of this theory, we present the most recent results from this longitudinal research program in the form of quantitative models predicting downward assimilation in early adulthood and qualitative interviews identifying ways to escape it by disadvantaged children of immigrants. Quantitative results strongly support the predicted effects of exogenous variables identified by segmented assimilation theory and identify the intervening factors during adolescence that mediate their influence on adult outcomes. Qualitative evidence gathered during the last stage of the study points to three factors that can lead to exceptional educational achievement among disadvantaged youths. All three indicate the positive influence of selective acculturation. Implications of these findings for theory and policy are discussed. PMID:23626483

  1. Quantitative structure-activation barrier relationship modeling for Diels-Alder ligations utilizing quantum chemical structural descriptors.

    PubMed

    Nandi, Sisir; Monesi, Alessandro; Drgan, Viktor; Merzel, Franci; Novič, Marjana

    2013-10-30

    In the present study, we show the correlation of quantum chemical structural descriptors with the activation barriers of Diels-Alder ligations. A set of 72 non-catalysed Diels-Alder reactions was subjected to quantitative structure-activation barrier relationship (QSABR) modelling under the framework of theoretical quantum chemical descriptors calculated solely from the structures of the diene and dienophile reactants. Experimental activation barrier data were obtained from the literature. Descriptors were computed at the Hartree-Fock level with the 6-31G(d) basis set as implemented in the Gaussian 09 software. Variable selection and model development were carried out by a stepwise multiple linear regression methodology. Predictive performance of the QSABR model was assessed by the training and test set concept and by calculating leave-one-out cross-validated Q2 and predictive R2 values. The QSABR model can explain and predict 86.5% and 80% of the variance, respectively, in the activation energy barrier training data. Alternatively, a neural network model based on back propagation of errors was developed to assess the nonlinearity of the sought correlations between theoretical descriptors and experimental reaction barriers. A reasonable predictability for the activation barriers of the test set reactions was obtained, which enabled an exploration and interpretation of the significant variables responsible for the Diels-Alder interaction between dienes and dienophiles. Thus, studies in the direction of QSABR modelling that provide efficient and fast prediction of activation barriers of Diels-Alder reactions turn out to be a meaningful alternative to transition-state-theory-based computation.
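
    The leave-one-out cross-validated Q² statistic mentioned above has a standard definition; the sketch below computes it for a simple linear model on hypothetical descriptor/barrier data (not the paper's 72-reaction set or descriptors).

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Hypothetical quantum chemical descriptors and activation barriers (kJ/mol).
X = np.array([[0.12, 3.4], [0.25, 2.1], [0.08, 4.0], [0.30, 1.8],
              [0.18, 2.9], [0.22, 2.4], [0.15, 3.1], [0.27, 2.0]])
e_act = np.array([85.0, 64.2, 95.7, 58.9, 75.3, 69.8, 80.1, 61.5])

# Leave-one-out predictions, then Q^2 = 1 - PRESS / total sum of squares.
loo_pred = cross_val_predict(LinearRegression(), X, e_act, cv=LeaveOneOut())
press = np.sum((e_act - loo_pred) ** 2)
ss_tot = np.sum((e_act - e_act.mean()) ** 2)
print("leave-one-out Q^2:", 1.0 - press / ss_tot)
```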

  2. Theoretical prediction of the energy stability of graphene nanoblisters

    NASA Astrophysics Data System (ADS)

    Glukhova, O. E.; Slepchenkov, M. M.; Barkov, P. V.

    2018-04-01

    The paper presents the results of a theoretical prediction of the energy stability of graphene nanoblisters with various geometrical parameters. As a criterion for evaluating the stability of the investigated carbon objects, we propose to consider the local stress of the nanoblister atomic grid. Numerical evaluation of the stresses experienced by atoms of the graphene blister framework was carried out by means of an original method for the calculation of local stresses that is based on an energy approach. Atomistic models of graphene nanoblisters corresponding to natural experimental data were built for the first time in this work. New physical regularities of the influence of topology on the thermodynamic stability of nanoblisters were established from analysis of the numerical experiment data. We built the distribution of local stresses for graphene blister structures whose atomic grid contains a variety of structural defects, and we show how the concentration and location of defects affect the distribution of the maximum stresses experienced by the atoms of the nanoblisters.

  3. Quantitative Correlation of in Vivo Properties with in Vitro Assay Results: The in Vitro Binding of a Biotin–DNA Analogue Modifier with Streptavidin Predicts the in Vivo Avidin-Induced Clearability of the Analogue-Modified Antibody

    PubMed Central

    Dou, Shuping; Virostko, John; Greiner, Dale L.; Powers, Alvin C.; Liu, Guozheng

    2016-01-01

    Quantitative prediction of in vivo behavior using an in vitro assay would dramatically accelerate pharmaceutical development. However, studies quantitatively correlating in vivo properties with in vitro assay results are rare because of the difficulty in quantitatively understanding the in vivo behavior of an agent. We now demonstrate such a correlation as a case study based on our quantitative understanding of the in vivo chemistry. In an ongoing pretargeting project, we designed a trifunctional antibody (Ab) that concomitantly carried a biotin and a DNA analogue (hereafter termed MORF). The biotin and the MORF were fused into one structure prior to conjugation to the Ab for the concomitant attachment. Because it was known that avidin-bound Ab molecules leave the circulation rapidly, this design would theoretically allow complete clearance by avidin. The clearability of the trifunctional Ab was determined by calculating the blood MORF concentration ratio of avidin-treated Ab to non-avidin-treated Ab using mice injected with these compounds. In theory, any compromised clearability should be due to the presence of impurities. In vitro, we measured the biotinylated percentage of the Ab-reacting (MORF-biotin)⊃-NH2 modifier, by addition of streptavidin to the radiolabeled (MORF-biotin)⊃-NH2 samples and subsequent high-performance liquid chromatography (HPLC) analysis. On the basis of our previous quantitative understanding, we predicted that the clearability of the Ab would be equal to the biotinylation percentage measured via HPLC. We validated this prediction within a 3% difference. In addition to the high avidin-induced clearability of the trifunctional Ab (up to ~95%) achieved by the design, we were able to predict the required quality of the (MORF-biotin)⊃-NH2 modifier for any given in vivo clearability. This approach may greatly reduce the steps and time currently required in pharmaceutical development in the process of synthesis, chemical analysis, in

  4. Quantitative Prediction of Solvation Free Energy in Octanol of Organic Compounds

    PubMed Central

    Delgado, Eduardo J.; Jaña, Gonzalo A.

    2009-01-01

    The free energy of solvation, ΔGS0, in octanol of organic compounds is quantitatively predicted from the molecular structure. The model, involving only three molecular descriptors, is obtained by multiple linear regression analysis from a data set of 147 compounds containing diverse organic functions, namely, halogenated and non-halogenated alkanes, alkenes, alkynes, aromatics, alcohols, aldehydes, ketones, amines, ethers and esters; covering a ΔGS0 range from about −50 to 0 kJ·mol−1. The model predicts the free energy of solvation with a squared correlation coefficient of 0.93 and a standard deviation, 2.4 kJ·mol−1, just marginally larger than the generally accepted value of experimental uncertainty. The involved molecular descriptors have definite physical meaning corresponding to the different intermolecular interactions occurring in the bulk liquid phase. The model is validated with an external set of 36 compounds not included in the training set. PMID:19399236

  5. A Theoretical Trombone

    ERIC Educational Resources Information Center

    LoPresto, Michael C.

    2014-01-01

    What follows is a description of a theoretical model designed to calculate the playing frequencies of the musical pitches produced by a trombone. The model is based on quantitative treatments that demonstrate the effects of the flaring bell and cup-shaped mouthpiece sections on these frequencies and can be used to calculate frequencies that…

  6. Theoretical foundations for a quantitative approach to paleogenetics. I, II.

    NASA Technical Reports Server (NTRS)

    Holmquist, R.

    1972-01-01

    It is shown that, by neglecting the phenomena of multiple hits, back mutation, and chance coincidence, errors larger than 100% can be introduced into the calculated value of the average number of nucleotide base differences to be expected between two homologous polynucleotides. Mathematical formulas are derived to correct quantitatively for these effects. It is pointed out that the effects materially change the quantitative aspects of phylogenies, such as the lengths of the legs of the trees. A number of problems are solved without approximation.

  7. Human Emotion Experiences Can Be Predicted on Theoretical Grounds: Evidence from Verbal Labeling

    PubMed Central

    Scherer, Klaus R.; Meuleman, Ben

    2013-01-01

    In an effort to demonstrate that the verbal labeling of emotional experiences obeys lawful principles, we tested the feasibility of using an expert system called the Geneva Emotion Analyst (GEA), which generates predictions based on an appraisal theory of emotion. Several thousand respondents participated in an Internet survey that applied GEA to self-reported emotion experiences. Users recalled appraisals of emotion-eliciting events and labeled the experienced emotion with one or two words, generating a massive data set on realistic, intense emotions in everyday life. For a final sample of 5969 respondents we show that GEA achieves a high degree of predictive accuracy by matching a user’s appraisal input to one of 13 theoretically predefined emotion prototypes. The first prediction was correct in 51% of the cases and the overall diagnosis was considered as at least partially correct or appropriate in more than 90% of all cases. These results support a component process model that encourages focused, hypothesis-guided research on elicitation and differentiation, memory storage and retrieval, and categorization and labeling of emotion episodes. We discuss the implications of these results for the study of emotion terms in natural language semantics. PMID:23483988

  8. MnNiO 3 revisited with modern theoretical and experimental methods

    DOE PAGES

    Dzubak, Allison L.; Mitra, Chandrima; Chance, Michael; ...

    2017-11-03

    MnNiO 3 is a strongly correlated transition metal oxide that has recently been investigated theoretically for its potential application as an oxygen-evolution photocatalyst. However, there is no experimental report on critical quantities such as the band gap or bulk modulus. Recent theoretical predictions with standard functionals such as LDA+U and HSE show large discrepancies in the band gaps (about 1.23 eV), depending on the nature of the functional used. Hence there is clearly a need for an accurate quantitative prediction of the band gap to gauge its utility as a photocatalyst. In this work, we present a diffusion quantum Monte Carlo study of the bulk properties of MnNiO 3 and revisit the synthesis and experimental properties of the compound. We predict quasiparticle band gaps of 2.0(5) eV and 3.8(6) eV for the majority and minority spin channels, respectively, and an equilibrium volume of 92.8 Å 3, which compares well to the experimental value of 94.4 Å 3. A bulk modulus of 217 GPa is predicted for MnNiO 3. As a result, we rationalize the difficulty for the formation of ordered ilmenite-type structure with specific sites for Ni and Mn to be potentially due to the formation of antisite defects that form during synthesis, which ultimately affects the physical properties of MnNiO 3.

  9. MnNiO 3 revisited with modern theoretical and experimental methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dzubak, Allison L.; Mitra, Chandrima; Chance, Michael

    MnNiO 3 is a strongly correlated transition metal oxide that has recently been investigated theoretically for its potential application as an oxygen-evolution photocatalyst. However, there is no experimental report on critical quantities such as the band gap or bulk modulus. Recent theoretical predictions with standard functionals such as LDA+U and HSE show large discrepancies in the band gaps (about 1.23 eV), depending on the nature of the functional used. Hence there is clearly a need for an accurate quantitative prediction of the band gap to gauge its utility as a photocatalyst. In this work, we present a diffusion quantum Monte Carlo study of the bulk properties of MnNiO 3 and revisit the synthesis and experimental properties of the compound. We predict quasiparticle band gaps of 2.0(5) eV and 3.8(6) eV for the majority and minority spin channels, respectively, and an equilibrium volume of 92.8 Å 3, which compares well to the experimental value of 94.4 Å 3. A bulk modulus of 217 GPa is predicted for MnNiO 3. As a result, we rationalize the difficulty for the formation of ordered ilmenite-type structure with specific sites for Ni and Mn to be potentially due to the formation of antisite defects that form during synthesis, which ultimately affects the physical properties of MnNiO 3.

  10. Empirical Prediction of Aircraft Landing Gear Noise

    NASA Technical Reports Server (NTRS)

    Golub, Robert A. (Technical Monitor); Guo, Yue-Ping

    2005-01-01

    This report documents a semi-empirical/semi-analytical method for landing gear noise prediction. The method is based on scaling laws of the theory of aerodynamic noise generation and correlation of these scaling laws with current available test data. The former gives the method a sound theoretical foundation and the latter quantitatively determines the relations between the parameters of the landing gear assembly and the far field noise, enabling practical predictions of aircraft landing gear noise, both for parametric trends and for absolute noise levels. The prediction model is validated by wind tunnel test data for an isolated Boeing 737 landing gear and by flight data for the Boeing 777 airplane. In both cases, the predictions agree well with data, both in parametric trends and in absolute noise levels.

  11. Theoretical predictions for hot-carrier generation from surface plasmon decay

    PubMed Central

    Sundararaman, Ravishankar; Narang, Prineha; Jermyn, Adam S.; Goddard III, William A.; Atwater, Harry A.

    2014-01-01

    Decay of surface plasmons to hot carriers finds a wide variety of applications in energy conversion, photocatalysis and photodetection. However, a detailed theoretical description of plasmonic hot-carrier generation in real materials has remained incomplete. Here we report predictions for the prompt distributions of excited ‘hot’ electrons and holes generated by plasmon decay, before inelastic relaxation, using a quantized plasmon model with detailed electronic structure. We find that carrier energy distributions are sensitive to the electronic band structure of the metal: gold and copper produce holes hotter than electrons by 1–2 eV, while silver and aluminium distribute energies more equitably between electrons and holes. Momentum-direction distributions for hot carriers are anisotropic, dominated by the plasmon polarization for aluminium and by the crystal orientation for noble metals. We show that in thin metallic films intraband transitions can alter the carrier distributions, producing hotter electrons in gold, but interband transitions remain dominant. PMID:25511713

  12. A Quantitative Model for the Prediction of Sooting Tendency from Molecular Structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    St. John, Peter C.; Kairys, Paul; Das, Dhrubajyoti D.

    Particulate matter emissions negatively affect public health and global climate, yet newer fuel-efficient gasoline direct injection engines tend to produce more soot than their port-fuel injection counterparts. Fortunately, the search for sustainable biomass-based fuel blendstocks provides an opportunity to develop fuels that suppress soot formation in more efficient engine designs. However, as emissions tests are experimentally cumbersome and the search space for potential bioblendstocks is vast, new techniques are needed to estimate the sooting tendency of a diverse range of compounds. In this study, we develop a quantitative structure-activity relationship (QSAR) model of sooting tendency based on the experimental yield sooting index (YSI), which ranks molecules on a scale from n-hexane, 0, to benzene, 100. The model includes a rigorously defined applicability domain, and the predictive performance is checked using both internal and external validation. Model predictions for compounds in the external test set had a median absolute error of ~3 YSI units. An investigation of compounds that are poorly predicted by the model lends new insight into the complex mechanisms governing soot formation. Predictive models of soot formation can therefore be expected to play an increasingly important role in the screening and development of next-generation biofuels.

  13. A Quantitative Model for the Prediction of Sooting Tendency from Molecular Structure

    DOE PAGES

    St. John, Peter C.; Kairys, Paul; Das, Dhrubajyoti D.; ...

    2017-07-24

    Particulate matter emissions negatively affect public health and global climate, yet newer fuel-efficient gasoline direct injection engines tend to produce more soot than their port-fuel injection counterparts. Fortunately, the search for sustainable biomass-based fuel blendstocks provides an opportunity to develop fuels that suppress soot formation in more efficient engine designs. However, as emissions tests are experimentally cumbersome and the search space for potential bioblendstocks is vast, new techniques are needed to estimate the sooting tendency of a diverse range of compounds. In this study, we develop a quantitative structure-activity relationship (QSAR) model of sooting tendency based on the experimental yield sooting index (YSI), which ranks molecules on a scale from n-hexane, 0, to benzene, 100. The model includes a rigorously defined applicability domain, and the predictive performance is checked using both internal and external validation. Model predictions for compounds in the external test set had a median absolute error of ~3 YSI units. An investigation of compounds that are poorly predicted by the model lends new insight into the complex mechanisms governing soot formation. Predictive models of soot formation can therefore be expected to play an increasingly important role in the screening and development of next-generation biofuels.

  14. Biomarkers are used to predict quantitative metabolite concentration profiles in human red blood cells

    DOE PAGES

    Yurkovich, James T.; Yang, Laurence; Palsson, Bernhard O.; ...

    2017-03-06

    Deep-coverage metabolomic profiling has revealed a well-defined development of metabolic decay in human red blood cells (RBCs) under cold storage conditions. A set of extracellular biomarkers has been recently identified that reliably defines the qualitative state of the metabolic network throughout this metabolic decay process. Here, we extend the utility of these biomarkers by using them to quantitatively predict the concentrations of other metabolites in the red blood cell. We are able to accurately predict the concentration profile of 84 of the 91 (92%) measured metabolites (p < 0.05) in RBC metabolism using only measurements of these five biomarkers. The median of prediction errors (symmetric mean absolute percent error) across all metabolites was 13%. Furthermore, the ability to predict numerous metabolite concentrations from a simple set of biomarkers offers the potential for the development of a powerful workflow that could be used to evaluate the metabolic state of a biological system using a minimal set of measurements.
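
    The error metric quoted above, symmetric mean absolute percent error (sMAPE), has a standard form; the sketch below computes it for a few hypothetical metabolite concentrations (the values are placeholders, not data from the study).

```python
import numpy as np

def smape(measured, predicted):
    """Symmetric mean absolute percent error, in percent."""
    measured, predicted = np.asarray(measured, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs(predicted - measured) /
                           ((np.abs(measured) + np.abs(predicted)) / 2.0))

measured  = [1.20, 0.45, 3.10, 0.80]   # hypothetical metabolite concentrations (mM)
predicted = [1.05, 0.50, 2.70, 0.95]   # hypothetical model predictions
print("sMAPE (%):", smape(measured, predicted))
```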

  15. Biomarkers are used to predict quantitative metabolite concentration profiles in human red blood cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yurkovich, James T.; Yang, Laurence; Palsson, Bernhard O.

    Deep-coverage metabolomic profiling has revealed a well-defined development of metabolic decay in human red blood cells (RBCs) under cold storage conditions. A set of extracellular biomarkers has been recently identified that reliably defines the qualitative state of the metabolic network throughout this metabolic decay process. Here, we extend the utility of these biomarkers by using them to quantitatively predict the concentrations of other metabolites in the red blood cell. We are able to accurately predict the concentration profile of 84 of the 91 (92%) measured metabolites (p < 0.05) in RBC metabolism using only measurements of these five biomarkers. The median of prediction errors (symmetric mean absolute percent error) across all metabolites was 13%. Furthermore, the ability to predict numerous metabolite concentrations from a simple set of biomarkers offers the potential for the development of a powerful workflow that could be used to evaluate the metabolic state of a biological system using a minimal set of measurements.

  16. Genomic Prediction for Quantitative Traits Is Improved by Mapping Variants to Gene Ontology Categories in Drosophila melanogaster

    PubMed Central

    Edwards, Stefan M.; Sørensen, Izel F.; Sarup, Pernille; Mackay, Trudy F. C.; Sørensen, Peter

    2016-01-01

    Predicting individual quantitative trait phenotypes from high-resolution genomic polymorphism data is important for personalized medicine in humans, plant and animal breeding, and adaptive evolution. However, this is difficult for populations of unrelated individuals when the number of causal variants is low relative to the total number of polymorphisms and causal variants individually have small effects on the traits. We hypothesized that mapping molecular polymorphisms to genomic features such as genes and their gene ontology categories could increase the accuracy of genomic prediction models. We developed a genomic feature best linear unbiased prediction (GFBLUP) model that implements this strategy and applied it to three quantitative traits (startle response, starvation resistance, and chill coma recovery) in the unrelated, sequenced inbred lines of the Drosophila melanogaster Genetic Reference Panel. Our results indicate that subsetting markers based on genomic features increases the predictive ability relative to the standard genomic best linear unbiased prediction (GBLUP) model. Both models use all markers, but GFBLUP allows differential weighting of the individual genetic marker relationships, whereas GBLUP weighs the genetic marker relationships equally. Simulation studies show that it is possible to further increase the accuracy of genomic prediction for complex traits using this model, provided the genomic features are enriched for causal variants. Our GFBLUP model using prior information on genomic features enriched for causal variants can increase the accuracy of genomic predictions in populations of unrelated individuals and provides a formal statistical framework for leveraging and evaluating information across multiple experimental studies to provide novel insights into the genetic architecture of complex traits. PMID:27235308
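
    The GBLUP baseline referred to above can be written as a mixed model on a genomic relationship matrix (GRM); GFBLUP then splits the markers into a genomic-feature set and the remainder, each with its own variance component. Below is a minimal sketch of the plain GBLUP step on simulated genotypes (not DGRP data), with the variance ratio fixed rather than estimated by REML.

        # Minimal GBLUP sketch: VanRaden-style GRM plus a BLUP solve for
        # genetic values. Simulated genotypes; the variance ratio `lam` is
        # fixed here, whereas in practice it is estimated (e.g., by REML).
        import numpy as np

        rng = np.random.default_rng(1)
        n_lines, n_markers = 200, 5000
        geno = rng.binomial(2, 0.3, size=(n_lines, n_markers)).astype(float)

        p = geno.mean(axis=0) / 2.0
        Z = geno - 2.0 * p                                   # centre genotypes by allele frequency
        G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))          # genomic relationship matrix

        y = rng.normal(size=n_lines)                         # placeholder phenotypes
        train = np.arange(150)                               # first 150 lines used for training
        lam = 1.0                                            # residual-to-genetic variance ratio (assumed)

        # BLUP of genetic values: g_hat = G[:, train] (G[train, train] + lam*I)^-1 y_train
        g_hat = G[:, train] @ np.linalg.solve(G[np.ix_(train, train)] + lam * np.eye(len(train)), y[train])
        print("predicted genetic values for five held-out lines:", np.round(g_hat[150:155], 3))

    GFBLUP would replace the single G above with the sum of a feature-specific and a residual relationship matrix, each weighted by its own estimated variance component.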

  17. Quantitating Antibody Uptake In Vivo: Conditional Dependence on Antigen Expression Levels

    PubMed Central

    Thurber, Greg M.; Weissleder, Ralph

    2010-01-01

    Purpose: Antibodies form an important class of cancer therapeutics, and there is intense interest in using them for imaging applications in diagnosis and monitoring of cancer treatment. Despite the expanding body of knowledge describing pharmacokinetic and pharmacodynamic interactions of antibodies in vivo, discrepancies remain over the effect of antigen expression level on tumoral uptake with some reports indicating a relationship between uptake and expression and others showing no correlation. Procedures: Using a cell line with high EpCAM expression and moderate EGFR expression, fluorescent antibodies with similar plasma clearance were imaged in vivo. A mathematical model and mouse xenograft experiments were used to describe the effect of antigen expression on uptake of these high affinity antibodies. Results: As predicted by the theoretical model, under subsaturating conditions, uptake of the antibodies in such tumors is similar because localization of both probes is limited by delivery from the vasculature. In a separate experiment, when the tumor is saturated, the uptake becomes dependent on the number of available binding sites. In addition, targeting of small micrometastases is shown to be higher than larger vascularized tumors. Conclusions: These results are consistent with the prediction that high affinity antibody uptake is dependent on antigen expression levels for saturating doses and delivery for subsaturating doses. It is imperative for any probe to understand whether quantitative uptake is a measure of biomarker expression or transport to the region of interest. The data provide support for a predictive theoretical model of antibody uptake, enabling it to be used as a starting point for the design of more efficacious therapies and timely quantitative imaging probes. PMID:20809210

  18. Quantitating antibody uptake in vivo: conditional dependence on antigen expression levels.

    PubMed

    Thurber, Greg M; Weissleder, Ralph

    2011-08-01

    Antibodies form an important class of cancer therapeutics, and there is intense interest in using them for imaging applications in diagnosis and monitoring of cancer treatment. Despite the expanding body of knowledge describing pharmacokinetic and pharmacodynamic interactions of antibodies in vivo, discrepancies remain over the effect of antigen expression level on tumoral uptake with some reports indicating a relationship between uptake and expression and others showing no correlation. Using a cell line with high epithelial cell adhesion molecule expression and moderate epidermal growth factor receptor expression, fluorescent antibodies with similar plasma clearance were imaged in vivo. A mathematical model and mouse xenograft experiments were used to describe the effect of antigen expression on uptake of these high-affinity antibodies. As predicted by the theoretical model, under subsaturating conditions, uptake of the antibodies in such tumors is similar because localization of both probes is limited by delivery from the vasculature. In a separate experiment, when the tumor is saturated, the uptake becomes dependent on the number of available binding sites. In addition, targeting of small micrometastases is shown to be higher than larger vascularized tumors. These results are consistent with the prediction that high affinity antibody uptake is dependent on antigen expression levels for saturating doses and delivery for subsaturating doses. It is imperative for any probe to understand whether quantitative uptake is a measure of biomarker expression or transport to the region of interest. The data provide support for a predictive theoretical model of antibody uptake, enabling it to be used as a starting point for the design of more efficacious therapies and timely quantitative imaging probes.

  19. Uniting Cheminformatics and Chemical Theory To Predict the Intrinsic Aqueous Solubility of Crystalline Druglike Molecules

    PubMed Central

    2014-01-01

    We present four models of solution free-energy prediction for druglike molecules utilizing cheminformatics descriptors and theoretically calculated thermodynamic values. We make predictions of solution free energy using physics-based theory alone and using machine learning/quantitative structure–property relationship (QSPR) models. We also develop machine learning models where the theoretical energies and cheminformatics descriptors are used as combined input. These models are used to predict solvation free energy. While direct theoretical calculation does not give accurate results in this approach, machine learning is able to give predictions with a root mean squared error (RMSE) of ∼1.1 log S units in a 10-fold cross-validation for our Drug-Like-Solubility-100 (DLS-100) dataset of 100 druglike molecules. We find that a model built using energy terms from our theoretical methodology as descriptors is marginally less predictive than one built on Chemistry Development Kit (CDK) descriptors. Combining both sets of descriptors allows a further but very modest improvement in the predictions. However, in some cases, this is a statistically significant enhancement. These results suggest that there is little complementarity between the chemical information provided by these two sets of descriptors, despite their different sources and methods of calculation. Our machine learning models are also able to predict the well-known Solubility Challenge dataset with an RMSE value of 0.9–1.0 log S units. PMID:24564264

  20. REVIEW OF NUMERICAL MODELS FOR PREDICTING THE ENERGY DEPOSITION AND RESULTANT THERMAL RESPONSE OF HUMANS EXPOSED TO ELECTROMAGNETIC FIELDS

    EPA Science Inventory

    For humans exposed to electromagnetic (EM) radiation, the resulting thermophysiologic response is not well understood. Because it is unlikely that this information will be determined from quantitative experimentation, it is necessary to develop theoretical models which predict th...

  1. Evaluation of an ensemble of genetic models for prediction of a quantitative trait.

    PubMed

    Milton, Jacqueline N; Steinberg, Martin H; Sebastiani, Paola

    2014-01-01

    Many genetic markers have been shown to be associated with common quantitative traits in genome-wide association studies. Typically these associated genetic markers have small to modest effect sizes and individually they explain only a small amount of the variability of the phenotype. In order to build a genetic prediction model without fitting a multiple linear regression model with possibly hundreds of genetic markers as predictors, researchers often summarize the joint effect of risk alleles into a genetic score that is used as a covariate in the genetic prediction model. However, the prediction accuracy can be highly variable and selecting the optimal number of markers to be included in the genetic score is challenging. In this manuscript we present a strategy to build an ensemble of genetic prediction models from data and we show that the ensemble-based method makes the challenge of choosing the number of genetic markers more amenable. Using simulated data with varying heritability and number of genetic markers, we compare the predictive accuracy and inclusion of true positive and false positive markers of a single genetic prediction model and our proposed ensemble method. The results show that the ensemble of genetic models tends to include a larger number of genetic variants than a single genetic model and it is more likely to include all of the true genetic markers. This increased sensitivity is obtained at the price of a lower specificity that appears to minimally affect the predictive accuracy of the ensemble.
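
    A hedged sketch of the two ingredients described above, a weighted genetic score used as a single covariate and an ensemble built over resampled training sets, is given below on simulated data. The marker-ranking rule, the number of markers per model, and the bootstrap ensemble are illustrative choices, not the authors' algorithm.

        # Illustrative sketch: weighted genetic score as a covariate, plus a
        # bootstrap ensemble of such score-based models (simulated data).
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        n, m = 1000, 200
        geno = rng.binomial(2, 0.3, size=(n, m)).astype(float)
        beta_true = np.zeros(m)
        beta_true[:10] = rng.normal(0.0, 0.3, 10)            # 10 causal markers
        y = geno @ beta_true + rng.normal(size=n)

        train = rng.choice(n, 700, replace=False)
        test = np.setdiff1d(np.arange(n), train)

        def score_model(idx, k=20):
            """Rank markers by marginal correlation in `idx`, build a weighted
            genetic score from the top k, and regress the phenotype on it."""
            b = np.array([np.corrcoef(geno[idx, j], y[idx])[0, 1] for j in range(m)])
            top = np.argsort(-np.abs(b))[:k]
            score = geno[:, top] @ b[top]
            return LinearRegression().fit(score[idx, None], y[idx]), score

        preds = []
        for _ in range(25):                                  # ensemble over bootstrap resamples
            boot = rng.choice(train, train.size, replace=True)
            reg, score = score_model(boot)
            preds.append(reg.predict(score[test, None]))
        print("ensemble predictive r:", round(float(np.corrcoef(np.mean(preds, axis=0), y[test])[0, 1]), 2))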

  2. The Dopamine Prediction Error: Contributions to Associative Models of Reward Learning

    PubMed Central

    Nasser, Helen M.; Calu, Donna J.; Schoenbaum, Geoffrey; Sharpe, Melissa J.

    2017-01-01

    Phasic activity of midbrain dopamine neurons is currently thought to encapsulate the prediction-error signal described in Sutton and Barto’s (1981) model-free reinforcement learning algorithm. This phasic signal is thought to contain information about the quantitative value of reward, which transfers to the reward-predictive cue after learning. This is argued to endow the reward-predictive cue with the value inherent in the reward, motivating behavior toward cues signaling the presence of reward. Yet theoretical and empirical research has implicated prediction-error signaling in learning that extends far beyond a transfer of quantitative value to a reward-predictive cue. Here, we review the research which demonstrates the complexity of how dopaminergic prediction errors facilitate learning. After briefly discussing the literature demonstrating that phasic dopaminergic signals can act in the manner described by Sutton and Barto (1981), we consider how these signals may also influence attentional processing across multiple attentional systems in distinct brain circuits. Then, we discuss how prediction errors encode and promote the development of context-specific associations between cues and rewards. Finally, we consider recent evidence that shows dopaminergic activity contains information about causal relationships between cues and rewards that reflect information garnered from rich associative models of the world that can be adapted in the absence of direct experience. In discussing this research we hope to support the expansion of how dopaminergic prediction errors are thought to contribute to the learning process beyond the traditional concept of transferring quantitative value. PMID:28275359

  3. Survival Prediction in Pancreatic Ductal Adenocarcinoma by Quantitative Computed Tomography Image Analysis.

    PubMed

    Attiyeh, Marc A; Chakraborty, Jayasree; Doussot, Alexandre; Langdon-Embry, Liana; Mainarich, Shiana; Gönen, Mithat; Balachandran, Vinod P; D'Angelica, Michael I; DeMatteo, Ronald P; Jarnagin, William R; Kingham, T Peter; Allen, Peter J; Simpson, Amber L; Do, Richard K

    2018-04-01

    Pancreatic cancer is a highly lethal cancer with no established a priori markers of survival. Existing nomograms rely mainly on post-resection data and are of limited utility in directing surgical management. This study investigated the use of quantitative computed tomography (CT) features to preoperatively assess survival for pancreatic ductal adenocarcinoma (PDAC) patients. A prospectively maintained database identified consecutive chemotherapy-naive patients with CT angiography and resected PDAC between 2009 and 2012. Variation in CT enhancement patterns was extracted from the tumor region using texture analysis, a quantitative image analysis tool previously described in the literature. Two continuous survival models were constructed, with 70% of the data (training set) using Cox regression, first based only on preoperative serum cancer antigen (CA) 19-9 levels and image features (model A), and then on CA19-9, image features, and the Brennan score (composite pathology score; model B). The remaining 30% of the data (test set) were reserved for independent validation. A total of 161 patients were included in the analysis. Training and test sets contained 113 and 48 patients, respectively. Quantitative image features combined with CA19-9 achieved a c-index of 0.69 [integrated Brier score (IBS) 0.224] on the test data, while combining CA19-9, imaging, and the Brennan score achieved a c-index of 0.74 (IBS 0.200) on the test data. We present two continuous survival prediction models for resected PDAC patients. Quantitative analysis of CT texture features is associated with overall survival. Further work includes applying the model to an external dataset to increase the sample size for training and to determine its applicability.
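
    The modelling step described above, a Cox proportional hazards fit on CA19-9 plus image-derived texture features scored by the concordance index on held-out patients, can be sketched as below with the open-source lifelines package. The file and column names are hypothetical, and the feature set is a stand-in for the texture variables used in the study.

        # Sketch of a preoperative survival model: Cox PH on CA19-9 + CT texture
        # features, evaluated with the c-index. Column names are hypothetical.
        import pandas as pd
        from lifelines import CoxPHFitter
        from lifelines.utils import concordance_index

        df = pd.read_csv("pdac_preop_features.csv")          # hypothetical feature table
        features = ["ca19_9", "tex_entropy", "tex_uniformity", "tex_mean_hu"]
        cols = features + ["os_months", "death_observed"]

        train = df.sample(frac=0.7, random_state=0)          # ~70% training split, as above
        test = df.drop(train.index)

        cph = CoxPHFitter()
        cph.fit(train[cols], duration_col="os_months", event_col="death_observed")

        # Higher partial hazard means worse prognosis, so negate it for the c-index.
        risk = cph.predict_partial_hazard(test[features])
        print("test c-index:", round(concordance_index(test["os_months"], -risk, test["death_observed"]), 2))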

  4. Hardening of particle/oil/water suspensions due to capillary bridges: Experimental yield stress and theoretical interpretation.

    PubMed

    Danov, Krassimir D; Georgiev, Mihail T; Kralchevsky, Peter A; Radulova, Gergana M; Gurkov, Theodor D; Stoyanov, Simeon D; Pelan, Eddie G

    2018-01-01

    greater for the suspension with soybean oil despite its lower interfacial tension against water. The result can be explained by the different contact angles of the two oils, in agreement with the theoretical predictions. The results could contribute to a better understanding, quantitative prediction, and control of the mechanical properties of three-phase (solid/liquid/liquid) capillary suspensions. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Quantitative computed tomography for the prediction of pulmonary function after lung cancer surgery: a simple method using simulation software.

    PubMed

    Ueda, Kazuhiro; Tanaka, Toshiki; Li, Tao-Sheng; Tanaka, Nobuyuki; Hamano, Kimikazu

    2009-03-01

    The prediction of pulmonary functional reserve is mandatory in therapeutic decision-making for patients with resectable lung cancer, especially those with underlying lung disease. Volumetric analysis in combination with densitometric analysis of the affected lung lobe or segment with quantitative computed tomography (CT) helps to identify residual pulmonary function, although the utility of this modality needs investigation. The subjects of this prospective study were 30 patients with resectable lung cancer. A three-dimensional CT lung model was created with voxels representing normal lung attenuation (-600 to -910 Hounsfield units). Residual pulmonary function was predicted by drawing a boundary line between the lung to be preserved and that to be resected, directly on the lung model. The predicted values were correlated with the postoperative measured values. The predicted and measured values corresponded well (r=0.89, p<0.001). Although the predicted values corresponded with values predicted by simple calculation using a segment-counting method (r=0.98), there were two outliers whose pulmonary functional reserves were predicted more accurately by CT than by segment counting. The measured pulmonary functional reserves were significantly higher than the predicted values in patients with extensive emphysematous areas (<-910 Hounsfield units), but not in patients with chronic obstructive pulmonary disease. Quantitative CT yielded accurate prediction of functional reserve after lung cancer surgery and helped to identify patients whose functional reserves are likely to be underestimated. Hence, this modality should be utilized for patients with marginal pulmonary function.
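
    The central quantitative step described above is counting voxels whose attenuation falls inside the "normal lung" window (-910 to -600 HU) within the lung to be preserved. A minimal numpy sketch follows; the masks are assumed to come from an upstream segmentation (or from the boundary drawn on the lung model), and scaling preoperative function by the resulting volume ratio is the usual simplification, not necessarily the exact software implementation.

        # Sketch: fraction of normally attenuating lung (-910 to -600 HU) that
        # would remain after resection, from a CT volume and two boolean masks.
        import numpy as np

        def functional_volume_ml(hu, mask, voxel_mm3, lo=-910, hi=-600):
            """Volume (mL) of voxels in the normal-attenuation window inside `mask`."""
            return ((hu >= lo) & (hu <= hi) & mask).sum() * voxel_mm3 / 1000.0

        rng = np.random.default_rng(0)
        hu = rng.integers(-1000, 100, size=(64, 64, 64))      # placeholder CT volume (HU)
        whole_lung = np.ones(hu.shape, dtype=bool)            # mask of the whole lung (assumed given)
        preserved = np.zeros(hu.shape, dtype=bool)            # mask of the lung to be preserved
        preserved[:, :, :40] = True

        voxel_mm3 = 0.7 * 0.7 * 1.0                           # in-plane spacing x slice thickness (mm^3)
        ratio = functional_volume_ml(hu, preserved, voxel_mm3) / functional_volume_ml(hu, whole_lung, voxel_mm3)
        # Predicted postoperative function ~ preoperative value x preserved functional-volume ratio.
        print(f"preserved fraction of functional lung: {ratio:.2f}")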

  6. Predictive microbiology: Quantitative science delivering quantifiable benefits to the meat industry and other food industries.

    PubMed

    McMeekin, T A

    2007-09-01

    Predictive microbiology is considered in the context of the conference theme "chance, innovation and challenge", together with the impact of quantitative approaches on food microbiology, generally. The contents of four prominent texts on predictive microbiology are analysed and the major contributions of two meat microbiologists, Drs. T.A. Roberts and C.O. Gill, to the early development of predictive microbiology are highlighted. These provide a segue into R&D trends in predictive microbiology, including the Refrigeration Index, an example of science-based, outcome-focussed food safety regulation. Rapid advances in technologies and systems for application of predictive models are indicated and measures to judge the impact of predictive microbiology are suggested in terms of research outputs and outcomes. The penultimate section considers the future of predictive microbiology and advances that will become possible when data on population responses are combined with data derived from physiological and molecular studies in a systems biology approach. Whilst the emphasis is on science and technology for food safety management, it is suggested that decreases in foodborne illness will also arise from minimising human error by changing the food safety culture.

  7. A theoretical physicist's journey into biology: from quarks and strings to cells and whales.

    PubMed

    West, Geoffrey B

    2014-10-08

    Biology will almost certainly be the predominant science of the twenty-first century but, for it to become successfully so, it will need to embrace some of the quantitative, analytic, predictive culture that has made physics so successful. This includes the search for underlying principles, systemic thinking at all scales, the development of coarse-grained models, and closer ongoing collaboration between theorists and experimentalists. This article presents a personal, slightly provocative, perspective of a theoretical physicist working in close collaboration with biologists at the interface between the physical and biological sciences.

  8. Quantitative prediction of perceptual decisions during near-threshold fear detection

    NASA Astrophysics Data System (ADS)

    Pessoa, Luiz; Padmala, Srikanth

    2005-04-01

    A fundamental goal of cognitive neuroscience is to explain how mental decisions originate from basic neural mechanisms. The goal of the present study was to investigate the neural correlates of perceptual decisions in the context of emotional perception. To probe this question, we investigated how fluctuations in functional MRI (fMRI) signals were correlated with behavioral choice during a near-threshold fear detection task. fMRI signals predicted behavioral choice independently of stimulus properties and task accuracy in a network of brain regions linked to emotional processing: posterior cingulate cortex, medial prefrontal cortex, right inferior frontal gyrus, and left insula. We quantified the link between fMRI signals and behavioral choice in a whole-brain analysis by determining choice probabilities by means of signal-detection theory methods. Our results demonstrate that voxel-wise fMRI signals can reliably predict behavioral choice in a quantitative fashion (choice probabilities ranged from 0.63 to 0.78) at levels comparable to neuronal data. We suggest that the conscious decision that a fearful face has been seen is represented across a network of interconnected brain regions that prepare the organism to appropriately handle emotionally challenging stimuli and that regulate the associated emotional response. Keywords: decision making | emotion | functional MRI

  9. Theoretical Insight into Dispersion of Silica Nanoparticles in Polymer Melts.

    PubMed

    Wei, Zhaoyang; Hou, Yaqi; Ning, Nanying; Zhang, Liqun; Tian, Ming; Mi, Jianguo

    2015-07-30

    Silica nanoparticles dispersed in polystyrene, poly(methyl methacrylate), and poly(ethylene oxide) melts have been investigated using a density functional approach. The polymers are regarded as coarse-grained semiflexible chains, and the segment sizes are represented by their Kuhn lengths. The particle-particle and particle-polymer interactions are calculated with the Hamaker theory to reflect the relationship between particles and polymer melts. The effects of particle volume fraction and size on the particle dispersion have been quantitatively determined to evaluate their dispersion/aggregation behavior in these polymer melts. It is shown that theoretical predictions are generally in good agreement with the corresponding experimental results, providing the reasonable verification of particle dispersion/agglomeration and polymer depletion.

  10. Super H-mode: theoretical prediction and initial observations of a new high performance regime for tokamak operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, Philip B.; Solomon, Wayne M.; Burrell, Keith H.

    2015-07-21

    A new “Super H-mode” regime is predicted, which enables pedestal height and predicted fusion performance substantially higher than for H-mode operation. This new regime is predicted to exist by the EPED pedestal model, which calculates criticality constraints for peeling-ballooning and kinetic ballooning modes, and combines them to predict the pedestal height and width. EPED usually predicts a single (“H-mode”) pedestal solution for each set of input parameters, however, in strongly shaped plasmas above a critical density, multiple pedestal solutions are found, including the standard “H-mode” solution, and a “Super H-mode” solution at substantially larger pedestal height and width. The Super H-mode regime is predicted to be accessible by controlling the trajectory of the density, and to increase fusion performance for ITER, as well as for DEMO designs with strong shaping. A set of experiments on DIII-D has identified the predicted Super H-mode regime, and finds pedestal height and width, and their variation with density, in good agreement with theoretical predictions from the EPED model. Finally, the very high pedestal enables operation at high global beta and high confinement, including the highest normalized beta achieved on DIII-D with a quiescent edge.

  11. Reward rate optimization in two-alternative decision making: empirical tests of theoretical predictions.

    PubMed

    Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D

    2009-12-01

    The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response tasks. However, little is known about how participants settle on particular tradeoffs. One possibility is that they select SATs that maximize a subjective rate of reward earned for performance. For the DDM, there exist unique, reward-rate-maximizing values for its threshold and starting point parameters in free-response tasks that reward correct responses (R. Bogacz, E. Brown, J. Moehlis, P. Holmes, & J. D. Cohen, 2006). These optimal values vary as a function of response-stimulus interval, prior stimulus probability, and relative reward magnitude for correct responses. We tested the resulting quantitative predictions regarding response time, accuracy, and response bias under these task manipulations and found that grouped data conformed well to the predictions of an optimally parameterized DDM.
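
    The reward-rate-maximizing threshold referred to above follows from the standard closed-form drift-diffusion expressions for error rate and mean decision time (as in Bogacz et al., 2006, cited above). The sketch below evaluates the reward rate for an unbiased DDM and grid-searches the threshold; all parameter values are illustrative.

        # Reward rate for an unbiased drift-diffusion model (drift A, noise c,
        # thresholds at +/-z) and a grid search for the optimal threshold.
        import numpy as np

        def reward_rate(z, A=0.2, c=0.33, T0=0.3, D=1.0, Dp=0.0):
            """Rewards per second given non-decision time T0, response-stimulus
            interval D, and an extra post-error delay Dp."""
            k = A * z / c**2
            er = 1.0 / (1.0 + np.exp(2.0 * k))        # error rate
            dt = (z / A) * np.tanh(k)                 # mean decision time
            return (1.0 - er) / (dt + T0 + D + Dp * er)

        thresholds = np.linspace(0.01, 2.0, 500)
        z_opt = thresholds[int(np.argmax([reward_rate(z) for z in thresholds]))]
        print(f"reward-rate-optimal threshold: {z_opt:.2f}")

    Changing the response-stimulus interval, prior stimulus probability, or relative reward magnitude shifts this optimum, which is the source of the quantitative predictions tested in the paper.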

  12. Toxicity challenges in environmental chemicals: Prediction of human plasma protein binding through quantitative structure-activity relationship (QSAR) models

    EPA Science Inventory

    The present study explores the merit of utilizing available pharmaceutical data to construct a quantitative structure-activity relationship (QSAR) for prediction of the fraction of a chemical unbound to plasma protein (Fub) in environmentally relevant compounds. Independent model...

  13. A priori Prediction of Neoadjuvant Chemotherapy Response and Survival in Breast Cancer Patients using Quantitative Ultrasound

    PubMed Central

    Tadayyon, Hadi; Sannachi, Lakshmanan; Gangeh, Mehrdad J.; Kim, Christina; Ghandi, Sonal; Trudeau, Maureen; Pritchard, Kathleen; Tran, William T.; Slodkowska, Elzbieta; Sadeghi-Naini, Ali; Czarnota, Gregory J.

    2017-01-01

    Quantitative ultrasound (QUS) can probe tissue structure and analyze tumour characteristics. Using a 6-MHz ultrasound system, radiofrequency data were acquired from 56 locally advanced breast cancer patients prior to their neoadjuvant chemotherapy (NAC) and QUS texture features were computed from regions of interest in tumour cores and their margins as potential predictive and prognostic indicators. Breast tumour molecular features were also collected and used for analysis. A multiparametric QUS model was constructed, which demonstrated a response prediction accuracy of 88% and ability to predict patient 5-year survival rates (p = 0.01). QUS features demonstrated superior performance in comparison to molecular markers and the combination of QUS and molecular markers did not improve response prediction. This study demonstrates, for the first time, that non-invasive QUS features in the core and margin of breast tumours can indicate breast cancer response to neoadjuvant chemotherapy (NAC) and predict five-year recurrence-free survival. PMID:28401902

  14. A priori Prediction of Neoadjuvant Chemotherapy Response and Survival in Breast Cancer Patients using Quantitative Ultrasound.

    PubMed

    Tadayyon, Hadi; Sannachi, Lakshmanan; Gangeh, Mehrdad J; Kim, Christina; Ghandi, Sonal; Trudeau, Maureen; Pritchard, Kathleen; Tran, William T; Slodkowska, Elzbieta; Sadeghi-Naini, Ali; Czarnota, Gregory J

    2017-04-12

    Quantitative ultrasound (QUS) can probe tissue structure and analyze tumour characteristics. Using a 6-MHz ultrasound system, radiofrequency data were acquired from 56 locally advanced breast cancer patients prior to their neoadjuvant chemotherapy (NAC) and QUS texture features were computed from regions of interest in tumour cores and their margins as potential predictive and prognostic indicators. Breast tumour molecular features were also collected and used for analysis. A multiparametric QUS model was constructed, which demonstrated a response prediction accuracy of 88% and ability to predict patient 5-year survival rates (p = 0.01). QUS features demonstrated superior performance in comparison to molecular markers and the combination of QUS and molecular markers did not improve response prediction. This study demonstrates, for the first time, that non-invasive QUS features in the core and margin of breast tumours can indicate breast cancer response to neoadjuvant chemotherapy (NAC) and predict five-year recurrence-free survival.

  15. Theoretical predicting of permeability evolution in damaged rock under compressive stress

    NASA Astrophysics Data System (ADS)

    Vu, M. N.; Nguyen, S. T.; To, Q. D.; Dao, N. H.

    2017-05-01

    This paper outlines an analytical model of crack-growth-induced permeability changes. A theoretical solution for the effective permeability of cracked porous media is derived. The fluid flow obeys Poiseuille's law along the crack and Darcy's law in the porous matrix. This solution exhibits a percolation threshold for any type of crack distribution apart from a parallel crack distribution. The physical behaviour of fluid flow through a cracked porous material is well reproduced by the proposed model. Coupling this effective permeability with an analytical expression for crack growth under compression enables the modelling of the permeability variation due to stress-induced cracking in a porous rock. This incorporation allows the prediction of the permeability change of a porous rock embedding an anisotropic crack distribution from any initial crack density, that is, below, near, or above the percolation threshold. The interaction between cracks is not explicitly taken into account. The model is applicable to both micro- and macrocracks.

  16. Detailed Quantitative Classifications of Galaxy Morphology

    NASA Astrophysics Data System (ADS)

    Nair, Preethi

    2018-01-01

    Understanding the physical processes responsible for the growth of galaxies is one of the key challenges in extragalactic astronomy. The assembly history of a galaxy is imprinted in a galaxy’s detailed morphology. The bulge-to-total ratio of galaxies, the presence or absence of bars, rings, spiral arms, tidal tails etc, all have implications for the past merger, star formation, and feedback history of a galaxy. However, current quantitative galaxy classification schemes are only useful for broad binning. They cannot classify or exploit the wide variety of galaxy structures seen in nature. Therefore, comparisons of observations with theoretical predictions of secular structure formation have only been conducted on small samples of visually classified galaxies. However large samples are needed to disentangle the complex physical processes of galaxy formation. With the advent of large surveys, like the Sloan Digital Sky Survey (SDSS) and the upcoming Large Synoptic Survey Telescope (LSST) and WFIRST, the problem of statistics will be resolved. However, the need for a robust quantitative classification scheme will still remain. Here I will present early results on promising machine learning algorithms that are providing detailed classifications, identifying bars, rings, multi-armed spiral galaxies, and Hubble type.

  17. QTest: Quantitative Testing of Theories of Binary Choice.

    PubMed

    Regenwetter, Michel; Davis-Stober, Clintin P; Lim, Shiau Hong; Guo, Ying; Popova, Anna; Zwilling, Chris; Cha, Yun-Shil; Messner, William

    2014-01-01

    The goal of this paper is to make modeling and quantitative testing accessible to behavioral decision researchers interested in substantive questions. We provide a novel, rigorous, yet very general, quantitative diagnostic framework for testing theories of binary choice. This permits the nontechnical scholar to proceed far beyond traditionally rather superficial methods of analysis, and it permits the quantitatively savvy scholar to triage theoretical proposals before investing effort into complex and specialized quantitative analyses. Our theoretical framework links static algebraic decision theory with observed variability in behavioral binary choice data. The paper is supplemented with a custom-designed public-domain statistical analysis package, the QTest software. We illustrate our approach with a quantitative analysis using published laboratory data, including tests of novel versions of "Random Cumulative Prospect Theory." A major asset of the approach is the potential to distinguish decision makers who have a fixed preference and commit errors in observed choices from decision makers who waver in their preferences.

  18. QTest: Quantitative Testing of Theories of Binary Choice

    PubMed Central

    Regenwetter, Michel; Davis-Stober, Clintin P.; Lim, Shiau Hong; Guo, Ying; Popova, Anna; Zwilling, Chris; Cha, Yun-Shil; Messner, William

    2014-01-01

    The goal of this paper is to make modeling and quantitative testing accessible to behavioral decision researchers interested in substantive questions. We provide a novel, rigorous, yet very general, quantitative diagnostic framework for testing theories of binary choice. This permits the nontechnical scholar to proceed far beyond traditionally rather superficial methods of analysis, and it permits the quantitatively savvy scholar to triage theoretical proposals before investing effort into complex and specialized quantitative analyses. Our theoretical framework links static algebraic decision theory with observed variability in behavioral binary choice data. The paper is supplemented with a custom-designed public-domain statistical analysis package, the QTest software. We illustrate our approach with a quantitative analysis using published laboratory data, including tests of novel versions of “Random Cumulative Prospect Theory.” A major asset of the approach is the potential to distinguish decision makers who have a fixed preference and commit errors in observed choices from decision makers who waver in their preferences. PMID:24999495

  19. Novel Application of Quantitative Single-Photon Emission Computed Tomography/Computed Tomography to Predict Early Response to Methimazole in Graves' Disease

    PubMed Central

    Kim, Hyun Joo; Bang, Ji-In; Kim, Ji-Young; Moon, Jae Hoon; So, Young

    2017-01-01

    Objective: Since Graves' disease (GD) is resistant to antithyroid drugs (ATDs), an accurate quantitative thyroid function measurement is required for the prediction of early responses to ATD. Quantitative parameters derived from the novel technology, single-photon emission computed tomography/computed tomography (SPECT/CT), were investigated for the prediction of achievement of euthyroidism after methimazole (MMI) treatment in GD. Materials and Methods: A total of 36 GD patients (10 males, 26 females; mean age, 45.3 ± 13.8 years) were enrolled for this study, from April 2015 to January 2016. They underwent quantitative thyroid SPECT/CT 20 minutes post-injection of 99mTc-pertechnetate (5 mCi). The association between the time to biochemical euthyroidism after MMI treatment and %uptake, standardized uptake value (SUV), functional thyroid mass (SUVmean × thyroid volume) from the SPECT/CT, and clinical/biochemical variables was investigated. Results: GD patients had a significantly greater %uptake (6.9 ± 6.4%) than historical control euthyroid patients (n = 20, 0.8 ± 0.5%, p < 0.001) from the same quantitative SPECT/CT protocol. Euthyroidism was achieved in 14 patients at 156 ± 62 days post-MMI treatment, but 22 patients had still not achieved euthyroidism by the last follow-up time-point (208 ± 80 days). In the univariate Cox regression analysis, the initial MMI dose (p = 0.014), %uptake (p = 0.015), and functional thyroid mass (p = 0.016) were significant predictors of euthyroidism in response to MMI treatment. However, only %uptake remained significant in a multivariate Cox regression analysis (p = 0.034). A %uptake cutoff of 5.0% dichotomized the faster responding versus the slower responding GD patients (p = 0.006). Conclusion: A novel parameter of thyroid %uptake from quantitative SPECT/CT is a predictive indicator of an early response to MMI in GD patients. PMID:28458607

  20. Genetic programming based quantitative structure-retention relationships for the prediction of Kovats retention indices.

    PubMed

    Goel, Purva; Bapat, Sanket; Vyas, Renu; Tambe, Amruta; Tambe, Sanjeev S

    2015-11-13

    The development of quantitative structure-retention relationships (QSRR) aims at constructing an appropriate linear/nonlinear model for the prediction of the retention behavior (such as the Kovats retention index) of a solute on a chromatographic column. Commonly, multi-linear regression and artificial neural networks are used in QSRR development in gas chromatography (GC). In this study, an artificial intelligence based data-driven modeling formalism, namely genetic programming (GP), has been introduced for the development of quantitative structure based models predicting Kovats retention indices (KRI). The novelty of the GP formalism is that given an example dataset, it searches and optimizes both the form (structure) and the parameters of an appropriate linear/nonlinear data-fitting model. Thus, it is not necessary to pre-specify the form of the data-fitting model in GP-based modeling. These models are also less complex, simple to understand, and easy to deploy. The effectiveness of GP in constructing QSRRs has been demonstrated by developing models predicting KRIs of light hydrocarbons (case study I) and adamantane derivatives (case study II). In each case study, two-, three- and four-descriptor models have been developed using the KRI data available in the literature. The results of these studies clearly indicate that the GP-based models possess an excellent KRI prediction accuracy and generalization capability. Specifically, the best performing four-descriptor models in both case studies have yielded high (>0.9) values of the coefficient of determination (R²) and low values of root mean squared error (RMSE) and mean absolute percent error (MAPE) for training, test and validation set data. The characteristic feature of this study is that it introduces a practical and an effective GP-based method for developing QSRRs in gas chromatography that can be gainfully utilized for developing other types of data-driven models in chromatography science.
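
    As an illustration of the genetic-programming step, the sketch below evolves a symbolic-regression model for Kovats retention indices from four descriptors using the open-source gplearn package (one possible GP implementation, not the one used by the authors). The data file and descriptor names are hypothetical.

        # Hedged sketch: GP-based QSRR for Kovats retention indices using
        # gplearn's SymbolicRegressor. File and column names are hypothetical.
        import pandas as pd
        from gplearn.genetic import SymbolicRegressor
        from sklearn.metrics import mean_absolute_percentage_error
        from sklearn.model_selection import train_test_split

        data = pd.read_csv("kri_descriptors.csv")                 # hypothetical descriptor table
        X = data[["desc1", "desc2", "desc3", "desc4"]].values      # a four-descriptor model
        y = data["kri"].values

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        gp = SymbolicRegressor(population_size=2000, generations=30,
                               function_set=("add", "sub", "mul", "div"),
                               parsimony_coefficient=0.001, random_state=0)
        gp.fit(X_tr, y_tr)

        print("evolved expression:", gp._program)                  # GP yields a readable formula
        print("test MAPE:", mean_absolute_percentage_error(y_te, gp.predict(X_te)))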

  1. Combining quantitative trait loci analysis with physiological models to predict genotype-specific transpiration rates.

    PubMed

    Reuning, Gretchen A; Bauerle, William L; Mullen, Jack L; McKay, John K

    2015-04-01

    Transpiration is controlled by evaporative demand and stomatal conductance (gs), and there can be substantial genetic variation in gs. A key parameter in empirical models of transpiration is minimum stomatal conductance (g0), a trait that can be measured and has a large effect on gs and transpiration. In Arabidopsis thaliana, g0 exhibits both environmental and genetic variation, and quantitative trait loci (QTL) have been mapped. We used this information to create a genetically parameterized empirical model to predict transpiration of genotypes. For the parental lines, this worked well. However, in a recombinant inbred population, the predictions proved less accurate. When based only upon their genotype at a single g0 QTL, genotypes were less distinct than our model predicted. Follow-up experiments indicated that both genotype by environment interaction and a polygenic inheritance complicate the application of genetic effects into physiological models. The use of ecophysiological or 'crop' models for predicting transpiration of novel genetic lines will benefit from incorporating further knowledge of the genetic control and degree of independence of core traits/parameters underlying gs variation. © 2014 John Wiley & Sons Ltd.
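
    A toy version of the approach above is to let the mapped QTL set the genotype-specific value of g0 inside a Ball-Berry-type conductance model and propagate that to transpiration. The coefficients, the allele-to-g0 mapping, and the simple E = gs·VPD/P approximation below are illustrative assumptions, not the study's parameterization.

        # Toy sketch: genotype-specific minimum conductance g0 feeding a
        # Ball-Berry-type stomatal conductance model; values are illustrative.
        def stomatal_conductance(A, rh, cs, g0, g1=9.0):
            """Ball-Berry form gs = g0 + g1 * A * rh / cs (mol m-2 s-1)."""
            return g0 + g1 * A * rh / cs

        def transpiration(gs, vpd_kpa, p_kpa=101.3):
            """Leaf transpiration approximated as gs * VPD / P (mol m-2 s-1)."""
            return gs * vpd_kpa / p_kpa

        # Hypothetical g0 values (mol m-2 s-1) attached to the two QTL alleles.
        g0_by_allele = {"parent_A": 0.01, "parent_B": 0.04}

        for allele, g0 in g0_by_allele.items():
            gs = stomatal_conductance(A=12.0, rh=0.6, cs=400.0, g0=g0)
            print(allele, f"gs = {gs:.3f} mol m-2 s-1, E = {transpiration(gs, 1.5):.4f} mol m-2 s-1")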

  2. A theoretical trombone

    NASA Astrophysics Data System (ADS)

    LoPresto, Michael C.

    2014-09-01

    What follows is a description of a theoretical model designed to calculate the playing frequencies of the musical pitches produced by a trombone. The model is based on quantitative treatments that demonstrate the effects of the flaring bell and cup-shaped mouthpiece sections on these frequencies and can be used to calculate frequencies that compare well to both the desired frequencies of the musical pitches and those actually played on a real trombone.

  3. Prediction of enzyme classes from 3D structure: a general model and examples of experimental-theoretic scoring of peptide mass fingerprints of Leishmania proteins.

    PubMed

    Concu, Riccardo; Dea-Ayuela, Maria A; Perez-Montoto, Lazaro G; Bolas-Fernández, Francisco; Prado-Prado, Francisco J; Podda, Gianni; Uriarte, Eugenio; Ubeira, Florencio M; González-Díaz, Humberto

    2009-09-01

    The number of protein and peptide structures included in the Protein Data Bank (PDB) and GenBank without functional annotation has increased. Consequently, there is a high demand for theoretical models to predict these functions. Here, we trained and validated, with an external set, a Markov Chain Model (MCM) that classifies proteins by their possible mechanism of action according to Enzyme Classification (EC) number. The methodology proposed is essentially new, and enables prediction of all EC classes with a single equation without the need for an equation for each class or nonlinear models with multiple outputs. In addition, the model may be used to predict whether one peptide presents a positive or negative contribution to the activity of the same EC class. The model predicts the first EC number for 106 out of 151 (70.2%) oxidoreductases, 178/178 (100%) transferases, 223/223 (100%) hydrolases, 64/85 (75.3%) lyases, 74/74 (100%) isomerases, and 100/100 (100%) ligases, as well as 745/811 (91.9%) nonenzymes. It is important to underline that this method may help us predict new enzyme proteins or select peptide candidates that improve enzyme activity, which may be of interest for the prediction of new drugs or drug targets. To illustrate the model's application, we report the 2D-Electrophoresis (2DE) isolation from Leishmania infantum as well as MALDI-TOF Mass Spectra characterization and theoretical study of the Peptide Mass Fingerprints (PMFs) of a new protein sequence. The theoretical study focused on MASCOT, BLAST alignment, and alignment-free QSAR prediction of the contribution of 29 peptides found in the PMF of the new protein to specific enzyme action. This combined strategy may be used to identify and predict peptides of prokaryote and eukaryote parasites and their hosts as well as other superior organisms, which may be of interest in drug development or target identification.

  4. The costs and benefits of occasional sex: theoretical predictions and a case study.

    PubMed

    D'Souza, Thomas G; Michiels, Nico K

    2010-01-01

    Theory predicts that occasional sexual reproduction in predominantly parthenogenetic organisms offers all the advantages of obligate sexuality without paying its full costs. However, empirical examples identifying and evaluating the costs and benefits of rare sex are scarce. After reviewing the theoretical perspective on rare sex, we present our findings of potential costs and benefits of occasional sex in polyploid, sperm-dependent parthenogens of the planarian flatworm Schmidtea polychroa. Despite costs associated with the production of less fertile tetraploids as sexual intermediates, the benefits of rare sex prevail in S. polychroa and may be sufficiently strong to prevent extinction of parthenogenetic populations. This offers an explanation for the dominance of parthenogenesis in S. polychroa. We discuss the enigmatic question why not all organisms show a mixed reproduction mode.

  5. Theoretical magnetograms based on quantitative simulation of a magnetospheric substorm

    NASA Technical Reports Server (NTRS)

    Chen, C.-K.; Wolf, R. A.; Karty, J. L.; Harel, M.

    1982-01-01

    Substorm currents derived from the Rice University computer simulation of the September 19, 1976 substorm event are used to compute theoretical magnetograms as a function of universal time for various stations, integrating the Biot-Savart law over a maze of about 2700 wires and bands that carry the ring, Birkeland and horizontal ionospheric currents. A comparison of theoretical results with corresponding observations leads to a claim of general agreement, especially for stations at high and middle magnetic latitudes. Model results suggest that the ground magnetic field perturbations arise from complicated combinations of different kinds of currents, and that magnetic field disturbances due to different but related currents cancel each other out despite the inapplicability of Fukushima's (1973) theorem. It is also found that the dawn-dusk asymmetry in the horizontal magnetic field disturbance component at low latitudes is due to a net downward Birkeland current at noon, a net upward current at midnight, and, generally, antisunward-flowing electrojets.
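
    The numerical core of the magnetogram calculation described above is a Biot-Savart summation over a few thousand straight current-carrying segments. The sketch below shows that operation for a handful of segments; it is a generic illustration, not the Rice University simulation code, and the example current and geometry are arbitrary.

        # Generic Biot-Savart sum over straight current segments, each broken
        # into short elements dl (illustrative; not the simulation code itself).
        import numpy as np

        MU0 = 4e-7 * np.pi                                    # vacuum permeability (T m / A)

        def biot_savart(obs, seg_start, seg_end, currents, n_sub=20):
            """Magnetic field (T) at `obs` from straight segments carrying `currents` (A)."""
            B = np.zeros(3)
            for a, b, I in zip(seg_start, seg_end, currents):
                pts = np.linspace(a, b, n_sub + 1)            # subdivide the segment
                mids = 0.5 * (pts[:-1] + pts[1:])             # element midpoints
                dls = np.diff(pts, axis=0)                    # element vectors dl
                r = obs - mids                                # vectors from elements to obs
                rnorm = np.linalg.norm(r, axis=1, keepdims=True)
                B += MU0 * I / (4.0 * np.pi) * np.sum(np.cross(dls, r) / rnorm**3, axis=0)
            return B

        # Example: ground-level perturbation from a single 10^5 A wire at 2 Earth radii altitude.
        Re = 6.371e6
        start = np.array([[-5.0 * Re, 0.0, 2.0 * Re]])
        end = np.array([[5.0 * Re, 0.0, 2.0 * Re]])
        print("B (nT):", 1e9 * biot_savart(np.zeros(3), start, end, currents=[1e5]))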

  6. Prediction of trabecular bone qualitative properties using scanning quantitative ultrasound

    NASA Astrophysics Data System (ADS)

    Qin, Yi-Xian; Lin, Wei; Mittra, Erik; Xia, Yi; Cheng, Jiqi; Judex, Stefan; Rubin, Clint; Müller, Ralph

    2013-11-01

    Microgravity-induced bone loss represents a critical health problem in astronauts, particularly in the weight-supporting skeleton, where it leads to osteopenia and an increased risk of fracture. The lack of a suitable evaluation modality makes it difficult to monitor skeletal status during long-term space missions and increases the potential risk of complications. Such disuse osteopenia and osteoporosis compromise trabecular bone density and architectural and mechanical properties. While X-ray based imaging would not be practical in space, quantitative ultrasound may provide advantages to characterize bone density and strength through wave propagation in complex trabecular structure. This study used a scanning confocal acoustic diagnostic and navigation system (SCAN) to evaluate trabecular bone quality in 60 cubic trabecular samples harvested from adult sheep. Ultrasound image based SCAN measurements of structural and strength properties were validated by μCT and compressive mechanical testing. The results indicated a moderately strong negative correlation between broadband ultrasonic attenuation (BUA) and μCT-determined bone volume fraction (BV/TV, R² = 0.53). Strong correlations were observed between ultrasound velocity (UV) and bone's mechanical strength and structural parameters, i.e., bulk Young's modulus (R² = 0.67) and BV/TV (R² = 0.85). The predictions for bone density and mechanical strength were significantly improved by using a linear combination of both BUA and UV, yielding R² = 0.92 for BV/TV and R² = 0.71 for bulk Young's modulus. These results imply that quantitative ultrasound can characterize trabecular structural and mechanical properties through measurements of particular ultrasound parameters, and potentially provide an excellent estimation of bone's structural integrity.

  7. Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships (MCBIOS)

    EPA Science Inventory

    Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships. Jie Liu (1,2), Richard Judson (1), Matthew T. Martin (1), Huixiao Hong (3), Imran Shah (1). (1) National Center for Computational Toxicology (NCCT), US EPA, RTP, NC...

  8. Consideration of the Aluminum Distribution in Zeolites in Theoretical and Experimental Catalysis Research

    DOE PAGES

    Knott, Brandon C.; Nimlos, Claire T.; Robichaud, David J.; ...

    2017-12-11

    Research efforts in zeolite catalysis have become increasingly cognizant of the diversity in structure and function resulting from the distribution of framework aluminum atoms, through emerging reports of catalytic phenomena that fall outside those recognizable as the shape-selective ones emblematic of its earlier history. Molecular-level descriptions of how active-site distributions affect catalysis are an aspirational goal articulated frequently in experimental and theoretical research, yet they are limited by imprecise knowledge of the structure and behavior of the zeolite materials under interrogation. In experimental research, higher precision can result from more reliable control of structure during synthesis and from more robust and quantitative structural and kinetic characterization probes. In theoretical research, construction of models with specific aluminum locations and distributions seldom capture the heterogeneity inherent to the materials studied by experiment. In this Perspective, we discuss research findings that appropriately frame the challenges in developing more predictive synthesis-structure-function relations for zeolites, highlighting studies on ZSM-5 zeolites that are among the most structurally complex molecular sieve frameworks and the most widely studied because of their versatility in commercial applications. We discuss research directions to address these challenges and forge stronger connections between zeolite structure, composition, and active sites to catalytic function. Such connections promise to aid in bridging the findings of theoretical and experimental catalysis research, and transforming zeolite active site design from an empirical endeavor into a more predictable science founded on validated models.

  9. Consideration of the Aluminum Distribution in Zeolites in Theoretical and Experimental Catalysis Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knott, Brandon C.; Nimlos, Claire T.; Robichaud, David J.

    Research efforts in zeolite catalysis have become increasingly cognizant of the diversity in structure and function resulting from the distribution of framework aluminum atoms, through emerging reports of catalytic phenomena that fall outside those recognizable as the shape-selective ones emblematic of its earlier history. Molecular-level descriptions of how active-site distributions affect catalysis are an aspirational goal articulated frequently in experimental and theoretical research, yet they are limited by imprecise knowledge of the structure and behavior of the zeolite materials under interrogation. In experimental research, higher precision can result from more reliable control of structure during synthesis and from more robust and quantitative structural and kinetic characterization probes. In theoretical research, construction of models with specific aluminum locations and distributions seldom capture the heterogeneity inherent to the materials studied by experiment. In this Perspective, we discuss research findings that appropriately frame the challenges in developing more predictive synthesis-structure-function relations for zeolites, highlighting studies on ZSM-5 zeolites that are among the most structurally complex molecular sieve frameworks and the most widely studied because of their versatility in commercial applications. We discuss research directions to address these challenges and forge stronger connections between zeolite structure, composition, and active sites to catalytic function. Such connections promise to aid in bridging the findings of theoretical and experimental catalysis research, and transforming zeolite active site design from an empirical endeavor into a more predictable science founded on validated models.

  10. Predictive values of semi-quantitative procalcitonin test and common biomarkers for the clinical outcomes of community-acquired pneumonia.

    PubMed

    Ugajin, Motoi; Yamaki, Kenichi; Hirasawa, Natsuko; Yagi, Takeo

    2014-04-01

    The semi-quantitative serum procalcitonin test (Brahms PCT-Q) is available conveniently in clinical practice. However, there are few data on the relationship between results for this semi-quantitative procalcitonin test and clinical outcomes of community-acquired pneumonia (CAP). We investigated the usefulness of this procalcitonin test for predicting the clinical outcomes of CAP in comparison with severity scoring systems and the blood urea nitrogen/serum albumin (B/A) ratio, which has been reported to be a simple but reliable prognostic indicator in our prior CAP study. This retrospective study included data from subjects who were hospitalized for CAP from August 2010 through October 2012 and who were administered the semi-quantitative serum procalcitonin test on admission. The demographic characteristics; laboratory biomarkers; microbiological test results; Pneumonia Severity Index scores; confusion, urea nitrogen, breathing frequency, blood pressure, ≥ 65 years of age (CURB-65) scale scores; and age, dehydration, respiratory failure, orientation disturbance, pressure (A-DROP) scale scores on hospital admission were retrieved from their medical charts. The outcomes were mortality within 28 days of hospital admission and the need for intensive care. Of the 213 subjects with CAP who were enrolled in the study, 20 died within 28 days of hospital admission, and 32 required intensive care. Mortality did not differ significantly among subjects with different semi-quantitative serum procalcitonin levels; however, subjects with serum procalcitonin levels ≥ 10.0 ng/mL were more likely to require intensive care than those with lower levels (P < .001). The elevation of semi-quantitative serum procalcitonin levels was more frequently observed in subjects with proven etiology, especially pneumococcal pneumonia. Using the receiver operating characteristic curves for mortality, the area under the curve was 0.86 for Pneumonia Severity Index class, 0.81 for B/A ratio, 0

  11. Quantitative analysis and predictive engineering of self-rolling of nanomembranes under anisotropic mismatch strain

    NASA Astrophysics Data System (ADS)

    Chen, Cheng; Song, Pengfei; Meng, Fanchao; Li, Xiao; Liu, Xinyu; Song, Jun

    2017-12-01

    The present work presents a quantitative modeling framework for investigating the self-rolling of nanomembranes under different lattice mismatch strain anisotropy. The effect of transverse mismatch strain on the roll-up direction and curvature has been systematically studied employing both analytical modeling and numerical simulations. The bidirectional nature of the self-rolling of nanomembranes and the critical role of transverse strain in affecting the rolling behaviors have been demonstrated. Two fabrication strategies, i.e., third-layer deposition and corner geometry engineering, have been proposed to predictively manipulate the bidirectional rolling competition of strained nanomembranes, so as to achieve controlled, unidirectional roll-up. In particular for the strategy of corner engineering, microfabrication experiments have been performed to showcase its practical application and effectiveness. Our study offers new mechanistic knowledge towards understanding and predictive engineering of self-rolling of nanomembranes with improved roll-up yield.

  12. Effect of genetic architecture on the prediction accuracy of quantitative traits in samples of unrelated individuals.

    PubMed

    Morgante, Fabio; Huang, Wen; Maltecca, Christian; Mackay, Trudy F C

    2018-06-01

    Predicting complex phenotypes from genomic data is a fundamental aim of animal and plant breeding, where we wish to predict genetic merits of selection candidates; and of human genetics, where we wish to predict disease risk. While genomic prediction models work well with populations of related individuals and high linkage disequilibrium (LD) (e.g., livestock), comparable models perform poorly for populations of unrelated individuals and low LD (e.g., humans). We hypothesized that low prediction accuracies in the latter situation may occur when the genetics architecture of the trait departs from the infinitesimal and additive architecture assumed by most prediction models. We used simulated data for 10,000 lines based on sequence data from a population of unrelated, inbred Drosophila melanogaster lines to evaluate this hypothesis. We show that, even in very simplified scenarios meant as a stress test of the commonly used Genomic Best Linear Unbiased Predictor (G-BLUP) method, using all common variants yields low prediction accuracy regardless of the trait genetic architecture. However, prediction accuracy increases when predictions are informed by the genetic architecture inferred from mapping the top variants affecting main effects and interactions in the training data, provided there is sufficient power for mapping. When the true genetic architecture is largely or partially due to epistatic interactions, the additive model may not perform well, while models that account explicitly for interactions generally increase prediction accuracy. Our results indicate that accounting for genetic architecture can improve prediction accuracy for quantitative traits.

  13. Phase noise optimization in temporal phase-shifting digital holography with partial coherence light sources and its application in quantitative cell imaging.

    PubMed

    Remmersmann, Christian; Stürwald, Stephan; Kemper, Björn; Langehanenberg, Patrik; von Bally, Gert

    2009-03-10

    In temporal phase-shifting-based digital holographic microscopy, high-resolution phase contrast imaging requires optimized conditions for hologram recording and phase retrieval. To optimize the phase resolution, using the example of a variable three-step algorithm, a theoretical analysis of statistical errors, digitization errors, uncorrelated errors, and errors due to a misaligned temporal phase shift is carried out. In a second step the theoretically predicted results are compared to the measured phase noise obtained from comparative experimental investigations with several coherent and partially coherent light sources. Finally, the applicability for noise reduction is demonstrated by quantitative phase contrast imaging of pancreas tumor cells.
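
    As a concrete illustration of the phase-retrieval step underlying such an analysis, the sketch below simulates a variable three-step phase-shifting reconstruction on a synthetic fringe signal; the shift angle, modulation depth, and noise level are assumed values, and the algorithm recovers the phase only up to a constant offset.

```python
import numpy as np

# Variable three-step phase-shifting sketch (synthetic data, assumed parameters).
rng = np.random.default_rng(0)
N = 256
phi = np.linspace(0, 4 * np.pi, N)         # "true" object phase along one line
A, B, alpha = 1.0, 0.8, np.pi / 2          # bias, modulation, phase-shift angle

# Three interferograms with phase shifts 0, alpha, 2*alpha plus additive noise
I = [A + B * np.cos(phi + k * alpha) + rng.normal(0, 0.01, N) for k in range(3)]

# Phase retrieval for an arbitrary shift alpha; the result equals phi + alpha (wrapped)
num = (1 - np.cos(alpha)) * (I[0] - I[2])
den = np.sin(alpha) * (2 * I[1] - I[0] - I[2])
phi_wrapped = np.arctan2(num, den)

phi_rec = np.unwrap(phi_wrapped) - alpha   # unwrap and remove the constant offset
print("max abs phase error (rad):", np.max(np.abs(phi_rec - phi)))
```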

  14. Clusters of DNA induced by ionizing radiation: formation of short DNA fragments. I. Theoretical modeling

    NASA Technical Reports Server (NTRS)

    Holley, W. R.; Chatterjee, A.

    1996-01-01

    We have developed a general theoretical model for the interaction of ionizing radiation with chromatin. Chromatin is modeled as a 30-nm-diameter solenoidal fiber comprised of 20 turns of nucleosomes, 6 nucleosomes per turn. Charged-particle tracks are modeled by partitioning the energy deposition between the primary track core, resulting from glancing collisions with 100 eV or less per event, and delta rays due to knock-on collisions involving energy transfers >100 eV. A Monte Carlo simulation incorporates damages due to the following molecular mechanisms: (1) ionization of water molecules leading to the formation of OH, H, eaq, etc.; (2) OH attack on sugar molecules leading to strand breaks; (3) OH attack on bases; (4) direct ionization of the sugar molecules leading to strand breaks; (5) direct ionization of the bases. Our calculations predict significant clustering of damage both locally, over regions up to 40 bp, and regionally, over regions extending to several kilobase pairs. A characteristic feature of the regional damage predicted by our model is the production of short fragments of DNA associated with multiple nearby strand breaks. The shapes of the spectra of DNA fragment lengths depend on the symmetries or approximate symmetries of the chromatin structure. Such fragments have subsequently been detected experimentally and are reported in an accompanying paper (B. Rydberg, Radiat. Res. 145, 200-209, 1996) after exposure to both high- and low-LET radiation. The overall measured yields agree well quantitatively with the theoretical predictions. Our theoretical results predict the existence of a strong peak at about 85 bp, which represents the revolution period about the nucleosome. Other peaks at multiples of about 1,000 bp correspond to the periodicity of the particular solenoid model of chromatin used in these calculations. Theoretical results in combination with experimental data on fragmentation spectra may help determine the consensus or average structure of the

  15. Predicting total organic halide formation from drinking water chlorination using quantitative structure-property relationships.

    PubMed

    Luilo, G B; Cabaniss, S E

    2011-10-01

    Chlorinating water which contains dissolved organic matter (DOM) produces disinfection byproducts, the majority of unknown structure. Hence, the total organic halide (TOX) measurement is used as a surrogate for toxic disinfection byproducts. This work derives a robust quantitative structure-property relationship (QSPR) for predicting the TOX formation potential of model compounds. Literature data for 49 compounds were used to train the QSPR, expressed in moles of chlorine per mole of compound (mol-Cl/mol-Cp). The resulting QSPR has four descriptors, a calibration R(2) of 0.72, and a standard deviation of estimation of 0.43 mol-Cl/mol-Cp. Internal and external validation indicate that the QSPR has good predictive power and low bias (<1%). Applying this QSPR to predict TOX formation by DOM surrogates - tannic acid, two model fulvic acids and two agent-based model assemblages - gave a predicted TOX range of 136-184 µg-Cl/mg-C, consistent with experimental data for DOM, which ranged from 78 to 192 µg-Cl/mg-C. However, the limited structural variation in the training data may limit QSPR applicability; studies of more sulfur-containing compounds, heterocyclic compounds and high molecular weight compounds could lead to a more widely applicable QSPR.
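
    For readers unfamiliar with the modeling workflow, the following sketch trains and internally validates a four-descriptor linear QSPR in the same spirit; the descriptor matrix and responses are random placeholders, not the 49-compound literature dataset used by the authors.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# QSPR-style sketch: 4 molecular descriptors -> TOX formation potential (mol-Cl/mol-Cp).
# The descriptor matrix and responses are random placeholders, not the paper's data.
rng = np.random.default_rng(0)
n_compounds, n_descriptors = 49, 4
X = rng.normal(size=(n_compounds, n_descriptors))
true_coefs = np.array([0.8, -0.5, 0.3, 0.2])
y = X @ true_coefs + rng.normal(0, 0.4, n_compounds)

model = LinearRegression().fit(X, y)
r2_cal = model.score(X, y)                                   # calibration R^2

# Leave-one-out cross-validation as a simple internal-validation analogue (Q^2)
y_loo = cross_val_predict(model, X, y, cv=LeaveOneOut())
q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)

print(f"calibration R^2 = {r2_cal:.2f}, LOO Q^2 = {q2:.2f}")
```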

  16. Quantitative Prediction of Computational Quality (so the S and C Folks will Accept it)

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.; Luckring, James M.; Morrison, Joseph H.

    2004-01-01

    Our choice of title may seem strange but we mean each word. In this talk, we are not going to be concerned with computations made "after the fact", i.e. those for which data are available and which are being conducted for explanation and insight. Here we are interested in preventing S&C design problems by finding them through computation before data are available. For such a computation to have any credibility with those who absorb the risk, it is necessary to quantitatively PREDICT the quality of the computational results.

  17. Theoretical framework for quantitatively estimating ultrasound beam intensities using infrared thermography.

    PubMed

    Myers, Matthew R; Giridhar, Dushyanth

    2011-06-01

    In the characterization of high-intensity focused ultrasound (HIFU) systems, it is desirable to know the intensity field within a tissue phantom. Infrared (IR) thermography is a potentially useful method for inferring this intensity field from the heating pattern within the phantom. However, IR measurements require an air layer between the phantom and the camera, making inferences about the thermal field in the absence of the air complicated. For example, convection currents can arise in the air layer and distort the measurements relative to the phantom-only situation. Quantitative predictions of intensity fields based upon IR temperature data are also complicated by axial and radial diffusion of heat. In this paper, mathematical expressions are derived for use with IR temperature data acquired at times long enough that noise is a relatively small fraction of the temperature trace, but short enough that convection currents have not yet developed. The relations were applied to simulated IR data sets derived from computed pressure and temperature fields. The simulation was performed in a finite-element geometry involving a HIFU transducer sonicating upward in a phantom toward an air interface, with an IR camera mounted atop an air layer, looking down at the heated interface. It was found that, when compared to the intensity field determined directly from acoustic propagation simulations, intensity profiles could be obtained from the simulated IR temperature data with an accuracy of better than 10%, at pre-focal, focal, and post-focal locations. © 2011 Acoustical Society of America

  18. Quantitative interpretation of heavy ion effects: Comparison of different systems and endpoints

    NASA Astrophysics Data System (ADS)

    Kiefer, J.

    For a quantitative interpretation of biological heavy ion action the following parameters have to be taken into account: variations of energy depositions in microscopical sites, the dependence of primary lesion formation on local energy density and changes in repairability. They can be studied in objects of different size and with different sensitivities. Results on survival and mutation induction in yeast and in mammalian cells will be compared with theoretical predictions. It is shown that shouldered survival curves of diploid yeast can be adequately described if the final slope is adjusted according to the varying production of primary lesions. This is not the case for mammalian cells where the experiments show a rapid loss of the shoulder with LET, contrary to theoretical expectations. This behaviour is interpreted to mean that the repairability of heavy ion lesions is different in the two systems. Mutation induction is theoretically expected to decrease with higher LET. This is found in yeast but not in mammalian cells where it actually increases. These results suggest a higher rate of misrepair in mammalian cells.

  19. Prediction of trabecular bone qualitative properties using scanning quantitative ultrasound

    PubMed Central

    Qin, Yi-Xian; Lin, Wei; Mittra, Erik; Xia, Yi; Cheng, Jiqi; Judex, Stefan; Rubin, Clint; Müller, Ralph

    2012-01-01

    Microgravity-induced bone loss represents a critical health problem for astronauts, occurring particularly in the weight-bearing skeleton and leading to osteopenia and increased fracture risk. The lack of a suitable evaluation modality makes it difficult to monitor skeletal status during long-term space missions and increases the potential risk of complications. Such disuse osteopenia and osteoporosis compromise trabecular bone density as well as architectural and mechanical properties. While X-ray based imaging would not be practical in space, quantitative ultrasound may provide advantages for characterizing bone density and strength through wave propagation in the complex trabecular structure. This study used a scanning confocal acoustic diagnostic and navigation system (SCAN) to evaluate trabecular bone quality in 60 cubic trabecular samples harvested from adult sheep. Ultrasound-image-based SCAN measurements of structural and strength properties were validated by μCT and compressive mechanical testing. The results indicated a moderately strong negative correlation between broadband ultrasonic attenuation (BUA) and μCT-determined bone volume fraction (BV/TV, R2=0.53). Strong correlations were observed between ultrasound velocity (UV) and bone's mechanical strength and structural parameters, i.e., bulk Young's modulus (R2=0.67) and BV/TV (R2=0.85). The predictions for bone density and mechanical strength were significantly improved by using a linear combination of both BUA and UV, yielding R2=0.92 for BV/TV and R2=0.71 for bulk Young's modulus. These results imply that quantitative ultrasound can characterize trabecular structural and mechanical properties through measurements of particular ultrasound parameters, and can potentially provide an excellent estimate of bone's structural integrity. PMID:23976803
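
    The improvement from combining BUA and UV is essentially a multiple-regression effect; the sketch below illustrates it on synthetic data whose correlation structure loosely mimics the reported trends (the coefficients, ranges, and noise levels are assumptions, not the sheep data).

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Sketch: predicting bone volume fraction (BV/TV) from BUA alone, UV alone,
# and a linear combination of both. All data below are synthetic stand-ins.
rng = np.random.default_rng(0)
n = 60                                         # number of trabecular cubes
bvtv = rng.uniform(0.1, 0.4, n)                # "true" bone volume fraction
bua = 80 - 120 * bvtv + rng.normal(0, 5, n)    # BUA assumed negatively related to BV/TV
uv = 1500 + 900 * bvtv + rng.normal(0, 30, n)  # UV assumed positively related to BV/TV

for name, X in [("BUA", bua[:, None]),
                ("UV", uv[:, None]),
                ("BUA+UV", np.column_stack([bua, uv]))]:
    r2 = LinearRegression().fit(X, bvtv).score(X, bvtv)
    print(f"{name:7s} R^2 = {r2:.2f}")
```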

  20. Toxicity of ionic liquids: database and prediction via quantitative structure-activity relationship method.

    PubMed

    Zhao, Yongsheng; Zhao, Jihong; Huang, Ying; Zhou, Qing; Zhang, Xiangping; Zhang, Suojiang

    2014-08-15

    A comprehensive database on the toxicity of ionic liquids (ILs) is established. The database includes over 4000 pieces of data. Based on the database, the relationship between an IL's structure and its toxicity has been analyzed qualitatively. Furthermore, a quantitative structure-activity relationship (QSAR) model is developed to predict the toxicities (EC50 values) of various ILs toward the leukemia rat cell line IPC-81. Four parameters selected by the heuristic method (HM) are used to perform multiple linear regression (MLR) and support vector machine (SVM) studies. For the training sets, the squared correlation coefficients (R(2)) of the two QSAR models are 0.918 and 0.959, and the root mean square errors (RMSE) are 0.258 and 0.179, respectively. For the test sets, the prediction R(2) and RMSE are 0.892 and 0.329 for the MLR model and 0.958 and 0.234 for the SVM model, respectively. The nonlinear model developed with the SVM algorithm substantially outperformed the MLR model, indicating that the SVM model is more reliable for predicting the toxicity of ILs. This study shows that increasing the relative number of O atoms in a molecule leads to a decrease in the toxicity of ILs. Copyright © 2014 Elsevier B.V. All rights reserved.
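
    The MLR-versus-SVM comparison described above can be reproduced in outline as follows; the descriptor matrix, the nonlinear response, and the SVM hyperparameters are synthetic assumptions rather than the paper's IL dataset or fitted models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score

# Sketch: MLR vs SVM for a QSAR-style regression (synthetic descriptors, not IL data).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                       # four "HM-selected" descriptors (assumed)
y = X @ np.array([1.0, -0.6, 0.4, 0.2]) + 0.5 * np.sin(2 * X[:, 0]) + rng.normal(0, 0.3, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, model in [("MLR", LinearRegression()),
                    ("SVM", SVR(kernel="rbf", C=10.0, epsilon=0.05))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: test R^2 = {r2_score(y_te, pred):.3f}, RMSE = {rmse:.3f}")
```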

  1. A theoretical study of alpha star populations in loaded nuclear emulsions

    USGS Publications Warehouse

    Senftle, F.E.; Farley, T.A.; Stieff, L.R.

    1954-01-01

    This theoretical study of the alpha star populations in loaded emulsions was undertaken in an effort to find a quantitative method for the analysis of less than microgram amounts of thorium in the presence of larger amounts of uranium. Analytical expressions for each type of star from each of the significantly contributing members of the uranium and thorium series, as well as summation formulas for the whole series, have been computed. The analysis for thorium may be made by determining the abundance of five-branched stars in a loaded nuclear emulsion and comparing observed and predicted star populations. The comparison may also be used to check the half-lives of several members of the uranium and thorium series. © 1954.

  2. The role of quantitative estrogen receptor status in predicting tumor response at surgery in breast cancer patients treated with neoadjuvant chemotherapy.

    PubMed

    Raphael, Jacques; Gandhi, Sonal; Li, Nim; Lu, Fang-I; Trudeau, Maureen

    2017-07-01

    Estrogen receptor (ER)-negative (-) breast cancer (BC) patients have better tumor response rates than ER-positive (+) patients after neoadjuvant chemotherapy (NCT). We conducted a retrospective review using the institutional database "Biomatrix" to assess the value of quantitative ER status in predicting tumor response at surgery and to identify potential predictors of survival outcomes. Univariate followed by multivariable regression analyses were conducted to assess the association between quantitative ER and tumor response, assessed as tumor size reduction and pathologic complete response (pCR). Predictors of recurrence-free survival (RFS) were identified using a Cox proportional hazards model (CPH). A log-rank test was used to compare RFS between groups if a significant predictor was identified. A total of 304 patients were included, with a median follow-up of 43.3 months (Q1-Q3 28.7-61.1) and a mean age of 49.7 years (SD 10.9). Quantitative ER was inversely associated with tumor size reduction and pCR (OR 0.99, 95% CI 0.99-1.00, p = 0.027 and OR 0.98, 95% CI 0.97-0.99, p < 0.0001, respectively). Cut-offs of 60% and 80% best predicted the association with tumor size reduction and pCR, respectively. pCR was shown to be an independent predictor of RFS (HR 0.17, 95% CI 0.07-0.43, p = 0.0002) in all patients. At 5 years, 93% of patients with pCR and 72% of patients with residual tumor were recurrence-free (p = 0.0012). Quantitative ER status is inversely associated with tumor response in BC patients treated with NCT. Cut-offs of 60% and 80% best predict the association with tumor size reduction and pCR, respectively. Therefore, patients with an ER status higher than the cut-off might benefit from a neoadjuvant endocrine therapy approach. Patients with pCR had better survival outcomes independently of their tumor phenotype. Further prospective studies are needed to validate the clinical utility of quantitative ER as a predictive marker of tumor response.
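
    One common way to derive percentage cut-offs of this kind is to maximize the Youden index along an ROC curve; the sketch below illustrates that generic procedure on synthetic data with an assumed inverse ER-pCR relationship. It is not the study's method or dataset and does not reproduce the 60%/80% values.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Sketch: choosing a cut-off for a quantitative marker (e.g., ER %) against a binary
# outcome (e.g., pCR) with the Youden index. Data below are synthetic placeholders.
rng = np.random.default_rng(0)
n = 300
er_percent = rng.uniform(0, 100, n)
# Assume lower ER -> higher probability of pCR (inverse association, as in the abstract)
p_pcr = 1 / (1 + np.exp(0.05 * (er_percent - 40)))
pcr = rng.binomial(1, p_pcr)

# ROC for "low ER predicts pCR": score each patient with the negated marker value
fpr, tpr, thresholds = roc_curve(pcr, -er_percent)
youden = tpr - fpr
best = np.argmax(youden)
print(f"suggested cut-off: ER <= {-thresholds[best]:.0f}%  (Youden J = {youden[best]:.2f})")
```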

  3. Genome-Assisted Prediction of Quantitative Traits Using the R Package sommer.

    PubMed

    Covarrubias-Pazaran, Giovanny

    2016-01-01

    Most traits of agronomic importance are quantitative in nature, and genetic markers have been used for decades to dissect such traits. Recently, genomic selection has earned attention as next generation sequencing technologies became feasible for major and minor crops. Mixed models have become a key tool for fitting genomic selection models, but most current genomic selection software can only include a single variance component other than the error, making hybrid prediction using additive, dominance and epistatic effects infeasible for species displaying heterotic effects. Moreover, likelihood-based software for fitting mixed models with multiple random effects that allows the user to specify the variance-covariance structure of random effects has not been fully exploited. A new open-source R package called sommer is presented to facilitate the use of mixed models for genomic selection and hybrid prediction purposes using more than one variance component and allowing specification of covariance structures. The use of sommer for genomic prediction is demonstrated through several examples using maize and wheat genotypic and phenotypic data. At its core, the program contains three algorithms for estimating variance components: Average Information (AI), Expectation-Maximization (EM) and Efficient Mixed Model Association (EMMA). Kernels for calculating the additive, dominance and epistatic relationship matrices are included, along with other useful functions for genomic analysis. Results from sommer were comparable to other software, but the analysis was faster than Bayesian counterparts in the magnitude of hours to days. In addition, ability to deal with missing data, combined with greater flexibility and speed than other REML-based software, was achieved by putting together some of the most efficient algorithms to fit models in a gentle environment such as R.

  4. Proposal for a quantitative index of flood disasters.

    PubMed

    Feng, Lihua; Luo, Gaoyuan

    2010-07-01

    Drawing on calculations of wind scale and earthquake magnitude, this paper develops a new quantitative method for measuring flood magnitude and disaster intensity. Flood magnitude is the quantitative index that describes the scale of a flood; the flood's disaster intensity is the quantitative index describing the losses caused. Both indices have numerous theoretical and practical advantages with definable concepts and simple applications, which lend them key practical significance.

  5. Experimentally validated quantitative linear model for the device physics of elastomeric microfluidic valves

    NASA Astrophysics Data System (ADS)

    Kartalov, Emil P.; Scherer, Axel; Quake, Stephen R.; Taylor, Clive R.; Anderson, W. French

    2007-03-01

    A systematic experimental study and theoretical modeling of the device physics of polydimethylsiloxane "pushdown" microfluidic valves are presented. The phase space is charted by 1587 dimension combinations and encompasses 45-295μm lateral dimensions, 16-39μm membrane thickness, and 1-28psi closing pressure. Three linear models are developed and tested against the empirical data, and then combined into a fourth-power-polynomial superposition. The experimentally validated final model offers a useful quantitative prediction for a valve's properties as a function of its dimensions. Typical valves (80-150μm width) are shown to behave like thin springs.

  6. Bridging the gap between theoretical ecology and real ecosystems: modeling invertebrate community composition in streams.

    PubMed

    Schuwirth, Nele; Reichert, Peter

    2013-02-01

    For the first time, we combine concepts of theoretical food web modeling, the metabolic theory of ecology, and ecological stoichiometry with the use of functional trait databases to predict the coexistence of invertebrate taxa in streams. We developed a mechanistic model that describes growth, death, and respiration of different taxa dependent on various environmental influence factors to estimate survival or extinction. Parameter and input uncertainty is propagated to model results. Such a model is needed to test our current quantitative understanding of ecosystem structure and function and to predict effects of anthropogenic impacts and restoration efforts. The model was tested using macroinvertebrate monitoring data from a catchment of the Swiss Plateau. Even without fitting model parameters, the model is able to represent key patterns of the coexistence structure of invertebrates at sites varying in external conditions (litter input, shading, water quality). This confirms the suitability of the model concept. More comprehensive testing and resulting model adaptations will further increase the predictive accuracy of the model.

  7. Preoperative Cerebral Oxygen Extraction Fraction Imaging Generated from 7T MR Quantitative Susceptibility Mapping Predicts Development of Cerebral Hyperperfusion following Carotid Endarterectomy.

    PubMed

    Nomura, J-I; Uwano, I; Sasaki, M; Kudo, K; Yamashita, F; Ito, K; Fujiwara, S; Kobayashi, M; Ogasawara, K

    2017-12-01

    Preoperative hemodynamic impairment in the affected cerebral hemisphere is associated with the development of cerebral hyperperfusion following carotid endarterectomy. Cerebral oxygen extraction fraction images generated from 7T MR quantitative susceptibility mapping correlate with oxygen extraction fraction images on positron-emission tomography. The present study aimed to determine whether preoperative oxygen extraction fraction imaging generated from 7T MR quantitative susceptibility mapping could identify patients at risk for cerebral hyperperfusion following carotid endarterectomy. Seventy-seven patients with unilateral internal carotid artery stenosis (≥70%) underwent preoperative 3D T2*-weighted imaging using a multiple dipole-inversion algorithm with a 7T MR imager. Quantitative susceptibility mapping images were then obtained, and oxygen extraction fraction maps were generated. Quantitative brain perfusion single-photon emission CT was also performed before and immediately after carotid endarterectomy. ROIs were automatically placed in the bilateral middle cerebral artery territories in all images using a 3D stereotactic ROI template, and affected-to-contralateral ratios in the ROIs were calculated on quantitative susceptibility mapping-oxygen extraction fraction images. Ten patients (13%) showed post-carotid endarterectomy hyperperfusion (cerebral blood flow increases of ≥100% compared with preoperative values in the ROIs on brain perfusion SPECT). Multivariate analysis showed that a high quantitative susceptibility mapping-oxygen extraction fraction ratio was significantly associated with the development of post-carotid endarterectomy hyperperfusion (95% confidence interval, 33.5-249.7; P = .002). Sensitivity, specificity, and positive- and negative-predictive values of the quantitative susceptibility mapping-oxygen extraction fraction ratio for the prediction of the development of post-carotid endarterectomy hyperperfusion were 90%, 84%, 45%, and 98

  8. Quantitative Predictive Models for Systemic Toxicity (SOT)

    EPA Science Inventory

    Models to identify systemic and specific target organ toxicity were developed to help transition the field of toxicology towards computational models. By leveraging multiple data sources to incorporate read-across and machine learning approaches, a quantitative model of systemic ...

  9. Establishment of quantitative retention-activity model by optimized microemulsion liquid chromatography.

    PubMed

    Xu, Liyuan; Gao, Haoshi; Li, Liangxing; Li, Yinnong; Wang, Liuyun; Gao, Chongkai; Li, Ning

    2016-12-23

    The effective permeability coefficient is of theoretical and practical importance in evaluating the bioavailability of drug candidates. However, most methods currently used to measure this coefficient are expensive and time-consuming. In this paper, we addressed these problems by proposing a new measurement method based on microemulsion liquid chromatography. First, the parallel artificial membrane permeability assay (PAMPA) model was used to determine the effective permeability of drugs so that quantitative retention-activity relationships could be established, which were then used to optimize the microemulsion liquid chromatography. The most effective microemulsion system used a mobile phase of 6.0% (w/w) Brij35, 6.6% (w/w) butanol, 0.8% (w/w) octanol, and 86.6% (w/w) phosphate buffer (pH 7.4). Next, support vector machine and back-propagation neural network models were employed to develop a quantitative retention-activity relationship model associated with the optimal microemulsion system and to improve prediction ability. Finally, the correlation between experimental and predicted values was computed to verify the performance of the optimal model. The results indicate that microemulsion liquid chromatography can serve as a possible alternative to the PAMPA method for high-throughput permeability determination and simulation of biological processes. Copyright © 2016. Published by Elsevier B.V.

  10. Quantitative sonoelastography for the in vivo assessment of skeletal muscle viscoelasticity

    NASA Astrophysics Data System (ADS)

    Hoyt, Kenneth; Kneezel, Timothy; Castaneda, Benjamin; Parker, Kevin J.

    2008-08-01

    A novel quantitative sonoelastography technique for assessing the viscoelastic properties of skeletal muscle tissue was developed. Slowly propagating shear wave interference patterns (termed crawling waves) were generated using a two-source configuration vibrating normal to the surface. Theoretical models predict crawling wave displacement fields, which were validated through phantom studies. In experiments, a viscoelastic model was fit to dispersive shear wave speed sonoelastographic data using nonlinear least-squares techniques to determine frequency-independent shear modulus and viscosity estimates. Shear modulus estimates derived using the viscoelastic model were in agreement with that obtained by mechanical testing on phantom samples. Preliminary sonoelastographic data acquired in healthy human skeletal muscles confirm that high-quality quantitative elasticity data can be acquired in vivo. Studies on relaxed muscle indicate discernible differences in both shear modulus and viscosity estimates between different skeletal muscle groups. Investigations into the dynamic viscoelastic properties of (healthy) human skeletal muscles revealed that voluntarily contracted muscles exhibit considerable increases in both shear modulus and viscosity estimates as compared to the relaxed state. Overall, preliminary results are encouraging and quantitative sonoelastography may prove clinically feasible for in vivo characterization of the dynamic viscoelastic properties of human skeletal muscle.
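
    A typical implementation of the dispersion-fitting step uses the Voigt-model shear wave speed relation with nonlinear least squares; the sketch below assumes that relation, a tissue density of 1000 kg/m^3, and synthetic "measured" speeds, so the fitted numbers are purely illustrative and not the paper's muscle or phantom results.

```python
import numpy as np
from scipy.optimize import curve_fit

RHO = 1000.0  # assumed tissue density in kg/m^3

def voigt_speed(freq_hz, mu, eta):
    """Shear wave speed vs frequency for a Voigt material (mu in Pa, eta in Pa*s)."""
    w = 2 * np.pi * freq_hz
    num = 2 * (mu ** 2 + (w * eta) ** 2)
    den = RHO * (mu + np.sqrt(mu ** 2 + (w * eta) ** 2))
    return np.sqrt(num / den)

# Synthetic dispersion data standing in for sonoelastographic shear wave speed measurements
rng = np.random.default_rng(0)
freqs = np.linspace(100, 400, 13)                      # vibration frequencies in Hz (assumed)
c_meas = voigt_speed(freqs, 12e3, 6.0) + rng.normal(0, 0.05, freqs.size)

# Nonlinear least-squares fit for frequency-independent shear modulus and viscosity
(mu_fit, eta_fit), _ = curve_fit(voigt_speed, freqs, c_meas, p0=(5e3, 1.0))
print(f"shear modulus ~ {mu_fit / 1e3:.1f} kPa, shear viscosity ~ {eta_fit:.1f} Pa·s")
```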

  11. Quantitative genetic methods depending on the nature of the phenotypic trait.

    PubMed

    de Villemereuil, Pierre

    2018-01-24

    A consequence of the assumptions of the infinitesimal model, one of the most important theoretical foundations of quantitative genetics, is that phenotypic traits are predicted to be most often normally distributed (so-called Gaussian traits). But phenotypic traits, especially those interesting for evolutionary biology, might be shaped according to very diverse distributions. Here, I show how quantitative genetics tools have been extended to account for a wider diversity of phenotypic traits using first the threshold model and then more recently using generalized linear mixed models. I explore the assumptions behind these models and how they can be used to study the genetics of non-Gaussian complex traits. I also comment on three recent methodological advances in quantitative genetics that widen our ability to study new kinds of traits: the use of "modular" hierarchical modeling (e.g., to study survival in the context of capture-recapture approaches for wild populations); the use of aster models to study a set of traits with conditional relationships (e.g., life-history traits); and, finally, the study of high-dimensional traits, such as gene expression. © 2018 New York Academy of Sciences.

  12. Prediction of neonatal respiratory morbidity by quantitative ultrasound lung texture analysis: a multicenter study.

    PubMed

    Palacio, Montse; Bonet-Carne, Elisenda; Cobo, Teresa; Perez-Moreno, Alvaro; Sabrià, Joan; Richter, Jute; Kacerovsky, Marian; Jacobsson, Bo; García-Posada, Raúl A; Bugatto, Fernando; Santisteve, Ramon; Vives, Àngels; Parra-Cordero, Mauro; Hernandez-Andrade, Edgar; Bartha, José Luis; Carretero-Lucena, Pilar; Tan, Kai Lit; Cruz-Martínez, Rogelio; Burke, Minke; Vavilala, Suseela; Iruretagoyena, Igor; Delgado, Juan Luis; Schenone, Mauro; Vilanova, Josep; Botet, Francesc; Yeo, George S H; Hyett, Jon; Deprest, Jan; Romero, Roberto; Gratacos, Eduard

    2017-08-01

    Prediction of neonatal respiratory morbidity may be useful to plan delivery in complicated pregnancies. The limited predictive performance of the current diagnostic tests together with the risks of an invasive procedure restricts the use of fetal lung maturity assessment. The objective of the study was to evaluate the performance of quantitative ultrasound texture analysis of the fetal lung (quantusFLM) to predict neonatal respiratory morbidity in preterm and early-term (<39.0 weeks) deliveries. This was a prospective multicenter study conducted in 20 centers worldwide. Fetal lung ultrasound images were obtained at 25.0-38.6 weeks of gestation within 48 hours of delivery, stored in Digital Imaging and Communication in Medicine format, and analyzed with quantusFLM. Physicians were blinded to the analysis. At delivery, perinatal outcomes and the occurrence of neonatal respiratory morbidity, defined as either respiratory distress syndrome or transient tachypnea of the newborn, were registered. The performance of the ultrasound texture analysis test to predict neonatal respiratory morbidity was evaluated. A total of 883 images were collected, but 17.3% were discarded because of poor image quality or exclusion criteria, leaving 730 observations for the final analysis. The prevalence of neonatal respiratory morbidity was 13.8% (101 of 730). The quantusFLM predicted neonatal respiratory morbidity with a sensitivity, specificity, positive and negative predictive values of 74.3% (75 of 101), 88.6% (557 of 629), 51.0% (75 of 147), and 95.5% (557 of 583), respectively. Accuracy was 86.5% (632 of 730) and positive and negative likelihood ratios were 6.5 and 0.3, respectively. The quantusFLM predicted neonatal respiratory morbidity with an accuracy similar to that previously reported for other tests with the advantage of being a noninvasive technique. Copyright © 2017. Published by Elsevier Inc.
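
    The diagnostic metrics quoted above follow directly from a 2x2 table; as a quick worked check, the snippet below recomputes them from the counts given in the abstract (75 of 101 affected neonates detected, 557 of 629 unaffected correctly classified).

```python
# Recomputing the reported diagnostic metrics from the abstract's 2x2 counts.
tp, fn = 75, 101 - 75          # respiratory morbidity cases: detected / missed
tn, fp = 557, 629 - 557        # unaffected neonates: correctly negative / false positives

sensitivity = tp / (tp + fn)                 # 75/101
specificity = tn / (tn + fp)                 # 557/629
ppv = tp / (tp + fp)                         # 75/147
npv = tn / (tn + fn)                         # 557/583
accuracy = (tp + tn) / (tp + tn + fp + fn)   # 632/730
lr_pos = sensitivity / (1 - specificity)
lr_neg = (1 - sensitivity) / specificity

print(f"sens {sensitivity:.1%}, spec {specificity:.1%}, PPV {ppv:.1%}, NPV {npv:.1%}")
print(f"accuracy {accuracy:.1%}, LR+ {lr_pos:.1f}, LR- {lr_neg:.1f}")
```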

  13. Theoretical predictions for α-decay chains of 290-298Og (Z = 118) isotopes using a finite-range nucleon-nucleon interaction

    NASA Astrophysics Data System (ADS)

    Ismail, M.; Adel, A.

    2018-04-01

    The α-decay half-lives of the recently synthesized superheavy nuclei (SHN) are investigated by employing the density dependent cluster model. A realistic nucleon-nucleon (NN) interaction with a finite-range exchange part is used to calculate the microscopic α-nucleus potential in the well-established double-folding model. The calculated potential is then implemented to find both the assault frequency and the penetration probability of the α particle by means of the Wentzel-Kramers-Brillouin (WKB) approximation in combination with the Bohr-Sommerfeld quantization condition. The calculated values of α-decay half-lives of the recently synthesized Og isotopes and their decay products are in good agreement with the experimental data. Moreover, the calculated values of α-decay half-lives have been compared with those values evaluated using other theoretical models, and it was found that our theoretical values match well with their counterparts. The competition between α decay and spontaneous fission is investigated and predictions for possible decay modes for the unknown nuclei 290-298Og (Z = 118) are presented. We studied the behavior of the α-decay half-lives of Og isotopes and their decay products as a function of the mass number of the parent nuclei. We found that the behavior of the curves is governed by proton and neutron magic numbers found from previous studies. The proton numbers Z = 114, 116, 108, 106 and the neutron numbers N = 172, 164, 162, 158 show some magic character. We hope that the theoretical prediction of α-decay chains provides a new perspective to experimentalists.
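
    For orientation, half-life calculations of this kind typically rest on the standard WKB relations shown below in generic form; the specific potential, quantization condition, and assault frequency used by the authors are as described in the paper.

```latex
% Generic WKB ingredients for an alpha-decay half-life estimate:
% Q is the decay energy, mu the alpha-daughter reduced mass, V(r) the alpha-nucleus
% potential, r_2 and r_3 the outer classical turning points (V(r) = Q), and nu the
% assault frequency obtained from the quantization condition.
\begin{align}
  P_{\alpha} &= \exp\!\left[-\frac{2}{\hbar}\int_{r_2}^{r_3}\sqrt{2\mu\,\bigl(V(r)-Q\bigr)}\,\mathrm{d}r\right], \\
  T_{1/2} &= \frac{\ln 2}{\nu\,P_{\alpha}} .
\end{align}
```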

  14. Qualitative and quantitative prediction of volatile compounds from initial amino acid profiles in Korean rice wine (makgeolli) model.

    PubMed

    Kang, Bo-Sik; Lee, Jang-Eun; Park, Hyun-Jin

    2014-06-01

    In a Korean rice wine (makgeolli) model, we aimed to develop a prediction model capable of establishing a quantitative relationship between the initial amino acids in makgeolli mash and the major aromatic compounds, such as fusel alcohols, their acetate esters, and ethyl esters of fatty acids, in the brewed makgeolli. A mass-spectrometry-based electronic nose (MS-EN) was used to qualitatively discriminate between makgeollis made from mashes with different amino acid compositions. Following this measurement, headspace solid-phase microextraction coupled to gas chromatography-mass spectrometry (HS-SPME GC-MS) combined with partial least-squares regression (PLSR) was employed to quantitatively correlate the amino acid composition of makgeolli mash with the major aromatic compounds evolved during fermentation. In the qualitative prediction with MS-EN analysis, the makgeollis were well discriminated according to the volatile compounds derived from the amino acids of the mash. Twenty-seven ion fragments with mass-to-charge ratios (m/z) of 55 to 98 amu were responsible for the discrimination. In the GC-MS/PLSR approach, the coefficients of determination (R(2)) between the initial amino acids of the mash and most fusel compounds ranged from 0.77 to 0.94, indicating good correlation, except for 2-phenylethanol (R(2) = 0.21), whereas R(2) values for ethyl esters of medium-chain fatty acids (MCFAs), including ethyl caproate, ethyl caprylate, and ethyl caprate, ranged from 0.17 to 0.40, indicating poor correlation. Amino acids are known to affect aroma in alcoholic beverages. In this study, we demonstrated that an electronic nose qualitatively differentiated Korean rice wines (makgeollis) by the volatile compounds evolved from amino acids with rapidity and reproducibility and, subsequently, that a quantitative correlation with acceptable R2 between amino acids and fusel compounds could be established via HS-SPME GC-MS combined with partial least
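
    A minimal version of the PLSR step can be sketched as follows; the amino acid matrix, the number of latent components, and the volatile responses are synthetic placeholders rather than the makgeolli dataset, and the variable names are hypothetical.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

# PLSR sketch: initial amino acid profile -> fusel compound levels (synthetic stand-in data).
rng = np.random.default_rng(0)
n_mashes, n_amino_acids = 40, 17
X = rng.normal(size=(n_mashes, n_amino_acids))          # scaled amino acid concentrations
W = rng.normal(size=(n_amino_acids, 3))                 # hidden linear map to 3 volatiles
Y = X @ W + rng.normal(0, 0.5, size=(n_mashes, 3))      # e.g., three fusel compounds (assumed)

pls = PLSRegression(n_components=4)
Y_cv = cross_val_predict(pls, X, Y, cv=5)               # cross-validated predictions
for j, name in enumerate(["volatile_1", "volatile_2", "volatile_3"]):
    print(f"{name}: cross-validated R^2 = {r2_score(Y[:, j], Y_cv[:, j]):.2f}")
```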

  15. Symbolic interactionism as a theoretical perspective for multiple method research.

    PubMed

    Benzies, K M; Allen, M N

    2001-02-01

    Qualitative and quantitative research rely on different epistemological assumptions about the nature of knowledge. However, the majority of nurse researchers who use multiple method designs do not address the problem of differing theoretical perspectives. Traditionally, symbolic interactionism has been viewed as one perspective underpinning qualitative research, but it is also the basis for quantitative studies. Rooted in social psychology, symbolic interactionism has a rich intellectual heritage that spans more than a century. Underlying symbolic interactionism is the major assumption that individuals act on the basis of the meaning that things have for them. The purpose of this paper is to present symbolic interactionism as a theoretical perspective for multiple method designs with the aim of expanding the dialogue about new methodologies. Symbolic interactionism can serve as a theoretical perspective for conceptually clear and soundly implemented multiple method research that will expand the understanding of human health behaviour.

  16. Novel Uses of In Vitro Data to Develop Quantitative Biological Activity Relationship Models for in Vivo Carcinogenicity Prediction.

    PubMed

    Pradeep, Prachi; Povinelli, Richard J; Merrill, Stephen J; Bozdag, Serdar; Sem, Daniel S

    2015-04-01

    The availability of large in vitro datasets enables better insight into the mode of action of chemicals and better identification of potential mechanism(s) of toxicity. Several studies have shown that not all in vitro assays can contribute as equal predictors of in vivo carcinogenicity for development of hybrid Quantitative Structure Activity Relationship (QSAR) models. We propose two novel approaches for the use of mechanistically relevant in vitro assay data in the identification of relevant biological descriptors and development of Quantitative Biological Activity Relationship (QBAR) models for carcinogenicity prediction. We demonstrate that in vitro assay data can be used to develop QBAR models for in vivo carcinogenicity prediction via two case studies corroborated with firm scientific rationale. The case studies demonstrate the similarities between QBAR and QSAR modeling in: (i) the selection of relevant descriptors to be used in the machine learning algorithm, and (ii) the development of a computational model that maps chemical or biological descriptors to a toxic endpoint. The results of both the case studies show: (i) improved accuracy and sensitivity which is especially desirable under regulatory requirements, and (ii) overall adherence with the OECD/REACH guidelines. Such mechanism based models can be used along with QSAR models for prediction of mechanistically complex toxic endpoints. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures

    PubMed Central

    Chen, Yun; Yang, Hui

    2016-01-01

    In the era of big data, there is increasing interest in clustering variables to minimize data redundancy and maximize variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges to the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information-theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering. PMID:27966581
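
    The core idea, measuring pairwise dependence with mutual information and then clustering the variables, can be sketched as below; plain hierarchical clustering is used here purely as a simple stand-in for the Dirichlet process mixture step, and the binning, synthetic data, and cluster count are assumptions.

```python
import numpy as np
from sklearn.metrics import mutual_info_score
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Sketch: mutual-information-based variable clustering on synthetic data.
rng = np.random.default_rng(0)
n, p = 2000, 6
z1, z2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([z1, np.sin(z1), z1 ** 2,            # one nonlinearly related group
                     z2, np.tanh(z2), z2 ** 3])          # a second nonlinearly related group
X += 0.1 * rng.normal(size=X.shape)

def mi(a, b, bins=16):
    """Mutual information between two continuous variables via histogram binning."""
    c_ab = np.histogram2d(a, b, bins=bins)[0]
    return mutual_info_score(None, None, contingency=c_ab)

M = np.array([[mi(X[:, i], X[:, j]) for j in range(p)] for i in range(p)])
D = M.max() - M                      # turn the similarity matrix into a dissimilarity
np.fill_diagonal(D, 0.0)
labels = fcluster(linkage(squareform(D, checks=False), method="average"),
                  t=2, criterion="maxclust")
print("cluster labels:", labels)     # variables derived from z1 vs z2 should separate
```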

  18. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures.

    PubMed

    Chen, Yun; Yang, Hui

    2016-12-14

    In the era of big data, there is increasing interest in clustering variables to minimize data redundancy and maximize variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges to the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information-theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering.

  19. Predicting Protein Function by Genomic Context: Quantitative Evaluation and Qualitative Inferences

    PubMed Central

    Huynen, Martijn; Snel, Berend; Lathe, Warren; Bork, Peer

    2000-01-01

    Various new methods have been proposed to predict functional interactions between proteins based on the genomic context of their genes. The types of genomic context that they use are Type I: the fusion of genes; Type II: the conservation of gene-order or co-occurrence of genes in potential operons; and Type III: the co-occurrence of genes across genomes (phylogenetic profiles). Here we compare these types for their coverage, their correlations with various types of functional interaction, and their overlap with homology-based function assignment. We apply the methods to Mycoplasma genitalium, the standard benchmarking genome in computational and experimental genomics. Quantitatively, conservation of gene order is the technique with the highest coverage, applying to 37% of the genes. By combining gene order conservation with gene fusion (6%), the co-occurrence of genes in operons in absence of gene order conservation (8%), and the co-occurrence of genes across genomes (11%), significant context information can be obtained for 50% of the genes (the categories overlap). Qualitatively, we observe that the functional interactions between genes are stronger as the requirements for physical neighborhood on the genome are more stringent, while the fraction of potential false positives decreases. Moreover, only in cases in which gene order is conserved in a substantial fraction of the genomes, in this case six out of twenty-five, does a single type of functional interaction (physical interaction) clearly dominate (>80%). In other cases, complementary function information from homology searches, which is available for most of the genes with significant genomic context, is essential to predict the type of interaction. Using a combination of genomic context and homology searches, new functional features can be predicted for 10% of M. genitalium genes. PMID:10958638

  20. A quantitative dynamic systems model of health-related quality of life among older adults

    PubMed Central

    Roppolo, Mattia; Kunnen, E Saskia; van Geert, Paul L; Mulasso, Anna; Rabaglietti, Emanuela

    2015-01-01

    Health-related quality of life (HRQOL) is a person-centered concept. The analysis of HRQOL is highly relevant in the aged population, which is generally suffering from health decline. Starting from a conceptual dynamic systems model that describes the development of HRQOL in individuals over time, this study aims to develop and test a quantitative dynamic systems model, in order to reveal the possible dynamic trends of HRQOL among older adults. The model is tested in different ways: first, with a calibration procedure to test whether the model produces theoretically plausible results, and second, with a preliminary validation procedure using empirical data of 194 older adults. This first validation tested the prediction that given a particular starting point (first empirical data point), the model will generate dynamic trajectories that lead to the observed endpoint (second empirical data point). The analyses reveal that the quantitative model produces theoretically plausible trajectories, thus providing support for the calibration procedure. Furthermore, the analyses of validation show a good fit between empirical and simulated data. In fact, no differences were found in the comparison between empirical and simulated final data for the same subgroup of participants, whereas the comparison between different subgroups of people resulted in significant differences. These data provide an initial basis of evidence for the dynamic nature of HRQOL during the aging process. Therefore, these data may give new theoretical and applied insights into the study of HRQOL and its development with time in the aging population. PMID:26604722

  1. Quantitative Protein Topography Analysis and High-Resolution Structure Prediction Using Hydroxyl Radical Labeling and Tandem-Ion Mass Spectrometry (MS)*

    PubMed Central

    Kaur, Parminder; Kiselar, Janna; Yang, Sichun; Chance, Mark R.

    2015-01-01

    Hydroxyl radical footprinting-based MS for protein structure assessment has the goal of understanding ligand-induced conformational changes and macromolecular interactions, for example, protein tertiary and quaternary structure, but the structural resolution provided by typical peptide-level quantification is limiting. In this work, we present experimental strategies using tandem-MS fragmentation to increase the spatial resolution of the technique to the single residue level to provide a high precision tool for molecular biophysics research. Overall, in this study we demonstrated an eightfold increase in structural resolution compared with peptide level assessments. In addition, to provide a quantitative analysis of residue-based solvent accessibility and protein topography as a basis for high-resolution structure prediction, we illustrate strategies of data transformation using the relative reactivity of side chains as a normalization strategy and predict side-chain surface area from the footprinting data. We tested the methods by examination of Ca(2+)-calmodulin, showing highly significant correlations between surface area and side-chain contact predictions for individual side chains and the crystal structure. Tandem-ion-based hydroxyl radical footprinting-MS provides quantitative high-resolution protein topology information in solution that can fill existing gaps in structure determination for large proteins and macromolecular complexes. PMID:25687570

  2. Validation of PCR methods for quantitation of genetically modified plants in food.

    PubMed

    Hübner, P; Waiblinger, H U; Pietsch, K; Brodmann, P

    2001-01-01

    For enforcement of the recently introduced labeling threshold for genetically modified organisms (GMOs) in food ingredients, quantitative detection methods such as quantitative competitive (QC-PCR) and real-time PCR are applied by official food control laboratories. The experiences of 3 European food control laboratories in validating such methods were compared to describe realistic performance characteristics of quantitative PCR detection methods. The limit of quantitation (LOQ) of GMO-specific, real-time PCR was experimentally determined to reach 30-50 target molecules, which is close to theoretical prediction. Starting PCR with 200 ng genomic plant DNA, the LOQ depends primarily on the genome size of the target plant and ranges from 0.02% for rice to 0.7% for wheat. The precision of quantitative PCR detection methods, expressed as relative standard deviation (RSD), varied from 10 to 30%. Using Bt176 corn-containing test samples and applying Bt176-specific QC-PCR, mean values deviated from true values by -7 to 18%, with an average of 2+/-10%. Ruggedness of real-time PCR detection methods was assessed in an interlaboratory study analyzing commercial, homogeneous food samples. Roundup Ready soybean DNA contents were determined in the range of 0.3 to 36%, relative to soybean DNA, with RSDs of about 25%. Taking the precision of quantitative PCR detection methods into account, suitable sample plans and sample sizes for GMO analysis are suggested. Because quantitative GMO detection methods measure GMO contents of samples in relation to reference material (calibrants), high priority must be given to international agreements and standardization on certified reference materials.

  3. Quantitative spatiotemporal analysis of antibody fragment diffusion and endocytic consumption in tumor spheroids.

    PubMed

    Thurber, Greg M; Wittrup, K Dane

    2008-05-01

    Antibody-based cancer treatment depends upon distribution of the targeting macromolecule throughout tumor tissue, and spatial heterogeneity could significantly limit efficacy in many cases. Antibody distribution in tumor tissue is a function of drug dosage, antigen concentration, binding affinity, antigen internalization, drug extravasation from blood vessels, diffusion in the tumor extracellular matrix, and systemic clearance rates. We have isolated the effects of a subset of these variables by live-cell microscopic imaging of single-chain antibody fragments against carcinoembryonic antigen in LS174T tumor spheroids. The measured rates of scFv penetration and retention were compared with theoretical predictions based on simple scaling criteria. The theory predicts that antibody dose must be large enough to drive a sufficient diffusive flux of antibody to overcome cellular internalization, and exposure time must be long enough to allow penetration to the spheroid center. The experimental results in spheroids are quantitatively consistent with these predictions. Therefore, simple scaling criteria can be applied to accurately predict antibody and antibody fragment penetration distance in tumor tissue.

  4. Quantitative Spatiotemporal Analysis of Antibody Fragment Diffusion and Endocytic Consumption in Tumor Spheroids

    PubMed Central

    Thurber, Greg M.; Wittrup, K. Dane

    2010-01-01

    Antibody-based cancer treatment depends upon distribution of the targeting macromolecule throughout tumor tissue, and spatial heterogeneity could significantly limit efficacy in many cases. Antibody distribution in tumor tissue is a function of drug dosage, antigen concentration, binding affinity, antigen internalization, drug extravasation from blood vessels, diffusion in the tumor extracellular matrix, and systemic clearance rates. We have isolated the effects of a subset of these variables by live-cell microscopic imaging of single-chain antibody fragments against carcinoembryonic antigen in LS174T tumor spheroids. The measured rates of scFv penetration and retention were compared with theoretical predictions based on simple scaling criteria. The theory predicts that antibody dose must be large enough to drive a sufficient diffusive flux of antibody to overcome cellular internalization, and exposure time must be long enough to allow penetration to the spheroid center. The experimental results in spheroids are quantitatively consistent with these predictions. Therefore, simple scaling criteria can be applied to accurately predict antibody and antibody fragment penetration distance in tumor tissue. PMID:18451160

  5. Theoretical predictions of vibration-rotation-tunneling dynamics of the weakly bound trimer (H 2O) 2HCl

    NASA Astrophysics Data System (ADS)

    Struniewicz, Cezary; Korona, Tatiana; Moszynski, Robert; Milet, Anne

    2001-08-01

    In this Letter we report a theoretical study of the vibration-rotation-tunneling (VRT) states of the (H 2O) 2HCl trimer. Five degrees of freedom are considered: two angles corresponding to the torsional (flipping) motions of the free, non-hydrogen-bonded, hydrogen atoms in the complex, and three angles describing the overall rotation of the trimer in the space. A two-dimensional potential energy surface is generated ab initio by symmetry-adapted perturbation theory (SAPT). Tunneling splittings, frequencies of the intermolecular vibrations, and vibrational line strengths of spectroscopic transitions are predicted.

  6. Novel quantitative pigmentation phenotyping enhances genetic association, epistasis, and prediction of human eye colour.

    PubMed

    Wollstein, Andreas; Walsh, Susan; Liu, Fan; Chakravarthy, Usha; Rahu, Mati; Seland, Johan H; Soubrane, Gisèle; Tomazzoli, Laura; Topouzis, Fotis; Vingerling, Johannes R; Vioque, Jesus; Böhringer, Stefan; Fletcher, Astrid E; Kayser, Manfred

    2017-02-27

    Success of genetic association and the prediction of phenotypic traits from DNA are known to depend on the accuracy of phenotype characterization, amongst other parameters. To overcome limitations in the characterization of human iris pigmentation, we introduce a fully automated approach that specifies the areal proportions proposed to represent differing pigmentation types, such as pheomelanin, eumelanin, and non-pigmented areas within the iris. We demonstrate the utility of this approach using high-resolution digital eye imagery and genotype data from 12 selected SNPs from over 3000 European samples of seven populations that are part of the EUREYE study. In comparison to previous quantification approaches, (1) we achieved an overall improvement in eye colour phenotyping, which provides a better separation of manually defined eye colour categories. (2) Single nucleotide polymorphisms (SNPs) known to be involved in human eye colour variation showed stronger associations with our approach. (3) We found new and confirmed previously noted SNP-SNP interactions. (4) We increased SNP-based prediction accuracy of quantitative eye colour. Our findings exemplify that precise quantification using the perceived biological basis of pigmentation leads to enhanced genetic association and prediction of eye colour. We expect our approach to deliver new pigmentation genes when applied to genome-wide association testing.

  7. Novel quantitative pigmentation phenotyping enhances genetic association, epistasis, and prediction of human eye colour

    PubMed Central

    Wollstein, Andreas; Walsh, Susan; Liu, Fan; Chakravarthy, Usha; Rahu, Mati; Seland, Johan H.; Soubrane, Gisèle; Tomazzoli, Laura; Topouzis, Fotis; Vingerling, Johannes R.; Vioque, Jesus; Böhringer, Stefan; Fletcher, Astrid E.; Kayser, Manfred

    2017-01-01

    Success of genetic association and the prediction of phenotypic traits from DNA are known to depend on the accuracy of phenotype characterization, amongst other parameters. To overcome limitations in the characterization of human iris pigmentation, we introduce a fully automated approach that specifies the areal proportions proposed to represent differing pigmentation types, such as pheomelanin, eumelanin, and non-pigmented areas within the iris. We demonstrate the utility of this approach using high-resolution digital eye imagery and genotype data from 12 selected SNPs from over 3000 European samples of seven populations that are part of the EUREYE study. In comparison to previous quantification approaches, (1) we achieved an overall improvement in eye colour phenotyping, which provides a better separation of manually defined eye colour categories. (2) Single nucleotide polymorphisms (SNPs) known to be involved in human eye colour variation showed stronger associations with our approach. (3) We found new and confirmed previously noted SNP-SNP interactions. (4) We increased SNP-based prediction accuracy of quantitative eye colour. Our findings exemplify that precise quantification using the perceived biological basis of pigmentation leads to enhanced genetic association and prediction of eye colour. We expect our approach to deliver new pigmentation genes when applied to genome-wide association testing. PMID:28240252

  8. Predicting the stochastic guiding of kinesin-driven microtubules in microfabricated tracks: a statistical-mechanics-based modeling approach.

    PubMed

    Lin, Chih-Tin; Meyhofer, Edgar; Kurabayashi, Katsuo

    2010-01-01

    Directional control of microtubule shuttles via microfabricated tracks is key to the development of controlled nanoscale mass transport by kinesin motor molecules. Here we develop and test a model to quantitatively predict the stochastic guiding behavior of microtubules when they mechanically collide with the sidewalls of lithographically patterned tracks. By taking into account appropriate probability distributions of microscopic states of the microtubule system, the model allows us to theoretically analyze the roles of collision conditions and kinesin surface densities in determining how the motion of microtubule shuttles is controlled. In addition, we experimentally observe the statistics of microtubule collision events and compare our theoretical prediction with experimental data to validate our model. The model will direct the design of future hybrid nanotechnology devices that integrate nanoscale transport systems powered by kinesin-driven molecular shuttles.

  9. The Incremental Value of Subjective and Quantitative Assessment of 18F-FDG PET for the Prediction of Pathologic Complete Response to Preoperative Chemoradiotherapy in Esophageal Cancer.

    PubMed

    van Rossum, Peter S N; Fried, David V; Zhang, Lifei; Hofstetter, Wayne L; van Vulpen, Marco; Meijer, Gert J; Court, Laurence E; Lin, Steven H

    2016-05-01

    A reliable prediction of a pathologic complete response (pathCR) to chemoradiotherapy before surgery for esophageal cancer would enable investigators to study the feasibility and outcome of an organ-preserving strategy after chemoradiotherapy. So far no clinical parameters or diagnostic studies are able to accurately predict which patients will achieve a pathCR. The aim of this study was to determine whether subjective and quantitative assessment of baseline and postchemoradiation (18)F-FDG PET can improve the accuracy of predicting pathCR to preoperative chemoradiotherapy in esophageal cancer beyond clinical predictors. This retrospective study was approved by the institutional review board, and the need for written informed consent was waived. Clinical parameters along with subjective and quantitative parameters from baseline and postchemoradiation (18)F-FDG PET were derived from 217 esophageal adenocarcinoma patients who underwent chemoradiotherapy followed by surgery. The associations between these parameters and pathCR were studied in univariable and multivariable logistic regression analysis. Four prediction models were constructed and internally validated using bootstrapping to study the incremental predictive values of subjective assessment of (18)F-FDG PET, conventional quantitative metabolic features, and comprehensive (18)F-FDG PET texture/geometry features, respectively. The clinical benefit of (18)F-FDG PET was determined using decision-curve analysis. A pathCR was found in 59 (27%) patients. A clinical prediction model (corrected c-index, 0.67) was improved by adding (18)F-FDG PET-based subjective assessment of response (corrected c-index, 0.72). This latter model was slightly improved by the addition of 1 conventional quantitative metabolic feature only (i.e., postchemoradiation total lesion glycolysis; corrected c-index, 0.73), and even more by subsequently adding 4 comprehensive (18)F-FDG PET texture/geometry features (corrected c-index, 0
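
    To make the validation step above concrete, the following minimal Python sketch shows how an optimism-corrected c-index can be obtained by bootstrapping a logistic prediction model. The data are synthetic and the three predictors are stand-ins (e.g., a clinical score and two PET-derived features); nothing here reproduces the study's actual model or coefficients.

        # Sketch: optimism-corrected c-index for a logistic prediction model,
        # estimated by bootstrapping (illustrative synthetic data, not the study's).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 217                                     # cohort size as reported above
        X = rng.normal(size=(n, 3))                 # hypothetical clinical + PET features
        y = (rng.random(n) < 1 / (1 + np.exp(-(0.8 * X[:, 0] - 1.0)))).astype(int)  # ~27% events

        model = LogisticRegression().fit(X, y)
        apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

        optimism = []
        for _ in range(200):                        # bootstrap resamples
            idx = rng.integers(0, n, n)
            m = LogisticRegression().fit(X[idx], y[idx])
            auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
            auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
            optimism.append(auc_boot - auc_orig)

        corrected = apparent - np.mean(optimism)    # optimism-corrected c-index
        print(round(apparent, 3), round(corrected, 3))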

  10. Assessment of quantitative structure-activity relationship of toxicity prediction models for Korean chemical substance control legislation

    PubMed Central

    Kim, Kwang-Yon; Shin, Seong Eun; No, Kyoung Tai

    2015-01-01

    Objectives For successful adoption of legislation controlling the registration and assessment of chemical substances, it is important to obtain sufficient toxicological experimental evidence and other related information. It is also essential to obtain a sufficient number of predicted risk and toxicity results. In particular, methods for predicting the toxicities of chemical substances during acquisition of the required data ultimately become an economical way of dealing with new substances in the future. Although the need for such methods is gradually increasing, the required information about their reliability and applicability range has not been systematically provided. Methods There are various representative environmental and human toxicity models based on quantitative structure-activity relationships (QSAR). Here, we secured 10 representative QSAR-based prediction models, together with the information needed to make predictions about substances that are expected to be regulated. We used models that predict and confirm the usability of the information expected to be collected and submitted according to the legislation. After collecting and evaluating each predictive model and the relevant data, we prepared methods for quantifying their scientific validity and reliability, which are essential conditions for using predictive models. Results We calculated predicted values for the models. Furthermore, we deduced and compared the adequacies of the models using the Alternative non-testing method assessed for Registration, Evaluation, Authorization, and Restriction of Chemicals Substances scoring system, and deduced the applicability domains for each model. Additionally, we calculated and compared inclusion rates of substances expected to be regulated, to confirm the applicability. Conclusions We evaluated and compared the data, adequacy, and applicability of our selected QSAR-based toxicity prediction models, and included them in a database. Based on this data, we aimed to construct a system that can be used

  11. Hierarchical representations of the five-factor model of personality in predicting job performance: integrating three organizing frameworks with two theoretical perspectives.

    PubMed

    Judge, Timothy A; Rodell, Jessica B; Klinger, Ryan L; Simon, Lauren S; Crawford, Eean R

    2013-11-01

    Integrating 2 theoretical perspectives on predictor-criterion relationships, the present study developed and tested a hierarchical framework in which each five-factor model (FFM) personality trait comprises 2 DeYoung, Quilty, and Peterson (2007) facets, which in turn comprise 6 Costa and McCrae (1992) NEO facets. Both theoretical perspectives-the bandwidth-fidelity dilemma and construct correspondence-suggest that lower order traits would better predict facets of job performance (task performance and contextual performance). They differ, however, as to the relative merits of broad and narrow traits in predicting a broad criterion (overall job performance). We first meta-analyzed the relationship of the 30 NEO facets to overall job performance and its facets. Overall, 1,176 correlations from 410 independent samples (combined N = 406,029) were coded and meta-analyzed. We then formed the 10 DeYoung et al. facets from the NEO facets, and 5 broad traits from those facets. Overall, results provided support for the 6-2-1 framework in general and the importance of the NEO facets in particular. (c) 2013 APA, all rights reserved.

  12. A Novel Quasi-One-Dimensional Topological Insulator in Bismuth Iodide β-Bi4I4: Theoretical Prediction and Experimental Confirmation

    NASA Astrophysics Data System (ADS)

    Yazyev, Oleg V.; Autès, Gabriel; Isaeva, Anna; Moreschini, Luca; Johannsen, Jens C.; Pisoni, Andrea; Filatova, Taisia G.; Kuznetsov, Alexey N.; Forró, László; van den Broek, Wouter; Kim, Yeongkwan; Denlinger, Jonathan D.; Rotenberg, Eli; Bostwick, Aaron; Grioni, Marco

    2015-03-01

    A new strong Z2 topological insulator is theoretically predicted and experimentally confirmed in the β-phase of quasi-one-dimensional bismuth iodide Bi4I4. According to our first-principles calculations, the material is characterized by the Z2 invariants (1;110), making it the first representative of this topological class. Importantly, the electronic structure of β-Bi4I4 is in proximity to both the weak topological insulator phase (0;001) and the trivial phase (0;000), suggesting that a high degree of control over the topological electronic properties of this material can be achieved. Experimentally produced samples of this material appear to be practically defect-free, which results in a low concentration of intrinsic charge carriers. By using angle-resolved photoemission spectroscopy (ARPES) on the (001) surface we confirm the theoretical predictions of a highly anisotropic band structure with a small band gap hosting topological surface states centered at the M point, at the boundary of the surface Brillouin zone. We acknowledge support from Swiss NSF, ERC project "TopoMat", NCCR-MARVEL, DFG and US DoE. G.A., A.I., L.M. and J.C.J. contributed equally to this work.

  13. Quantitative Adverse Outcome Pathways and Their Application to Predictive Toxicology

    EPA Science Inventory

    A quantitative adverse outcome pathway (qAOP) consists of one or more biologically based, computational models describing key event relationships linking a molecular initiating event (MIE) to an adverse outcome. A qAOP provides quantitative, dose–response, and time-course p...

  14. The prediction of candidate genes for cervix related cancer through gene ontology and graph theoretical approach.

    PubMed

    Hindumathi, V; Kranthi, T; Rao, S B; Manimaran, P

    2014-06-01

    With rapidly changing technology, prediction of candidate genes has become an indispensable task in recent years, mainly in the field of biological research. The empirical methods for candidate gene prioritization that help explore the potential pathways between genetic determinants and complex diseases are highly cumbersome and labor intensive. In such a scenario, predicting potential targets for a disease state through in silico approaches is of interest to researchers. The prodigious availability of protein interaction data coupled with gene annotation facilitates the accurate determination of disease-specific candidate genes. In our work we have prioritized cervix-related cancer candidate genes by employing the approach of Csaba Ortutay and co-workers, which identifies candidate genes through graph-theoretical centrality measures and gene ontology. Using the human protein interaction data, cervical cancer gene sets and the ontological terms, we were able to predict 15 novel candidates for cervical carcinogenesis. The disease relevance of the anticipated candidate genes was corroborated through a literature survey. The presence of drugs for these candidates was also detected through the Therapeutic Target Database (TTD) and DrugMap Central (DMC), which affirms that they may serve as potential drug targets for cervical cancer.
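
    The sketch below illustrates the general idea of centrality-based candidate gene prioritization in a protein-interaction network, in the spirit of the approach described above. The toy edge list, gene names, seed set and the way the three centralities are combined are assumptions for illustration, not the authors' pipeline.

        # Sketch: ranking candidate genes in a protein-interaction network by
        # graph-theoretical centrality (hypothetical toy network and gene names).
        import networkx as nx

        edges = [("TP53", "MDM2"), ("TP53", "E6"), ("E6", "E6AP"),
                 ("RB1", "E7"), ("RB1", "CDK4"), ("CDK4", "CCND1"),
                 ("MDM2", "CDK4"), ("E6AP", "RB1")]
        G = nx.Graph(edges)

        # Combine three common centrality measures into a single priority score.
        deg = nx.degree_centrality(G)
        btw = nx.betweenness_centrality(G)
        clo = nx.closeness_centrality(G)
        score = {g: deg[g] + btw[g] + clo[g] for g in G.nodes}

        known_disease_genes = {"TP53", "RB1"}        # seed set (illustrative)
        candidates = sorted((g for g in G.nodes if g not in known_disease_genes),
                            key=score.get, reverse=True)
        print(candidates)    # highest-scoring interactors are proposed as candidates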

  15. Quantitative prediction of shrimp disease incidence via the profiles of gut eukaryotic microbiota.

    PubMed

    Xiong, Jinbo; Yu, Weina; Dai, Wenfang; Zhang, Jinjie; Qiu, Qiongfen; Ou, Changrong

    2018-04-01

    A common notion is emerging that gut eukaryotes are commensal or beneficial rather than detrimental. To date, however, surprisingly few studies have attempted to discern the factors that govern the assembly of gut eukaryotes, despite growing interest in the relationship between gut microbiota dysbiosis and disease. Herein, we first explored how the gut eukaryotic microbiotas were assembled over shrimp postlarval to adult stages and over a disease progression. The gut eukaryotic communities changed markedly as healthy shrimp aged and converged toward an adult-microbiota configuration. However, this adult-like stability was distorted by disease exacerbation. A null model revealed that the deterministic processes governing gut eukaryotic assembly tended to become more important over healthy shrimp development, whereas this trend was inverted as the disease progressed. After ruling out the baseline shift of gut eukaryotes over shrimp ages, we identified disease-discriminatory taxa (the species level afforded the highest prediction accuracy) that are characteristic of shrimp health status. The profiles of these taxa contributed an overall 92.4% accuracy in predicting shrimp health status. Notably, this model can accurately diagnose the onset of shrimp disease. Interspecies interaction analysis depicted how the disease-discriminatory taxa interact with one another in sustaining shrimp health. Taken together, our findings offer novel insights into the underlying ecological processes that govern the assembly of gut eukaryotes over shrimp postlarval to adult stages and a disease progression. Intriguingly, the established model can quantitatively and accurately predict the incidence of shrimp disease.
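
    As an illustration of how taxa profiles can be turned into a quantitative health-status predictor, the following sketch trains a random-forest classifier on simulated abundance data; the classifier choice, the simulated data, and the identification of 'discriminatory taxa' via feature importances are assumptions made here, not the study's exact model.

        # Sketch: predicting health status from eukaryote taxa abundance profiles
        # with a random-forest classifier (simulated data, illustrative only).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n_samples, n_taxa = 120, 30
        X = rng.lognormal(mean=0.0, sigma=1.0, size=(n_samples, n_taxa))
        X = X / X.sum(axis=1, keepdims=True)                 # relative abundances
        y = (X[:, 0] + X[:, 1] > np.median(X[:, 0] + X[:, 1])).astype(int)  # 'diseased' label

        clf = RandomForestClassifier(n_estimators=300, random_state=0)
        acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
        print("cross-validated accuracy:", acc.mean().round(3))

        # Taxa with the largest importances play the role of 'disease-discriminatory' taxa.
        clf.fit(X, y)
        top = np.argsort(clf.feature_importances_)[::-1][:5]
        print("top discriminatory taxa indices:", top)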

  16. Prediction of Safety Margin and Optimization of Dosing Protocol for a Novel Antibiotic using Quantitative Systems Pharmacology Modeling.

    PubMed

    Woodhead, Jeffrey L; Paech, Franziska; Maurer, Martina; Engelhardt, Marc; Schmitt-Hoffmann, Anne H; Spickermann, Jochen; Messner, Simon; Wind, Mathias; Witschi, Anne-Therese; Krähenbühl, Stephan; Siler, Scott Q; Watkins, Paul B; Howell, Brett A

    2018-06-07

    Elevations of liver enzymes have been observed in clinical trials with BAL30072, a novel antibiotic. In vitro assays have identified potential mechanisms for the observed hepatotoxicity, including electron transport chain (ETC) inhibition and reactive oxygen species (ROS) generation. DILIsym, a quantitative systems pharmacology (QSP) model of drug-induced liver injury, has been used to predict the likelihood that each mechanism explains the observed toxicity. DILIsym was also used to predict the safety margin for a novel BAL30072 dosing scheme; it was predicted to be low. DILIsym was then used to recommend potential modifications to this dosing scheme; weight-adjusted dosing, together with a requirement to assay plasma alanine aminotransferase (ALT) daily and to stop dosing as soon as ALT increases were observed, improved the predicted safety margin of BAL30072 and decreased the predicted likelihood of severe injury. This research demonstrates a potential application for QSP modeling in improving the safety profile of candidate drugs. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  17. Quantitative power Doppler ultrasound measures of peripheral joint synovitis in poor prognosis early rheumatoid arthritis predict radiographic progression.

    PubMed

    Sreerangaiah, Dee; Grayer, Michael; Fisher, Benjamin A; Ho, Meilien; Abraham, Sonya; Taylor, Peter C

    2016-01-01

    To assess the value of quantitative vascular imaging by power Doppler US (PDUS) as a tool that can be used to stratify patient risk of joint damage in early seropositive RA while still biologic naive but on synthetic DMARD treatment. Eighty-five patients with seropositive RA of <3 years duration had clinical, laboratory and imaging assessments at 0 and 12 months. Imaging assessments consisted of radiographs of the hands and feet, two-dimensional (2D) high-frequency and PDUS imaging of 10 MCP joints that were scored for erosions and vascularity and three-dimensional (3D) PDUS of MCP joints and wrists that were scored for vascularity. Severe deterioration on radiographs and ultrasonography was seen in 45 and 28% of patients, respectively. The 3D power Doppler volume and 2D vascularity scores were the most useful US predictors of deterioration. These variables were modelled in two equations that estimate structural damage over 12 months. The equations had a sensitivity of 63.2% and specificity of 80.9% for predicting radiographic structural damage and a sensitivity of 54.2% and specificity of 96.7% for predicting structural damage on ultrasonography. In seropositive early RA, quantitative vascular imaging by PDUS has clinical utility in predicting which patients will derive benefit from early use of biologic therapy. © The Author 2015. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. Elevated carbon dioxide is predicted to promote coexistence among competing species in a trait-based model

    DOE PAGES

    Ali, Ashehad A.; Medlyn, Belinda E.; Aubier, Thomas G.; ...

    2015-10-06

    Differential species responses to atmospheric CO2 concentration (Ca) could lead to quantitative changes in competition among species and community composition, with flow-on effects for ecosystem function. However, there has been little theoretical analysis of how elevated Ca (eCa) will affect plant competition, or how the composition of plant communities might change. Such theoretical analysis is needed for developing testable hypotheses to frame experimental research. Here, we investigated theoretically how plant competition might change under eCa by implementing two alternative competition theories, resource use theory and resource capture theory, in a plant carbon and nitrogen cycling model. The model makes several novel predictions for the impact of eCa on plant community composition. Using resource use theory, the model predicts that eCa is unlikely to change species dominance in competition, but is likely to increase coexistence among species. Using resource capture theory, the model predicts that eCa may increase community evenness. Collectively, both theories suggest that eCa will favor coexistence and hence that species diversity should increase with eCa. Our theoretical analysis leads to a novel hypothesis for the impact of eCa on plant community composition. In this study, the hypothesis has potential to help guide the design and interpretation of eCa experiments.

  19. Job Embeddedness Demonstrates Incremental Validity When Predicting Turnover Intentions for Australian University Employees

    PubMed Central

    Heritage, Brody; Gilbert, Jessica M.; Roberts, Lynne D.

    2016-01-01

    Job embeddedness is a construct that describes the manner in which employees can be enmeshed in their jobs, reducing their turnover intentions. Recent questions regarding the properties of quantitative job embeddedness measures, and their predictive utility, have been raised. Our study compared two competing reflective measures of job embeddedness, examining their convergent, criterion, and incremental validity, as a means of addressing these questions. Cross-sectional quantitative data from 246 Australian university employees (146 academic; 100 professional) was gathered. Our findings indicated that the two compared measures of job embeddedness were convergent when total scale scores were examined. Additionally, job embeddedness was capable of demonstrating criterion and incremental validity, predicting unique variance in turnover intention. However, this finding was not readily apparent with one of the compared job embeddedness measures, which demonstrated comparatively weaker evidence of validity. We discuss the theoretical and applied implications of these findings, noting that job embeddedness has a complementary place among established determinants of turnover intention. PMID:27199817

  20. A Quantitative Approach to Assessing System Evolvability

    NASA Technical Reports Server (NTRS)

    Christian, John A., III

    2004-01-01

    When selecting a system from multiple candidates, the customer seeks the one that best meets his or her needs. Recently the desire for evolvable systems has become more important and engineers are striving to develop systems that accommodate this need. In response to this search for evolvability, we present a historical perspective on evolvability, propose a refined definition of evolvability, and develop a quantitative method for measuring this property. We address this quantitative methodology from both a theoretical and practical perspective. This quantitative model is then applied to the problem of evolving a lunar mission to a Mars mission as a case study.

  1. Predicting chemical degradation during storage from two successive concentration ratios: Theoretical investigation.

    PubMed

    Peleg, Micha; Normand, Mark D

    2015-09-01

    When a vitamin's, pigment's or other food component's chemical degradation follows a known fixed order kinetics, and its rate constant's temperature-dependence follows a two parameter model, then, at least theoretically, it is possible to extract these two parameters from two successive experimental concentration ratios determined during the food's non-isothermal storage. This requires numerical solution of two simultaneous equations, themselves the numerical solutions of two differential rate equations, with a program especially developed for the purpose. Once calculated, these parameters can be used to reconstruct the entire degradation curve for the particular temperature history and predict the degradation curves for other temperature histories. The concept and computation method were tested with simulated degradation under rising and/or falling oscillating temperature conditions, employing the exponential model to characterize the rate constant's temperature-dependence. In computer simulations, the method's predictions were robust against minor errors in the two concentration ratios. The program to do the calculations was posted as freeware on the Internet. The temperature profile can be entered as an algebraic expression that can include 'If' statements, or as an imported digitized time-temperature data file, to be converted into an Interpolating Function by the program. The numerical solution of the two simultaneous equations requires close initial guesses of the exponential model's parameters. Programs were devised to obtain these initial values by matching the two experimental concentration ratios with a generated degradation curve whose parameters can be varied manually with sliders on the screen. These programs too were made available as freeware on the Internet and were tested with published data on vitamin A. Copyright © 2015 Elsevier Ltd. All rights reserved.
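
    A minimal sketch of the idea follows: assuming first-order kinetics and the exponential model k(T) = k_ref*exp(c*(T - T_ref)), two concentration ratios measured at two storage times under a known temperature history suffice, in principle, to recover the two parameters by solving two simultaneous equations whose left-hand sides are themselves numerical ODE solutions. The temperature profile and parameter values below are invented for illustration and do not come from the paper.

        # Sketch: recovering (k_ref, c) of k(T) = k_ref*exp(c*(T - T_ref)) from two
        # concentration ratios measured during non-isothermal storage (first-order
        # kinetics assumed; temperature history and 'data' are illustrative).
        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import fsolve

        T_ref = 25.0
        T = lambda t: 25.0 + 10.0 * np.sin(2 * np.pi * t / 30.0)   # oscillating storage temperature (days)

        def ratio(t_end, k_ref, c):
            # Integrate dC/dt = -k(T(t)) * C with C(0) = 1 and return C(t_end)/C(0).
            rhs = lambda t, C: -k_ref * np.exp(c * (T(t) - T_ref)) * C
            sol = solve_ivp(rhs, (0.0, t_end), [1.0], rtol=1e-8)
            return sol.y[0, -1]

        # Two 'experimental' ratios at two storage times (simulated with known parameters).
        t1, t2 = 10.0, 20.0
        r1, r2 = ratio(t1, 0.02, 0.09), ratio(t2, 0.02, 0.09)

        # Solve the two simultaneous equations for (k_ref, c) from the two ratios.
        equations = lambda p: [ratio(t1, p[0], p[1]) - r1, ratio(t2, p[0], p[1]) - r2]
        k_ref_est, c_est = fsolve(equations, x0=[0.01, 0.05])      # close initial guesses, as noted above
        print(k_ref_est, c_est)    # should recover ~0.02 and ~0.09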

  2. Continuously growing rodent molars result from a predictable quantitative evolutionary change over 50 million years

    PubMed Central

    Mushegyan, Vagan; Eronen, Jussi T.; Lawing, A. Michelle; Sharir, Amnon; Janis, Christine; Jernvall, Jukka; Klein, Ophir D.

    2015-01-01

    Summary The fossil record is widely informative about evolution, but fossils are not systematically used to study the evolution of stem cell-driven renewal. Here, we examined evolution of the continuous growth (hypselodonty) of rodent molar teeth, which is fuelled by the presence of dental stem cells. We studied occurrences of 3500 North American rodent fossils, ranging from 50 million years ago (mya) to 2 mya. We examined changes in molar height to determine if evolution of hypselodonty shows distinct patterns in the fossil record, and we found that hypselodont taxa emerged through intermediate forms of increasing crown height. Next, we designed a Markov simulation model, which replicated molar height increases throughout the Cenozoic, and, moreover, evolution of hypselodonty. Thus, by extension, the retention of the adult stem-cell niche appears to be a predictable quantitative rather than a stochastic qualitative process. Our analyses predict that hypselodonty will eventually become the dominant phenotype. PMID:25921530
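
    The following sketch conveys the flavour of such a Markov simulation: crown-height classes advance with a small upward-biased transition probability per time step, and the hypselodont class grows over time. The transition matrix and time step are illustrative assumptions, not the values estimated from the fossil data.

        # Sketch: a discrete Markov simulation of molar crown-height classes
        # (brachydont -> mesodont -> hypsodont -> hypselodont); illustrative only.
        import numpy as np

        states = ["brachydont", "mesodont", "hypsodont", "hypselodont"]
        P = np.array([[0.97, 0.03, 0.00, 0.00],    # upward-biased transitions per time step
                      [0.01, 0.96, 0.03, 0.00],
                      [0.00, 0.01, 0.96, 0.03],
                      [0.00, 0.00, 0.00, 1.00]])   # hypselodonty treated as absorbing

        dist = np.array([1.0, 0.0, 0.0, 0.0])      # all lineages start low-crowned (~50 Mya)
        for step in range(96):                     # ~48 My at 0.5 My per step
            dist = dist @ P

        print(dict(zip(states, dist.round(3))))    # fraction in each crown-height class today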

  3. Semi-quantitative prediction of a multiple API solid dosage form with a combination of vibrational spectroscopy methods.

    PubMed

    Hertrampf, A; Sousa, R M; Menezes, J C; Herdling, T

    2016-05-30

    Quality control (QC) in the pharmaceutical industry is a key activity in ensuring medicines have the required quality, safety and efficacy for their intended use. QC departments at pharmaceutical companies are responsible for all release testing of final products but also all incoming raw materials. Near-infrared spectroscopy (NIRS) and Raman spectroscopy are important techniques for fast and accurate identification and qualification of pharmaceutical samples. Tablets containing two different active pharmaceutical ingredients (API) [bisoprolol, hydrochlorothiazide] in different commercially available dosages were analysed using Raman- and NIR Spectroscopy. The goal was to define multivariate models based on each vibrational spectroscopy to discriminate between different dosages (identity) and predict their dosage (semi-quantitative). Furthermore the combination of spectroscopic techniques was investigated. Therefore, two different multiblock techniques based on PLS have been applied: multiblock PLS (MB-PLS) and sequential-orthogonalised PLS (SO-PLS). NIRS showed better results compared to Raman spectroscopy for both identification and quantitation. The multiblock techniques investigated showed that each spectroscopy contains information not present or captured with the other spectroscopic technique, thus demonstrating that there is a potential benefit in their combined use for both identification and quantitation purposes. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Quantitative Trait Loci Differentiating the Outbreeding Mimulus Guttatus from the Inbreeding M. Platycalyx

    PubMed Central

    Lin, J. Z.; Ritland, K.

    1997-01-01

    Theoretical predictions about the evolution of selfing depend on the genetic architecture of loci controlling selfing (monogenic vs. polygenic determination, large vs. small effect of alleles, dominance vs. recessiveness), and studies of such architecture are lacking. We inferred the genetic basis of mating system differences between the outbreeding Mimulus guttatus and the inbreeding M. platycalyx by quantitative trait locus (QTL) mapping using random amplified polymorphic DNA and isozyme markers. One to three QTL were detected for each of five mating system characters, and each QTL explained 7.6-28.6% of the phenotypic variance. Taken together, QTL accounted for up to 38% of the variation in mating system characters, and a large proportion of variation was unaccounted for. Inferred QTL often affected more than one trait, contributing to the genetic correlation between those traits. These results are consistent with the hypothesis that quantitative variation in plant mating system characters is primarily controlled by loci with small effect. PMID:9215912

  5. Quantitative biology: where modern biology meets physical sciences.

    PubMed

    Shekhar, Shashank; Zhu, Lian; Mazutis, Linas; Sgro, Allyson E; Fai, Thomas G; Podolski, Marija

    2014-11-05

    Quantitative methods and approaches have been playing an increasingly important role in cell biology in recent years. They involve making accurate measurements to test a predefined hypothesis in order to compare experimental data with predictions generated by theoretical models, an approach that has benefited physicists for decades. Building quantitative models in experimental biology not only has led to discoveries of counterintuitive phenomena but has also opened up novel research directions. To make the biological sciences more quantitative, we believe a two-pronged approach needs to be taken. First, graduate training needs to be revamped to ensure biology students are adequately trained in physical and mathematical sciences and vice versa. Second, students of both the biological and the physical sciences need to be provided adequate opportunities for hands-on engagement with the methods and approaches necessary to be able to work at the intersection of the biological and physical sciences. We present the annual Physiology Course organized at the Marine Biological Laboratory (Woods Hole, MA) as a case study for a hands-on training program that gives young scientists the opportunity not only to acquire the tools of quantitative biology but also to develop the necessary thought processes that will enable them to bridge the gap between these disciplines. © 2014 Shekhar, Zhu, Mazutis, Sgro, Fai, and Podolski. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  6. Quantitative structure-property relationships for prediction of boiling point, vapor pressure, and melting point.

    PubMed

    Dearden, John C

    2003-08-01

    Boiling point, vapor pressure, and melting point are important physicochemical properties in the modeling of the distribution and fate of chemicals in the environment. However, such data often are not available, and therefore must be estimated. Over the years, many attempts have been made to calculate boiling points, vapor pressures, and melting points by using quantitative structure-property relationships, and this review examines and discusses the work published in this area, and concentrates particularly on recent studies. A number of software programs are commercially available for the calculation of boiling point, vapor pressure, and melting point, and these have been tested for their predictive ability with a test set of 100 organic chemicals.

  7. Quantitative fetal fibronectin testing in combination with cervical length measurement in the prediction of spontaneous preterm delivery in symptomatic women.

    PubMed

    Bruijn, Mmc; Vis, J Y; Wilms, F F; Oudijk, M A; Kwee, A; Porath, M M; Oei, G; Scheepers, Hcj; Spaanderman, Mea; Bloemenkamp, Kwm; Haak, M C; Bolte, A C; Vandenbussche, Fpha; Woiski, M D; Bax, C J; Cornette, Jmj; Duvekot, J J; Nij Bijvanck, Bwa; van Eyck, J; Franssen, Mtm; Sollie, K M; van der Post, Jam; Bossuyt, Pmm; Opmeer, B C; Kok, M; Mol, Bwj; van Baaren, G-J

    2016-11-01

    To evaluate whether in symptomatic women, the combination of quantitative fetal fibronectin (fFN) testing and cervical length (CL) improves the prediction of preterm delivery (PTD) within 7 days compared with qualitative fFN and CL. Post hoc analysis of frozen fFN samples of a nationwide cohort study. Ten perinatal centres in the Netherlands. Symptomatic women between 24 and 34 weeks of gestation. The risk of PTD <7 days was estimated in predefined CL and fFN strata. We used logistic regression to develop a model including quantitative fFN and CL, and one including qualitative fFN (threshold 50 ng/ml) and CL. We compared the models' capacity to identify women at low risk (<5%) for delivery within 7 days using a reclassification table. Spontaneous delivery within 7 days after study entry. We studied 350 women, of whom 69 (20%) delivered within 7 days. The risk of PTD in <7 days ranged from 2% in the lowest fFN group (<10 ng/ml) to 71% in the highest group (>500 ng/ml). Multivariable logistic regression showed an increasing risk of PTD in <7 days with rising fFN concentration [10-49 ng/ml: odds ratio (OR) 1.3, 95% confidence interval (95% CI) 0.23-7.0; 50-199 ng/ml: OR 3.2, 95% CI 0.79-13; 200-499 ng/ml: OR 9.0, 95% CI 2.3-35; >500 ng/ml: OR 39, 95% CI 9.4-164] and shortening of the CL (OR 0.86 per mm, 95% CI 0.82-0.90). Use of quantitative fFN instead of qualitative fFN resulted in reclassification of 18 (5%) women from high to low risk, of whom one (6%) woman delivered within 7 days. In symptomatic women, quantitative fFN testing does not improve the prediction of PTD within 7 days compared with qualitative fFN testing in combination with CL measurement in terms of reclassification from high to low (<5%) risk, but it adds value across the risk range. Quantitative fFN testing adds value to qualitative fFN testing with CL measurement in the prediction of PTD. © 2015 Royal College of Obstetricians and Gynaecologists.
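
    A minimal sketch of how quantitative fFN and cervical length can be combined in a logistic model and used to flag women below the 5% predicted-risk threshold used above. The simulated data, coefficients and outcome rate are assumptions for illustration only and do not reproduce the study's model.

        # Sketch: logistic risk model combining quantitative fFN and cervical length,
        # with a 5% predicted-risk cut-off (all numbers invented for illustration).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        n = 350
        ffn = rng.lognormal(mean=3.0, sigma=1.5, size=n)          # quantitative fFN, ng/ml
        cl = rng.normal(loc=28.0, scale=8.0, size=n)              # cervical length, mm
        lin = 0.8 * np.log10(ffn + 1) - 0.09 * cl + 0.3
        y = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)  # delivery within 7 days

        X = np.column_stack([np.log10(ffn + 1), cl])
        model = LogisticRegression().fit(X, y)
        risk = model.predict_proba(X)[:, 1]

        low_risk = risk < 0.05                                    # candidates for withholding treatment
        print(f"{low_risk.sum()} women classified as low risk; "
              f"{y[low_risk].sum()} of them actually delivered within 7 days")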

  8. WPC Quantitative Precipitation Forecasts - Day 1

    Science.gov Websites


  9. Near infrared spectroscopy as an on-line method to quantitatively determine glycogen and predict ultimate pH in pre rigor bovine M. longissimus dorsi.

    PubMed

    Lomiwes, D; Reis, M M; Wiklund, E; Young, O A; North, M

    2010-12-01

    The potential of near infrared (NIR) spectroscopy as an on-line method to quantify glycogen and predict ultimate pH (pH(u)) of pre rigor beef M. longissimus dorsi (LD) was assessed. NIR spectra (538 to 1677 nm) of pre rigor LD from steers, cows and bulls were collected early post mortem and measurements were made for pre rigor glycogen concentration and pH(u). Spectral and measured data were combined to develop models to quantify glycogen and predict the pH(u) of pre rigor LD. NIR spectra and pre rigor predicted values obtained from quantitative models were shown to be poorly correlated against glycogen and pH(u) (r² = 0.23 and 0.20, respectively). Qualitative models developed to categorize each muscle according to their pH(u) were able to correctly categorize 42% of high pH(u) samples. Optimum qualitative and quantitative models derived from NIR spectra found low correlation between predicted values and reference measurements. Copyright © 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.
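
    A minimal sketch of an NIR calibration of the kind evaluated above: partial least squares regression relating spectra to a reference glycogen value, with a cross-validated r² as the figure of merit. The simulated spectra, the assumed concentration range and the number of latent variables are illustrative choices, not the study's.

        # Sketch: PLS calibration of NIR spectra against a reference glycogen value
        # (simulated spectra; the true calibration would use measured data).
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(3)
        n_samples, n_wavelengths = 80, 200
        glycogen = rng.uniform(20.0, 120.0, n_samples)            # mmol/kg, hypothetical range
        spectra = (np.outer(glycogen, rng.normal(size=n_wavelengths)) * 1e-3
                   + rng.normal(scale=0.5, size=(n_samples, n_wavelengths)))  # weak signal + noise

        pls = PLSRegression(n_components=5)
        pred = cross_val_predict(pls, spectra, glycogen, cv=10).ravel()
        r2 = np.corrcoef(pred, glycogen)[0, 1] ** 2
        print("cross-validated r^2:", round(r2, 2))               # strength of the calibration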

  10. Theoretical predictions for spatially-focused heating of magnetic nanoparticles guided by magnetic particle imaging field gradients

    NASA Astrophysics Data System (ADS)

    Dhavalikar, Rohan; Rinaldi, Carlos

    2016-12-01

    Magnetic nanoparticles in alternating magnetic fields (AMFs) transfer some of the field's energy to their surroundings in the form of heat, a property that has attracted significant attention for use in cancer treatment through hyperthermia and in developing magnetic drug carriers that can be actuated to release their cargo externally using magnetic fields. To date, most work in this field has focused on the use of AMFs that actuate heat release by nanoparticles over large regions, without the ability to select specific nanoparticle-loaded regions for heating while leaving other nanoparticle-loaded regions unaffected. In parallel, magnetic particle imaging (MPI) has emerged as a promising approach to image the distribution of magnetic nanoparticle tracers in vivo, with sub-millimeter spatial resolution. The underlying principle in MPI is the application of a selection magnetic field gradient, which defines a small region of low bias field, superimposed with an AMF (of lower frequency and amplitude than those normally used to actuate heating by the nanoparticles) to obtain a signal which is proportional to the concentration of particles in the region of low bias field. Here we extend previous models for estimating the energy dissipation rates of magnetic nanoparticles in uniform AMFs to provide theoretical predictions of how the selection magnetic field gradient used in MPI can be used to selectively actuate heating by magnetic nanoparticles in the low bias field region of the selection magnetic field gradient. Theoretical predictions are given for the spatial decay in energy dissipation rate under magnetic field gradients representative of those that can be achieved with current MPI technology. These results underscore the potential of combining MPI and higher amplitude/frequency actuation AMFs to achieve selective magnetic fluid hyperthermia (MFH) guided by MPI.
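
    The sketch below conveys the localization idea in the simplest possible terms: the selection gradient imposes a static bias field that grows with distance from the field-free point, and the particles' ability to respond to the drive field is suppressed where that bias saturates them. Approximating the suppression with the derivative of the Langevin function is an assumption made here for illustration; it is much cruder than the dynamic models discussed above, and all particle and field parameters are assumed values.

        # Sketch: spatial localization of nanoparticle heating under an MPI-style
        # selection gradient, with saturation-based suppression approximated by the
        # derivative of the Langevin function (illustrative parameters throughout).
        import numpy as np

        mu0 = 4e-7 * np.pi
        d = 20e-9                                   # core diameter, m (assumed)
        Ms = 4.5e5                                  # saturation magnetization, A/m (assumed)
        V = np.pi / 6 * d**3
        T = 300.0
        kB = 1.380649e-23

        grad = 5.0                                  # selection gradient, T/m (assumed)
        x = np.linspace(-0.02, 0.02, 401)           # position, m
        H_bias = grad * np.abs(x) / mu0             # static bias field magnitude, A/m

        xi = mu0 * Ms * V * H_bias / (kB * T)       # Langevin parameter of the bias field
        with np.errstate(divide="ignore", invalid="ignore"):
            dL = np.where(xi > 1e-6, 1.0 / xi**2 - 1.0 / np.sinh(xi)**2, 1.0 / 3.0)

        relative_heating = dL / dL.max()            # ~ effective response to the drive field
        heated = x[relative_heating > 0.5]
        print("FWHM of the heated zone: %.1f mm" % (1e3 * (heated.max() - heated.min())))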

  11. Quantitative and predictive model of kinetic regulation by E. coli TPP riboswitches

    PubMed Central

    Guedich, Sondés; Puffer-Enders, Barbara; Baltzinger, Mireille; Hoffmann, Guillaume; Da Veiga, Cyrielle; Jossinet, Fabrice; Thore, Stéphane; Bec, Guillaume; Ennifar, Eric; Burnouf, Dominique; Dumas, Philippe

    2016-01-01

    ABSTRACT Riboswitches are non-coding elements upstream or downstream of mRNAs that, upon binding of a specific ligand, regulate transcription and/or translation initiation in bacteria, or alternative splicing in plants and fungi. We have studied thiamine pyrophosphate (TPP) riboswitches regulating translation of thiM operon and transcription and translation of thiC operon in E. coli, and that of THIC in the plant A. thaliana. For all, we ascertained an induced-fit mechanism involving initial binding of the TPP followed by a conformational change leading to a higher-affinity complex. The experimental values obtained for all kinetic and thermodynamic parameters of TPP binding imply that the regulation by A. thaliana riboswitch is governed by mass-action law, whereas it is of kinetic nature for the two bacterial riboswitches. Kinetic regulation requires that the RNA polymerase pauses after synthesis of each riboswitch aptamer to leave time for TPP binding, but only when its concentration is sufficient. A quantitative model of regulation highlighted how the pausing time has to be linked to the kinetic rates of initial TPP binding to obtain an ON/OFF switch in the correct concentration range of TPP. We verified the existence of these pauses and the model prediction on their duration. Our analysis also led to quantitative estimates of the respective efficiency of kinetic and thermodynamic regulations, which shows that kinetically regulated riboswitches react more sharply to concentration variation of their ligand than thermodynamically regulated riboswitches. This rationalizes the interest of kinetic regulation and confirms empirical observations that were obtained by numerical simulations. PMID:26932506

  12. Quantitative and predictive model of kinetic regulation by E. coli TPP riboswitches.

    PubMed

    Guedich, Sondés; Puffer-Enders, Barbara; Baltzinger, Mireille; Hoffmann, Guillaume; Da Veiga, Cyrielle; Jossinet, Fabrice; Thore, Stéphane; Bec, Guillaume; Ennifar, Eric; Burnouf, Dominique; Dumas, Philippe

    2016-01-01

    Riboswitches are non-coding elements upstream or downstream of mRNAs that, upon binding of a specific ligand, regulate transcription and/or translation initiation in bacteria, or alternative splicing in plants and fungi. We have studied thiamine pyrophosphate (TPP) riboswitches regulating translation of thiM operon and transcription and translation of thiC operon in E. coli, and that of THIC in the plant A. thaliana. For all, we ascertained an induced-fit mechanism involving initial binding of the TPP followed by a conformational change leading to a higher-affinity complex. The experimental values obtained for all kinetic and thermodynamic parameters of TPP binding imply that the regulation by A. thaliana riboswitch is governed by mass-action law, whereas it is of kinetic nature for the two bacterial riboswitches. Kinetic regulation requires that the RNA polymerase pauses after synthesis of each riboswitch aptamer to leave time for TPP binding, but only when its concentration is sufficient. A quantitative model of regulation highlighted how the pausing time has to be linked to the kinetic rates of initial TPP binding to obtain an ON/OFF switch in the correct concentration range of TPP. We verified the existence of these pauses and the model prediction on their duration. Our analysis also led to quantitative estimates of the respective efficiency of kinetic and thermodynamic regulations, which shows that kinetically regulated riboswitches react more sharply to concentration variation of their ligand than thermodynamically regulated riboswitches. This rationalizes the interest of kinetic regulation and confirms empirical observations that were obtained by numerical simulations.

  13. Molecular and Cellular Quantitative Microscopy: theoretical investigations, technological developments and applications to neurobiology

    NASA Astrophysics Data System (ADS)

    Esposito, Alessandro

    2006-05-01

    This PhD project aims at the development and evaluation of microscopy techniques for the quantitative detection of molecular interactions and cellular features. The primarily investigated techniques are Förster Resonance Energy Transfer imaging and Fluorescence Lifetime Imaging Microscopy. These techniques have the capability to quantitatively probe the biochemical environment of fluorophores. An automated microscope capable of unsupervised operation has been developed that enables the investigation of molecular and cellular properties at high throughput levels and the analysis of cellular heterogeneity. State-of-the-art Förster Resonance Energy Transfer imaging, Fluorescence Lifetime Imaging Microscopy, Confocal Laser Scanning Microscopy and the newly developed tools have been combined with cellular and molecular biology techniques for the investigation of protein-protein interactions, oligomerization and post-translational modifications of α-Synuclein and Tau, two proteins involved in Parkinson's and Alzheimer's disease, respectively. The high inter-disciplinarity of this project required the merging of the expertise of both the Molecular Biophysics Group at the Debye Institute - Utrecht University and the Cell Biophysics Group at the European Neuroscience Institute - Göttingen University. This project was conducted also with the support and the collaboration of the Center for the Molecular Physiology of the Brain (Göttingen), particularly with the groups associated with the Molecular Quantitative Microscopy and Parkinson's Disease and Aggregopathies areas. This work demonstrates that molecular and cellular quantitative microscopy can be used in combination with high-throughput screening as a powerful tool for the investigation of the molecular mechanisms of complex biological phenomena like those occurring in neurodegenerative diseases.

  14. Theoretical geology

    NASA Astrophysics Data System (ADS)

    Mikeš, Daniel

    2010-05-01

    Present day geology is mostly empirical in nature. I claim that geology is by nature complex and that the empirical approach is bound to fail. Let's consider the input to be the set of ambient conditions and the output to be the sedimentary rock record. I claim that the output can only be deduced from the input if the relation from input to output is known. The fundamental question is therefore the following: can one predict the output from the input, or can one predict the behaviour of a sedimentary system? If one can, then the empirical/deductive method has a chance; if one cannot, then that method is bound to fail. The fundamental problem to solve is therefore the following: how to predict the behaviour of a sedimentary system? It is interesting to observe that this question is never asked, and many a study is conducted by the empirical/deductive method; it seems that the empirical method has been accepted as appropriate without question. It is, however, easy to argue that a sedimentary system is by nature complex, that several input parameters vary at the same time, and that they can create similar output in the rock record. It follows trivially from these first principles that in such a case the deductive solution cannot be unique. At the same time, several geological methods depart precisely from the assumption that one particular variable is the dictator/driver and that the others are constant, even though the data do not support such an assumption. The method of "sequence stratigraphy" is a typical example of such a dogma. It can easily be argued that all interpretation resulting from a method built on uncertain or wrong assumptions is erroneous. Still, this method has survived for many years, notwithstanding all the criticism it has received. This is just one example from the present day geological world and is not unique. Even the alternative methods criticising sequence stratigraphy actually depart from the same

  15. Quantitative dual-probe microdialysis: mathematical model and analysis.

    PubMed

    Chen, Kevin C; Höistad, Malin; Kehr, Jan; Fuxe, Kjell; Nicholson, Charles

    2002-04-01

    Steady-state microdialysis is a widely used technique to monitor the concentration changes and distributions of substances in tissues. To obtain more information about brain tissue properties from microdialysis, a dual-probe approach was applied to infuse and sample the radiotracer, [3H]mannitol, simultaneously both in agar gel and in the rat striatum. Because the molecules released by one probe and collected by the other must diffuse through the interstitial space, the concentration profile exhibits dynamic behavior that permits the assessment of the diffusion characteristics in the brain extracellular space and the clearance characteristics. In this paper a mathematical model for dual-probe microdialysis was developed to study brain interstitial diffusion and clearance processes. Theoretical expressions for the spatial distribution of the infused tracer in the brain extracellular space and the temporal concentration at the probe outlet were derived. A fitting program was developed using the simplex algorithm, which finds local minima of the standard deviations between experiments and theory by adjusting the relevant parameters. The theoretical curves accurately fitted the experimental data and generated realistic diffusion parameters, implying that the mathematical model is capable of predicting the interstitial diffusion behavior of [3H]mannitol and that it will be a valuable quantitative tool in dual-probe microdialysis.
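
    A minimal sketch of the parameter-fitting step, using the Nelder-Mead simplex algorithm mentioned above, but with a simplified steady-state point-source model in place of the paper's full transient dual-probe solution; the volume fraction, probe separations and 'true' parameters are invented for illustration.

        # Sketch: fitting an effective diffusion coefficient D and a first-order
        # clearance rate k with the Nelder-Mead (simplex) algorithm, against a
        # simplified steady-state point-source model (illustrative numbers only).
        import numpy as np
        from scipy.optimize import minimize

        alpha = 0.2                                 # extracellular volume fraction (assumed)
        Q = 1e-3                                    # source strength (arbitrary units)

        def model(r, D, k):
            # Steady-state concentration around a continuous point source with clearance k.
            return Q / (4 * np.pi * D * alpha * r) * np.exp(-r * np.sqrt(k / D))

        r_obs = np.array([0.5e-3, 1.0e-3, 1.5e-3])  # probe separations, m
        D_true, k_true = 4e-10, 1e-4                # m^2/s, 1/s (simulated 'truth')
        c_obs = model(r_obs, D_true, k_true) * (1 + 0.02 * np.random.default_rng(4).normal(size=3))

        # Fit in log10 space for robustness; relative residuals as the objective.
        sse = lambda p: np.sum(((model(r_obs, 10**p[0], 10**p[1]) - c_obs) / c_obs) ** 2)
        fit = minimize(sse, x0=[-9.5, -4.5], method="Nelder-Mead")
        D_fit, k_fit = 10**fit.x
        print(D_fit, k_fit)                         # recovered diffusion and clearance parameters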

  16. Investigation of a redox-sensitive predictive model of mouse embryonic stem cells differentiation using quantitative nuclease protection assays and glutathione redox status

    EPA Science Inventory

    Investigation of a redox-sensitive predictive model of mouse embryonic stem cell differentiation via quantitative nuclease protection assays and glutathione redox status. Chandler KJ, Hansen JM, Knudsen T, and Hunter ES. 1. U.S. Environmental Protection Agency, Research Triangl...

  17. Theoretical and experimental prediction of the redox potentials of metallocene compounds

    NASA Astrophysics Data System (ADS)

    Li, Ya-Ping; Liu, Hai-Bo; Liu, Tao; Yu, Zhang-Yu

    2017-11-01

    The standard redox electrode potential (E°) values of metallocene compounds are obtained theoretically with the density functional theory (DFT) method at the B3LYP/6-311++G(d,p) level and experimentally with cyclic voltammetry (CV). The theoretical E° values of the metallocene compounds are in good agreement with the experimental ones. We investigate the substituent effects on the redox properties of metallocene compounds. Among the four metallocene compounds, the E° value is largest for titanocene dichloride and smallest for ferrocene.

  18. Quantitative Earthquake Prediction on Global and Regional Scales

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir G.

    2006-03-01

    The Earth is a hierarchy of volumes of different size. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system in the sense of extrapolating a trajectory into the future is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Implications of understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, help avoid basic errors in earthquake prediction claims. They suggest rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite limited accuracy, considerable damage could be prevented by timely and knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  19. Design and prediction of new acetylcholinesterase inhibitor via quantitative structure activity relationship of huprines derivatives.

    PubMed

    Zhang, Shuqun; Hou, Bo; Yang, Huaiyu; Zuo, Zhili

    2016-05-01

    Acetylcholinesterase (AChE) is an important enzyme in the pathogenesis of Alzheimer's disease (AD). Comparative quantitative structure-activity relationship (QSAR) analyses on some huprine inhibitors against AChE were carried out using comparative molecular field analysis (CoMFA), comparative molecular similarity indices analysis (CoMSIA), and hologram QSAR (HQSAR) methods. Three highly predictive QSAR models were constructed successfully based on the training set. The CoMFA, CoMSIA, and HQSAR models have values of r² = 0.988, q² = 0.757, ONC = 6; r² = 0.966, q² = 0.645, ONC = 5; and r² = 0.957, q² = 0.736, ONC = 6. The predictabilities were validated using an external test set, and the predictive r² values obtained by the three models were 0.984, 0.973, and 0.783, respectively. The analysis was performed by combining the CoMFA and CoMSIA field distributions with the active sites of the AChE to further understand the vital interactions between huprines and the enzyme. On the basis of the QSAR study, 14 new potent molecules have been designed and six of them are predicted to be more active than the best active compound 24 described in the literature. The final QSAR models could be helpful in the design and development of novel active AChE inhibitors.

  20. Quantitative modeling and optimization of magnetic tweezers.

    PubMed

    Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H

    2009-06-17

    Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply > or = 40 pN stretching forces on approximately 1-microm tethered beads.
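
    For a rough feel of the quantities involved, the sketch below estimates the force on a saturated bead below a single permanent magnet approximated as a point dipole, F = m_bead * dB/dz. This is a deliberately crude stand-in for the semianalytical Biot-Savart and finite-element calculations described above, and every number in it is an assumed order of magnitude rather than a measured value.

        # Sketch: order-of-magnitude force on a saturated superparamagnetic bead
        # below a permanent magnet modelled as a point dipole (assumed parameters).
        import numpy as np

        mu0 = 4e-7 * np.pi
        m_magnet = 0.1        # dipole moment of a small NdFeB magnet, A*m^2 (assumed)
        m_bead = 1.4e-14      # bead moment at saturation, A*m^2 (assumed order of magnitude)

        z = np.linspace(3e-3, 10e-3, 300)            # distance from magnet centre, m
        Bz = mu0 * m_magnet / (2 * np.pi * z**3)     # on-axis field of a point dipole
        force_pN = np.abs(m_bead * np.gradient(Bz, z)) * 1e12

        for zi, f in zip(z[::100], force_pN[::100]):
            print(f"z = {zi*1e3:.1f} mm  ->  F ~ {f:.1f} pN")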

  1. Quantitative Modeling and Optimization of Magnetic Tweezers

    PubMed Central

    Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H.

    2009-01-01

    Abstract Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply ≥40 pN stretching forces on ≈1-μm tethered beads. PMID:19527664

  2. A Quantitative Theoretical Framework For Protein-Induced Fluorescence Enhancement-Förster-Type Resonance Energy Transfer (PIFE-FRET).

    PubMed

    Lerner, Eitan; Ploetz, Evelyn; Hohlbein, Johannes; Cordes, Thorben; Weiss, Shimon

    2016-07-07

    Single-molecule, protein-induced fluorescence enhancement (PIFE) serves as a molecular ruler at molecular distances inaccessible to other spectroscopic rulers such as Förster-type resonance energy transfer (FRET) or photoinduced electron transfer. In order to provide two simultaneous measurements of two distances on different molecular length scales for the analysis of macromolecular complexes, we and others recently combined measurements of PIFE and FRET (PIFE-FRET) on the single molecule level. PIFE relies on steric hindrance of the fluorophore Cy3, which is covalently attached to a biomolecule of interest, to rotate out of an excited-state trans isomer to the cis isomer through a 90° intermediate. In this work, we provide a theoretical framework that accounts for relevant photophysical and kinetic parameters of PIFE-FRET, show how this framework allows the extraction of the fold-decrease in isomerization mobility from experimental data, and show how these results provide information on changes in the accessible volume of Cy3. The utility of this model is then demonstrated for experimental results on PIFE-FRET measurement of different protein-DNA interactions. The proposed model and extracted parameters could serve as a benchmark to allow quantitative comparison of PIFE effects in different biological systems.

  3. Local lung deposition of ultrafine particles in healthy adults: experimental results and theoretical predictions.

    PubMed

    Sturm, Robert

    2016-11-01

    Ultrafine particles (UFP) of biogenic and anthropogenic origin occur in high numbers in the ambient atmosphere. In addition, aerosols containing ultrafine powders are used for the inhalation therapy of various diseases. All these facts make it necessary to obtain comprehensive knowledge regarding the exact behavior of UFP in the respiratory tract. Theoretical simulations of local UFP deposition are based on previously conducted inhalation experiments, where particles with various sizes (0.04, 0.06, 0.08, and 0.10 µm) were administered to the respiratory tract by application of the aerosol bolus technique. By the sequential change of the lung penetration depth of the inspired bolus, different volumetric lung regions could be generated and particle deposition in these regions could be evaluated. The model presented in this contribution adopted all parameters used in the experiments. Besides the obligatory comparison between practical and theoretical data, also advanced modeling predictions including the effect of varying functional residual capacity (FRC) and respiratory flow rate were conducted. Validation of the UFP deposition model shows that highest deposition fractions occur in those volumetric lung regions corresponding to the small and partly alveolated airways of the tracheobronchial tree. Particle deposition proximal to the trachea is increased in female probands with respect to male subjects. Decrease of both the FRC and the respiratory flow rate results in an enhancement of UFP deposition. The study comes to the conclusion that deposition of UFP taken up via bolus inhalation is influenced by a multitude of factors, among which lung morphometry and breathing conditions play a superior role.

  4. Local lung deposition of ultrafine particles in healthy adults: experimental results and theoretical predictions

    PubMed Central

    2016-01-01

    Background Ultrafine particles (UFP) of biogenic and anthropogenic origin occur in high numbers in the ambient atmosphere. In addition, aerosols containing ultrafine powders are used for the inhalation therapy of various diseases. All these facts make it necessary to obtain comprehensive knowledge regarding the exact behavior of UFP in the respiratory tract. Methods Theoretical simulations of local UFP deposition are based on previously conducted inhalation experiments, where particles with various sizes (0.04, 0.06, 0.08, and 0.10 µm) were administered to the respiratory tract by application of the aerosol bolus technique. By the sequential change of the lung penetration depth of the inspired bolus, different volumetric lung regions could be generated and particle deposition in these regions could be evaluated. The model presented in this contribution adopted all parameters used in the experiments. Besides the obligatory comparison between practical and theoretical data, also advanced modeling predictions including the effect of varying functional residual capacity (FRC) and respiratory flow rate were conducted. Results Validation of the UFP deposition model shows that highest deposition fractions occur in those volumetric lung regions corresponding to the small and partly alveolated airways of the tracheobronchial tree. Particle deposition proximal to the trachea is increased in female probands with respect to male subjects. Decrease of both the FRC and the respiratory flow rate results in an enhancement of UFP deposition. Conclusions The study comes to the conclusion that deposition of UFP taken up via bolus inhalation is influenced by a multitude of factors, among which lung morphometry and breathing conditions play a superior role. PMID:27942511

  5. Suitability of frequency modulated thermal wave imaging for skin cancer detection-A theoretical prediction.

    PubMed

    Bhowmik, Arka; Repaka, Ramjee; Mulaveesala, Ravibabu; Mishra, Subhash C

    2015-07-01

    A theoretical study on the quantification of the surface thermal response of cancerous human skin using the frequency modulated thermal wave imaging (FMTWI) technique is presented in this article. For the first time, the use of the FMTWI technique for the detection and differentiation of skin cancer is demonstrated. A three-dimensional multilayered skin model has been considered, with counter-current blood vessels in the individual skin layers and with different stages of cancerous lesions, based on geometrical, thermal and physical parameters available in the literature. Transient surface thermal responses of melanoma during FMTWI of skin cancer have been obtained by integrating the heat transfer model for biological tissue with the flow model for the blood vessels. The numerical results show that blood flow in the subsurface region leads to a substantial alteration of the surface thermal response of the human skin. This alteration further reduces the performance of the thermal imaging technique during the thermal evaluation of the earliest melanoma stages (small volume) compared to relatively large volumes. Based on the theoretical study, the method is predicted to be suitable for the detection and differentiation of melanoma of comparatively large volume rather than of the earliest development stages (small volume). The study also performed phase-based analysis of the raw thermograms to resolve the different stages of melanoma volume; the phase images were found to distinguish the different development stages of melanoma more clearly than the raw thermograms. Copyright © 2015 Elsevier Ltd. All rights reserved.
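
    The "heat transfer model for biological tissue" referred to above is conventionally a Pennes-type bioheat equation; the form below is the standard textbook statement, given only as a sketch, with the frequency-modulated surface heating term Q_ext(t) included as an assumed placeholder for the FMTWI excitation rather than a detail taken from the cited study.

        \rho c\,\frac{\partial T}{\partial t} = \nabla\!\cdot\!\left(k\,\nabla T\right)
          + \omega_{b}\,\rho_{b} c_{b}\left(T_{a} - T\right) + Q_{m} + Q_{\mathrm{ext}}(t)
        % \rho, c, k: tissue density, specific heat and thermal conductivity; \omega_b: blood
        % perfusion rate; T_a: arterial blood temperature; Q_m: metabolic heat generation;
        % Q_ext(t): assumed placeholder for the frequency-modulated surface heating in FMTWI.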

  6. Using leg muscles as shock absorbers: theoretical predictions and experimental results of drop landing performance.

    PubMed

    Minetti, A E; Ardigò, L P; Susta, D; Cotelli, F

    1998-12-01

    The use of muscles as power dissipators is investigated in this study, from both the modelling and the experimental points of view. Theoretical predictions of the drop landing manoeuvre for a range of initial conditions have been obtained by accounting for the mechanical characteristics of the knee extensor muscles and the limb geometry, and by assuming maximum neural activation. The resulting dynamics have been represented in the phase plane (vertical displacement versus speed) to better classify the damping performance. Predictions of safe landing in sedentary subjects were associated with dropping from a maximum (feet) height of 1.6-2.0 m (about 11 m on the moon). For athletes the limit extends to 2.6-3.0 m, while for obese males (m = 100 kg, standard stature) it should reduce to 0.9-1.3 m. These results have been calculated by including in the model the estimated stiffness of the 'global elastic elements' acting below the squat position. Experimental landings from heights of 0.4, 0.7 and 1.1 m (sedentary males (SM) and male (AM) and female (AF) athletes from the alpine ski national team) showed dynamics similar to the model predictions. While the peak power (for a drop height of about 0.7 m) was similar in SM and AF (AM showed a +40% increase, about 33 W/kg), AF stopped the downward movement after a time interval from touch-down (0.219 +/- 0.030 s) that was significantly (20%) shorter than in SM. Landing strategy and the effect of anatomical constraints are discussed in the paper.

  7. On the complex relationship between energy expenditure and longevity: Reconciling the contradictory empirical results with a simple theoretical model.

    PubMed

    Hou, Chen; Amunugama, Kaushalya

    2015-07-01

    The relationship between energy expenditure and longevity has been a central theme in aging studies. Empirical studies have yielded controversial results, which cannot be reconciled by existing theories. In this paper, we present a simple theoretical model based on first principles of energy conservation and allometric scaling laws. The model takes into consideration the energy tradeoffs between life history traits and the efficiency of energy utilization, and offers quantitative and qualitative explanations for a set of seemingly contradictory empirical results. We show that oxidative metabolism can affect cellular damage and longevity in different ways in animals with different life histories and under different experimental conditions. Qualitative data and the linearity between energy expenditure, cellular damage, and lifespan assumed in previous studies are not sufficient to understand the complexity of the relationships. Our model provides a theoretical framework for quantitative analyses and predictions. The model is supported by a variety of empirical studies, including studies on the cellular damage profile during ontogeny; the intra- and inter-specific correlations between body mass, metabolic rate, and lifespan; and the effects on lifespan of (1) diet restriction and genetic modification of growth hormone, (2) cold and exercise stresses, and (3) manipulation of antioxidants. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  8. Assessing the performance of quantitative image features on early stage prediction of treatment effectiveness for ovary cancer patients: a preliminary investigation

    NASA Astrophysics Data System (ADS)

    Zargari, Abolfazl; Du, Yue; Thai, Theresa C.; Gunderson, Camille C.; Moore, Kathleen; Mannel, Robert S.; Liu, Hong; Zheng, Bin; Qiu, Yuchen

    2018-02-01

    The objective of this study is to investigate the performance of global and local features to better estimate the characteristics of highly heterogeneous metastatic tumours, for accurately predicting the treatment effectiveness of advanced stage ovarian cancer patients. In order to achieve this, a quantitative image analysis scheme was developed to estimate a total of 103 features from three different groups: shape and density, wavelet, and Gray Level Difference Method (GLDM) features. Shape and density features are global features, which are applied directly to the entire target image; wavelet and GLDM features are local features, which are applied to the divided blocks of the target image. To assess the performance, the new scheme was applied to a retrospective dataset containing 120 recurrent and high grade ovary cancer patients. The results indicate that the three best performing features are skewness, root-mean-square (rms) and the mean of the local GLDM texture, indicating the importance of integrating local features. In addition, the averaged prediction performance is comparable among the three different categories. This investigation concluded that the local features contain at least as much tumour heterogeneity information as the global features, which may be meaningful for improving the prediction performance of quantitative image markers for the diagnosis and prognosis of ovary cancer patients.
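
    As an illustration of the two feature families named above, the sketch below computes a global skewness and a block-wise rms with NumPy/SciPy on a synthetic image; the block size and the synthetic data are assumptions for illustration, not details of the cited analysis scheme.

        import numpy as np
        from scipy.stats import skew

        def global_and_local_features(image, block=32):
            """Toy feature extraction: global skewness plus block-wise rms."""
            # Global feature: skewness of the full intensity distribution.
            skewness = skew(image.ravel())
            # Local feature: rms intensity on non-overlapping blocks of the image.
            h, w = image.shape
            rms_blocks = [
                np.sqrt(np.mean(image[i:i + block, j:j + block] ** 2))
                for i in range(0, h - block + 1, block)
                for j in range(0, w - block + 1, block)
            ]
            return skewness, float(np.mean(rms_blocks))

        # Example on a synthetic 256x256 "tumour" image.
        rng = np.random.default_rng(0)
        img = rng.gamma(shape=2.0, scale=50.0, size=(256, 256))
        print(global_and_local_features(img))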

  9. Predicting epidermal growth factor receptor gene amplification status in glioblastoma multiforme by quantitative enhancement and necrosis features deriving from conventional magnetic resonance imaging.

    PubMed

    Dong, Fei; Zeng, Qiang; Jiang, Biao; Yu, Xinfeng; Wang, Weiwei; Xu, Jingjing; Yu, Jinna; Li, Qian; Zhang, Minming

    2018-05-01

    To study whether some of the quantitative enhancement and necrosis features in preoperative conventional MRI (cMRI) had predictive value for epidermal growth factor receptor (EGFR) gene amplification status in glioblastoma multiforme (GBM). Fifty-five patients with pathologically determined GBMs who underwent cMRI were retrospectively reviewed. The following cMRI features were quantitatively measured and recorded: long and short diameters of the enhanced portion (LDE and SDE), maximum and minimum thickness of the enhanced portion (MaxTE and MinTE), and long and short diameters of the necrotic portion (LDN and SDN). Univariate analysis of each feature and a decision tree model fed with all the features were performed. The area under the receiver operating characteristic (ROC) curve (AUC) was used to assess the performance of the features, and predictive accuracy was used to assess the performance of the model. For a single feature, MinTE showed the best performance in differentiating EGFR gene amplification negative (wild-type) (nEGFR) GBM from EGFR gene amplification positive (pEGFR) GBM, achieving an AUC of 0.68 with a cut-off value of 2.6 mm. The decision tree model included two features, MinTE and SDN, and achieved an accuracy of 0.83 in the validation dataset. Our results suggest that quantitative measurement of the features MinTE and SDN in preoperative cMRI had a high accuracy for predicting EGFR gene amplification status in GBM.

  10. A Quantitative Structure Activity Relationship for acute oral toxicity of pesticides on rats: Validation, domain of application and prediction.

    PubMed

    Hamadache, Mabrouk; Benkortbi, Othmane; Hanini, Salah; Amrane, Abdeltif; Khaouane, Latifa; Si Moussa, Cherif

    2016-02-13

    Quantitative Structure Activity Relationship (QSAR) models are expected to play an important role in the risk assessment of chemicals on humans and the environment. In this study, we developed a validated QSAR model to predict the acute oral toxicity of 329 pesticides to rats, because few QSAR models have been devoted to predicting the Lethal Dose 50 (LD50) of pesticides on rats. This QSAR model is based on 17 molecular descriptors, and is robust, externally predictive and characterized by a good applicability domain. The best results were obtained with a 17/9/1 Artificial Neural Network model trained with the quasi-Newton back propagation (BFGS) algorithm. The prediction accuracy for the external validation set was estimated by the Q(2)ext and the root mean square error (RMS), which are equal to 0.948 and 0.201, respectively. 98.6% of the external validation set is correctly predicted, and the present model proved to be superior to models previously published. Accordingly, the model developed in this study provides excellent predictions and can be used to predict the acute oral toxicity of pesticides, particularly for those that have not been tested, as well as for new pesticides. Copyright © 2015 Elsevier B.V. All rights reserved.
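
    For readers wanting to reproduce this style of workflow, the sketch below trains a small descriptor-based network with scikit-learn and evaluates Q2ext and RMSE on a held-out external set; the 17-9-1 topology and quasi-Newton (here L-BFGS) training mirror the reported setup, but the data, descriptors and split are synthetic placeholders rather than the cited pesticide set.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        X = rng.normal(size=(329, 17))                              # placeholder for 17 molecular descriptors
        y = X @ rng.normal(size=17) + 0.1 * rng.normal(size=329)    # placeholder log(LD50) values

        X_tr, X_ext, y_tr, y_ext = train_test_split(X, y, test_size=0.2, random_state=0)
        scaler = StandardScaler().fit(X_tr)

        # 17 inputs -> 9 hidden nodes -> 1 output, quasi-Newton (L-BFGS) training.
        model = MLPRegressor(hidden_layer_sizes=(9,), solver="lbfgs", max_iter=5000,
                             random_state=0).fit(scaler.transform(X_tr), y_tr)

        y_hat = model.predict(scaler.transform(X_ext))
        rmse = float(np.sqrt(np.mean((y_ext - y_hat) ** 2)))
        q2_ext = 1.0 - np.sum((y_ext - y_hat) ** 2) / np.sum((y_ext - y_tr.mean()) ** 2)
        print(f"Q2_ext = {q2_ext:.3f}, RMSE = {rmse:.3f}")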

  11. Sediment sorting along tidal sand waves: A comparison between field observations and theoretical predictions

    NASA Astrophysics Data System (ADS)

    Van Oyen, Tomas; Blondeaux, Paolo; Van den Eynde, Dries

    2013-07-01

    A site-by-site comparison between field observations and theoretical predictions of sediment sorting patterns along tidal sand waves is performed for ten locations in the North Sea. At each site, the observed grain size distribution along the bottom topography and the geometry of the bed forms are described in detail, and the procedure used to obtain the model parameters is summarized. The model appears to accurately describe the wavelength of the observed sand waves for the majority of the locations, while still providing a reliable estimate for the other sites. In addition, it is found that for seven out of the ten locations, the qualitative sorting process provided by the model agrees with the observed grain size distribution. A discussion of the site-by-site comparison is provided which, taking into account uncertainties in the field data, indicates that the model captures the major part of the key processes controlling the phenomenon.

  12. Prediction of Emergent Heart Failure Death by Semi-Quantitative Triage Risk Stratification

    PubMed Central

    Van Spall, Harriette G. C.; Atzema, Clare; Schull, Michael J.; Newton, Gary E.; Mak, Susanna; Chong, Alice; Tu, Jack V.; Stukel, Thérèse A.; Lee, Douglas S.

    2011-01-01

    Objectives Generic triage risk assessments are widely used in the emergency department (ED), but have not been validated for prediction of short-term risk among patients with acute heart failure (HF). Our objective was to evaluate the Canadian Triage Acuity Scale (CTAS) for prediction of early death among HF patients. Methods We included patients presenting with HF to an ED in Ontario from Apr 2003 to Mar 2007. We used the National Ambulatory Care Reporting System and vital statistics databases to examine care and outcomes. Results Among 68,380 patients (76±12 years, 49.4% men), early mortality was stratified with death rates of 9.9%, 1.9%, 0.9%, and 0.5% at 1-day, and 17.2%, 5.9%, 3.8%, and 2.5% at 7-days, for CTAS 1, 2, 3, and 4–5, respectively. Compared to lower acuity (CTAS 4–5) patients, adjusted odds ratios (aOR) for 1-day death were 1.32 (95%CI; 0.93–1.88; p = 0.12) for CTAS 3, 2.41 (95%CI; 1.71–3.40; p<0.001) for CTAS 2, and highest for CTAS 1: 9.06 (95%CI; 6.28–13.06; p<0.001). Predictors of triage-critical (CTAS 1) status included oxygen saturation <90% (aOR 5.92, 95%CI; 3.09–11.81; p<0.001), respiratory rate >24 breaths/minute (aOR 1.96, 95%CI; 1.05–3.67; p = 0.034), and arrival by paramedic (aOR 3.52, 95%CI; 1.70–8.02; p = 0.001). While age/sex-adjusted CTAS score provided good discrimination for ED (c-statistic = 0.817) and 1-day (c-statistic = 0.724) death, mortality prediction was improved further after accounting for cardiac and non-cardiac co-morbidities (c-statistics 0.882 and 0.810, respectively; both p<0.001). Conclusions A semi-quantitative triage acuity scale assigned at ED presentation and based largely on respiratory factors predicted emergent death among HF patients. PMID:21853068

  13. Theoretical performance of foil journal bearings

    NASA Technical Reports Server (NTRS)

    Carpino, M.; Peng, J.-P.

    1991-01-01

    A modified forward iteration approach for the coupled solution of foil bearings is presented. The method is used to predict the steady state theoretical performance of a journal type gas bearing constructed from an inextensible shell supported by an elastic foundation. Bending effects are treated as negligible. Finite element methods are used to predict both the foil deflections and the pressure distribution in the gas film.

  14. Quantitative measures of meniscus extrusion predict incident radiographic knee osteoarthritis--data from the Osteoarthritis Initiative.

    PubMed

    Emmanuel, K; Quinn, E; Niu, J; Guermazi, A; Roemer, F; Wirth, W; Eckstein, F; Felson, D

    2016-02-01

    To test the hypothesis that quantitative measures of meniscus extrusion predict incident radiographic knee osteoarthritis (KOA), prior to the advent of radiographic disease. 206 knees with incident radiographic KOA (Kellgren Lawrence Grade (KLG) 0 or 1 at baseline, developing KLG 2 or greater with a definite osteophyte and joint space narrowing (JSN) grade ≥1 by year 4) were matched to 232 control knees not developing incident KOA. Manual segmentation of the central five slices of the medial and lateral meniscus was performed on coronal 3T DESS MRI and quantitative meniscus position was determined. Cases and controls were compared using conditional logistic regression adjusting for age, sex, BMI, race and clinical site. Sensitivity analyses of early (year [Y] 1/2) and late (Y3/4) incidence were performed. Mean medial extrusion distance was significantly greater for incident compared to non-incident knees (1.56 mean ± 1.12 mm SD vs 1.29 ± 0.99 mm; +21%, P < 0.01), so was the percent extrusion area of the medial meniscus (25.8 ± 15.8% vs 22.0 ± 13.5%; +17%, P < 0.05). This finding was consistent for knees restricted to medial incidence. No significant differences were observed for the lateral meniscus in incident medial KOA, or for the tibial plateau coverage between incident and non-incident knees. Restricting the analysis to medial incident KOA at Y1/2, differences were attenuated, but reached significance for extrusion distance, whereas no significant differences were observed at incident KOA in Y3/4. Greater medial meniscus extrusion predicts incident radiographic KOA. Early onset KOA showed greater differences for meniscus position between incident and non-incident knees than late onset KOA. Copyright © 2015 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  15. Quantitative approaches to energy and glucose homeostasis: machine learning and modelling for precision understanding and prediction

    PubMed Central

    Murphy, Kevin G.; Jones, Nick S.

    2018-01-01

    Obesity is a major global public health problem. Understanding how energy homeostasis is regulated, and can become dysregulated, is crucial for developing new treatments for obesity. Detailed recording of individual behaviour and new imaging modalities offer the prospect of medically relevant models of energy homeostasis that are both understandable and individually predictive. The profusion of data from these sources has led to an interest in applying machine learning techniques to gain insight from these large, relatively unstructured datasets. We review both physiological models and machine learning results across a diverse range of applications in energy homeostasis, and highlight how modelling and machine learning can work together to improve predictive ability. We collect quantitative details in a comprehensive mathematical supplement. We also discuss the prospects of forecasting homeostatic behaviour and stress the importance of characterizing stochasticity within and between individuals in order to provide practical, tailored forecasts and guidance to combat the spread of obesity. PMID:29367240

  16. Insight into the localized surface plasmon resonance property of core-satellite nanostructures: Theoretical prediction and experimental validation.

    PubMed

    Song, Dongxing; Jing, Dengwei

    2017-11-01

    Regulation of the localized surface plasmon resonance (LSPR) of nanoparticles by changing the dielectric constant of the surrounding medium has been exploited in many practical applications. In this study, using Ag-nanodot-decorated SiO2 nanoparticles (Ag-decorated SiO2 NPs) with different solvents, we investigated the potential of using such core-satellite nanostructures as a liquid sensor for the determination of melamine. The effect of the dielectric constant of the surrounding medium on the LSPR property was given particular attention. It was found that colloids with water as solvent display an LSPR shift of 14 nm, and this value was 18 nm for ethanol. For colloids with methanol and glycol as solvents, the peak shifts are negligible. Finite-difference time-domain (FDTD) simulations were used to assign the LSPR peaks of Ag-decorated SiO2 NPs and to monitor the effect of the substrate and solvent on the LSPR properties. In the calculations, the wavelength positions of the LSPR peaks for Ag-decorated SiO2 NPs in various solvents were successfully predicted in the same order as observed experimentally (negligible shifts for methanol and glycol, larger shifts for water and ethanol), and the simulations could quantitatively explain the LSPR peak shift of Ag-decorated SiO2 NPs in the presence of various concentrations of melamine. Our work is expected to be valuable as theoretical guidance in the design of new materials and devices based on LSPR effects. Copyright © 2017 Elsevier Inc. All rights reserved.
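
    As background to why the solvent dielectric constant shifts the LSPR, the quasistatic (Fröhlich) resonance condition for a small metal particle is the usual starting point; the generic relation below is stated only as a sketch and is not taken from the cited FDTD simulations.

        \mathrm{Re}\,\varepsilon_{\mathrm{Ag}}(\omega_{\mathrm{LSPR}}) = -2\,\varepsilon_{\mathrm{medium}}
        % For a metal with Drude-like dispersion, a larger dielectric constant of the
        % surrounding medium (solvent) shifts the resonance condition, and hence the
        % LSPR peak, to longer wavelengths.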

  17. Quantitative collision induced mass spectrometry of substituted piperazines - A correlative analysis between theory and experiment

    NASA Astrophysics Data System (ADS)

    Ivanova, Bojidarka; Spiteller, Michael

    2017-12-01

    The present paper deals with the quantitative kinetics and thermodynamics of collision induced dissociation (CID) reactions of piperazines under different experimental conditions, together with a systematic description of the effect of counter-ions on common MS fragmentation reactions of piperazines and of the intra-molecular effect of quaternary cyclization of substituted piperazines yielding quaternary salts. Quantitative model equations for the rate constants and the free Gibbs energies of a series of m-independent CID fragmentation processes in the gas phase (GP), which have been evidenced experimentally, are discussed. Both kinetic and thermodynamic parameters are also predicted by computational density functional theory (DFT) and ab initio methods, both static and dynamic. The paper also examines quantitatively the validity of the Maxwell-Boltzmann distribution for non-Boltzmann CID processes. The experiments conducted within the latter framework yield an excellent correspondence with theoretical quantum chemical modeling. An important property of the presented model equations of reaction kinetics is their applicability in predicting unknown and assigning known mass spectrometric (MS) patterns. The nature of the "GP" continuum in the coupled CID-MS measurement scheme with an electrospray ionization (ESI) source is discussed by performing parallel computations in the gas phase and in a polar continuum at different temperatures and ionic strengths. The effect of pressure is presented. The study contributes significantly to the methodological and phenomenological development of CID-MS and its analytical implementation for quantitative and structural analyses. It also demonstrates the great promise of a complementary application of experimental CID-MS and computational quantum chemistry for studying chemical reactivity, among other uses. To a considerable extent, this work underlines the place of computational quantum chemistry in the field of experimental analytical chemistry, in particular highlighting structural analysis.

  18. Another look at retroactive and proactive interference: a quantitative analysis of conversion processes.

    PubMed

    Blank, Hartmut

    2005-02-01

    Traditionally, the causes of interference phenomena were sought in "real" or "hard" memory processes such as unlearning, response competition, or inhibition, which serve to reduce the accessibility of target items. I propose an alternative approach which does not deny the influence of such processes but highlights a second, equally important, source of interference: the conversion (Tulving, 1983) of accessible memory information into memory performance. Conversion is conceived as a problem-solving-like activity in which the rememberer tries to find solutions to a memory task. Conversion-based interference effects are traced to different conversion processes in the experimental and control conditions of interference designs. I present a simple theoretical model that quantitatively predicts the resulting amount of interference. In two paired-associate learning experiments using two different types of memory tests, these predictions were corroborated. Relations of the present approach to traditional accounts of interference phenomena and implications for eyewitness testimony are discussed.

  19. Aircraft Noise Prediction Program theoretical manual: Propeller aerodynamics and noise

    NASA Technical Reports Server (NTRS)

    Zorumski, W. E. (Editor); Weir, D. S. (Editor)

    1986-01-01

    The prediction sequence used in the aircraft noise prediction program (ANOPP) is described. The elements of the sequence are called program modules. The first group of modules analyzes the propeller geometry, the aerodynamics, including both potential and boundary-layer flow, the propeller performance, and the surface loading distribution. This group of modules is based entirely on aerodynamic strip theory. The next group of modules predicts the propeller noise from the output of the first group. Predictions of periodic thickness and loading noise are determined with time-domain methods. Broadband noise is predicted by a semiempirical method. Near-field predictions of fuselage surface pressures include the effects of boundary layer refraction and scattering. Far-field predictions include atmospheric and ground effects.

  20. Python for Information Theoretic Analysis of Neural Data

    PubMed Central

    Ince, Robin A. A.; Petersen, Rasmus S.; Swan, Daniel C.; Panzeri, Stefano

    2008-01-01

    Information theory, the mathematical theory of communication in the presence of noise, is playing an increasingly important role in modern quantitative neuroscience. It makes it possible to treat neural systems as stochastic communication channels and gain valuable, quantitative insights into their sensory coding function. These techniques provide results on how neurons encode stimuli in a way which is independent of any specific assumptions on which part of the neuronal response is signal and which is noise, and they can be usefully applied even to highly non-linear systems where traditional techniques fail. In this article, we describe our work and experiences using Python for information theoretic analysis. We outline some of the algorithmic, statistical and numerical challenges in the computation of information theoretic quantities from neural data. In particular, we consider the problems arising from limited sampling bias and from calculation of maximum entropy distributions in the presence of constraints representing the effects of different orders of interaction in the system. We explain how and why using Python has allowed us to significantly improve the speed and domain of applicability of the information theoretic algorithms, allowing analysis of data sets characterized by larger numbers of variables. We also discuss how our use of Python is facilitating integration with collaborative databases and centralised computational resources. PMID:19242557
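
    As a flavour of the kind of computation involved, the sketch below estimates response entropy and stimulus-response mutual information with naive plug-in estimators on synthetic spike-count data; real analyses (including the limited-sampling bias corrections discussed above) are more involved, and the toolbox's own API is not reproduced here.

        import numpy as np

        def entropy(p):
            """Shannon entropy (bits) of a discrete distribution, ignoring zero bins."""
            p = p[p > 0]
            return float(-np.sum(p * np.log2(p)))

        rng = np.random.default_rng(0)
        n_trials, n_stimuli = 200, 4
        # Synthetic spike counts: each stimulus shifts the mean firing rate.
        stimuli = rng.integers(0, n_stimuli, size=n_trials)
        responses = rng.poisson(lam=2.0 + stimuli, size=n_trials)

        # Plug-in (maximum likelihood) estimates of H(R), H(R|S) and I(S;R).
        n_bins = responses.max() + 1
        p_r = np.bincount(responses, minlength=n_bins) / n_trials
        H_R = entropy(p_r)
        H_R_given_S = 0.0
        for s in range(n_stimuli):
            r_s = responses[stimuli == s]
            p_rs = np.bincount(r_s, minlength=n_bins) / r_s.size
            H_R_given_S += (r_s.size / n_trials) * entropy(p_rs)
        I_SR = H_R - H_R_given_S   # biased upward when trial counts are limited
        print(f"H(R) = {H_R:.3f} bits, I(S;R) = {I_SR:.3f} bits")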

  1. Solubility prediction of naphthalene in carbon dioxide from crystal microstructure

    NASA Astrophysics Data System (ADS)

    Sang, Jiarong; Jin, Junsu; Mi, Jianguo

    2018-03-01

    Crystals dissolved in solvents are ubiquitous in both natural and artificial systems. Due to the complicated structures and asymmetric interactions between the crystal and solvent, it is difficult to interpret the dissolution mechanism and predict solubility using traditional theories and models. Here we use the classical density functional theory (DFT) to describe the crystal dissolution behavior. As an example, naphthalene dissolved in carbon dioxide (CO2) is considered within the DFT framework. The unit cell dimensions and microstructure of crystalline naphthalene are determined by minimizing the free-energy of the crystal. According to the microstructure, the solubilities of naphthalene in CO2 are predicted based on the equality of naphthalene's chemical potential in crystal and solution phases, and the interfacial structures and free-energies between different crystal planes and solution are determined to investigate the dissolution mechanism at the molecular level. The theoretical predictions are in general agreement with the available experimental data, implying that the present model is quantitatively reliable in describing crystal dissolution.

  2. Invasion emerges from cancer cell adaptation to competitive microenvironments: Quantitative predictions from multiscale mathematical models

    PubMed Central

    Rejniak, Katarzyna A.; Gerlee, Philip

    2013-01-01

    Summary In this review we summarize our recent efforts using mathematical modeling and computation to simulate cancer invasion, with a special emphasis on the tumor microenvironment. We consider cancer progression as a complex multiscale process and approach it with three single-cell based mathematical models that examine the interactions between tumor microenvironment and cancer cells at several scales. The models exploit distinct mathematical and computational techniques, yet they share core elements and can be compared and/or related to each other. The overall aim of using mathematical models is to uncover the fundamental mechanisms that lend cancer progression its direction towards invasion and metastasis. The models effectively simulate various modes of cancer cell adaptation to the microenvironment in a growing tumor. All three point to a general mechanism underlying cancer invasion: competition for adaptation between distinct cancer cell phenotypes, driven by a tumor microenvironment with scarce resources. These theoretical predictions pose an intriguing experimental challenge: test the hypothesis that invasion is an emergent property of cancer cell populations adapting to selective microenvironment pressure, rather than culmination of cancer progression producing cells with the “invasive phenotype”. In broader terms, we propose that fundamental insights into cancer can be achieved by experimentation interacting with theoretical frameworks provided by computational and mathematical modeling. PMID:18524624

  3. Comparison of Quantitative and Qualitative Research Traditions: Epistemological, Theoretical, and Methodological Differences

    ERIC Educational Resources Information Center

    Yilmaz, Kaya

    2013-01-01

    There has been much discussion about quantitative and qualitative approaches to research in different disciplines. In the behavioural and social sciences, these two paradigms are compared to reveal their relative strengths and weaknesses. But the debate about both traditions has commonly taken place in academic books. It is hard to find an article…

  4. Evaluation of a quantitative structure-property relationship (QSPR) for predicting mid-visible refractive index of secondary organic aerosol (SOA).

    PubMed

    Redmond, Haley; Thompson, Jonathan E

    2011-04-21

    In this work we describe and evaluate a simple scheme by which the refractive index (λ = 589 nm) of non-absorbing components common to secondary organic aerosols (SOA) may be predicted from molecular formula and density (g cm(-3)). The QSPR approach described is based on three parameters linked to refractive index: molecular polarizability, the ratio of mass density to molecular weight, and the degree of unsaturation. After computing these quantities for a training set of 111 compounds common to atmospheric aerosols, multi-linear regression analysis was conducted to establish a quantitative relationship between the parameters and the accepted value of refractive index. The resulting quantitative relationship can often estimate refractive index to ±0.01 when averaged across a variety of compound classes. A notable exception is alcohols, for which the model consistently underestimates refractive index. Homogeneous internal mixtures can conceivably be addressed through use of either the volume or mole fraction mixing rules commonly used in the aerosol community. Predicted refractive indices reconstructed from chemical composition data presented in the literature generally agree with previous reports of SOA refractive index. Additionally, the predicted refractive indices lie near measured values we report for λ = 532 nm for SOA generated from vapors of α-pinene (R.I. 1.49-1.51) and toluene (R.I. 1.49-1.50). We envision the QSPR method may find use in reconstructing optical scattering of organic aerosols if mass composition data are known. Alternatively, the method described could be incorporated into models of organic aerosol formation/phase partitioning to better constrain organic aerosol optical properties.
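
    A minimal sketch of this type of QSPR fit is shown below, assuming a NumPy least-squares regression on made-up training data; the three regressors mirror the ones named above (a polarizability-related term, density over molecular weight, and degree of unsaturation from the molecular formula), but the compounds, descriptor values and resulting coefficients are illustrative only.

        import numpy as np

        def degree_of_unsaturation(C, H, N=0, O=0):
            # Rings plus double-bond equivalents from the molecular formula (O does not enter).
            return C - H / 2 + N / 2 + 1

        # Illustrative training rows: [polarizability term, density/MW, unsaturation] -> n(589 nm)
        X = np.array([
            [10.3, 1.22 / 104.1, degree_of_unsaturation(C=4, H=8, O=4)],   # hypothetical compound 1
            [12.8, 1.19 / 132.1, degree_of_unsaturation(C=5, H=8, O=5)],   # hypothetical compound 2
            [ 9.1, 1.05 / 116.2, degree_of_unsaturation(C=6, H=12, O=3)],  # hypothetical compound 3
            [14.0, 1.31 / 146.1, degree_of_unsaturation(C=6, H=10, O=5)],  # hypothetical compound 4
        ])
        n_obs = np.array([1.432, 1.455, 1.418, 1.470])                      # made-up refractive indices

        A = np.column_stack([np.ones(len(X)), X])         # intercept + three descriptors
        coef, *_ = np.linalg.lstsq(A, n_obs, rcond=None)  # multi-linear regression
        print("fitted coefficients:", coef)
        print("predicted n:", A @ coef)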

  5. Accurate Theoretical Predictions of the Properties of Energetic Materials

    DTIC Science & Technology

    2008-09-18

    decomposition, Monte Carlo, molecular dynamics, supercritical fluids, solvation and separation, quantum Monte Carlo, potential energy surfaces, RDX, TNAZ...labs, who are contributing to the theoretical efforts, providing data for testing of the models, or aiding in the transition of the methods, models...and results to DoD applications. The major goals of the project are: • Models that describe phase transitions and chemical reactions in

  6. Chemometric Methods and Theoretical Molecular Descriptors in Predictive QSAR Modeling of the Environmental Behavior of Organic Pollutants

    NASA Astrophysics Data System (ADS)

    Gramatica, Paola

    This chapter surveys the QSAR modeling approaches (developed by the author's research group) for the validated prediction of environmental properties of organic pollutants. Various chemometric methods, based on different theoretical molecular descriptors, have been applied: explorative techniques (such as PCA for ranking, SOM for similarity analysis), modeling approaches by multiple-linear regression (MLR, in particular OLS), and classification methods (mainly k-NN, CART, CP-ANN). The focus of this review is on the main topics of environmental chemistry and ecotoxicology, related to the physico-chemical properties, the reactivity, and biological activity of chemicals of high environmental concern. Thus, the review deals with atmospheric degradation reactions of VOCs by tropospheric oxidants, persistence and long-range transport of POPs, sorption behavior of pesticides (Koc and leaching), bioconcentration, toxicity (acute aquatic toxicity, mutagenicity of PAHs, estrogen binding activity for endocrine disruptors compounds (EDCs)), and finally persistent bioaccumulative and toxic (PBT) behavior for the screening and prioritization of organic pollutants. Common to all the proposed models is the attention paid to model validation for predictive ability (not only internal, but also external for chemicals not participating in the model development) and checking of the chemical domain of applicability. Adherence to such a policy, requested also by the OECD principles, ensures the production of reliable predicted data, useful also in the new European regulation of chemicals, REACH.

  7. Quantitative confirmation of diffusion-limited oxidation theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillen, K.T.; Clough, R.L.

    1990-01-01

    Diffusion-limited (heterogeneous) oxidation effects are often important for studies of polymer degradation. Such effects are common in polymers subjected to ionizing radiation at relatively high dose rate. To better understand the underlying oxidation processes and to aid in the planning of accelerated aging studies, it would be desirable to be able to monitor and quantitatively understand these effects. In this paper, we briefly review a theoretical diffusion approach which derives model profiles for oxygen-surrounded sheets of material by combining oxygen permeation rates with kinetically based oxygen consumption expressions. The theory leads to a simple governing expression involving the oxygen consumption and permeation rates together with two model parameters α and β. To test the theory, gamma-initiated oxidation of a sheet of commercially formulated EPDM rubber was performed under conditions which led to diffusion-limited oxidation. Profile shapes from the theoretical treatments are shown to accurately fit experimentally derived oxidation profiles. In addition, direct measurements on the same EPDM material of the oxygen consumption and permeation rates, together with values of α and β derived from the fitting procedure, allow us to quantitatively confirm for the first time the governing theoretical relationship. 17 refs., 3 figs.
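
    The governing relationship itself is not reproduced in this record; as a hedged sketch only, treatments of this kind typically start from a steady-state balance between oxygen diffusion into the sheet and a saturable consumption kinetics, from which dimensionless groups of the kind denoted α and β arise on nondimensionalisation:

        D\,\frac{d^{2}C}{dx^{2}} = \frac{k_{1}\,C}{1 + k_{2}\,C}
        % Generic steady-state balance, not the cited paper's exact expression: D is the
        % oxygen permeability/diffusivity, C the local oxygen concentration across the
        % sheet thickness x, and k_1, k_2 kinetic constants; scaling x by the sheet
        % thickness and C by its surface value yields two dimensionless parameters that
        % play the role of \alpha and \beta.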

  8. Theoretical study in carrier mobility of two-dimensional materials

    NASA Astrophysics Data System (ADS)

    Huang, R.

    2017-09-01

    Recently, the theoretical prediction of the carrier mobility of two-dimensional (2D) materials has attracted wide attention. At present, there is still a large gap between the theoretical predictions and the device performance of semiconductors based on 2D layered semiconductor materials such as graphene. It is therefore particularly important to theoretically design and screen high-performance 2D layered semiconductor materials with a suitable band gap and high carrier mobility. This paper introduces some 2D materials with favourable properties and derives the mobility formula for isotropic materials on the basis of deformation potential theory and Fermi's golden rule under acoustic phonon scattering conditions, and then discusses the carrier mobility of anisotropic materials with Dirac cones. We point out misconceptions in the existing literature and discuss the correct treatment.
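
    For reference, the commonly quoted deformation-potential expression for the acoustic-phonon-limited mobility of a 2D semiconductor has the form below; the exact prefactor varies between derivations, so this should be read as a sketch of the approach rather than the specific result derived in the cited paper.

        \mu_{2D} \;\simeq\; \frac{e\,\hbar^{3}\,C_{2D}}{k_{B}T\,m^{*}\,m_{d}\,E_{1}^{2}}
        % C_{2D}: in-plane elastic modulus; m^{*}: effective mass along the transport
        % direction; m_{d} = \sqrt{m_{x} m_{y}}: density-of-states mass; E_{1}:
        % deformation potential constant of the band edge under uniaxial strain.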

  9. Quantitative structure activity relationship (QSAR) of piperine analogs for bacterial NorA efflux pump inhibitors.

    PubMed

    Nargotra, Amit; Sharma, Sujata; Koul, Jawahir Lal; Sangwan, Pyare Lal; Khan, Inshad Ali; Kumar, Ashwani; Taneja, Subhash Chander; Koul, Surrinder

    2009-10-01

    Quantitative structure activity relationship (QSAR) analysis of piperine analogs as inhibitors of efflux pump NorA from Staphylococcus aureus has been performed in order to obtain a highly accurate model enabling prediction of inhibition of S. aureus NorA of new chemical entities from natural sources as well as synthetic ones. Algorithm based on genetic function approximation method of variable selection in Cerius2 was used to generate the model. Among several types of descriptors viz., topological, spatial, thermodynamic, information content and E-state indices that were considered in generating the QSAR model, three descriptors such as partial negative surface area of the compounds, area of the molecular shadow in the XZ plane and heat of formation of the molecules resulted in a statistically significant model with r(2)=0.962 and cross-validation parameter q(2)=0.917. The validation of the QSAR models was done by cross-validation, leave-25%-out and external test set prediction. The theoretical approach indicates that the increase in the exposed partial negative surface area increases the inhibitory activity of the compound against NorA whereas the area of the molecular shadow in the XZ plane is inversely proportional to the inhibitory activity. This model also explains the relationship of the heat of formation of the compound with the inhibitory activity. The model is not only able to predict the activity of new compounds but also explains the important regions in the molecules in quantitative manner.

  10. Theoretical prediction and impact of fundamental electric dipole moments

    DOE PAGES

    Ellis, Sebastian A. R.; Kane, Gordon L.

    2016-01-13

    The predicted Standard Model (SM) electric dipole moments (EDMs) of electrons and quarks are tiny, providing an important window to observe new physics. Theories beyond the SM typically allow relatively large EDMs. The EDMs depend on the relative phases of terms in the effective Lagrangian of the extended theory, which are generally unknown. Underlying theories, such as string/M-theories compactified to four dimensions, could predict the phases and thus EDMs in the resulting supersymmetric (SUSY) theory. Earlier one of us, with collaborators, made such a prediction and found, unexpectedly, that the phases were predicted to be zero at tree level in the theory at the unification or string scale ~O(10^16 GeV). Electroweak (EW) scale EDMs still arise via running from the high scale, and depend only on the SM Yukawa couplings that also give the CKM phase. Here we extend the earlier work by studying the dependence of the low scale EDMs on the constrained but not fully known fundamental Yukawa couplings. The dominant contribution is from two loop diagrams and is not sensitive to the choice of Yukawa texture. The electron EDM should not be found to be larger than about 5 × 10^-30 e cm, and the neutron EDM should not be larger than about 5 × 10^-29 e cm. These values are quite a bit smaller than the reported predictions from Split SUSY and typical effective theories, but much larger than the Standard Model prediction. Also, since models with random phases typically give much larger EDMs, it is a significant testable prediction of compactified M-theory that the EDMs should not be above these upper limits. The actual EDMs can be below the limits, so once they are measured they could provide new insight into the fundamental Yukawa couplings of leptons and quarks. As a result, we comment also on the role of strong CP violation. EDMs probe fundamental physics near the Planck scale.

  12. Target and Tissue Selectivity Prediction by Integrated Mechanistic Pharmacokinetic-Target Binding and Quantitative Structure Activity Modeling.

    PubMed

    Vlot, Anna H C; de Witte, Wilhelmus E A; Danhof, Meindert; van der Graaf, Piet H; van Westen, Gerard J P; de Lange, Elizabeth C M

    2017-12-04

    Selectivity is an important attribute of effective and safe drugs, and prediction of in vivo target and tissue selectivity would likely improve drug development success rates. However, a lack of understanding of the underlying (pharmacological) mechanisms and availability of directly applicable predictive methods complicates the prediction of selectivity. We explore the value of combining physiologically based pharmacokinetic (PBPK) modeling with quantitative structure-activity relationship (QSAR) modeling to predict the influence of the target dissociation constant (KD) and the target dissociation rate constant on target and tissue selectivity. The KD values of CB1 ligands in the ChEMBL database are predicted by QSAR random forest (RF) modeling for the CB1 receptor and known off-targets (TRPV1, mGlu5, 5-HT1a). Of these CB1 ligands, rimonabant, CP-55940, and Δ8-tetrahydrocannabinol, one of the active ingredients of cannabis, were selected for simulations of target occupancy for CB1, TRPV1, mGlu5, and 5-HT1a in three brain regions, to illustrate the principles of the combined PBPK-QSAR modeling. Our combined PBPK and target binding modeling demonstrated that the optimal values of the KD and koff for target and tissue selectivity were dependent on target concentration and tissue distribution kinetics. Interestingly, if the target concentration is high and the perfusion of the target site is low, the optimal KD value is often not the lowest KD value, suggesting that optimization towards high drug-target affinity can decrease the benefit-risk ratio. The presented integrative structure-pharmacokinetic-pharmacodynamic modeling provides an improved understanding of tissue and target selectivity.
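
    As a reminder of how the quantities varied here enter the simulations, the target-binding component of such models generally reduces to simple bimolecular binding kinetics; the generic relations below are a sketch and do not reproduce the specific parameterisation of the cited study.

        \frac{d[RL]}{dt} = k_{\mathrm{on}}\,C(t)\,\bigl([R]_{\mathrm{tot}} - [RL]\bigr) - k_{\mathrm{off}}\,[RL],
        \qquad
        K_{D} = \frac{k_{\mathrm{off}}}{k_{\mathrm{on}}},
        \qquad
        \text{equilibrium occupancy} = \frac{C}{C + K_{D}}
        % C(t): unbound drug concentration at the target site supplied by the PBPK model.
        % A slow k_off can sustain occupancy after C(t) has declined, which is one way
        % target concentration and tissue distribution kinetics shift the "optimal" KD.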

  13. In silico prediction of nematic transition temperature for liquid crystals using quantitative structure-property relationship approaches.

    PubMed

    Fatemi, Mohammad Hossein; Ghorbanzad'e, Mehdi

    2009-11-01

    Quantitative structure-property relationship models for the prediction of the nematic transition temperature (T(N)) were developed by using multilinear regression analysis and a feedforward artificial neural network (ANN). A collection of 42 thermotropic liquid crystals was chosen as the data set. The data set was divided into three sets: a training set, an internal test set and an external test set. The training and internal test sets were used for ANN model development, and the external test set was used for evaluation of the predictive power of the model. In order to build the models, a set of six descriptors was selected by the best multilinear regression procedure of the CODESSA program. These descriptors were: atomic charge weighted partial negatively charged surface area, relative negative charged surface area, polarity parameter/square distance, minimum most negative atomic partial charge, molecular volume, and the A component of the moment of inertia, which encode geometrical and electronic characteristics of the molecules. These descriptors were used as inputs to the ANN. The optimized ANN model had a 6:6:1 topology. The standard errors in the calculation of T(N) for the training, internal, and external test sets using the ANN model were 1.012, 4.910, and 4.070, respectively. To further evaluate the ANN model, a cross-validation test was performed, which produced the statistic Q(2) = 0.9796 and a standard deviation of 2.67 based on the predicted residual sum of squares. Also, a diversity test was performed to ensure the model's stability and prove its predictive capability. The obtained results reveal the suitability of ANN for the prediction of T(N) for liquid crystals using molecular structural descriptors.

  14. Quantitative Analysis of Defects in Silicon. [to predict energy conversion efficiency of silicon samples for solar cells

    NASA Technical Reports Server (NTRS)

    Natesh, R.; Smith, J. M.; Qidwai, H. A.; Bruce, T.

    1979-01-01

    The evaluation and prediction of the conversion efficiency for a variety of silicon samples with differences in structural defects, such as grain boundaries, twin boundaries, precipitate particles, dislocations, etc. are discussed. Quantitative characterization of these structural defects, which were revealed by etching the surface of silicon samples, is performed by using an image analyzer. Due to different crystal growth and fabrication techniques the various types of silicon contain a variety of trace impurity elements and structural defects. The two most important criteria in evaluating the various silicon types for solar cell applications are cost and conversion efficiency.

  15. Theoretical prediction of the mechanistic pathways and kinetics of methylcyclohexane initiated by OH radicals

    NASA Astrophysics Data System (ADS)

    Begum, Saheen Shehnaz; Deka, Ramesh Chandra; Gour, Nand Kishor

    2018-06-01

    In this manuscript, we systematically describe the theoretical prediction of H-abstraction from methylcyclohexane initiated by the OH radical. For this we have performed dual-level quantum chemical calculations on the gas-phase reactions between methylcyclohexane (MCH) and the OH radical. Geometry optimisation and vibrational frequency calculations have been performed at the BHandHLYP/6-311G(d,p) level of theory, along with energetic calculations with the coupled cluster CCSD(T) method using the same basis set. All the stationary points of the title reaction have been located on the potential energy surface. It is also found that H-abstraction from the -CH site of MCH is the minimum-energy pathway. The rate constant for the reaction of MCH with the OH radical, calculated using canonical transition state theory, is found to be 3.27 × 10^-12 cm^3 molecule^-1 s^-1, which is in sound agreement with the reported experimental data. The atmospheric lifetime of MCH and the branching ratios of the reaction channels are also reported in the manuscript.
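
    The two quantities quoted above follow from standard expressions; as a sketch (generic forms, not the specific implementation of the cited work), canonical transition state theory gives the rate constant, and the atmospheric lifetime follows from an assumed ambient OH concentration:

        k(T) = \kappa(T)\,\frac{k_{B}T}{h}\,\frac{Q^{\ddagger}}{Q_{\mathrm{MCH}}\,Q_{\mathrm{OH}}}\,
               \exp\!\left(-\frac{\Delta E^{\ddagger}}{k_{B}T}\right),
        \qquad
        \tau_{\mathrm{MCH}} \approx \frac{1}{k\,[\mathrm{OH}]}
        % \kappa(T): tunnelling correction; Q: partition functions per unit volume;
        % with k ≈ 3.27 × 10^{-12} cm^3 molecule^{-1} s^{-1} and an assumed typical
        % [OH] ≈ 1 × 10^{6} molecule cm^{-3}, \tau comes out on the order of a few days.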

  16. Theoretical prediction of the electronic transport properties of the Al-Cu alloys based on the first-principle calculation and Boltzmann transport equation

    NASA Astrophysics Data System (ADS)

    Choi, Garam; Lee, Won Bo

    Metal alloys, especially Al-based alloys, are commonly used materials in various industrial applications. In this paper, Al-Cu alloys with varying Al-Cu ratios were investigated based on first-principles calculations using density functional theory, and the electronic transport properties of the Al-Cu alloys were computed using Boltzmann transport theory. The results show that the transport properties decrease, non-linearly, with increasing Cu content at moderate to high temperatures. This is attributed to various scattering effects, inferred from the calculation results within the relaxation time approximation. For the Al-Cu alloy system, for which reliable experimental data on the various alloys are hard to find, the theoretical predictions support the understanding and estimation of the thermal and electrical properties.
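
    As background, the electrical conductivity in this kind of calculation is usually obtained from the linearised Boltzmann transport equation in the (often constant) relaxation time approximation; the generic expression below sketches that approach and is not a formula quoted from the cited paper.

        \sigma_{\alpha\beta}(T,\mu) \;=\; e^{2}\sum_{n}\int \frac{d^{3}k}{4\pi^{3}}\,
            \tau_{n\mathbf{k}}\, v^{\alpha}_{n\mathbf{k}}\, v^{\beta}_{n\mathbf{k}}
            \left(-\frac{\partial f_{0}(\varepsilon_{n\mathbf{k}};T,\mu)}{\partial\varepsilon}\right)
        % v_{nk}: band velocities from the DFT band structure; \tau_{nk}: relaxation time
        % (commonly taken constant); f_0: Fermi-Dirac distribution; the 1/4\pi^3 measure
        % includes spin degeneracy.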

  17. Quantitative PET Imaging with Novel HER3 Targeted Peptides Selected by Phage Display to Predict Androgen Independent Prostate Cancer Progression

    DTIC Science & Technology

    2017-08-01

    The subject of this research is the design and testing of a PET imaging agent for the detection and... AWARD NUMBER: W81XWH-16-1-0447. REPORT DATE: August 2017. TYPE OF REPORT: Annual. PREPARED FOR: U.S. Army Medical Research and Materiel Command, Fort Detrick, Maryland.

  18. Enthalpy/entropy contributions to conformational KIEs: theoretical predictions and comparison with experiment.

    PubMed

    Fong, Aaron; Meyer, Matthew P; O'Leary, Daniel J

    2013-02-18

    Previous theoretical studies of Mislow's doubly-bridged biphenyl ketone 1 and dihydrodimethylphenanthrene 2 have determined significant entropic contributions to their normal (1) and inverse (2) conformational kinetic isotope effects (CKIEs). To broaden our investigation, we have used density functional methods to characterize the potential energy surfaces and vibrational frequencies for ground and transition structures of additional systems with measured CKIEs, including [2.2]-metaparacyclophane-d (3), 1,1'-binaphthyl (4), 2,2'-dibromo-[1,1'-biphenyl]-4,4'-dicarboxylic acid (5), and the 2-(N,N,N-trimethyl)-2'-(N,N-dimethyl)-diaminobiphenyl cation (6). We have also computed CKIEs in a number of systems whose experimental CKIEs are unknown. These include analogs of 1 in which the C=O groups have been replaced with CH₂ (7), O (8), and S (9) atoms and ring-expanded variants of 2 containing CH₂ (10), O (11), S (12), or C=O (13) groups. Vibrational entropy contributes to the CKIEs in all of these systems with the exception of cyclophane 3, whose isotope effect is predicted to be purely enthalpic in origin and whose Bigeleisen-Mayer ZPE term is equivalent to ΔΔH‡. There is variable correspondence between these terms in the other molecules studied, thus identifying additional examples of systems in which the Bigeleisen-Mayer formalism does not correlate with ΔH/ΔS dissections.
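
    For orientation, the enthalpy/entropy dissection referred to here follows from transition-state theory; the relations below are the standard generic forms, given only as a sketch of how the terms are connected.

        \mathrm{KIE} = \frac{k_{\mathrm{light}}}{k_{\mathrm{heavy}}}
          = \exp\!\left(-\frac{\Delta\Delta G^{\ddagger}}{RT}\right),
        \qquad
        \Delta\Delta G^{\ddagger} = \Delta\Delta H^{\ddagger} - T\,\Delta\Delta S^{\ddagger}
        % The Bigeleisen-Mayer zero-point-energy term corresponds to an
        % \exp(-\Delta\Delta\mathrm{ZPE}/RT) factor; when vibrational entropy contributes,
        % that term alone need not reproduce the full enthalpy/entropy dissection, which is
        % the discrepancy discussed above.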

  19. Quantitative thickness prediction of tectonically deformed coal using Extreme Learning Machine and Principal Component Analysis: a case study

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Li, Yan; Chen, Tongjun; Yan, Qiuyan; Ma, Li

    2017-04-01

    The thickness of tectonically deformed coal (TDC) is positively correlated with gas outbursts. In order to predict the TDC thickness of coal beds, we propose a new quantitative prediction method using an extreme learning machine (ELM) algorithm, a principal component analysis (PCA) algorithm, and seismic attributes. First, we build an ELM prediction model using the PCA attributes of a synthetic seismic section. The results suggest that the ELM model can produce a reliable and accurate prediction of the TDC thickness for synthetic data, with a sigmoid activation function and 20 hidden nodes preferred. Then, we analyze the applicability of the ELM model to TDC thickness prediction with real application data. Through cross validation of near-well traces, the results suggest that the ELM model can produce a reliable and accurate prediction of the TDC. After that, we use 250 near-well traces from 10 wells to build an ELM prediction model and use the model to forecast the TDC thickness of the No. 15 coal in the study area, using the PCA attributes as the inputs. Comparing the predicted results, the trained ELM model with two selected PCA attributes yields better prediction results than the other combinations of attributes. Finally, the trained ELM model with real seismic data has a different number of hidden nodes (10) than the trained ELM model with synthetic seismic data. In summary, it is feasible to use an ELM model to predict the TDC thickness using the calculated PCA attributes as the inputs. However, the input attributes, the activation function and the number of hidden nodes in the ELM model should be selected and tested carefully for each individual application.
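
    An extreme learning machine is simple enough to sketch directly: hidden-layer weights are drawn at random and only the output weights are solved by least squares. The toy example below (NumPy only, with synthetic data standing in for the PCA seismic attributes and well-derived thicknesses) illustrates the idea with a sigmoid activation and 20 hidden nodes, matching the settings reported above.

        import numpy as np

        def elm_fit(X, y, n_hidden=20, seed=0):
            """Train an extreme learning machine: random hidden layer, least-squares output."""
            rng = np.random.default_rng(seed)
            W = rng.normal(size=(X.shape[1], n_hidden))     # random input-to-hidden weights
            b = rng.normal(size=n_hidden)                   # random hidden biases
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))          # sigmoid hidden-layer outputs
            beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # output weights by least squares
            return W, b, beta

        def elm_predict(X, W, b, beta):
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
            return H @ beta

        # Synthetic stand-in: 250 "near-well traces" with 2 PCA attributes -> TDC thickness.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(250, 2))
        thickness = 3.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] ** 2 + 0.2 * rng.normal(size=250)

        W, b, beta = elm_fit(X, thickness, n_hidden=20)
        pred = elm_predict(X, W, b, beta)
        print("training RMSE:", float(np.sqrt(np.mean((pred - thickness) ** 2))))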

  20. A theoretical perspective on road safety communication campaigns.

    PubMed

    Elvik, Rune

    2016-12-01

    This paper proposes a theoretical perspective on road safety communication campaigns, which may help in identifying the conditions under which such campaigns can be effective. The paper proposes that, from a theoretical point of view, it is reasonable to assume that road user behaviour is, by and large, subjectively rational. This means that road users are assumed to behave the way they think is best. If this assumption is accepted, the best theoretical prediction is that road safety campaigns consisting of persuasive messages only will have no effect on road user behaviour and accordingly no effect on accidents. This theoretical prediction is not supported by meta-analyses of studies that have evaluated the effects of road safety communication campaigns. These analyses conclude that, on the average, such campaigns are associated with an accident reduction. The paper discusses whether this finding can be explained theoretically. The discussion relies on the distinction made by many modern theorists between bounded and perfect rationality. Road user behaviour is characterised by bounded rationality. Hence, if road users can gain insight into the bounds of their rationality, so that they see advantages to themselves of changing behaviour, they are likely to do so. It is, however, largely unknown whether such a mechanism explains why some road safety communication campaigns have been found to be more effective than others. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Incorporation of membrane potential into theoretical analysis of electrogenic ion pumps.

    PubMed Central

    Reynolds, J A; Johnson, E A; Tanford, C

    1985-01-01

    The transport rate of an electrogenic ion pump, and therefore also the current generated by the pump, depends on the potential difference (delta psi) between the two sides of the membrane. This dependence arises from at least three sources: (i) charges carried across the membrane by the transported ions; (ii) protein charges in the ion binding sites that alternate between exposure to (and therefore electrical contact with) the two sides of the membrane; (iii) protein charges or dipoles that move within the domain of the membrane as a result of conformational changes linked to the transport cycle. Quantitative prediction of these separate effects requires presently unavailable molecular information, so that there is great freedom in assigning voltage dependence to individual steps of a transport cycle when one attempts to make theoretical calculations of physiological behavior for an ion pump for which biochemical data (mechanism, rate constants, etc.) are already established. The need to make kinetic behavior consistent with thermodynamic laws, however, limits this freedom, and in most cases two points on a curve of rate versus delta psi will be fixed points independent of how voltage dependence is assigned. Theoretical discussion of these principles is illustrated by reference to ATP-driven Na,K pumps. Physiological data for this system suggest that all three of the possible mechanisms for generating voltage dependence do in fact make significant contributions. PMID:2413447
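
    A common way to make the voltage dependence described in (i)-(iii) concrete is to give each charge-translocating step an Eyring-type exponential factor; the generic forward/backward pair below is a sketch in that spirit (one standard convention among several), not the specific parameterisation used for the Na,K pump.

        k_{f}(\Delta\psi) = k_{f}^{0}\exp\!\left(\frac{z\,\delta\,F\,\Delta\psi}{RT}\right),
        \qquad
        k_{b}(\Delta\psi) = k_{b}^{0}\exp\!\left(-\frac{z\,(1-\delta)\,F\,\Delta\psi}{RT}\right)
        % z: charge effectively moved in the step; \delta: fraction of the membrane field
        % crossed, apportioned between forward and backward rates (convention-dependent).
        % Thermodynamic consistency requires that the voltage factors assigned to the
        % individual steps combine, over a complete cycle, to reflect only the total charge
        % translocated per cycle; this is what fixes the assignment-independent points
        % on the rate-versus-\Delta\psi curve noted above.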

  2. Theoretical and experimental α decay half-lives of the heaviest odd-Z elements and general predictions

    NASA Astrophysics Data System (ADS)

    Zhang, H. F.; Royer, G.

    2007-10-01

    Theoretical α decay half-lives of the heaviest odd-Z nuclei are calculated using the experimental Qα value. The barriers in the quasimolecular shape path are determined within a Generalized Liquid Drop Model (GLDM) and the WKB approximation is used. The results are compared with calculations using the Density-Dependent M3Y (DDM3Y) effective interaction and the Viola-Seaborg-Sobiczewski (VSS) formulas. The calculations provide consistent estimates for the half-lives of the α decay chains of these superheavy elements. The experimental data stand between the GLDM calculations and the VSS ones most of the time. Predictions are provided for the α decay half-lives of other superheavy nuclei within the GLDM and VSS approaches using the recent extrapolated Qα of Audi, Wapstra, and Thibault [Nucl. Phys. A729, 337 (2003)], which may be used for future experimental assignment and identification.
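
    For orientation, a sketch of a Viola-Seaborg-type half-life estimate of the kind referred to above; the coefficients and odd-nucleon hindrance terms are the values commonly attributed to the Sobiczewski parameterization and should be checked against the original VSS reference before any quantitative use:

      import math

      # Viola-Seaborg-Sobiczewski-style estimate of log10(T1/2 / s) from Q_alpha (MeV);
      # the constants below are quoted from memory and must be verified against the source
      A_COEF, B_COEF = 1.66175, -8.5166
      C_COEF, D_COEF = -0.20228, -33.9069
      HINDRANCE = {(0, 0): 0.0, (1, 0): 0.772, (0, 1): 1.066, (1, 1): 2.493}  # (Z odd?, N odd?)

      def log10_half_life(Z, N, q_alpha_mev):
          h = HINDRANCE[(Z % 2, N % 2)]
          return (A_COEF * Z + B_COEF) / math.sqrt(q_alpha_mev) + C_COEF * Z + D_COEF + h

      # example: a hypothetical odd-Z superheavy parent with Q_alpha = 10.5 MeV
      print(10 ** log10_half_life(Z=115, N=173, q_alpha_mev=10.5), "s")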

  3. Quantitative measures of meniscus extrusion predict incident radiographic knee osteoarthritis – data from the Osteoarthritis Initiative

    PubMed Central

    Emmanuel, K.; Quinn, E.; Niu, J.; Guermazi, A.; Roemer, F.; Wirth, W.; Eckstein, F.; Felson, D.

    2017-01-01

    Objective To test the hypothesis that quantitative measures of meniscus extrusion predict incident radiographic knee osteoarthritis (KOA), prior to the advent of radiographic disease. Methods 206 knees with incident radiographic KOA (Kellgren Lawrence Grade (KLG) 0 or 1 at baseline, developing KLG 2 or greater with a definite osteophyte and joint space narrowing (JSN) grade ≥1 by year 4) were matched to 232 control knees not developing incident KOA. Manual segmentation of the central five slices of the medial and lateral meniscus was performed on coronal 3T DESS MRI and quantitative meniscus position was determined. Cases and controls were compared using conditional logistic regression adjusting for age, sex, BMI, race and clinical site. Sensitivity analyses of early (year [Y] 1/2) and late (Y3/4) incidence were performed. Results Mean medial extrusion distance was significantly greater for incident than for non-incident knees (mean ± SD: 1.56 ± 1.12 mm vs 1.29 ± 0.99 mm; +21%, P < 0.01), as was the percent extrusion area of the medial meniscus (25.8 ± 15.8% vs 22.0 ± 13.5%; +17%, P < 0.05). This finding was consistent for knees restricted to medial incidence. No significant differences were observed for the lateral meniscus in incident medial KOA, or for tibial plateau coverage between incident and non-incident knees. When the analysis was restricted to medial incident KOA at Y1/2, differences were attenuated but remained significant for extrusion distance, whereas no significant differences were observed for incident KOA in Y3/4. Conclusion Greater medial meniscus extrusion predicts incident radiographic KOA. Early onset KOA showed greater differences in meniscus position between incident and non-incident knees than late onset KOA. PMID:26318658

  4. Applications of Microfluidics in Quantitative Biology.

    PubMed

    Bai, Yang; Gao, Meng; Wen, Lingling; He, Caiyun; Chen, Yuan; Liu, Chenli; Fu, Xiongfei; Huang, Shuqiang

    2018-05-01

    Quantitative biology is dedicated to taking advantage of quantitative reasoning and advanced engineering technologies to make biology more predictable. Microfluidics, as an emerging technique, provides new approaches to precisely control fluidic conditions on small scales and collect data in high-throughput and quantitative manners. In this review, we present the relevant applications of microfluidics to quantitative biology based on two major categories (channel-based microfluidics and droplet-based microfluidics) and their typical features. We also envision some other microfluidic techniques that may not be employed in quantitative biology right now but have great potential in the near future. © 2017 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences. Biotechnology Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  5. Theoretical Characterization of Visual Signatures

    NASA Astrophysics Data System (ADS)

    Kashinski, D. O.; Chase, G. M.; di Nallo, O. E.; Scales, A. N.; Vanderley, D. L.; Byrd, E. F. C.

    2015-05-01

    We are investigating the accuracy of theoretical models used to predict the visible, ultraviolet, and infrared spectra, as well as other properties, of product materials ejected from the muzzle of currently fielded systems. Recent advances in solid propellants have made the management of muzzle signature (flash) a principal issue in weapons development across the calibers. A priori prediction of the electromagnetic spectra of formulations will allow researchers to tailor blends that yield desired signatures and determine spectrographic detection ranges. Quantum chemistry methods at various levels of sophistication have been employed to optimize molecular geometries, compute unscaled vibrational frequencies, and determine the optical spectra of specific gas-phase species. Electronic excitations are being computed using Time Dependent Density Functional Theory (TD-DFT). A full statistical analysis and reliability assessment of computational results is currently underway. A comparison of theoretical results to experimental values found in the literature is used to assess any effects of functional choice and basis set on calculation accuracy. The status of this work will be presented at the conference. Work supported by the ARL, DoD HPCMP, and USMA.
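
    As a pointer to how such excitations are typically computed, a minimal TD-DFT sketch using the open-source PySCF package (the molecule, functional and basis set are illustrative placeholders, not the combustion products or the quantum chemistry codes actually used in this work):

      from pyscf import gto, dft, tddft

      # toy gas-phase species; a real study would use the muzzle-flash product of interest
      mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587", basis="6-31g*")

      mf = dft.RKS(mol)          # ground-state Kohn-Sham DFT
      mf.xc = "b3lyp"            # functional choice affects the computed excitations
      mf.kernel()

      td = tddft.TDDFT(mf)       # time-dependent DFT for electronic excitations
      td.nstates = 5             # number of excited states to solve for
      td.kernel()
      print(td.e)                # excitation energies (Hartree)
      td.analyze()               # summary including oscillator strengths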

  6. Molecular descriptor subset selection in theoretical peptide quantitative structure-retention relationship model development using nature-inspired optimization algorithms.

    PubMed

    Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz

    2015-10-06

    In this work, the performance of five nature-inspired optimization algorithms, genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA), was compared in molecular descriptor selection for the development of quantitative structure-retention relationship (QSRR) models for 83 peptides that originate from eight model proteins. The matrix with 423 descriptors was used as input, and QSRR models based on selected descriptors were built using partial least squares (PLS), with root mean square error of prediction (RMSEP) used as the fitness function for descriptor selection. Three performance criteria, prediction accuracy, computational cost, and the number of selected descriptors, were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, with GA superior because of its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external test set of 102 peptides originating from Bacillus subtilis proteomes (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR exhibited strong robustness. All the sources of the model's error were identified, thus allowing for further application of the developed methodology in proteomics.
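
    A toy sketch of the descriptor-selection loop described above, with a simple mutate-and-select search standing in for a full genetic algorithm and a PLS model scored by RMSEP (data shapes and settings are placeholders, not the published peptide set):

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(83, 423))        # 83 peptides x 423 descriptors (placeholder data)
      y = rng.normal(size=83)               # retention times (placeholder)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

      def rmsep(mask):
          """Fitness: root mean square error of prediction for a descriptor subset."""
          cols = np.flatnonzero(mask)
          if cols.size < 2:
              return np.inf
          pls = PLSRegression(n_components=2).fit(X_tr[:, cols], y_tr)
          pred = pls.predict(X_te[:, cols]).ravel()
          return float(np.sqrt(np.mean((pred - y_te) ** 2)))

      # start from random subsets of roughly nine descriptors, then mutate copies of the best subset
      population = [rng.random(X.shape[1]) < 9 / X.shape[1] for _ in range(20)]
      for generation in range(30):
          scores = [rmsep(m) for m in population]
          best = population[int(np.argmin(scores))]
          population = [best.copy() for _ in range(20)]
          for m in population[1:]:
              flip = rng.integers(0, X.shape[1], size=3)
              m[flip] = ~m[flip]

      print("selected descriptors:", np.flatnonzero(best), "RMSEP:", rmsep(best))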

  7. Prediction of Moisture Content for Congou Black Tea Withering Leaves Using Image Features and Nonlinear Method.

    PubMed

    Liang, Gaozhen; Dong, Chunwang; Hu, Bin; Zhu, Hongkai; Yuan, Haibo; Jiang, Yongwen; Hao, Guoshuang

    2018-05-18

    Withering is the first step in the processing of congou black tea. To address the shortcomings of traditional water content detection methods, a machine-vision-based non-destructive testing (NDT) method was established to detect the moisture content of withered leaves. First, visible-light images of the tea leaf surfaces were collected as a time sequence with a computer vision system, and colour and texture characteristics were extracted from the spatial changes of the colours. Quantitative prediction models for the moisture content of withered tea leaves were then established using linear partial least squares (PLS) and non-linear support vector machine (SVM) regression. The results showed correlation coefficients higher than 0.8 between the water content and the green component mean value (G), lightness component mean value (L*) and uniformity (U), indicating that the extracted characteristics have strong potential to predict the water content. The correlation coefficient of the prediction set (Rp), root-mean-square error of prediction (RMSEP) and residual predictive deviation (RPD) of the SVM prediction model were 0.9314, 0.0411 and 1.8004, respectively. The non-linear modeling method better describes the quantitative relationship between the images and the water content. With its generalization ability and robustness, the method provides a new approach and theoretical basis for online water content monitoring in the automated production of black tea.
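
    For illustration, a minimal SVM-regression sketch of the image-features-to-moisture step (the feature table and model settings are placeholders; real G, L* and U values would come from the imaging experiments described above):

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      rng = np.random.default_rng(1)
      features = rng.random((120, 3))                       # [G, L*, U] per image (placeholder)
      moisture = 0.5 + 0.3 * features[:, 0] + 0.05 * rng.normal(size=120)

      X_train, X_test, y_train, y_test = train_test_split(features, moisture, test_size=0.3, random_state=1)

      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
      model.fit(X_train, y_train)

      pred = model.predict(X_test)
      rmsep = float(np.sqrt(np.mean((pred - y_test) ** 2)))
      rp = float(np.corrcoef(pred, y_test)[0, 1])
      print(f"Rp = {rp:.3f}, RMSEP = {rmsep:.4f}")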

  8. Conceptual Diversity, Moderators, and Theoretical Issues in Quantitative Studies of Cultural Capital Theory

    ERIC Educational Resources Information Center

    Tan, Cheng Yong

    2017-01-01

    The present study reviewed quantitative empirical studies examining the relationship between cultural capital and student achievement. Results showed that researchers had conceptualized and measured cultural capital in different ways. It is argued that the more holistic understanding of the construct beyond highbrow cultural consumption must be…

  9. An overview of quantitative approaches in Gestalt perception.

    PubMed

    Jäkel, Frank; Singh, Manish; Wichmann, Felix A; Herzog, Michael H

    2016-09-01

    Gestalt psychology is often criticized as lacking quantitative measurements and precise mathematical models. While this is true of the early Gestalt school, today there are many quantitative approaches in Gestalt perception and the special issue of Vision Research "Quantitative Approaches in Gestalt Perception" showcases the current state-of-the-art. In this article we give an overview of these current approaches. For example, ideal observer models are one of the standard quantitative tools in vision research and there is a clear trend to try and apply this tool to Gestalt perception and thereby integrate Gestalt perception into mainstream vision research. More generally, Bayesian models, long popular in other areas of vision research, are increasingly being employed to model perceptual grouping as well. Thus, although experimental and theoretical approaches to Gestalt perception remain quite diverse, we are hopeful that these quantitative trends will pave the way for a unified theory. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Quantitative parameters of CT texture analysis as potential markers for early prediction of spontaneous intracranial hemorrhage enlargement.

    PubMed

    Shen, Qijun; Shan, Yanna; Hu, Zhengyu; Chen, Wenhui; Yang, Bing; Han, Jing; Huang, Yanfang; Xu, Wen; Feng, Zhan

    2018-04-30

    To objectively quantify intracranial hematoma (ICH) enlargement by analysing the image texture of head CT scans, and to provide objective and quantitative imaging parameters for predicting early hematoma enlargement. We retrospectively studied 108 ICH patients with baseline non-contrast computed tomography (NCCT) and 24-h follow-up CT available. Image data were assessed by a chief radiologist and a resident radiologist. Consistency analysis between observers was tested. The patients were divided into a training set (75%) and a validation set (25%) by stratified sampling. Patients in the training set were dichotomized according to 24-h hematoma expansion ≥ 33%. Using the Laplacian of Gaussian band-pass filter, we chose different anatomical spatial domains ranging from fine to coarse texture to obtain a series of derived parameters (mean grayscale intensity, variance, uniformity) in order to quantify and evaluate all data. The parameters were externally validated on the validation set. Significant differences were found between the two groups of patients in variance at V1.0 and in uniformity at U1.0, U1.8 and U2.5. The intraclass correlation coefficients for the texture parameters were between 0.67 and 0.99. The area under the ROC curve between the two groups of ICH cases was between 0.77 and 0.92. The accuracy of CT texture analysis (CTTA) on the validation set was 0.59-0.85. NCCT texture analysis can objectively quantify the heterogeneity of ICH and independently predict early hematoma enlargement. • Heterogeneity is helpful in predicting ICH enlargement. • CTTA could play an important role in predicting early ICH enlargement. • After filtering, fine texture had the best diagnostic performance. • The histogram-based uniformity parameters can independently predict ICH enlargement. • CTTA is more objective, more comprehensive, and more independently operable than previous methods.
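
    A minimal sketch of the filtration-histogram texture analysis outlined above: a Laplacian-of-Gaussian band-pass filter at several spatial scales followed by histogram statistics inside the hematoma mask (filter scales, the uniformity definition and the input arrays are illustrative assumptions, not the exact implementation used in the study):

      import numpy as np
      from scipy.ndimage import gaussian_laplace

      def texture_parameters(ct_slice, mask, sigmas=(1.0, 1.8, 2.5)):
          results = {}
          for sigma in sigmas:
              filtered = gaussian_laplace(ct_slice.astype(float), sigma=sigma)
              voxels = filtered[mask]
              hist, _ = np.histogram(voxels, bins=64)
              p = hist / hist.sum()
              results[sigma] = {
                  "mean": float(voxels.mean()),
                  "variance": float(voxels.var()),
                  "uniformity": float(np.sum(p ** 2)),   # sum of squared bin probabilities
              }
          return results

      # hypothetical inputs: one NCCT slice (HU values) and a boolean hematoma mask
      ct_slice = np.random.normal(40, 10, size=(512, 512))
      mask = np.zeros((512, 512), dtype=bool)
      mask[200:260, 220:300] = True
      print(texture_parameters(ct_slice, mask))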

  11. Breaking the theoretical scaling limit for predicting quasiparticle energies: the stochastic GW approach.

    PubMed

    Neuhauser, Daniel; Gao, Yi; Arntsen, Christopher; Karshenas, Cyrus; Rabani, Eran; Baer, Roi

    2014-08-15

    We develop a formalism to calculate the quasiparticle energy within the GW many-body perturbation correction to the density functional theory. The occupied and virtual orbitals of the Kohn-Sham Hamiltonian are replaced by stochastic orbitals used to evaluate the Green function G, the polarization potential W, and, thereby, the GW self-energy. The stochastic GW (sGW) formalism relies on novel theoretical concepts such as stochastic time-dependent Hartree propagation, stochastic matrix compression, and spatial or temporal stochastic decoupling techniques. Beyond the theoretical interest, the formalism enables linear scaling GW calculations breaking the theoretical scaling limit for GW as well as circumventing the need for energy cutoff approximations. We illustrate the method for silicon nanocrystals of varying sizes with Ne > 3000 electrons.

  12. Methodological uncertainty in quantitative prediction of human hepatic clearance from in vitro experimental systems.

    PubMed

    Hallifax, D; Houston, J B

    2009-03-01

    Mechanistic prediction of unbound drug clearance from human hepatic microsomes and hepatocytes correlates with in vivo clearance but is both systematically low (10 - 20 % of in vivo clearance) and highly variable, based on detailed assessments of published studies. Metabolic capacity (Vmax) of commercially available human hepatic microsomes and cryopreserved hepatocytes is log-normally distributed within wide (30 - 150-fold) ranges; Km is also log-normally distributed and effectively independent of Vmax, implying considerable variability in intrinsic clearance. Despite wide overlap, average capacity is 2 - 20-fold (dependent on P450 enzyme) greater in microsomes than hepatocytes, when both are normalised (scaled to whole liver). The in vitro ranges contrast with relatively narrow ranges of clearance among clinical studies. The high in vitro variation probably reflects unresolved phenotypical variability among liver donors and practicalities in processing of human liver into in vitro systems. A significant contribution from the latter is supported by evidence of low reproducibility (several fold) of activity in cryopreserved hepatocytes and microsomes prepared from the same cells, between separate occasions of thawing of cells from the same liver. The large uncertainty which exists in human hepatic in vitro systems appears to dominate the overall uncertainty of in vitro-in vivo extrapolation, including uncertainties within scaling, modelling and drug dependent effects. As such, any notion of quantitative prediction of clearance appears severely challenged.
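
    For context, a sketch of the standard in vitro-in vivo extrapolation arithmetic that such clearance predictions rest on: intrinsic clearance from Vmax/Km, scaling to the whole liver, then a well-stirred liver model (all numeric values and scaling factors below are illustrative assumptions, not figures from this review):

      # well-stirred-model IVIVE sketch; every number here is a placeholder
      VMAX = 200.0                      # pmol/min/mg microsomal protein
      KM = 5.0                          # uM
      MG_PROTEIN_PER_G_LIVER = 40.0     # microsomal protein scaling factor (assumption)
      LIVER_WEIGHT_G = 1800.0           # adult liver weight (assumption)
      Q_H = 1500.0                      # hepatic blood flow, mL/min (assumption)
      FU_BLOOD = 0.1                    # unbound fraction in blood (assumption)

      clint_in_vitro = VMAX / KM        # uL/min/mg protein
      clint_liver = clint_in_vitro * MG_PROTEIN_PER_G_LIVER * LIVER_WEIGHT_G / 1000.0  # mL/min

      # well-stirred liver model: CLh = Qh * fu * CLint / (Qh + fu * CLint)
      cl_hepatic = Q_H * FU_BLOOD * clint_liver / (Q_H + FU_BLOOD * clint_liver)
      print(f"predicted hepatic clearance ~ {cl_hepatic:.0f} mL/min")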

  13. Coherent evolution of parahydrogen induced polarisation using laser pump, NMR probe spectroscopy: Theoretical framework and experimental observation.

    PubMed

    Halse, Meghan E; Procacci, Barbara; Henshaw, Sarah-Louise; Perutz, Robin N; Duckett, Simon B

    2017-05-01

    We recently reported a pump-probe method that uses a single laser pulse to introduce parahydrogen (p-H2) into a metal dihydride complex and then follows the time-evolution of the p-H2-derived nuclear spin states by NMR. We present here a theoretical framework to describe the oscillatory behaviour of the resultant hyperpolarised NMR signals using a product operator formalism. We consider the cases where the p-H2-derived protons form part of an AX, AXY, AXYZ or AA'XX' spin system in the product molecule. We use this framework to predict the patterns for 2D pump-probe NMR spectra, where the indirect dimension represents the evolution during the pump-probe delay and the positions of the cross-peaks depend on the difference in chemical shift of the p-H2-derived protons and the difference in their couplings to other nuclei. The evolution of the NMR signals of the p-H2-derived protons, as well as the transfer of hyperpolarisation to other NMR-active nuclei in the product, is described. The theoretical framework is tested experimentally for a set of ruthenium dihydride complexes representing the different spin systems. Theoretical predictions and experimental results agree to within experimental error for all features of the hyperpolarised 1H and 31P pump-probe NMR spectra. Thus we establish the laser pump, NMR probe approach as a robust way to directly observe and quantitatively analyse the coherent evolution of p-H2-derived spin order over micro-to-millisecond timescales. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  14. [Prediction of the molecular response to perturbations from single cell measurements].

    PubMed

    Remacle, Françoise; Levine, Raphael D

    2014-12-01

    The response of protein signalling networks to perturbations is analysed from single-cell measurements. This experimental approach characterizes the cell-to-cell fluctuations in protein expression levels. The analysis is based on an information-theoretic approach grounded in thermodynamics, leading to a quantitative version of the Le Chatelier principle that allows the molecular response to be predicted. Two systems are investigated: human macrophages subjected to lipopolysaccharide challenge, analogous to the immune response against Gram-negative bacteria, and the response of the proteins involved in the mTOR signalling network of GBM cancer cells to changes in partial oxygen pressure. © 2014 médecine/sciences – Inserm.

  15. Design and prediction of new anticoagulants as a selective Factor IXa inhibitor via three-dimensional quantitative structure-property relationships of amidinobenzothiophene derivatives.

    PubMed

    Gao, Jia-Suo; Tong, Xu-Peng; Chang, Yi-Qun; He, Yu-Xuan; Mei, Yu-Dan; Tan, Pei-Hong; Guo, Jia-Liang; Liao, Guo-Chao; Xiao, Gao-Keng; Chen, Wei-Min; Zhou, Shu-Feng; Sun, Ping-Hua

    2015-01-01

    Factor IXa (FIXa), a blood coagulation factor, is specifically inhibited at the initiation stage of the coagulation cascade, promising an excellent approach for developing selective and safe anticoagulants. Eighty-four amidinobenzothiophene antithrombotic derivatives targeting FIXa were selected to establish three-dimensional quantitative structure-activity relationship (3D-QSAR) and three-dimensional quantitative structure-selectivity relationship (3D-QSSR) models using comparative molecular field analysis and comparative similarity indices analysis methods. Internal and external cross-validation techniques were investigated as well as region focusing and bootstrapping. The satisfactory q2 values of 0.753 and 0.770, and r2 values of 0.940 and 0.965 for 3D-QSAR and 3D-QSSR, respectively, indicated that the models are available to predict both the inhibitory activity and selectivity on FIXa against Factor Xa, the activated status of Factor X. This work revealed that the steric, hydrophobic, and H-bond factors should appropriately be taken into account in future rational design, especially the modifications at the 2'-position of the benzene and the 6-position of the benzothiophene in the R group, providing helpful clues to design more active and selective FIXa inhibitors for the treatment of thrombosis. On the basis of the three-dimensional quantitative structure-property relationships, 16 new potent molecules have been designed and are predicted to be more active and selective than Compound 33, which has the best activity as reported in the literature.

  16. Design and prediction of new anticoagulants as a selective Factor IXa inhibitor via three-dimensional quantitative structure-property relationships of amidinobenzothiophene derivatives

    PubMed Central

    Gao, Jia-Suo; Tong, Xu-Peng; Chang, Yi-Qun; He, Yu-Xuan; Mei, Yu-Dan; Tan, Pei-Hong; Guo, Jia-Liang; Liao, Guo-Chao; Xiao, Gao-Keng; Chen, Wei-Min; Zhou, Shu-Feng; Sun, Ping-Hua

    2015-01-01

    Factor IXa (FIXa), a blood coagulation factor, is specifically inhibited at the initiation stage of the coagulation cascade, promising an excellent approach for developing selective and safe anticoagulants. Eighty-four amidinobenzothiophene antithrombotic derivatives targeting FIXa were selected to establish three-dimensional quantitative structure–activity relationship (3D-QSAR) and three-dimensional quantitative structure–selectivity relationship (3D-QSSR) models using comparative molecular field analysis and comparative similarity indices analysis methods. Internal and external cross-validation techniques were investigated as well as region focusing and bootstrapping. The satisfactory q2 values of 0.753 and 0.770, and r2 values of 0.940 and 0.965 for 3D-QSAR and 3D-QSSR, respectively, indicated that the models are available to predict both the inhibitory activity and selectivity on FIXa against Factor Xa, the activated status of Factor X. This work revealed that the steric, hydrophobic, and H-bond factors should appropriately be taken into account in future rational design, especially the modifications at the 2′-position of the benzene and the 6-position of the benzothiophene in the R group, providing helpful clues to design more active and selective FIXa inhibitors for the treatment of thrombosis. On the basis of the three-dimensional quantitative structure–property relationships, 16 new potent molecules have been designed and are predicted to be more active and selective than Compound 33, which has the best activity as reported in the literature. PMID:25848211

  17. Evaluation of New Zealand's high-seas bottom trawl closures using predictive habitat models and quantitative risk assessment.

    PubMed

    Penney, Andrew J; Guinotte, John M

    2013-01-01

    United Nations General Assembly Resolution 61/105 on sustainable fisheries (UNGA 2007) establishes three difficult questions for participants in high-seas bottom fisheries to answer: 1) Where are vulnerable marine ecosystems (VMEs) likely to occur?; 2) What is the likelihood of fisheries interaction with these VMEs?; and 3) What might qualify as adequate conservation and management measures to prevent significant adverse impacts? This paper develops an approach to answering these questions for bottom trawling activities in the Convention Area of the South Pacific Regional Fisheries Management Organisation (SPRFMO) within a quantitative risk assessment and cost-benefit analysis framework. The predicted distribution of deep-sea corals from habitat suitability models is used to answer the first question. Distribution of historical bottom trawl effort is used to answer the second, with estimates of seabed areas swept by bottom trawlers being used to develop discounting factors for reduced biodiversity in previously fished areas. These are used in a quantitative ecological risk assessment approach to guide spatial protection planning to address the third question. The coral VME likelihood (average, discounted, predicted coral habitat suitability) of existing spatial closures implemented by New Zealand within the SPRFMO area is evaluated. Historical catch is used as a measure of cost to industry in a cost-benefit analysis of alternative spatial closure scenarios. Results indicate that current closures within the New Zealand SPRFMO area bottom trawl footprint are suboptimal for protection of VMEs. Examples of alternative trawl closure scenarios are provided to illustrate how the approach could be used to optimise protection of VMEs under chosen management objectives, balancing protection of VMEs against economic loss to commercial fishers from closure of historically fished areas.

  18. Thermal expansion coefficient prediction of fuel-cell seal materials from silica sand

    NASA Astrophysics Data System (ADS)

    Hidayat, Nurul; Triwikantoro, Baqiya, Malik A.; Pratapa, Suminar

    2013-09-01

    This study is focused on the prediction of the coefficient of thermal expansion (CTE) of silica-sand-based fuel-cell seal materials (FcSMs), which in principle require a CTE value in the range of 9.5-12 ppm/°C. A semi-quantitative theoretical method to predict the CTE value is proposed by applying the analyzed phase compositions from XRD data and the characterized density-porosity behavior. A typical silica sand was milled at 150 rpm for 1 hour followed by heating at 1000 °C for another hour. The sand and heated samples were characterized by means of XRD to determine the phase compositions and the correlation between them. Rietveld refinement was executed to obtain the weight fractions of the phases contained in the samples, which were then converted to volume fractions for composite CTE calculations. The results were used to predict the potential physical properties of the materials as FcSMs. Porosity, measured directly by the Archimedes method, was taken into account in the calculation.
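
    A minimal sketch of the composite-CTE arithmetic implied above, using a simple volume-fraction rule of mixtures (the phase list and CTE values are rough placeholders, and the paper's porosity correction is not reproduced here):

      # volume-fraction rule-of-mixtures estimate of a composite CTE (ppm/°C)
      phases = {
          "quartz":       {"volume_fraction": 0.60, "cte_ppm_per_C": 12.3},
          "cristobalite": {"volume_fraction": 0.30, "cte_ppm_per_C": 10.3},
          "amorphous":    {"volume_fraction": 0.10, "cte_ppm_per_C": 0.5},
      }

      composite_cte = sum(p["volume_fraction"] * p["cte_ppm_per_C"] for p in phases.values())
      print(f"estimated composite CTE ~ {composite_cte:.1f} ppm/°C")
      print("within the FcSM target range (9.5-12 ppm/°C):", 9.5 <= composite_cte <= 12.0)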

  19. Predicting the size of individual and group differences on speeded cognitive tasks.

    PubMed

    Chen, Jing; Hale, Sandra; Myerson, Joel

    2007-06-01

    An a priori test of the difference engine model (Myerson, Hale, Zheng, Jenkins, & Widaman, 2003) was conducted using a large, diverse sample of individuals who performed three speeded verbal tasks and three speeded visuospatial tasks. Results demonstrated that, as predicted by the model, the group standard deviation (SD) on any task was proportional to the amount of processing required by that task. Both individual performances as well as those of fast and slow subgroups could be accurately predicted by the model using no free parameters, just an individual or subgroup's mean z-score and the values of theoretical constructs estimated from fits to the group SDs. Taken together, these results are consistent with post hoc analyses reported by Myerson et al. and provide even stronger supporting evidence. In particular, the ability to make quantitative predictions without using any free parameters provides the clearest demonstration to date of the power of an analytic approach on the basis of the difference engine.

  20. High Speed Research Noise Prediction Code (HSRNOISE) User's and Theoretical Manual

    NASA Technical Reports Server (NTRS)

    Golub, Robert (Technical Monitor); Rawls, John W., Jr.; Yeager, Jessie C.

    2004-01-01

    This report describes a computer program, HSRNOISE, that predicts noise levels for a supersonic aircraft powered by mixed flow turbofan engines with rectangular mixer-ejector nozzles. It fully documents the noise prediction algorithms, provides instructions for executing the HSRNOISE code, and provides predicted noise levels for the High Speed Research (HSR) program Technology Concept (TC) aircraft. The component source noise prediction algorithms were developed jointly by Boeing, General Electric Aircraft Engines (GEAE), NASA and Pratt & Whitney during the course of the NASA HSR program. Modern Technologies Corporation developed an alternative mixer ejector jet noise prediction method under contract to GEAE that has also been incorporated into the HSRNOISE prediction code. Algorithms for determining propagation effects and calculating noise metrics were taken from the NASA Aircraft Noise Prediction Program.

  1. Quantitative structure-activity relationships for predicting potential ecological hazard of organic chemicals for use in regulatory risk assessments.

    PubMed

    Comber, Mike H I; Walker, John D; Watts, Chris; Hermens, Joop

    2003-08-01

    The use of quantitative structure-activity relationships (QSARs) for deriving the predicted no-effect concentration of discrete organic chemicals for the purposes of conducting a regulatory risk assessment in Europe and the United States is described. In the United States, under the Toxic Substances Control Act (TSCA), the TSCA Interagency Testing Committee and the U.S. Environmental Protection Agency (U.S. EPA) use SARs to estimate the hazards of existing and new chemicals. Within the Existing Substances Regulation in Europe, QSARs may be used for data evaluation, test strategy indications, and the identification and filling of data gaps. To illustrate where and when QSARs may be useful and when their use is more problematic, an example, methyl tertiary-butyl ether (MTBE), is given and the predicted and experimental data are compared. Improvements needed for new QSARs and tools for developing and using QSARs are discussed.

  2. Theoretical Prediction of the Forming Limit Band

    NASA Astrophysics Data System (ADS)

    Banabic, D.; Vos, M.; Paraianu, L.; Jurco, P.

    2007-04-01

    Forming Limit Band (FLB) is a very useful tool to improve the sheet metal forming simulation robustness. Until now, the study of the FLB was only experimental. This paper presents the first attempt to model the FLB. The authors have established an original method for predicting the two margins of the limit band. The method was illustrated on the AA6111-T43 aluminum alloy. A good agreement with the experiments has been obtained.

  3. Theoretical Treatment of Ion Transfers in Two Polarizable Interface Systems When the Analyte Has Access to Both Interfaces.

    PubMed

    Olmos, José Manuel; Molina, Ángela; Laborda, Eduardo; Millán-Barrios, Enrique; Ortuño, Joaquín Ángel

    2018-02-06

    A new theory is presented to tackle the study of transfer processes of hydrophilic ions in two polarizable interface systems when the analyte is initially present in both aqueous phases. The treatment is applied to macrointerfaces (linear diffusion) and microholes (highly convergent diffusion), obtaining analytical equations for the current response in any voltammetric technique. The novel equations predict two signals in the current-potential curves that are symmetric when the compositions of the aqueous phases are identical while asymmetries appear otherwise. The theoretical results show good agreement with the experimental behavior of the "double transfer voltammograms" reported by Dryfe et al. in cyclic voltammetry (CV) (Anal. Chem. 2014, 86, 435-442) as well as with cyclic square wave voltammetry (cSWV) experiments performed in the current work. The theoretical treatment is also extended to the situation where the target ion is lipophilic and initially present in the organic phase. The theory predicts an opposite effect of the lipophilicity of the ion on the shape of the voltammograms, which is validated experimentally via both CV and cSWV. For the above two cases, simple and manageable expressions and diagnosis criteria are derived for the qualitative and quantitative study of ion lipophilicity. The ion-transfer potentials can be easily quantified from the separation between the two signals making use of explicit analytical equations.

  4. On the feasibility of quantitative ultrasonic determination of fracture toughness: A literature review

    NASA Technical Reports Server (NTRS)

    Fu, L. S.

    1980-01-01

    The three main topics covered are: (1) fracture toughness and microstructure, (2) quantitative ultrasonics and microstructure, and (3) scattering and related mathematical methods. Literature in these areas is reviewed to give insight into the search for a theoretical foundation for quantitative ultrasonic measurement of fracture toughness. The literature review shows that fracture toughness is inherently related to the microstructure and, in particular, that it depends upon the spacing of inclusions or second particles and the aspect ratio of second phase particles. There are indications that ultrasonic velocity attenuation measurements can be used to determine fracture toughness. This leads to a review of the mathematical models available for solving boundary value problems related to microstructural factors that govern fracture toughness and wave motion. A framework for the theoretical study of the quantitative determination of fracture toughness is described and suggestions for future research are proposed.

  5. Quantitative fibrosis parameters highly predict esophageal-gastro varices in primary biliary cirrhosis.

    PubMed

    Wu, Q-M; Zhao, X-Y; You, H

    2016-01-01

    Esophageal-gastro varices (EGV) may develop in any histological stage of primary biliary cirrhosis (PBC). We aim to establish and validate quantitative fibrosis (qFibrosis) parameters in portal, septal and fibrillar areas as predictors of EGV in PBC patients. PBC patients with liver biopsy, esophagogastroscopy and Second Harmonic Generation (SHG)/Two-photon Excited Fluorescence (TPEF) microscopy images were retrospectively enrolled in this study. qFibrosis parameters in portal, septal and fibrillar areas were acquired by a computer-assisted SHG/TPEF imaging system. Independent predictors were identified using multivariate logistic regression analysis. Among the forty-nine PBC patients with qFibrosis images, twenty-nine with both esophagogastroscopy and qFibrosis data were selected for the EGV analysis, and 44.8% (13/29) of them had EGV. The qFibrosis parameters of collagen percentage and number of crosslinks in the fibrillar area, and the number of short/long/thin strings and the length/width of the strings in the septal area, were associated with EGV (p < 0.05). Multivariate logistic analysis showed that a collagen percentage in the fibrillar area ≥ 3.6% was an independent predictor of EGV (odds ratio 6.9; 95% confidence interval 1.6-27.4). The area under the receiver operating characteristic (ROC) curve, diagnostic sensitivity and specificity were 0.9, 100% and 75%, respectively. Collagen percentage in the fibrillar area, as an independent predictor, can accurately predict EGV in PBC patients.

  6. Application of quantitative structure activity relationship (QSAR) models to predict ozone toxicity in the lung.

    PubMed

    Kafoury, Ramzi M; Huang, Ming-Ju

    2005-08-01

    The sequence of events leading to ozone-induced airway inflammation is not well known. To elucidate the molecular and cellular events underlying ozone toxicity in the lung, we hypothesized that lipid ozonation products (LOPs) generated by the reaction of ozone with unsaturated fatty acids in the epithelial lining fluid and cell membranes play a key role in mediating ozone-induced airway inflammation. To test our hypothesis, we ozonized 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylcholine (POPC) and generated LOPs. Confluent human bronchial epithelial cells were exposed to the derivatives of ozonized POPC (9-oxononanoyl, 9-hydroxy-9-hydroperoxynonanoyl, and 8-(5-octyl-1,2,4-trioxolan-3-yl)octanoyl) at a concentration of 10 μM, and the activity of phospholipases A2 (PLA2), C (PLC), and D (PLD) was measured (at 1, 0.5, and 1 h, respectively). Quantitative structure-activity relationship (QSAR) models were utilized to predict the biological activity of LOPs in airway epithelial cells. The QSAR results showed a strong correlation between experimental and computed activity (r = 0.97, 0.98, 0.99 for PLA2, PLC, and PLD, respectively). The results indicate that QSAR models can be utilized to predict the biological activity of the various ozone-derived LOP species in the lung. Copyright 2005 Wiley Periodicals, Inc.

  7. Comparing quantitative values of two generations of laser-assisted indocyanine green dye angiography systems: can we predict necrosis?

    PubMed

    Phillips, Brett T; Fourman, Mitchell S; Rivara, Andrew; Dagum, Alexander B; Huston, Tara L; Ganz, Jason C; Bui, Duc T; Khan, Sami U

    2014-01-01

    Several devices exist today to assist the intraoperative determination of skin flap perfusion. Laser-Assisted Indocyanine Green Dye Angiography (LAICGA) has been shown to accurately predict mastectomy skin flap necrosis using quantitative perfusion values. The laser properties of the latest LAICGA device (SPY Elite) differ significantly from its predecessor system (SPY 2001), preventing direct translation of previous published data. The purpose of this study was to establish a mathematical relationship of perfusion values between these 2 devices. Breast reconstruction patients were prospectively enrolled into a clinical trial where skin flap evaluation and excision was based on quantitative SPY Q values previously established in the literature. Initial study patients underwent mastectomy skin flap evaluation using both SPY systems simultaneously. Absolute perfusion unit (APU) values at identical locations on the breast were then compared graphically. 210 data points were identified on the same patients (n = 4) using both SPY systems. A linear relationship (y = 2.9883x + 12.726) was identified with a high level of correlation (R2 = 0.744). Previously published values using SPY 2001 (APU 3.7) provided a value of 23.8 APU on the SPY Elite. In addition, postoperative necrosis in these patients correlated to regions of skin identified with the SPY Elite with APU less than 23.8. Intraoperative comparison of LAICGA systems has provided direct correlation of perfusion values predictive of necrosis that were previously established in the literature. An APU value of 3.7 from the SPY 2001 correlates to a SPY Elite APU value of 23.8.
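
    A trivial worked example of the reported device-to-device conversion (the regression line is taken from the abstract; the helper function name is ours):

      def spy2001_to_elite(apu_2001: float) -> float:
          """Convert a SPY 2001 absolute perfusion unit to the SPY Elite scale
          using the reported linear fit y = 2.9883x + 12.726."""
          return 2.9883 * apu_2001 + 12.726

      # the previously published SPY 2001 necrosis threshold of 3.7 APU
      print(round(spy2001_to_elite(3.7), 1))   # ~23.8 APU on the SPY Elite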

  8. Comparing Quantitative Values of Two Generations of Laser-Assisted Indocyanine Green Dye Angiography Systems: Can We Predict Necrosis?

    PubMed Central

    Fourman, Mitchell S.; Rivara, Andrew; Dagum, Alexander B.; Huston, Tara L.; Ganz, Jason C.; Bui, Duc T.; Khan, Sami U.

    2014-01-01

    Objective: Several devices exist today to assist the intraoperative determination of skin flap perfusion. Laser-Assisted Indocyanine Green Dye Angiography (LAICGA) has been shown to accurately predict mastectomy skin flap necrosis using quantitative perfusion values. The laser properties of the latest LAICGA device (SPY Elite) differ significantly from its predecessor system (SPY 2001), preventing direct translation of previous published data. The purpose of this study was to establish a mathematical relationship of perfusion values between these 2 devices. Methods: Breast reconstruction patients were prospectively enrolled into a clinical trial where skin flap evaluation and excision was based on quantitative SPY Q values previously established in the literature. Initial study patients underwent mastectomy skin flap evaluation using both SPY systems simultaneously. Absolute perfusion unit (APU) values at identical locations on the breast were then compared graphically. Results: 210 data points were identified on the same patients (n = 4) using both SPY systems. A linear relationship (y = 2.9883x + 12.726) was identified with a high level of correlation (R2 = 0.744). Previously published values using SPY 2001 (APU 3.7) provided a value of 23.8 APU on the SPY Elite. In addition, postoperative necrosis in these patients correlated to regions of skin identified with the SPY Elite with APU less than 23.8. Conclusion: Intraoperative comparison of LAICGA systems has provided direct correlation of perfusion values predictive of necrosis that were previously established in the literature. An APU value of 3.7 from the SPY 2001 correlates to a SPY Elite APU value of 23.8. PMID:25525483

  9. Quantitative analysis and prediction of G-quadruplex forming sequences in double-stranded DNA

    PubMed Central

    Kim, Minji; Kreig, Alex; Lee, Chun-Ying; Rube, H. Tomas; Calvert, Jacob; Song, Jun S.; Myong, Sua

    2016-01-01

    G-quadruplex (GQ) is a four-stranded DNA structure that can be formed in guanine-rich sequences. GQ structures have been proposed to regulate diverse biological processes including transcription, replication, translation and telomere maintenance. Recent studies have demonstrated the existence of GQ DNA in live mammalian cells and a significant number of potential GQ forming sequences in the human genome. We present a systematic and quantitative analysis of GQ folding propensity on a large set of 438 GQ forming sequences in double-stranded DNA by integrating fluorescence measurement, single-molecule imaging and computational modeling. We find that short minimum loop length and the thymine base are two main factors that lead to high GQ folding propensity. Linear and Gaussian process regression models further validate that the GQ folding potential can be predicted with high accuracy based on the loop length distribution and the nucleotide content of the loop sequences. Our study provides important new parameters that can inform the evaluation and classification of putative GQ sequences in the human genome. PMID:27095201
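
    For illustration, a minimal sketch of the regression step described above, predicting a folding score from loop-derived features with both a linear and a Gaussian process model (the features and response below are random placeholders, not the 438 measured sequences):

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      # placeholder features per sequence: minimum loop length, mean loop length, loop thymine fraction
      X = np.column_stack([
          rng.integers(1, 8, 438),
          rng.uniform(1, 12, 438),
          rng.uniform(0, 1, 438),
      ])
      # placeholder folding propensity, loosely mimicking the reported trends
      # (shorter loops and more thymine -> higher propensity)
      y = 1.0 - 0.05 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(0, 0.05, 438)

      models = {
          "linear": LinearRegression(),
          "gaussian process": GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True),
      }
      for name, model in models.items():
          r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
          print(f"{name:17s} mean cross-validated R^2 = {r2.mean():.2f}")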

  10. Demonstration of a viable quantitative theory for interplanetary type II radio bursts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, J. M., E-mail: jschmidt@physics.usyd.edu.au; Cairns, Iver H.

    Between 29 November and 1 December 2013 the two widely separated spacecraft STEREO A and B observed a long lasting, intermittent, type II radio burst for the extended frequency range ≈ 4 MHz to 30 kHz, including an intensification when the shock wave of the associated coronal mass ejection (CME) reached STEREO A. We demonstrate for the first time our ability to quantitatively and accurately simulate the fundamental (F) and harmonic (H) emission of type II bursts from the higher corona (near 11 solar radii) to 1 AU. Our modeling requires the combination of data-driven three-dimensional magnetohydrodynamic simulations for the CME and plasma background, carried out with the BATS-R-US code, with an analytic quantitative kinetic model for both F and H radio emission, including the electron reflection at the shock, growth of Langmuir waves and radio waves, and the radiation's propagation to an arbitrary observer. The intensities and frequencies of the observed radio emissions vary hugely by factors of ≈ 10^6 and ≈ 10^3, respectively; the theoretical predictions are impressively accurate, being typically in error by less than a factor of 10 and 20 %, for both STEREO A and B. We also obtain accurate predictions for the timing and characteristics of the shock and local radio onsets at STEREO A, the lack of such onsets at STEREO B, and the z-component of the magnetic field at STEREO A ahead of the shock, and in the sheath. Very strong support is provided by these multiple agreements for the theory, the efficacy of the BATS-R-US code, and the vision of using type IIs and associated data-theory iterations to predict whether a CME will impact Earth's magnetosphere and drive space weather events.

  11. Demonstration of a viable quantitative theory for interplanetary type II radio bursts

    NASA Astrophysics Data System (ADS)

    Schmidt, J. M.; Cairns, Iver H.

    2016-03-01

    Between 29 November and 1 December 2013 the two widely separated spacecraft STEREO A and B observed a long lasting, intermittent, type II radio burst for the extended frequency range ≈ 4 MHz to 30 kHz, including an intensification when the shock wave of the associated coronal mass ejection (CME) reached STEREO A. We demonstrate for the first time our ability to quantitatively and accurately simulate the fundamental (F) and harmonic (H) emission of type II bursts from the higher corona (near 11 solar radii) to 1 AU. Our modeling requires the combination of data-driven three-dimensional magnetohydrodynamic simulations for the CME and plasma background, carried out with the BATS-R-US code, with an analytic quantitative kinetic model for both F and H radio emission, including the electron reflection at the shock, growth of Langmuir waves and radio waves, and the radiation's propagation to an arbitrary observer. The intensities and frequencies of the observed radio emissions vary hugely by factors of ≈ 10^6 and ≈ 10^3, respectively; the theoretical predictions are impressively accurate, being typically in error by less than a factor of 10 and 20 %, for both STEREO A and B. We also obtain accurate predictions for the timing and characteristics of the shock and local radio onsets at STEREO A, the lack of such onsets at STEREO B, and the z-component of the magnetic field at STEREO A ahead of the shock, and in the sheath. Very strong support is provided by these multiple agreements for the theory, the efficacy of the BATS-R-US code, and the vision of using type IIs and associated data-theory iterations to predict whether a CME will impact Earth's magnetosphere and drive space weather events.

  12. Theoretical Study of pKa Values for Trivalent Rare-Earth Metal Cations in Aqueous Solution.

    PubMed

    Yu, Donghai; Du, Ruobing; Xiao, Ji-Chang; Xu, Shengming; Rong, Chunying; Liu, Shubin

    2018-01-18

    Molecular acidity of trivalent rare-earth metal cations in aqueous solution is an important factor governing the efficiency of their extraction and separation processes. In this work, the aqueous acidity of these metal ions has been quantitatively investigated using a few theoretical approaches. Our computational results, expressed in terms of pKa values, agree well with the tetrad effect of trivalent rare-earth ions extensively reported in the extraction and separation of these elements. Strong linear relationships have been observed between the acidity and quantum electronic descriptors such as the molecular electrostatic potential on the acidic nucleus and the sum of the valence natural atomic orbital energies of the dissociating proton. Making use of the predicted pKa values, we have also predicted the major ionic forms of these species in aqueous environments with different pH values, which can be employed to rationalize the behavioural differences of the rare-earth metal cations during the extraction process. Our present results should provide needed insights not only for the qualitative understanding of the extraction and separation of yttrium and the lanthanide elements but also for the prediction of novel and more efficient rare-earth metal extractants in the future.

  13. Quantitative magnetic resonance imaging in traumatic brain injury.

    PubMed

    Bigler, E D

    2001-04-01

    Quantitative neuroimaging has now become a well-established method for analyzing magnetic resonance imaging in traumatic brain injury (TBI). A general review of studies that have examined quantitative changes following TBI is presented. The consensus of quantitative neuroimaging studies is that most brain structures demonstrate changes in volume or surface area after injury. The patterns of atrophy are consistent with the generalized nature of brain injury and diffuse axonal injury. Various clinical caveats are provided including how quantitative neuroimaging findings can be used clinically and in predicting rehabilitation outcome. The future of quantitative neuroimaging also is discussed.

  14. A new rapid quantitative test for fecal calprotectin predicts endoscopic activity in ulcerative colitis.

    PubMed

    Lobatón, Triana; Rodríguez-Moranta, Francisco; Lopez, Alicia; Sánchez, Elena; Rodríguez-Alonso, Lorena; Guardiola, Jordi

    2013-04-01

    Fecal calprotectin (FC) determined by the enzyme-linked immunosorbent assay (ELISA) test has been proposed as a promising biomarker of endoscopic activity in ulcerative colitis (UC). However, data on its accuracy in predicting endoscopic activity are scarce. In addition, FC determined by a quantitative point-of-care test (FC-QPOCT), which provides rapid and individual results, could optimize its use in clinical practice. The aims of our study were to evaluate the ability of FC to predict endoscopic activity according to the Mayo score in patients with UC when determined by FC-QPOCT and to compare it with the ELISA test (FC-ELISA). FC was determined simultaneously by FC-ELISA and FC-QPOCT in patients with UC undergoing colonoscopy. Clinical disease activity and endoscopy were assessed according to the Mayo score. Blood tests were taken to analyze serological biomarkers. A total of 146 colonoscopies were performed on 123 patients with UC. FC-QPOCT correlated more closely with the Mayo endoscopic subscore (Spearman's rank correlation coefficient r = 0.727, P < 0.001) than clinical activity (r = 0.636, P < 0.001), platelets (r = 0.381, P < 0.001), leucocytes (r = 0.300, P < 0.001), and C-reactive protein (r = 0.291, P = 0.002). The prediction of "endoscopic remission" (Mayo endoscopic subscore ≤1) with FC-QPOCT (280 µg/g) and FC-ELISA (250 µg/g) presented an area under the curve of 0.906 and 0.924, respectively. The interclass correlation index between both tests was 0.904 (95% confidence interval, 0.864-0.932; P < 0.001). FC determined by QPOCT was an accurate surrogate marker of "endoscopic remission" in UC and presented a good correlation with the FC-ELISA test.

  15. EnviroLand: A Simple Computer Program for Quantitative Stream Assessment.

    ERIC Educational Resources Information Center

    Dunnivant, Frank; Danowski, Dan; Timmens-Haroldson, Alice; Newman, Meredith

    2002-01-01

    Introduces the EnviroLand computer program, which features lab simulations of theoretical calculations for quantitative analysis and environmental chemistry, and fate and transport models. Uses the program to demonstrate the nature of linear and nonlinear equations. (Author/YDS)

  16. Theoretical integration and the psychology of sport injury prevention.

    PubMed

    Chan, Derwin King-Chung; Hagger, Martin S

    2012-09-01

    Integrating different theories of motivation to facilitate or predict behaviour change has received an increasing amount of attention within the health, sport and exercise science literature. A recent review article in Sports Medicine, by Keats, Emery and Finch presented an integrated model using two prominent theories in social psychology, self-determination theory (SDT) and the theory of planned behaviour (TPB), aimed at explaining and enhancing athletes' adherence to sport injury prevention. While echoing their optimistic views about the utility of these two theories to explain adherence in this area and the virtues of theoretical integration, we would like to seize this opportunity to clarify several conceptual principles arising from the authors' integration of the theories. Clarifying the theoretical assumptions and explaining precisely how theoretical integration works is crucial not only for improving the comprehensiveness of the integrated framework for predicting injury prevention behaviour, but also to aid the design of effective intervention strategies targeting behavioural adherence. In this article, we use the integration of SDT and TPB as an example to demonstrate how theoretical integration can advance the understanding of injury prevention behaviour in sport.

  17. Comparative analysis of predictive models for nongenotoxic hepatocarcinogenicity using both toxicogenomics and quantitative structure-activity relationships.

    PubMed

    Liu, Zhichao; Kelly, Reagan; Fang, Hong; Ding, Don; Tong, Weida

    2011-07-18

    The primary testing strategy to identify nongenotoxic carcinogens largely relies on the 2-year rodent bioassay, which is time-consuming and labor-intensive. There is an increasing effort to develop alternative approaches to prioritize the chemicals for, supplement, or even replace the cancer bioassay. In silico approaches based on quantitative structure-activity relationships (QSAR) are rapid and inexpensive and thus have been investigated for such purposes. A slightly more expensive approach based on short-term animal studies with toxicogenomics (TGx) represents another attractive option for this application. Thus, the primary questions are how much better predictive performance using short-term TGx models can be achieved compared to that of QSAR models, and what length of exposure is sufficient for high quality prediction based on TGx. In this study, we developed predictive models for rodent liver carcinogenicity using gene expression data generated from short-term animal models at different time points and QSAR. The study was focused on the prediction of nongenotoxic carcinogenicity since the genotoxic chemicals can be inexpensively removed from further development using various in vitro assays individually or in combination. We identified 62 chemicals whose hepatocarcinogenic potential was available from the National Center for Toxicological Research liver cancer database (NCTRlcdb). The gene expression profiles of liver tissue obtained from rats treated with these chemicals at different time points (1 day, 3 days, and 5 days) are available from the Gene Expression Omnibus (GEO) database. Both TGx and QSAR models were developed on the basis of the same set of chemicals using the same modeling approach, a nearest-centroid method with a minimum redundancy and maximum relevancy-based feature selection with performance assessed using compound-based 5-fold cross-validation. We found that the TGx models outperformed QSAR in every aspect of modeling. For example, the
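
    A minimal sketch of the TGx modeling recipe summarized above: a nearest-centroid classifier on a reduced gene set with 5-fold cross-validation (a univariate F-test stands in for the minimum-redundancy/maximum-relevance selection actually used, and the expression matrix is a random placeholder for the GEO data):

      import numpy as np
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.model_selection import StratifiedKFold, cross_val_score
      from sklearn.neighbors import NearestCentroid
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      expression = rng.normal(size=(62, 5000))      # 62 chemicals x 5000 genes (placeholder)
      carcinogenic = rng.integers(0, 2, size=62)    # nongenotoxic hepatocarcinogen labels (placeholder)

      # feature selection sits inside the pipeline so it is re-fitted within each fold
      model = make_pipeline(SelectKBest(f_classif, k=50), NearestCentroid())
      cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
      scores = cross_val_score(model, expression, carcinogenic, cv=cv)
      print("5-fold CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))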

  18. Interactive 3D visualization for theoretical virtual observatories

    NASA Astrophysics Data System (ADS)

    Dykes, T.; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.

    2018-06-01

    Virtual observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of data sets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2D or volume rendering in 3D. We analyse the current state of 3D visualization for big theoretical astronomical data sets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based data sets, allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.

  19. Extension of nanoconfined DNA: Quantitative comparison between experiment and theory

    NASA Astrophysics Data System (ADS)

    Iarko, V.; Werner, E.; Nyberg, L. K.; Müller, V.; Fritzsche, J.; Ambjörnsson, T.; Beech, J. P.; Tegenfeldt, J. O.; Mehlig, K.; Westerlund, F.; Mehlig, B.

    2015-12-01

    The extension of DNA confined to nanochannels has been studied intensively and in detail. However, quantitative comparisons between experiments and model calculations are difficult because most theoretical predictions involve undetermined prefactors, and because the model parameters (contour length, Kuhn length, effective width) are difficult to compute reliably, leading to substantial uncertainties. Here we use a recent asymptotically exact theory for the DNA extension in the "extended de Gennes regime" that allows us to compare experimental results with theory. For this purpose, we performed experiments measuring the mean DNA extension and its standard deviation while varying the channel geometry, dye intercalation ratio, and ionic strength of the buffer. The experimental results agree very well with theory at high ionic strengths, indicating that the model parameters are reliable. At low ionic strengths, the agreement is less good. We discuss possible reasons. In principle, our approach allows us to measure the Kuhn length and the effective width of a single DNA molecule and more generally of semiflexible polymers in solution.

  20. An Information Theoretic Analysis of Classification Sorting and Cognition by Ninth Grade Children within a Piagetian Setting.

    ERIC Educational Resources Information Center

    Dunlop, David Livingston

    The purpose of this study was to use an information theoretic memory model to quantitatively investigate classification sorting and recall behaviors of various groups of students. The model provided theorems for the determination of information theoretic measures from which inferences concerning mental processing were made. The basic procedure…

  1. Quantitative X-ray mapping, scatter diagrams and the generation of correction maps to obtain more information about your material

    NASA Astrophysics Data System (ADS)

    Wuhrer, R.; Moran, K.

    2014-03-01

    Quantitative X-ray mapping with silicon drift detectors and multi-EDS detector systems has become an invaluable analysis technique and one of the most useful methods of X-ray microanalysis today. The time to perform an X-ray map has been reduced considerably, and minor and trace elements can now be mapped very accurately owing to the larger detector area and higher count-rate detectors. Live X-ray imaging can now be performed with a significant amount of data collected in a matter of minutes. A great deal of information can be obtained from X-ray maps. This includes: elemental relationship or scatter diagram creation, elemental ratio mapping, chemical phase mapping (CPM) and quantitative X-ray maps. In obtaining quantitative X-ray maps, we are able to easily generate atomic number (Z), absorption (A), fluorescence (F), theoretical back-scatter coefficient (η), and quantitative total maps from each pixel in the image. This allows us to generate an image corresponding to each factor (for each element present). These images allow the user to predict and verify where problems are likely to occur in the images, and are especially helpful for examining possible interface artefacts. The post-processing techniques used to improve the quantitation of X-ray map data, and the development of post-processing techniques for improved characterisation, are covered in this paper.
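
    The per-pixel correction logic described above can be sketched as follows. The k-ratio maps and the Z, A and F correction-factor maps below are placeholder arrays (real factors come from a ZAF or phi-rho-z matrix-correction model); the sketch only illustrates how the per-element correction maps and the quantitative total map are assembled.

      # Hedged sketch: assembling quantitative maps from per-pixel k-ratios and correction maps.
      # All arrays are placeholders; real Z, A, F factors come from a matrix-correction model.
      import numpy as np

      shape = (256, 256)
      k_ratio = {"Fe": np.random.rand(*shape), "Ni": np.random.rand(*shape)}   # measured k-ratio maps
      Z = {"Fe": np.full(shape, 1.02), "Ni": np.full(shape, 0.99)}             # atomic-number correction maps
      A = {"Fe": np.full(shape, 0.95), "Ni": np.full(shape, 0.97)}             # absorption correction maps
      F = {"Fe": np.full(shape, 1.01), "Ni": np.full(shape, 1.00)}             # fluorescence correction maps

      conc = {el: k_ratio[el] * Z[el] * A[el] * F[el] for el in k_ratio}       # per-element quantitative maps
      total_map = sum(conc.values())                                           # quantitative total map, pixel by pixel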

  2. Multivariate linear regression analysis to identify general factors for quantitative predictions of implant stability quotient values

    PubMed Central

    Huang, Hairong; Xu, Zanzan; Shao, Xianhong; Wismeijer, Daniel; Sun, Ping; Wang, Jingxiao

    2017-01-01

    Objectives This study identified potential general influencing factors for a mathematical prediction of implant stability quotient (ISQ) values in clinical practice. Methods We collected the ISQ values of 557 implants from 2 different brands (SICace and Osstem) placed by 2 surgeons in 336 patients. Surgeon 1 placed 329 SICace implants, and surgeon 2 placed 113 SICace implants and 115 Osstem implants. ISQ measurements were taken at T1 (immediately after implant placement) and T2 (before dental restoration). A multivariate linear regression model was used to analyze the influence of the following 11 candidate factors for stability prediction: sex, age, maxillary/mandibular location, bone type, immediate/delayed implantation, bone grafting, insertion torque, I-stage or II-stage healing pattern, implant diameter, implant length and T1-T2 time interval. Results The need for bone grafting as a predictor significantly influenced ISQ values in all three groups at T1 (weight coefficients ranging from -4 to -5). In contrast, implant diameter consistently influenced the ISQ values in all three groups at T2 (weight coefficients ranging from 3.4 to 4.2). Other factors, such as sex, age, I/II-stage implantation and bone type, did not significantly influence ISQ values at T2, and implant length did not significantly influence ISQ values at T1 or T2. Conclusions These findings provide a rational basis for mathematical models to quantitatively predict the ISQ values of implants in clinical practice. PMID:29084260
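
    A minimal sketch of the regression analysis described above is given below. The file name and column names are assumptions made for illustration only, not the study's actual variables; the fitted coefficients would play the role of the reported weight coefficients.

      # Hedged sketch: multivariate linear regression of ISQ on the candidate predictors.
      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("isq_records.csv")   # hypothetical tidy table: one row per implant
      model = smf.ols(
          "ISQ_T2 ~ sex + age + jaw + bone_type + immediate + bone_graft + "
          "insertion_torque + healing_stage + implant_diameter + implant_length + interval_days",
          data=df,
      ).fit()
      print(model.summary())                # coefficients analogous to the reported weights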

  3. Biochemical interpretation of quantitative structure-activity relationships (QSAR) for biodegradation of N-heterocycles: a complementary approach to predict biodegradability.

    PubMed

    Philipp, Bodo; Hoff, Malte; Germa, Florence; Schink, Bernhard; Beimborn, Dieter; Mersch-Sundermann, Volker

    2007-02-15

    Prediction of the biodegradability of organic compounds is an ecologically desirable and economically feasible tool for estimating the environmental fate of chemicals. We combined quantitative structure-activity relationships (QSAR) with the systematic collection of biochemical knowledge to establish rules for the prediction of aerobic biodegradation of N-heterocycles. Validated biodegradation data of 194 N-heterocyclic compounds were analyzed using the MULTICASE method, which delivered two QSAR models based on 17 activating (QSAR 1) and on 16 inactivating molecular fragments (QSAR 2), which were statistically significantly linked to efficient or poor biodegradability, respectively. The percentages of correct classifications were over 99% for both models, and cross-validation resulted in 67.9% (QSAR 1) and 70.4% (QSAR 2) correct predictions. Biochemical interpretation of the activating and inactivating characteristics of the molecular fragments delivered plausible mechanistic interpretations and enabled us to establish the following biodegradation rules: (1) Target sites for amidohydrolases and for cytochrome P450 monooxygenases enhance biodegradation of nonaromatic N-heterocycles. (2) Target sites for molybdenum hydroxylases enhance biodegradation of aromatic N-heterocycles. (3) Target sites for hydration by a urocanase-like mechanism enhance biodegradation of imidazoles. Our complementary approach represents a feasible strategy for generating concrete rules for the prediction of biodegradability of organic compounds.

  4. Theoretical status of the lifetime predictions: (ΔΓ/Γ)_Bs, τ_B+/τ_Bd and τ_Λb/τ_Bd

    NASA Astrophysics Data System (ADS)

    Lenz, Alexander

    2002-04-01

    We give a review of the theoretical status of the lifetime predictions in the standard model. In the case of (ΔΓ/Γ)_Bs we are already at a rather advanced stage; we obtain (ΔΓ/Γ)_Bs = (9.3^{+3.4}_{−4.6})%. It seems difficult to improve these errors substantially. In addition, some experimental results are now available. For τ_B+/τ_Bd and τ_Λb/τ_Bd the theoretical status is much less advanced and the discrepancy between experiment and theory still remains. We conclude with a what-to-do list for theorists.

  5. A quantum theoretical study of polyimides

    NASA Technical Reports Server (NTRS)

    Burke, Luke A.

    1987-01-01

    One of the most important contributions of theoretical chemistry is the correct prediction of the properties of materials before any costly experimental work begins. This is especially true in the field of electrically conducting polymers. Development of the Valence Effective Hamiltonian (VEH) technique for the calculation of the band structure of polymers was initiated. The necessary VEH potentials were developed for the sulfur and oxygen atoms within the particular molecular environments, and the reasons for the success of this approximate method in predicting the optical properties of conducting polymers were explored.

  6. An Experimental and Theoretical Study on Cavitating Propellers.

    DTIC Science & Technology

    1982-10-01

    Keywords: cascade flow; theoretical supercavitating flow; performance prediction method; partially cavitating flow; supercavitating propellers. ...the present work was to develop an analytical tool for predicting the off-design performance of supercavitating propellers over a wide range of...operating conditions. Due to the complex nature of the flow phenomena, a lifting line theory simply combined with the two-dimensional supercavitating

  7. A combined theoretical and in vitro modeling approach for predicting the magnetic capture and retention of magnetic nanoparticles in vivo

    PubMed Central

    David, Allan E.; Cole, Adam J.; Chertok, Beata; Park, Yoon Shin; Yang, Victor C.

    2011-01-01

    Magnetic nanoparticles (MNP) continue to draw considerable attention as potential diagnostic and therapeutic tools in the fight against cancer. Although many interacting forces present themselves during magnetic targeting of MNP to tumors, most theoretical considerations of this process ignore all except for the magnetic and drag forces. Our validation of a simple in vitro model against in vivo data, and subsequent reproduction of the in vitro results with a theoretical model indicated that these two forces do indeed dominate the magnetic capture of MNP. However, because nanoparticles can be subject to aggregation, and large MNP experience an increased magnetic force, the effects of surface forces on MNP stability cannot be ignored. We accounted for the aggregating surface forces simply by measuring the size of MNP retained from flow by magnetic fields, and utilized this size in the mathematical model. This presumably accounted for all particle-particle interactions, including those between magnetic dipoles. Thus, our “corrected” mathematical model provided a reasonable estimate of not only fractional MNP retention, but also predicted the regions of accumulation in a simulated capillary. Furthermore, the model was also utilized to calculate the effects of MNP size and spatial location, relative to the magnet, on targeting of MNPs to tumors. This combination of an in vitro model with a theoretical model could potentially assist with parametric evaluations of magnetic targeting, and enable rapid enhancement and optimization of magnetic targeting methodologies. PMID:21295085
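
    The force balance retained by the model above (magnetic force versus Stokes drag on a particle in flow) can be checked with a back-of-the-envelope calculation such as the one below. Every numerical value is an illustrative assumption, not a parameter from the study.

      # Hedged comparison of the two dominant forces: magnetic force on a magnetizable
      # particle in a field gradient versus Stokes drag. All numbers are assumed values.
      import numpy as np

      r = 50e-9                      # hydrodynamic radius of the (possibly aggregated) MNP, m
      V = 4.0 / 3.0 * np.pi * r**3   # particle volume, m^3
      d_chi = 3.0                    # effective volume susceptibility difference (assumed)
      B, gradB = 0.3, 30.0           # field (T) and field gradient (T/m) near the magnet (assumed)
      mu0 = 4e-7 * np.pi             # vacuum permeability, T*m/A
      eta, v = 3e-3, 1e-3            # fluid viscosity (Pa*s) and flow speed (m/s), assumed

      F_mag = V * d_chi * B * gradB / mu0    # magnetic force, N
      F_drag = 6.0 * np.pi * eta * r * v     # Stokes drag, N
      print(F_mag, F_drag, F_mag / F_drag)   # a ratio well above 1 suggests capture is plausible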

  8. Dynamical attribution of oceanic prediction uncertainty in the North Atlantic: application to the design of optimal monitoring systems

    NASA Astrophysics Data System (ADS)

    Sévellec, Florian; Dijkstra, Henk A.; Drijfhout, Sybren S.; Germe, Agathe

    2017-11-01

    In this study, the relation between two approaches to assessing ocean predictability on interannual to decadal time scales is investigated. The first, pragmatic approach consists of sampling the initial condition uncertainty and assessing the predictability through the divergence of this ensemble in time. The second approach is provided by a theoretical framework to determine error growth by estimating optimal linear growing modes. In this paper, it is shown that under the assumption of linearized dynamics and normal distributions of the uncertainty, the exact quantitative spread of the ensemble can be determined from the theoretical framework. This spread is at least an order of magnitude less expensive to compute than the approximate solution given by the pragmatic approach. This result is applied to a state-of-the-art Ocean General Circulation Model to assess the predictability in the North Atlantic of four typical oceanic metrics: the strength of the Atlantic Meridional Overturning Circulation (AMOC), the intensity of its heat transport, the two-dimensional spatially averaged Sea Surface Temperature (SST) over the North Atlantic, and the three-dimensional spatially averaged temperature in the North Atlantic. For all tested metrics except SST, ~75% of the total uncertainty on interannual time scales can be attributed to oceanic initial condition uncertainty rather than atmospheric stochastic forcing. The theoretical method also provides the sensitivity pattern to the initial condition uncertainty, allowing for targeted measurements to improve the skill of the prediction. It is suggested that a relatively small fleet of autonomous underwater vehicles can reduce the uncertainty in AMOC strength prediction by 70% for 1-5 year lead times.
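
    A minimal sketch of the equivalence exploited above is shown below, assuming linear dynamics and Gaussian initial-condition uncertainty: the spread of a large ensemble of a linear metric matches the spread obtained by propagating the covariance once. The propagator, covariance, and observable are toy stand-ins, not model fields.

      # Hedged sketch: ensemble spread vs. single covariance propagation under linear dynamics.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 50
      M = np.eye(n) + 0.05 * rng.normal(size=(n, n))   # linear propagator over the lead time (toy)
      C0 = 0.1 * np.eye(n)                             # initial-condition error covariance (toy)
      g = rng.normal(size=n)                           # linear observable, e.g. an AMOC-strength index

      # "pragmatic" approach: propagate a large ensemble of perturbed initial conditions
      ens = rng.multivariate_normal(np.zeros(n), C0, size=20000) @ M.T
      spread_ensemble = np.std(ens @ g)

      # "theoretical" approach: propagate the covariance once
      spread_theory = np.sqrt(g @ M @ C0 @ M.T @ g)
      print(spread_ensemble, spread_theory)            # the two agree to within sampling error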

  9. Theoretical and experimental {alpha} decay half-lives of the heaviest odd-Z elements and general predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, H. F.; Royer, G.

    Theoretical α decay half-lives of the heaviest odd-Z nuclei are calculated using the experimental Q_α values. The barriers in the quasimolecular shape path are determined within a Generalized Liquid Drop Model (GLDM), and the WKB approximation is used. The results are compared with calculations using the Density-Dependent M3Y (DDM3Y) effective interaction and the Viola-Seaborg-Sobiczewski (VSS) formulas. The calculations provide consistent estimates for the half-lives of the α decay chains of these superheavy elements. The experimental data lie between the GLDM and VSS calculations in most cases. Predictions are provided for the α decay half-lives of other superheavy nuclei within the GLDM and VSS approaches using the recently extrapolated Q_α values of Audi, Wapstra, and Thibault [Nucl. Phys. A729, 337 (2003)], which may be used for future experimental assignment and identification.
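
    For orientation, the Viola-Seaborg-Sobiczewski estimate referred to above has the form sketched below. The coefficients are one commonly quoted parameterization and are assumed here only for illustration; Q_α is in MeV, the half-life comes out in seconds, and Z and N refer to the parent nucleus (convention assumed).

      # Hedged sketch of a Viola-Seaborg-type half-life estimate; coefficient values are assumed.
      def vss_half_life(Z, N, Q_alpha,
                        a=1.66175, b=-8.5166, c=-0.20228, d=-33.9069):
          if Z % 2 == 0 and N % 2 == 0:
              h = 0.0        # even-even: no hindrance term
          elif Z % 2 == 1 and N % 2 == 0:
              h = 0.772      # odd-Z, even-N (assumed hindrance value)
          elif Z % 2 == 0 and N % 2 == 1:
              h = 1.066      # even-Z, odd-N (assumed hindrance value)
          else:
              h = 1.114      # odd-odd (assumed hindrance value)
          log10_t = (a * Z + b) / Q_alpha**0.5 + c * Z + d + h
          return 10.0**log10_t

      # e.g. vss_half_life(Z=111, N=161, Q_alpha=10.8) gives an order-of-magnitude estimate in seconds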

  10. Theoretical calculations of physico-chemical and spectroscopic properties of bioinorganic systems: current limits and perspectives.

    PubMed

    Rokob, Tibor András; Srnec, Martin; Rulíšek, Lubomír

    2012-05-21

    In the last decade, we have witnessed substantial progress in the development of quantum chemical methodologies. Simultaneously, robust solvation models and various combined quantum and molecular mechanical (QM/MM) approaches have become an integral part of quantum chemical programs. Along with the steady growth of computer power and, more importantly, the dramatic increase of the computer performance to price ratio, this has led to a situation where computational chemistry, when exercised with the proper amount of diligence and expertise, reproduces, predicts, and complements the experimental data. In this perspective, we review some of the latest achievements in the field of theoretical (quantum) bioinorganic chemistry, concentrating mostly on accurate calculations of the spectroscopic and physico-chemical properties of open-shell bioinorganic systems by wave-function (ab initio) and DFT methods. In our opinion, the one-to-one mapping between the calculated properties and individual molecular structures represents a major advantage of quantum chemical modelling since this type of information is very difficult to obtain experimentally. Once (and only once) the physico-chemical, thermodynamic and spectroscopic properties of complex bioinorganic systems are quantitatively reproduced by theoretical calculations may we consider the outcome of theoretical modelling, such as reaction profiles and the various decompositions of the calculated parameters into individual spatial or physical contributions, to be reliable. In an ideal situation, agreement between theory and experiment may imply that the practical problem at hand, such as the reaction mechanism of the studied metalloprotein, can be considered as essentially solved.

  11. Mergers in ΛCDM: Uncertainties in Theoretical Predictions and Interpretations of the Merger Rate

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.; Croton, Darren; Bundy, Kevin; Khochfar, Sadegh; van den Bosch, Frank; Somerville, Rachel S.; Wetzel, Andrew; Keres, Dusan; Hernquist, Lars; Stewart, Kyle; Younger, Joshua D.; Genel, Shy; Ma, Chung-Pei

    2010-12-01

    Different theoretical methodologies lead to order-of-magnitude variations in predicted galaxy-galaxy merger rates. We examine how this arises and quantify the dominant uncertainties. Modeling of dark matter and galaxy inspiral/merger times contribute factor of ~2 uncertainties. Different estimates of the halo-halo merger rate, the subhalo "destruction" rate, and the halo merger rate with some dynamical friction time delay for galaxy-galaxy mergers, agree to within this factor of ~2, provided proper care is taken to define mergers consistently. There are some caveats: if halo/subhalo masses are not appropriately defined the major-merger rate can be dramatically suppressed, and in models with "orphan" galaxies and under-resolved subhalos the merger timescale can be severely over-estimated. The dominant differences in galaxy-galaxy merger rates between models owe to the treatment of the baryonic physics. Cosmological hydrodynamic simulations without strong feedback and some older semi-analytic models (SAMs), with known discrepancies in mass functions, can be biased by large factors (~5) in predicted merger rates. However, provided that models yield a reasonable match to the total galaxy mass function, the differences in properties of central galaxies are sufficiently small to alone contribute small (factor of ~1.5) additional systematics to merger rate predictions. But variations in the baryonic physics of satellite galaxies in models can also have a dramatic effect on merger rates. The well-known problem of satellite "over-quenching" in most current SAMs—whereby SAM satellite populations are too efficiently stripped of their gas—could lead to order-of-magnitude under-estimates of merger rates for low-mass, gas-rich galaxies. Models in which the masses of satellites are fixed by observations (or SAMs adjusted to resolve this "over-quenching") tend to predict higher merger rates, but with factor of ~2 uncertainties stemming from the uncertainty in those

  12. Potential usefulness of a topic model-based categorization of lung cancers as quantitative CT biomarkers for predicting the recurrence risk after curative resection

    NASA Astrophysics Data System (ADS)

    Kawata, Y.; Niki, N.; Ohmatsu, H.; Satake, M.; Kusumoto, M.; Tsuchida, T.; Aokage, K.; Eguchi, K.; Kaneko, M.; Moriyama, N.

    2014-03-01

    In this work, we investigate the potential usefulness of a topic model-based categorization of lung cancers as a quantitative CT biomarker for predicting the recurrence risk after curative resection. The elucidation of the subcategorization of a pulmonary nodule type in CT images is an important preliminary step towards developing nodule managements that are specific to each patient. We categorize lung cancers by analyzing volumetric distributions of CT values within lung cancers via a topic model such as latent Dirichlet allocation. Through applying our scheme to 3D CT images of non-small-cell lung cancer (maximum lesion size of 3 cm), we demonstrate the potential usefulness of the topic model-based categorization of lung cancers as quantitative CT biomarkers.
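
    In the spirit of the categorization above, the sketch below applies latent Dirichlet allocation to per-tumour histograms of CT values and assigns each tumour to its dominant topic. The bin counts, number of bins, and number of topics are illustrative assumptions, not the study's settings.

      # Hedged sketch: topic-model categorization of tumours from CT-value histograms.
      import numpy as np
      from sklearn.decomposition import LatentDirichletAllocation

      rng = np.random.default_rng(0)
      counts = rng.integers(0, 200, size=(40, 64))   # one row per tumour: voxel counts in 64 HU bins (toy)

      lda = LatentDirichletAllocation(n_components=3, random_state=0)
      theta = lda.fit_transform(counts)              # per-tumour topic proportions
      category = theta.argmax(axis=1)                # assign each tumour to its dominant topic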

  13. The value of assessing pulmonary venous flow velocity for predicting severity of mitral regurgitation: A quantitative assessment integrating left ventricular function

    NASA Technical Reports Server (NTRS)

    Pu, M.; Griffin, B. P.; Vandervoort, P. M.; Stewart, W. J.; Fan, X.; Cosgrove, D. M.; Thomas, J. D.

    1999-01-01

    Although alteration in pulmonary venous flow has been reported to relate to mitral regurgitant severity, it is also known to vary with left ventricular (LV) systolic and diastolic dysfunction. There are few data relating pulmonary venous flow to quantitative indexes of mitral regurgitation (MR). The object of this study was to assess quantitatively the accuracy of pulmonary venous flow for predicting MR severity by using transesophageal echocardiographic measurement in patients with variable LV dysfunction. This study consisted of 73 patients undergoing heart surgery with mild to severe MR. Regurgitant orifice area (ROA), regurgitant stroke volume (RSV), and regurgitant fraction (RF) were obtained by quantitative transesophageal echocardiography and proximal isovelocity surface area. Both left and right upper pulmonary venous flow velocities were recorded and their patterns classified by the ratio of systolic to diastolic velocity: normal (>/=1), blunted (<1), and systolic reversal (<0). Twenty-three percent of patients had discordant patterns between the left and right veins. When the most abnormal patterns either in the left or right vein were used for analysis, the ratio of peak systolic to diastolic flow velocity was negatively correlated with ROA (r = -0.74, P <.001), RSV (r = -0.70, P <.001), and RF (r = -0.66, P <.001) calculated by the Doppler thermodilution method; values were r = -0.70, r = -0.67, and r = -0.57, respectively (all P <.001), for indexes calculated by the proximal isovelocity surface area method. The sensitivity, specificity, and predictive values of the reversed pulmonary venous flow pattern for detecting a large ROA (>0.3 cm(2)) were 69%, 98%, and 97%, respectively. The sensitivity, specificity, and predictive values of the normal pulmonary venous flow pattern for detecting a small ROA (<0.3 cm(2)) were 60%, 96%, and 94%, respectively. However, the blunted pattern had low sensitivity (22%), specificity (61%), and predictive values (30

  14. A Novel Quantitative Prediction Approach for Astringency Level of Herbs Based on an Electronic Tongue

    PubMed Central

    Han, Xue; Jiang, Hong; Zhang, Dingkun; Zhang, Yingying; Xiong, Xi; Jiao, Jiaojiao; Xu, Runchun; Yang, Ming; Han, Li; Lin, Junzhi

    2017-01-01

    Background: Current astringency evaluation methods for herbs cannot satisfy the requirements of pharmaceutical processing; a new method is needed to assess astringency accurately. Methods: First, quinine, sucrose, citric acid, sodium chloride, monosodium glutamate, and tannic acid (TA) were analyzed by electronic tongue (e-tongue) to determine the approximate region of astringency in the partial least squares (PLS) map. Second, different concentrations of TA were measured to define the standard curve of astringency. Meanwhile, the coordinate-concentration relationship could be obtained by fitting the PLS abscissa of the standard curve against the corresponding concentration. Third, Chebulae Fructus (CF), Yuganzi throat tablets (YGZTT), and Sanlejiang oral liquid (SLJOL) were tested to define their regions in the PLS map. Finally, the astringent intensities of the samples were calculated from the standard coordinate-concentration relationship and expressed as concentrations of TA. Euclidean distance (Ed) analysis and a human sensory test were then performed to verify the results. Results: The fitted relationship between concentration and the abscissa of TA was Y = 0.00498 × e^(−X/0.51035) + 0.10905 (r = 0.999). The astringency of 1 and 0.1 mg/mL CF was predicted at 0.28 and 0.12 mg/mL TA; of 2 and 0.2 mg/mL YGZTT at 0.18 and 0.11 mg/mL TA; and of 0.002 and 0.0002 mg/mL SLJOL at 0.15 and 0.10 mg/mL TA. The validation results showed that the astringency predicted by the e-tongue was largely consistent with human sensory results and was more accurate than Ed analysis. Conclusion: The study indicated that the established method is objective and feasible, and it provides a new quantitative method for the astringency of herbs. SUMMARY: The astringency of Chebulae Fructus, Yuganzi throat tablets, and Sanlejiang oral liquid was predicted by electronic tongue; Euclidean distance analysis and a human sensory test verified the results; a new strategy that is objective, simple, and sensitive to compare astringent intensity of
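
    The standard-curve step can be sketched as below: the reported functional form Y = a·e^(−X/b) + c is fitted to tannic-acid calibration points, and a sample's astringency is then read off as an equivalent TA concentration. The calibration points and the sample abscissa are illustrative, not the study's measurements.

      # Hedged sketch: fit the astringency standard curve and express a sample as a TA concentration.
      import numpy as np
      from scipy.optimize import curve_fit

      def standard_curve(X, a, b, c):            # Y = TA concentration (mg/mL), X = PLS abscissa
          return a * np.exp(-X / b) + c

      X_ta = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])          # PLS abscissa of TA standards (toy)
      Y_ta = np.array([0.15, 0.12, 0.114, 0.111, 0.110])    # TA concentration, mg/mL (toy)
      (a, b, c), _ = curve_fit(standard_curve, X_ta, Y_ta, p0=(0.005, 0.5, 0.11))

      x_sample = 0.2                                        # PLS abscissa of a herb sample (toy)
      print("predicted astringency: %.3f mg/mL TA" % standard_curve(x_sample, a, b, c))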

  15. Advanced 2-dimensional quantitative coronary angiographic analysis for prediction of fractional flow reserve in intermediate coronary stenoses.

    PubMed

    Opolski, Maksymilian P; Pregowski, Jerzy; Kruk, Mariusz; Kepka, Cezary; Staruch, Adam D; Witkowski, Adam

    2014-07-01

    The widespread clinical application of coronary computed tomography angiography (CCTA) has resulted in increased referral patterns of patients with intermediate coronary stenoses to invasive coronary angiography. We evaluated the application of advanced quantitative coronary angiography (A-QCA) for predicting fractional flow reserve (FFR) in intermediate coronary lesions detected on CCTA. Fifty-six patients with 66 single intermediate coronary lesions (≥ 50% to 80% stenosis) on CCTA prospectively underwent coronary angiography and FFR. A-QCA including calculation of the Poiseuille-based index defined as the ratio of lesion length to the fourth power of the minimal lumen diameter (MLD) was performed. Significant stenosis was defined as FFR ≤ 0.80. The mean FFR was 0.86 ± 0.09, and 18 lesions (27%) were functionally significant. FFR correlated with lesion length (R=-0.303, P=0.013), MLD (R=0.527, P<0.001), diameter stenosis (R=-0.404, P=0.001), minimum lumen area (MLA) (R=0.530, P<0.001), lumen stenosis (R=-0.400, P=0.001), and Poiseuille-based index (R=-0.602, P<0.001). The optimal cutoff values for MLD, MLA, diameter stenosis, and lumen stenosis were ≤ 1.3 mm, ≤ 1.5 mm, >44%, and >69%, respectively (maximum negative predictive value of 94% for MLA, maximum positive predictive value of 58% for diameter stenosis). The Poiseuille-based index was the most accurate (C statistic 0.86, sensitivity 100%, specificity 71%, positive predictive value 56%, and negative predictive value 100%) predictor of FFR ≤ 0.80, but showed the lowest interobserver agreement (intraclass correlation coefficient 0.37). A-QCA might be used to rule out significant ischemia in intermediate stenoses detected by CCTA. The diagnostic application of the Poiseuille-based angiographic index is precluded by its high interobserver variability.
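
    A minimal sketch of the Poiseuille-based index described above (lesion length divided by the fourth power of the minimal lumen diameter) follows. The example measurements and the decision threshold are assumptions for illustration only; the paper derives its own cutoff from ROC analysis.

      # Hedged sketch: Poiseuille-based angiographic index and a threshold-based call.
      def poiseuille_index(lesion_length_mm, mld_mm):
          return lesion_length_mm / mld_mm**4

      idx = poiseuille_index(lesion_length_mm=14.0, mld_mm=1.3)
      significant = idx > 5.0    # illustrative cutoff only; a real cutoff would come from ROC analysis
      print(idx, significant)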

  16. Evaluation of New Zealand’s High-Seas Bottom Trawl Closures Using Predictive Habitat Models and Quantitative Risk Assessment

    PubMed Central

    Penney, Andrew J.; Guinotte, John M.

    2013-01-01

    United Nations General Assembly Resolution 61/105 on sustainable fisheries (UNGA 2007) establishes three difficult questions for participants in high-seas bottom fisheries to answer: 1) Where are vulnerable marine ecosystems (VMEs) likely to occur?; 2) What is the likelihood of fisheries interaction with these VMEs?; and 3) What might qualify as adequate conservation and management measures to prevent significant adverse impacts? This paper develops an approach to answering these questions for bottom trawling activities in the Convention Area of the South Pacific Regional Fisheries Management Organisation (SPRFMO) within a quantitative risk assessment and cost : benefit analysis framework. The predicted distribution of deep-sea corals from habitat suitability models is used to answer the first question. Distribution of historical bottom trawl effort is used to answer the second, with estimates of seabed areas swept by bottom trawlers being used to develop discounting factors for reduced biodiversity in previously fished areas. These are used in a quantitative ecological risk assessment approach to guide spatial protection planning to address the third question. The coral VME likelihood (average, discounted, predicted coral habitat suitability) of existing spatial closures implemented by New Zealand within the SPRFMO area is evaluated. Historical catch is used as a measure of cost to industry in a cost : benefit analysis of alternative spatial closure scenarios. Results indicate that current closures within the New Zealand SPRFMO area bottom trawl footprint are suboptimal for protection of VMEs. Examples of alternative trawl closure scenarios are provided to illustrate how the approach could be used to optimise protection of VMEs under chosen management objectives, balancing protection of VMEs against economic loss to commercial fishers from closure of historically fished areas. PMID:24358162

  17. Theoretical survey on positronium formation and ionisation in positron atom scattering

    NASA Technical Reports Server (NTRS)

    Basu, Madhumita; Ghosh, A. S.

    1990-01-01

    Recent theoretical studies of the formation of exotic atoms in positron-hydrogen, positron-helium and positron-lithium scattering, especially in the intermediate energy region, are surveyed and reported. The ionization of these targets by positron impact is also considered. Theoretical predictions for both processes are compared with existing measured values.

  18. Advanced turboprop noise prediction based on recent theoretical results

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Padula, S. L.; Dunn, M. H.

    1987-01-01

    The development of a high-speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. A grid-size study addressing accuracy and speed of execution on a computer is also presented. The code is tested against an earlier Langley code, and considerable increases in accuracy and speed of execution are observed. Some examples of noise prediction for a high-speed propeller, for which acoustic test data are available, are given. A brief derivation of the formulations used is given in an appendix.

  19. Aluminum/hydrocarbon gel propellants: An experimental and theoretical investigation of secondary atomization and predicted rocket engine performance

    NASA Astrophysics Data System (ADS)

    Mueller, Donn Christopher

    1997-12-01

    Experimental and theoretical investigations of aluminum/hydrocarbon gel propellant secondary atomization and its potential effects on rocket engine performance were conducted. In the experimental efforts, a dilute, polydisperse, gel droplet spray was injected into the postflame region of a burner and droplet size distributions were measured as a function of position above the burner using a laser-based sizing/velocimetry technique. The sizing/velocimetry technique was developed to measure droplets in the 10-125 μm size range and avoids size-biased detection through the use of a uniformly illuminated probe volume. The technique was used to determine particle size distributions and velocities at various axial locations above the burner for JP-10, and 50 and 60 wt% aluminum gels. Droplet shell formation models were applied to aluminum/hydrocarbon gels to examine particle size and mass loading effects on the minimum droplet diameter that will permit secondary atomization. This diameter was predicted to be 38.1 and 34.7 μm for the 50 and 60 wt% gels, which is somewhat greater than the experimentally measured 30 and 25 μm diameters. In the theoretical efforts, three models were developed and an existing rocket code was exercised to gain insights into secondary atomization. The first model was designed to predict gel droplet properties and shell stresses after rigid shell formation, while the second, a one-dimensional gel spray combustion model, was created to quantify the secondary atomization process. Experimental and numerical comparisons verify that secondary atomization occurs in 10-125 μm diameter particles, although an exact model could not be derived. The third model, a one-dimensional gel-fueled rocket combustion chamber, was developed to evaluate secondary atomization effects on various engine performance parameters. Results show that only modest secondary atomization may be required to reduce propellant burnout distance and radiation losses. A solid propellant

  20. A semi-quantitative World Health Organization grading scheme evaluating worst tumor differentiation predicts disease-free survival in oral squamous carcinoma patients.

    PubMed

    Jain, Dhruv; Tikku, Gargi; Bhadana, Pallavi; Dravid, Chandrashekhar; Grover, Rajesh Kumar

    2017-08-01

    We investigated World Health Organization (WHO) grading and pattern-of-invasion-based histological schemes as independent predictors of disease-free survival in oral squamous carcinoma patients. Tumor resection slides of eighty-seven oral squamous carcinoma patients [pTNM: I&II/III&IV-32/55] were evaluated. Besides examining various patterns of invasion, the invasive front grade and the predominant and worst (highest) WHO grade were recorded. For worst WHO grading, the poor-undifferentiated component was estimated semi-quantitatively at the advancing tumor edge (invasive growth front) in histology sections. Tumor recurrence was observed in 31 (35.6%) cases. The 2-year disease-free survival was 47% [median: 656 days; follow-up: 14-1450 days]. Using receiver operating characteristic curves, we defined a poor-undifferentiated component exceeding 5% of the tumor as the cutoff to assign an oral squamous carcinoma as grade 3 when following worst WHO grading. Kaplan-Meier curves for disease-free survival revealed prognostic associations with nodal involvement, tumor size, worst WHO grading, most common pattern of invasion, and invasive pattern grading score (sum of the two most predominant patterns of invasion). In further multivariate analysis, tumor size (>2.5 cm) and worst WHO grading (grade-3 tumors) independently predicted reduced disease-free survival [HR, 2.85; P=0.028 and HR, 3.37; P=0.031, respectively]. The inter-observer agreement was moderate for observers who semi-quantitatively estimated the percentage of poor-undifferentiated morphology in oral squamous carcinomas. Our results support the value of the semi-quantitative method of assigning tumors as grade 3 with worst WHO grading for predicting reduced disease-free survival. Despite limitations, of the various histological tumor stratification schemes, WHO grading holds adjunctive value for its prognostic role, ease and universal familiarity. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Synthesis, Spectra, and Theoretical Investigations of 1,3,5-Triazines Compounds as Ultraviolet Rays Absorber Based on Time-Dependent Density Functional Calculations and three-Dimensional Quantitative Structure-Property Relationship.

    PubMed

    Wang, Xueding; Xu, Yilian; Yang, Lu; Lu, Xiang; Zou, Hao; Yang, Weiqing; Zhang, Yuanyuan; Li, Zicheng; Ma, Menglin

    2018-03-01

    A series of 1,3,5-triazines were synthesized and their UV absorption properties were tested. Computational chemistry methods were used to construct a quantitative structure-property relationship (QSPR), which was used for the computer-aided design of new 1,3,5-triazine ultraviolet absorber compounds. The experimental UV absorption data are in good agreement with the data predicted using time-dependent density functional theory (TD-DFT) [B3LYP/6-311+G(d,p)]. A suitable forecasting model (R > 0.8, P < 0.0001) was revealed. A predictive three-dimensional quantitative structure-property relationship (3D-QSPR) model was established using the multifit molecular alignment rule of the Sybyl program, whose conclusions are consistent with the TD-DFT calculations. The exceptional photostability of such ultraviolet absorber compounds was studied and attributed principally to their ability to undergo excited-state deactivation via ultrafast excited-state intramolecular proton transfer (ESIPT). The intramolecular hydrogen bond (IMHB) of the 1,3,5-triazine compounds is the basis for the excited-state proton transfer; it was explored through IR spectroscopy, UV spectra, structural and energetic aspects of different conformers, and frontier molecular orbital analysis.

  2. QSAR prediction of additive and non-additive mixture toxicities of antibiotics and pesticide.

    PubMed

    Qin, Li-Tang; Chen, Yu-Han; Zhang, Xin; Mo, Ling-Yun; Zeng, Hong-Hu; Liang, Yan-Peng

    2018-05-01

    Antibiotics and pesticides may exist as mixtures in the real environment. The combined effect of a mixture can be either additive or non-additive (synergism and antagonism). However, no effective approach exists for predicting the synergistic and antagonistic toxicities of mixtures. In this study, we developed a quantitative structure-activity relationship (QSAR) model for the toxicities (half effect concentration, EC50) of 45 binary and multi-component mixtures composed of two antibiotics and four pesticides. The acute toxicities of single compounds and mixtures toward Aliivibrio fischeri were tested. A genetic algorithm was used to obtain the optimized model with three theoretical descriptors. Various internal and external validation techniques indicated that the QSAR model, with a coefficient of determination of 0.9366 and a root mean square error of 0.1345, predicted the 45 mixture toxicities, which presented additive, synergistic, and antagonistic effects. Compared with the traditional concentration addition and independent action models, the QSAR model exhibited an advantage in predicting mixture toxicity. Thus, the presented approach may be able to fill the gaps in predicting non-additive toxicities of binary and multi-component mixtures. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. A Quantitative Model of Expert Transcription Typing

    DTIC Science & Technology

    1993-03-08

    side of pure psychology, several researchers have argued that transcription typing is a particularly good activity for the study of human skilled...phenomenon with a quantitative METT prediction. The first, quick and dirty analysis gives a good prediction of the copy span, in fact, it is even...typing, it should be demonstrated that the mechanism of the model does not get in the way of good predictions. If situations occur where the entire

  4. Statistical design of quantitative mass spectrometry-based proteomic experiments.

    PubMed

    Oberg, Ann L; Vitek, Olga

    2009-05-01

    We review the fundamental principles of statistical experimental design, and their application to quantitative mass spectrometry-based proteomics. We focus on class comparison using Analysis of Variance (ANOVA), and discuss how randomization, replication and blocking help avoid systematic biases due to the experimental procedure, and help optimize our ability to detect true quantitative changes between groups. We also discuss the issues of pooling multiple biological specimens for a single mass analysis, and calculation of the number of replicates in a future study. When applicable, we emphasize the parallels between designing quantitative proteomic experiments and experiments with gene expression microarrays, and give examples from that area of research. We illustrate the discussion using theoretical considerations, and using real-data examples of profiling of disease.
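
    As an illustration of the blocking and class-comparison ideas discussed above, the sketch below fits a randomized-complete-block analysis of variance to one protein's log-intensities, separating the nuisance factor (e.g. mass-spectrometry run) from the group effect of interest. The file and column names (disease_group, ms_run) are assumptions made for the example.

      # Hedged sketch: two-way ANOVA with blocking for a quantitative proteomics comparison.
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      df = pd.read_csv("protein_intensities.csv")    # hypothetical tidy table: one row per measurement
      fit = smf.ols("log_intensity ~ C(disease_group) + C(ms_run)", data=df).fit()
      print(sm.stats.anova_lm(fit, typ=2))           # F-test for the group effect, adjusted for the block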

  5. Validation of finite element computations for the quantitative prediction of underwater noise from impact pile driving.

    PubMed

    Zampolli, Mario; Nijhof, Marten J J; de Jong, Christ A F; Ainslie, Michael A; Jansen, Erwin H W; Quesson, Benoit A J

    2013-01-01

    The acoustic radiation from a pile being driven into the sediment by a sequence of hammer strikes is studied with a linear, axisymmetric, structural acoustic frequency domain finite element model. Each hammer strike results in an impulsive sound that is emitted from the pile and then propagated in the shallow water waveguide. Measurements from accelerometers mounted on the head of a test pile and from hydrophones deployed in the water are used to validate the model results. Transfer functions between the force input at the top of the anvil and field quantities, such as acceleration components in the structure or pressure in the fluid, are computed with the model. These transfer functions are validated using accelerometer or hydrophone measurements to infer the structural forcing. A modeled hammer forcing pulse is used in the successive step to produce quantitative predictions of sound exposure at the hydrophones. The comparison between the model and the measurements shows that, although several simplifying assumptions were made, useful predictions of noise levels based on linear structural acoustic models are possible. In the final part of the paper, the model is used to characterize the pile as an acoustic radiator by analyzing the flow of acoustic energy.

  6. Multi-Target Mining of Alzheimer Disease Proteome with Hansch's QSBR-Perturbation Theory and Experimental-Theoretic Study of New Thiophene Isosters of Rasagiline.

    PubMed

    Abeijon, Paula; Garcia-Mera, Xerardo; Caamano, Olga; Yanez, Matilde; Lopez-Castro, Edgar; Romero-Duran, Francisco J; Gonzalez-Diaz, Humberto

    2017-01-01

    Hansch's model is a classic approach to Quantitative Structure-Binding Relationship (QSBR) problems in Pharmacology and Medicinal Chemistry. Hansch QSAR equations use electronic-structure and lipophilicity descriptors as input parameters. In this work, we review Hansch's analysis. We also developed a new type of PT-QSBR Hansch model based on Perturbation Theory (PT) and the QSBR approach for a large number of drugs reported in ChEMBL. The targets are proteins expressed by the Hippocampus region of the brain of Alzheimer Disease (AD) patients. The model correctly predicted 49312 out of 53783 negative perturbations (Specificity = 91.7%) and 16197 out of 21245 positive perturbations (Sensitivity = 76.2%) in the training series. The model also correctly predicted 49312/53783 (91.7%) negative and 16197/21245 (76.2%) positive perturbations in the external validation series. We applied our model in theoretical-experimental studies of organic synthesis, pharmacological assay, and prediction of unmeasured results for a series of compounds similar to Rasagiline (the reference compound) with potential neuroprotective effect. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  7. From information theory to quantitative description of steric effects.

    PubMed

    Alipour, Mojtaba; Safari, Zahra

    2016-07-21

    Immense efforts have been made in the literature to apply information theory descriptors to investigating the electronic structure of various systems. In the present study, information theoretic quantities, such as the Fisher information, Shannon entropy, Onicescu information energy, and Ghosh-Berkowitz-Parr entropy, have been used to present a quantitative description of one of the most widely used concepts in chemistry, namely steric effects. Taking the experimental steric scales for different compounds as benchmark sets, there are reasonable linear relationships between the experimental scales of the steric effects and the theoretical values of steric energies calculated from information theory functionals. Perusing the results obtained from the information theoretic quantities with the two representations of electron density and shape function, the Shannon entropy has the best performance for this purpose. The usefulness of considering the contributions of functional-group steric energies and geometries, on the one hand, and of dissecting the effects of both global and local information measures simultaneously, on the other, has also been explored. Furthermore, the utility of the information functionals for the description of steric effects in several chemical transformations, such as electrophilic and nucleophilic reactions and host-guest chemistry, has been analyzed. The functionals of information theory correlate remarkably well with the stability of systems and with the experimental scales. Overall, these findings show that the information theoretic quantities can be introduced as quantitative measures of steric effects and provide further evidence of the quality of information theory in helping theoreticians and experimentalists to interpret different problems in real systems.
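
    For reference, the information-theoretic quantities named above are commonly written, for the electron density ρ(r), in the forms below (the shape-function versions replace ρ by σ = ρ/N); the notation is assumed here for illustration.

      S_{\mathrm{Shannon}} = -\int \rho(\mathbf{r})\,\ln \rho(\mathbf{r})\, d\mathbf{r}, \qquad
      I_{\mathrm{Fisher}} = \int \frac{|\nabla\rho(\mathbf{r})|^{2}}{\rho(\mathbf{r})}\, d\mathbf{r}, \qquad
      E_{\mathrm{Onicescu}} = \int \rho^{2}(\mathbf{r})\, d\mathbf{r}.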

  8. The quantitative lung index and the prediction of survival in fetuses with congenital diaphragmatic hernia.

    PubMed

    Illescas, Tamara; Rodó, Carlota; Arévalo, Silvia; Giné, Carles; Peiró, José L; Carreras, Elena

    2016-03-01

    The lung-to-head ratio (LHR) is routinely used to select the best candidates for prenatal surgery and to follow-up the fetuses with congenital diaphragmatic hernia (CDH). Since this index is gestation-dependent, the quantitative lung index (QLI) was proposed as an alternative parameter that stays constant throughout pregnancy. Our objective was to study the performance of QLI to predict survival in fetuses with CDH. Observational retrospective study of fetuses with isolated CDH, referred to our center. LHR was originally used for the prenatal surgery evaluation. We calculated the QLI and compared the performance of both indexes (QLI and LHR) to predict survival. From January-2009 to February-2015 we followed 31 fetuses with isolated CDH. The mean QLI was 0.66 (95% CI: 0.57-0.75) for survivors and 0.41 (95% CI: 0.25-0.58) for non-survivors (p<0.01) and the mean LHR was 1.38 (95% CI: 1.17-1.60) for survivors and 0.91 (95% CI: 0.57-1.25) for non-survivors (p<0.02). All operated fetuses (n=12) had a LHR <1 and a QLI <0.5 and none of them survived when the QLI was <0.32. When separately considering the prenatal surgery status, the mean values of the QLI (but not those of the LHR) were still significantly different between survivors and non-survivors. The comparative ROC curves showed a better performance of the QLI with respect to the LHR for the prediction of survival, especially in the group of operated fetuses, although differences were not statistically significant. The QLI seems to be a better predictor for survival than the LHR, especially for the group of fetuses undergoing prenatal surgery. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. Quantitative signal intensity alteration in infrapatellar fat pad predict incident radiographic osteoarthritis: the Osteoarthritis Initiative.

    PubMed

    Wang, Kang; Ding, Changhai; Hannon, Michael J; Chen, Zhongshan; Kwoh, C Kent; Hunter, David J

    2018-04-12

    To determine whether infrapatellar fat pad (IPFP) signal intensity (SI) measures are predictive of incident radiographic osteoarthritis (iROA) over 4 years in the OA Initiative (OAI) study. Case knees (n=355) defined by iROA were matched one-to-one by gender, age and radiographic status with control knees. T2-weighted MR images were assessed at P0 (the visit when iROA was found on radiograph), P-1 (1 year prior to P0) and baseline, and utilized to assess IPFP SI semi-automatically using MATLAB. Conditional logistic regression analyses were used to assess the risk of iROA associated with IPFP SI alteration after adjustment for covariates. Participants were on average 60.2 years old, predominantly female (66.7%) and overweight (mean BMI: 28.3). Baseline IPFP measures, including the mean value and standard deviation of IPFP SI [Mean(IPFP), sDev(IPFP)] (HR, 95%CI: 5.2, 1.1 to 23.6 and 5.7, 2.2 to 14.5, respectively), the mean value and standard deviation of IPFP high SI [Mean(H), sDev(H)] (HR, 95%CI: 3.3, 1.7 to 6.4 and 3.1, 1.3 to 7.7, respectively), the median value and upper quartile value of IPFP high SI [Median(H), UQ(H)], and the clustering effect of high SI [Clustering factor(H)], were associated with iROA during 4 years. All P-1 IPFP measures were associated with iROA after 12 months. P0 IPFP SI measures were all associated with iROA. The quantitative segmentation of high signal in the IPFP confirms previous work based on semiquantitative assessment, suggesting its predictive validity. The IPFP high SI alteration could be an important imaging biomarker to predict the occurrence of radiographic OA. This article is protected by copyright. All rights reserved.

  10. Epidemiological survey of the feasibility of broadband ultrasound attenuation measured using calcaneal quantitative ultrasound to predict the incidence of falls in the middle aged and elderly.

    PubMed

    Ou, Ling-Chun; Chang, Yin-Fan; Chang, Chin-Sung; Chiu, Ching-Ju; Chao, Ting-Hsing; Sun, Zih-Jie; Lin, Ruey-Mo; Wu, Chih-Hsing

    2017-01-09

    We investigated whether calcaneal quantitative ultrasound (QUS-C) is a feasible tool for predicting the incidence of falls. Prospective epidemiological cohort study. Community-dwelling people sampled in central western Taiwan. A cohort of community-dwelling people who were ≥40 years old (men: 524; women: 676) in 2009-2010. Follow-up questionnaires were completed by 186 men and 257 women in 2012. Structured questionnaires and broadband ultrasound attenuation (BUA) data were obtained in 2009-2010 using QUS-C, and follow-up surveys were done in a telephone interview in 2012. Using a binary logistic regression model, the risk factors associated with a new fall during follow-up were analysed with all significant variables from the bivariate comparisons and theoretically important variables. The incidence of falls was determined when the first new fall occurred during the follow-up period. The mean follow-up time was 2.83 years. The total incidence of falls was 28.0 per 1000 person-years for the ≥40 year old group (all participants), 23.3 per 1000 person-years for the 40-70 year old group, and 45.6 per 1000 person-years for the ≥70 year old group. Using multiple logistic regression models, the independent factors were current smoking, living alone, psychiatric drug usage and lower BUA (OR 0.93; 95% CI 0.88 to 0.99, p<0.05) in the ≥70 year old group. The incidence of falls was highest in the ≥70 year old group. Using QUS-C-derived BUA is feasible for predicting the incidence of falls in community-dwelling elderly people aged ≥70 years. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  11. Theoretical kinetics of O + C2H4

    DOE PAGES

    Li, Xiaohu; Jasper, Ahren W.; Zádor, Judit; ...

    2016-06-01

    The reaction of atomic oxygen with ethylene is a fundamental oxidation step in combustion and is prototypical of reactions in which oxygen adds to double bonds. For 3O + C2H4, and for this class of reactions generally, decomposition of the initial adduct via spin-allowed reaction channels on the triplet surface competes with intersystem crossing (ISC) and a set of spin-forbidden reaction channels on the ground-state singlet surface. The two surfaces share some bimolecular products but feature different intermediates, pathways, and transition states, and the overall product branching is therefore a sensitive function of the ISC rate. The 3O + C2H4 reaction has been extensively studied, but previous experimental work has not provided detailed branching information at elevated temperatures, while previous theoretical studies have employed empirical treatments of ISC. Here we predict the kinetics of 3O + C2H4 using an ab initio transition state theory based master equation (AITSTME) approach that includes an a priori description of ISC. Specifically, the ISC rate is calculated using Landau-Zener statistical theory, consideration of the four lowest-energy electronic states, and a direct classical trajectory study of the product branching immediately after ISC. The present theoretical results are largely in good agreement with existing low-temperature experimental kinetics and molecular beam studies. Good agreement is also found with past theoretical work, with the notable exception of the predicted product branching at elevated temperatures. Above ~1000 K, we predict CH2CHO + H and CH2 + CH2O as the major products, which differs from the room-temperature preference for CH3 + HCO (which is assumed to remain at higher temperatures in some models) and from the prediction of a previous detailed master equation study.
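
    For context, the single-passage Landau-Zener hopping probability that underlies statistical ISC treatments of this kind is commonly written as below (notation assumed here): H_12 is the spin-orbit coupling at the crossing seam, v the velocity along the crossing coordinate, and |ΔF| the difference of the slopes of the two surfaces there.

      P_{\mathrm{hop}} = 1 - \exp\!\left(-\frac{2\pi H_{12}^{2}}{\hbar\, v\, |\Delta F|}\right).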

  12. Rapid Method Development in Hydrophilic Interaction Liquid Chromatography for Pharmaceutical Analysis Using a Combination of Quantitative Structure-Retention Relationships and Design of Experiments.

    PubMed

    Taraji, Maryam; Haddad, Paul R; Amos, Ruth I J; Talebi, Mohammad; Szucs, Roman; Dolan, John W; Pohl, Chris A

    2017-02-07

    A design-of-experiment (DoE) model was developed, able to describe the retention times of a mixture of pharmaceutical compounds in hydrophilic interaction liquid chromatography (HILIC) under all possible combinations of acetonitrile content, salt concentration, and mobile-phase pH with R2 > 0.95. Further, a quantitative structure-retention relationship (QSRR) model was developed to predict retention times for new analytes, based only on their chemical structures, with a root-mean-square error of prediction (RMSEP) as low as 0.81%. A compound classification based on the concept of similarity was applied prior to QSRR modeling. Finally, we utilized a combined QSRR-DoE approach to propose an optimal design space in a quality-by-design (QbD) workflow to facilitate HILIC method development. The mathematical QSRR-DoE model was shown to be highly predictive when applied to an independent test set of unseen compounds in unseen conditions, with a RMSEP value of 5.83%. The QSRR-DoE computed retention times of pharmaceutical test analytes, and the subsequently calculated separation selectivity, were used to optimize the chromatographic conditions for efficient separation of targets. A Monte Carlo simulation was performed to evaluate the risk of uncertainty in the model's prediction, and to define the design space where the desired quality criterion was met. Experimental realization of peak selectivity between targets under the selected optimal working conditions confirmed the theoretical predictions. These results demonstrate how discovery of optimal conditions for the separation of new analytes can be accelerated by the use of appropriate theoretical tools.
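
    A minimal sketch of the Monte Carlo step described above follows: the model's prediction error is propagated into the probability that a critical pair of analytes stays resolved at a candidate condition, which is the quantity mapped over the design space. The retention times, error magnitude, and resolution criterion below are illustrative assumptions.

      # Hedged sketch: Monte Carlo propagation of retention-time prediction uncertainty.
      import numpy as np

      rng = np.random.default_rng(0)
      t_pred = np.array([4.2, 4.6])        # predicted retention times of a critical pair, min (toy)
      rmsep = 0.0583                       # relative prediction error (5.83%) taken from the abstract
      n_trials = 10000

      t_sim = t_pred * (1.0 + rmsep * rng.standard_normal((n_trials, 2)))
      separated = np.abs(t_sim[:, 0] - t_sim[:, 1]) > 0.2     # assumed minimum separation, min
      print("probability of meeting the quality criterion:", separated.mean())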

  13. Weather Prediction Center (WPC) Home Page

    Science.gov Websites

    Gridded quantitative precipitation forecasts, quantitative precipitation forecast discussions (Day 1-3), and winter weather outlook probabilities are available from the NWS Weather Prediction Center among its short-range and medium-range forecast products.

  14. The mathematics of cancer: integrating quantitative models.

    PubMed

    Altrock, Philipp M; Liu, Lin L; Michor, Franziska

    2015-12-01

    Mathematical modelling approaches have become increasingly abundant in cancer research. The complexity of cancer is well suited to quantitative approaches as it provides challenges and opportunities for new developments. In turn, mathematical modelling contributes to cancer research by helping to elucidate mechanisms and by providing quantitative predictions that can be validated. The recent expansion of quantitative models addresses many questions regarding tumour initiation, progression and metastases as well as intra-tumour heterogeneity, treatment responses and resistance. Mathematical models can complement experimental and clinical studies, but also challenge current paradigms, redefine our understanding of mechanisms driving tumorigenesis and shape future research in cancer biology.

  15. Toward the prediction of class I and II mouse major histocompatibility complex-peptide-binding affinity: in silico bioinformatic step-by-step guide using quantitative structure-activity relationships.

    PubMed

    Hattotuwagama, Channa K; Doytchinova, Irini A; Flower, Darren R

    2007-01-01

    Quantitative structure-activity relationship (QSAR) analysis is a cornerstone of modern informatics. Predictive computational models of peptide-major histocompatibility complex (MHC)-binding affinity based on QSAR technology have now become important components of modern computational immunovaccinology. Historically, such approaches have been built around semiqualitative, classification methods, but these are now giving way to quantitative regression methods. We review three methods--a 2D-QSAR additive-partial least squares (PLS) and a 3D-QSAR comparative molecular similarity index analysis (CoMSIA) method--which can identify the sequence dependence of peptide-binding specificity for various class I MHC alleles from the reported binding affinities (IC50) of peptide sets. The third method is an iterative self-consistent (ISC) PLS-based additive method, which is a recently developed extension to the additive method for the affinity prediction of class II peptides. The QSAR methods presented here have established themselves as immunoinformatic techniques complementary to existing methodology, useful in the quantitative prediction of binding affinity: current methods for the in silico identification of T-cell epitopes (which form the basis of many vaccines, diagnostics, and reagents) rely on the accurate computational prediction of peptide-MHC affinity. We have reviewed various human and mouse class I and class II allele models. Studied alleles comprise HLA-A*0101, HLA-A*0201, HLA-A*0202, HLA-A*0203, HLA-A*0206, HLA-A*0301, HLA-A*1101, HLA-A*3101, HLA-A*6801, HLA-A*6802, HLA-B*3501, H2-K(k), H2-K(b), H2-D(b), HLA-DRB1*0101, HLA-DRB1*0401, HLA-DRB1*0701, I-A(b), I-A(d), I-A(k), I-A(S), I-E(d), and I-E(k). In this chapter we provide a step-by-step guide to building such models and assessing their reliability; the resulting models represent an advance on existing methods. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method
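
    The additive idea behind the 2D-QSAR/PLS approach reviewed above can be sketched as follows: each peptide is encoded as position-specific amino-acid indicator variables and the binding affinity is regressed on them by PLS, so the fitted coefficients act as position-wise contributions. The peptides, affinity values, and component count below are toy values chosen only for illustration.

      # Hedged sketch: additive (Free-Wilson-like) peptide-MHC affinity model fitted by PLS.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      AA = "ACDEFGHIKLMNPQRSTVWY"
      def encode(peptide):                        # 9-mer -> 9 x 20 binary position/residue indicators
          x = np.zeros((9, 20))
          for i, aa in enumerate(peptide):
              x[i, AA.index(aa)] = 1.0
          return x.ravel()

      peptides = ["SIINFEKLV", "LLFGYPVYV", "GILGFVFTL", "YLEPGPVTA"]   # toy 9-mers
      pIC50 = np.array([6.1, 7.8, 7.2, 5.9])                            # toy -log10 IC50 values

      X = np.vstack([encode(p) for p in peptides])
      model = PLSRegression(n_components=2).fit(X, pIC50)
      print(model.predict(X[:1]))                 # predicted affinity for the first toy peptide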

  16. Isotropic differential phase contrast microscopy for quantitative phase bio-imaging.

    PubMed

    Chen, Hsi-Hsun; Lin, Yu-Zi; Luo, Yuan

    2018-05-16

    Quantitative phase imaging (QPI) has been investigated to retrieve optical phase information of an object and has been applied to biological microscopy and related medical studies. In recent examples, differential phase contrast (DPC) microscopy can recover the phase image of a thin sample from multi-axis intensity measurements in a wide-field scheme. Unlike conventional DPC, we propose a new method, based on a theoretical approach under partially coherent conditions, to achieve isotropic differential phase contrast (iDPC) with high accuracy and stability for phase recovery in a simple and high-speed fashion. The iDPC is simply implemented with a partially coherent microscope and a programmable thin-film transistor (TFT) shield to digitally modulate structured illumination patterns for QPI. In this article, simulation results show the consistency of our theoretical approach for iDPC under partial coherence. In addition, we further demonstrate experiments yielding quantitative phase images of a standard micro-lens array, as well as of label-free live human cell samples. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
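
    As background to the phase-recovery step, the usual DPC formulation (a generic sketch, not necessarily the exact algorithm of this paper) forms a normalized difference image for each illumination axis and inverts a linear phase transfer function:

        I_{\mathrm{DPC}} \;=\; \frac{I_{1} - I_{2}}{I_{1} + I_{2}}, \qquad
        \tilde{I}_{\mathrm{DPC},j}(\mathbf{u}) \;\approx\; H_j(\mathbf{u})\,\tilde{\phi}(\mathbf{u}), \qquad
        \tilde{\phi}(\mathbf{u}) \;=\; \frac{\sum_j H_j^{*}(\mathbf{u})\,\tilde{I}_{\mathrm{DPC},j}(\mathbf{u})}{\sum_j |H_j(\mathbf{u})|^{2} + \beta}

    where I_1 and I_2 are images taken with complementary halves of the illumination, H_j is the weak-object phase transfer function for axis j, and β is a Tikhonov regularization constant; combining several illumination axes is what makes the recovered phase contrast isotropic.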

  17. Quantitative Prediction of Paravalvular Leak in Transcatheter Aortic Valve Replacement Based on Tissue-Mimicking 3D Printing.

    PubMed

    Qian, Zhen; Wang, Kan; Liu, Shizhen; Zhou, Xiao; Rajagopal, Vivek; Meduri, Christopher; Kauten, James R; Chang, Yung-Hang; Wu, Changsheng; Zhang, Chuck; Wang, Ben; Vannan, Mani A

    2017-07-01

    This study aimed to develop a procedure simulation platform for in vitro transcatheter aortic valve replacement (TAVR) using patient-specific 3-dimensional (3D) printed tissue-mimicking phantoms. We investigated the feasibility of using these 3D printed phantoms to quantitatively predict the occurrence, severity, and location of any degree of post-TAVR paravalvular leaks (PVL). We have previously shown that a metamaterial 3D printing technique can be used to create patient-specific phantoms that mimic the mechanical properties of biological tissue. This may have applications in procedural planning for cardiovascular interventions. This retrospective study looked at 18 patients who underwent TAVR. Patient-specific aortic root phantoms were created with the tissue-mimicking 3D printing technique from pre-TAVR computed tomography. The CoreValve (self-expanding valve) prostheses were deployed in the phantoms to simulate the TAVR procedure, from which post-TAVR aortic root strain was quantified in vitro. A novel index, the annular bulge index, was measured to assess the post-TAVR annular strain unevenness in the phantoms. We tested the comparative predictive value of the bulge index and other known predictors of post-TAVR PVL. The maximum annular bulge index was significantly different among patient subgroups that had no PVL, trace-to-mild PVL, and moderate-to-severe PVL (p = 0.001). Compared with other known PVL predictors, the bulge index was the only significant predictor of moderate-severe PVL (area under the curve = 95%; p < 0.0001). Also, in 12 patients with post-TAVR PVL, the annular bulge index predicted the major PVL location in 9 patients (accuracy = 75%). In this proof-of-concept study, we have demonstrated the feasibility of using 3D printed tissue-mimicking phantoms to quantitatively assess the post-TAVR aortic root strain in vitro. A novel indicator of the post-TAVR annular strain unevenness, the annular bulge index, outperformed the other

  18. Theoretical and experimental study of a new method for prediction of profile drag of airfoil sections

    NASA Technical Reports Server (NTRS)

    Goradia, S. H.; Lilley, D. E.

    1975-01-01

    Theoretical and experimental studies are described which were conducted for the purpose of developing a new generalized method for the prediction of profile drag of single component airfoil sections with sharp trailing edges. The method solves for the flow in the wake from the airfoil trailing edge to a large distance downstream; the profile drag of the given airfoil section can then easily be obtained from a momentum balance once the shape of the velocity profile at a large distance from the airfoil trailing edge has been computed. Computer program subroutines have been developed for the computation of the profile drag and the flow in the airfoil wake on a CDC 6600 computer. The required inputs to the computer program consist of free stream conditions and the characteristics of the boundary layers at the airfoil trailing edge or at the point of incipient separation in the neighborhood of the airfoil trailing edge. The method described is quite generalized and hence can be extended to the solution of the profile drag for multi-component airfoil sections.
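
    The momentum-balance step mentioned above is the classical far-wake momentum-deficit relation (standard form, not quoted from the report):

        D' \;=\; \rho \int_{-\infty}^{\infty} u\,(U_\infty - u)\,\mathrm{d}y, \qquad
        c_d \;=\; \frac{D'}{\tfrac{1}{2}\rho U_\infty^{2} c} \;=\; \frac{2}{c}\int \frac{u}{U_\infty}\left(1 - \frac{u}{U_\infty}\right)\mathrm{d}y

    evaluated far enough downstream that the static pressure has relaxed to its free-stream value, which is why only the far-wake velocity profile is needed to extract the section profile drag.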

  19. Commentary on factors affecting transverse vibration using an idealized theoretical equation

    Treesearch

    Joseph F. Murphy

    2000-01-01

    An idealized theoretical equation to calculate flexural stiffness using transverse vibration of a simply end-supported beam is being considered by the American Society for Testing and Materials (ASTM) Wood Committee D07 to determine lumber modulus of elasticity. This commentary provides the user a quantitative view of six factors that affect the accuracy of using the...
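
    The idealized equation in question is usually quoted in a form like the following (a standard simply-supported-beam result; the exact constant should be taken from the ASTM documents rather than from this sketch):

        f \;=\; \frac{\pi}{2}\sqrt{\frac{E I\, g}{W L^{3}}}
        \quad\Longrightarrow\quad
        E I \;=\; \frac{f^{2} W L^{3}}{2.46\, g}

    where f is the first transverse natural frequency, EI the flexural stiffness, W the beam weight, L the span, and g the acceleration due to gravity; the factors discussed in the commentary perturb the assumptions behind this idealized relation.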

  20. Evaluation of a web based informatics system with data mining tools for predicting outcomes with quantitative imaging features in stroke rehabilitation clinical trials

    NASA Astrophysics Data System (ADS)

    Wang, Ximing; Kim, Bokkyu; Park, Ji Hoon; Wang, Erik; Forsyth, Sydney; Lim, Cody; Ravi, Ragini; Karibyan, Sarkis; Sanchez, Alexander; Liu, Brent

    2017-03-01

    Quantitative imaging biomarkers are widely used in clinical trials for tracking and evaluation of medical interventions. Previously, we have presented a web-based informatics system utilizing quantitative imaging features for predicting outcomes in stroke rehabilitation clinical trials. The system integrates imaging feature extraction tools and a web-based statistical analysis tool. The tools include a generalized linear mixed model (GLMM) that can investigate potential significance and correlation based on features extracted from clinical data and quantitative biomarkers. The imaging feature extraction tools allow the user to collect imaging features, and the GLMM module allows the user to select clinical data and imaging features such as stroke lesion characteristics from the database as regressors and regressands. This paper discusses the application scenario and evaluation results of the system in a stroke rehabilitation clinical trial. The system was utilized to manage clinical data and extract imaging biomarkers including stroke lesion volume, location, and ventricle/brain ratio. The GLMM module was validated and the efficiency of data analysis was also evaluated.
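
    A minimal sketch of the kind of mixed-model analysis described above (not the system's actual code; the column names motor_score, lesion_volume, vb_ratio, visit and patient_id are hypothetical, and a linear mixed model stands in for the GLMM):

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical export of clinical data and imaging biomarkers, one row per visit.
        df = pd.read_csv("stroke_trial_features.csv")

        # Outcome regressed on imaging features, with a random intercept per patient
        # to account for repeated measurements within subjects.
        model = smf.mixedlm(
            "motor_score ~ lesion_volume + vb_ratio + visit",
            data=df,
            groups=df["patient_id"],
        )
        result = model.fit()
        print(result.summary())  # fixed-effect estimates show which biomarkers are predictive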

  1. Prediction of anticancer property of bowsellic acid derivatives by quantitative structure activity relationship analysis and molecular docking study.

    PubMed

    Satpathy, Raghunath; Guru, R K; Behera, R; Nayak, B

    2015-01-01

    Boswellic acid consists of a series of pentacyclic triterpene molecules produced by the plant Boswellia serrata. Here, the focus is on the potential application of boswellic acid for the treatment of cancer, with the aim of predicting the anticancer properties of boswellic acid derivatives by various computational approaches. In this work, a total of 65 derivatives of boswellic acid from the PubChem database were considered for the study. After energy minimization of the ligands, various types of molecular descriptors were computed and corresponding two-dimensional quantitative structure activity relationship (QSAR) models were obtained by taking the Andrews coefficient as the dependent variable. The comparative approaches used for the QSAR study were multiple linear regression, partial least squares, support vector machines and artificial neural networks. Geometrical descriptors showed the highest correlation coefficient, which indicates the binding factor of the compounds. To evaluate the anticancer property, a molecular docking study of six ligands selected on the basis of Andrews affinity was performed with nuclear factor-kappa protein kinase (Protein Data Bank ID 4G3D), an established therapeutic target for cancers. Together, the QSAR study and docking results predict that boswellic acid can be treated as a potential anticancer compound.

  2. A generalized quantitative interpretation of dark-field contrast for highly concentrated microsphere suspensions

    PubMed Central

    Gkoumas, Spyridon; Villanueva-Perez, Pablo; Wang, Zhentian; Romano, Lucia; Abis, Matteo; Stampanoni, Marco

    2016-01-01

    In X-ray grating interferometry, dark-field contrast arises due to partial extinction of the detected interference fringes. This is also called visibility reduction and is attributed to small-angle scattering from unresolved structures in the imaged object. In recent years, analytical quantitative frameworks of dark-field contrast have been developed for highly diluted monodisperse microsphere suspensions with volume fractions of at most 6%. These frameworks assume that scattering particles are separated by large enough distances to make any interparticle scattering interference negligible. In this paper, we start from the small-angle scattering intensity equation and, by linking Fourier and real space, we introduce the structure factor and thus extend the analytical and experimental quantitative interpretation of dark-field contrast to a range of suspensions with volume fractions reaching 40%. The structure factor accounts for interparticle scattering interference. Without introducing any additional fitting parameters, we successfully predict the experimental values measured at the TOMCAT beamline, Swiss Light Source. Finally, we apply this theoretical framework to an experiment probing a range of system correlation lengths by acquiring dark-field images at different energies. The proposed method has the potential to be applied in single-shot mode using a polychromatic X-ray tube setup and a single-photon-counting energy-resolving detector. PMID:27734931
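
    The factorization behind this extension is the standard small-angle scattering relation (generic form, with the paper's specific dark-field kernel omitted):

        I(q) \;\propto\; N\, P(q)\, S(q)

    where P(q) is the form factor of a single microsphere and S(q) the structure factor of the suspension; the dilute-limit frameworks effectively set S(q) = 1, whereas at volume fractions approaching 40% the interparticle interference encoded in S(q) must be retained to predict the measured visibility reduction.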

  3. Trainee and Instructor Task Quantification: Development of Quantitative Indices and a Predictive Methodology.

    ERIC Educational Resources Information Center

    Whaton, George R.; And Others

    As the first step in a program to develop quantitative techniques for prescribing the design and use of training systems, the present study attempted: to compile an initial set of quantitative indices, to determine whether these indices could be used to describe a sample of trainee tasks and differentiate among them, to develop a predictive…

  4. Pure shear and simple shear calcite textures. Comparison of experimental, theoretical and natural data

    USGS Publications Warehouse

    Wenk, H.-R.; Takeshita, T.; Bechler, E.; Erskine, B.G.; Matthies, S.

    1987-01-01

    The pattern of lattice preferred orientation (texture) in deformed rocks is an expression of the strain path and the acting deformation mechanisms. A first indication of the strain path is given by the symmetry of pole figures: coaxial deformation produces orthorhombic pole figures, while non-coaxial deformation yields monoclinic or triclinic pole figures. More quantitative information about the strain history can be obtained by comparing natural textures with experimental ones and with theoretical models. For this comparison, a representation in the sensitive three-dimensional orientation distribution space is extremely important and efforts are made to explain this concept. We have been investigating differences between pure shear and simple shear deformation in carbonate rocks and have found considerable agreement between textures produced in plane strain experiments and predictions based on the Taylor model. We were able to simulate the observed changes with strain history (coaxial vs non-coaxial) and the profound texture transition which occurs with increasing temperature. Two natural calcite textures were then selected which we interpreted by comparing them with the experimental and theoretical results. A marble from the Santa Rosa mylonite zone in southern California displays orthorhombic pole figures with patterns consistent with low temperature deformation in pure shear. A limestone from the Tanque Verde detachment fault in Arizona has a monoclinic fabric from which we can interpret that 60% of the deformation occurred by simple shear. © 1987.

  5. Conformal Regression for Quantitative Structure-Activity Relationship Modeling-Quantifying Prediction Uncertainty.

    PubMed

    Svensson, Fredrik; Aniceto, Natalia; Norinder, Ulf; Cortes-Ciriano, Isidro; Spjuth, Ola; Carlsson, Lars; Bender, Andreas

    2018-05-29

    Making predictions with an associated confidence is highly desirable as it facilitates decision making and resource prioritization. Conformal regression is a machine learning framework that allows the user to define the required confidence and delivers predictions that are guaranteed to be correct to the selected extent. In this study, we apply conformal regression to model molecular properties and bioactivity values and investigate different ways to scale the resultant prediction intervals to create as efficient (i.e., narrow) regressors as possible. Different algorithms to estimate the prediction uncertainty were used to normalize the prediction ranges, and the different approaches were evaluated on 29 publicly available data sets. Our results show that the most efficient conformal regressors are obtained when using the natural exponential of the ensemble standard deviation from the underlying random forest to scale the prediction intervals, but other approaches were almost as efficient. This approach afforded an average prediction range of 1.65 pIC50 units at the 80% confidence level when applied to bioactivity modeling. The choice of nonconformity function has a pronounced impact on the average prediction range with a difference of close to one log unit in bioactivity between the tightest and widest prediction range. Overall, conformal regression is a robust approach to generate bioactivity predictions with associated confidence.
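
    A compact sketch of an inductive conformal regressor with the exponentially scaled ensemble-spread nonconformity described above (our reading of the approach; the authors' implementation details may differ):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def conformal_intervals(X_train, y_train, X_cal, y_cal, X_test, confidence=0.80):
            """Prediction intervals from a random forest with normalized nonconformity."""
            rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_train, y_train)

            def ensemble_std(X):
                # Spread of the individual tree predictions, used as a difficulty estimate.
                per_tree = np.stack([tree.predict(X) for tree in rf.estimators_])
                return per_tree.std(axis=0)

            # Nonconformity: absolute calibration error scaled by exp(ensemble std).
            alpha = np.abs(y_cal - rf.predict(X_cal)) / np.exp(ensemble_std(X_cal))
            q = np.quantile(alpha, confidence)  # calibration quantile at the chosen confidence

            y_hat = rf.predict(X_test)
            half_width = q * np.exp(ensemble_std(X_test))
            return y_hat - half_width, y_hat + half_width

    Narrower intervals at a fixed confidence indicate a more efficient conformal regressor, which is the criterion the study uses to compare nonconformity functions.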

  6. Quantitative assessment of background parenchymal enhancement in breast magnetic resonance images predicts the risk of breast cancer.

    PubMed

    Hu, Xiaoxin; Jiang, Luan; Li, Qiang; Gu, Yajia

    2017-02-07

    The objective of this study was to evaluate the association between the quantitative assessment of background parenchymal enhancement rate (BPER) and breast cancer. From 14,033 consecutive patients who underwent breast MRI in our center, we randomly selected 101 normal controls. Then, we selected 101 women with benign breast lesions and 101 women with breast cancer who were matched for age and menstruation status. We evaluated BPER at early (2 minutes), medium (4 minutes) and late (6 minutes) enhanced time phases of breast MRI for quantitative assessment. Odds ratios (ORs) for risk of breast cancer were calculated using the receiver operating characteristic curve. The BPER increased in a time-dependent manner after enhancement in both premenopausal and postmenopausal women. Premenopausal women had higher BPER than postmenopausal women at the early, medium and late enhanced phases. In the normal population, the OR for probability of breast cancer for premenopausal women with high BPER was 4.1 (95% CI: 1.7-9.7) and 4.6 (95% CI: 1.7-12.0) for postmenopausal women. The OR of breast cancer morbidity in premenopausal women with high BPER was 2.6 (95% CI: 1.1-6.4) and 2.8 (95% CI: 1.2-6.1) for postmenopausal women. The BPER was found to be a predictive factor of breast cancer morbidity. Different time phases should be used to assess BPER in premenopausal and postmenopausal women.

  7. Income distribution dependence of poverty measure: A theoretical analysis

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Amit K.; Mallick, Sushanta K.

    2007-04-01

    Using a modified deprivation (or poverty) function, in this paper, we theoretically study the changes in poverty with respect to the 'global' mean and variance of the income distribution using Indian survey data. We show that when the income obeys a log-normal distribution, a rising mean income generally indicates a reduction in poverty while an increase in the variance of the income distribution increases poverty. This altruistic view for a developing economy, however, is no longer tenable once the poverty index is found to follow a Pareto distribution. Here, although a rising mean income indicates a reduction in poverty, due to the presence of an inflexion point in the poverty function there is a critical value of the variance below which poverty decreases with increasing variance, while beyond this value poverty undergoes a steep increase followed by a decrease with respect to higher variance. Identifying this inflexion point as the poverty line, we show that the Pareto poverty function satisfies all three standard axioms of a poverty index [N.C. Kakwani, Econometrica 43 (1980) 437; A.K. Sen, Econometrica 44 (1976) 219] whereas the log-normal distribution falls short of this requisite. Following these results, we make quantitative predictions to correlate a developing with a developed economy.

  8. Quantitative structure--property relationships for enhancing predictions of synthetic organic chemical removal from drinking water by granular activated carbon.

    PubMed

    Magnuson, Matthew L; Speth, Thomas F

    2005-10-01

    Granular activated carbon is a frequently explored technology for removing synthetic organic contaminants from drinking water sources. The success of this technology relies on a number of factors based not only on the adsorptive properties of the contaminant but also on properties of the water itself, notably the presence of substances in the water which compete for adsorption sites. Because it is impractical to perform field-scale evaluations for all possible contaminants, the pore surface diffusion model (PSDM) has been developed and used to predict activated carbon column performance using single-solute isotherm data as inputs. Many assumptions are built into this model to account for the kinetics of adsorption and competition for adsorption sites. This work further evaluates and expands this model through the use of quantitative structure-property relationships (QSPRs) to predict the effect of natural organic matter fouling on activated carbon adsorption of specific contaminants. The QSPRs developed are based on a combination of calculated topographical indices and quantum chemical parameters. The QSPRs were evaluated in terms of their statistical predictive ability, the physical significance of the descriptors, and by comparison with field data. The QSPR-enhanced PSDM was judged to give better results than could previously be obtained.

  9. Review of Nearshore Morphologic Prediction

    NASA Astrophysics Data System (ADS)

    Plant, N. G.; Dalyander, S.; Long, J.

    2014-12-01

    The evolution of the world's erodible coastlines will determine the balance between the benefits and costs associated with human and ecological utilization of shores, beaches, dunes, barrier islands, wetlands, and estuaries. So, we would like to predict coastal evolution to guide management and planning of human and ecological response to coastal changes. After decades of research investment in data collection, theoretical and statistical analysis, and model development, we have a number of empirical, statistical, and deterministic models that can predict the evolution of the shoreline, beaches, dunes, and wetlands over time scales of hours to decades, and even predict the evolution of geologic strata over the course of millennia. Comparisons of predictions to data have demonstrated that these models can have meaningful predictive skill. But these comparisons also highlight the deficiencies in fundamental understanding, formulations, or data that are responsible for prediction errors and uncertainty. Here, we review a subset of predictive models of the nearshore to illustrate tradeoffs in complexity, predictive skill, and sensitivity to input data and parameterization errors. We identify where future improvement in prediction skill will result from improved theoretical understanding, data collection, and model-data assimilation.

  10. Theoretical prediction of welding distortion in large and complex structures

    NASA Astrophysics Data System (ADS)

    Deng, De-An

    2010-06-01

    Welding technology is widely used to assemble large thin plate structures such as ships, automobiles, and passenger trains because of its high productivity. However, it is impossible to avoid welding-induced distortion during the assembly process. Welding distortion not only reduces the fabrication accuracy of a weldment, but also decreases productivity due to correction work. If welding distortion can be predicted beforehand using a practical method, the prediction will be useful for taking appropriate measures to control the dimensional accuracy to an acceptable limit. In this study, a two-step computational approach, which combines a thermoelastic-plastic finite element method (FEM) with an elastic finite element analysis that accounts for large deformation, is developed to estimate welding distortion for large and complex welded structures. Welding distortions in several representative large complex structures, which are often used in shipbuilding, are simulated using the proposed method. By comparing the predictions and the measurements, the effectiveness of the two-step computational approach is verified.

  11. An empirical/theoretical model with dimensionless numbers to predict the performance of electrodialysis systems on the basis of operating conditions.

    PubMed

    Karimi, Leila; Ghassemi, Abbas

    2016-07-01

    Among the different technologies developed for desalination, the electrodialysis/electrodialysis reversal (ED/EDR) process is one of the most promising for treating brackish water with low salinity when there is high risk of scaling. Multiple researchers have investigated ED/EDR to optimize the process, determine the effects of operating parameters, and develop theoretical/empirical models. Previously published empirical/theoretical models have evaluated the effect of the hydraulic conditions of the ED/EDR on the limiting current density using dimensionless numbers. The reason for previous studies' emphasis on limiting current density is twofold: 1) to maximize ion removal, most ED/EDR systems are operated close to limiting current conditions if there is not a scaling potential in the concentrate chamber due to a high concentration of less-soluble salts; and 2) for modeling the ED/EDR system with dimensionless numbers, it is more accurate and convenient to use limiting current density, where the boundary layer's characteristics are known at constant electrical conditions. To improve knowledge of ED/EDR systems, ED/EDR models should be also developed for the Ohmic region, where operation reduces energy consumption, facilitates targeted ion removal, and prolongs membrane life compared to limiting current conditions. In this paper, theoretical/empirical models were developed for ED/EDR performance in a wide range of operating conditions. The presented ion removal and selectivity models were developed for the removal of monovalent ions and divalent ions utilizing the dominant dimensionless numbers obtained from laboratory scale electrodialysis experiments. At any system scale, these models can predict ED/EDR performance in terms of monovalent and divalent ion removal. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Theoretical models of helicopter rotor noise

    NASA Technical Reports Server (NTRS)

    Hawkings, D. L.

    1978-01-01

    For low speed rotors, it is shown that unsteady load models are only partially successful in predicting experimental levels. A theoretical model is presented which leads to the concept of unsteady thickness noise. This gives better agreement with test results. For high speed rotors, it is argued that present models are incomplete and that other mechanisms are at work. Some possibilities are briefly discussed.

  13. Predicting Retention Times of Naturally Occurring Phenolic Compounds in Reversed-Phase Liquid Chromatography: A Quantitative Structure-Retention Relationship (QSRR) Approach

    PubMed Central

    Akbar, Jamshed; Iqbal, Shahid; Batool, Fozia; Karim, Abdul; Chan, Kim Wei

    2012-01-01

    Quantitative structure-retention relationships (QSRRs) have successfully been developed for naturally occurring phenolic compounds in a reversed-phase liquid chromatographic (RPLC) system. A total of 1519 descriptors were calculated from the optimized structures of the molecules using the MOPAC2009 and DRAGON software packages. The data set of 39 molecules was divided into training and external validation sets. For feature selection and mapping we used step-wise multiple linear regression (SMLR), unsupervised forward selection followed by step-wise multiple linear regression (UFS-SMLR) and artificial neural networks (ANN). Stable and robust models with significant predictive abilities in terms of validation statistics were obtained, with no evidence of chance correlation. ANN models were found to be better than the remaining two approaches. HNar, IDM, Mp, GATS2v, DISP and 3D-MoRSE (signals 22, 28 and 32) descriptors based on van der Waals volume, electronegativity, mass and polarizability, at the atomic level, were found to have significant effects on the retention times. The possible implications of these descriptors in RPLC are discussed. All the models proved quite able to predict the retention times of phenolic compounds and showed remarkable validation, robustness, stability and predictive performance. PMID:23203132

  14. Quantitative AOP-based predictions for two aromatase inhibitors evaluating the influence of bioaccumulation on prediction accuracy

    EPA Science Inventory

    The adverse outcome pathway (AOP) framework can be used to support the use of mechanistic toxicology data as a basis for risk assessment. For certain risk contexts this includes defining quantitative linkages between the molecular initiating event (MIE) and subsequent key events...

  15. Quantitative DNA Methylation Analysis Identifies a Single CpG Dinucleotide Important for ZAP-70 Expression and Predictive of Prognosis in Chronic Lymphocytic Leukemia

    PubMed Central

    Claus, Rainer; Lucas, David M.; Stilgenbauer, Stephan; Ruppert, Amy S.; Yu, Lianbo; Zucknick, Manuela; Mertens, Daniel; Bühler, Andreas; Oakes, Christopher C.; Larson, Richard A.; Kay, Neil E.; Jelinek, Diane F.; Kipps, Thomas J.; Rassenti, Laura Z.; Gribben, John G.; Döhner, Hartmut; Heerema, Nyla A.; Marcucci, Guido; Plass, Christoph; Byrd, John C.

    2012-01-01

    Purpose Increased ZAP-70 expression predicts poor prognosis in chronic lymphocytic leukemia (CLL). Current methods for accurately measuring ZAP-70 expression are problematic, preventing widespread application of these tests in clinical decision making. We therefore used comprehensive DNA methylation profiling of the ZAP-70 regulatory region to identify sites important for transcriptional control. Patients and Methods High-resolution quantitative DNA methylation analysis of the entire ZAP-70 gene regulatory regions was conducted on 247 samples from patients with CLL from four independent clinical studies. Results Through this comprehensive analysis, we identified a small area in the 5′ regulatory region of ZAP-70 that showed large variability in methylation in CLL samples but was universally methylated in normal B cells. High correlation with mRNA and protein expression, as well as activity in promoter reporter assays, revealed that within this differentially methylated region, a single CpG dinucleotide and neighboring nucleotides are particularly important in ZAP-70 transcriptional regulation. Furthermore, by using clustering approaches, we identified a prognostic role for this site in four independent data sets of patients with CLL using time to treatment, progression-free survival, and overall survival as clinical end points. Conclusion Comprehensive quantitative DNA methylation analysis of the ZAP-70 gene in CLL identified important regions responsible for transcriptional regulation. In addition, loss of methylation at a specific single CpG dinucleotide in the ZAP-70 5′ regulatory sequence is a highly predictive and reproducible biomarker of poor prognosis in this disease. This work demonstrates the feasibility of using quantitative specific ZAP-70 methylation analysis as a relevant clinically applicable prognostic test in CLL. PMID:22564988

  16. Differential contribution of genomic regions to marked genetic variation and prediction of quantitative traits in broiler chickens.

    PubMed

    Abdollahi-Arpanahi, Rostam; Morota, Gota; Valente, Bruno D; Kranis, Andreas; Rosa, Guilherme J M; Gianola, Daniel

    2016-02-03

    phenotypic variation for the three traits studied. Overall, the contribution of additive genetic variance to the total genetic variance was much greater than that of dominance variance. Our results show that all genomic regions are important for the prediction of the targeted traits, and the whole-genome approach was reaffirmed as the best tool for genome-enabled prediction of quantitative traits.

  17. NMR relaxation induced by iron oxide particles: testing theoretical models.

    PubMed

    Gossuin, Y; Orlando, T; Basini, M; Henrard, D; Lascialfari, A; Mattea, C; Stapf, S; Vuong, Q L

    2016-04-15

    Superparamagnetic iron oxide particles find their main application as contrast agents for cellular and molecular magnetic resonance imaging. The contrast they bring is due to the shortening of the transverse relaxation time T2 of water protons. In order to understand their influence on proton relaxation, different theoretical relaxation models have been developed, each of them presenting a certain validity domain, which depends on the particle characteristics and proton dynamics. The validation of these models is crucial since they allow for predicting the ideal particle characteristics for obtaining the best contrast but also because the fitting of T1 experimental data by the theory constitutes an interesting tool for the characterization of the nanoparticles. In this work, T2 of suspensions of iron oxide particles in different solvents and at different temperatures, corresponding to different proton diffusion properties, were measured and were compared to the three main theoretical models (the motional averaging regime, the static dephasing regime, and the partial refocusing model) with good qualitative agreement. However, a real quantitative agreement was not observed, probably because of the complexity of these nanoparticulate systems. The Roch theory, developed in the motional averaging regime (MAR), was also successfully used to fit T1 nuclear magnetic relaxation dispersion (NMRD) profiles, even outside the MAR validity range, and provided a good estimate of the particle size. On the other hand, the simultaneous fitting of T1 and T2 NMRD profiles by the theory was impossible, and this occurrence constitutes a clear limitation of the Roch model. Finally, the theory was shown to satisfactorily fit the deuterium T1 NMRD profile of superparamagnetic particle suspensions in heavy water.
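
    For reference, the motional-averaging-regime expression usually quoted in the superparamagnetic relaxation literature (stated here generically, not taken from this paper's fits) is:

        R_2 \;\approx\; \frac{16}{45}\, f\, \tau_D\, \Delta\omega_r^{2}, \qquad \tau_D = \frac{r^{2}}{D}

    where f is the particle volume fraction, r the particle radius, D the water diffusion coefficient, and Δω_r the r.m.s. angular frequency shift at the particle surface; the regime applies when Δω_r τ_D is much smaller than 1, which is why the static dephasing and partial refocusing models are needed for larger or more strongly magnetized particles.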

  18. Internal performance predictions for Langley scramjet engine module

    NASA Technical Reports Server (NTRS)

    Pinckney, S. Z.

    1978-01-01

    A one-dimensional theoretical method for the prediction of the internal performance of a scramjet engine is presented. The effects of changes in vehicle forebody flow parameters and characteristics on predicted thrust for the scramjet engine were evaluated using this method, and results are presented. A theoretical evaluation of the effects of changes in the scramjet engine's internal parameters is also presented. Theoretical internal performance predictions, in terms of thrust coefficient and specific impulse, are provided for the scramjet engine for free stream Mach numbers of 5, 6, and 7, a free stream dynamic pressure of 23,940 N/sq m, forebody surface angles of 4.6 deg to 14.6 deg, and a fuel equivalence ratio of 1.0.

  19. Theoretical relationship between elastic wave velocity and electrical resistivity

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Sub; Yoon, Hyung-Koo

    2015-05-01

    Elastic wave velocity and electrical resistivity have been commonly applied to estimate stratum structures and obtain subsurface soil design parameters. Both elastic wave velocity and electrical resistivity are related to the void ratio; the objective of this study is therefore to suggest a theoretical relationship between the two physical parameters. Gassmann theory and Archie's equation are applied to propose a new theoretical equation, which relates the compressional wave velocity to shear wave velocity and electrical resistivity. The piezo disk element (PDE) and bender element (BE) are used to measure the compressional and shear wave velocities, respectively. In addition, the electrical resistivity is obtained by using the electrical resistivity probe (ERP). The elastic wave velocity and electrical resistivity are recorded in several types of soils including sand, silty sand, silty clay, silt, and clay-sand mixture. The appropriate input parameters are determined based on the error norm in order to increase the reliability of the proposed relationship. The predicted compressional wave velocities from the shear wave velocity and electrical resistivity are similar to the measured compressional velocities. This study demonstrates that the new theoretical relationship may be effectively used to predict the unknown geophysical property from the measured values.
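
    The two ingredients can be written in their textbook forms (the paper's calibrated relationship may differ in detail):

        \rho_{e} \;=\; a\, \rho_{w}\, \phi^{-m} S_{w}^{-n} \quad \text{(Archie's equation)}, \qquad
        V_{p} \;=\; \sqrt{\frac{K_{\mathrm{sat}} + \tfrac{4}{3}\mu}{\rho_{m}}}, \quad
        V_{s} \;=\; \sqrt{\frac{\mu}{\rho_{m}}} \quad \text{(Gassmann-based velocities)}

    where ρ_e is the formation resistivity, ρ_w the pore-water resistivity, φ the porosity, S_w the water saturation, a, m and n empirical constants, K_sat the Gassmann saturated bulk modulus, μ the shear modulus and ρ_m the bulk mass density; because both the resistivity and the moduli depend on the void ratio, eliminating φ links the compressional wave velocity to the shear wave velocity and the measured resistivity.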

  20. Towards cheminformatics-based estimation of drug therapeutic index: Predicting the protective index of anticonvulsants using a new quantitative structure-index relationship approach.

    PubMed

    Chen, Shangying; Zhang, Peng; Liu, Xin; Qin, Chu; Tao, Lin; Zhang, Cheng; Yang, Sheng Yong; Chen, Yu Zong; Chui, Wai Keung

    2016-06-01

    The overall efficacy and safety profile of a new drug is partially evaluated by the therapeutic index in clinical studies and by the protective index (PI) in preclinical studies. In-silico predictive methods may facilitate the assessment of these indicators. Although QSAR and QSTR models can be used for predicting PI, their predictive capability has not been evaluated. To test this capability, we developed QSAR and QSTR models for predicting the activity and toxicity of anticonvulsants at accuracy levels above the literature-reported threshold (LT) of good QSAR models as tested by both the internal 5-fold cross validation and external validation method. These models showed significantly compromised PI predictive capability due to the cumulative errors of the QSAR and QSTR models. Therefore, in this investigation a new quantitative structure-index relationship (QSIR) model was devised and it showed improved PI predictive capability that superseded the LT of good QSAR models. The QSAR, QSTR and QSIR models were developed using support vector regression (SVR) method with the parameters optimized by using the greedy search method. The molecular descriptors relevant to the prediction of anticonvulsant activities, toxicities and PIs were analyzed by a recursive feature elimination method. The selected molecular descriptors are primarily associated with the drug-like, pharmacological and toxicological features and those used in the published anticonvulsant QSAR and QSTR models. This study suggested that QSIR is useful for estimating the therapeutic index of drug candidates. Copyright © 2016. Published by Elsevier Inc.

  1. Quantitative test for concave aspheric surfaces using a Babinet compensator.

    PubMed

    Saxena, A K

    1979-08-15

    A quantitative test for the evaluation of surface figures of concave aspheric surfaces using a Babinet compensator is reported. A theoretical estimate of the sensitivity is 0.002λ for a minimum detectable phase change of 2π × 10^-3 rad over a segment length of 1.0 cm.

  2. Influence factors and prediction of stormwater runoff of urban green space in Tianjin, China: laboratory experiment and quantitative theory model.

    PubMed

    Yang, Xu; You, Xue-Yi; Ji, Min; Nima, Ciren

    2013-01-01

    The effects of limiting factors such as rainfall intensity, rainfall duration, grass type and vegetation coverage on the stormwater runoff of urban green space were investigated in Tianjin. A prediction equation for stormwater runoff was established by quantitative theory from laboratory experimental data on soil columns. It was validated by three field experiments, and the relative errors between predicted and measured stormwater runoff were 1.41, 1.52 and 7.35%, respectively. The results implied that the prediction equation could be used to forecast the stormwater runoff of urban green space. The results of range and variance analysis indicated the order of the limiting factors to be rainfall intensity > grass type > rainfall duration > vegetation coverage. The least runoff in the present study was obtained with the combination of a rainfall intensity of 60.0 mm/h, a duration of 60.0 min, the grass Festuca arundinacea and a vegetation coverage of 90.0%. When the intensity and duration of rainfall are 60.0 mm/h and 90.0 min, the predicted volumetric runoff coefficient is 0.23 with Festuca arundinacea at 90.0% vegetation coverage. The present approach indicated that green space is an effective means of reducing stormwater runoff; the conclusions are mainly applicable to Tianjin and to semi-arid areas with precipitation concentrated in summer and long intervals between rainfalls.
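
    As a rough consistency check on the quoted coefficient (assuming the volumetric runoff coefficient is defined as runoff depth divided by rainfall depth):

        P \;=\; 60.0\ \mathrm{mm/h} \times 1.5\ \mathrm{h} \;=\; 90\ \mathrm{mm}, \qquad
        Q \;\approx\; C_v\, P \;=\; 0.23 \times 90\ \mathrm{mm} \;\approx\; 21\ \mathrm{mm}

    i.e. roughly 21 mm of the 90 mm event leaves the green space as runoff under the stated grass type and coverage.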

  3. Intrasubject Predictions of Vocational Preference: Convergent Validation via the Decision Theoretic Paradigm.

    ERIC Educational Resources Information Center

    Monahan, Carlyn J.; Muchinsky, Paul M.

    1985-01-01

    The degree of convergent validity among four methods of identifying vocational preferences is assessed via the decision theoretic paradigm. Vocational preferences identified by Holland's Vocational Preference Inventory (VPI), a rating procedure, and ranking were compared with preferences identified from a policy-capturing model developed from an…

  4. Exploring simple, transparent, interpretable and predictive QSAR models for classification and quantitative prediction of rat toxicity of ionic liquids using OECD recommended guidelines.

    PubMed

    Das, Rudra Narayan; Roy, Kunal; Popelier, Paul L A

    2015-11-01

    The present study explores the chemical attributes of diverse ionic liquids responsible for their cytotoxicity in a rat leukemia cell line (IPC-81) by developing predictive classification as well as regression-based mathematical models. Simple and interpretable descriptors derived from a two-dimensional representation of the chemical structures along with quantum topological molecular similarity indices have been used for model development, employing unambiguous modeling strategies that strictly obey the guidelines of the Organization for Economic Co-operation and Development (OECD) for quantitative structure-activity relationship (QSAR) analysis. The structure-toxicity relationships that emerged from both classification and regression-based models were in accordance with the findings of some previous studies. The models suggested that the cytotoxicity of ionic liquids is dependent on the cationic surfactant action, long alkyl side chains, cationic lipophilicity as well as aromaticity, the presence of a dialkylamino substituent at the 4-position of the pyridinium nucleus and a bulky anionic moiety. The models have been transparently presented in the form of equations, thus allowing their easy transferability in accordance with the OECD guidelines. The models have also been subjected to rigorous validation tests proving their predictive potential and can hence be used for designing novel and "greener" ionic liquids. The major strength of the present study lies in the use of a diverse and large dataset, use of simple reproducible descriptors and compliance with the OECD norms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Subclinical Primary Psychopathy, but Not Physical Formidability or Attractiveness, Predicts Conversational Dominance in a Zero-Acquaintance Situation

    PubMed Central

    Manson, Joseph H.; Gervais, Matthew M.; Fessler, Daniel M. T.; Kline, Michelle A.

    2014-01-01

    The determinants of conversational dominance are not well understood. We used videotaped triadic interactions among unacquainted same-sex American college students to test predictions drawn from the theoretical distinction between dominance and prestige as modes of human status competition. Specifically, we investigated the effects of physical formidability, facial attractiveness, social status, and self-reported subclinical psychopathy on quantitative (proportion of words produced), participatory (interruptions produced and sustained), and sequential (topic control) dominance. No measure of physical formidability or attractiveness was associated with any form of conversational dominance, suggesting that the characteristics of our study population or experimental frame may have moderated their role in dominance dynamics. Primary psychopathy was positively associated with quantitative dominance and (marginally) overall triad talkativeness, and negatively associated (in men) with affect word use, whereas secondary psychopathy was unrelated to conversational dominance. The two psychopathy factors had significant opposing effects on quantitative dominance in a multivariate model. These latter findings suggest that glibness in primary psychopathy may function to elicit exploitable information from others in a relationally mobile society. PMID:25426962

  6. Subclinical primary psychopathy, but not physical formidability or attractiveness, predicts conversational dominance in a zero-acquaintance situation.

    PubMed

    Manson, Joseph H; Gervais, Matthew M; Fessler, Daniel M T; Kline, Michelle A

    2014-01-01

    The determinants of conversational dominance are not well understood. We used videotaped triadic interactions among unacquainted same-sex American college students to test predictions drawn from the theoretical distinction between dominance and prestige as modes of human status competition. Specifically, we investigated the effects of physical formidability, facial attractiveness, social status, and self-reported subclinical psychopathy on quantitative (proportion of words produced), participatory (interruptions produced and sustained), and sequential (topic control) dominance. No measure of physical formidability or attractiveness was associated with any form of conversational dominance, suggesting that the characteristics of our study population or experimental frame may have moderated their role in dominance dynamics. Primary psychopathy was positively associated with quantitative dominance and (marginally) overall triad talkativeness, and negatively associated (in men) with affect word use, whereas secondary psychopathy was unrelated to conversational dominance. The two psychopathy factors had significant opposing effects on quantitative dominance in a multivariate model. These latter findings suggest that glibness in primary psychopathy may function to elicit exploitable information from others in a relationally mobile society.

  7. Variational transition state theory: theoretical framework and recent developments.

    PubMed

    Bao, Junwei Lucas; Truhlar, Donald G

    2017-12-11

    This article reviews the fundamentals of variational transition state theory (VTST), its recent theoretical development, and some modern applications. The theoretical methods reviewed here include multidimensional quantum mechanical tunneling, multistructural VTST (MS-VTST), multi-path VTST (MP-VTST), both reaction-path VTST (RP-VTST) and variable reaction coordinate VTST (VRC-VTST), system-specific quantum Rice-Ramsperger-Kassel theory (SS-QRRK) for predicting pressure-dependent rate constants, and VTST in the solid phase, liquid phase, and enzymes. We also provide some perspectives regarding the general applicability of VTST.
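
    The canonical variational rate expression that these VTST variants build on can be written in its standard form as:

        k^{\mathrm{CVT}}(T) \;=\; \kappa(T)\, \min_{s}\; \frac{k_B T}{h}\, \frac{Q^{\mathrm{GT}}(T,s)}{\Phi^{R}(T)}\, e^{-V_{\mathrm{MEP}}(s)/k_B T}

    where s locates the dividing surface along the reaction path, Q^GT is the generalized transition-state partition function, Φ^R the reactant partition function per unit volume, V_MEP the potential along the minimum-energy path, and κ(T) a multidimensional tunneling transmission coefficient; the multistructural and multi-path variants sum such contributions over conformers and reaction paths.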

  8. Microdose clinical trial: quantitative determination of nicardipine and prediction of metabolites in human plasma.

    PubMed

    Yamane, Naoe; Takami, Tomonori; Tozuka, Zenzaburo; Sugiyama, Yuichi; Yamazaki, Akira; Kumagai, Yuji

    2009-01-01

    A sample treatment procedure and a highly sensitive liquid chromatography/tandem mass spectrometry (LC/MS/MS) method for the quantitative determination of nicardipine in human plasma were developed for a microdose clinical trial with nicardipine, a non-radioisotope-labeled drug. The calibration curve was linear in the range of 1-500 pg/mL using 1 mL of plasma. Analytical method validation for the clinical dose, for which the calibration curve was linear in the range of 0.2-100 ng/mL using 20 microL of plasma, was also conducted. Each method was successfully applied to determinations in plasma using LC/MS/MS after administration of a microdose (100 microg) and a clinical dose (20 mg) to each of six healthy volunteers. We tested new approaches in the search for metabolites in plasma after microdosing. In vitro metabolites of nicardipine were characterized using linear ion trap-Fourier transform ion cyclotron resonance mass spectrometry (LIT-FTICRMS), and the nine metabolites predicted to be in plasma were analyzed using LC/MS/MS. There is a strong possibility that analysis of metabolites by LC/MS/MS may advance to utilization in microdose clinical trials with non-radioisotope-labeled drugs.

  9. Droplet size in flow: Theoretical model and application to polymer blends

    NASA Astrophysics Data System (ADS)

    Fortelný, Ivan; Jůza, Josef

    2017-05-01

    The paper is focused on prediction of the average droplet radius, R, in flowing polymer blends where the droplet size is determined by a dynamic equilibrium between droplet breakup and coalescence. Expressions for the droplet breakup frequency in systems with low and high contents of the dispersed phase are derived using available theoretical and experimental results for model blends. The dependence of the coalescence probability, Pc, on system parameters, following from recent theories, is considered, and an approximate equation for Pc in a system with low polydispersity in droplet size is proposed. Equations for R in systems with low and high contents of the dispersed phase are derived. Combination of these equations predicts a realistic dependence of R on the volume fraction of dispersed droplets, φ. The theoretical prediction of the ratio of R to the critical droplet radius at breakup agrees fairly well with experimental values for steadily mixed polymer blends.

  10. Quantitative systems toxicology

    PubMed Central

    Bloomingdale, Peter; Housand, Conrad; Apgar, Joshua F.; Millard, Bjorn L.; Mager, Donald E.; Burke, John M.; Shah, Dhaval K.

    2017-01-01

    The overarching goal of modern drug development is to optimize therapeutic benefits while minimizing adverse effects. However, inadequate efficacy and safety concerns remain the major causes of drug attrition in clinical development. For the past 80 years, toxicity testing has consisted of evaluating the adverse effects of drugs in animals to predict human health risks. The U.S. Environmental Protection Agency recognized the need to develop innovative toxicity testing strategies and asked the National Research Council to develop a long-range vision and strategy for toxicity testing in the 21st century. The vision aims to reduce the use of animals and drug development costs through the integration of computational modeling and in vitro experimental methods that evaluate the perturbation of toxicity-related pathways. Towards this vision, collaborative quantitative systems pharmacology and toxicology modeling endeavors (QSP/QST) have been initiated among numerous organizations worldwide. In this article, we discuss how quantitative structure-activity relationship (QSAR), network-based, and pharmacokinetic/pharmacodynamic modeling approaches can be integrated into the framework of QST models. Additionally, we review the application of QST models to predict cardiotoxicity and hepatotoxicity of drugs throughout their development. Cell- and organ-specific QST models are likely to become an essential component of modern toxicity testing, and provide a solid foundation for determining individualized therapeutic windows to improve patient safety. PMID:29308440

  11. Quantitative T2 mapping of recurrent glioblastoma under bevacizumab improves monitoring for non-enhancing tumor progression and predicts overall survival

    PubMed Central

    Hattingen, Elke; Jurcoane, Alina; Daneshvar, Keivan; Pilatus, Ulrich; Mittelbronn, Michel; Steinbach, Joachim P.; Bähr, Oliver

    2013-01-01

    Background Anti-angiogenic treatment in recurrent glioblastoma patients suppresses contrast enhancement and reduces vasogenic edema while non-enhancing tumor progression is common. Thus, the importance of T2-weighted imaging is increasing. We therefore quantified T2 relaxation times, which are the basis for the image contrast on T2-weighted images. Methods Conventional and quantitative MRI procedures were performed on 18 patients with recurrent glioblastoma before treatment with bevacizumab and every 8 weeks thereafter until further tumor progression. We segmented the tumor on conventional MRI into 3 subvolumes: enhancing tumor, non-enhancing tumor, and edema. Using coregistered quantitative maps, we followed changes in T2 relaxation time in each subvolume. Moreover, we generated differential T2 maps by a voxelwise subtraction using the first T2 map under bevacizumab as reference. Results Visually segmented areas of tumor and edema did not differ in T2 relaxation times. Non-enhancing tumor volume did not decrease after commencement of bevacizumab treatment but strikingly increased at progression. Differential T2 maps clearly showed non-enhancing tumor progression in previously normal brain. T2 relaxation times decreased under bevacizumab without re-increasing at tumor progression. A decrease of <26 ms in the enhancing tumor following exposure to bevacizumab was associated with longer overall survival. Conclusions Combining quantitative MRI and tumor segmentation improves monitoring of glioblastoma patients under bevacizumab. The degree of change in T2 relaxation time under bevacizumab may be an early response parameter predictive of overall survival. The sustained decrease in T2 relaxation times toward values of healthy tissue masks progressive tumor on conventional T2-weighted images. Therefore, quantitative T2 relaxation times may detect non-enhancing progression better than conventional T2-weighted imaging. PMID:23925453

  12. Field theoretical prediction of a property of the tropical cyclone

    NASA Astrophysics Data System (ADS)

    Spineanu, F.; Vlad, M.

    2014-01-01

    The large scale atmospheric vortices (tropical cyclones, tornadoes) are complex physical systems combining thermodynamic and fluid-mechanical processes. The late phase of the evolution towards stationarity consists of vorticity concentration, a well known tendency to self-organization and a universal property of two-dimensional fluids. It may then be expected that the stationary state of the tropical cyclone has the same nature as the vortices of many other systems in nature: ideal (Euler) fluids, superconductors, the Bose-Einstein condensate, cosmic strings, etc. Indeed it was found that there is a description of the atmospheric vortex in terms of a classical field theory. It is compatible with the more conventional treatment based on conservation laws, but the field theoretical model reveals properties that are almost inaccessible to the conventional formulation: it identifies the stationary states as being close to self-duality. This is of the highest importance: self-duality is known to be the origin of all coherent structures known in natural systems. Therefore the field theoretical (FT) formulation finds that the quasi-coherent form of the atmospheric vortex (tropical cyclone) at stationarity is an expression of this particular property. In the present work we examine a strong property of the tropical cyclone, which arises in the FT formulation in a natural way: the equality of the masses of the particles associated with the matter field and with the gauge field in the FT model translates into the equality between the maximum radial extension of the tropical cyclone and the Rossby radius. For the cases where the FT model is a good approximation we calculate characteristic quantities of the tropical cyclone and find good agreement with observational data.

  13. Theoretical prediction of a rotating magnon wave packet in ferromagnets.

    PubMed

    Matsumoto, Ryo; Murakami, Shuichi

    2011-05-13

    We theoretically show that the magnon wave packet has a rotational motion in two ways: a self-rotation and a motion along the boundary of the sample (edge current). They are similar to the cyclotron motion of electrons, but unlike electrons the magnons have no charge and the rotation is not due to the Lorentz force. These rotational motions are caused by the Berry phase in momentum space from the magnon band structure. Furthermore, the rotational motion of the magnon gives an additional correction term to the magnon Hall effect. We also discuss the Berry curvature effect in the classical limit of long-wavelength magnetostatic spin waves having macroscopic coherence length.

  14. Experimental and theoretical rotordynamic stiffness coefficients for a three-stage brush seal

    NASA Astrophysics Data System (ADS)

    Pugachev, A. O.; Deckner, M.

    2012-08-01

    Experimental and theoretical results are presented for a multistage brush seal. Experimental stiffness is obtained from integrating circumferential pressure distribution measured in seal cavities. A CFD analysis is used to predict seal performance. Bristle packs are modeled by the porous medium approach. Leakage is predicted well by the CFD method. Theoretical stiffness coefficients are in reasonable agreement with the measurements. Experimental results are also compared with a three-teeth-on-stator labyrinth seal. The multistage brush seal gives about 60% leakage reduction over the labyrinth seal. Rotordynamic stiffness coefficients are also improved: the brush seal has positive direct stiffness and smaller cross-coupled stiffness.

  15. Theoretical prediction of nuclear magnetic shieldings and indirect spin-spin coupling constants in 1,1-, cis-, and trans-1,2-difluoroethylenes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nozirov, Farhod; Stachów, Michał; Kupka, Teobald

    2014-04-14

    A theoretical prediction of nuclear magnetic shieldings and indirect spin-spin coupling constants in 1,1-, cis- and trans-1,2-difluoroethylenes is reported. The results obtained using density functional theory (DFT) combined with large basis sets and gauge-independent atomic orbital calculations were critically compared with experiment and with conventional, higher level correlated electronic structure methods. Accurate structural, vibrational, and NMR parameters of difluoroethylenes were obtained using several density functionals combined with dedicated basis sets. B3LYP/6-311++G(3df,2pd) optimized structures of difluoroethylenes closely reproduced experimental geometries and earlier reported benchmark coupled cluster results, while BLYP/6-311++G(3df,2pd) produced accurate harmonic vibrational frequencies. The most accurate vibrations were obtained using B3LYP/6-311++G(3df,2pd) with correction for anharmonicity. The Becke half-and-half (BHandH) density functional predicted more accurate ¹⁹F isotropic shieldings, and van Voorhis and Scuseria's τ-dependent gradient-corrected correlation functional yielded better carbon shieldings than B3LYP. A surprisingly good performance of the Hartree-Fock (HF) method in predicting nuclear shieldings in these molecules was observed. Inclusion of the zero-point vibrational correction markedly improved agreement with experiment for nuclear shieldings calculated by HF, MP2, CCSD, and CCSD(T) methods but worsened the DFT results. A threefold improvement in accuracy when predicting ²J(FF) in 1,1-difluoroethylene was observed for the BHandH density functional compared to B3LYP (the deviations from experiment were -46 vs. -115 Hz).

  16. National Centers for Environmental Prediction

    Science.gov Websites

    Science.gov web listing of NCEP ensemble products and data sources, including probabilistic forecasts of quantitative precipitation (PQPF) from the North American Ensemble Forecast System and charts giving probabilities associated with 24-hour precipitation amounts.

  17. Mapping surface charge density of lipid bilayers by quantitative surface conductivity microscopy

    PubMed Central

    Klausen, Lasse Hyldgaard; Fuhs, Thomas; Dong, Mingdong

    2016-01-01

    Local surface charge density of lipid membranes influences membrane–protein interactions leading to distinct functions in all living cells, and it is a vital parameter in understanding membrane-binding mechanisms, liposome design and drug delivery. Despite the significance, no method has so far been capable of mapping surface charge densities under physiologically relevant conditions. Here, we use a scanning nanopipette setup (scanning ion-conductance microscope) combined with a novel algorithm to investigate the surface conductivity near supported lipid bilayers, and we present a new approach, quantitative surface conductivity microscopy (QSCM), capable of mapping surface charge density with high quantitative precision and nanoscale resolution. The method is validated through an extensive theoretical analysis of the ionic current at the nanopipette tip, and we demonstrate the capacity of QSCM by mapping the surface charge density of model cationic, anionic and zwitterionic lipids with results accurately matching theoretical values. PMID:27561322

  18. Mapping surface charge density of lipid bilayers by quantitative surface conductivity microscopy

    NASA Astrophysics Data System (ADS)

    Klausen, Lasse Hyldgaard; Fuhs, Thomas; Dong, Mingdong

    2016-08-01

    Local surface charge density of lipid membranes influences membrane-protein interactions leading to distinct functions in all living cells, and it is a vital parameter in understanding membrane-binding mechanisms, liposome design and drug delivery. Despite the significance, no method has so far been capable of mapping surface charge densities under physiologically relevant conditions. Here, we use a scanning nanopipette setup (scanning ion-conductance microscope) combined with a novel algorithm to investigate the surface conductivity near supported lipid bilayers, and we present a new approach, quantitative surface conductivity microscopy (QSCM), capable of mapping surface charge density with high quantitative precision and nanoscale resolution. The method is validated through an extensive theoretical analysis of the ionic current at the nanopipette tip, and we demonstrate the capacity of QSCM by mapping the surface charge density of model cationic, anionic and zwitterionic lipids with results accurately matching theoretical values.

  19. Mapping surface charge density of lipid bilayers by quantitative surface conductivity microscopy.

    PubMed

    Klausen, Lasse Hyldgaard; Fuhs, Thomas; Dong, Mingdong

    2016-08-26

    Local surface charge density of lipid membranes influences membrane-protein interactions leading to distinct functions in all living cells, and it is a vital parameter in understanding membrane-binding mechanisms, liposome design and drug delivery. Despite the significance, no method has so far been capable of mapping surface charge densities under physiologically relevant conditions. Here, we use a scanning nanopipette setup (scanning ion-conductance microscope) combined with a novel algorithm to investigate the surface conductivity near supported lipid bilayers, and we present a new approach, quantitative surface conductivity microscopy (QSCM), capable of mapping surface charge density with high quantitative precision and nanoscale resolution. The method is validated through an extensive theoretical analysis of the ionic current at the nanopipette tip, and we demonstrate the capacity of QSCM by mapping the surface charge density of model cationic, anionic and zwitterionic lipids with results accurately matching theoretical values.

  20. Third-Kind Encounters in Biomedicine: Immunology Meets Mathematics and Informatics to Become Quantitative and Predictive.

    PubMed

    Eberhardt, Martin; Lai, Xin; Tomar, Namrata; Gupta, Shailendra; Schmeck, Bernd; Steinkasserer, Alexander; Schuler, Gerold; Vera, Julio

    2016-01-01

    The understanding of the immune response is currently at the center of biomedical research. There are growing expectations that immune-based interventions will in the midterm provide new, personalized, and targeted therapeutic options for many severe and highly prevalent diseases, from aggressive cancers to infectious and autoimmune diseases. To this end, immunology should surpass its current descriptive and phenomenological nature and become quantitative, and thereby predictive. Immunology is an ideal field for deploying the tools, methodologies, and philosophy of systems biology, an approach that combines quantitative experimental data, computational biology, and mathematical modeling. This is because, from an organism-wide perspective, immunity is a biological system of systems, a paradigmatic instance of a multi-scale system. At the molecular scale, the critical phenotypic responses of immune cells are governed by large biochemical networks, enriched in nested regulatory motifs such as feedback and feedforward loops. This network complexity gives them the capacity for highly nonlinear behavior, including remarkable examples of homeostasis, ultra-sensitivity, hysteresis, and bistability. Moving up from the cellular level, different immune cell populations communicate with each other by direct physical contact or by receiving and secreting signaling molecules such as cytokines. Moreover, the interaction of the immune system with its potential targets (e.g., pathogens or tumor cells) is far from simple, as it involves a number of attack and counterattack mechanisms that ultimately constitute a tightly regulated multi-feedback loop system. From a more practical perspective, this means that today's immunologists face an ever-increasing challenge of integrating massive quantities of multi-platform data. In this chapter, we support the idea that the analysis of the immune system demands the use of systems-level approaches to ensure the success in

  1. Improving the theoretical prediction for the Bs - B̅s width difference: matrix elements of next-to-leading order ΔB = 2 operators

    NASA Astrophysics Data System (ADS)

    Davies, Christine; Harrison, Judd; Lepage, G. Peter; Monahan, Christopher; Shigemitsu, Junko; Wingate, Matthew

    2018-03-01

    We present lattice QCD results for the matrix elements of R2 and other dimension-7, ΔB = 2 operators relevant for calculations of ΔΓs, the Bs - B̅s width difference. We have computed correlation functions using 5 ensembles of the MILC Collaboration's 2+1+1-flavour gauge field configurations, spanning 3 lattice spacings and light sea quark masses down to the physical point. The HISQ action is used for the valence strange quarks, and the NRQCD action is used for the bottom quarks. Once our analysis is complete, the theoretical uncertainty in the Standard Model prediction for ΔΓs will be substantially reduced.

  2. Rapid climate change and the rate of adaptation: insight from experimental quantitative genetics.

    PubMed

    Shaw, Ruth G; Etterson, Julie R

    2012-09-01

    Evolution proceeds unceasingly in all biological populations. It is clear that climate-driven evolution has molded plants in deep time and within extant populations. However, it is less certain whether adaptive evolution can proceed sufficiently rapidly to maintain the fitness and demographic stability of populations subjected to exceptionally rapid contemporary climate change. Here, we consider this question, drawing on current evidence on the rate of plant range shifts and the potential for an adaptive evolutionary response. We emphasize advances in understanding based on theoretical studies that model interacting evolutionary processes, and we provide an overview of quantitative genetic approaches that can parameterize these models to provide more meaningful predictions of the dynamic interplay between genetics, demography and evolution. We outline further research that can clarify both the adaptive potential of plant populations as climate continues to change and the role played by ongoing adaptation in their persistence. © 2012 The Authors. New Phytologist © 2012 New Phytologist Trust.

  3. An evidential link prediction method and link predictability based on Shannon entropy

    NASA Astrophysics Data System (ADS)

    Yin, Likang; Zheng, Haoyang; Bian, Tian; Deng, Yong

    2017-09-01

    Predicting missing links is of both theoretical value and practical interest in network science. In this paper, we empirically investigate a new similarity-based link prediction method and compare nine well-known local similarity measures on nine real networks. Most previous studies focus on accuracy; however, it is also crucial to consider link predictability as an intrinsic property of the network itself. Hence, this paper proposes a new link prediction approach called the evidential measure (EM), based on Dempster-Shafer theory. Moreover, it proposes a new method to measure link predictability via local information and Shannon entropy.
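
    As a rough illustration of the ideas in this abstract, the sketch below scores candidate links with a standard local similarity index (resource allocation) and summarizes the score distribution with Shannon entropy as a crude predictability proxy. It is a generic stand-in, not the Dempster-Shafer evidential measure proposed by the authors; the example graph is arbitrary.

    ```python
    # Minimal sketch: rank non-edges by a local similarity index and compute a
    # Shannon-entropy summary of the scores. Not the paper's evidential measure.
    import networkx as nx
    import numpy as np

    G = nx.karate_club_graph()  # any undirected graph stands in here

    # Resource-allocation similarity for all currently unconnected node pairs
    scores = {(u, v): s for u, v, s in nx.resource_allocation_index(G)}

    # Higher score -> pair is a more plausible missing link
    top_links = sorted(scores, key=scores.get, reverse=True)[:10]
    print("top candidate links:", top_links)

    # Entropy of the normalized score distribution: low entropy means a few
    # pairs dominate, i.e. the missing links are comparatively easy to predict
    p = np.array(list(scores.values()), dtype=float)
    p = p[p > 0]
    p /= p.sum()
    H = -(p * np.log2(p)).sum()
    print("entropy of similarity distribution (bits):", round(H, 3))
    ```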

  4. Use of Cell Viability Assay Data Improves the Prediction Accuracy of Conventional Quantitative Structure–Activity Relationship Models of Animal Carcinogenicity

    PubMed Central

    Zhu, Hao; Rusyn, Ivan; Richard, Ann; Tropsha, Alexander

    2008-01-01

    Background: To develop efficient approaches for rapid evaluation of chemical toxicity and human health risk of environmental compounds, the National Toxicology Program (NTP) in collaboration with the National Center for Chemical Genomics has initiated a project on high-throughput screening (HTS) of environmental chemicals. The first HTS results for a set of 1,408 compounds tested for their effects on cell viability in six different cell lines have recently become available via PubChem. Objectives: We have explored these data in terms of their utility for predicting adverse health effects of the environmental agents. Methods and results: Initially, the classification k nearest neighbor (kNN) quantitative structure–activity relationship (QSAR) modeling method was applied to the HTS data only, for a curated data set of 384 compounds. The resulting models had prediction accuracies for the training and test sets (together containing 275 compounds) and the external validation set (109 compounds) as high as 89%, 71%, and 74%, respectively. We then asked if HTS results could be of value in predicting rodent carcinogenicity. We identified 383 compounds for which data were available from both the Berkeley Carcinogenic Potency Database and NTP–HTS studies. We found that compounds classified by HTS as “actives” in at least one cell line were likely to be rodent carcinogens (sensitivity 77%); however, HTS “inactives” were far less informative (specificity 46%). Using chemical descriptors only, kNN QSAR modeling resulted in 62.3% prediction accuracy for rodent carcinogenicity applied to this data set. Importantly, the prediction accuracy of the model was significantly improved (72.7%) when chemical descriptors were augmented by HTS data, which were regarded as biological descriptors. Conclusions: Our studies suggest that combining NTP–HTS profiles with conventional chemical descriptors could considerably improve the predictive power of computational approaches in toxicology. PMID
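
    To make the descriptor-augmentation idea concrete, here is a minimal sketch of a kNN classifier evaluated with and without HTS readouts appended to the chemical descriptors. All arrays are random placeholders, not the NTP or CPDB data, and the descriptor counts are arbitrary.

    ```python
    # Minimal sketch: kNN QSAR classification with chemical descriptors alone
    # versus chemical descriptors augmented by HTS "biological descriptors".
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n = 383                                   # compounds with both data types (placeholder)
    X_chem = rng.normal(size=(n, 50))         # chemical descriptors (placeholder)
    X_hts = rng.normal(size=(n, 6))           # HTS outcomes in six cell lines (placeholder)
    y = rng.integers(0, 2, size=n)            # rodent carcinogen yes/no (placeholder)

    for name, X in [("chemical only", X_chem),
                    ("chemical + HTS", np.hstack([X_chem, X_hts]))]:
        model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name}: mean CV accuracy = {acc:.3f}")
    ```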

  5. Quantitative prediction of the bitterness suppression of elemental diets by various flavors using a taste sensor.

    PubMed

    Miyanaga, Yohko; Inoue, Naoko; Ohnishi, Ayako; Fujisawa, Emi; Yamaguchi, Maki; Uchida, Takahiro

    2003-12-01

    The purpose of the study was to develop a method for the quantitative prediction of the bitterness suppression of elemental diets by various flavors and to predict the optimum composition of such elemental diets for oral administration using a multichannel taste sensor. We examined the effects of varying the volume of water used for dilution and of adding varying quantities of five flavors (pineapple, apple, milky coffee, powdered green tea, and banana) on the bitterness of the elemental diet, Aminoreban EN. Gustatory sensation tests with human volunteers (n = 9) and measurements using the artificial taste sensor were performed on 50 g Aminoreban EN dissolved in various volumes (140, 180, 220, 260, 300, 420, 660, 1140, and 2100 ml) of water, and on 50 g Aminoreban EN dissolved in 180 ml of water with the addition of 3-9 g of various flavors for taste masking. In gustatory sensation tests, the relationship between the logarithmic values of the volumes of water used for dilution and the bitterness intensity scores awarded by the volunteers proved to be linear. The addition of flavors also reduced the bitterness of elemental diets in gustatory sensation tests; the magnitude of this effect was, in decreasing order, apple, pineapple, milky coffee, powdered green tea, and banana. With the artificial taste sensor, large changes of membrane potential in channel 1, caused by adsorption (CPA values, corresponding to a bitter aftertaste), were observed for Aminoreban EN but not for any of the flavors. There was a good correlation between the CPA values in channel 1 and the results of the human gustatory tests, indicating that the taste sensor is capable of evaluating not only the bitterness of Aminoreban EN itself but also the bitterness-suppressing effect of the five flavors, which contained many elements such as organic acids and flavor components, and the effect of dilution (by water) on this bitterness. Using regression analysis of data derived from the taste sensor and

  6. Quantitative photoacoustic assessment of red blood cell aggregation under pulsatile blood flow: experimental and theoretical approaches

    NASA Astrophysics Data System (ADS)

    Bok, Tae-Hoon; Hysi, Eno; Kolios, Michael C.

    2017-03-01

    In the present paper, the dependence of the photoacoustic (PA) assessment of pulsatile blood flow on the optical wavelength was investigated by means of experimental and theoretical approaches analyzing PA radiofrequency spectral parameters such as the spectral slope (SS) and mid-band fit (MBF). For the experimental approach, the pulsatile flow of human whole blood at 60 bpm was imaged using the VevoLAZR system (40-MHz linear-array probe, 700-900 nm illumination). For the theoretical approach, a Monte Carlo simulation of light transport into a layered tissue phantom and a Green's function based method for the PA wave generation were implemented for illumination wavelengths of 700, 750, 800, 850 and 900 nm. The SS and MBF for the experimental results were compared to the theoretical ones as a function of the illumination wavelength. The MBF increased with the optical wavelength in both theory and experiment. This was expected because the MBF is representative of the PA magnitude, and the PA signal from red blood cells (RBCs) depends on the molar extinction coefficient of oxyhemoglobin. On the other hand, the SS decreased with the wavelength, even though the RBC size (the absorber size, which is related to the SS) cannot depend on the illumination wavelength. This apparently conflicting result can be interpreted in terms of changes in the fluence pattern for different illumination wavelengths. The decrease of the SS with increasing illumination wavelength should be further investigated.
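
    The spectral slope and mid-band fit are conventionally obtained by a linear fit to the power spectrum (in dB) of an RF line over an analysis bandwidth. The sketch below shows that step on a synthetic RF line; the sampling rate and band edges are placeholders, and no claim is made that this matches the authors' processing chain.

    ```python
    # Minimal sketch: spectral slope (SS) and mid-band fit (MBF) from a linear
    # fit of a PA RF power spectrum (dB) over an analysis band. Synthetic data.
    import numpy as np

    fs = 200e6                                  # sampling rate, Hz (placeholder)
    rf = np.random.default_rng(1).normal(size=2048)   # stand-in PA RF line

    spectrum = np.abs(np.fft.rfft(rf)) ** 2
    freqs = np.fft.rfftfreq(rf.size, d=1 / fs)
    power_db = 10 * np.log10(spectrum + 1e-30)

    f_lo, f_hi = 10e6, 60e6                     # analysis bandwidth (placeholder)
    band = (freqs >= f_lo) & (freqs <= f_hi)

    slope, intercept = np.polyfit(freqs[band] / 1e6, power_db[band], deg=1)
    f_mid = (f_lo + f_hi) / 2 / 1e6
    ss = slope                                  # spectral slope, dB per MHz
    mbf = slope * f_mid + intercept             # fit value at band centre, dB
    print(f"SS = {ss:.3f} dB/MHz, MBF = {mbf:.1f} dB")
    ```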

  7. Biomacromolecular quantitative structure-activity relationship (BioQSAR): a proof-of-concept study on the modeling, prediction and interpretation of protein-protein binding affinity.

    PubMed

    Zhou, Peng; Wang, Congcong; Tian, Feifei; Ren, Yanrong; Yang, Chao; Huang, Jian

    2013-01-01

    Quantitative structure-activity relationship (QSAR), a regression modeling methodology that quantitatively establishes statistical correlations between structural features and apparent behavior for a series of congeneric molecules, has been widely used to evaluate the activity, toxicity and properties of various small-molecule compounds such as drugs, toxicants and surfactants. It is therefore surprising that such a useful technique has had only very limited application to biomacromolecules, even though solved 3D atomic-resolution structures of proteins, nucleic acids and their complexes have accumulated rapidly in recent decades. Here, we present a proof-of-concept paradigm for the modeling, prediction and interpretation of the binding affinity of 144 sequence-nonredundant, structure-available and affinity-known protein complexes (Kastritis et al. Protein Sci 20:482-491, 2011) using a biomacromolecular QSAR (BioQSAR) scheme. We demonstrate that the modeling performance and predictive power of BioQSAR are comparable to or even better than those of traditional knowledge-based strategies, mechanism-type methods and empirical scoring algorithms, while BioQSAR possesses certain additional features compared to the traditional methods, such as adaptability, interpretability, deep validation and high efficiency. The BioQSAR scheme could be readily modified to infer the biological behavior and functions of other biomacromolecules, if their X-ray crystal structures, NMR conformation assemblies or computationally modeled structures are available.

  8. The Gist of Delay of Gratification: Understanding and Predicting Problem Behaviors.

    PubMed

    Reyna, Valerie F; Wilhelms, Evan A

    2017-04-01

    Delay of gratification captures elements of temptation and self-denial that characterize real-life problems with money and other problem behaviors such as unhealthy risk taking. According to fuzzy-trace theory, decision makers mentally represent social values such as delay of gratification in a coarse but meaningful form of memory called "gist." Applying this theory, we developed a gist measure of delay of gratification that does not involve quantitative trade-offs (as delay discounting does) and hypothesize that this construct explains unique variance beyond sensation seeking and inhibition in accounting for problem behaviors. Across four studies, we examine this Delay-of-gratification Gist Scale by using principal components analyses and evaluating convergent and divergent validity with other potentially related scales such as Future Orientation, Propensity to Plan, Time Perspectives Inventory, Spendthrift-Tightwad, Sensation Seeking, Cognitive Reflection, Barratt Impulsiveness, and the Monetary Choice Questionnaire (delay discounting). The new 12-item measure captured a single dimension of delay of gratification, correlated as predicted with other scales, but accounted for unique variance in predicting such outcomes as overdrawing bank accounts, substance abuse, and overall subjective well-being. Results support a theoretical distinction between reward-related approach motivation, including sensation seeking, and inhibitory faculties, including cognitive reflection. However, individuals' agreement with the qualitative gist of delay of gratification, as expressed in many cultural traditions, could not be reduced to such dualist distinctions nor to quantitative conceptions of delay discounting, shedding light on mechanisms of self-control and risk taking.

  9. Three dimensional quantitative structure-toxicity relationship modeling and prediction of acute toxicity for organic contaminants to algae.

    PubMed

    Jin, Xiangqin; Jin, Minghao; Sheng, Lianxi

    2014-08-01

    Although numerous chemicals have been identified as having significant toxicological effects on aquatic organisms, there is still a lack of a reliable, high-throughput approach to evaluate, screen and monitor the presence of organic contaminants in aquatic systems. In the current study, we propose a synthetic pipeline to automatically model and predict the acute toxicity of chemicals to algae. In the procedure, a new alignment-free three dimensional (3D) structure characterization method is described and, with this method, several 3D quantitative structure-toxicity relationship (3D-QSTR) models were developed, of which two were found to exhibit strong internal fitting ability and high external predictive power. The best model was established by Gaussian process (GP) regression, which was further employed to perform extrapolation on a random compound library consisting of 1014 virtually generated substituted benzenes. It was found that (i) the number of substituents exerts only a slight influence on a chemical's toxicity, although low-substituted benzenes appear to have higher toxicity than highly substituted ones, and (ii) benzenes substituted by nitro groups and halogens exhibit high acute toxicity compared to those bearing other substituents such as methyl and carboxyl groups. Subsequently, several promising candidates suggested by the computational prediction were assayed using a standard algal growth inhibition test. Consequently, four substituted benzenes, namely 2,3-dinitrophenol, 2-chloro-4-nitroaniline, 1,2,3-trinitrobenzene and 3-bromophenol, were determined to have high acute toxicity to Scenedesmus obliquus, with EC50 values of 2.5±0.8, 10.5±2.1, 1.4±0.2 and 42.7±5.4 μmol/L, respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.
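
    As a hedged illustration of the modeling step only, the sketch below fits a Gaussian-process regressor from structural descriptors to a toxicity endpoint and reports held-out performance. The descriptors and targets are random placeholders, not the alignment-free 3D characterization or algal EC50 data of the abstract.

    ```python
    # Minimal sketch: Gaussian-process regression from structure descriptors to
    # a toxicity endpoint (e.g. pEC50), with a held-out test split. Synthetic data.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(2)
    X = rng.normal(size=(120, 20))              # 3D structure descriptors (placeholder)
    y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=120)  # synthetic pEC50

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    kernel = 1.0 * RBF(length_scale=np.ones(X.shape[1])) + WhiteKernel()
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_tr, y_tr)

    y_hat, y_std = gp.predict(X_te, return_std=True)   # mean prediction and uncertainty
    print("external R2:", round(r2_score(y_te, y_hat), 3))
    ```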

  10. Theoretical Prediction of Magnetism in C-doped TlBr

    NASA Astrophysics Data System (ADS)

    Zhou, Yuzhi; Haller, E. E.; Chrzan, D. C.

    2014-05-01

    We predict that C, N, and O dopants in TlBr can display large, localized magnetic moments. Density functional theory based electronic structure calculations show that the moments arise from partial filling of the crystal-field-split localized p states of the dopant atoms. A simple model is introduced to explain the magnitude of the moments.

  11. Quantitative structure-property relationships for octanol-water partition coefficients of polybrominated diphenyl ethers.

    PubMed

    Li, Linnan; Xie, Shaodong; Cai, Hao; Bai, Xuetao; Xue, Zhao

    2008-08-01

    Theoretical molecular descriptors were tested against logK(OW) values for polybrominated diphenyl ethers (PBDEs) using the partial least-squares regression method, which can analyze data with many variables and few observations. A quantitative structure-property relationship (QSPR) model was successfully developed with a high cross-validated Q²(cum) value of 0.961, indicating good predictive ability and stability of the model. The predictive power of the QSPR model was further cross-validated. The values of logK(OW) for PBDEs are mainly governed by molecular surface area, the energy of the lowest unoccupied molecular orbital and the net atomic charge on the oxygen atom. All these descriptors are discussed to interpret the partitioning mechanism of PBDE chemicals. The bulk property of the molecules represented by molecular surface area is the leading factor, and K(OW) values increase with increasing molecular surface area. A higher energy of the lowest unoccupied molecular orbital and a higher net atomic charge on the oxygen atom of PBDEs result in smaller K(OW). The energy of the lowest unoccupied molecular orbital and the net atomic charge on the PBDE oxygen also play important roles in the partitioning of PBDEs between octanol and water by influencing the interactions between PBDEs and solvent molecules.
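
    For readers unfamiliar with the cross-validated Q² reported here, the sketch below fits a PLS regression and computes a leave-one-out Q² in the usual way. Descriptor values and targets are random placeholders, not the PBDE data, and the number of latent variables is arbitrary.

    ```python
    # Minimal sketch: PLS-regression QSPR with a leave-one-out cross-validated Q².
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(3)
    X = rng.normal(size=(30, 8))                      # few observations, several descriptors
    y = 2.0 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(scale=0.2, size=30)  # synthetic logKow

    pls = PLSRegression(n_components=3)
    y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()

    press = np.sum((y - y_cv) ** 2)                   # predictive residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)
    q2 = 1 - press / ss_tot                           # cross-validated Q^2
    print("Q2 =", round(q2, 3))
    ```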

  12. Propagation studies using a theoretical ionosphere model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, M.K.

    1973-03-01

    The mid-latitude ionospheric and neutral atmospheric models are coupled with an advanced three-dimensional ray-tracing program for predicting wave propagation conditions and for studying to what extent the use of theoretical ionospheric models is practical. The Penn State MK 1 ionospheric model, the Mitra-Rowe D-region model, and Groves' neutral atmospheric model are used throughout this work to represent the real electron densities and collision frequencies. The Faraday rotation and differential Doppler velocities from satellites, the propagation modes for long-distance high-frequency propagation, the group delays for each mode, the ionospheric absorption, and the spatial loss are all predicted.

  13. Emotions predictably modify response times in the initiation of human motor actions: A meta-analytic review.

    PubMed

    Beatty, Garrett F; Cranley, Nicole M; Carnaby, Giselle; Janelle, Christopher M

    2016-03-01

    Emotions motivate individuals to attain appetitive goals and avoid aversive consequences. Empirical investigations have detailed how broad approach and avoidance orientations are reflected in fundamental movement attributes such as the speed, accuracy, and variability of motor actions. Several theoretical perspectives propose explanations for how emotional states influence the speed with which goal directed movements are initiated. These perspectives include biological predisposition, muscle activation, distance regulation, cognitive evaluation, and evaluative response coding accounts. A comprehensive review of literature and meta-analysis were undertaken to quantify empirical support for these theoretical perspectives. The systematic review yielded 34 studies that contained 53 independent experiments producing 128 effect sizes used to evaluate the predictions of existing theories. The central tenets of the biological predisposition (Hedges' g = -0.356), distance regulation (g = -0.293; g = 0.243), and cognitive evaluation (g = -0.249; g = -0.405; g = -0.174) accounts were supported. Partial support was also identified for the evaluative response coding (g = -0.255) framework. Our findings provide quantitative evidence that substantiates existing theoretical perspectives, and provide potential direction for conceptual integration of these independent perspectives. Recommendations for future empirical work in this area are discussed. (c) 2016 APA, all rights reserved.
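
    The effect sizes reported above are Hedges' g values, i.e. standardized mean differences with a small-sample correction. The sketch below shows that computation; the group means, standard deviations and sample sizes are placeholders, not values drawn from the review.

    ```python
    # Minimal sketch: Hedges' g (Cohen's d with small-sample correction).
    import math

    def hedges_g(m1, s1, n1, m2, s2, n2):
        """Standardized mean difference between two groups with Hedges' correction."""
        s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        d = (m1 - m2) / s_pooled                      # Cohen's d
        j = 1 - 3 / (4 * (n1 + n2) - 9)               # small-sample correction factor
        return d * j

    # Hypothetical reaction times (ms) for movements initiated toward pleasant
    # versus unpleasant cues; numbers are illustrative only.
    print(round(hedges_g(m1=412.0, s1=35.0, n1=24, m2=430.0, s2=38.0, n2=24), 3))
    ```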

  14. Predicting quantitative and qualitative values of recreation participation

    Treesearch

    Elwood L., Jr. Shafer; George Moeller

    1971-01-01

    If future recreation consumption and associated intangible values can be predicted, the problem of rapid decision making in recreation-resource management can be reduced, and the problems of implementing those decisions can be anticipated. Management and research responsibilities for meeting recreation demand are discussed, and proved methods for forecasting recreation...

  15. Evolution of phenotypic plasticity and environmental tolerance of a labile quantitative character in a fluctuating environment.

    PubMed

    Lande, R

    2014-05-01

    Quantitative genetic models of evolution of phenotypic plasticity are used to derive environmental tolerance curves for a population in a changing environment, providing a theoretical foundation for integrating physiological and community ecology with evolutionary genetics of plasticity and norms of reaction. Plasticity is modelled for a labile quantitative character undergoing continuous reversible development and selection in a fluctuating environment. If there is no cost of plasticity, a labile character evolves expected plasticity equalling the slope of the optimal phenotype as a function of the environment. This contrasts with previous theory for plasticity influenced by the environment at a critical stage of early development determining a constant adult phenotype on which selection acts, for which the expected plasticity is reduced by the environmental predictability over the discrete time lag between development and selection. With a cost of plasticity in a labile character, the expected plasticity depends on the cost and on the environmental variance and predictability averaged over the continuous developmental time lag. Environmental tolerance curves derived from this model confirm traditional assumptions in physiological ecology and provide new insights. Tolerance curve width increases with larger environmental variance, but can only evolve within a limited range. The strength of the trade-off between tolerance curve height and width depends on the cost of plasticity. Asymmetric tolerance curves caused by male sterility at high temperature are illustrated. A simple condition is given for a large transient increase in plasticity and tolerance curve width following a sudden change in average environment. © 2014 The Author. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.

  16. [A novel approach to NIR spectral quantitative analysis: semi-supervised least-squares support vector regression machine].

    PubMed

    Li, Lin; Xu, Shuo; An, Xin; Zhang, Lu-Da

    2011-10-01

    In near-infrared spectral quantitative analysis, the precision of the measured samples' chemical values sets the theoretical limit for the precision of quantitative analysis with mathematical models. However, the number of samples whose chemical values can be obtained accurately is small. Many models exclude samples without chemical values and consider only samples with chemical values when modeling the contents of sample compositions. To address this problem, a semi-supervised LS-SVR (S2 LS-SVR) model is proposed on the basis of LS-SVR, which can utilize samples without chemical values as well as those with chemical values. As with LS-SVR, training this model is equivalent to solving a linear system. Finally, samples of flue-cured tobacco were taken as the experimental material, and corresponding quantitative analysis models were constructed for four compositions (total sugar, reducing sugar, total nitrogen and nicotine) with PLS regression, LS-SVR and S2 LS-SVR. For the S2 LS-SVR model, the average relative errors between actual and predicted values for the four compositions are 6.62%, 7.56%, 6.11% and 8.20%, respectively, and the correlation coefficients are 0.9741, 0.9733, 0.9230 and 0.9486, respectively. Experimental results show the S2 LS-SVR model outperforms the other two, which verifies the feasibility and efficiency of the S2 LS-SVR model.
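
    To show why training "is equivalent to solving a linear system", here is a minimal sketch of plain (fully supervised) LS-SVR with an RBF kernel; the semi-supervised extension that also exploits unlabeled spectra is not reproduced. Data, kernel width and regularization are placeholders.

    ```python
    # Minimal sketch of plain LS-SVR: training reduces to one linear solve over
    # a kernel matrix. Synthetic data stand in for NIR spectra and sugar content.
    import numpy as np

    def rbf_kernel(A, B, gamma=0.1):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def lssvr_fit(X, y, C=10.0, gamma=0.1):
        n = X.shape[0]
        K = rbf_kernel(X, X, gamma)
        # Dual system: [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / C
        rhs = np.concatenate([[0.0], y])
        sol = np.linalg.solve(A, rhs)
        return sol[0], sol[1:]                      # bias b, dual weights alpha

    def lssvr_predict(X_new, X, alpha, b, gamma=0.1):
        return rbf_kernel(X_new, X, gamma) @ alpha + b

    rng = np.random.default_rng(4)
    X = rng.normal(size=(80, 30))                   # stand-in spectra
    y = X[:, :3].sum(axis=1) + rng.normal(scale=0.1, size=80)   # stand-in composition
    b, alpha = lssvr_fit(X, y)
    print(lssvr_predict(X[:3], X, alpha, b))
    ```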

  17. Within tree variation of lignin, extractives, and microfibril angle coupled with the theoretical and near infrared modeling of microfibril angle

    Treesearch

    Brian K. Via; chi L. So; Leslie H. Groom; Todd F. Shupe; michael Stine; Jan Wikaira

    2007-01-01

    A theoretical model was built predicting the relationship between microfibril angle and lignin content at the Angstrom (A) level. Both theoretical and statistical examination of experimental data supports a square root transformation of lignin to predict microfibril angle. The experimental material used came from 10 longleaf pine (Pinus palustris)...

  18. Theoretical Approaches in Evolutionary Ecology: Environmental Feedback as a Unifying Perspective.

    PubMed

    Lion, Sébastien

    2018-01-01

    Evolutionary biology and ecology have a strong theoretical underpinning, and this has fostered a variety of modeling approaches. A major challenge of this theoretical work has been to unravel the tangled feedback loop between ecology and evolution. This has prompted the development of two main classes of models. While quantitative genetics models jointly consider the ecological and evolutionary dynamics of a focal population, a separation of timescales between ecology and evolution is assumed by evolutionary game theory, adaptive dynamics, and inclusive fitness theory. As a result, theoretical evolutionary ecology tends to be divided among different schools of thought, with different toolboxes and motivations. My aim in this synthesis is to highlight the connections between these different approaches and clarify the current state of theory in evolutionary ecology. Central to this approach is to make explicit the dependence on environmental dynamics of the population and evolutionary dynamics, thereby materializing the eco-evolutionary feedback loop. This perspective sheds light on the interplay between environmental feedback and the timescales of ecological and evolutionary processes. I conclude by discussing some potential extensions and challenges to our current theoretical understanding of eco-evolutionary dynamics.

  19. Solubility advantage of amorphous pharmaceuticals: II. Application of quantitative thermodynamic relationships for prediction of solubility enhancement in structurally diverse insoluble pharmaceuticals.

    PubMed

    Murdande, Sharad B; Pikal, Michael J; Shanker, Ravi M; Bogner, Robin H

    2010-12-01

    To quantitatively assess the solubility advantage of amorphous forms of nine insoluble drugs with a wide range of physico-chemical properties utilizing a previously reported thermodynamic approach. Thermal properties of amorphous and crystalline forms of the drugs were measured using modulated differential scanning calorimetry. Equilibrium moisture sorption by the amorphous drugs was measured with a gravimetric moisture sorption analyzer, and ionization constants were determined from the pH-solubility profiles. Solubilities of crystalline and amorphous forms of the drugs were measured in de-ionized water at 25°C. Polarized microscopy was used to provide qualitative information about the crystallization of the amorphous drug in solution during solubility measurement. For three out of the nine compounds, the estimated solubility based on thermodynamic considerations was within two-fold of the experimental measurement. For one compound, the estimated solubility enhancement was lower than the experimental value, likely due to extensive ionization in solution and hence its sensitivity to error in the pKa measurement. For the remaining five compounds, the estimated solubility was about 4- to 53-fold higher than the experimental results. In all cases where the theoretical solubility estimates were significantly higher, the amorphous drug was observed to crystallize rapidly during the experimental determination of solubility, thus preventing an accurate experimental assessment of the solubility advantage. It has been demonstrated that the theoretical approach provides an accurate estimate of the maximum solubility enhancement of an amorphous drug relative to its crystalline form for structurally diverse insoluble drugs when recrystallization during dissolution is minimal.
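
    As a heavily simplified illustration of the free-energy route to a solubility ceiling, the sketch below approximates the crystal-to-amorphous free-energy difference with the Hoffman equation and converts it to a solubility ratio. The ionization and moisture-sorption corrections described in the abstract are omitted, and the melting point and enthalpy of fusion are placeholder numbers.

    ```python
    # Minimal sketch: upper-bound amorphous/crystalline solubility ratio from a
    # Hoffman-type free-energy difference. Corrections for ionization and water
    # sorption (used in the abstract's full treatment) are deliberately omitted.
    import math

    R = 8.314          # J/(mol K)
    T = 298.15         # temperature of the solubility measurement, K
    Tm = 440.0         # melting point, K (placeholder)
    dHm = 28_000.0     # enthalpy of fusion, J/mol (placeholder)

    dG = dHm * (Tm - T) * T / Tm**2        # Hoffman approximation at temperature T
    ratio = math.exp(dG / (R * T))         # theoretical solubility advantage
    print(f"predicted solubility advantage ~ {ratio:.1f}-fold")
    ```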

  20. Predicting human skin absorption of chemicals: development of a novel quantitative structure activity relationship.

    PubMed

    Luo, Wen; Medrek, Sarah; Misra, Jatin; Nohynek, Gerhard J

    2007-02-01

    The objective of this study was to construct and validate a quantitative structure-activity relationship model for skin absorption. Such models are valuable tools for screening and prioritization in safety and efficacy evaluation, and in risk assessment of drugs and chemicals. A database of 340 chemicals with percutaneous absorption data was assembled. Two models were derived from a training set consisting of 306 chemicals (90/10 random split). In addition to the experimental K(ow) values, over 300 2D and 3D atomic and molecular descriptors were analyzed using MDL's QsarIS computer program. Subsequently, the models were validated using both internal (leave-one-out) and external validation (test set) procedures. Using stepwise regression analysis, three molecular descriptors were determined to have significant statistical correlation with K(p) (R2 = 0.8225): logK(ow), X0 (a quantification of both molecular size and degree of skeletal branching), and SsssCH (a count of aromatic carbon groups). In conclusion, two models to estimate skin absorption were developed. Compared to other skin absorption QSAR models in the literature, our model incorporated more chemicals and explored a larger number of descriptors. Additionally, our models are reasonably predictive and have met both internal and external statistical validations.
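
    To make the final three-descriptor relation concrete, the sketch below fits log Kp on logKow, X0 and SsssCH by ordinary least squares. The descriptor values and resulting coefficients are synthetic placeholders, not the published model.

    ```python
    # Minimal sketch: ordinary-least-squares fit of log Kp on the three
    # descriptors named in the abstract. Synthetic data; not the published model.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 306                                   # training-set size from the abstract
    logKow = rng.normal(2.0, 1.5, n)
    X0 = rng.normal(10.0, 3.0, n)
    SsssCH = rng.integers(0, 8, n).astype(float)
    logKp = -2.7 + 0.7 * logKow - 0.06 * X0 - 0.1 * SsssCH + rng.normal(0, 0.3, n)

    A = np.column_stack([np.ones(n), logKow, X0, SsssCH])
    coef, *_ = np.linalg.lstsq(A, logKp, rcond=None)

    pred = A @ coef
    r2 = 1 - ((logKp - pred) ** 2).sum() / ((logKp - logKp.mean()) ** 2).sum()
    print("coefficients (intercept, logKow, X0, SsssCH):", np.round(coef, 3))
    print("R2 =", round(r2, 4))
    ```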

  1. Activation of a camptothecin prodrug by specific carboxylesterases as predicted by quantitative structure-activity relationship and molecular docking studies.

    PubMed

    Yoon, Kyoung Jin P; Krull, Erik J; Morton, Christopher L; Bornmann, William G; Lee, Richard E; Potter, Philip M; Danks, Mary K

    2003-11-01

    7-Ethyl-10-[4-(1-piperidino)-1-piperidino]carbonyloxycamptothecin (irinotecan, CPT-11) is a camptothecin prodrug that is metabolized by carboxylesterases (CE) to the active metabolite 7-ethyl-10-hydroxycamptothecin (SN-38), a topoisomerase I inhibitor. CPT-11 has shown encouraging antitumor activity against a broad spectrum of tumor types in early clinical trials, but hematopoietic and gastrointestinal toxicity limit its administration. To increase the therapeutic index of CPT-11 and to develop other prodrug analogues for enzyme/prodrug gene therapy applications, our laboratories propose to develop camptothecin prodrugs that will be activated by specific CEs. Specific analogues might then be predicted to be activated, for example, predominantly by human liver CE(hCE1), by human intestinal CE (hiCE), or in gene therapy approaches using a rabbit liver CE (rCE). This study describes a molecular modeling approach to relate the structure of rCE-activated camptothecin prodrugs with their biological activation. Comparative molecular field analysis, comparative molecular similarity index analysis, and docking studies were used to predict the biological activity of a 4-benzylpiperazine derivative of CPT-11 [7-ethyl-10-[4-(1-benzyl)-1-piperazino]carbonyloxycamptothecin (BP-CPT)] in U373MG glioma cell lines transfected with plasmids encoding rCE or hiCE. BP-CPT has been reported to be activated more efficiently than CPT-11 by a rat serum esterase activity; however, three-dimensional quantitative structure-activity relationship studies predicted that rCE would activate BP-CPT less efficiently than CPT-11. This was confirmed by both growth inhibition experiments and kinetic studies. The method is being used to design camptothecin prodrugs predicted to be activated by specific CEs.

  2. Synthetic cannabinoids: In silico prediction of the cannabinoid receptor 1 affinity by a quantitative structure-activity relationship model.

    PubMed

    Paulke, Alexander; Proschak, Ewgenij; Sommer, Kai; Achenbach, Janosch; Wunder, Cora; Toennes, Stefan W

    2016-03-14

    The number of new synthetic psychoactive compounds increases steadily. Among these psychoactive compounds, the synthetic cannabinoids (SCBs) are most popular and serve as a substitute for herbal cannabis. More than 600 of these substances already exist. For some SCBs the in vitro cannabinoid receptor 1 (CB1) affinity is known, but for the majority it is unknown. A quantitative structure-activity relationship (QSAR) model was developed which allows the determination of an SCB's affinity for CB1 (expressed as the binding constant, Ki) without reference substances. The chemically advanced template search descriptor was used for vector representation of the compound structures. The similarity between two molecules was calculated using the Feature-Pair Distribution Similarity. The Ki values were calculated using the Inverse Distance Weighting method. The prediction model was validated using a cross-validation procedure. The predicted Ki values of some new SCBs ranged from 20 (considerably higher affinity for CB1 than THC) to 468 (considerably lower affinity for CB1 than THC). The present QSAR model can serve as a simple, fast and cheap tool to get a first hint of the biological activity of new synthetic cannabinoids or of other new psychoactive compounds. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
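
    The general shape of the workflow, i.e. predicting Ki for a new compound by inverse-distance weighting over its similarity to reference compounds, is sketched below. The fingerprints are generic bit vectors and similarity is Tanimoto, standing in for the CATS descriptors and feature-pair distribution similarity used by the authors; all reference Ki values are placeholders.

    ```python
    # Minimal sketch: inverse-distance-weighted (IDW) Ki prediction from
    # molecular similarity to reference compounds with known Ki. Synthetic data.
    import numpy as np

    def tanimoto(a, b):
        both = np.logical_and(a, b).sum()
        either = np.logical_or(a, b).sum()
        return both / either if either else 0.0

    def idw_predict(fp_query, fps_ref, ki_ref, power=2.0):
        sims = np.array([tanimoto(fp_query, fp) for fp in fps_ref])
        dists = 1.0 - sims
        if np.any(dists == 0):                       # exact match: return its Ki
            return float(ki_ref[dists == 0].mean())
        w = 1.0 / dists**power
        return float(np.sum(w * ki_ref) / np.sum(w))

    rng = np.random.default_rng(6)
    fps_ref = rng.integers(0, 2, size=(50, 128)).astype(bool)   # reference fingerprints
    ki_ref = rng.uniform(5, 500, size=50)                       # known Ki values, nM
    fp_new = rng.integers(0, 2, size=128).astype(bool)          # new compound
    print("predicted Ki (nM):", round(idw_predict(fp_new, fps_ref, ki_ref), 1))
    ```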

  3. Theoretical Analysis and Design of Ultrathin Broadband Optically Transparent Microwave Metamaterial Absorbers

    PubMed Central

    Deng, Ruixiang; Li, Meiling; Muneer, Badar; Zhu, Qi; Shi, Zaiying; Song, Lixin; Zhang, Tao

    2018-01-01

    Optically Transparent Microwave Metamaterial Absorbers (OTMMAs) are of significant use in both civil and military fields. In this paper, an equivalent circuit model is adopted as a springboard to guide the design of an OTMMA. The physical model and absorption mechanisms of an ideal lightweight ultrathin OTMMA are comprehensively investigated. Both the theoretical value of the equivalent resistance and the quantitative relation between the equivalent inductance and equivalent capacitance are derived for design. Frequency-dependent characteristics of the theoretical equivalent resistance are also investigated. Based on this theoretical work, an effective and controllable design approach is proposed. To validate the approach, a wideband OTMMA is designed, fabricated, analyzed and tested. The results reveal that absorption above 90% can be achieved over the whole 6-18 GHz band. The fabricated OTMMA also has an optical transparency of up to 78% at 600 nm and is much thinner and lighter than its counterparts. PMID:29324686

  4. Theoretical Analysis and Design of Ultrathin Broadband Optically Transparent Microwave Metamaterial Absorbers.

    PubMed

    Deng, Ruixiang; Li, Meiling; Muneer, Badar; Zhu, Qi; Shi, Zaiying; Song, Lixin; Zhang, Tao

    2018-01-11

    Optically Transparent Microwave Metamaterial Absorbers (OTMMAs) are of significant use in both civil and military fields. In this paper, an equivalent circuit model is adopted as a springboard to guide the design of an OTMMA. The physical model and absorption mechanisms of an ideal lightweight ultrathin OTMMA are comprehensively investigated. Both the theoretical value of the equivalent resistance and the quantitative relation between the equivalent inductance and equivalent capacitance are derived for design. Frequency-dependent characteristics of the theoretical equivalent resistance are also investigated. Based on this theoretical work, an effective and controllable design approach is proposed. To validate the approach, a wideband OTMMA is designed, fabricated, analyzed and tested. The results reveal that absorption above 90% can be achieved over the whole 6-18 GHz band. The fabricated OTMMA also has an optical transparency of up to 78% at 600 nm and is much thinner and lighter than its counterparts.

  5. Quantitative Model to Predict Melts on the Ol-Opx Saturation Boundary during Mantle Melting: The Role of H2O

    NASA Astrophysics Data System (ADS)

    Andrews, A. L.; Grove, T. L.

    2014-12-01

    Two quantitative, empirical models are presented that predict mantle melt compositions in equilibrium with olivine (ol) + orthopyroxene (opx) ± spinel (sp) as a function of variable pressure and H2O content. The models consist of multiple linear regressions calibrated using new data from H2O-undersaturated primitive and depleted mantle lherzolite melting experiments as well as experimental literature data. The models investigate the roles of H2O, Pressure, 1-Mg# (1-[XMg/(XMg+XFe)]), NaK# ((Na2O+K2O)/(Na2O+K2O+CaO)), TiO2, and Cr2O3 on mantle melt compositions. Melts are represented by the pseudoternary endmembers Clinopyroxene (Cpx), Olivine (Ol), Plagioclase (Plag), and Quartz (Qz) of Tormey et al. (1987). Model A returns predictive equations for the four endmembers with identical predictor variables, whereas Model B chooses predictor variables for the four compositional endmember equations and temperature independently. We employ the use of Akaike Information Criteria (Akaike, 1974) to determine the best predictor variables from initial variables chosen through thermodynamic reasoning and by previous models. In both Models A and B, the coefficients for H2O show that increasing H2O drives the melt to more Qz normative space, as the Qz component increases by +0.012(3) per 1 wt.% H2O. The other endmember components decrease and are all three times less affected by H2O (Ol: -0.004(2); Cpx: -0.004(2); Plag: -0.004(3)). Consistent with previous models and experimental data, increasing pressure moves melt compositions to more Ol normative space at the expense of the Qz component. The models presented quantitatively determine the influence of H2O, Pressure, 1-Mg#, NaK#, TiO2, and Cr2O3 on mantle melts in equilibrium with ol+opx±sp; the equations presented can be used to predict melts of known mantle source compositions saturated in ol+opx±sp. References Tormey, Grove, & Bryan (1987), doi: 10.1007/BF00375227. Akaike (1974), doi: 10.1109/TAC.1974.1100705.
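
    The model-selection step described above (choosing predictor variables with the Akaike Information Criterion among multiple linear regressions) can be illustrated with the short sketch below. The predictor names follow the abstract, but the data are random placeholders rather than the lherzolite melting experiments, and the candidate formulas are arbitrary.

    ```python
    # Minimal sketch: comparing candidate multiple linear regressions for one
    # melt-composition endmember (here Qz) by AIC. Synthetic placeholder data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n = 60
    df = pd.DataFrame({
        "H2O": rng.uniform(0, 6, n),          # wt.%
        "P": rng.uniform(0.5, 3.0, n),        # pressure, GPa
        "oneMinusMg": rng.uniform(0.05, 0.3, n),
        "NaK": rng.uniform(0.05, 0.5, n),
        "TiO2": rng.uniform(0.2, 2.0, n),
        "Cr2O3": rng.uniform(0.0, 0.6, n),
    })
    df["Qz"] = 0.15 + 0.012 * df["H2O"] - 0.05 * df["P"] + rng.normal(0, 0.01, n)

    candidates = [
        "Qz ~ H2O + P",
        "Qz ~ H2O + P + oneMinusMg + NaK",
        "Qz ~ H2O + P + oneMinusMg + NaK + TiO2 + Cr2O3",
    ]
    for formula in candidates:
        fit = smf.ols(formula, data=df).fit()
        print(f"{formula:55s} AIC = {fit.aic:8.1f}")   # lower AIC is preferred
    ```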

  6. Emotion in Action: A Predictive Processing Perspective and Theoretical Synthesis

    PubMed Central

    Ridderinkhof, K. Richard

    2017-01-01

    Starting from a decidedly Frijdian perspective on emotion in action, we adopt neurocognitive theories of action control to analyze the mechanisms through which emotional action arises. Appraisal of events vis-à-vis concerns gives rise to a determinate motive to establish a specific state of the world; the pragmatic idea of the action’s effects incurs the valuation of action options and a change in action readiness in the form of incipient ideomotor capture of the selected action. Forward modeling of the sensory consequences of the selected action option allows for the evaluation and fine-tuning of anticipated action effects, which renders the emotional action impulsive yet purposive. This novel theoretical synthesis depicts the cornerstone principles for a mechanistic view on emotion in action. PMID:29098017

  7. New theoretical results in synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Bagrov, V. G.; Gitman, D. M.; Tlyachev, V. B.; Jarovoi, A. T.

    2005-11-01

    One of the remarkable features of the relativistic electron synchrotron radiation is its concentration in small angle Δ ≈ 1/γ (here γ-relativistic factor: γ = E/mc2, E energy, m electron rest mass, c light velocity) near rotation orbit plane [V.G. Bagrov, V.A. Bordovitsyn, V.G. Bulenok, V. Ya. Epp, Kinematical projection of pulsar synchrotron radiation profiles, in: Proceedings of IV ISTC Scientific Advisory Commitee Seminar on Basic Science in ISTC Aktivities, Akademgorodok, Novosibirsk, April 23 27, 2001, p. 293 300]. This theoretically predicted and experimentally confirmed feature is peculiar to total (spectrum summarized) radiating intensity. This angular distribution property has been supposed to be (at least qualitatively) conserved and for separate spectrum synchrotron radiation components. In the work of V.G. Bagrov, V.A. Bordovitsyn, V. Ch. Zhukovskii, Development of the theory of synchrotron radiation and related processes. Synchrotron source of JINR: the perspective of research, in: The Materials of the Second International Work Conference, Dubna, April 2 6, 2001, pp. 15 30 and in Angular dependence of synchrotron radiation intensity. http://lanl.arXiv.org/abs/physics/0209097, it is shown that the angular distribution of separate synchrotron radiation spectrum components demonstrates directly inverse tendency the angular distribution deconcentration relatively the orbit plane takes place with electron energy growth. The present work is devoted to detailed investigation of this situation. For exact quantitative estimation of angular concentration degree of synchrotron radiation the definition of radiation effective angle and deviation angle is proposed. For different polarization components of radiation the dependence of introduced characteristics was investigated as a functions of electron energy and number of spectrum component.

  8. Reality-Theoretical Models-Mathematics: A Ternary Perspective on Physics Lessons in Upper-Secondary School

    ERIC Educational Resources Information Center

    Hansson, Lena; Hansson, Örjan; Juter, Kristina; Redfors, Andreas

    2015-01-01

    This article discusses the role of mathematics during physics lessons in upper-secondary school. Mathematics is an inherent part of theoretical models in physics and makes powerful predictions of natural phenomena possible. Ability to use both theoretical models and mathematics is central in physics. This paper takes as a starting point that the…

  9. Support vector regression-guided unravelling: antioxidant capacity and quantitative structure-activity relationship predict reduction and promotion effects of flavonoids on acrylamide formation

    PubMed Central

    Huang, Mengmeng; Wei, Yan; Wang, Jun; Zhang, Yu

    2016-01-01

    We used the support vector regression (SVR) approach to predict and unravel the reduction/promotion effects of characteristic flavonoids on acrylamide formation under a low-moisture Maillard reaction system. Results demonstrated the reduction/promotion effects of flavonoids at addition levels of 1–10000 μmol/L. The maximal inhibition rates (51.7%, 68.8% and 26.1%) and promotion rates (57.7%, 178.8% and 27.5%) caused by flavones, flavonols and isoflavones were observed at addition levels of 100 μmol/L and 10000 μmol/L, respectively. The reduction/promotion effects were closely related to the change of trolox equivalent antioxidant capacity (ΔTEAC) and were well predicted by triple ΔTEAC measurements via SVR models (R: 0.633–0.900). Flavonols exhibit stronger effects on acrylamide formation than flavones and isoflavones as well as their O-glycoside derivatives, which may be attributed to the number and position of phenolic and 3-enolic hydroxyls. The reduction/promotion effects were also well predicted using optimized quantitative structure-activity relationship (QSAR) descriptors and SVR models (R: 0.926–0.994). Compared to artificial neural network and multi-linear regression models, SVR models exhibited better fitting performance for both the TEAC-dependent and the QSAR-descriptor-dependent prediction tasks. These observations demonstrate that SVR models are competent tools for predicting these effects and for informing the future use of natural antioxidants to decrease acrylamide formation. PMID:27586851

  10. Support vector regression-guided unravelling: antioxidant capacity and quantitative structure-activity relationship predict reduction and promotion effects of flavonoids on acrylamide formation

    NASA Astrophysics Data System (ADS)

    Huang, Mengmeng; Wei, Yan; Wang, Jun; Zhang, Yu

    2016-09-01

    We used the support vector regression (SVR) approach to predict and unravel the reduction/promotion effects of characteristic flavonoids on acrylamide formation under a low-moisture Maillard reaction system. Results demonstrated the reduction/promotion effects of flavonoids at addition levels of 1-10000 μmol/L. The maximal inhibition rates (51.7%, 68.8% and 26.1%) and promotion rates (57.7%, 178.8% and 27.5%) caused by flavones, flavonols and isoflavones were observed at addition levels of 100 μmol/L and 10000 μmol/L, respectively. The reduction/promotion effects were closely related to the change of trolox equivalent antioxidant capacity (ΔTEAC) and were well predicted by triple ΔTEAC measurements via SVR models (R: 0.633-0.900). Flavonols exhibit stronger effects on acrylamide formation than flavones and isoflavones as well as their O-glycoside derivatives, which may be attributed to the number and position of phenolic and 3-enolic hydroxyls. The reduction/promotion effects were also well predicted using optimized quantitative structure-activity relationship (QSAR) descriptors and SVR models (R: 0.926-0.994). Compared to artificial neural network and multi-linear regression models, SVR models exhibited better fitting performance for both the TEAC-dependent and the QSAR-descriptor-dependent prediction tasks. These observations demonstrate that SVR models are competent tools for predicting these effects and for informing the future use of natural antioxidants to decrease acrylamide formation.

  11. A Theoretical Model for Predicting Residual Stress Generation in Fabrication Process of Double-Ceramic-Layer Thermal Barrier Coating System.

    PubMed

    Song, Yan; Wu, Weijie; Xie, Feng; Liu, Yilun; Wang, Tiejun

    2017-01-01

    Residual stress arising in the fabrication process of a Double-Ceramic-Layer Thermal Barrier Coating System (DCL-TBCs) has a significant effect on its quality and reliability. In this work, based on the practical fabrication process of DCL-TBCs and on force and moment equilibrium, a theoretical model was first proposed to predict residual stress generation during fabrication, incorporating the temperature-dependent material properties of DCL-TBCs. A finite element method (FEM) analysis was then carried out to verify the theoretical model. Some important geometric parameters of DCL-TBCs, such as the thickness ratio of the yttria-stabilized zirconia (YSZ, ZrO2-8%Y2O3) layer to the lanthanum zirconate (LZ, La2Zr2O7) layer, are adjustable over a wide range during fabrication and have a remarkable effect on performance; therefore, the effect of this thickness ratio on residual stress generation in the fabrication of DCL-TBCs has been systematically studied. In addition, the effect of thermal spray treatments, such as pre-heating, on residual stress generation has also been studied. It is found that the final residual stress mainly arises during the cooling-down stage of DCL-TBC fabrication. Increasing the pre-heating temperature can markedly decrease the magnitude of the residual stresses in the LZ layer, the YSZ layer and the substrate. With an increase in the thickness ratio of the YSZ layer to the LZ layer, the magnitudes of the residual stresses in the LZ and YSZ layers increase while the residual stress in the substrate decreases.

  12. Elastic anisotropy of layered rocks: Ultrasonic measurements of plagioclase-biotite-muscovite (sillimanite) gneiss versus texture-based theoretical predictions (effective media modeling)

    NASA Astrophysics Data System (ADS)

    Ivankina, T. I.; Zel, I. Yu.; Lokajicek, T.; Kern, H.; Lobanov, K. V.; Zharikov, A. V.

    2017-08-01

    In this paper we present experimental and theoretical studies on a highly anisotropic layered rock sample characterized by alternating layers of biotite and muscovite (retrogressed from sillimanite) and of plagioclase and quartz, respectively. We applied two different experimental methods to determine seismic anisotropy at pressures up to 400 MPa: (1) measurement of P- and S-wave phase velocities on a cube in three foliation-related orthogonal directions and (2) measurement of P-wave group velocities on a sphere in 132 directions. The combination of the spatial distribution of P-wave velocities on the sphere (converted to phase velocities) with S-wave velocities in three orthogonal structural directions on the cube made it possible to calculate the bulk elastic moduli of the anisotropic rock sample. On the basis of the crystallographic preferred orientations (CPOs) of the major minerals obtained by time-of-flight neutron diffraction, effective media modeling was performed using different inclusion methods and averaging procedures. A nonlinear approximation of the P-wave velocity-pressure relation was applied to estimate the mineral matrix properties and the orientation distribution of microcracks. Comparison of theoretical calculations of the elastic properties of the mineral matrix with those derived from the nonlinear approximation showed discrepancies in elastic moduli and P-wave velocities of about 10%. The observed discrepancies between the effective media modeling and the ultrasonic velocity data are a consequence of the inhomogeneous structure of the sample and the inability to perform a long-wavelength approximation. Furthermore, small differences were observed between the elastic moduli predicted by the different theoretical models, which incorporate specific fabric characteristics such as crystallographic texture, grain shape and layering. It is shown that the bulk elastic anisotropy of the sample is basically controlled by the CPO of biotite and muscovite and their volume

  13. Quantitative prediction of ionization effect on human skin permeability.

    PubMed

    Baba, Hiromi; Ueno, Yusuke; Hashida, Mitsuru; Yamashita, Fumiyoshi

    2017-04-30

    Although skin permeability of an active ingredient can be severely affected by its ionization in a dose solution, most of the existing prediction models cannot predict such impacts. To provide reliable predictors, we curated a novel large dataset of in vitro human skin permeability coefficients for 322 entries comprising chemically diverse permeants whose ionization fractions can be calculated. Subsequently, we generated thousands of computational descriptors, including LogD (octanol-water distribution coefficient at a specific pH), and analyzed the dataset using nonlinear support vector regression (SVR) and Gaussian process regression (GPR) combined with greedy descriptor selection. The SVR model was slightly superior to the GPR model, with externally validated squared correlation coefficient, root mean square error, and mean absolute error values of 0.94, 0.29, and 0.21, respectively. These models indicate that Log D is effective for a comprehensive prediction of ionization effects on skin permeability. In addition, the proposed models satisfied the statistical criteria endorsed in recent model validation studies. These models can evaluate virtually generated compounds at any pH; therefore, they can be used for high-throughput evaluations of numerous active ingredients and optimization of their skin permeability with respect to permeant ionization. Copyright © 2017 Elsevier B.V. All rights reserved.
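
    The "greedy descriptor selection" wrapped around a nonlinear regressor, as described above, can be illustrated with the forward-selection sketch below, scored by cross-validation around an SVR. The descriptor matrix and target are random placeholders; in the authors' setting one column would correspond to LogD at the relevant pH, and GPR could be substituted for SVR.

    ```python
    # Minimal sketch: greedy forward descriptor selection around an SVR,
    # scored by 5-fold cross-validated R2. Synthetic placeholder data.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(8)
    X = rng.normal(size=(322, 40))                      # descriptor matrix (placeholder)
    y = X[:, 0] - 0.6 * X[:, 3] + rng.normal(scale=0.3, size=322)  # synthetic log kp

    def cv_r2(cols):
        model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
        return cross_val_score(model, X[:, cols], y, cv=5, scoring="r2").mean()

    selected, best = [], -np.inf
    while len(selected) < 5:
        scores = {j: cv_r2(selected + [j]) for j in range(X.shape[1]) if j not in selected}
        j_best, s_best = max(scores.items(), key=lambda kv: kv[1])
        if s_best <= best:                              # stop when no descriptor helps
            break
        selected.append(j_best)
        best = s_best
        print(f"added descriptor {j_best}, CV R2 = {best:.3f}")
    ```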

  14. Quantitative surface topography determination by Nomarski reflection microscopy. 2: Microscope modification, calibration, and planar sample experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartman, J.S.; Gordon, R.L.; Lessor, D.L.

    1980-09-01

    The application of reflective Nomarski differential interference contrast microscopy for the determination of quantitative sample topography data is presented. The discussion includes a review of key theoretical results presented previously plus the experimental implementation of the concepts using a commercial Nomarski microscope. The experimental work included the modification and characterization of a commercial microscope to allow its use for obtaining quantitative sample topography data. System usage for the measurement of slopes on flat planar samples is also discussed. The discussion has been designed to provide the theoretical basis, a physical insight, and a cookbook procedure for implementation to allow these results to be of value to both those interested in the microscope theory and its practical usage in the metallography laboratory.

  15. Future missions studies: Combining Schatten's solar activity prediction model with a chaotic prediction model

    NASA Technical Reports Server (NTRS)

    Ashrafi, S.

    1991-01-01

    K. Schatten (1991) recently developed a method for combining his prediction model with our chaotic model. The philosophy behind this combined model and his method of combination is explained. Because the Schatten solar prediction model (KS) uses a dynamo to mimic solar dynamics, accurate prediction is limited to long-term solar behavior (10 to 20 years). The chaotic prediction model (SA) uses the recently developed techniques of nonlinear dynamics to predict solar activity. It can be used to predict activity only up to the horizon. In theory, the chaotic prediction should be several orders of magnitude better than statistical predictions up to that horizon; beyond the horizon, chaotic predictions would theoretically be just as good as statistical predictions. Therefore, chaos theory puts a fundamental limit on predictability.
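
    The statement that chaos theory puts a fundamental limit on predictability can be made concrete with the usual Lyapunov-time argument: an initial error delta0 grows roughly as delta0*exp(lambda*t), so the useful forecast horizon is about ln(Delta/delta0)/lambda. The numbers below are illustrative assumptions, not parameters of the KS or SA models.

    ```python
    # Generic illustration of a chaotic prediction horizon set by the largest
    # Lyapunov exponent.  All numerical values are assumed for illustration.
    import numpy as np

    lyapunov = 0.5      # 1/yr, assumed largest Lyapunov exponent of solar activity
    delta0 = 0.01       # assumed relative error in the initial condition
    tolerance = 1.0     # relative error at which the forecast is considered useless

    t_horizon = np.log(tolerance / delta0) / lyapunov
    print("predictability horizon ~ %.1f years" % t_horizon)   # ~9.2 yr here
    ```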

  16. Effects of the Forecasting Methods, Precipitation Character, and Satellite Resolution on the Predictability of Short-Term Quantitative Precipitation Nowcasting (QPN) from a Geostationary Satellite.

    PubMed

    Liu, Yu; Xi, Du-Gang; Li, Zhao-Liang; Ji, Wei

    2015-01-01

    The prediction of short-term quantitative precipitation nowcasting (QPN) from consecutive geostationary satellite images has important implications for hydro-meteorological modeling and forecasting. However, systematic analysis of the predictability of QPN is limited. The objective of this study is to evaluate the effects of the forecasting method, precipitation character, and satellite resolution on the predictability of QPN, using images from the Chinese geostationary meteorological satellite Fengyun-2F (FY-2F), which cover all intensive observations since its launch, a total of only approximately 10 days. In the first step, three methods were compared to evaluate the performance of the QPN methods: a pixel-based QPN using the maximum correlation method (PMC); the Horn-Schunck optical-flow scheme (PHS); and the Pyramid Lucas-Kanade Optical Flow method (PPLK), which is newly proposed here. Subsequently, the effect of the precipitation system was examined using 2338 images from 8 precipitation periods. Then, the resolution dependence was demonstrated by analyzing the QPN at six spatial resolutions (from 0.1 to 0.6). The results show that the PPLK improves the predictability of QPN, performing better than the other methods. The predictability of QPN is significantly determined by the precipitation system, and a coarse satellite spatial resolution reduces the predictability of QPN.
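
    As a rough sketch of extrapolation-based nowcasting in the spirit of the optical-flow methods compared here, the snippet below estimates a dense motion field between two consecutive rain-rate grids with OpenCV's Farneback algorithm (a stand-in, not the paper's PPLK implementation) and advects the latest field one step forward.

    ```python
    # Sketch: one-step extrapolation nowcasting by dense optical flow.
    # rain_prev and rain_curr are 2-D numpy arrays of rain rate, assumed to
    # have been derived from consecutive satellite images beforehand.
    import cv2
    import numpy as np

    def nowcast_one_step(rain_prev, rain_curr):
        f0 = cv2.normalize(rain_prev, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        f1 = cv2.normalize(rain_curr, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        flow = cv2.calcOpticalFlowFarneback(f0, f1, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = rain_curr.shape
        gx, gy = np.meshgrid(np.arange(w), np.arange(h))
        # semi-Lagrangian advection: sample the current field upstream of the flow
        map_x = (gx - flow[..., 0]).astype(np.float32)
        map_y = (gy - flow[..., 1]).astype(np.float32)
        return cv2.remap(rain_curr.astype(np.float32), map_x, map_y,
                         interpolation=cv2.INTER_LINEAR)

    # forecast = nowcast_one_step(field_t_minus_1, field_t)
    ```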

  17. Effects of the Forecasting Methods, Precipitation Character, and Satellite Resolution on the Predictability of Short-Term Quantitative Precipitation Nowcasting (QPN) from a Geostationary Satellite

    PubMed Central

    Liu, Yu; Xi, Du-Gang; Li, Zhao-Liang; Ji, Wei

    2015-01-01

    The prediction of short-term quantitative precipitation nowcasting (QPN) from consecutive geostationary satellite images has important implications for hydro-meteorological modeling and forecasting. However, systematic analysis of the predictability of QPN is limited. The objective of this study is to evaluate the effects of the forecasting method, precipitation character, and satellite resolution on the predictability of QPN, using images from the Chinese geostationary meteorological satellite Fengyun-2F (FY-2F), which cover all intensive observations since its launch, a total of only approximately 10 days. In the first step, three methods were compared to evaluate the performance of the QPN methods: a pixel-based QPN using the maximum correlation method (PMC); the Horn-Schunck optical-flow scheme (PHS); and the Pyramid Lucas-Kanade Optical Flow method (PPLK), which is newly proposed here. Subsequently, the effect of the precipitation system was examined using 2338 images from 8 precipitation periods. Then, the resolution dependence was demonstrated by analyzing the QPN at six spatial resolutions (from 0.1 to 0.6). The results show that the PPLK improves the predictability of QPN, performing better than the other methods. The predictability of QPN is significantly determined by the precipitation system, and a coarse satellite spatial resolution reduces the predictability of QPN. PMID:26447470

  18. Experimental and theoretical study of magnetohydrodynamic ship models.

    PubMed

    Cébron, David; Viroulet, Sylvain; Vidal, Jérémie; Masson, Jean-Paul; Viroulet, Philippe

    2017-01-01

    Magnetohydrodynamic (MHD) ships represent a clear demonstration of the Lorentz force in fluids, which explains the number of student practicals and exercises described on the web. However, the related literature is rather specific and no complete comparison between theory and typical small-scale experiments is currently available. This work provides, in a self-consistent framework, a detailed presentation of the relevant theoretical equations for small MHD ships and experimental measurements for future benchmarks. Theoretical results from the literature are adapted to these simple battery/magnet-powered ships moving on salt water. Comparisons between theory and experiments are performed to validate each theoretical step, such as the Tafel and Kohlrausch laws, or the predicted ship speed. A successful agreement is obtained without any adjustable parameter. Finally, based on these results, an optimal design is deduced from the theory. This work therefore provides a solid theoretical and experimental ground for small-scale MHD ships, by presenting in detail several approximations and how they affect the boat efficiency. Moreover, the theory is general enough to be adapted to other contexts, such as large-scale ships or industrial flow measurement techniques.
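
    The predicted ship speed mentioned above ultimately comes from balancing the Lorentz thrust on the current-carrying salt water against hull drag; a back-of-the-envelope sketch of that balance is given below, with all numbers as illustrative assumptions rather than the paper's measured values.

    ```python
    # Sketch: terminal speed of a small MHD boat from a thrust/drag balance.
    #   Lorentz thrust on the conducting channel:  F = I * L * B
    #   Quadratic hull drag:                       D = 0.5 * rho * Cd * A * v**2
    #   Terminal speed:                            v = sqrt(2 F / (rho * Cd * A))
    # All parameter values below are assumptions for illustration.
    import numpy as np

    I = 1.0       # electrode current, A
    L = 0.05      # electrode spacing, m
    B = 0.4       # magnetic field in the channel, T (e.g. NdFeB magnets)
    rho = 1025.0  # salt water density, kg/m^3
    Cd = 1.0      # drag coefficient
    A = 2e-3      # wetted frontal area, m^2

    thrust = I * L * B
    v = np.sqrt(2.0 * thrust / (rho * Cd * A))
    print("thrust = %.3f N, terminal speed ~ %.1f cm/s" % (thrust, 100 * v))
    ```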

  19. Experimental and theoretical study of magnetohydrodynamic ship models

    PubMed Central

    Viroulet, Sylvain; Vidal, Jérémie; Masson, Jean-Paul; Viroulet, Philippe

    2017-01-01

    Magnetohydrodynamic (MHD) ships represent a clear demonstration of the Lorentz force in fluids, which explains the number of student practicals and exercises described on the web. However, the related literature is rather specific and no complete comparison between theory and typical small-scale experiments is currently available. This work provides, in a self-consistent framework, a detailed presentation of the relevant theoretical equations for small MHD ships and experimental measurements for future benchmarks. Theoretical results from the literature are adapted to these simple battery/magnet-powered ships moving on salt water. Comparisons between theory and experiments are performed to validate each theoretical step, such as the Tafel and Kohlrausch laws, or the predicted ship speed. A successful agreement is obtained without any adjustable parameter. Finally, based on these results, an optimal design is deduced from the theory. This work therefore provides a solid theoretical and experimental ground for small-scale MHD ships, by presenting in detail several approximations and how they affect the boat efficiency. Moreover, the theory is general enough to be adapted to other contexts, such as large-scale ships or industrial flow measurement techniques. PMID:28665941

  20. Comparison of prediction methods for octanol-air partition coefficients of diverse organic compounds.

    PubMed

    Fu, Zhiqiang; Chen, Jingwen; Li, Xuehua; Wang, Ya'nan; Yu, Haiying

    2016-04-01

    The octanol-air partition coefficient (KOA) is needed for assessing multimedia transport and bioaccumulability of organic chemicals in the environment. As experimental determination of KOA for various chemicals is costly and laborious, development of KOA estimation methods is necessary. We investigated three methods for KOA prediction: conventional quantitative structure-activity relationship (QSAR) models based on molecular structural descriptors, group contribution models based on atom-centered fragments, and a novel model that predicts KOA via the solvation free energy from the air to the octanol phase (ΔGO(0)), with a collection of 939 experimental KOA values for 379 compounds at different temperatures (263.15-323.15 K) as validation or training sets. The developed models were evaluated against the OECD guidelines on QSAR model validation and applicability domain (AD) description. Results showed that although the ΔGO(0) model is theoretically sound and has a broad AD, its prediction accuracy is the poorest. The QSAR models perform better than the group contribution models, and have similar predictability and accuracy to the conventional method that estimates KOA from the octanol-water partition coefficient and Henry's law constant. One QSAR model, which can predict KOA at different temperatures, was recommended for application, e.g., to assess the long-range transport potential of chemicals. Copyright © 2016 Elsevier Ltd. All rights reserved.
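
    The conventional method referred to above estimates KOA from the octanol-water partition coefficient and the Henry's law constant via the air-water partition coefficient, i.e. KOA = KOW/KAW with KAW = H/(RT). A minimal sketch of that relation follows, with illustrative values rather than data from the paper's training set.

    ```python
    # Sketch: conventional estimate of the octanol-air partition coefficient.
    #   K_AW = H / (R * T)       dimensionless air-water partition coefficient
    #   K_OA = K_OW / K_AW       i.e.  log KOA = log KOW - log KAW
    # The chemical-specific values below are assumptions for illustration.
    import math

    R = 8.314          # Pa m3 / (mol K)
    T = 298.15         # K
    H = 13.0           # Henry's law constant, Pa m3/mol (assumed)
    log_kow = 4.5      # octanol-water partition coefficient, log10 (assumed)

    k_aw = H / (R * T)
    log_koa = log_kow - math.log10(k_aw)
    print("log KAW = %.2f, log KOA = %.2f" % (math.log10(k_aw), log_koa))
    ```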

  1. Combined experimental and theoretical investigation on an alkaloid-Dimethylisoborreverine

    NASA Astrophysics Data System (ADS)

    Singh, Swapnil; Singh, Harshita; Karthick, T.; Agarwal, Parag; Erande, Rohan D.; Dethe, Dattatraya H.; Tandon, Poonam

    2016-01-01

    A combined experimental (FT-IR, 1H and 13C NMR) and theoretical approach is used to study the structure and properties of the antimalarial drug dimethylisoborreverine (DMIB). Conformational analysis has been performed by plotting a one-dimensional potential energy curve computed using density functional theory (DFT) with the B3LYP/6-31G method, and predicted conformer A1 as the most stable conformer. After full geometry optimization, harmonic wavenumbers were computed for conformer A1 at the DFT/B3LYP/6-311++G(d,p) level. A complete assignment of all the vibrational modes has been performed on the basis of the potential energy distribution (PED), and the theoretical results were found to be in good agreement with the observed data. To predict the solvent effect, the UV-Vis spectra were calculated in different solvents with the polarizable continuum model using the TD-DFT method. Molecular docking studies were performed to test the biological activity of the sample using the SWISSDOCK web server and Hex 8.0.0 software. The molecular electrostatic potential (MESP) was plotted to identify the reactive sites of the molecule. Natural bond orbital (NBO) analysis was performed to gain deeper insight into intramolecular charge transfer. Thermodynamic parameters were calculated to predict the direction of the chemical reaction.

  2. An appraisal of theoretical approaches to examining behaviours in relation to Human Papillomavirus (HPV) vaccination of young women

    PubMed Central

    Batista Ferrer, Harriet; Audrey, Suzanne; Trotter, Caroline; Hickman, Matthew

    2015-01-01

    Background Interventions to increase uptake of Human Papillomavirus (HPV) vaccination by young women may be more effective if they are underpinned by an appropriate theoretical model or framework. The aims of this review were to describe the theoretical models or frameworks used to explain behaviours in relation to HPV vaccination of young women, and to consider the appropriateness of the theoretical models or frameworks used for informing the development of interventions to increase uptake. Methods Primary studies were identified through a comprehensive search of databases from inception to December 2013. Results Thirty-four relevant studies were identified, of which 31 incorporated psychological health behaviour models or frameworks and three used socio-cultural models or theories. The primary studies used a variety of approaches to measure a diverse range of outcomes in relation to behaviours of professionals, parents, and young women. The majority appeared to use theory appropriately throughout. About half of the quantitative studies presented data in relation to goodness-of-fit tests and the proportion of the variability in the data. Conclusion Due to diverse approaches and inconsistent findings across studies, the current contribution of theory to understanding and promoting HPV vaccination uptake is difficult to assess. Ecological frameworks encourage the integration of individual and social approaches by encouraging exploration of the intrapersonal, interpersonal, organisational, community and policy levels when examining public health issues. Given the small number of studies using such an approach, combined with the importance of these factors in predicting behaviour, more research in this area is warranted. PMID:26314783

  3. Reference dosimetry at the Australian Synchrotron's imaging and medical beamline using free-air ionization chamber measurements and theoretical predictions of air kerma rate and half value layer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crosbie, Jeffrey C.; Rogers, Peter A. W.; Stevenson, Andrew W.

    2013-06-15

    Purpose: Novel, preclinical radiotherapy modalities are being developed at synchrotrons around the world, most notably stereotactic synchrotron radiation therapy and microbeam radiotherapy at the European Synchrotron Radiation Facility in Grenoble, France. The imaging and medical beamline (IMBL) at the Australian Synchrotron has recently become available for preclinical radiotherapy and imaging research, with clinical trials a distinct possibility in the coming years. The aim of this present study was to accurately characterize the synchrotron-generated x-ray beam for the purposes of air kerma-based absolute dosimetry. Methods: The authors used a theoretical model of the energy spectrum from the wiggler source and validated this model by comparing the transmission through copper absorbers (0.1-3.0 mm) against real measurements conducted at the beamline. The authors used a low energy free air ionization chamber (LEFAC) from the Australian Radiation Protection and Nuclear Safety Agency and a commercially available free air chamber (ADC-105) for the measurements. The dimensions of these two chambers are different from one another, requiring careful consideration of correction factors. Results: Measured and calculated half value layer (HVL) and air kerma rates differed by less than 3% for the LEFAC when the ion chamber readings were corrected for electron energy loss and ion recombination. The agreement between measured and predicted air kerma rates was less satisfactory for the ADC-105 chamber, however. The LEFAC and ADC measurements produced a first half value layer of 0.405 ± 0.015 and 0.412 ± 0.016 mm Cu, respectively, compared to the theoretical prediction of 0.427 ± 0.012 mm Cu. The theoretical model based upon a spectrum calculator derived a mean beam energy of 61.4 keV with a first half value layer of approximately 30 mm in water. Conclusions: The authors showed in this study their ability to verify the predicted air kerma rate and x
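
    As a rough illustration of how a first half-value layer is extracted from copper transmission measurements, the sketch below fits a single-exponential attenuation law to synthetic transmission data and solves for the thickness giving 50% transmission; a polychromatic beam needs a more careful treatment, and the data points are assumptions, not the beamline measurements.

    ```python
    # Sketch: first half-value layer (HVL) from copper transmission data.
    # Transmission is fitted with T(x) = exp(-mu * x); HVL = ln(2) / mu.
    # The data points are illustrative, not the measured beamline values.
    import numpy as np
    from scipy.optimize import curve_fit

    thickness = np.array([0.1, 0.3, 0.5, 1.0, 2.0, 3.0])           # mm Cu
    transmission = np.array([0.85, 0.62, 0.45, 0.22, 0.06, 0.02])  # assumed

    def atten(x, mu):
        return np.exp(-mu * x)

    mu, _ = curve_fit(atten, thickness, transmission, p0=[1.0])
    hvl = np.log(2.0) / mu[0]
    print("mu = %.2f /mm, first HVL = %.3f mm Cu" % (mu[0], hvl))
    ```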

  4. Propagation studies using a theoretical ionosphere model

    NASA Technical Reports Server (NTRS)

    Lee, M.

    1973-01-01

    The mid-latitude ionospheric and neutral atmospheric models are coupled with an advanced three dimensional ray tracing program to see what success would be obtained in predicting the wave propagation conditions and to study to what extent the use of theoretical ionospheric models is practical. The Penn State MK 1 ionospheric model, the Mitra-Rowe D region model, and the Groves' neutral atmospheric model are used throughout this work to represent the real electron densities and collision frequencies. The Faraday rotation and differential Doppler velocities from satellites, the propagation modes for long distance high frequency propagation, the group delays for each mode, the ionospheric absorption, and the spatial loss are all predicted.

  5. Usefulness of a semi-quantitative procalcitonin test and the A-DROP Japanese prognostic scale for predicting mortality among adults hospitalized with community-acquired pneumonia.

    PubMed

    Kasamatsu, Yu; Yamaguchi, Toshimasa; Kawaguchi, Takashi; Tanaka, Nagaaki; Oka, Hiroko; Nakamura, Tomoyuki; Yamagami, Keiko; Yoshioka, Katsunobu; Imanishi, Masahito

    2012-02-01

    The solid-phase immunoassay, semi-quantitative procalcitonin (PCT) test (B R A H M S PCT-Q) can be used to rapidly categorize PCT levels into four grades. However, the usefulness of this kit for determining the prognosis of adult patients with community-acquired pneumonia (CAP) is unclear. A prospective study was conducted in two Japanese hospitals to evaluate the usefulness of this PCT test in determining the prognosis of adult patients with CAP. The accuracy of the age, dehydration, respiratory failure, orientation disturbance, pressure (A-DROP) scale proposed by the Japanese Respiratory Society for prediction of mortality due to CAP was also investigated. Hospitalized CAP patients (n = 226) were enrolled in the study. Comprehensive examinations were performed to determine PCT and CRP concentrations, disease severity based on the A-DROP, pneumonia severity index (PSI) and confusion, urea, respiratory rate, blood pressure, age ≥65 (CURB-65) scales and the causative pathogens. The usefulness of the biomarkers and prognostic scales for predicting each outcome were then examined. Twenty of the 170 eligible patients died. PCT levels were strongly positively correlated with PSI (ρ = 0.56, P < 0.0001), A-DROP (ρ = 0.61, P < 0.0001) and CURB-65 scores (ρ = 0.58, P < 0.0001). The areas under the receiver operating characteristic curves (95% CI) for prediction of survival, for CRP, PCT, A-DROP, CURB-65, and PSI were 0.54 (0.42-0.67), 0.80 (0.70-0.90), 0.88 (0.82-0.94), 0.88 (0.82-0.94), and 0.89 (0.85-0.94), respectively. The 30-day mortality among patients who were PCT-positive (≥0.5 ng/mL) was significantly higher than that among PCT-negative patients (log-rank test, P < 0.001). The semi-quantitative PCT test and the A-DROP scale were found to be useful for predicting mortality in adult patients with CAP. © 2011 The Authors. Respirology © 2011 Asian Pacific Society of Respirology.
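
    The discriminating power reported above (areas under ROC curves for PCT and the severity scores) can be computed from raw scores with standard tools; the sketch below uses synthetic placeholder data, not the study's measurements.

    ```python
    # Sketch: comparing predictors of 30-day mortality by ROC AUC.
    # The score arrays are synthetic placeholders, not the study's data.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    died = rng.integers(0, 2, size=170)                                   # 0 = survived, 1 = died
    pct = died * rng.normal(2.0, 1.0, 170) + rng.normal(0.5, 0.5, 170)    # PCT-like score
    adrop = died * rng.normal(1.5, 0.7, 170) + rng.normal(1.0, 0.8, 170)  # A-DROP-like score

    for name, score in [("PCT", pct), ("A-DROP", adrop)]:
        print(name, "AUC = %.2f" % roc_auc_score(died, score))
    ```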

  6. Quantitative evaluation of simulated functional brain networks in graph theoretical analysis.

    PubMed

    Lee, Won Hee; Bullmore, Ed; Frangou, Sophia

    2017-02-01

    There is increasing interest in the potential of whole-brain computational models to provide mechanistic insights into resting-state brain networks. It is therefore important to determine the degree to which computational models reproduce the topological features of empirical functional brain networks. We used empirical connectivity data derived from diffusion spectrum and resting-state functional magnetic resonance imaging data from healthy individuals. Empirical and simulated functional networks, constrained by structural connectivity, were defined based on 66 brain anatomical regions (nodes). Simulated functional data were generated using the Kuramoto model in which each anatomical region acts as a phase oscillator. Network topology was studied using graph theory in the empirical and simulated data. The difference (relative error) between graph theory measures derived from empirical and simulated data was then estimated. We found that simulated data can be used with confidence to model graph measures of global network organization at different dynamic states and highlight the sensitive dependence of the solutions obtained in simulated data on the specified connection densities. This study provides a method for the quantitative evaluation and external validation of graph theory metrics derived from simulated data that can be used to inform future study designs. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
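
    As a rough sketch of the simulation approach described, the snippet below integrates Kuramoto phase oscillators coupled through a structural connectivity matrix and takes the correlation matrix of the resulting signals as the simulated functional connectivity; the connectivity matrix, coupling strength and frequencies are random placeholders rather than the study's 66-region empirical data.

    ```python
    # Sketch: Kuramoto oscillators on a structural connectivity matrix and the
    # resulting simulated functional connectivity (signal correlations).
    import numpy as np

    rng = np.random.default_rng(1)
    n, dt, steps = 66, 0.01, 5000
    C = rng.random((n, n)); C = (C + C.T) / 2; np.fill_diagonal(C, 0)   # placeholder coupling
    omega = rng.normal(2 * np.pi, 0.5 * np.pi, n)     # natural frequencies (rad/s), assumed
    K = 5.0                                           # global coupling strength, assumed
    theta = rng.uniform(0, 2 * np.pi, n)

    signals = np.empty((steps, n))
    for t in range(steps):
        # d(theta_i)/dt = omega_i + (K/n) * sum_j C_ij * sin(theta_j - theta_i)
        coupling = (C * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * (omega + K / n * coupling)
        signals[t] = np.sin(theta)

    fc_sim = np.corrcoef(signals[1000:].T)            # simulated functional connectivity
    print("mean simulated FC:", fc_sim[np.triu_indices(n, 1)].mean())
    ```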

  7. Harnessing quantitative genetics and genomics for understanding and improving complex traits in crops

    USDA-ARS?s Scientific Manuscript database

    Classical quantitative genetics aids crop improvement by providing the means to estimate heritability, genetic correlations, and predicted responses to various selection schemes. Genomics has the potential to aid quantitative genetics and applied crop improvement programs via large-scale, high-thro...

  8. Theoretical prediction of honeycomb carbon as Li-ion batteries anode material

    NASA Astrophysics Data System (ADS)

    Hu, Junping; Zhang, Xiaohang

    2018-05-01

    First-principles calculations are performed to study the electronic properties and Li storage capability of honeycomb carbon. We identify the structural model that is consistent with the experimental result; the honeycomb carbon and its Li-intercalated configurations are all metallic, which is beneficial for lithium-ion battery electrode materials. The model 1 configuration shows fast Li diffusion and a theoretical Li storage capacity of 319 mAh/g. Moreover, the average intercalation potential for the honeycomb carbon material is calculated to be relatively low. Our results suggest that honeycomb carbon would be a promising new pure-carbon anode material for Li-ion batteries.
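
    The quoted theoretical capacity follows from the standard gravimetric-capacity formula C = xF/(3.6M) mAh/g; since the honeycomb-carbon stoichiometry is not given in the abstract, the sketch below only states the formula and checks it against the familiar graphite (LiC6) value of about 372 mAh/g.

    ```python
    # Sketch: theoretical gravimetric capacity  C = x * F / (3.6 * M)  [mAh/g],
    # where x is the number of Li transferred per formula unit of host with
    # molar mass M (g/mol) and F is the Faraday constant.
    F = 96485.0            # C/mol

    def capacity_mAh_per_g(x_li, molar_mass_host):
        return x_li * F / (3.6 * molar_mass_host)

    # Familiar check: graphite LiC6, 1 Li per 6 C (M = 72.07 g/mol) -> ~372 mAh/g
    print("graphite: %.0f mAh/g" % capacity_mAh_per_g(1, 6 * 12.011))
    ```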

  9. Theoretical Study on Stress Sensitivity of Fractal Porous Media with Irreducible Water

    NASA Astrophysics Data System (ADS)

    Lei, Gang; Dong, Zhenzhen; Li, Weirong; Wen, Qingzhi; Wang, Cai

    The coupled flow-deformation behavior in porous media has drawn tremendous attention in various scientific and engineering fields. However, although the coupled flow-deformation mechanism has been intensively investigated over the last decades, the essential controls on stress sensitivity have not been determined. It is of practical significance to use analytic methods to study the stress sensitivity of porous media. Unfortunately, because of the disordered and extremely complicated microstructures of porous media, theoretical models for stress sensitivity are scarce. The goal of this work is to establish a novel and reasonable quantitative model that determines the essential controls on stress sensitivity. The predictions of the theoretical model, derived from Hertzian contact theory and fractal geometry, agree well with the available experimental data. Compared with previous models, our model takes into account more factors, including the influence of water saturation and the microstructural parameters of the pore space. The proposed models can reveal more of the mechanisms that affect the coupled flow-deformation behavior in fractal porous media. The results show that the irreducible water saturation increases with increasing effective stress, and decreases with increasing rock elastic modulus (or increasing power-law index) at a given effective stress. The effect of stress variation on porosity is smaller than that on permeability. Under a given effective stress, the normalized permeability (or the normalized porosity) becomes smaller with decreasing rock elastic modulus (or decreasing power-law index). A lower capillary pressure corresponds to an increased rock elastic modulus (or an increased power-law index) at a given water saturation.

  10. Understanding Older Adults' Physical Activity Behavior: A Multi-Theoretical Approach

    ERIC Educational Resources Information Center

    Grodesky, Janene M.; Kosma, Maria; Solmon, Melinda A.

    2006-01-01

    Physical inactivity is a health issue with serious consequences for older adults. Investigating physical activity promotion within a multi-theoretical approach may increase the predictive strength of physical activity determinants and facilitate the development and implementation of effective interventions for older adults. This article examines…

  11. Experimental and Theoretical Investigations of a Mechanical Lever System Driven by a DC Motor

    NASA Astrophysics Data System (ADS)

    Nana, B.; Fautso Kuiate, G.; Yamgoué, S. B.

    This paper presents theoretical and experimental results on the investigation of the dynamics of a nonlinear electromechanical system made of a lever arm actuated by a DC motor and controlled through a repulsive magnetic force. We use the method of harmonic balance to derive oscillatory solutions. Theoretical tools such as bifurcation diagrams, Lyapunov exponents, and phase portraits are used to unveil the rich nonlinear behavior of the system, including chaos and hysteresis. The experimental results are in close accordance with the theoretical predictions.

  12. Towards a quantitative description of tunneling conductance of superconductors: Application to LiFeAs

    DOE PAGES

    Kreisel, A.; Nelson, R.; Berlijn, T.; ...

    2016-12-27

    Since the discovery of iron-based superconductors, a number of theories have been put forward to explain the qualitative origin of pairing, but there have been few attempts to make quantitative, material-specific comparisons to experimental results. The spin-fluctuation theory of electronic pairing, based on first-principles electronic structure calculations, makes predictions for the superconducting gap. Within the same framework, the surface wave functions may also be calculated, allowing, e.g., for detailed comparisons between theoretical results and measured scanning tunneling topographs and spectra. We present such a comparison between theory and experiment on the Fe-based superconductor LiFeAs. Our results for the homogeneous surface as well as impurity states are presented as a benchmark test of the theory. For the homogeneous system, we argue that the maxima of topographic image intensity may be located at positions above either the As or Li atoms, depending on tip height and the setpoint current of the measurement. We further report the experimental observation of transitions between As- and Li-registered lattices as functions of both tip height and setpoint bias, in agreement with this prediction. Next, we give a detailed comparison between the simulated scanning tunneling microscopy images of transition-metal defects with experiment. Finally, we discuss possible extensions of the current framework to obtain a theory with true predictive power for scanning tunneling microscopy in Fe-based systems.

  13. Towards a quantitative description of tunneling conductance of superconductors: Application to LiFeAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreisel, A.; Nelson, R.; Berlijn, T.

    Since the discovery of iron-based superconductors, a number of theories have been put forward to explain the qualitative origin of pairing, but there have been few attempts to make quantitative, material-specific comparisons to experimental results. The spin-fluctuation theory of electronic pairing, based on first-principles electronic structure calculations, makes predictions for the superconducting gap. Within the same framework, the surface wave functions may also be calculated, allowing, e.g., for detailed comparisons between theoretical results and measured scanning tunneling topographs and spectra. We present such a comparison between theory and experiment on the Fe-based superconductor LiFeAs. Our results for the homogeneous surface as well as impurity states are presented as a benchmark test of the theory. For the homogeneous system, we argue that the maxima of topographic image intensity may be located at positions above either the As or Li atoms, depending on tip height and the setpoint current of the measurement. We further report the experimental observation of transitions between As- and Li-registered lattices as functions of both tip height and setpoint bias, in agreement with this prediction. Next, we give a detailed comparison between the simulated scanning tunneling microscopy images of transition-metal defects with experiment. Finally, we discuss possible extensions of the current framework to obtain a theory with true predictive power for scanning tunneling microscopy in Fe-based systems.

  14. Quantitative analysis of night skyglow amplification under cloudy conditions

    NASA Astrophysics Data System (ADS)

    Kocifaj, Miroslav; Solano Lamphar, Héctor Antonio

    2014-10-01

    The radiance produced by artificial light is a major source of nighttime over-illumination. It can, however, be treated experimentally using ground-based and satellite data. These two types of data complement each other and together have a high information content. For instance, the satellite data enable upward light emissions to be normalized, and this in turn allows skyglow levels at the ground to be modelled under cloudy or overcast conditions. Excessive night lighting imposes an unacceptable burden on nature, humans and professional astronomy. For this reason, there is a pressing need to determine the total amount of downwelling diffuse radiation. Undoubtedly, cloudy periods can cause a significant increase in skyglow as a result of amplification owing to diffuse reflection from clouds. While it is recognized that the amplification factor (AF) varies with cloud cover, the effects of different types of clouds, of atmospheric turbidity and of the geometrical relationships between the positions of an individual observer, the cloud layer, and the light source are in general poorly known. In this paper the AF is quantitatively analysed considering different aerosol optical depths (AODs), urban layout sizes and cloud types with specific albedos and altitudes. The computational results show that the AF peaks near the edges of a city rather than at its centre. In addition, the AF appears to be a decreasing function of AOD, which is particularly important when modelling the skyglow in regions with apparent temporal or seasonal variability of atmospheric turbidity. The findings in this paper will be useful to those designing engineering applications or modelling light pollution, as well as to astronomers and environmental scientists who aim to predict the amplification of skyglow caused by clouds. In addition, the semi-analytical formulae can be used to estimate the AF levels, especially in densely populated metropolitan regions for which detailed computations may be CPU-intensive.

  15. NNAlign: A Web-Based Prediction Method Allowing Non-Expert End-User Discovery of Sequence Motifs in Quantitative Peptide Data

    PubMed Central

    Andreatta, Massimo; Schafer-Nielsen, Claus; Lund, Ole; Buus, Søren; Nielsen, Morten

    2011-01-01

    Recent advances in high-throughput technologies have made it possible to generate both gene and protein sequence data at an unprecedented rate and scale thereby enabling entirely new “omics”-based approaches towards the analysis of complex biological processes. However, the amount and complexity of data that even a single experiment can produce seriously challenges researchers with limited bioinformatics expertise, who need to handle, analyze and interpret the data before it can be understood in a biological context. Thus, there is an unmet need for tools allowing non-bioinformatics users to interpret large data sets. We have recently developed a method, NNAlign, which is generally applicable to any biological problem where quantitative peptide data is available. This method efficiently identifies underlying sequence patterns by simultaneously aligning peptide sequences and identifying motifs associated with quantitative readouts. Here, we provide a web-based implementation of NNAlign allowing non-expert end-users to submit their data (optionally adjusting method parameters), and in return receive a trained method (including a visual representation of the identified motif) that subsequently can be used as prediction method and applied to unknown proteins/peptides. We have successfully applied this method to several different data sets including peptide microarray-derived sets containing more than 100,000 data points. NNAlign is available online at http://www.cbs.dtu.dk/services/NNAlign. PMID:22073191

  16. NNAlign: a web-based prediction method allowing non-expert end-user discovery of sequence motifs in quantitative peptide data.

    PubMed

    Andreatta, Massimo; Schafer-Nielsen, Claus; Lund, Ole; Buus, Søren; Nielsen, Morten

    2011-01-01

    Recent advances in high-throughput technologies have made it possible to generate both gene and protein sequence data at an unprecedented rate and scale thereby enabling entirely new "omics"-based approaches towards the analysis of complex biological processes. However, the amount and complexity of data that even a single experiment can produce seriously challenges researchers with limited bioinformatics expertise, who need to handle, analyze and interpret the data before it can be understood in a biological context. Thus, there is an unmet need for tools allowing non-bioinformatics users to interpret large data sets. We have recently developed a method, NNAlign, which is generally applicable to any biological problem where quantitative peptide data is available. This method efficiently identifies underlying sequence patterns by simultaneously aligning peptide sequences and identifying motifs associated with quantitative readouts. Here, we provide a web-based implementation of NNAlign allowing non-expert end-users to submit their data (optionally adjusting method parameters), and in return receive a trained method (including a visual representation of the identified motif) that subsequently can be used as prediction method and applied to unknown proteins/peptides. We have successfully applied this method to several different data sets including peptide microarray-derived sets containing more than 100,000 data points. NNAlign is available online at http://www.cbs.dtu.dk/services/NNAlign.

  17. Predictability and Prediction for an Experimental Cultural Market

    NASA Astrophysics Data System (ADS)

    Colbaugh, Richard; Glass, Kristin; Ormerod, Paul

    Individuals are often influenced by the behavior of others, for instance because they wish to obtain the benefits of coordinated actions or infer otherwise inaccessible information. In such situations this social influence decreases the ex ante predictability of the ensuing social dynamics. We claim that, interestingly, these same social forces can increase the extent to which the outcome of a social process can be predicted very early in the process. This paper explores this claim through a theoretical and empirical analysis of the experimental music market described and analyzed in [1]. We propose a very simple model for this music market, assess the predictability of market outcomes through formal analysis of the model, and use insights derived through this analysis to develop algorithms for predicting market share winners, and their ultimate market shares, in the very early stages of the market. The utility of these predictive algorithms is illustrated through analysis of the experimental music market data sets [2].

  18. Quantitative computed tomography-based predictions of vertebral strength in anterior bending.

    PubMed

    Buckley, Jenni M; Cheng, Liu; Loo, Kenneth; Slyfield, Craig; Xu, Zheng

    2007-04-20

    This study examined the ability of QCT-based structural assessment techniques to predict vertebral strength in anterior bending. The purpose of this study was to compare the abilities of QCT-based bone mineral density (BMD), mechanics of solids models (MOS), e.g., bending rigidity, and finite element analyses (FE) to predict the strength of isolated vertebral bodies under anterior bending boundary conditions. Although the relative performance of QCT-based structural measures is well established for uniform compression, the ability of these techniques to predict vertebral strength under nonuniform loading conditions has not yet been established. Thirty human thoracic vertebrae from 30 donors (T9-T10, 20 female, 10 male; 87 +/- 5 years of age) were QCT scanned and destructively tested in anterior bending using an industrial robot arm. The QCT scans were processed to generate specimen-specific FE models as well as trabecular bone mineral density (tBMD), integral bone mineral density (iBMD), and MOS measures, such as axial and bending rigidities. Vertebral strength in anterior bending was poorly to moderately predicted by QCT-based BMD and MOS measures (R2 = 0.14-0.22). QCT-based FE models were better strength predictors (R2 = 0.34-0.40); however, their predictive performance was not statistically different from MOS bending rigidity (P > 0.05). Our results suggest that the poor clinical performance of noninvasive structural measures may be due to their inability to predict vertebral strength under bending loads. While their performance was not statistically better than MOS bending rigidities, QCT-based FE models were moderate predictors of both compressive and bending loads at failure, suggesting that this technique has the potential for strength prediction under nonuniform loads. The current FE modeling strategy is insufficient, however, and significant modifications must be made to better mimic whole bone elastic and inelastic material behavior.

  19. Nonesterified fatty acid determination for functional lipidomics: comprehensive ultrahigh performance liquid chromatography-tandem mass spectrometry quantitation, qualification, and parameter prediction.

    PubMed

    Hellmuth, Christian; Weber, Martina; Koletzko, Berthold; Peissner, Wolfgang

    2012-02-07

    Despite their central importance for lipid metabolism, straightforward quantitative methods for determination of nonesterified fatty acid (NEFA) species are still missing. The protocol presented here provides unbiased quantitation of plasma NEFA species by liquid chromatography-tandem mass spectrometry (LC-MS/MS). Simple deproteination of plasma in organic solvent solution yields high accuracy, including both the unbound and initially protein-bound fractions, while avoiding interferences from hydrolysis of esterified fatty acids from other lipid classes. Sample preparation is fast and nonexpensive, hence well suited for automation and high-throughput applications. Separation of isotopologic NEFA is achieved using ultrahigh-performance liquid chromatography (UPLC) coupled to triple quadrupole LC-MS/MS detection. In combination with automated liquid handling, total assay time per sample is less than 15 min. The analytical spectrum extends beyond readily available NEFA standard compounds by a regression model predicting all the relevant analytical parameters (retention time, ion path settings, and response factor) of NEFA species based on chain length and number of double bonds. Detection of 50 NEFA species and accurate quantification of 36 NEFA species in human plasma is described, the highest numbers ever reported for a LC-MS application. Accuracy and precision are within widely accepted limits. The use of qualifier ions supports unequivocal analyte verification. © 2012 American Chemical Society
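
    As a minimal illustration of the kind of regression model described for predicting analytical parameters from chain length and number of double bonds, the sketch below fits retention time with a linear model; the training values and the predicted example are placeholders, not the published UPLC calibration.

    ```python
    # Sketch: predicting NEFA retention time from carbon number and number of
    # double bonds with a linear model.  Training values are illustrative
    # placeholders, not the published calibration data.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # columns: carbon number, double bonds;  target: retention time (min)
    X = np.array([[14, 0], [16, 0], [16, 1], [18, 0], [18, 1], [18, 2], [20, 4], [22, 6]])
    rt = np.array([5.1, 6.3, 5.8, 7.5, 7.0, 6.5, 5.9, 5.5])      # assumed values

    model = LinearRegression().fit(X, rt)
    print("coefficients (per C, per double bond):", model.coef_)
    print("predicted RT for a 20:5 fatty acid: %.2f min" % model.predict([[20, 5]])[0])
    ```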

  20. Computational Study of Chemical Reactivity Using Information-Theoretic Quantities from Density Functional Reactivity Theory for Electrophilic Aromatic Substitution Reactions.

    PubMed

    Wu, Wenjie; Wu, Zemin; Rong, Chunying; Lu, Tian; Huang, Ying; Liu, Shubin

    2015-07-23

    The electrophilic aromatic substitution for nitration, halogenation, sulfonation, and acylation is a vastly important category of chemical transformation. Its reactivity and regioselectivity are predominantly determined by the nucleophilicity of carbon atoms on the aromatic ring, which in turn is immensely influenced by the group that is attached to the aromatic ring a priori. In this work, taking advantage of recent developments in quantifying nucleophilicity (electrophilicity) with descriptors from the information-theoretic approach in density functional reactivity theory, we examine the reactivity properties of this reaction system from three perspectives. These include scaling patterns of information-theoretic quantities such as Shannon entropy, Fisher information, Ghosh-Berkowitz-Parr entropy and information gain at both molecular and atomic levels, quantitative predictions of the barrier height with both Hirshfeld charge and information gain, and energetic decomposition analyses of the barrier height for the reactions. To that end, we focused in this work on the identity reaction of the monosubstituted-benzene molecule reacting with hydrogen fluoride using boron trifluoride as the catalyst in the gas phase. We also considered 19 substituting groups, 9 of which are ortho/para directing and the other 9 meta directing, besides the case of R = -H. Similar scaling patterns for these information-theoretic quantities found for stable species elsewhere were disclosed for these reaction systems. We also unveiled novel scaling patterns for information gain at the atomic level. The barrier height of the reactions can reliably be predicted by using both the Hirshfeld charge and information gain at the regioselective carbon atom. The ensuing energy decomposition analysis yields an unambiguous picture about the origin of the barrier height, where we showed that it is the electrostatic interaction that plays the dominant role, while the roles played by exchange-correlation and
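
    Of the information-theoretic quantities listed, the Shannon entropy of the electron density is the simplest to state, S = -∫ ρ(r) ln ρ(r) d³r. The sketch below evaluates it numerically on a radial grid for a hydrogen-like 1s density as a stand-in for a DFT density; the analytic value for this case is 3 + ln π ≈ 4.14 a.u.

    ```python
    # Sketch: Shannon entropy of an electron density on a radial grid,
    #     S = -∫ rho(r) ln rho(r) d^3 r,
    # evaluated for the hydrogen 1s shape function rho = exp(-2 r)/pi, used
    # here only as a stand-in for a DFT density.
    import numpy as np

    r = np.linspace(1e-6, 30.0, 20001)                  # radial grid, atomic units
    rho = np.exp(-2.0 * r) / np.pi
    integrand = -rho * np.log(rho) * 4.0 * np.pi * r**2 # spherical volume element
    S = np.trapz(integrand, r)
    print("Shannon entropy S = %.4f a.u. (analytic ~4.1447)" % S)
    ```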

  1. Theoretical studies on lattice-oriented growth of single-walled carbon nanotubes on sapphire

    NASA Astrophysics Data System (ADS)

    Li, Zhengwei; Meng, Xianhong; Xiao, Jianliang

    2017-09-01

    Due to their excellent mechanical and electrical properties, single-walled carbon nanotubes (SWNTs) can find broad applications in many areas, such as field-effect transistors, logic circuits, sensors and flexible electronics. High-density, horizontally aligned arrays of SWNTs are essential for high performance electronics. Many experimental studies have demonstrated that chemical vapor deposition growth of nanotubes on crystalline substrates such as sapphire offers a promising route to achieve such dense, perfectly aligned arrays. In this work, a theoretical study is performed to quantitatively understand the van der Waals interactions between SWNTs and sapphire substrates. The energetically preferred alignment directions of SWNTs on A-, R- and M-planes and the random alignment on the C-plane predicted by this study are all in good agreement with experiments. It is also shown that smaller SWNTs have better alignment than larger SWNTs due to their stronger interaction with sapphire substrate. The strong vdW interactions along preferred alignment directions can be intuitively explained by the nanoscale ‘grooves’ formed by atomic lattice structures on the surface of sapphire. This study provides important insights to the controlled growth of nanotubes and potentially other nanomaterials.

  2. Structural parameterization and functional prediction of antigenic polypeptome sequences with biological activity through quantitative sequence-activity models (QSAM) by molecular electronegativity edge-distance vector (VMED).

    PubMed

    Li, ZhiLiang; Wu, ShiRong; Chen, ZeCong; Ye, Nancy; Yang, ShengXi; Liao, ChunYang; Zhang, MengJun; Yang, Li; Mei, Hu; Yang, Yan; Zhao, Na; Zhou, Yuan; Zhou, Ping; Xiong, Qing; Xu, Hong; Liu, ShuShen; Ling, ZiHua; Chen, Gang; Li, GenRong

    2007-10-01

    Only from the primary structures of peptides, a new set of descriptors called the molecular electronegativity edge-distance vector (VMED) was proposed and applied to describing and characterizing the molecular structures of oligopeptides and polypeptides, based on the electronegativity of each atom or electronic charge index (ECI) of atomic clusters and the bonding distance between atom-pairs. Here, the molecular structures of antigenic polypeptides were well expressed in order to propose the automated technique for the computerized identification of helper T lymphocyte (Th) epitopes. Furthermore, a modified MED vector was proposed from the primary structures of polypeptides, based on the ECI and the relative bonding distance of the fundamental skeleton groups. The side-chains of each amino acid were here treated as a pseudo-atom. The developed VMED was easy to calculate and able to work. Some quantitative model was established for 28 immunogenic or antigenic polypeptides (AGPP) with 14 (1-14) A(d) and 14 other restricted activities assigned as "1"(+) and "0"(-), respectively. The latter comprised 6 A(b)(15-20), 3 A(k)(21-23), 2 E(k)(24-26), 2 H-2(k)(27 and 28) restricted sequences. Good results were obtained with 90% correct classification (only 2 wrong ones for 20 training samples) and 100% correct prediction (none wrong for 8 testing samples); while contrastively 100% correct classification (none wrong for 20 training samples) and 88% correct classification (1 wrong for 8 testing samples). Both stochastic samplings and cross validations were performed to demonstrate good performance. The described method may also be suitable for estimation and prediction of classes I and II for major histocompatibility antigen (MHC) epitope of human. It will be useful in immune identification and recognition of proteins and genes and in the design and development of subunit vaccines. Several quantitative structure activity relationship (QSAR) models were developed for various

  3. Three-dimensional quantitative structure-property relationship (3D-QSPR) models for prediction of thermodynamic properties of polychlorinated biphenyls (PCBs): enthalpy of vaporization.

    PubMed

    Puri, Swati; Chickos, James S; Welsh, William J

    2002-01-01

    Three-dimensional Quantitative Structure-Property Relationship (QSPR) models have been derived using Comparative Molecular Field Analysis (CoMFA) to correlate the vaporization enthalpies of a representative set of polychlorinated biphenyls (PCBs) at 298.15 K with their CoMFA-calculated physicochemical properties. Various alignment schemes, such as inertial, as is, and atom fit, were employed in this study. The CoMFA models were also developed using different partial charge formalisms, namely, electrostatic potential (ESP) charges and Gasteiger-Marsili (GM) charges. The most predictive model for vaporization enthalpy (Delta(vap)H(m)(298.15 K)), with atom fit alignment and Gasteiger-Marsili charges, yielded r2 values 0.852 (cross-validated) and 0.996 (conventional). The vaporization enthalpies of PCBs increased with the number of chlorine atoms and were found to be larger for the meta- and para-substituted isomers. This model was used to predict Delta(vap)H(m)(298.15 K) of the entire set of 209 PCB congeners.

  4. Multinational Corporations, Democracy and Child Mortality: A Quantitative, Cross-National Analysis of Developing Countries

    ERIC Educational Resources Information Center

    Shandra, John M.; Nobles, Jenna E.; London, Bruce; Williamson, John B.

    2005-01-01

    This study presents quantitative, sociological models designed to account for cross-national variation in child mortality. We consider variables linked to five different theoretical perspectives that include the economic modernization, social modernization, political modernization, ecological-evolutionary, and dependency perspectives. The study is…

  5. Quantitative characterisation of sedimentary grains

    NASA Astrophysics Data System (ADS)

    Tunwal, Mohit; Mulchrone, Kieran F.; Meere, Patrick A.

    2016-04-01

    Analysis of sedimentary texture helps in determining the formation, transportation and deposition processes of sedimentary rocks. Grain size analysis is traditionally quantitative, whereas grain shape analysis is largely qualitative. A semi-automated approach to quantitatively analyse the shape and size of sand-sized sedimentary grains is presented. Grain boundaries are manually traced from thin section microphotographs in the case of lithified samples and are automatically identified in the case of loose sediments. Shape and size parameters can then be estimated using a software package written on the Mathematica platform. While automated methodology already exists for loose sediment analysis, the available techniques for lithified samples are limited to cases of high-definition thin section microphotographs showing clear contrast between framework grains and matrix. Along with grain size, shape parameters such as roundness, angularity, circularity, irregularity and fractal dimension are measured. A new grain shape parameter based on Fourier descriptors has also been developed. To test this new approach, theoretical examples were analysed and produced high-quality results supporting the accuracy of the algorithm. Furthermore, sandstone samples from known aeolian and fluvial environments from the Dingle Basin, County Kerry, Ireland, were collected and analysed. Modern loose sediments from glacial till from County Cork, Ireland, and aeolian sediments from Rajasthan, India, have also been collected and analysed. A graphical summary of the data is presented and allows for quantitative distinction between samples extracted from different sedimentary environments.
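
    As a rough illustration of two of the shape measures mentioned (an equivalent-diameter size and circularity), the sketch below computes them from a traced grain boundary given as polygon vertices; the synthetic outline stands in for a digitised grain, and the snippet is an independent Python illustration, not the authors' Mathematica package.

    ```python
    # Sketch: grain size and circularity from a traced boundary polygon.
    # Circularity is taken as 4*pi*A / P**2 (1 for a circle, <1 otherwise).
    import numpy as np

    t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    radius = 1.0 + 0.15 * np.sin(5 * t)          # slightly irregular outline (assumed)
    x, y = radius * np.cos(t), radius * np.sin(t)

    # shoelace formula for area, summed segment lengths for perimeter
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perimeter = np.hypot(np.diff(np.r_[x, x[0]]), np.diff(np.r_[y, y[0]])).sum()

    equiv_diameter = 2.0 * np.sqrt(area / np.pi)   # diameter of the equal-area circle
    circularity = 4.0 * np.pi * area / perimeter**2
    print("equivalent diameter = %.3f, circularity = %.3f" % (equiv_diameter, circularity))
    ```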

  6. Quantitative Prediction of Systemic Toxicity Points of Departure (OpenTox USA 2017)

    EPA Science Inventory

    Human health risk assessment associated with environmental chemical exposure is limited by the tens of thousands of chemicals with little or no experimental in vivo toxicity data. Data gap filling techniques, such as quantitative models based on chemical structure information, are c...

  7. The Gist of Delay of Gratification: Understanding and Predicting Problem Behaviors

    PubMed Central

    REYNA, VALERIE F.; WILHELMS, EVAN A.

    2017-01-01

    Delay of gratification captures elements of temptation and self-denial that characterize real-life problems with money and other problem behaviors such as unhealthy risk taking. According to fuzzy-trace theory, decision makers mentally represent social values such as delay of gratification in a coarse but meaningful form of memory called “gist.” Applying this theory, we developed a gist measure of delay of gratification that does not involve quantitative trade-offs (as delay discounting does) and hypothesize that this construct explains unique variance beyond sensation seeking and inhibition in accounting for problem behaviors. Across four studies, we examine this Delay-of-gratification Gist Scale by using principal components analyses and evaluating convergent and divergent validity with other potentially related scales such as Future Orientation, Propensity to Plan, Time Perspectives Inventory, Spendthrift-Tightwad, Sensation Seeking, Cognitive Reflection, Barratt Impulsiveness, and the Monetary Choice Questionnaire (delay discounting). The new 12-item measure captured a single dimension of delay of gratification, correlated as predicted with other scales, but accounted for unique variance in predicting such outcomes as overdrawing bank accounts, substance abuse, and overall subjective well-being. Results support a theoretical distinction between reward-related approach motivation, including sensation seeking, and inhibitory faculties, including cognitive reflection. However, individuals’ agreement with the qualitative gist of delay of gratification, as expressed in many cultural traditions, could not be reduced to such dualist distinctions nor to quantitative conceptions of delay discounting, shedding light on mechanisms of self-control and risk taking. PMID:28808356

  8. Theoretical prediction of low-density hexagonal ZnO hollow structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuoc, Vu Ngoc, E-mail: tuoc.vungoc@hust.edu.vn; Huan, Tran Doan; Thao, Nguyen Thi

    2016-10-14

    Along with wurtzite and zinc blende, zinc oxide (ZnO) has been found in a large number of polymorphs with substantially different properties and, hence, applications. Therefore, predicting and synthesizing new classes of ZnO polymorphs are of great significance and have been gaining considerable interest. Herein, we perform a density functional theory based tight-binding study, predicting several new series of ZnO hollow structures using the bottom-up approach. The geometry of the building blocks allows for obtaining a variety of hexagonal, low-density nanoporous, and flexible ZnO hollow structures. Their stability is discussed by means of the free energy computed within the lattice-dynamics approach. Our calculations also indicate that all the reported hollow structures are wide band gap semiconductors in the same fashion as bulk ZnO. The electronic band structures of the ZnO hollow structures are finally examined in detail.

  9. Non-interferometric quantitative phase imaging of yeast cells

    NASA Astrophysics Data System (ADS)

    Poola, Praveen K.; Pandiyan, Vimal Prabhu; John, Renu

    2015-12-01

    Real-time imaging of live cells is quite difficult without the addition of external contrast agents. Various methods for quantitative phase imaging of living cells have been proposed, such as digital holographic microscopy and diffraction phase microscopy. In this paper, we report theoretical and experimental results of quantitative phase imaging of live yeast cells with nanometric precision using the transport of intensity equation (TIE). We demonstrate nanometric depth sensitivity in imaging live yeast cells using this technique. This technique, being noninterferometric, does not need any coherent light source, and images can be captured through a regular bright-field microscope. This real-time imaging technique would deliver the depth or 3-D volume information of cells and is highly promising for real-time digital pathology applications, screening of pathogens and staging of diseases such as malaria, as it does not need any preprocessing of samples.
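
    In the nearly uniform-intensity limit the transport of intensity equation, -k ∂I/∂z = ∇·(I ∇φ), reduces to a Poisson equation for the phase, which can be inverted with FFTs. The sketch below implements that simplified inversion from two defocused images; it is a generic approximation with an assumed regularization, not the authors' reconstruction code.

    ```python
    # Sketch: TIE phase retrieval in the uniform-intensity approximation.
    #   -k dI/dz = div(I grad(phi))  ≈  I0 * laplacian(phi)
    # The Poisson equation is inverted with FFTs; inputs are two defocused
    # intensity images separated by 2*dz along the optical axis.
    import numpy as np

    def tie_phase(I_minus, I_plus, dz, wavelength, pixel, eps=1e-9):
        k = 2 * np.pi / wavelength
        I0 = 0.5 * (I_minus + I_plus).mean()
        dIdz = (I_plus - I_minus) / (2.0 * dz)
        ny, nx = dIdz.shape
        qx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel)
        qy = 2 * np.pi * np.fft.fftfreq(ny, d=pixel)
        q2 = qx[None, :]**2 + qy[:, None]**2
        rhs = -(k / I0) * dIdz                       # laplacian(phi) = rhs
        phi_hat = -np.fft.fft2(rhs) / (q2 + eps)     # invert the Laplacian (regularized)
        phi_hat[0, 0] = 0.0                          # mean phase is undetermined
        return np.real(np.fft.ifft2(phi_hat))

    # phi = tie_phase(I_under, I_over, dz=1e-6, wavelength=550e-9, pixel=0.1e-6)
    ```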

  10. Experimental and theoretical characterization of an AC electroosmotic micromixer.

    PubMed

    Sasaki, Naoki; Kitamori, Takehiko; Kim, Haeng-Boo

    2010-01-01

    We have reported on a novel microfluidic mixer based on AC electroosmosis. To elucidate the mixer characteristics, we performed detailed measurements of mixing under various experimental conditions including applied voltage, frequency and solution viscosity. The results are discussed through comparison with results obtained from a theoretical model of AC electroosmosis. As predicted from the theoretical model, we found that a larger voltage (approximately 20 V(p-p)) led to more rapid mixing, while the dependence of the mixing on frequency (1-5 kHz) was insignificant under the present experimental conditions. Furthermore, the dependence of the mixing on viscosity was successfully explained by the theoretical model, and the applicability of the mixer in viscous solution (2.83 mPa s) was confirmed experimentally. By using these results, it is possible to estimate the mixing performance under given conditions. These estimations can provide guidelines for using the mixer in microfluidic chemical analysis.

  11. Descriptive and predictive validity of somatic attributions in patients with somatoform disorders: a systematic review of quantitative research.

    PubMed

    Douzenis, Athanassios; Seretis, Dionysis

    2013-09-01

    Research on hypochondriasis and other somatoform disorders (SFD) has provided evidence that patients with SFD tend to attribute their symptoms to organic dysfunctions or disease. However, recent studies appear to discredit this. There is no systematic evidence on whether patients with SFD predominantly rely on somatic attributions, despite calls to include somatic attributions as a positive criterion of somatic symptom disorder (SSD) in the upcoming Diagnostic and Statistical Manual of Mental Disorders (DSM-5). This study is a systematic review of quantitative studies which assess the descriptive and predictive validity of somatic attribution in SFD. The literature search was restricted to studies with patients who met the DSM-IV criteria for SFD. Somatic attribution style in SFD has acceptable descriptive but insufficient predictive validity. This confirms that the overlap between somatic and psychological attributions is often substantial. Attribution style can discriminate between SFD patients with and without comorbidity. A somatic attribution style does not qualify as a positive criterion in SSD. However, there is an urgent need for further research on causal illness perceptions in the full spectrum of medically unexplained symptoms in order to confirm this result. Given its high prevalence, research on psychological attribution style is warranted. Re-attribution does not provide a framework sophisticated enough to address the needs of patients in primary care. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Association of pain ratings with the prediction of early physical recovery after general and orthopaedic surgery-A quantitative study with repeated measures.

    PubMed

    Eriksson, Kerstin; Wikström, Lotta; Fridlund, Bengt; Årestedt, Kristofer; Broström, Anders

    2017-11-01

    To compare different levels of self-rated pain and determine if they predict anticipated early physical recovery in patients undergoing general and orthopaedic surgery. Previous research has indicated that average self-rated pain reflects patients' ability to recover the same day. However, there is a knowledge gap about the feasibility of using average pain ratings to predict patients' physical recovery for the next day. Descriptive, quantitative repeated measures. General and orthopaedic inpatients (n = 479) completed a questionnaire (October 2012-January 2015) about pain and recovery. Average pain intensity at rest and during activity was based on the Numeric Rating Scale and divided into three levels (0-3, 4-6, 7-10). Three out of five dimensions from the tool "Postoperative Recovery Profile" were used. Because few suffered severe pain, general and orthopaedic patients were analysed together. Binary logistic regression analysis showed that average pain intensity on postoperative day 1 significantly predicted the impact on recovery on day 2, except for nausea, gastrointestinal function and bladder function when pain was rated at rest, and for nausea, appetite changes and bladder function when pain was rated during activity. High pain ratings (NRS 7-10) proved to be a better predictor of day-2 recovery than moderate ratings (NRS 4-6), as they significantly predicted more recovery items. Pain intensity reflected general and orthopaedic patients' physical recovery on postoperative day 1 and predicted recovery for day 2. By monitoring patients' pain and its impact on recovery, patients' need for support becomes visible, which is valuable during hospital stays. © 2017 John Wiley & Sons Ltd.
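
    As an illustration of the kind of analysis described above, the sketch below fits a binary logistic regression relating day-1 pain category to a dichotomized day-2 recovery item; the data, category coding and risk levels are invented for the example and are not taken from the study.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical illustration: predict "impaired recovery on day 2" (1/0) from
    # day-1 average pain category (0 = NRS 0-3, 1 = NRS 4-6, 2 = NRS 7-10).
    rng = np.random.default_rng(0)
    pain_cat = rng.integers(0, 3, size=300)
    p_impaired = np.array([0.15, 0.35, 0.60])[pain_cat]   # assumed risk per category
    impaired = rng.binomial(1, p_impaired)

    X = np.eye(3)[pain_cat][:, 1:]   # dummy-code moderate and severe pain vs. mild
    model = LogisticRegression().fit(X, impaired)
    print("odds ratios vs. mild pain:", np.exp(model.coef_).round(2))
    ```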

  13. An exploratory study of death anxiety and trainees' choice of theoretical orientation.

    PubMed

    Belviso, Francesco; Gaubatz, Michael D

    2013-01-01

    This study investigated the association between therapist trainees' death anxiety and their preference for "objective" (i.e., quantitative and rational) over "subjective" (i.e., experiential and symbolic) theoretical orientations. In this correlational investigation, 303 clinical psychology and counseling trainees at a Midwestern school of professional psychology completed instruments assessing their fear of personal death and their endorsement of superordinate dimensions of psychotherapy orientations. As hypothesized, trainees who reported higher levels of death anxiety displayed a stronger preference for objective over subjective orientations, a relationship that was found in post hoc analyses to be particularly salient for male trainees. These findings suggest that trainees' death anxiety, and their attempts to control it, could influence their choice of a theoretical orientation. Potential implications for training institutions are discussed.

  15. Cerebellar peduncle injury predicts motor impairments in preterm infants: A quantitative tractography study at term-equivalent age.

    PubMed

    Hasegawa, Tatsuji; Yamada, Kei; Tozawa, Takenori; Chiyonobu, Tomohiro; Tokuda, Sachiko; Nishimura, Akira; Hosoi, Hajime; Morimoto, Masafumi

    2018-05-15

    Cerebellar injury is well established as an important finding in preterm infants with cerebral palsy (CP). In this study, we investigated associations between injury to the cerebellar peduncles and motor impairments in preterm infants using quantitative tractography at term-equivalent age, which represents an early phase before the onset of motor impairments. We studied 64 preterm infants who were born at <33 weeks gestational age. These infants were divided into three groups: CP, Non-CP (defined as infants with periventricular leukomalacia but having normal motor function), and a Normal group. Diffusion tensor imaging was performed at term-equivalent age and motor function was assessed no earlier than a corrected age of 2 years. Using tractography, we measured fractional anisotropy (FA) and apparent diffusion coefficient (ADC) of the superior cerebellar peduncles (SCP) and middle cerebellar peduncles (MCP), as well as the motor/sensory tracts. The infants in the CP group had significantly lower FA of the SCP and sensory tract than those in the other groups. There was no significant difference in FA and ADC of the motor tract among the three groups. Severity of CP had a significant correlation with FA of the MCP, but not with the FA of other white matter tracts. Our results suggested that the infants with CP had injuries of the ascending tracts (e.g. the SCP and sensory tract), and that additional MCP injury might increase the severity of CP. Quantitative tractography assessment at term-equivalent age may be useful for screening preterm infants for prediction of future motor impairments. Copyright © 2018 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  16. Predictive Model of Systemic Toxicity (SOT)

    EPA Science Inventory

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  17. Assessment of two theoretical methods to estimate potentiometrictitration curves of peptides: comparison with experiment

    PubMed Central

    Makowska, Joanna; Bagiñska, Katarzyna; Makowski, Mariusz; Jagielska, Anna; Liwo, Adam; Kasprzykowski, Franciszek; Chmurzyñski, Lech; Scheraga, Harold A.

    2008-01-01

    dissociation constants. Nevertheless, quantitative agreement between theoretically predicted and experimental titration curves is not achieved in all three solvents even with the MD-based approach which is manifested by a smaller pH range of the calculated titration curves with respect to the experimental curves. The poorer agreement obtained for water than for the non-aqueous solvents suggests a significant role of specific solvation in water, which cannot be accounted for by the mean-field solvation models. PMID:16509748

  18. A Theoretical Model for Predicting Residual Stress Generation in Fabrication Process of Double-Ceramic-Layer Thermal Barrier Coating System

    PubMed Central

    Song, Yan; Wu, Weijie; Xie, Feng; Liu, Yilun; Wang, Tiejun

    2017-01-01

    Residual stress arising in the fabrication process of a Double-Ceramic-Layer Thermal Barrier Coating system (DCL-TBCs) has a significant effect on its quality and reliability. In this work, based on the practical fabrication process of DCL-TBCs and on force and moment equilibrium, a theoretical model was first proposed to predict residual stress generation during fabrication, incorporating the temperature-dependent material properties of DCL-TBCs. A Finite Element Method (FEM) analysis was then carried out to verify the theoretical model. Some important geometric parameters of DCL-TBCs, such as the thickness ratio of the yttria-stabilized zirconia (YSZ, ZrO2-8%Y2O3) layer to the lanthanum zirconate (LZ, La2Zr2O7) layer, are adjustable over a wide range in the fabrication process and have a remarkable effect on performance; therefore, the effect of this thickness ratio on residual stress generation in the fabrication of DCL-TBCs was systematically studied. In addition, the effect of thermal spray treatments, such as pre-heating, on residual stress generation was also studied. It is found that the final residual stress mainly arises from the cooling-down stage of DCL-TBC fabrication. Increasing the pre-heating temperature clearly decreases the magnitude of the residual stresses in the LZ layer, the YSZ layer and the substrate. With an increasing thickness ratio of the YSZ layer to the LZ layer, the magnitudes of the residual stresses in the LZ and YSZ layers increase while the residual stress in the substrate decreases. PMID:28103275
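
    The abstract describes a force-and-moment equilibrium model that is not reproduced here; as a much simpler point of reference, the sketch below evaluates the textbook biaxial thermal-mismatch estimate of cooling stress in a thin coating. All property values are placeholders, not data from the paper.

    ```python
    def thermal_mismatch_stress(E, nu, alpha_layer, alpha_sub, dT):
        """Textbook biaxial thermal-mismatch stress in a thin coating on a thick
        substrate during cooling: sigma ~ E/(1-nu) * (alpha_layer - alpha_sub) * dT.
        This is only a first-order estimate, not the paper's force/moment model."""
        return E / (1.0 - nu) * (alpha_layer - alpha_sub) * dT

    # Placeholder property values (illustrative only, not from the paper):
    sigma_LZ = thermal_mismatch_stress(E=175e9, nu=0.28, alpha_layer=9.1e-6,
                                       alpha_sub=14.0e-6, dT=-1000.0)
    print(f"estimated LZ cooling stress: {sigma_LZ/1e6:.0f} MPa")
    ```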

  19. A systematic review of quantitative burn wound microbiology in the management of burns patients.

    PubMed

    Halstead, Fenella D; Lee, Kwang Chear; Kwei, Johnny; Dretzke, Janine; Oppenheim, Beryl A; Moiemen, Naiem S

    2018-02-01

    The early diagnosis of infection or sepsis in burns is important for patient care. Globally, a large number of burn centres advocate quantitative cultures of wound biopsies for patient management, since there is assumed to be a direct link between the bioburden of a burn wound and the risk of microbial invasion. Given the conflicting study findings in this area, a systematic review was warranted. Bibliographic databases were searched with no language restrictions to August 2015. Study selection, data extraction and risk of bias assessment were performed in duplicate using pre-defined criteria. Substantial heterogeneity precluded quantitative synthesis, and findings were described narratively, sub-grouped by clinical question. Twenty-six laboratory and/or clinical studies were included. Substantial heterogeneity hampered comparisons across studies and interpretation of findings. Limited evidence suggests that (i) more than one quantitative microbiology sample is required to obtain reliable estimates of bacterial load; (ii) biopsies are more sensitive than swabs in diagnosing or predicting sepsis; (iii) high bacterial loads may predict worse clinical outcomes, and (iv) both quantitative and semi-quantitative culture reports need to be interpreted with caution and in the context of other clinical risk factors. The evidence base for the utility and reliability of quantitative microbiology for diagnosing or predicting clinical outcomes in burns patients is limited and often poorly reported. Consequently, future research is warranted. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  20. Theoretical derivation of anodizing current and comparison between fitted curves and measured curves under different conditions.

    PubMed

    Chong, Bin; Yu, Dongliang; Jin, Rong; Wang, Yang; Li, Dongdong; Song, Ye; Gao, Mingqi; Zhu, Xufei

    2015-04-10

    Anodic TiO2 nanotubes have been studied extensively for many years. However, the growth kinetics remains unclear. The systematic study of the current transient under constant anodizing voltage has not been mentioned in the original literature. Here, a derivation and its corresponding theoretical formula are proposed to overcome this challenge. In this paper, the theoretical expressions for the time-dependent ionic current and electronic current are derived to explore the anodizing process of Ti. The anodizing current-time curves under different anodizing voltages and different temperatures are experimentally investigated in the anodization of Ti. Furthermore, the quantitative relationship between the thickness of the barrier layer and anodizing time, and the relationships between the ionic/electronic current and temperature, are proposed in this paper. All of the current-transient plots can be fitted consistently by the proposed theoretical expressions. Additionally, it is the first time that the coefficient A of the exponential relationship (ionic current j_ion = A exp(BE)) has been determined under various temperatures and voltages. The results indicate that as temperature and voltage increase, both the ionic and electronic currents increase. Temperature has a larger effect on the electronic current than on the ionic current. These results help move the study of growth kinetics from a qualitative to a quantitative level.
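
    The exponential high-field relation quoted in the abstract, j_ion = A exp(BE), can be fitted to current-field data by a log-linear least-squares fit, as sketched below; the numbers are invented placeholders, not measurements from the paper.

    ```python
    import numpy as np

    # Illustrative fit of the high-field ionic-conduction law j_ion = A*exp(B*E)
    # to (field, current-density) pairs; the values below are made-up placeholders.
    E = np.array([4.0e8, 5.0e8, 6.0e8, 7.0e8])   # field strength, V/m
    j = np.array([0.8, 2.6, 8.5, 27.0])          # current density, A/m^2

    B, lnA = np.polyfit(E, np.log(j), 1)         # linear fit of ln(j) = ln(A) + B*E
    A = np.exp(lnA)
    print(f"A = {A:.3e} A/m^2, B = {B:.3e} m/V")
    ```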

  1. Theoretical derivation of anodizing current and comparison between fitted curves and measured curves under different conditions

    NASA Astrophysics Data System (ADS)

    Chong, Bin; Yu, Dongliang; Jin, Rong; Wang, Yang; Li, Dongdong; Song, Ye; Gao, Mingqi; Zhu, Xufei

    2015-04-01

    Anodic TiO2 nanotubes have been studied extensively for many years. However, the growth kinetics still remains unclear. The systematic study of the current transient under constant anodizing voltage has not been mentioned in the original literature. Here, a derivation and its corresponding theoretical formula are proposed to overcome this challenge. In this paper, the theoretical expressions for the time dependent ionic current and electronic current are derived to explore the anodizing process of Ti. The anodizing current-time curves under different anodizing voltages and different temperatures are experimentally investigated in the anodization of Ti. Furthermore, the quantitative relationship between the thickness of the barrier layer and anodizing time, and the relationships between the ionic/electronic current and temperatures are proposed in this paper. All of the current-transient plots can be fitted consistently by the proposed theoretical expressions. Additionally, it is the first time that the coefficient A of the exponential relationship (ionic current jion = A exp(BE)) has been determined under various temperatures and voltages. And the results indicate that as temperature and voltage increase, ionic current and electronic current both increase. The temperature has a larger effect on electronic current than ionic current. These results can promote the research of kinetics from a qualitative to quantitative level.

  2. Quantitative predictions of streamflow variability in the Susquehanna River Basin

    NASA Astrophysics Data System (ADS)

    Alexander, R.; Boyer, E. W.; Leonard, L. N.; Duffy, C.; Schwarz, G. E.; Smith, R. A.

    2012-12-01

    Hydrologic researchers and water managers have increasingly sought an improved understanding of the major processes that control fluxes of water and solutes across diverse environmental settings and large spatial scales. Regional analyses of observed streamflow data have led to advances in our knowledge of relations among land use, climate, and streamflow, with methodologies ranging from statistical assessments of multiple monitoring sites to the regionalization of the parameters of catchment-scale mechanistic simulation models. However, gaps remain in our understanding of the best ways to transfer the knowledge of hydrologic response and governing processes among locations, including methods for regionalizing streamflow measurements and model predictions. We developed an approach to predict variations in streamflow using the SPARROW (SPAtially Referenced Regression On Watershed attributes) modeling infrastructure, with mechanistic functions, mass conservation constraints, and statistical estimation of regional and sub-regional parameters. We used the model to predict discharge in the Susquehanna River Basin (SRB) under varying hydrological regimes that are representative of contemporary flow conditions. The resulting basin-scale water balance describes mean monthly flows in stream reaches throughout the entire SRB (represented at a 1:100,000 scale using the National Hydrologic Data network), with water supply and demand components that are inclusive of a range of hydrologic, climatic, and cultural properties (e.g., precipitation, evapotranspiration, soil and groundwater storage, runoff, baseflow, water use). We compare alternative models of varying complexity that reflect differences in the number and types of explanatory variables and functional expressions as well as spatial and temporal variability in the model parameters. Statistical estimation of the models reveals the levels of complexity that can be uniquely identified, subject to the information content

  3. [Theoretical modeling and experimental research on direct compaction characteristics of multi-component pharmaceutical powders based on the Kawakita equation].

    PubMed

    Si, Guo-Ning; Chen, Lan; Li, Bao-Guo

    2014-04-01

    Based on the Kawakita powder compression equation, a general theoretical model for predicting the compression characteristics of multi-component pharmaceutical powders with different mass ratios was developed. Uniaxial flat-face compression tests of lactose, starch and microcrystalline cellulose powders were carried out separately, from which the Kawakita equation parameters of each powder material were obtained. Uniaxial flat-face compression tests of powder mixtures of lactose, starch, microcrystalline cellulose and sodium stearyl fumarate at five mass ratios were then conducted, giving the correlation between mixture density and loading pressure as well as the Kawakita curves. Finally, the theoretical predictions were compared with the experimental results. The analysis showed that the errors in predicting mixture densities were less than 5.0% and the errors in the Kawakita ordinate were within 4.6%, indicating that the theoretical model can be used to predict the direct compaction characteristics of multi-component pharmaceutical powders.
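
    A minimal sketch of the Kawakita relation and one plausible way to combine single-component parameters into a mixture curve is given below; the mass-weighted mixing rule and the parameter values are assumptions for illustration, not the paper's actual model.

    ```python
    def kawakita_C(P, a, b):
        """Kawakita degree of compression C = (V0 - V)/V0 = a*b*P / (1 + b*P)."""
        return a * b * P / (1.0 + b * P)

    def mixture_C(P, fractions, params):
        """Illustrative mass-weighted combination of single-component Kawakita
        curves; the paper's actual multi-component model may differ."""
        return sum(w * kawakita_C(P, a, b) for w, (a, b) in zip(fractions, params))

    # Placeholder parameters (a, b in 1/MPa) for lactose, starch and MCC:
    params = [(0.35, 0.05), (0.55, 0.08), (0.70, 0.10)]
    print(mixture_C(P=100.0, fractions=[0.5, 0.3, 0.2], params=params))
    ```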

  4. Quantitation of cholesterol incorporation into extruded lipid bilayers.

    PubMed

    Ibarguren, Maitane; Alonso, Alicia; Tenchov, Boris G; Goñi, Felix M

    2010-09-01

    Cholesterol incorporation into lipid bilayers, in the form of multilamellar vesicles or extruded large unilamellar vesicles, has been quantitated. To this end, the cholesterol contents of bilayers prepared from phospholipid:cholesterol mixtures containing 33-75 mol% cholesterol have been measured and compared with the original mixture before lipid hydration. There is a great diversity of cases, but under most conditions the actual cholesterol proportion present in the extruded bilayers is much lower than predicted. A quantitative analysis of the vesicles is thus required before any experimental study is undertaken. 2010 Elsevier B.V. All rights reserved.

  5. Growth of wormlike micelles in nonionic surfactant solutions: Quantitative theory vs. experiment.

    PubMed

    Danov, Krassimir D; Kralchevsky, Peter A; Stoyanov, Simeon D; Cook, Joanne L; Stott, Ian P; Pelan, Eddie G

    2018-06-01

    Despite the considerable advances of molecular-thermodynamic theory of micelle growth, agreement between theory and experiment has been achieved only in isolated cases. A general theory that can provide self-consistent quantitative description of the growth of wormlike micelles in mixed surfactant solutions, including the experimentally observed high peaks in viscosity and aggregation number, is still missing. As a step toward the creation of such a theory, here we consider the simplest system: nonionic wormlike surfactant micelles from polyoxyethylene alkyl ethers, CiEj. Our goal is to construct a molecular-thermodynamic model that is in agreement with the available experimental data. For this goal, we systematized data for the micelle mean mass aggregation number, from which the micelle growth parameter was determined at various temperatures. None of the available models can give a quantitative description of these data. We constructed a new model, which is based on theoretical expressions for the interfacial-tension, headgroup-steric and chain-conformation components of micelle free energy, along with appropriate expressions for the parameters of the model, including their temperature and curvature dependencies. Special attention was paid to the surfactant chain-conformation free energy, for which a new more general formula was derived. As a result, relatively simple theoretical expressions are obtained. All parameters that enter these expressions are known, which facilitates the theoretical modeling of micelle growth for various nonionic surfactants in excellent agreement with the experiment. The constructed model can serve as a basis that can be further upgraded to obtain quantitative description of micelle growth in more complicated systems, including binary and ternary mixtures of nonionic, ionic and zwitterionic surfactants, which determines the viscosity and stability of various formulations in personal-care and household detergency. Copyright © 2018

  6. Theoretical prediction of pullout strengths for dental and orthopaedic screws with conical profile and buttress threads.

    PubMed

    Shih, Kao-Shang; Hou, Sheng-Mou; Lin, Shang-Chih

    2017-12-01

    The pullout strength of a screw is an indicator of how securely bone fragments are held in place. Such bone-purchasing ability is sensitive to bone quality, thread design, and the pilot hole, and is often evaluated by experimental and numerical methods. Several mathematical formulae have historically been used to simulate screw withdrawal from synthetic bone. There are great variations in screw specifications. However, extensive investigation of the correlation between experimental and analytical results has not been reported in the literature. Referring to the literature formulae, this study aims to evaluate the differences in the calculated pullout strengths. Pullout tests of the surgical screws are performed, with sawbone used as the testing block. The absolute errors and correlation coefficients of the experimental and analytical results are calculated as the comparison baselines of the formulae. The absolute errors of the dental, traumatic, and spinal groups are 21.7%, 95.5%, and 37.0%, respectively. For the screws with a conical profile and/or tiny threads, the calculated and measured results are not well correlated. The formulae are not accurate indicators of the pullout strengths of the screws where the design parameters are slightly varied. However, the experimental and numerical results are highly correlated for the cylindrical screws. The pullout strength of a conical screw is higher than that of its counterpart, but all formulae consistently predict the opposite results. In general, the bony purchase of the buttress threads is more secure than that of the symmetric thread. An absolute error of up to 51.4% indicates that the theoretical results cannot predict the actual value of the pullout strength. Only thread diameter, pitch, and depth are considered in the investigated formulae. The thread profile and shape should be formulated to modify the slippage mechanism at the bone-screw interfaces and simulate the strength change in the squeezed bones

  7. Towards a chromatographic similarity index to establish localised quantitative structure-retention relationships for retention prediction. II Use of Tanimoto similarity index in ion chromatography.

    PubMed

    Park, Soo Hyun; Talebi, Mohammad; Amos, Ruth I J; Tyteca, Eva; Haddad, Paul R; Szucs, Roman; Pohl, Christopher A; Dolan, John W

    2017-11-10

    Quantitative Structure-Retention Relationships (QSRR) are used to predict retention times of compounds based only on their chemical structures encoded by molecular descriptors. The main concern in QSRR modelling is to build models with high predictive power, allowing reliable retention prediction for the unknown compounds across the chromatographic space. With the aim of enhancing the prediction power of the models, in this work, our previously proposed QSRR modelling approach called "federation of local models" is extended in ion chromatography to predict retention times of unknown ions, where a local model for each target ion (unknown) is created using only structurally similar ions from the dataset. A Tanimoto similarity (TS) score was utilised as a measure of structural similarity and training sets were developed by including ions that were similar to the target ion, as defined by a threshold value. The prediction of retention parameters (a- and b-values) in the linear solvent strength (LSS) model in ion chromatography, log k = a - b log[eluent], allows the prediction of retention times under all eluent concentrations. The QSRR models for a- and b-values were developed by a genetic algorithm-partial least squares method using the retention data of inorganic and small organic anions and larger organic cations (molecular mass up to 507) on four Thermo Fisher Scientific columns (AS20, AS19, AS11HC and CS17). The corresponding predicted retention times were calculated by fitting the predicted a- and b-values of the models into the LSS model equation. The predicted retention times were also plotted against the experimental values to evaluate the goodness of fit and the predictive power of the models. The application of a TS threshold of 0.6 was found to successfully produce predictive and reliable QSRR models (external Q2(F2) > 0.8 and mean absolute error < 0.1), and hence accurate retention time predictions with an average mean absolute error of 0.2 min. Crown Copyright
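
    The sketch below illustrates the two ingredients named in the abstract: a Tanimoto similarity between binary fingerprints (used to select structurally similar training ions) and the LSS relation log k = a - b log[eluent] used to convert predicted a- and b-values into retention times. The numerical values are hypothetical.

    ```python
    import numpy as np

    def tanimoto(fp_a, fp_b):
        """Tanimoto similarity between two binary fingerprint vectors."""
        a, b = np.asarray(fp_a, bool), np.asarray(fp_b, bool)
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 0.0

    def retention_time(a, b, eluent_conc, t0=1.0):
        """LSS model from the abstract: log k = a - b*log[eluent];
        t_R = t0 * (1 + k). a and b would come from the local QSRR models."""
        k = 10.0 ** (a - b * np.log10(eluent_conc))
        return t0 * (1.0 + k)

    print(tanimoto([1, 0, 1, 1, 0], [1, 1, 1, 0, 0]))        # similarity of two ions
    print(retention_time(a=1.2, b=1.5, eluent_conc=20.0))    # eluent in mM, t0 in min
    ```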

  8. Theoretical and experimental studies of reentry plasmas

    NASA Technical Reports Server (NTRS)

    Dunn, M. G.; Kang, S.

    1973-01-01

    A viscous shock-layer analysis was developed and used to calculate nonequilibrium-flow species distributions in the plasma layer of the RAM vehicle. The theoretical electron-density results obtained are in good agreement with those measured in flight. A circular-aperture flush-mounted antenna was used to obtain a comparison between theoretical and experimental antenna admittance in the presence of ionized boundary layers of low collision frequency. The electron-temperature and electron-density distributions in the boundary layer were independently measured. The antenna admittance was measured using a four-probe microwave reflectometer and these measured values were found to be in good agreement with those predicted. Measurements were also performed with another type of circular-aperture antenna and good agreement was obtained between the calculations and the experimental results. A theoretical analysis has been completed which permits calculation of the nonequilibrium, viscous shock-layer flow field for a sphere-cone body. Results are presented for two different bodies at several different altitudes illustrating the influences of bluntness and chemical nonequilibrium on several gas dynamic parameters of interest. Plane-wave transmission coefficients were calculated for an approximate space-shuttle body using a typical trajectory.

  9. Microcomputer Calculation of Theoretical Pre-Exponential Factors for Bimolecular Reactions.

    ERIC Educational Resources Information Center

    Venugopalan, Mundiyath

    1991-01-01

    Described is the application of microcomputers to predict reaction rates based on theoretical atomic and molecular properties taught in undergraduate physical chemistry. Listed is the BASIC program which computes the partition functions for any specific bimolecular reactants. These functions are then used to calculate the pre-exponential factor of…
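
    The program described above computes pre-exponential factors from partition functions; as a rough, well-known alternative, the sketch below gives the collision-theory estimate A = N_A * sigma * sqrt(8 k_B T / (pi * mu)). The molecular values are illustrative only and not taken from the abstract.

    ```python
    import numpy as np

    N_A = 6.022e23        # Avogadro constant, 1/mol
    k_B = 1.380649e-23    # Boltzmann constant, J/K

    def collision_A(d_ab, m_a, m_b, T):
        """Collision-theory pre-exponential factor for a bimolecular reaction.
        d_ab: collision diameter (m); m_a, m_b: molecular masses (kg); T: K.
        Returns A in m^3 mol^-1 s^-1 (multiply by 1e3 for L mol^-1 s^-1)."""
        mu = m_a * m_b / (m_a + m_b)          # reduced mass
        sigma = np.pi * d_ab**2               # collision cross-section
        return N_A * sigma * np.sqrt(8.0 * k_B * T / (np.pi * mu))

    # Illustrative numbers only (roughly H2 + I2-sized reactants at 600 K):
    print(collision_A(d_ab=3.5e-10, m_a=3.3e-27, m_b=4.2e-25, T=600.0))
    ```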

  10. Path Analysis Tests of Theoretical Models of Children's Memory Performance

    ERIC Educational Resources Information Center

    DeMarie, Darlene; Miller, Patricia H.; Ferron, John; Cunningham, Walter R.

    2004-01-01

    Path analysis was used to test theoretical models of relations among variables known to predict differences in children's memory--strategies, capacity, and metamemory. Children in kindergarten to fourth grade (chronological ages 5 to 11) performed different memory tasks. Several strategies (i.e., sorting, clustering, rehearsal, and self-testing)…

  11. A quantitative description for efficient financial markets

    NASA Astrophysics Data System (ADS)

    Immonen, Eero

    2015-09-01

    In this article we develop a control system model for describing efficient financial markets. We define the efficiency of a financial market in quantitative terms by robust asymptotic price-value equality in this model. By invoking the Internal Model Principle of robust output regulation theory we then show that under No Bubble Conditions, in the proposed model, the market is efficient if and only if the following conditions hold true: (1) the traders, as a group, can identify any mispricing in asset value (even if no one single trader can do it accurately), and (2) the traders, as a group, incorporate an internal model of the value process (again, even if no one single trader knows it). This main result of the article, which deliberately avoids the requirement for investor rationality, demonstrates, in quantitative terms, that the more transparent the markets are, the more efficient they are. An extensive example is provided to illustrate the theoretical development.

  12. Global analysis of seasonal streamflow predictability using an ensemble prediction system and observations from 6192 small catchments worldwide

    NASA Astrophysics Data System (ADS)

    van Dijk, Albert I. J. M.; Peña-Arancibia, Jorge L.; Wood, Eric F.; Sheffield, Justin; Beck, Hylke E.

    2013-05-01

    Ideally, a seasonal streamflow forecasting system would ingest skilful climate forecasts and propagate these through calibrated hydrological models initialized with observed catchment conditions. At global scale, practical problems exist in each of these aspects. For the first time, we analyzed theoretical and actual skill in bimonthly streamflow forecasts from a global ensemble streamflow prediction (ESP) system. Forecasts were generated six times per year for 1979-2008 by an initialized hydrological model and an ensemble of 1° resolution daily climate estimates for the preceding 30 years. A post-ESP conditional sampling method was applied to 2.6% of forecasts, based on predictive relationships between precipitation and 1 of 21 climate indices prior to the forecast date. Theoretical skill was assessed against a reference run with historic forcing. Actual skill was assessed against streamflow records for 6192 small (<10,000 km2) catchments worldwide. The results show that initial catchment conditions provide the main source of skill. Post-ESP sampling enhanced skill in equatorial South America and Southeast Asia, particularly in terms of tercile probability skill, due to the persistence and influence of the El Niño Southern Oscillation. Actual skill was on average 54% of theoretical skill but considerably more for selected regions and times of year. The realized fraction of the theoretical skill probably depended primarily on the quality of precipitation estimates. Forecast skill could be predicted as the product of theoretical skill and historic model performance. Increases in seasonal forecast skill are likely to require improvement in the observation of precipitation and initial hydrological conditions.

  13. Prediction of the dollar to the ruble rate. A system-theoretic approach

    NASA Astrophysics Data System (ADS)

    Borodachev, Sergey M.

    2017-07-01

    A simple state-space model of dollar-rate formation is proposed, based on changes in oil prices and on mechanisms of money transfer between the monetary and stock markets. Predictions from an input-output model and from the state-space model are compared. It is concluded that, with proper use of statistical data via Kalman filtering, the state-space approach provides more adequate predictions of the dollar rate.
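
    A generic predict/update step of a linear Kalman filter, of the kind invoked in the abstract, is sketched below; the paper's specific state variables (oil price, money-transfer mechanisms) and matrices are not reproduced, so F, H, Q and R here are placeholders to be supplied by the model builder.

    ```python
    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        """One predict/update cycle of a linear Kalman filter.
        x, P: state estimate and covariance; z: new observation;
        F, H: state-transition and observation matrices; Q, R: noise covariances."""
        # Predict
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new
    ```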

  14. Further evaluation of quantitative structure--activity relationship models for the prediction of the skin sensitization potency of selected fragrance allergens.

    PubMed

    Patlewicz, Grace Y; Basketter, David A; Pease, Camilla K Smith; Wilson, Karen; Wright, Zoe M; Roberts, David W; Bernard, Guillaume; Arnau, Elena Giménez; Lepoittevin, Jean-Pierre

    2004-02-01

    Fragrance substances represent a very diverse group of chemicals; a proportion of them are associated with the ability to cause allergic reactions in the skin. Efforts to find substitute materials are hindered by the need to undertake animal testing for determining both skin sensitization hazard and potency. One strategy to avoid such testing is through an understanding of the relationships between chemical structure and skin sensitization, so-called structure-activity relationships. In recent work, we evaluated 2 groups of fragrance chemicals -- saturated aldehydes and alpha,beta-unsaturated aldehydes. Simple quantitative structure-activity relationship (QSAR) models relating the EC3 values [derived from the local lymph node assay (LLNA)] to physicochemical properties were developed for both sets of aldehydes. In the current study, we evaluated an additional group of carbonyl-containing compounds to test the predictive power of the developed QSARs and to extend their scope. The QSAR models were used to predict EC3 values of 10 newly selected compounds. Local lymph node assay data generated for these compounds demonstrated that the original QSARs were fairly accurate, but still required improvement. Development of these QSAR models has provided us with a better understanding of the potential mechanisms of action for aldehydes, and hence how to avoid or limit allergy. Knowledge generated from this work is being incorporated into new/improved rules for sensitization in the expert toxicity prediction system, deductive estimation of risk from existing knowledge (DEREK).

  15. Qualitative Versus Quantitative Social Support as a Predictor of Depression in the Elderly.

    ERIC Educational Resources Information Center

    Chwalisz, Kathleen D.; And Others

    This study examined the relationship between qualitative and quantitative indicators of social support in the prediction of depression. Quantitative indicators were examined with regard to their direct effects on depression as well as their indirect effects through their relationship to perceived social support. Subjects were 301…

  16. Pseudoracemic amino acid complexes: blind predictions for flexible two-component crystals.

    PubMed

    Görbitz, Carl Henrik; Dalhus, Bjørn; Day, Graeme M

    2010-08-14

    Ab initio prediction of the crystal packing in complexes between two flexible molecules is a particularly challenging computational chemistry problem. In this work we present results of single crystal structure determinations as well as theoretical predictions for three 1:1 complexes between hydrophobic l- and d-amino acids (pseudoracemates), known from previous crystallographic work to form structures with one of two alternative hydrogen bonding arrangements. These are accurately reproduced in the theoretical predictions together with a series of patterns that have never been observed experimentally. In this bewildering forest of potential polymorphs, hydrogen bonding arrangements and molecular conformations, the theoretical predictions succeeded, for all three complexes, in finding the correct hydrogen bonding pattern. For two of the complexes, the calculations also reproduce the exact space group and side chain orientations in the best ranked predicted structure. This includes one complex for which the observed crystal packing clearly contradicted previous experience based on experimental data for a substantial number of related amino acid complexes. The results highlight the significant recent advances that have been made in computational methods for crystal structure prediction.

  17. Studying Biology to Understand Risk: Dosimetry Models and Quantitative Adverse Outcome Pathways

    EPA Science Inventory

    Confidence in the quantitative prediction of risk is increased when the prediction is based to as great an extent as possible on the relevant biological factors that constitute the pathway from exposure to adverse outcome. With the first examples now over 40 years old, physiologi...

  18. Vessel wall characterization using quantitative MRI: what's in a number?

    PubMed

    Coolen, Bram F; Calcagno, Claudia; van Ooij, Pim; Fayad, Zahi A; Strijkers, Gustav J; Nederveen, Aart J

    2018-02-01

    The past decade has witnessed the rapid development of new MRI technology for vessel wall imaging. Today, with advances in MRI hardware and pulse sequences, quantitative MRI of the vessel wall represents a real alternative to conventional qualitative imaging, which is hindered by significant intra- and inter-observer variability. Quantitative MRI can measure several important morphological and functional characteristics of the vessel wall. This review provides a detailed introduction to novel quantitative MRI methods for measuring vessel wall dimensions, plaque composition and permeability, endothelial shear stress and wall stiffness. Together, these methods show the versatility of non-invasive quantitative MRI for probing vascular disease at several stages. These quantitative MRI biomarkers can play an important role in the context of both treatment response monitoring and risk prediction. Given the rapid developments in scan acceleration techniques and novel image reconstruction, we foresee the possibility of integrating the acquisition of multiple quantitative vessel wall parameters within a single scan session.

  19. Theoretical Analysis of an Iron Mineral-Based Magnetoreceptor Model in Birds

    PubMed Central

    Solov'yov, Ilia A.; Greiner, Walter

    2007-01-01

    Sensing the magnetic field has been established as an essential part of navigation and orientation of various animals for many years. Only recently has the first detailed receptor concept for magnetoreception been published based on histological and physical results. The considered mechanism involves two types of iron minerals (magnetite and maghemite) that were found in subcellular compartments within sensory dendrites of the upper beak of several bird species. But so far a quantitative evaluation of the proposed receptor is missing. In this article, we develop a theoretical model to quantitatively and qualitatively describe the magnetic field effects among particles containing iron minerals. The analysis of forces acting between these subcellular compartments shows a particular dependence on the orientation of the external magnetic field. The iron minerals in the beak are found in the form of crystalline maghemite platelets and assemblies of magnetite nanoparticles. We demonstrate that the pull or push to the magnetite assemblies, which are connected to the cell membrane, may reach a value of 0.2 pN—sufficient to excite specific mechanoreceptive membrane channels in the nerve cell. The theoretical analysis of the assumed magnetoreceptor system in the avian beak skin clearly shows that it might indeed be a sensitive biological magnetometer providing an essential part of the magnetic map for navigation. PMID:17496012

  20. A quantitative microscopic approach to predict local recurrence based on in vivo intraoperative imaging of sarcoma tumor margins

    PubMed Central

    Mueller, Jenna L.; Fu, Henry L.; Mito, Jeffrey K.; Whitley, Melodi J.; Chitalia, Rhea; Erkanli, Alaattin; Dodd, Leslie; Cardona, Diana M.; Geradts, Joseph; Willett, Rebecca M.; Kirsch, David G.; Ramanujam, Nimmi

    2015-01-01

    The goal of resection of soft tissue sarcomas located in the extremity is to preserve limb function while completely excising the tumor with a margin of normal tissue. With surgery alone, one-third of patients with soft tissue sarcoma of the extremity will have local recurrence due to microscopic residual disease in the tumor bed. Currently, a limited number of intraoperative pathology-based techniques are used to assess margin status; however, few have been widely adopted due to sampling error and time constraints. To aid in intraoperative diagnosis, we developed a quantitative optical microscopy toolbox, which includes acriflavine staining, fluorescence microscopy, and analytic techniques called sparse component analysis and circle transform to yield quantitative diagnosis of tumor margins. A series of variables were quantified from images of resected primary sarcomas and used to optimize a multivariate model. The sensitivity and specificity for differentiating positive from negative ex vivo resected tumor margins were 82% and 75%. The utility of this approach was tested by imaging the in vivo tumor cavities of 34 mice after resection of a sarcoma, with local recurrence as the benchmark. When applied prospectively to images from the tumor cavity, the sensitivity and specificity for differentiating local recurrence were 78% and 82%. For comparison, if pathology was used to predict local recurrence in this data set, it would achieve a sensitivity of 29% and a specificity of 71%. These results indicate a robust approach for detecting microscopic residual disease, which is an effective predictor of local recurrence. PMID:25994353

  1. Limits of quantitation - Yet another suggestion

    NASA Astrophysics Data System (ADS)

    Carlson, Jill; Wysoczanski, Artur; Voigtman, Edward

    2014-06-01

    The work presented herein suggests that the limit of quantitation concept may be rendered substantially less ambiguous and ultimately more useful as a figure of merit by basing it upon the significant figure and relative measurement error ideas due to Coleman, Auses and Gram, coupled with the correct instantiation of Currie's detection limit methodology. Simple theoretical results are presented for a linear, univariate chemical measurement system with homoscedastic Gaussian noise, and these are tested against both Monte Carlo computer simulations and laser-excited molecular fluorescence experimental results. Good agreement among experiment, theory and simulation is obtained and an easy extension to linearly heteroscedastic Gaussian noise is also outlined.
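
    For orientation, the sketch below computes the classical Currie-style decision, detection and quantitation levels for a linear calibration with homoscedastic Gaussian noise; the paper's refined limit-of-quantitation proposal differs in detail and is not reproduced here.

    ```python
    import numpy as np

    def currie_limits(blank_signals, slope, k_q=10.0):
        """Classical decision, detection and quantitation levels (in concentration
        units) from replicate blank measurements and the calibration slope."""
        s_blank = np.std(blank_signals, ddof=1)
        L_C = 1.645 * s_blank    # decision level (about 5% false positives)
        L_D = 3.29 * s_blank     # detection limit (about 5% false negatives)
        L_Q = k_q * s_blank      # quantitation limit (about 10% relative std. dev.)
        # Convert net-signal levels to concentration units via the slope:
        return L_C / slope, L_D / slope, L_Q / slope

    # Hypothetical blank readings and calibration slope:
    print(currie_limits(blank_signals=[0.9, 1.1, 1.0, 0.95, 1.05], slope=2.0))
    ```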

  2. Quantitative radiomic profiling of glioblastoma represents transcriptomic expression.

    PubMed

    Kong, Doo-Sik; Kim, Junhyung; Ryu, Gyuha; You, Hye-Jin; Sung, Joon Kyung; Han, Yong Hee; Shin, Hye-Mi; Lee, In-Hee; Kim, Sung-Tae; Park, Chul-Kee; Choi, Seung Hong; Choi, Jeong Won; Seol, Ho Jun; Lee, Jung-Il; Nam, Do-Hyun

    2018-01-19

    Quantitative imaging biomarkers have increasingly emerged in the field of research utilizing available imaging modalities. We aimed to identify good surrogate radiomic features that can represent genetic changes of tumors, thereby establishing noninvasive means for predicting treatment outcome. From May 2012 to June 2014, we retrospectively identified 65 patients with treatment-naïve glioblastoma with available clinical information from the Samsung Medical Center data registry. Preoperative MR imaging data were obtained for all 65 patients with primary glioblastoma. A total of 82 imaging features, including first-order statistics, volume and size features, were semi-automatically extracted from structural and physiologic images such as apparent diffusion coefficient and perfusion images. Using commercially available software, NordicICE, we performed quantitative imaging analysis and collected a dataset composed of radiophenotypic parameters. Unsupervised clustering methods revealed that the radiophenotypic dataset was composed of three clusters. Each cluster represented a distinct molecular classification of glioblastoma: classical type, proneural and neural types, and mesenchymal type. These clusters also reflected differential clinical outcomes. We found that the extracted imaging signatures do not represent copy number variation or somatic mutations. Quantitative radiomic features provide potential evidence for predicting molecular phenotype and treatment outcome. Radiomic profiles represent transcriptomic phenotypes well.
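
    The sketch below shows an unsupervised clustering of a standardized radiomic feature matrix into three groups, in the spirit of the analysis described above; the clustering algorithm, random data and feature count are placeholders rather than the study's actual pipeline.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Placeholder feature matrix: 65 patients x 82 radiomic features.
    rng = np.random.default_rng(42)
    features = rng.normal(size=(65, 82))

    X = StandardScaler().fit_transform(features)      # z-score each feature
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print(np.bincount(labels))                         # cluster sizes
    ```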

  3. A new theory of plant-microbe nutrient competition resolves inconsistencies between observations and model predictions.

    PubMed

    Zhu, Qing; Riley, William J; Tang, Jinyun

    2017-04-01

    Terrestrial plants assimilate anthropogenic CO2 through photosynthesis and synthesizing new tissues. However, sustaining these processes requires plants to compete with microbes for soil nutrients, which therefore calls for an appropriate understanding and modeling of nutrient competition mechanisms in Earth System Models (ESMs). Here, we survey existing plant-microbe competition theories and their implementations in ESMs. We found no consensus regarding the representation of nutrient competition and that observational and theoretical support for current implementations are weak. To reconcile this situation, we applied the Equilibrium Chemistry Approximation (ECA) theory to plant-microbe nitrogen competition in a detailed grassland 15N tracer study and found that competition theories in current ESMs fail to capture observed patterns and the ECA prediction simplifies the complex nature of nutrient competition and quantitatively matches the 15N observations. Since plant carbon dynamics are strongly modulated by soil nutrient acquisition, we conclude that (1) predicted nutrient limitation effects on terrestrial carbon accumulation by existing ESMs may be biased and (2) our ECA-based approach may improve predictions by mechanistically representing plant-microbe nutrient competition. © 2016 by the Ecological Society of America.

  4. Comparisons Between Experimental and Semi-theoretical Cutting Forces of CCS Disc Cutters

    NASA Astrophysics Data System (ADS)

    Xia, Yimin; Guo, Ben; Tan, Qing; Zhang, Xuhui; Lan, Hao; Ji, Zhiyong

    2018-05-01

    This paper focuses on comparisons between the experimental and semi-theoretical forces of CCS disc cutters acting on different rocks. The experimental forces obtained from LCM tests were used to evaluate the prediction accuracy of a semi-theoretical CSM model. The results show that the CSM model reliably predicts the normal forces acting on red sandstone and granite, but underestimates the normal forces acting on marble. Some additional LCM test data from the literature were collected to further explore the ability of the CSM model to predict the normal forces acting on rocks of different strengths. The CSM model underestimates the normal forces acting on soft rocks, semi-hard rocks and hard rocks by approximately 38, 38 and 10%, respectively, but very accurately predicts those acting on very hard and extremely hard rocks. A calibration factor is introduced to modify the normal forces estimated by the CSM model. The overall trend of the calibration factor is characterized by an exponential decrease with increasing rock uniaxial compressive strength. The mean fitting ratios between the normal forces estimated by the modified CSM model and the experimental normal forces acting on soft rocks, semi-hard rocks and hard rocks are 1.076, 0.879 and 1.013, respectively. The results indicate that the prediction accuracy and the reliability of the CSM model have been improved.

  5. Diphosphoglycerate and Inosine Hexaphosphate Control of Oxygen Binding by Hemoglobin: A Theoretical Interpretation of Experimental Data*

    PubMed Central

    Ling, Gilbert N.

    1970-01-01

    A theoretical equation is presented for the control of cooperative adsorption on proteins and other linear macromolecules by hormones, drugs, ATP, and other "cardinal adsorbents." With reasonable accuracy, this equation describes quantitatively the control of oxygen binding to hemoglobin by 2,3-diphosphoglycerate and by inosine hexaphosphate. PMID:5272319

  6. Predicting excitonic gaps of semiconducting single-walled carbon nanotubes from a field theoretic analysis

    DOE PAGES

    Konik, Robert M.; Sfeir, Matthew Y.; Misewich, James A.

    2015-02-17

    We demonstrate that a non-perturbative framework for the treatment of the excitations of single walled carbon nanotubes based upon a field theoretic reduction is able to accurately describe experimental observations of the absolute values of excitonic energies. This theoretical framework yields a simple scaling function from which the excitonic energies can be read off. This scaling function is primarily determined by a single parameter, the charge Luttinger parameter of the tube, which is in turn a function of the tube chirality, dielectric environment, and the tube's dimensions, thus expressing disparate influences on the excitonic energies in a unified fashion. As a result, we test this theory explicitly on the data reported in [NanoLetters 5, 2314 (2005)] and [Phys. Rev. B 82, 195424 (2010)] and so demonstrate the method works over a wide range of reported excitonic spectra.

  7. Quantitative somatosensory testing of the penis: optimizing the clinical neurological examination.

    PubMed

    Bleustein, Clifford B; Eckholdt, Haftan; Arezzo, Joseph C; Melman, Arnold

    2003-06-01

    Quantitative somatosensory testing, including vibration, pressure, spatial perception and thermal thresholds of the penis, has demonstrated neuropathy in patients with a history of erectile dysfunction of all etiologies. We evaluated which measurement of neurological function of the penis was best at predicting erectile dysfunction and examined the impact of location on the penis for quantitative somatosensory testing measurements. A total of 107 patients were evaluated. All patients were required to complete the erectile function domain of the International Index of Erectile Function (IIEF) questionnaire, of whom 24 had no complaints of erectile dysfunction and scored within the "normal" range on the IIEF. Patients were subsequently tested on ventral middle penile shaft, proximal dorsal midline penile shaft and glans penis (with foreskin retracted) for vibration, pressure, spatial perception, and warm and cold thermal thresholds. Mixed models repeated measures analysis of variance controlling for age, diabetes and hypertension revealed that method of measurement (quantitative somatosensory testing) was predictive of IIEF score (F = 209, df = 4,1315, p < 0.001), while site of measurement on the penis was not. To determine the best method of measurement, we used hierarchical regression, which revealed that warm temperature was the best predictor of erectile dysfunction with pseudo-R2 = 0.19, p < 0.0007. There was no significant improvement in predicting erectile dysfunction when another test was added. Using 37 °C and greater as the warm thermal threshold yielded a sensitivity of 88.5%, specificity 70.0% and positive predictive value 85.5%. Quantitative somatosensory testing using warm thermal threshold measurements taken at the glans penis can be used alone to assess the neurological status of the penis. Warm thermal thresholds alone offer a quick, noninvasive accurate method of evaluating penile neuropathy in an office setting.
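
    The diagnostic summary statistics quoted above can be computed from a confusion matrix as sketched below; the labels in the usage example are invented and do not reproduce the study's data.

    ```python
    import numpy as np

    def diagnostic_metrics(y_true, y_pred):
        """Sensitivity, specificity and positive predictive value for a binary
        rule such as 'warm threshold at or above 37 C implies neuropathy/ED'."""
        y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
        tp = np.sum(y_true & y_pred)
        tn = np.sum(~y_true & ~y_pred)
        fp = np.sum(~y_true & y_pred)
        fn = np.sum(y_true & ~y_pred)
        return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)

    # Hypothetical labels: true ED status vs. the warm-threshold rule.
    sens, spec, ppv = diagnostic_metrics([1, 1, 0, 1, 0, 1, 0],
                                         [1, 1, 0, 0, 1, 1, 0])
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, PPV={ppv:.2f}")
    ```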

  8. WE-FG-207B-12: Quantitative Evaluation of a Spectral CT Scanner in a Phantom Study: Results of Spectral Reconstructions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, X; Arbique, G; Guild, J

    Purpose: To evaluate the quantitative image quality of spectral reconstructions of phantom data from a spectral CT scanner. Methods: The spectral CT scanner (IQon Spectral CT, Philips Healthcare) is equipped with a dual-layer detector and generates conventional 80-140 kVp images and a variety of spectral reconstructions, e.g., virtual monochromatic (VM) images, virtual non-contrast (VNC) images, iodine maps, and effective atomic number (Z) images. A cylindrical solid water phantom (Gammex 472, 33 cm diameter and 5 cm thick) with iodine (2.0-20.0 mg I/ml) and calcium (50-600 mg/ml) rod inserts was scanned at 120 kVp and 27 mGy CTDIvol. Spectral reconstructions were evaluated by comparing image measurements with theoretical values calculated from nominal rod compositions provided by the phantom manufacturer. The theoretical VNC was calculated using water and iodine basis material decomposition, and the theoretical Z was calculated using two common methods, the chemical formula method (Z1) and the dual-energy ratio method (Z2). Results: Beam-hardening-like artifacts between high-attenuation calcium rods (≥300 mg/ml, >800 HU) influenced quantitative measurements, so the quantitative analysis was only performed on iodine rods using the images from the scan with all the calcium rods removed. The CT numbers of the iodine rods in the VM images (50∼150 keV) were close to theoretical values with an average difference of 2.4±6.9 HU. Compared with theoretical values, the average differences for iodine concentration, VNC CT number and effective Z of the iodine rods were −0.10±0.38 mg/ml, −0.1±8.2 HU, 0.25±0.06 (Z1) and −0.23±0.07 (Z2). Conclusion: The results indicate that the spectral CT scanner generates quantitatively accurate spectral reconstructions at clinically relevant iodine concentrations. Beam-hardening-like artifacts still exist when high-attenuation objects are present and their impact on patient images needs further investigation. YY is an employee of
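
    One common convention for an effective atomic number, the power-law "chemical formula" (Mayneord) method, is sketched below as a point of reference; the abstract's Z1 and Z2 definitions may use different exponents or weightings, so this is an assumption-laden illustration rather than the study's calculation.

    ```python
    import numpy as np

    def z_eff(fractions_by_mass, Z, A, m=2.94):
        """Effective atomic number Z_eff = (sum_i f_i * Z_i**m)**(1/m), where f_i
        is the fraction of electrons contributed by element i and m ~ 2.94."""
        w = np.asarray(fractions_by_mass, float)
        Z = np.asarray(Z, float)
        A = np.asarray(A, float)
        electrons = w * Z / A                 # electrons per gram from each element
        f = electrons / electrons.sum()       # electron fractions
        return (np.sum(f * Z**m)) ** (1.0 / m)

    # Water as a sanity check (H: 11.19%, O: 88.81% by mass) -> roughly 7.4
    print(z_eff([0.1119, 0.8881], Z=[1, 8], A=[1.008, 16.00]))
    ```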

  9. Neurocognitive mechanisms of perception-action coordination: a review and theoretical integration.

    PubMed

    Ridderinkhof, K Richard

    2014-10-01

    The present analysis aims at a theoretical integration of, and a systems-neuroscience perspective on, a variety of historical and contemporary views on perception-action coordination (PAC). We set out to determine the common principles or lawful linkages between sensory and motor systems that explain how perception is action-oriented and how action is perceptually guided. To this end, we analyze the key ingredients to such an integrated framework, examine the architecture of dual-system conjectures of PAC, and endeavor in an historical analysis of the key characteristics, mechanisms, and phenomena of PACs. This analysis will reveal that dual-systems views are in need of fundamental re-thinking, and its elements will be amalgamated with current views on action-oriented predictive processing into a novel integrative theoretical framework (IMPPACT: Impetus, Motivation, and Prediction in Perception-Action Coordination theory). From this framework and its neurocognitive architecture we derive a number of non-trivial predictions regarding conative, motive-driven PAC. We end by presenting a brief outlook on how IMPPACT might present novel insights into certain pathologies and into action expertise. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. A theoretical analysis of deformation behavior of auxetic plied yarn structure

    NASA Astrophysics Data System (ADS)

    Zeng, Jifang; Hu, Hong

    2018-07-01

    This paper presents a theoretical analysis of the auxetic plied yarn (APY) structure formed with two types of single yarns having different diameters and moduli. A model which can be used to predict its deformation behavior under axial extension is developed based on the theoretical analysis. The developed model is first compared with the experimental data obtained in a previous study, and then used to predict the effects of different structural and material parameters on the auxetic behavior of the APY. The calculation results show that the developed model can correctly predict the variation trend of the auxetic behavior of the APY, which first increases and then decreases with increasing axial strain. The calculation results also indicate that the auxetic behavior of the APY simultaneously depends on the diameter ratio of the soft yarn and stiff yarn as well as the ratio between the pitch length and stiff yarn diameter. The study provides a way to design and fabricate APYs with the same auxetic behavior by using different soft and stiff yarns as long as these two ratios are kept unchanged.

  11. Predicting Job Satisfaction.

    ERIC Educational Resources Information Center

    Blai, Boris, Jr.

    Psychological theories about human motivation and accommodation to environment can be used to achieve a better understanding of the human factors that function in the work environment. Maslow's theory of human motivational behavior provided a theoretical framework for an empirically-derived method to predict job satisfaction and explore the…

  12. Theoretical prediction of the vibrational spectra of group IB trimers

    PubMed Central

    Richtsmeier, Steven C.; Gole, James L.; Dixon, David A.

    1980-01-01

    The molecular structures of the group IB trimers, Cu3, Ag3, and Au3, have been determined by using the semi-empirical diatomics-in-molecules theory. The trimers are found to have C2v symmetry with bond angles between 65° and 80°. The trimers are bound with respect to dissociation to the asymptotic limit of an atom plus a diatom. The binding energies per atom for Cu3, Ag3, and Au3 are 1.08, 0.75, and 1.16 eV, respectively. The vibrational frequencies of the trimers have been determined for comparison with experimental results. The vibrational frequencies are characterized by low values for the bending and asymmetric stretch modes. The frequency of the symmetric stretch of the trimer is higher than the stretching frequency of the corresponding diatomic. A detailed comparison of the theoretical results with the previously measured Raman spectra of matrix isolated Ag3 is presented. PMID:16592885

  13. Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases

    PubMed Central

    Zhang, Hongpo

    2018-01-01

    Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and predictions from shallow neural networks show large variance. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. The network depth is determined automatically from the reconstruction error, and unsupervised training is combined with supervised optimization, which ensures prediction accuracy while maintaining stability. Thirty independent experiments were performed on the Statlog (Heart) and Heart Disease Database data sets in the UCI database. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively, and the variance of the prediction accuracy was 5.78 and 4.46, respectively. PMID:29854369
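
    The record's key idea, growing the deep belief network until the reconstruction error stops improving and then fine-tuning with labels, can be sketched roughly as follows. This is a minimal illustration using scikit-learn's BernoulliRBM with an assumed improvement threshold and a plain logistic classifier on top; it is not the authors' implementation.

        import numpy as np
        from sklearn.neural_network import BernoulliRBM
        from sklearn.linear_model import LogisticRegression

        def grow_dbn(X, max_layers=5, n_hidden=64, tol=1e-3, seed=0):
            """Stack RBMs one layer at a time; stop when reconstruction error
            improves by less than `tol` (threshold is an assumption)."""
            layers, data, prev_err = [], X, np.inf
            for _ in range(max_layers):
                rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                                   n_iter=20, random_state=seed)
                rbm.fit(data)
                # crude reconstruction error: one Gibbs step back to the visible units
                err = np.mean((data - rbm.gibbs(data)) ** 2)
                if prev_err - err < tol:
                    break                      # depth determined by reconstruction error
                layers.append(rbm)
                data = rbm.transform(data)     # hidden activations feed the next layer
                prev_err = err
            return layers

        def fit_dbn_classifier(X, y, layers):
            """Supervised stage: propagate through the stack, then fit a classifier."""
            for rbm in layers:
                X = rbm.transform(X)
            return LogisticRegression(max_iter=1000).fit(X, y)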

  14. A theory of utility conditionals: Paralogical reasoning from decision-theoretic leakage.

    PubMed

    Bonnefon, Jean-François

    2009-10-01

    Many "if p, then q" conditionals have decision-theoretic features, such as antecedents or consequents that relate to the utility functions of various agents. These decision-theoretic features leak into reasoning processes, resulting in various paralogical conclusions. The theory of utility conditionals offers a unified account of the various forms that this phenomenon can take. The theory is built on 2 main components: (1) a representational tool (the utility grid), which summarizes in compact form the decision-theoretic features of a conditional, and (2) a set of folk axioms of decision, which reflect reasoners' beliefs about the way most agents make their decisions. Applying the folk axioms to the utility grid of a conditional allows for the systematic prediction of the paralogical conclusions invited by the utility grid's decision-theoretic features. The theory of utility conditionals significantly extends the scope of current theories of conditional inference and moves reasoning research toward a greater integration with decision-making research.

  15. A method to explore the quantitative interactions between metal and ceria for M/CeO2 catalysts

    NASA Astrophysics Data System (ADS)

    Zhu, Kong-Jie; Liu, Jie; Yang, Yan-Ju; Xu, Yu-Xing; Teng, Bo-Tao; Wen, Xiao-Dong; Fan, Maohong

    2018-03-01

    Exploring the quantitative relationship between a metal and ceria plays a key role in the theoretical design of M/CeO2 catalysts, especially for the new hot topic of atomically dispersed catalysts. A method to quantitatively explore the interactions between metal and ceria is proposed in the present work, on the basis of a qualitative analysis of the effects of different factors on metal adsorption at different ceria surfaces, using Ag/CeO2 as a case study. Two parameters are first introduced: Ep, which converts the total adsorption energy into the interaction energy per Ag-O bond, and θdiff, which measures the deviation of the Ag-O-Ce bond angle from the angle of the sp3 orbital hybridization of the O atom. Using these two parameters, a quantitative relationship for the interaction energy between Ag and ceria is established: both Ep and the Ag-O bond length correlate linearly with θdiff, and the larger θdiff, the weaker Ep and the longer the Ag-O bond. This method is also suitable for other metals (Cu, Ni, Pd, Rh, etc.) on ceria. This is the first time that a quantitative relationship for the interaction between a metal and ceria has been established, and it sheds light on the theoretical design of M/CeO2 catalysts.
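
    The two descriptors defined in this record reduce to simple arithmetic once the adsorption geometry and energy are known. The sketch below follows the definitions stated in the abstract (energy per metal-O bond; angular deviation from the tetrahedral sp3 angle, taken here as 109.47°); the coordinates and energy in the example are made up.

        import numpy as np

        SP3_ANGLE = 109.47  # ideal sp3 angle of the O atom, degrees (assumed value)

        def e_p(e_ads_total, n_metal_o_bonds):
            """Interaction energy per metal-O bond (e.g. eV/bond)."""
            return e_ads_total / n_metal_o_bonds

        def theta_diff(metal, o, ce):
            """Deviation of the metal-O-Ce angle from the sp3 angle, in degrees."""
            v1 = np.asarray(metal, float) - np.asarray(o, float)
            v2 = np.asarray(ce, float) - np.asarray(o, float)
            cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
            return abs(angle - SP3_ANGLE)

        # Example with illustrative values: -2.4 eV spread over three Ag-O bonds
        print(e_p(-2.4, 3))
        print(theta_diff([0, 0, 2.2], [0, 0, 0], [2.3, 0, -0.8]))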

  16. Quantitative analysis of intermolecular interactions in orthorhombic rubrene

    DOE PAGES

    Hathwar, Venkatesha R.; Sist, Mattia; Jørgensen, Mads R. V.; ...

    2015-08-14

    Rubrene is one of the most studied organic semiconductors to date due to its high charge carrier mobility, which makes it a potentially applicable compound in modern electronic devices. Previous electronic device characterizations and first-principles theoretical calculations assigned the semiconducting properties of rubrene to the presence of a large overlap of the extended π-conjugated core between molecules. We present here the electron density distribution in rubrene at 20 K and at 100 K obtained using a combination of high-resolution X-ray and neutron diffraction data. The topology of the electron density and the energies of intermolecular interactions are studied quantitatively. Specifically, the presence of Cπ...Cπ interactions between neighbouring tetracene backbones of the rubrene molecules is experimentally confirmed from a topological analysis of the electron density, Non-Covalent Interaction (NCI) analysis, and the calculated interaction energy of molecular dimers. A significant contribution to the lattice energy of the crystal is provided by H—H interactions. The electron density features of H—H bonding, and the interaction energies of molecular dimers connected by H—H interactions, clearly demonstrate the importance of these weak interactions in the stabilization of the crystal structure. Finally, the quantitative nature of the intermolecular interactions is virtually unchanged between 20 K and 100 K, suggesting that any changes in carrier transport at these low temperatures would have a different origin. The obtained experimental results are further supported by theoretical calculations.

  17. Prediction of crosslink density of solid propellant binders. [curing of elastomers

    NASA Technical Reports Server (NTRS)

    Marsh, H. E., Jr.

    1976-01-01

    A quantitative theory is outlined which allows calculation of the crosslink density of solid propellant binders from a small number of predetermined parameters, such as the binder composition, the functionality distributions of the ingredients, and the extent of the curing reaction. The parameter that is partly dependent on process conditions is the extent of reaction. The proposed theoretical model is verified by independent measurement of the effective chain concentration and the sol and gel fractions in simple compositions prepared from model compounds. The model is shown to correlate tensile data with composition in the case of a urethane-cured polyether and certain solid propellants. A formula for the branching coefficient is provided: given the functionality distributions of the ingredients, the corresponding equivalent weights, and a measured or predicted extent of reaction, the branching coefficient of such a system can be calculated for any desired composition.
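
    The abstract refers to a branching-coefficient formula without reproducing it. As a stand-in, the classic Flory expression for a system of difunctional chain extenders, branch units of functionality f, and a complementary co-reactant is sketched below; it assumes equal reactivity of like groups and is not necessarily the exact formula used in the paper.

        def branching_coefficient(p_a, p_b, rho):
            """Classic Flory branching coefficient.
            p_a, p_b : fractions of A and B groups that have reacted
            rho      : fraction of A groups carried by branch (f > 2) units
            (Textbook form; the paper's own expression may differ.)"""
            return (p_a * p_b * rho) / (1.0 - p_a * p_b * (1.0 - rho))

        def critical_alpha(f):
            """Gel-point value of the branching coefficient for functionality f."""
            return 1.0 / (f - 1.0)

        # Example: 95% cure with 30% of A groups on trifunctional units
        alpha = branching_coefficient(0.95, 0.95, 0.30)
        print(alpha, alpha > critical_alpha(3))   # gelled if alpha exceeds 0.5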

  18. Quantitative Approach to Collaborative Learning: Performance Prediction, Individual Assessment, and Group Composition

    ERIC Educational Resources Information Center

    Cen, Ling; Ruta, Dymitr; Powell, Leigh; Hirsch, Benjamin; Ng, Jason

    2016-01-01

    Although the benefits of collaborative learning are widely reported, such reports lack quantitative rigor and detailed insight into the dynamics of interactions within the group, while individual contributions and their impact on group members and on the collaborative work remain hidden behind joint group assessment. To bridge this gap we intend to address…

  19. A Novel Two-Step Hierarchical Quantitative Structure–Activity Relationship Modeling Work Flow for Predicting Acute Toxicity of Chemicals in Rodents

    PubMed Central

    Zhu, Hao; Ye, Lin; Richard, Ann; Golbraikh, Alexander; Wright, Fred A.; Rusyn, Ivan; Tropsha, Alexander

    2009-01-01

    Background Accurate prediction of in vivo toxicity from in vitro testing is a challenging problem. Large public–private consortia have been formed with the goal of improving chemical safety assessment by means of high-throughput screening. Objective A wealth of available biological data requires new computational approaches to link chemical structure, in vitro data, and potential adverse health effects. Methods and results A database containing experimental cytotoxicity values for in vitro half-maximal inhibitory concentration (IC50) and in vivo rodent median lethal dose (LD50) for more than 300 chemicals was compiled by Zentralstelle zur Erfassung und Bewertung von Ersatz- und Ergaenzungsmethoden zum Tierversuch (ZEBET; National Center for Documentation and Evaluation of Alternative Methods to Animal Experiments). The application of conventional quantitative structure–activity relationship (QSAR) modeling approaches to predict mouse or rat acute LD50 values from chemical descriptors of ZEBET compounds yielded no statistically significant models. The analysis of these data showed no significant correlation between IC50 and LD50. However, a linear IC50 versus LD50 correlation could be established for a fraction of compounds. To capitalize on this observation, we developed a novel two-step modeling approach as follows. First, all chemicals are partitioned into two groups based on the relationship between IC50 and LD50 values: one group comprises compounds with linear IC50 versus LD50 relationships, and another group comprises the remaining compounds. Second, we built conventional binary classification QSAR models to predict the group affiliation based on chemical descriptors only. Third, we developed k-nearest neighbor continuous QSAR models for each subclass to predict LD50 values from chemical descriptors. All models were extensively validated using special protocols. Conclusions The novelty of this modeling approach is that it uses the relationships

  20. A novel two-step hierarchical quantitative structure-activity relationship modeling work flow for predicting acute toxicity of chemicals in rodents.

    PubMed

    Zhu, Hao; Ye, Lin; Richard, Ann; Golbraikh, Alexander; Wright, Fred A; Rusyn, Ivan; Tropsha, Alexander

    2009-08-01

    Accurate prediction of in vivo toxicity from in vitro testing is a challenging problem. Large public-private consortia have been formed with the goal of improving chemical safety assessment by means of high-throughput screening. A wealth of available biological data requires new computational approaches to link chemical structure, in vitro data, and potential adverse health effects. A database containing experimental cytotoxicity values for in vitro half-maximal inhibitory concentration (IC(50)) and in vivo rodent median lethal dose (LD(50)) for more than 300 chemicals was compiled by Zentralstelle zur Erfassung und Bewertung von Ersatz- und Ergaenzungsmethoden zum Tierversuch (ZEBET; National Center for Documentation and Evaluation of Alternative Methods to Animal Experiments). The application of conventional quantitative structure-activity relationship (QSAR) modeling approaches to predict mouse or rat acute LD(50) values from chemical descriptors of ZEBET compounds yielded no statistically significant models. The analysis of these data showed no significant correlation between IC(50) and LD(50). However, a linear IC(50) versus LD(50) correlation could be established for a fraction of compounds. To capitalize on this observation, we developed a novel two-step modeling approach as follows. First, all chemicals are partitioned into two groups based on the relationship between IC(50) and LD(50) values: one group comprises compounds with linear IC(50) versus LD(50) relationships, and another group comprises the remaining compounds. Second, we built conventional binary classification QSAR models to predict the group affiliation based on chemical descriptors only. Third, we developed k-nearest neighbor continuous QSAR models for each subclass to predict LD(50) values from chemical descriptors. All models were extensively validated using special protocols. The novelty of this modeling approach is that it uses the relationships between in vivo and in vitro data only
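
    The two-step work flow described in the two preceding records (partition compounds by whether IC50 and LD50 correlate linearly, classify new compounds into a group from chemical descriptors, then apply that group's k-nearest-neighbor regressor) can be outlined as follows. The scikit-learn estimators, descriptor matrix, and neighbor count are illustrative assumptions, not the authors' validated models.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.neighbors import KNeighborsRegressor

        def fit_two_step_qsar(X, log_ld50, linear_group, k=5, seed=0):
            """X            : descriptor matrix (n_compounds x n_descriptors)
               log_ld50     : continuous endpoint to predict
               linear_group : True where IC50 vs LD50 was linear in the training data"""
            clf = RandomForestClassifier(n_estimators=200, random_state=seed)
            clf.fit(X, linear_group)
            regressors = {grp: KNeighborsRegressor(n_neighbors=k)
                              .fit(X[linear_group == grp], log_ld50[linear_group == grp])
                          for grp in (True, False)}
            return clf, regressors

        def predict_two_step_qsar(clf, regressors, X_new):
            """Step 1: predict the group; step 2: use that group's kNN model."""
            groups = clf.predict(X_new).astype(bool)
            preds = np.empty(len(X_new))
            for grp in (True, False):
                mask = groups == grp
                if mask.any():
                    preds[mask] = regressors[grp].predict(X_new[mask])
            return preds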