Evaluating Rapid Models for High-Throughput Exposure Forecasting (SOT)
High-throughput exposure screening models can provide quantitative predictions for thousands of chemicals; however, these predictions must be systematically evaluated for predictive ability. Without the capability to make quantitative, albeit uncertain, forecasts of exposure, the ...
Quantitative Comparison of Photothermal Heat Generation between Gold Nanospheres and Nanorods.
Qin, Zhenpeng; Wang, Yiru; Randrianalisoa, Jaona; Raeesi, Vahid; Chan, Warren C W; Lipiński, Wojciech; Bischof, John C
2016-07-21
Gold nanoparticles (GNPs) are widely used for biomedical applications due to unique optical properties, established synthesis methods, and biological compatibility. Despite important applications of plasmonic heating in thermal therapy, imaging, and diagnostics, the lack of quantification in heat generation leads to difficulties in comparing the heating capability for new plasmonic nanostructures and predicting the therapeutic and diagnostic outcome. This study quantifies GNP heat generation by experimental measurements and theoretical predictions for gold nanospheres (GNS) and nanorods (GNR). Interestingly, the results show a GNP-type dependent agreement between experiment and theory. The measured heat generation of GNS matches well with theory, while the measured heat generation of GNR is only 30% of that predicted theoretically at peak absorption. This then leads to a surprising finding that the polydispersity, the deviation of nanoparticle size and shape from nominal value, significantly influences GNR heat generation (>70% reduction), while having a limited effect for GNS (<10% change). This work demonstrates that polydispersity is an important metric in quantitatively predicting plasmonic heat generation and provides a validated framework to quantitatively compare the heating capabilities between gold and other plasmonic nanostructures.
Parrish, Rudolph S.; Smith, Charles N.
1990-01-01
A quantitative method is described for testing whether model predictions fall within a specified factor of true values. The technique is based on classical theory for confidence regions on unknown population parameters and can be related to hypothesis testing in both univariate and multivariate situations. A capability index is defined that can be used as a measure of predictive capability of a model, and its properties are discussed. The testing approach and the capability index should facilitate model validation efforts and permit comparisons among competing models. An example is given for a pesticide leaching model that predicts chemical concentrations in the soil profile.
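A minimal sketch of the factor-of-f acceptance idea behind such a capability index, in Python; this empirical fraction is a stand-in for the paper's confidence-region formulation, and the data are invented.

```python
import numpy as np

def within_factor_fraction(predicted, observed, factor=2.0):
    """Fraction of predictions falling within a given factor of the
    observed values; an empirical stand-in for the paper's
    confidence-region-based capability index."""
    ratio = np.asarray(predicted, float) / np.asarray(observed, float)
    return np.mean((ratio >= 1.0 / factor) & (ratio <= factor))

# Invented pesticide-concentration example (arbitrary units)
pred = [1.2, 0.8, 3.5, 0.05]
obs = [1.0, 1.1, 1.2, 0.06]
print(within_factor_fraction(pred, obs))  # -> 0.75
```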
Chen, Shangying; Zhang, Peng; Liu, Xin; Qin, Chu; Tao, Lin; Zhang, Cheng; Yang, Sheng Yong; Chen, Yu Zong; Chui, Wai Keung
2016-06-01
The overall efficacy and safety profile of a new drug is partially evaluated by the therapeutic index in clinical studies and by the protective index (PI) in preclinical studies. In-silico predictive methods may facilitate the assessment of these indicators. Although QSAR and QSTR models can be used for predicting PI, their predictive capability has not been evaluated. To test this capability, we developed QSAR and QSTR models for predicting the activity and toxicity of anticonvulsants at accuracy levels above the literature-reported threshold (LT) of good QSAR models, as tested by both internal 5-fold cross-validation and external validation. These models showed significantly compromised PI predictive capability due to the cumulative errors of the QSAR and QSTR models. Therefore, in this investigation a new quantitative structure-index relationship (QSIR) model was devised, and it showed improved PI predictive capability that superseded the LT of good QSAR models. The QSAR, QSTR and QSIR models were developed using the support vector regression (SVR) method with parameters optimized by a greedy search method. The molecular descriptors relevant to the prediction of anticonvulsant activities, toxicities and PIs were analyzed by a recursive feature elimination method. The selected molecular descriptors are primarily associated with drug-like, pharmacological and toxicological features and with those used in the published anticonvulsant QSAR and QSTR models. This study suggested that QSIR is useful for estimating the therapeutic index of drug candidates.
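A hedged sketch of this modeling pipeline (support vector regression, recursive feature elimination, and a parameter search) with scikit-learn; placeholder data stand in for the anticonvulsant descriptors, and grid search stands in for the paper's greedy search.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.feature_selection import RFE
from sklearn.model_selection import GridSearchCV, cross_val_score

# Placeholder descriptor matrix (compounds x descriptors) and log PI values
rng = np.random.default_rng(0)
X, y = rng.normal(size=(80, 50)), rng.normal(size=80)

# Recursive feature elimination; a linear kernel exposes the weights RFE needs
selector = RFE(SVR(kernel="linear"), n_features_to_select=10).fit(X, y)
X_sel = selector.transform(X)

# Exhaustive grid search here, standing in for the paper's greedy search
grid = GridSearchCV(SVR(), {"C": [0.1, 1, 10], "epsilon": [0.01, 0.1]}, cv=5)
grid.fit(X_sel, y)
print(grid.best_params_, cross_val_score(grid.best_estimator_, X_sel, y, cv=5).mean())
```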
Advances and Computational Tools towards Predictable Design in Biological Engineering
2014-01-01
The design process of complex systems in all fields of engineering requires a set of quantitatively characterized components and a method to predict the output of systems composed of such elements. This strategy relies on the modularity of the components used, or on predicting their context-dependent behaviour when a part's function depends on its specific context. Mathematical models usually support the whole process by guiding the selection of parts and by predicting the output of interconnected systems. Such a bottom-up design process cannot be trivially adopted for biological systems engineering, since part function is hard to predict when components are reused in different contexts. This issue and the intrinsic complexity of living systems limit the capability of synthetic biologists to predict the quantitative behaviour of biological systems. The high potential of synthetic biology strongly depends on the capability of mastering this issue. This review discusses the predictability issues of basic biological parts (promoters, ribosome binding sites, coding sequences, transcriptional terminators, and plasmids) when used to engineer simple and complex gene expression systems in Escherichia coli. A comparison between bottom-up and trial-and-error approaches is performed for all the discussed elements, and mathematical models supporting the prediction of part behaviour are illustrated. PMID:25161694
NASA Astrophysics Data System (ADS)
Wang, Yujie; Pan, Rui; Liu, Chang; Chen, Zonghai; Ling, Qiang
2018-01-01
The battery power capability is intimately correlated with the climbing, braking and accelerating performance of electric vehicles. Accurate power capability prediction can not only guarantee safety but also regulate driving behavior and optimize battery energy usage. However, the nonlinearity of the battery model is very complex, especially for lithium iron phosphate batteries. Besides, the hysteresis loop in the open-circuit voltage curve can easily cause large errors in model prediction. In this work, a multi-parameter constraints dynamic estimation method is proposed to predict the battery continuous period power capability. A high-fidelity battery model which considers battery polarization and hysteresis phenomena is presented to approximate the high nonlinearity of the lithium iron phosphate battery. Explicit analyses of power capability with multiple constraints are elaborated; specifically, the state-of-energy is considered in power capability assessment. Furthermore, to solve the problem of nonlinear system state estimation and to suppress noise interference, a UKF-based state observer is employed for power capability prediction. The performance of the proposed methodology is demonstrated by experiments under different dynamic characterization schedules. The charge and discharge power capabilities of the lithium iron phosphate batteries are quantitatively assessed under different time scales and temperatures.
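To make the multi-constraint idea concrete, here is a minimal sketch of peak discharge power as the minimum over voltage-, current- and SOC-limited currents for a bare internal-resistance model; the parameter values are invented, and the paper's hysteresis model and UKF observer are not reproduced.

```python
def peak_discharge_power(ocv, r_int, v_min, i_max, dt, soc, soc_min, capacity_ah):
    """Peak discharge power over a horizon dt, limited by terminal voltage,
    maximum current and remaining state of charge (bare internal-resistance
    model; all inputs are assumed known)."""
    i_volt = (ocv - v_min) / r_int                     # voltage-limited current
    i_soc = (soc - soc_min) * capacity_ah * 3600 / dt  # SOC-limited current
    i_lim = min(i_volt, i_max, i_soc)
    return v_min * i_lim                               # power at limiting current

# Invented cell parameters
print(peak_discharge_power(ocv=3.3, r_int=0.002, v_min=2.5, i_max=300,
                           dt=10, soc=0.5, soc_min=0.1, capacity_ah=20))
```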
NASA Astrophysics Data System (ADS)
Badgett, Majors J.; Boyes, Barry; Orlando, Ron
2017-05-01
Peptides with deamidated asparagine residues and oxidized methionine residues are often not resolved sufficiently to allow quantitation of their native and modified forms using reversed phase (RP) chromatography. The accurate quantitation of these modifications is vital in protein biotherapeutic analysis because they can affect a protein's function, activity, and stability. We demonstrate here that hydrophilic interaction liquid chromatography (HILIC) adequately and predictably separates peptides with these modifications from their native counterparts. Furthermore, coefficients describing the extent of the hydrophilicity of these modifications have been derived and were incorporated into a previously made peptide retention prediction model that is capable of predicting the retention times of peptides with and without these modifications.
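A sketch of how such coefficients enter an additive retention prediction model: each residue contributes a hydrophilicity term, and each modification adds its derived shift. All coefficient values here are invented placeholders, not the published ones.

```python
# Invented hydrophilicity coefficients (placeholders, not the published values)
residue_coeff = {"G": 0.2, "A": 0.1, "S": 0.5, "N": 0.7, "M": -0.1, "K": 0.9}
mod_coeff = {"deamidation": 0.35, "oxidation": 0.55}

def predict_hilic_rt(sequence, mods=(), intercept=2.0):
    """Additive HILIC retention model: sum residue contributions, then add
    a shift for each deamidation/oxidation event."""
    rt = intercept + sum(residue_coeff[aa] for aa in sequence)
    return rt + sum(mod_coeff[m] for m in mods)

print(predict_hilic_rt("GNASMK"))                       # native peptide
print(predict_hilic_rt("GNASMK", mods=("oxidation",)))  # Met-oxidized form
```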
Thermally induced oscillations in fluid flow
NASA Technical Reports Server (NTRS)
Zuber, N.
1970-01-01
Theoretical investigation distinguishes the various mechanisms responsible for oscillations of pressure, temperature, and flow velocity, derives a quantitative description of the most troublesome mechanisms, and develops a capability to predict the occurrence of unstable flow.
Norinder, U; Högberg, T
1992-04-01
The advantageous approach of using an experimentally designed training set as the basis for establishing a quantitative structure-activity relationship with good predictive capability is described. The training set was selected from a fractional factorial design scheme based on a principal component description of physico-chemical parameters of aromatic substituents. The derived model successfully predicts the activities of additional substituted benzamides of 6-methoxy-N-(4-piperidyl)salicylamide type. The major influence on activity of the 3-substituent is demonstrated.
Takenaka, Daisuke; Ohno, Yoshiharu; Koyama, Hisanobu; Nogami, Munenobu; Onishi, Yumiko; Matsumoto, Keiko; Yoshikawa, Takeshi; Matsumoto, Sumiaki; Sugimura, Kazuro
2010-06-01
To directly compare the capabilities of perfusion scan, SPECT, co-registered SPECT/CT, and quantitatively and qualitatively assessed MDCT (i.e. quantitative CT and qualitative CT) for predicting postoperative clinical outcome for lung volume reduction surgery (LVRS) candidates. Twenty-five consecutive candidates (19 men and six women, age range: 42-72 years) for LVRS underwent preoperative CT and perfusion scan with SPECT. Clinical outcome of LVRS for all subjects was also assessed by determining the difference between pre- and postoperative forced expiratory volume in 1s (FEV(1)) and 6-min walking distance (6MWD). All SPECT examinations were performed on a SPECT scanner, and co-registered to thin-section CT by using commercially available software. On planar imaging, SPECT and SPECT/CT, upper versus lower zone or lobe ratios (U/Ls) were calculated from regional uptakes between upper and lower lung fields in the operated lung. On quantitatively assessed CT, U/L for all subjects was assessed from regional functional lung volumes. On qualitatively assessed CT, planar imaging, SPECT and co-registered SPECT/CT, U/Ls were assessed with a 4-point visual scoring system. To compare capabilities of predicting clinical outcome, each U/L was statistically correlated with the corresponding clinical outcome. Significantly fair or moderate correlations were observed between quantitatively and qualitatively assessed U/Ls obtained with all four methods and clinical outcomes (-0.60
DOE R&D Accomplishments Database
Phelps, M. E.; Hoffman, E. J.; Huang, S. C.; Schelbert, H. R.; Kuhl, D. E.
1978-01-01
Emission computed tomography can provide a quantitative in vivo measurement of regional tissue radionuclide tracer concentrations. This facility, when combined with physiologic models and radioactively labeled physiologic tracers that behave in a predictable manner, allows measurement of a wide variety of physiologic variables. This integrated technique has been referred to as Physiologic Tomography (PT). PT requires labeled compounds which trace physiologic processes in a known and predictable manner, and physiologic models which are appropriately formulated and validated to derive physiologic variables from ECT data. In order to effectively achieve this goal, PT requires an ECT system that is capable of performing truly quantitative or analytical measurements of tissue tracer concentrations and which has been well characterized in terms of spatial resolution, sensitivity and signal-to-noise ratios in the tomographic image. This paper illustrates the capabilities of emission computed tomography and provides examples of physiologic tomography for the regional measurement of cerebral and myocardial metabolic rate for glucose, regional measurement of cerebral blood volume, gated cardiac blood pools and capillary perfusion in brain and heart. Studies on patients with stroke and myocardial ischemia are also presented.
Roff, Derek A; Fairbairn, Daphne J
2007-01-01
Predicting evolutionary change is the central goal of evolutionary biology because it is the primary means by which we can test evolutionary hypotheses. In this article, we analyze the pattern of evolutionary change in a laboratory population of the wing-dimorphic sand cricket Gryllus firmus resulting from relaxation of selection favoring the migratory (long-winged) morph. Based on a well-characterized trade-off between fecundity and flight capability, we predict that evolution in the laboratory environment should result in a reduction in the proportion of long-winged morphs. We also predict increased fecundity and reduced functionality and weight of the major flight muscles in long-winged females but little change in short-winged (flightless) females. Based on quantitative genetic theory, we predict that the regression equation describing the trade-off between ovary weight and weight of the major flight muscles will show a change in its intercept but not in its slope. Comparisons across generations verify all of these predictions. Further, using values of genetic parameters estimated from previous studies, we show that a quantitative genetic simulation model can account for not only the qualitative changes but also the evolutionary trajectory. These results demonstrate the power of combining quantitative genetic and physiological approaches for understanding the evolution of complex traits.
Foot-and-mouth disease virus during the incubation period in pigs
USDA-ARS?s Scientific Manuscript database
Understanding the quantitative characteristics of a pathogen’s capability to transmit during distinct phases of infection is important to enable accurate predictions of the spread and impact of a disease outbreak. In the current investigation, the potential for transmission of foot-and-mouth disease...
NASA Technical Reports Server (NTRS)
Lyle, Karen H.
2008-01-01
The Space Shuttle Columbia Accident Investigation Board recommended that NASA develop, validate, and maintain a modeling tool capable of predicting the damage threshold for debris impacts on the Space Shuttle Reinforced Carbon-Carbon (RCC) wing leading edge and nosecap assembly. The results presented in this paper are one part of a multi-level approach that supported the development of the predictive tool used to recertify the shuttle for flight following the Columbia Accident. The assessment of predictive capability was largely based on test analysis comparisons for simpler component structures. This paper provides comparisons of finite element simulations with test data for external tank foam debris impacts onto 6-in. square RCC flat panels. Both quantitative displacement and qualitative damage assessment correlations are provided. The comparisons show good agreement and provided the Space Shuttle Program with confidence in the predictive tool.
NASA Astrophysics Data System (ADS)
Wang, Yujie; Zhang, Xu; Liu, Chang; Pan, Rui; Chen, Zonghai
2018-06-01
The power capability and maximum charge and discharge energy are key indicators for energy management systems, which can help the energy storage devices work in a suitable area and prevent them from over-charging and over-discharging. In this work, a model based power and energy assessment approach is proposed for the lithium-ion battery and supercapacitor hybrid system. The model framework of the lithium-ion battery and supercapacitor hybrid system is developed based on the equivalent circuit model, and the model parameters are identified by regression method. Explicit analyses of the power capability and maximum charge and discharge energy prediction with multiple constraints are elaborated. Subsequently, the extended Kalman filter is employed for on-board power capability and maximum charge and discharge energy prediction to overcome estimation error caused by system disturbance and sensor noise. The charge and discharge power capability, and the maximum charge and discharge energy are quantitatively assessed under both the dynamic stress test and the urban dynamometer driving schedule. The maximum charge and discharge energy prediction of the lithium-ion battery and supercapacitor hybrid system with different time scales are explored and discussed.
Machine Learning Meta-analysis of Large Metagenomic Datasets: Tools and Biological Insights.
Pasolli, Edoardo; Truong, Duy Tin; Malik, Faizan; Waldron, Levi; Segata, Nicola
2016-07-01
Shotgun metagenomic analysis of the human associated microbiome provides a rich set of microbial features for prediction and biomarker discovery in the context of human diseases and health conditions. However, the use of such high-resolution microbial features presents new challenges, and validated computational tools for learning tasks are lacking. Moreover, classification rules have scarcely been validated in independent studies, posing questions about the generality and generalization of disease-predictive models across cohorts. In this paper, we comprehensively assess approaches to metagenomics-based prediction tasks and for quantitative assessment of the strength of potential microbiome-phenotype associations. We develop a computational framework for prediction tasks using quantitative microbiome profiles, including species-level relative abundances and presence of strain-specific markers. A comprehensive meta-analysis, with particular emphasis on generalization across cohorts, was performed in a collection of 2424 publicly available metagenomic samples from eight large-scale studies. Cross-validation revealed good disease-prediction capabilities, which were in general improved by feature selection and use of strain-specific markers instead of species-level taxonomic abundance. In cross-study analysis, models transferred between studies were in some cases less accurate than models tested by within-study cross-validation. Interestingly, the addition of healthy (control) samples from other studies to training sets improved disease prediction capabilities. Some microbial species (most notably Streptococcus anginosus) seem to characterize general dysbiotic states of the microbiome rather than connections with a specific disease. Our results in modelling features of the "healthy" microbiome can be considered a first step toward defining general microbial dysbiosis. The software framework, microbiome profiles, and metadata for thousands of samples are publicly available at http://segatalab.cibio.unitn.it/tools/metaml.
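The cross-study analysis corresponds to a leave-one-dataset-out protocol, sketched below with scikit-learn; a random forest and random placeholder data stand in for the paper's framework and the 2424 real profiles.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Placeholder data: relative-abundance profiles, disease labels, study of origin
rng = np.random.default_rng(1)
X = rng.dirichlet(np.ones(200), size=300)   # compositional microbiome profiles
y = rng.integers(0, 2, size=300)            # healthy / diseased
groups = rng.integers(0, 8, size=300)       # eight cohorts

# Train on seven studies, test on the held-out one, for every study in turn
scores = cross_val_score(RandomForestClassifier(n_estimators=500),
                         X, y, groups=groups, cv=LeaveOneGroupOut(),
                         scoring="roc_auc")
print(scores.mean())
```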
NASA Astrophysics Data System (ADS)
Ragno, Rino; Ballante, Flavio; Pirolli, Adele; Wickersham, Richard B.; Patsilinakos, Alexandros; Hesse, Stéphanie; Perspicace, Enrico; Kirsch, Gilbert
2015-08-01
Vascular endothelial growth factor receptor-2 (VEGFR-2) is a key element in angiogenesis, the process by which new blood vessels are formed, and is thus an important pharmaceutical target. Here, 3-D quantitative structure-activity relationship (3-D QSAR) models were used to build a quantitative screening and pharmacophore model of the VEGFR-2 receptor for the design of inhibitors with improved activities. Most of the available experimental data was used as the training set to derive eight optimized and fully cross-validated mono-probe models and one multi-probe quantitative model. Notable is the use of 262 molecules, aligned following both structure-based and ligand-based protocols, as an external test set confirming the 3-D QSAR models' predictive capability and their usefulness in designing new VEGFR-2 inhibitors. From a survey of the literature, this is the first wide-ranging computational medicinal chemistry application to VEGFR-2 inhibitors.
The human adrenocortical carcinoma cell line H295R is being used as an in vitro steroidogenesis screening assay to assess the impact of endocrine active chemicals (EACs) capable of altering steroid biosynthesis. To enhance the interpretation and quantitative application of measur...
NASA Astrophysics Data System (ADS)
Hirst, Jonathan D.; King, Ross D.; Sternberg, Michael J. E.
1994-08-01
One of the largest available data sets for developing a quantitative structure-activity relationship (QSAR) — the inhibition of dihydrofolate reductase (DHFR) by 2,4-diamino-6,6-dimethyl-5-phenyl-dihydrotriazine derivatives — has been used for a sixfold cross-validation trial of neural networks, inductive logic programming (ILP) and linear regression. No statistically significant difference was found between the predictive capabilities of the methods. However, the representation of molecules by attributes, which is integral to the ILP approach, provides understandable rules about drug-receptor interactions.
Ohno, Yoshiharu; Koyama, Hisanobu; Nogami, Munenobu; Takenaka, Daisuke; Onishi, Yumiko; Matsumoto, Keiko; Matsumoto, Sumiaki; Maniwa, Yoshimasa; Yoshimura, Masahiro; Nishimura, Yoshihiro; Sugimura, Kazuro
2011-01-01
The purpose of this study was to compare the predictive capabilities for postoperative lung function in non-small cell lung cancer (NSCLC) patients of state-of-the-art radiological methods, including perfusion MRI, quantitative CT and SPECT/CT, with those of the anatomical method (i.e. qualitative CT) and traditional nuclear medicine methods such as planar imaging and SPECT. Perfusion MRI, CT, nuclear medicine study and measurements of %FEV(1) before and after lung resection were performed for 229 NSCLC patients (125 men and 104 women). For perfusion MRI, postoperative %FEV(1) (po%FEV(1)) was predicted from semi-quantitatively assessed blood volumes within total and resected lungs, for quantitative CT, it was predicted from the functional lung volumes within total and resected lungs, for qualitative CT, from the number of segments of total and resected lungs, and for nuclear medicine studies, from uptakes within total and resected lungs. All SPECTs were automatically co-registered with CTs for preparation of SPECT/CTs. Predicted po%FEV(1)s were then correlated with actual po%FEV(1)s, which were measured %FEV(1)s after operation. The limits of agreement were also evaluated. All predicted po%FEV(1)s showed good correlation with actual po%FEV(1)s (0.83≤r≤0.88, p<0.0001). Perfusion MRI, quantitative CT and SPECT/CT demonstrated better correlation than other methods. The limits of agreement of perfusion MRI (4.4±14.2%), quantitative CT (4.7±14.2%) and SPECT/CT (5.1±14.7%) were less than those of qualitative CT (6.0±17.4%), planar imaging (5.8±18.2%), and SPECT (5.5±16.8%). State-of-the-art radiological methods can predict postoperative lung function in NSCLC patients more accurately than traditional methods.
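The prediction step common to all of these methods reduces to a simple resection formula: scale the preoperative %FEV1 by the fraction of function that remains. A minimal sketch, with invented example numbers:

```python
def predicted_po_fev1(pre_fev1_pct, resected_contrib, total_contrib):
    """Scale preoperative %FEV1 by the fraction of function that remains;
    the contributions may be segment counts, functional lung volumes,
    perfusion uptakes or MR blood volumes, depending on the modality."""
    return pre_fev1_pct * (1.0 - resected_contrib / total_contrib)

# Invented example: resecting 3 of 19 segments (qualitative CT)
print(predicted_po_fev1(85.0, resected_contrib=3, total_contrib=19))  # ~71.6
```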
LEWICE 2.2 Capabilities and Thermal Validation
NASA Technical Reports Server (NTRS)
Wright, William B.
2002-01-01
Computational models of bleed air anti-icing and electrothermal de-icing have been added to the LEWICE 2.0 software by integrating the capabilities of two previous programs, ANTICE and LEWICE/Thermal. This combined model has been released as LEWICE version 2.2. Several advancements have also been added to the previous capabilities of each module. This report will present the capabilities of the software package and provide results for both bleed air and electrothermal cases. A comprehensive validation effort has also been performed to compare the predictions to an existing electrothermal database. A quantitative comparison shows that for de-icing cases the average difference is 9.4 F (26%), compared to 3 F for the experimental data, while for evaporative cases the average difference is 2 F (32%), compared to an experimental error of 4 F.
Heritage, Brody; Gilbert, Jessica M.; Roberts, Lynne D.
2016-01-01
Job embeddedness is a construct that describes the manner in which employees can be enmeshed in their jobs, reducing their turnover intentions. Recent questions regarding the properties of quantitative job embeddedness measures, and their predictive utility, have been raised. Our study compared two competing reflective measures of job embeddedness, examining their convergent, criterion, and incremental validity, as a means of addressing these questions. Cross-sectional quantitative data from 246 Australian university employees (146 academic; 100 professional) was gathered. Our findings indicated that the two compared measures of job embeddedness were convergent when total scale scores were examined. Additionally, job embeddedness was capable of demonstrating criterion and incremental validity, predicting unique variance in turnover intention. However, this finding was not readily apparent with one of the compared job embeddedness measures, which demonstrated comparatively weaker evidence of validity. We discuss the theoretical and applied implications of these findings, noting that job embeddedness has a complementary place among established determinants of turnover intention. PMID:27199817
The Potential for Predicting Precipitation on Seasonal-to-Interannual Timescales
NASA Technical Reports Server (NTRS)
Koster, R. D.
1999-01-01
The ability to predict precipitation several months in advance would have a significant impact on water resource management. This talk provides an overview of a project aimed at developing this prediction capability. NASA's Seasonal-to-Interannual Prediction Project (NSIPP) will generate seasonal-to-interannual sea surface temperature predictions through detailed ocean circulation modeling and will then translate these SST forecasts into forecasts of continental precipitation through the application of an atmospheric general circulation model and a "SVAT"-type land surface model. As part of the process, ocean variables (e.g., height) and land variables (e.g., soil moisture) will be updated regularly via data assimilation. The overview will include a discussion of the variability inherent in such a modeling system and will provide some quantitative estimates of the absolute upper limits of seasonal-to-interannual precipitation predictability.
Morris, Melody K.; Saez-Rodriguez, Julio; Clarke, David C.; Sorger, Peter K.; Lauffenburger, Douglas A.
2011-01-01
Predictive understanding of cell signaling network operation based on general prior knowledge but consistent with empirical data in a specific environmental context is a current challenge in computational biology. Recent work has demonstrated that Boolean logic can be used to create context-specific network models by training proteomic pathway maps to dedicated biochemical data; however, the Boolean formalism is restricted to characterizing protein species as either fully active or inactive. To advance beyond this limitation, we propose a novel form of fuzzy logic sufficiently flexible to model quantitative data but also sufficiently simple to efficiently construct models by training pathway maps on dedicated experimental measurements. Our new approach, termed constrained fuzzy logic (cFL), converts a prior knowledge network (obtained from literature or interactome databases) into a computable model that describes graded values of protein activation across multiple pathways. We train a cFL-converted network to experimental data describing hepatocytic protein activation by inflammatory cytokines and demonstrate the application of the resultant trained models for three important purposes: (a) generating experimentally testable biological hypotheses concerning pathway crosstalk, (b) establishing capability for quantitative prediction of protein activity, and (c) prediction and understanding of the cytokine release phenotypic response. Our methodology systematically and quantitatively trains a protein pathway map summarizing curated literature to context-specific biochemical data. This process generates a computable model yielding successful prediction of new test data and offering biological insight into complex datasets that are difficult to fully analyze by intuition alone. PMID:21408212
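For readers unfamiliar with the formalism, the sketch below shows graded signal propagation through normalized Hill transfer functions, the transfer-function family used in constrained fuzzy logic, with OR gates as max (AND would be min); the wiring and parameter values are illustrative, not the trained hepatocyte model.

```python
def norm_hill(x, k, n):
    """Normalized Hill transfer function: maps a graded input in [0, 1]
    to a graded output in [0, 1], with norm_hill(1, k, n) == 1."""
    return (x ** n / (k ** n + x ** n)) * (k ** n + 1.0)

# Illustrative two-input node with invented parameters
tnfa, egf = 0.8, 0.3                        # graded cytokine inputs
ikk = norm_hill(tnfa, k=0.5, n=3)           # single incoming edge
erk = max(norm_hill(egf, k=0.4, n=2),       # OR over two pathways
          norm_hill(tnfa, k=0.7, n=2))
print(round(ikk, 3), round(erk, 3))
```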
Validation Process for LEWICE Coupled by Use of a Navier-stokes Solver
NASA Technical Reports Server (NTRS)
Wright, William B.
2016-01-01
A research project is underway at NASA Glenn to produce computer software that can accurately predict ice growth under many meteorological conditions for any aircraft surface. This report will present results from the latest LEWICE release, version 3.5. This program differs from previous releases in its ability to model mixed-phase and ice crystal conditions such as those encountered inside an engine. It also has expanded capability to use structured grids and a new capability to use results from unstructured grid flow solvers. An extensive quantitative comparison against the database of ice shapes generated in the NASA Glenn Icing Research Tunnel (IRT) has also been performed. This paper will show the differences in ice shape between LEWICE 3.5 and experimental data. In addition, comparisons will be made between the lift and drag calculated on the ice shapes from experiment and those produced by LEWICE. This report will also provide a description of both programs. Quantitative geometric comparisons are shown for horn height, horn angle, icing limit, area and leading edge thickness. Quantitative comparisons of calculated lift and drag will also be shown. The results show that the predicted results are within the accuracy limits of the experimental data for the majority of cases.
Nuclear Reactions in Micro/Nano-Scale Metal Particles
NASA Astrophysics Data System (ADS)
Kim, Y. E.
2013-03-01
Low-energy nuclear reactions in micro/nano-scale metal particles are described based on the theory of Bose-Einstein condensation nuclear fusion (BECNF). The BECNF theory is based on a single basic assumption capable of explaining the observed LENR phenomena: deuterons in metals undergo Bose-Einstein condensation. The BECNF theory is also a quantitative, predictive physical theory. Experimental tests of the basic assumption and theoretical predictions are proposed. Potential application to energy generation by ignition at low temperatures is described. A generalized theory of BECNF is used to carry out theoretical analyses of recently reported experimental results for the hydrogen-nickel system.
NASA Astrophysics Data System (ADS)
Poveromo, Scott; Malcolm, Doug; Earthman, James
Conventional nondestructive testing (NDT) techniques used to detect defects in composites are not able to determine intact bond integrity within a composite structure and are costly to use on large and complex shaped surfaces. To overcome current NDT limitations, a new technology was adopted based on quantitative percussion diagnostics (QPD) to better quantify bond quality in fiber reinforced composite materials. Results indicate that this technology is capable of detecting weak ('kiss') bonds between flat composite laminates. Specifically, the local value of the probe force determined from quantitative percussion testing was predicted to be significantly lower for a laminate that contained a 'kiss' bond compared to that for a well-bonded sample, which is in agreement with experimental findings. Experimental results were compared to a finite element analysis (FEA) using MSC PATRAN/NASTRAN to understand the visco-elastic behavior of the laminates during percussion testing. The dynamic FEA models were used to directly predict changes in the probe force, as well as effective stress distributions across the bonded panels as a function of time.
Quantitative Reactivity Scales for Dynamic Covalent and Systems Chemistry.
Zhou, Yuntao; Li, Lijie; Ye, Hebo; Zhang, Ling; You, Lei
2016-01-13
Dynamic covalent chemistry (DCC) has become a powerful tool for the creation of molecular assemblies and complex systems in chemistry and materials science. Herein we developed for the first time quantitative reactivity scales capable of correlation and prediction of the equilibrium of dynamic covalent reactions (DCRs). The reference reactions are based upon universal DCRs between imines, one of the most utilized structural motifs in DCC, and a series of O-, N-, and S- mononucleophiles. Aromatic imines derived from pyridine-2-carboxyaldehyde exhibit capability for controlling the equilibrium through distinct substituent effects. Electron-donating groups (EDGs) stabilize the imine through quinoidal resonance, while electron-withdrawing groups (EWGs) stabilize the adduct by enhancing intramolecular hydrogen bonding, resulting in curvature in Hammett analysis. Notably, unique nonlinearity induced by both EDGs and EWGs emerged in Hammett plot when cyclic secondary amines were used. This is the first time such a behavior is observed in a thermodynamically controlled system, to the best of our knowledge. Unified quantitative reactivity scales were proposed for DCC and defined by the correlation log K = S(N) (R(N) + R(E)). Nucleophilicity parameters (R(N) and S(N)) and electrophilicity parameters (R(E)) were then developed from DCRs discovered. Furthermore, the predictive power of those parameters was verified by successful correlation of other DCRs, validating our reactivity scales as a general and useful tool for the evaluation and modeling of DCRs. The reactivity parameters proposed here should be complementary to well-established kinetics based parameters and find applications in many aspects, such as DCR discovery, bioconjugation, and catalysis.
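The proposed correlation is directly computable. A minimal sketch, with illustrative (not published) parameter values:

```python
def equilibrium_constant(s_n, r_n, r_e):
    """Equilibrium constant of a dynamic covalent reaction from the
    correlation log K = S_N * (R_N + R_E)."""
    return 10.0 ** (s_n * (r_n + r_e))

# Illustrative values: nucleophile (S_N, R_N) and imine electrophile (R_E)
print(equilibrium_constant(s_n=1.0, r_n=2.1, r_e=-0.6))  # K ~ 31.6
```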
NASA Astrophysics Data System (ADS)
Podestà, M.; Gorelenkova, M.; Gorelenkov, N. N.; White, R. B.
2017-09-01
Alfvénic instabilities (AEs) are well known as a potential cause of enhanced fast ion transport in fusion devices. Given a specific plasma scenario, quantitative predictions of (i) expected unstable AE spectrum and (ii) resulting fast ion transport are required to prevent or mitigate the AE-induced degradation in fusion performance. Reduced models are becoming an attractive tool to analyze existing scenarios as well as for scenario prediction in time-dependent simulations. In this work, a neutral beam heated NSTX discharge is used as reference to illustrate the potential of a reduced fast ion transport model, known as kick model, that has been recently implemented for interpretive and predictive analysis within the framework of the time-dependent tokamak transport code TRANSP. Predictive capabilities for AE stability and saturation amplitude are first assessed, based on given thermal plasma profiles only. Predictions are then compared to experimental results, and the interpretive capabilities of the model further discussed. Overall, the reduced model captures the main properties of the instabilities and associated effects on the fast ion population. Additional information from the actual experiment enables further tuning of the model’s parameters to achieve a close match with measurements.
Research study demonstrates computer simulation can predict warpage and assist in its elimination
NASA Astrophysics Data System (ADS)
Glozer, G.; Post, S.; Ishii, K.
1994-10-01
Programs for predicting warpage in injection molded parts are relatively new. Commercial software for simulating the flow and cooling stages of injection molding have steadily gained acceptance; however, warpage software is not yet as readily accepted. This study focused on gaining an understanding of the predictive capabilities of the warpage software. The following aspects of this study were unique. (1) Quantitative results were found using a statistically designed set of experiments. (2) Comparisons between experimental and simulation results were made with parts produced in a well-instrumented and controlled injection molding machine. (3) The experimental parts were accurately measured on a coordinate measuring machine with a non-contact laser probe. (4) The effect of part geometry on warpage was investigated.
Ohno, Yoshiharu; Seki, Shinichiro; Koyama, Hisanobu; Yoshikawa, Takeshi; Matsumoto, Sumiaki; Takenaka, Daisuke; Kassai, Yoshimori; Yui, Masao; Sugimura, Kazuro
2015-08-01
To compare the predictive capabilities of non-contrast-enhanced (CE)- and dynamic CE-perfusion MRIs, thin-section multidetector computed tomography (MDCT), and perfusion scan for postoperative lung function in non-small cell lung cancer (NSCLC) patients. Sixty consecutive pathologically diagnosed NSCLC patients were included and prospectively underwent thin-section MDCT, non-CE- and dynamic CE-perfusion MRIs and perfusion scan, and had their pre- and postoperative forced expiratory volume in one second (FEV1) measured. Postoperative percent FEV1 (po%FEV1) was then predicted from the fractional lung volume determined on semiquantitatively assessed non-CE- and dynamic CE-perfusion MRIs, from the functional lung volumes determined on quantitative CT, from the number of segments observed on qualitative CT, and from uptakes detected on perfusion scans within total and resected lungs. Predicted po%FEV1s were then correlated with actual po%FEV1s, which were %FEV1s measured postoperatively. The limits of agreement were also determined. All predicted po%FEV1s showed significant correlation (0.73 ≤ r ≤ 0.93, P < 0.0001) and limits of agreement with actual po%FEV1 (non-CE-perfusion MRI: 0.3 ± 10.0%, dynamic CE-perfusion MRI: 1.0 ± 10.8%, perfusion scan: 2.2 ± 14.1%, quantitative CT: 1.2 ± 9.0%, qualitative CT: 1.5 ± 10.2%). Non-CE-perfusion MRI may be able to predict postoperative lung function more accurately than qualitatively assessed MDCT and perfusion scan.
Advanced quantitative magnetic nondestructive evaluation methods - Theory and experiment
NASA Technical Reports Server (NTRS)
Barton, J. R.; Kusenberger, F. N.; Beissner, R. E.; Matzkanin, G. A.
1979-01-01
The paper reviews the scale of fatigue crack phenomena in relation to the size detection capabilities of nondestructive evaluation methods. An assessment of several features of fatigue in relation to the inspection of ball and roller bearings suggested the use of magnetic methods; magnetic domain phenomena, including the interaction of domains and inclusions and the influence of stress and magnetic field on domains, are discussed. Experimental results indicate that simplified calculations can be used to predict many features of these results; predictions from analytic models based on finite element computer analysis do not agree with respect to certain features. Experimental analyses obtained on rod-type fatigue specimens, which relate magnetic measurements to crack opening displacement, crack volume and crack depth, should provide methods for improved crack characterization in relation to fracture mechanics and life prediction.
The HYTHIRM Project: Flight Thermography of the Space Shuttle During the Hypersonic Re-entry
NASA Technical Reports Server (NTRS)
Horvath, Thomas J.; Tomek, Deborah M.; Berger, Karen T.; Zalameda, Joseph N.; Splinter, Scott C.; Krasa, Paul W.; Schwartz, Richard J.; Gibson, David M.; Tietjen, Alan B.; Tack, Steve
2010-01-01
This report describes a NASA Langley led endeavor sponsored by the NASA Engineering Safety Center, the Space Shuttle Program Office and the NASA Aeronautics Research Mission Directorate to demonstrate a quantitative thermal imaging capability. A background and an overview of several multidisciplinary efforts that culminated in the acquisition of high resolution calibrated infrared imagery of the Space Shuttle during hypervelocity atmospheric entry is presented. The successful collection of thermal data has demonstrated the feasibility of obtaining remote high-resolution infrared imagery during hypersonic flight for the accurate measurement of surface temperature. To maximize science and engineering return, the acquisition of quantitative thermal imagery and capability demonstration was targeted towards three recent Shuttle flights - two of which involved flight experiments flown on Discovery. In coordination with these two Shuttle flight experiments, a US Navy NP-3D aircraft was flown between 26-41 nautical miles below Discovery and remotely monitored surface temperature of the Orbiter at Mach 8.4 (STS-119) and Mach 14.7 (STS-128) using a long-range infrared optical package referred to as Cast Glance. This same Navy aircraft successfully monitored the Orbiter Atlantis traveling at approximately Mach 14.3 during its return from the successful Hubble repair mission (STS-125). The purpose of this paper is to describe the systematic approach used by the Hypersonic Thermodynamic Infrared Measurements team to develop and implement a set of mission planning tools designed to establish confidence in the ability of an imaging platform to reliably acquire, track and return global quantitative surface temperatures of the Shuttle during entry. The mission planning tools included a pre-flight capability to predict the infrared signature of the Shuttle. Such tools permitted optimization of the hardware configuration to increase signal-to-noise and to maximize the available dynamic range while mitigating the potential for saturation. Post flight, analysis tools were used to assess atmospheric effects and to convert the 2-D intensity images to 3-D temperature maps of the windward surface. Comparison of the spatially resolved global thermal measurements to surface thermocouples and CFD prediction is made. Successful demonstration of a quantitative, spatially resolved, global temperature measurement on the Shuttle suggests future applications towards hypersonic flight test programs within NASA, DoD and DARPA along with flight test opportunities supporting NASA's project Constellation.
Global, quantitative and dynamic mapping of protein subcellular localization.
Itzhak, Daniel N; Tyanova, Stefka; Cox, Jürgen; Borner, Georg Hh
2016-06-09
Subcellular localization critically influences protein function, and cells control protein localization to regulate biological processes. We have developed and applied Dynamic Organellar Maps, a proteomic method that allows global mapping of protein translocation events. We initially used maps statically to generate a database with localization and absolute copy number information for over 8700 proteins from HeLa cells, approaching comprehensive coverage. All major organelles were resolved, with exceptional prediction accuracy (estimated at >92%). Combining spatial and abundance information yielded an unprecedented quantitative view of HeLa cell anatomy and organellar composition, at the protein level. We subsequently demonstrated the dynamic capabilities of the approach by capturing translocation events following EGF stimulation, which we integrated into a quantitative model. Dynamic Organellar Maps enable the proteome-wide analysis of physiological protein movements, without requiring any reagents specific to the investigated process, and will thus be widely applicable in cell biology.
3D Printed Organ Models with Physical Properties of Tissue and Integrated Sensors.
Qiu, Kaiyan; Zhao, Zichen; Haghiashtiani, Ghazaleh; Guo, Shuang-Zhuang; He, Mingyu; Su, Ruitao; Zhu, Zhijie; Bhuiyan, Didarul B; Murugan, Paari; Meng, Fanben; Park, Sung Hyun; Chu, Chih-Chang; Ogle, Brenda M; Saltzman, Daniel A; Konety, Badrinath R; Sweet, Robert M; McAlpine, Michael C
2018-03-01
The design and development of novel methodologies and customized materials to fabricate patient-specific 3D printed organ models with integrated sensing capabilities could yield advances in smart surgical aids for preoperative planning and rehearsal. Here, we demonstrate 3D printed prostate models with physical properties of tissue and integrated soft electronic sensors using custom-formulated polymeric inks. The models show high quantitative fidelity in static and dynamic mechanical properties, optical characteristics, and anatomical geometries to patient tissues and organs. The models offer tissue-mimicking tactile sensation and behavior and thus can be used for the prediction of organ physical behavior under deformation. The prediction results show good agreement with values obtained from simulations. The models also allow the application of surgical and diagnostic tools to their surface and inner channels. Finally, via the conformal integration of 3D printed soft electronic sensors, pressure applied to the models with surgical tools can be quantitatively measured.
Sánchez-López, E; Sánchez-Rodríguez, M I; Marinas, A; Marinas, J M; Urbano, F J; Caridad, J M; Moalem, M
2016-08-15
Authentication of extra virgin olive oil (EVOO) is an important topic for the olive oil industry. Fraudulent practices in this sector are a major problem affecting both producers and consumers. This study analyzes the capability of FT-Raman combined with chemometric treatments to predict fatty acid contents (quantitative information), using gas chromatography as the reference technique, and to classify diverse EVOOs as a function of harvest year, olive variety, geographical origin and Andalusian PDO (qualitative information). The optimal number of PLS components that summarizes the spectral information was introduced progressively. For the estimation of the fatty acid composition, the lowest error (both in fitting and prediction) corresponded to MUFA, followed by SAFA and PUFA, though such errors were close to zero in all cases. As regards the qualitative variables, discriminant analysis allowed a correct classification of 94.3%, 84.0%, 89.0% and 86.6% of samples for harvest year, olive variety, geographical origin and PDO, respectively.
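A hedged sketch of this two-branch chemometric workflow (PLS regression for fatty acid composition, discriminant analysis on the PLS scores for classification) using scikit-learn; the spectra, compositions and labels below are random placeholders for the EVOO data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Random placeholders: FT-Raman spectra, GC fatty acid composition, labels
rng = np.random.default_rng(2)
spectra = rng.normal(size=(120, 800))
fatty_acids = rng.normal(size=(120, 3))       # SAFA, MUFA, PUFA
harvest_year = rng.integers(0, 4, size=120)

# Quantitative branch: PLS maps spectra to the GC reference values
pls = PLSRegression(n_components=10).fit(spectra, fatty_acids)

# Qualitative branch: discriminant analysis on the PLS score space
scores = pls.transform(spectra)
lda = LinearDiscriminantAnalysis().fit(scores, harvest_year)
print(lda.score(scores, harvest_year))
```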
NASA Technical Reports Server (NTRS)
Schmidt, R. C.; Patankar, S. V.
1991-01-01
The capability of two k-epsilon low-Reynolds number (LRN) turbulence models, those of Jones and Launder (1972) and Lam and Bremhorst (1981), to predict transition in external boundary-layer flows subject to free-stream turbulence is analyzed. Both models correctly predict the basic qualitative aspects of boundary-layer transition with free stream turbulence, but for calculations started at low values of certain defined Reynolds numbers, the transition is generally predicted at unrealistically early locations. Also, the methods predict transition lengths significantly shorter than those found experimentally. An approach to overcoming these deficiencies without abandoning the basic LRN k-epsilon framework is developed. This approach limits the production term in the turbulent kinetic energy equation and is based on a simple stability criterion. It is correlated to the free-stream turbulence value. The modification is shown to improve the qualitative and quantitative characteristics of the transition predictions.
Novel risk predictor for thrombus deposition in abdominal aortic aneurysms
NASA Astrophysics Data System (ADS)
Nestola, M. G. C.; Gizzi, A.; Cherubini, C.; Filippi, S.; Succi, S.
2015-10-01
The identification of the basic mechanisms responsible for cardiovascular diseases stands as one of the most challenging problems in modern medical research, involving mechanisms that span a broad spectrum of space and time scales. Major implications for clinical practice and pre-emptive medicine hinge on the onset and development of intraluminal thrombus, for which effective clinical therapies require synthetic risk predictors/indicators capable of informing real-time decision-making protocols. In the present contribution, two novel hemodynamic synthetic indicators, based on a three-band decomposition (TBD) of the shear stress signal, are introduced. Extensive fluid-structure computer simulations of patient-specific scenarios confirm the enhanced risk-prediction capabilities of the TBD indicators. In particular, they permit a quantitative and accurate localization of the most likely thrombus deposition in realistic aortic geometries, where previous indicators would predict healthy operation. The proposed methodology is also shown to provide additional information and discrimination criteria on other factors of major clinical relevance, such as the size of the aneurysm.
Padroni, Marina; Bernardoni, Andrea; Tamborino, Carmine; Roversi, Gloria; Borrelli, Massimo; Saletti, Andrea; De Vito, Alessandro; Azzini, Cristiano; Borgatti, Luca; Marcello, Onofrio; d'Esterre, Christopher; Ceruti, Stefano; Casetta, Ilaria; Lee, Ting-Yim; Fainardi, Enrico
2016-01-01
The capability of CT perfusion (CTP) Alberta Stroke Program Early CT Score (ASPECTS) to predict outcome and identify ischemia severity in acute ischemic stroke (AIS) patients is still questioned. 62 patients with AIS were imaged within 8 hours of symptom onset by non-contrast CT, CT angiography and CTP scans at admission and 24 hours. CTP ASPECTS was calculated on the affected hemisphere using cerebral blood flow (CBF), cerebral blood volume (CBV) and mean transit time (MTT) maps by subtracting 1 point for any abnormalities visually detected or measured within multiple cortical circular regions of interest according to previously established thresholds. MTT-CBV ASPECTS was considered as CTP ASPECTS mismatch. Hemorrhagic transformation (HT), recanalization status and reperfusion grade at 24 hours, final infarct volume at 7 days and modified Rankin scale (mRS) at 3 months after onset were recorded. Semi-quantitative and quantitative CTP ASPECTS were highly correlated (p<0.00001). CBF, CBV and MTT ASPECTS were higher in patients with no HT and mRS ≤ 2 and inversely associated with final infarct volume and mRS (p values: from p<0.05 to p<0.00001). CTP ASPECTS mismatch was slightly associated with radiological and clinical outcomes (p values: from p<0.05 to p<0.02) only if evaluated quantitatively. A CBV ASPECTS of 9 was the optimal semi-quantitative value for predicting outcome. Our findings suggest that visual inspection of CTP ASPECTS recognizes infarct and ischemic absolute values. Semi-quantitative CBV ASPECTS, but not CTP ASPECTS mismatch, represents a strong prognostic indicator, implying that core extent is the main determinant of outcome, irrespective of penumbra size.
Padroni, Marina; Bernardoni, Andrea; Tamborino, Carmine; Roversi, Gloria; Borrelli, Massimo; Saletti, Andrea; De Vito, Alessandro; Azzini, Cristiano; Borgatti, Luca; Marcello, Onofrio; d’Esterre, Christopher; Ceruti, Stefano; Casetta, Ilaria; Lee, Ting-Yim; Fainardi, Enrico
2016-01-01
Introduction The capability of CT perfusion (CTP) Alberta Stroke Program Early CT Score (ASPECTS) to predict outcome and identify ischemia severity in acute ischemic stroke (AIS) patients is still questioned. Methods 62 patients with AIS were imaged within 8 hours of symptom onset by non-contrast CT, CT angiography and CTP scans at admission and 24 hours. CTP ASPECTS was calculated on the affected hemisphere using cerebral blood flow (CBF), cerebral blood volume (CBV) and mean transit time (MTT) maps by subtracting 1 point for any abnormalities visually detected or measured within multiple cortical circular regions of interest according to previously established thresholds. MTT-CBV ASPECTS was considered as CTP ASPECTS mismatch. Hemorrhagic transformation (HT), recanalization status and reperfusion grade at 24 hours, final infarct volume at 7 days and modified Rankin scale (mRS) at 3 months after onset were recorded. Results Semi-quantitative and quantitative CTP ASPECTS were highly correlated (p<0.00001). CBF, CBV and MTT ASPECTS were higher in patients with no HT and mRS≤2 and inversely associated with final infarct volume and mRS (p values: from p<0.05 to p<0.00001). CTP ASPECTS mismatch was slightly associated with radiological and clinical outcomes (p values: from p<0.05 to p<0.02) only if evaluated quantitatively. A CBV ASPECTS of 9 was the optimal semi-quantitative value for predicting outcome. Conclusions Our findings suggest that visual inspection of CTP ASPECTS recognizes infarct and ischemic absolute values. Semi-quantitative CBV ASPECTS, but not CTP ASPECTS mismatch, represents a strong prognostic indicator, implying that core extent is the main determinant of outcome, irrespective of penumbra size. PMID:26824672
Saavedra, Laura M; Romanelli, Gustavo P; Rozo, Ciro E; Duchowicz, Pablo R
2018-01-01
The insecticidal activity of a series of 62 plant-derived molecules against the chikungunya, dengue, and Zika vector, the Aedes aegypti (Diptera: Culicidae) mosquito, is subjected to a Quantitative Structure-Activity Relationship (QSAR) analysis. The Replacement Method (RM) variable subset selection technique based on Multivariable Linear Regression (MLR) proves to be successful for exploring 4885 molecular descriptors calculated with Dragon 6. The predictive capability of the obtained models is confirmed through an external test set of compounds, Leave-One-Out (LOO) cross-validation and Y-Randomization. The present study constitutes a first necessary computational step for designing less toxic insecticides. Copyright © 2017 Elsevier B.V. All rights reserved.
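A compact Python sketch of a replacement-method-style descriptor search, simplified relative to the published RM algorithm (the random initialization and stopping rule here are assumptions):

    import numpy as np

    def rmse_of_subset(X, y, idx):
        # Ordinary least squares on the chosen descriptor columns.
        A = np.column_stack([np.ones(len(y)), X[:, idx]])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return np.sqrt(np.mean((A @ coef - y) ** 2))

    def replacement_search(X, y, d=4, seed=0):
        """Iteratively replace one descriptor at a time whenever the swap
        lowers the training RMSE; stop when no replacement improves."""
        rng = np.random.default_rng(seed)
        idx = list(rng.choice(X.shape[1], size=d, replace=False))
        best = rmse_of_subset(X, y, idx)
        improved = True
        while improved:
            improved = False
            for pos in range(d):
                for j in range(X.shape[1]):
                    if j in idx:
                        continue
                    trial = idx.copy()
                    trial[pos] = j
                    r = rmse_of_subset(X, y, trial)
                    if r < best:
                        idx, best, improved = trial, r, True
        return idx, best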
NASA Technical Reports Server (NTRS)
Husson, N.; Barbe, A.; Brown, L. R.; Carli, B.; Goldman, A.; Pickett, H. M.; Roche, A. E.; Rothman, L. S.; Smith, M. A. H.
1985-01-01
Several aspects of quantitative atmospheric spectroscopy are considered, using a classification of the molecules according to the gas amounts in the stratosphere and upper troposphere, and reviews of quantitative atmospheric high-resolution spectroscopic measurements and field measurement systems are given. Laboratory spectroscopy and spectral analysis and prediction are presented with a summary of current laboratory spectroscopy capabilities. Spectroscopic data requirements for accurate derivation of atmospheric composition are discussed, with examples given for space-based remote sensing experiments of the atmosphere: the ATMOS (Atmospheric Trace Molecule Spectroscopy) and UARS (Upper Atmosphere Research Satellite) experiments. A review of the basic parameters involved in the data compilations, a summary of information on line parameter compilations already in existence, and a summary of current laboratory spectroscopy studies are used to assess the data base.
Applications of Genomic Selection in Breeding Wheat for Rust Resistance.
Ornella, Leonardo; González-Camacho, Juan Manuel; Dreisigacker, Susanne; Crossa, Jose
2017-01-01
Many methods have been developed to predict untested phenotypes in schemes commonly used in genomic selection (GS) breeding. The use of GS for predicting disease resistance has its own particularities: (a) most populations show additivity in quantitative adult plant resistance (APR); (b) resistance needs effective combinations of major and minor genes; and (c) phenotype is commonly expressed as ordinal categorical traits, whereas most parametric applications assume that the response variable is continuous and normally distributed. Machine learning methods (MLM) can take advantage of examples (data) that capture characteristics of interest from an unknown underlying probability distribution (i.e., data-driven). We introduce some state-of-the-art MLM capable of predicting rust resistance in wheat. We also present two parametric R packages so that the reader can compare the approaches.
Hempel, Kristina; Herbst, Florian-Alexander; Moche, Martin; Hecker, Michael; Becher, Dörte
2011-04-01
Staphylococcus aureus is capable of colonizing and infecting humans by means of its arsenal of surface-exposed and secreted proteins. Iron-limited conditions in mammalian body fluids serve as a major environmental signal to bacteria to express virulence determinants. Here we present a comprehensive, gel-free, GeLC-MS/MS-based quantitative proteome profiling of S. aureus under this infection-relevant condition. (14)N(15)N metabolic labeling and three complementary approaches were combined for relative quantitative analyses of surface-associated proteins. The surface-exposed and secreted proteome profiling approaches comprise trypsin shaving, biotinylation, and precipitation of the supernatant. By analysis of the outer subproteomic and cytoplasmic protein fractions, 1210 proteins could be identified, including 221 surface-associated proteins. Thus, access was enabled to 70% of the predicted cell wall-associated proteins, 80% of the predicted sortase substrates, two-thirds of the lipoproteins, and more than 50% of the secreted and cytoplasmic proteins. Under iron deficiency, 158 surface-associated proteins were quantified. Twenty-nine proteins were found in altered amounts, with surface-exposed proteins in particular strongly induced, such as the iron-regulated surface determinant proteins IsdA, IsdB, IsdC and IsdD as well as lipid-anchored iron compound-binding proteins. The work provides a crucial basis for understanding S. aureus pathophysiology through methods that allow quantitative surface proteome profiling.
Halimi-Asl, Aliasghar; Hosseini, Amir Hossein; Nabavizadeh, Pooneh
2014-08-01
Recently, new predictors of vesicoureteral reflux (VUR) in children with a first febrile UTI, such as procalcitonin (PCT), were introduced as selective approaches for cystography. This study assessed the capability of PCT to predict the presence of VUR at the first febrile UTI in children. Patients between 1 month and 15 years of age with febrile UTI were included in this prospective study. PCT values were measured with a semi-quantitative method in four grades comprising values less than 0.5, 0.5-2.0, 2.0-10.0 and above 10.0 ng/ml. The independence of PCT levels in predicting VUR was assessed after adjustment for all potential confounders using a logistic-regression model. A total of 68 patients, 54 (79.4%) girls and 14 (20.6%) boys, were evaluated. PCT level demonstrated a significant difference between patients with positive VUR and those with negative VUR (P=0.012). To identify the independent factors that may predict the presence of VUR, all included variables were adjusted for age and sex. Results of logistic regression showed that a PCT level between 2.0 and 10.0 ng/mL could independently predict the presence of VUR (odds ratio=6.11, CI 95%=1.22-30.77, P=0.03). Our findings showed that readily available semi-quantitative measures of PCT are feasible for detecting patients with VUR. We suggest that, in semi-quantitative measurements of PCT, levels between 2.0 and 10.0 ng/ml could be an independent predictor of positive VUR.
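A minimal Python sketch of this kind of adjusted logistic-regression analysis, using scikit-learn; the column encodings are hypothetical, and the odds ratio per predictor is recovered as exp(coefficient):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical design matrix: columns = [pct_grade, age_months, sex],
    # where pct_grade encodes <0.5, 0.5-2.0, 2.0-10.0, >10.0 ng/ml as 0..3.
    def vur_odds_ratios(X, y):
        """y = 1 if VUR present; returns one odds ratio per predictor."""
        model = LogisticRegression().fit(X, y)
        return np.exp(model.coef_.ravel())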
Estimation of brain network ictogenicity predicts outcome from epilepsy surgery
NASA Astrophysics Data System (ADS)
Goodfellow, M.; Rummel, C.; Abela, E.; Richardson, M. P.; Schindler, K.; Terry, J. R.
2016-07-01
Surgery is a valuable option for pharmacologically intractable epilepsy. However, significant post-operative improvements are not always attained. This is due in part to our incomplete understanding of the seizure generating (ictogenic) capabilities of brain networks. Here we introduce an in silico, model-based framework to study the effects of surgery within ictogenic brain networks. We find that factors conventionally determining the region of tissue to resect, such as the location of focal brain lesions or the presence of epileptiform rhythms, do not necessarily predict the best resection strategy. We validate our framework by analysing electrocorticogram (ECoG) recordings from patients who have undergone epilepsy surgery. We find that when post-operative outcome is good, model predictions for optimal strategies align better with the actual surgery undertaken than when post-operative outcome is poor. Crucially, this allows the prediction of optimal surgical strategies and the provision of quantitative prognoses for patients undergoing epilepsy surgery.
Monitoring Crop Yield in USA Using a Satellite-Based Climate-Variability Impact Index
NASA Technical Reports Server (NTRS)
Zhang, Ping; Anderson, Bruce; Tan, Bin; Barlow, Mathew; Myneni, Ranga
2011-01-01
A quantitative index is applied to monitor crop growth and predict agricultural yield in the continental USA. The Climate-Variability Impact Index (CVII), defined as the monthly contribution to overall anomalies in growth during a given year, is derived from 1-km MODIS Leaf Area Index. The growing-season integrated CVII can provide an estimate of the fractional change in overall growth during a given year. In turn, these estimates can provide fine-scale and aggregated information on yield for various crops. Trained on historical records of crop production, a statistical model is used to estimate crop yield during the growing season based upon the strong positive relationship between crop yield and the CVII. By examining the model prediction as a function of time, it is possible to determine when the in-season predictive capability plateaus and which months provide the greatest predictive capacity.
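One simple reading of the CVII, sketched in Python: standardized monthly LAI anomalies, integrated over the growing season and fed to a linear yield model (the anomaly normalization and the linear form are assumptions for illustration):

    import numpy as np

    def monthly_cvii(lai, clim_mean, clim_std):
        """Standardized monthly LAI anomaly relative to the climatology."""
        return (lai - clim_mean) / clim_std

    def season_cvii(lai_months, clim_mean, clim_std):
        """Growing-season integrated CVII for one pixel or region."""
        return np.sum(monthly_cvii(lai_months, clim_mean, clim_std))

    # Illustrative yield model trained on historical production records:
    # yield_hat = a + b * season_cvii(lai_months, clim_mean, clim_std)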
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cox, James V.; Wellman, Gerald William; Emery, John M.
2011-09-01
Fracture or tearing of ductile metals is a pervasive engineering concern, yet accurate prediction of the critical conditions of fracture remains elusive. Sandia National Laboratories has been developing and implementing several new modeling methodologies to address problems in fracture, including both new physical models and new numerical schemes. The present study provides a double-blind quantitative assessment of several computational capabilities including tearing parameters embedded in a conventional finite element code, localization elements, extended finite elements (XFEM), and peridynamics. For this assessment, each of four teams reported blind predictions for three challenge problems spanning crack initiation and crack propagation. After predictions had been reported, the predictions were compared to experimentally observed behavior. The metal alloys for these three problems were aluminum alloy 2024-T3 and precipitation hardened stainless steel PH13-8Mo H950. The predictive accuracies of the various methods are demonstrated, and the potential sources of error are discussed.
A Bayesian network to predict coastal vulnerability to sea level rise
Gutierrez, B.T.; Plant, N.G.; Thieler, E.R.
2011-01-01
Sea level rise during the 21st century will have a wide range of effects on coastal environments, human development, and infrastructure in coastal areas. The broad range of complex factors influencing coastal systems contributes to large uncertainties in predicting long-term sea level rise impacts. Here we explore and demonstrate the capabilities of a Bayesian network (BN) to predict long-term shoreline change associated with sea level rise and make quantitative assessments of prediction uncertainty. A BN is used to define relationships between driving forces, geologic constraints, and coastal response for the U.S. Atlantic coast that include observations of local rates of relative sea level rise, wave height, tide range, geomorphic classification, coastal slope, and shoreline change rate. The BN is used to make probabilistic predictions of shoreline retreat in response to different future sea level rise rates. Results demonstrate that the probability of shoreline retreat increases with higher rates of sea level rise. Where more specific information is included, the probability of shoreline change increases in a number of cases, indicating more confident predictions. A hindcast evaluation of the BN indicates that the network correctly predicts 71% of the cases. Evaluation of the results using Brier skill and log likelihood ratio scores indicates that the network provides shoreline change predictions that are better than the prior probability. Shoreline change outcomes indicating stability (-1 to 1 m/yr) were not well predicted. We find that BNs can assimilate important factors contributing to coastal change in response to sea level rise and can make quantitative, probabilistic predictions that can be applied to coastal management decisions. Copyright © 2011 by the American Geophysical Union.
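A minimal Python sketch of the Brier skill evaluation named above, comparing the BN's predicted probabilities against a prior-probability baseline (variable names are hypothetical):

    import numpy as np

    def brier(p, o):
        """Brier score: p = predicted probabilities, o = 0/1 observed outcomes."""
        return np.mean((p - o) ** 2)

    def brier_skill(p_model, p_prior, o):
        """BSS > 0 means the network beats the prior-probability baseline."""
        return 1.0 - brier(p_model, o) / brier(p_prior, o)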
Metz, Zachary P; Ding, Tong; Baumler, David J
2018-01-01
Listeria monocytogenes is a microorganism of great concern for the food industry and a cause of human foodborne disease. Therefore, novel methods of control are needed, and systems biology is one such approach to identify them. Using a combination of computational techniques and laboratory methods, genome-scale metabolic models (GEMs) can be created, validated, and used to simulate growth environments and discern metabolic capabilities of microbes of interest, including L. monocytogenes. The objective of the work presented here was to generate GEMs for six different strains of L. monocytogenes, and to both qualitatively and quantitatively validate these GEMs with experimental data to examine the diversity of metabolic capabilities of numerous strains from the three different serovar groups most associated with foodborne outbreaks and human disease. Following qualitative validation, 57 of the 95 carbon sources tested experimentally were present in the GEMs, and, therefore, these were the compounds from which comparisons could be drawn. Of these 57 compounds, agreement between in silico predictions and in vitro results for carbon source utilization ranged from 80.7% to 91.2% between strains. Agreement between in silico predictions and in vitro results was also assessed for the utilization of numerous nitrogen, phosphorus, and sulfur sources. Additionally, quantitative validation showed that the L. monocytogenes GEMs were able to generate in silico predictions for growth rate and growth yield that were strongly and significantly (p < 0.0013 and p < 0.0015, respectively) correlated with experimental results. These findings are significant because they show that these GEMs for L. monocytogenes are comparable to published GEMs of other organisms in the agreement between in silico predictions and in vitro results. Therefore, as with the other GEMs, namely those for Escherichia coli, Staphylococcus aureus, Vibrio vulnificus, and Salmonella spp., they can be used to determine new methods of growth control and disease treatment.
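The study's GEMs are not distributed here, but a carbon-source utilization test of this kind is commonly run with a COBRA-style workflow; a minimal Python sketch, with a hypothetical model file and exchange-reaction ID:

    import cobra

    # Hypothetical file and reaction ID, for illustration only.
    model = cobra.io.read_sbml_model("L_monocytogenes_strain.xml")

    # Allow uptake of the candidate carbon source, then ask for growth.
    model.reactions.get_by_id("EX_glc__D_e").lower_bound = -10.0
    solution = model.optimize()
    print("in silico growth rate:", solution.objective_value)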
Huang, Xiao Yan; Shan, Zhi Jie; Zhai, Hong Lin; Li, Li Na; Zhang, Xiao Yun
2011-08-22
Heat shock protein 90 (Hsp90) takes part in the development of several cancers. Novobiocin, a typical C-terminal inhibitor of Hsp90, will probably be used as an important anticancer drug in the future. In this work, we extracted valuable information and designed new novobiocin derivatives based on a three-dimensional quantitative structure-activity relationship (3D-QSAR). Comparative molecular field analysis and comparative molecular similarity indices analysis models with high predictive capability were established, and their reliability is supported by the statistical parameters. Based on several important influence factors obtained from these models, six new novobiocin derivatives with higher inhibitory activities were designed and confirmed by molecular simulation with our models, providing potential anticancer drug leads for further research.
Predicting mesoscale microstructural evolution in electron beam welding
Rodgers, Theron M.; Madison, Jonathan D.; Tikare, Veena; ...
2016-03-16
Using the kinetic Monte Carlo simulator Stochastic Parallel PARticle Kinetic Simulator, from Sandia National Laboratories, a user routine has been developed to simulate mesoscale predictions of a grain structure near a moving heat source. Here, we demonstrate the use of this user routine to produce voxelized, synthetic, three-dimensional microstructures for electron-beam welding by comparing them with experimentally produced microstructures. When simulation input parameters are matched to experimental process parameters, qualitative and quantitative agreement for both grain size and grain morphology are achieved. The method is capable of simulating both single- and multipass welds. As a result, the simulations provide an opportunity for not only accelerated design but also the integration of simulation and experiments in design, such that simulations can receive parameter bounds from experiments and, in turn, provide predictions of the resultant microstructure.
Krishnamurthy, Dilip; Sumaria, Vaidish; Viswanathan, Venkatasubramanian
2018-02-01
Density functional theory (DFT) calculations are being routinely used to identify new material candidates that approach activity near fundamental limits imposed by thermodynamics or scaling relations. DFT calculations are associated with inherent uncertainty, which limits the ability to delineate materials (distinguishability) that possess high activity. Development of error-estimation capabilities in DFT has enabled uncertainty propagation through activity-prediction models. In this work, we demonstrate an approach to propagating uncertainty through thermodynamic activity models, leading to a probability distribution of the computed activity and thereby its expectation value. A new metric, prediction efficiency, is defined, which provides a quantitative measure of the ability to distinguish the activity of materials and can be used to identify the optimal descriptor(s) ΔG_opt. We demonstrate the framework for four important electrochemical reactions: hydrogen evolution, chlorine evolution, oxygen reduction and oxygen evolution. Future studies could utilize expected activity and prediction efficiency to significantly improve the prediction accuracy of highly active material candidates.
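A minimal Python sketch of this kind of uncertainty propagation: sample a Gaussian DFT error on the descriptor, push the samples through an activity model, and report the expectation value (the volcano form below is a hypothetical stand-in):

    import numpy as np

    def expected_activity(dg, sigma_dft, activity_model, n=10000, seed=0):
        """Monte Carlo propagation of Gaussian DFT uncertainty on a
        descriptor dG through an activity model."""
        rng = np.random.default_rng(seed)
        samples = rng.normal(dg, sigma_dft, size=n)
        return activity_model(samples).mean()

    # Illustrative volcano peaking at dG_opt = 0 (assumed form):
    volcano = lambda dg: -np.abs(dg)
    print(expected_activity(0.1, 0.2, volcano))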
Kang, Bo-Sik; Lee, Jang-Eun; Park, Hyun-Jin
2014-06-01
In a Korean rice wine (makgeolli) model, we tried to develop a prediction model capable of eliciting a quantitative relationship between the initial amino acids in makgeolli mash and major aromatic compounds, such as fusel alcohols, their acetate esters, and ethyl esters of fatty acids, in the brewed makgeolli. A mass-spectrometry-based electronic nose (MS-EN) was used to qualitatively discriminate between makgeollis made from makgeolli mashes with different amino acid compositions. Following this measurement, headspace solid-phase microextraction coupled to gas chromatography-mass spectrometry (GC-MS) combined with the partial least-squares regression (PLSR) method was employed to quantitatively correlate the amino acid composition of makgeolli mash with the major aromatic compounds evolved during makgeolli fermentation. In the qualitative prediction with MS-EN analysis, the makgeollis were well discriminated according to the volatile compounds derived from the amino acids of the makgeolli mash. Twenty-seven ion fragments with mass-to-charge ratio (m/z) of 55 to 98 amu were responsible for the discrimination. In the GC-MS combined with PLSR method, a quantitative comparison between the initial amino acids of makgeolli mash and the fusel compounds of makgeolli demonstrated that the coefficient of determination (R(2)) for most of the fusel compounds ranged from 0.77 to 0.94, in good correlation, except for 2-phenylethanol (R(2) = 0.21), whereas R(2) for the ethyl esters of medium-chain fatty acids (MCFAs), including ethyl caproate, ethyl caprylate, and ethyl caprate, was 0.17 to 0.40, in poor correlation. Amino acids are known to affect the aroma of alcoholic beverages. In this study, we demonstrated that an electronic nose rapidly and reproducibly differentiated Korean rice wines (makgeollis) by the volatile compounds evolved from amino acids and, subsequently, that a quantitative correlation with acceptable R(2) between amino acids and fusel compounds could be established via HS-SPME GC-MS combined with partial least-squares regression. Our approach of predicting the quantities of volatile compounds in the finished product from the initial condition of fermentation will give food researchers insight for modifying and optimizing the qualities of the corresponding products. © 2014 Institute of Food Technologists®
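A minimal Python sketch of such a PLSR calibration with scikit-learn; X holds the initial amino acid composition of each mash and Y the measured aroma compounds (both hypothetical arrays), with a per-compound R(2) computed as in the study's correlation summary:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.metrics import r2_score

    def fit_plsr(X, Y, n_components=3):
        """Fit PLSR and return per-compound coefficients of determination."""
        pls = PLSRegression(n_components=n_components).fit(X, Y)
        Y_hat = pls.predict(X)
        r2 = [r2_score(Y[:, j], Y_hat[:, j]) for j in range(Y.shape[1])]
        return pls, r2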
Roff, D A; Crnokrak, P; Fairbairn, D J
2003-07-01
Quantitative genetic theory assumes that trade-offs are best represented by bivariate normal distributions. This theory predicts that selection will shift the trade-off function itself and not just move the mean trait values along a fixed trade-off line, as is generally assumed in optimality models. As a consequence, quantitative genetic theory predicts that the trade-off function will vary among populations in which at least one of the component traits itself varies. This prediction is tested using the trade-off between call duration and flight capability, as indexed by the mass of the dorsolateral flight muscles, in the macropterous morph of the sand cricket. We use four different populations of crickets that vary in the proportion of macropterous males (Lab = 33%, Florida = 29%, Bermuda = 72%, South Carolina = 80%). We find, as predicted, that there is significant variation in the intercept of the trade-off function but not the slope, supporting the hypothesis that trade-off functions are better represented as bivariate normal distributions rather than single lines. We also test the prediction from a quantitative genetic model of the evolution of wing dimorphism that the mean call duration of macropterous males will increase with the percentage of macropterous males in the population. This prediction is also supported. Finally, we estimate the probability of a macropterous male attracting a female, P, as a function of the relative time spent calling (P = time spent calling by the macropterous male / total time spent calling by both the micropterous and macropterous males). We find that in the Lab and Florida populations the probability of a female selecting the macropterous male is equal to P, indicating that preference is due simply to relative call duration. But in the Bermuda and South Carolina populations the probability of a female selecting a macropterous male is less than P, indicating a preference for the micropterous male even after differences in call duration are accounted for.
Near Real-Time Optimal Prediction of Adverse Events in Aviation Data
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander; Das, Santanu
2010-01-01
The prediction of anomalies or adverse events is a challenging task, and there are a variety of methods which can be used to address the problem. In this paper, we demonstrate how to recast the anomaly prediction problem into a form whose solution is accessible as a level-crossing prediction problem. The level-crossing prediction problem has an elegant, optimal, yet untested solution under certain technical constraints, and only when the appropriate modeling assumptions are made. As such, we will thoroughly investigate the resilience of these modeling assumptions, and show how they affect final performance. Finally, the predictive capability of this method will be assessed by quantitative means, using both validation and test data containing anomalies or adverse events from real aviation data sets that have previously been identified as operationally significant by domain experts. It will be shown that the formulation proposed yields a lower false alarm rate on average than competing methods based on similarly advanced concepts, and a higher correct detection rate than a standard method based upon exceedances that is commonly used for prediction.
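As a toy illustration of the level-crossing prediction idea, a Python sketch of an alarm rule for an AR(1) process: raise an alarm when the one-step-ahead probability of crossing a level exceeds a design threshold (the process model and thresholds are assumptions, not the paper's aviation-data setup):

    import numpy as np
    from scipy.stats import norm

    def crossing_alarm(x_t, a, sigma, level, p_alarm):
        """For x[t+1] = a*x[t] + N(0, sigma^2) noise, alarm when the
        predictive probability of exceeding the level is high enough."""
        p_cross = 1.0 - norm.cdf(level, loc=a * x_t, scale=sigma)
        return p_cross > p_alarm, p_cross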
Classical least squares multivariate spectral analysis
Haaland, David M.
2002-01-01
An improved classical least squares (CLS) multivariate spectral analysis method is presented that adds, to the prediction phase of the method, spectral shapes describing non-calibrated components and system effects (other than baseline corrections) present in the analyzed mixture. These improvements decrease or eliminate many of the restrictions of CLS-type methods and greatly extend their capabilities, accuracy, and precision. One new application of this prediction-augmented CLS (PACLS) method is the ability to accurately predict unknown sample concentrations when new unmodeled spectral components are present in the unknown samples. Other applications of PACLS include the incorporation of spectrometer drift into the quantitative multivariate model and the maintenance of a calibration on a drifting spectrometer. Finally, the ability of PACLS to transfer a multivariate model between spectrometers is demonstrated.
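A minimal numpy sketch of CLS calibration and a PACLS-style prediction step under the usual linear mixture model A ≈ C K (rows of A are spectra); augmenting K with extra shapes at prediction time is the core idea, while the exact shapes used in practice (drift spectra, unmodeled components) are application-specific:

    import numpy as np

    def cls_calibrate(C, A):
        """Estimate pure-component spectra K from the calibration set,
        solving A ≈ C K in the least-squares sense."""
        return np.linalg.lstsq(C, A, rcond=None)[0]

    def pacls_predict(A_new, K, extra_shapes):
        """Augment K with spectral shapes for non-calibrated components or
        system effects, then solve for concentrations; the extra rows
        absorb the unmodeled signal."""
        K_aug = np.vstack([K, extra_shapes])
        coef = np.linalg.lstsq(K_aug.T, A_new.T, rcond=None)[0].T
        return coef[:, :K.shape[0]]  # estimates for calibrated components only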
Environmental exposure effects on composite materials for commercial aircraft
NASA Technical Reports Server (NTRS)
Gibbons, M. N.
1982-01-01
The data base for composite materials' properties, as they are affected by the environments encountered in operating conditions both in flight and at ground terminals, is expanded. Absorbed moisture degrades the mechanical properties of graphite/epoxy laminates at elevated temperatures. Since airplane components are frequently exposed to atmospheric moisture, rain, and accumulated water, quantitative data are required to evaluate the amount of fluids absorbed under various environmental conditions and the subsequent effects on material properties. In addition, accelerated laboratory test techniques that are reliably capable of predicting long-term behavior are developed. An accelerated environmental exposure testing procedure is developed, and experimental results are correlated and compared with analytical results to establish the level of confidence for predicting composite material properties.
Global, quantitative and dynamic mapping of protein subcellular localization
Itzhak, Daniel N; Tyanova, Stefka; Cox, Jürgen; Borner, Georg HH
2016-01-01
Subcellular localization critically influences protein function, and cells control protein localization to regulate biological processes. We have developed and applied Dynamic Organellar Maps, a proteomic method that allows global mapping of protein translocation events. We initially used maps statically to generate a database with localization and absolute copy number information for over 8700 proteins from HeLa cells, approaching comprehensive coverage. All major organelles were resolved, with exceptional prediction accuracy (estimated at >92%). Combining spatial and abundance information yielded an unprecedented quantitative view of HeLa cell anatomy and organellar composition, at the protein level. We subsequently demonstrated the dynamic capabilities of the approach by capturing translocation events following EGF stimulation, which we integrated into a quantitative model. Dynamic Organellar Maps enable the proteome-wide analysis of physiological protein movements, without requiring any reagents specific to the investigated process, and will thus be widely applicable in cell biology. DOI: http://dx.doi.org/10.7554/eLife.16950.001 PMID:27278775
An overview of the nonequilibrium behavior of polymer glasses
NASA Technical Reports Server (NTRS)
Tant, M. R.; Wilkes, G. L.
1981-01-01
Research efforts are at present directed at two areas: one comprises experimental studies of nonequilibrium behavior in various glassy polymer systems, and the other involves the development of a quantitative theory capable of satisfactorily predicting aging behavior for a variety of polymer materials under different conditions. Recent work in both areas is surveyed. The basic principles of nonequilibrium behavior are outlined, with emphasis placed on changes in material properties with annealing below the glass transition temperature. Free volume theory and thermodynamic theory are discussed.
NASA Astrophysics Data System (ADS)
Engelmann, Brett Warren
The Src homology 2 (SH2) domains evolved alongside protein tyrosine kinases (PTKs) and phosphatases (PTPs) in metazoans to recognize the phosphotyrosine (pY) post-translational modification. The human genome encodes 121 SH2 domains within 111 SH2 domain containing proteins that represent the primary mechanism for cellular signal transduction immediately downstream of PTKs. Despite pY recognition contributing to roughly half of the binding energy, SH2 domains possess substantial binding specificity, or affinity discrimination between phosphopeptide ligands. This specificity is largely imparted by amino acids (AAs) adjacent to the pY, typically from positions +1 to +4 C-terminal to the pY. Much experimental effort has been undertaken to construct preferred binding motifs for many SH2 domains. However, due to limitations in previous experimental methodologies these motifs do not account for the interplay between AAs. It was therefore not known how AAs within the context of individual peptides function to impart SH2 domain specificity. In this work we identified the critical role context plays in defining SH2 domain specificity for physiological ligands. We also constructed a high quality interactome using 50 SH2 domains and 192 physiological ligands. We next developed a quantitative high-throughput (Q-HTP) peptide microarray platform to assess the affinities four SH2 domains have for 124 physiological ligands. We demonstrated the superior characteristics of our platform relative to preceding approaches and validated our results using established biophysical techniques, literature corroboration, and predictive algorithms. The quantitative information provided by the arrays was leveraged to investigate SH2 domain binding distributions and identify points of binding overlap. Our microarray derived affinity estimates were integrated to produce quantitative interaction motifs capable of predicting interactions. Furthermore, our microarrays proved capable of resolving subtle contextual differences within motifs that modulate interaction affinities. We conclude that contextually informed specificity profiling of protein interaction domains using the methodologies developed in this study can inform efforts to understand the interconnectivity of signaling networks in normal and aberrant states. Three supplementary tables containing detailed lists of peptides, interactions, and sources of corroborative information are provided.
Platt, Manu O.; Wilder, Catera L.; Wells, Alan; Griffith, Linda G.; Lauffenburger, Douglas A.
2010-01-01
Bone marrow-derived multi-potent stromal cells (MSCs) offer great promise for regenerating tissue. While certain transcription factors have been identified in association with tendency toward particular MSC differentiation phenotypes, the regulatory network of key receptor-mediated signaling pathways activated by extracellular ligands that induce various differentiation responses remain poorly understood. Attempts to predict differentiation fate tendencies from individual pathways in isolation are problematic due to the complex pathway interactions inherent in signaling networks. Accordingly, we have undertaken a multi-variate systems approach integrating experimental measurement of multiple kinase pathway activities and osteogenic differentiation in MSCs, together with computational analysis to elucidate quantitative combinations of kinase signals predictive of cell behavior across diverse contexts. In particular, for culture on polymeric biomaterials surfaces presenting tethered epidermal growth factor (tEGF), type-I collagen, neither, or both, we have found that a partial least-squares regression model yields successful prediction of phenotypic behavior on the basis of two principal components comprising the weighted sums of 8 intracellular phosphoproteins: p-EGFR, p-Akt, p-ERK1/2, p-Hsp27, p-c-jun, p-GSK3α/β, p-p38, and p-STAT3. This combination provides strongest predictive capability for 21-day differentiated phenotype status when calculated from day-7 signal measurements (99%); day-4 (88%) and day-14 (89%) signal measurements are also significantly predictive, indicating a broad time-frame during MSC osteogenesis wherein multiple pathways and states of the kinase signaling network are quantitatively integrated to regulate gene expression, cell processes, and ultimately, cell fate. PMID:19750537
NASA Astrophysics Data System (ADS)
McCray, Wilmon Wil L., Jr.
The research was prompted by the need to assess whether the process improvement, quality management, and analytical techniques taught in U.S. undergraduate and graduate degree programs in systems engineering and the computing sciences (e.g., software engineering, computer science, and information technology) can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, process improvement methods, and how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI®) models as process improvement frameworks to improve business process performance. High-maturity process areas in the CMMI model imply the use of analytical, statistical, and quantitative management techniques, together with process-performance modeling, to identify and eliminate sources of variation, continually improve process performance, reduce cost, and predict future outcomes. The research study identifies and discusses in detail the gap-analysis findings on process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, the gaps that exist in the literature, and a comparison analysis identifying the gaps between the SEI's "healthy ingredients" of a process performance model and the courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistical and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. The research also includes a Monte Carlo simulation optimization model and dashboard that demonstrates the use of statistical methods, statistical process control, sensitivity analysis, and quantitative and optimization techniques to establish a baseline and predict future customer satisfaction index scores (outcomes). The American Customer Satisfaction Index (ACSI) model and industry benchmarks were used as a framework for the simulation model.
Computational modeling of in vitro biological responses on polymethacrylate surfaces
Ghosh, Jayeeta; Lewitus, Dan Y; Chandra, Prafulla; Joy, Abraham; Bushman, Jared; Knight, Doyle; Kohn, Joachim
2011-01-01
The objective of this research was to examine the capabilities of quantitative structure-property relationship (QSPR) modeling to predict specific biological responses (fibrinogen adsorption, cell attachment and cell proliferation index) on thin films of different polymethacrylates. Using 33 commercially available monomers, it is theoretically possible to construct a library of over 40,000 distinct polymer compositions. A subset of these polymers was synthesized, and solvent-cast surfaces were prepared in 96-well plates for the measurement of fibrinogen adsorption. NIH 3T3 cell attachment and proliferation index were measured on spin-coated thin films of these polymers. Based on the experimental results for these polymers, separate models were built for homo-, co-, and terpolymers in the library, with good correlation between experimental and predicted values. The ability to predict biological responses by simple QSPR models for large numbers of polymers has important implications in designing biomaterials for specific biological or medical applications. PMID:21779132
Hood, D C; Birch, D G
1990-10-01
An electrical potential recorded from the cornea, the a-wave of the ERG, is evaluated as a measure of human photoreceptor activity by comparing its behavior to a model derived from in vitro recordings from rod photoreceptors. The leading edge of the ERG exhibits both the linear and nonlinear behavior predicted by this model. The capability for recording the electrical activity of human photoreceptors in vivo opens new avenues for assessing normal and abnormal receptor activity in humans. Furthermore, the quantitative model of the receptor response can be used to isolate the inner retinal contribution, Granit's PII, to the gross ERG. Based on this analysis, the practice of using the trough-to-peak amplitude of the b-wave as a proxy for the amplitude of the inner nuclear layer activity is evaluated.
Weaver, J L; Busquet, M; Colombant, D G; Mostovych, A N; Feldman, U; Klapisch, M; Seely, J F; Brown, C; Holland, G
2005-02-04
Absolutely calibrated, time-resolved spectral intensity measurements of soft-x-ray emission (hν ≈ 0.1-1.0 keV) from laser-irradiated polystyrene targets are compared to radiation-hydrodynamic simulations that include our new postprocessor, Virtual Spectro. This new capability allows a unified, detailed treatment of atomic physics and radiative transfer in nonlocal thermodynamic equilibrium conditions for simple spectra from low-Z materials as well as complex spectra from high-Z materials. The excellent agreement (within a factor of approximately 1.5) demonstrates the powerful predictive capability of the codes for the complex conditions in the ablating plasma. A comparison to data with high spectral resolution (E/ΔE ≈ 1000) emphasizes the importance of including radiation coupling in the quantitative simulation of emission spectra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Pengpeng; Zheng, Xiaojing, E-mail: xjzheng@xidian.edu.cn; Jin, Ke
2016-04-14
Weak magnetic nondestructive testing (e.g., the metal magnetic memory method) concerns the magnetization variation of ferromagnetic materials due to applied load and the weak magnetic field surrounding them. One key issue in these nondestructive technologies is the magnetomechanical effect, i.e., the quantitative evaluation of the magnetization state from the stress-strain condition. A representative phenomenological model was proposed by Jiles in 1995 to explain the magnetomechanical effect. However, Jiles' model has some quantitative deficiencies; for instance, there is a visible difference between theoretical predictions and experimental measurements on the stress-magnetization curve, especially in the compression case. Based on the thermodynamic relations and the approach law of irreversible magnetization, a nonlinear coupled model is proposed to improve the quantitative evaluation of the magnetomechanical effect. Excellent agreement has been achieved between the predictions of the present model and previous experimental results. In comparison with Jiles' model, the prediction accuracy is improved greatly by the present model, particularly for the compression case. A detailed study has also been performed to reveal the effects of initial magnetization status, cyclic loading, and the demagnetization factor on the magnetomechanical effect. Our theoretical model reveals that the stable weak magnetic signals of nondestructive testing after multiple cyclic loads are attributed to the first few cycles eliminating most of the irreversible magnetization. Remarkably, the existence of a demagnetization field can weaken the magnetomechanical effect and therefore significantly reduce the testing capability. This theoretical model can be adopted to quantitatively analyze magnetic memory signals, and can then be applied in weak magnetic nondestructive testing.
Goel, Purva; Bapat, Sanket; Vyas, Renu; Tambe, Amruta; Tambe, Sanjeev S
2015-11-13
The development of quantitative structure-retention relationships (QSRR) aims at constructing an appropriate linear/nonlinear model for the prediction of the retention behavior (such as Kovats retention index) of a solute on a chromatographic column. Commonly, multi-linear regression and artificial neural networks are used in the QSRR development in the gas chromatography (GC). In this study, an artificial intelligence based data-driven modeling formalism, namely genetic programming (GP), has been introduced for the development of quantitative structure based models predicting Kovats retention indices (KRI). The novelty of the GP formalism is that given an example dataset, it searches and optimizes both the form (structure) and the parameters of an appropriate linear/nonlinear data-fitting model. Thus, it is not necessary to pre-specify the form of the data-fitting model in the GP-based modeling. These models are also less complex, simple to understand, and easy to deploy. The effectiveness of GP in constructing QSRRs has been demonstrated by developing models predicting KRIs of light hydrocarbons (case study-I) and adamantane derivatives (case study-II). In each case study, two-, three- and four-descriptor models have been developed using the KRI data available in the literature. The results of these studies clearly indicate that the GP-based models possess an excellent KRI prediction accuracy and generalization capability. Specifically, the best performing four-descriptor models in both the case studies have yielded high (>0.9) values of the coefficient of determination (R(2)) and low values of root mean squared error (RMSE) and mean absolute percent error (MAPE) for training, test and validation set data. The characteristic feature of this study is that it introduces a practical and an effective GP-based method for developing QSRRs in gas chromatography that can be gainfully utilized for developing other types of data-driven models in chromatography science. Copyright © 2015 Elsevier B.V. All rights reserved.
Hussien, Amr Elsayed M; Furth, Christian; Schönberger, Stefan; Hundsdoerfer, Patrick; Steffen, Ingo G; Amthauer, Holger; Müller, Hans-Wilhelm; Hautzel, Hubertus
2015-01-28
In pediatric Hodgkin's lymphoma (pHL), early response-to-therapy prediction is metabolically assessed by (18)F-FDG PET, carrying an excellent negative predictive value (NPV) but an impaired positive predictive value (PPV). The aim of this study was to improve the PPV while keeping the optimal NPV. A comparison of different PET data analyses was performed applying individualized standardized uptake values (SUV), PET-derived metabolic tumor volume (MTV) and the product of both parameters, termed total lesion glycolysis (TLG). One hundred eight PET datasets (PET1, n = 54; PET2, n = 54) of 54 children were analysed by visual and semi-quantitative means. SUVmax, SUVmean, MTV and TLG were obtained; the results of both PETs and the relative change from PET1 to PET2 (Δ in %) were compared for their capability of identifying responders and non-responders using receiver operating characteristic (ROC) curves. In consideration of individual variations in noise and contrast levels, all parameters were additionally obtained after threshold correction to lean body mass and background. All semi-quantitative SUV estimates obtained at PET2 were significantly superior to the visual PET2 analysis. However, ΔSUVmax revealed the best results (area under the curve, 0.92; p < 0.001; sensitivity 100%; specificity 85.4%; PPV 46.2%; NPV 100%; accuracy, 87.0%) but was not significantly superior to SUVmax estimation at PET2 and ΔTLGmax. Likewise, the lean body mass and background individualization of the datasets did not improve the results of the ROC analyses. Sophisticated semi-quantitative PET measures in early response assessment of pHL patients do not perform significantly better than the previously proposed ΔSUVmax. All analytical strategies failed to improve the impaired PPV to a clinically acceptable level while preserving the excellent NPV.
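For orientation, a small Python sketch of the quantities compared above: SUVmax, a threshold-based MTV, TLG = SUVmean x MTV, and the relative change Δ between the two scans; the 41%-of-SUVmax segmentation used here is a common convention and stands in for the study's exact procedure:

    import numpy as np

    def suv_mtv_tlg(suv_voxels, voxel_ml, frac=0.41):
        """Return (SUVmax, MTV in ml, TLG) for one lesion's SUV voxels."""
        suv_max = suv_voxels.max()
        inside = suv_voxels >= frac * suv_max
        mtv = inside.sum() * voxel_ml
        return suv_max, mtv, suv_voxels[inside].mean() * mtv

    def delta_pct(pet1_value, pet2_value):
        """Relative change (Δ in %) from PET1 to PET2."""
        return 100.0 * (pet2_value - pet1_value) / pet1_value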
Quantitative Species Measurements In Microgravity Combustion Flames
NASA Technical Reports Server (NTRS)
Chen, Shin-Juh; Pilgrim, Jeffrey S.; Silver, Joel A.; Piltch, Nancy D.
2003-01-01
The capability of models and theories to accurately predict and describe the behavior of low-gravity flames can only be verified by quantitative measurements. Although video imaging, simple temperature measurements, and velocimetry methods have provided useful information in many cases, there is still a need for quantitative species measurements. Over the past decade, we have been developing high-sensitivity optical absorption techniques to permit in situ, non-intrusive, absolute concentration measurements for both major and minor flame species using diode lasers. This work has helped to establish wavelength modulation spectroscopy (WMS) as an important method for species detection within the restrictions of microgravity-based measurements. More recently, in collaboration with Prof. Dahm at the University of Michigan, a new methodology combining computed flame libraries with a single experimental measurement has allowed us to determine the concentration profiles for all species in a flame. This method, termed ITAC (Iterative Temperature with Assumed Chemistry), was demonstrated for a simple laminar nonpremixed methane-air flame at both 1-g and 0-g in a vortex ring flame. In this paper, we report additional normal- and microgravity experiments which further confirm the usefulness of this approach. We also present the development of a new type of laser. This is an external cavity diode laser (ECDL) which has the unique capability of high-frequency modulation as well as a very wide tuning range. This will permit the detection of multiple species with one laser while using WMS detection.
Volumes Learned: It Takes More Than Size to "Size Up" Pulmonary Lesions.
Ma, Xiaonan; Siegelman, Jenifer; Paik, David S; Mulshine, James L; St Pierre, Samantha; Buckler, Andrew J
2016-09-01
This study aimed to review the current understanding and capabilities regarding use of imaging for noninvasive lesion characterization and its relationship to lung cancer screening and treatment. Our review of the state of the art was broken down into questions about the different lung cancer image phenotypes being characterized, the role of imaging and requirements for increasing its value with respect to increasing diagnostic confidence and quantitative assessment, and a review of the current capabilities with respect to those needs. The preponderance of the literature has so far been focused on the measurement of lesion size, with increasing contributions being made to determine the formal performance of scanners, measurement tools, and human operators in terms of bias and variability. Concurrently, an increasing number of investigators are reporting utility and predictive value of measures other than size, and sensitivity and specificity is being reported. Relatively little has been documented on quantitative measurement of non-size features with corresponding estimation of measurement performance and reproducibility. The weight of the evidence suggests characterization of pulmonary lesions built on quantitative measures adds value to the screening for, and treatment of, lung cancer. Advanced image analysis techniques may identify patterns or biomarkers not readily assessed by eye and may also facilitate management of multidimensional imaging data in such a way as to efficiently integrate it into the clinical workflow. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Quantitative prediction of drug side effects based on drug-related features.
Niu, Yanqing; Zhang, Wen
2017-09-01
Unexpected side effects of drugs are a great concern in drug development, and the identification of side effects is an important task. Recently, machine learning methods have been proposed to predict the presence or absence of side effects of interest for drugs, but it is difficult to make accurate predictions for all of them. In this paper, we transform the side effect profiles of drugs into quantitative scores by summing up their side effects with weights. The quantitative scores may measure the dangers of drugs, and thus help to compare the risk of different drugs. Here, we attempt to predict the quantitative scores of drugs, namely the quantitative prediction. Specifically, we explore a variety of drug-related features and evaluate their discriminative powers for the quantitative prediction. Then, we consider several feature combination strategies (direct combination, average scoring ensemble combination) to integrate three informative features: chemical substructures, targets, and treatment indications. Finally, the average scoring ensemble model, which produces the better performance, is used as the final quantitative prediction model. Since the weights for side effects are empirical values, we randomly generate different weights in the simulation experiments. The experimental results show that the quantitative method is robust to different weights and produces satisfying results. Although other state-of-the-art methods cannot make the quantitative prediction directly, their prediction results can be transformed into quantitative scores. By indirect comparison, the proposed method produces much better results than benchmark methods in the quantitative prediction. In conclusion, the proposed method is promising for the quantitative prediction of side effects, and may work cooperatively with existing state-of-the-art methods to reveal the dangers of drugs.
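A minimal Python sketch of the two ingredients described above: the weighted-sum score over a drug's side-effect profile, and an average-scoring ensemble over per-feature predictions (array layouts are assumptions):

    import numpy as np

    def quantitative_scores(profiles, weights):
        """profiles: drugs x side-effects 0/1 matrix; weights: one weight
        per side effect. Returns one weighted-sum score per drug."""
        return profiles @ weights

    def average_scoring_ensemble(per_feature_predictions):
        """Average the scores predicted from each feature type
        (e.g., substructures, targets, indications)."""
        return np.mean(np.column_stack(per_feature_predictions), axis=1)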
Modelling the effect of structural QSAR parameters on skin penetration using genetic programming
NASA Astrophysics Data System (ADS)
Chung, K. K.; Do, D. Q.
2010-09-01
In order to model relationships between chemical structures and biological effects in quantitative structure-activity relationship (QSAR) data, an alternative artificial intelligence technique, genetic programming (GP), was investigated and compared to the traditional statistical method. GP, with the primary advantage of generating mathematical equations, was employed to model QSAR data and to identify the most important molecular descriptors in the data. The models produced by GP agreed with the statistical results, and ANOVA showed the most predictive GP models to be significant improvements over the statistical models. Recently, artificial intelligence techniques have been applied widely to analyse QSAR data. With the capability of generating mathematical equations, GP can be considered an effective and efficient method for modelling QSAR data.
Zheng, Jenny; van Schaick, Erno; Wu, Liviawati Sutjandra; Jacqmin, Philippe; Perez Ruixo, Juan Jose
2015-08-01
Osteoporosis is a chronic skeletal disease characterized by low bone strength resulting in increased fracture risk. New treatments for osteoporosis are still an unmet medical need because current available treatments have various limitations. Bone mineral density (BMD) is an important endpoint for evaluating new osteoporosis treatments; however, the BMD response is often slower and less profound than that of bone turnover markers (BTMs). If the relationship between BTMs and BMD can be quantified, the BMD response can be predicted by the changes in BTM after a single dose; therefore, a decision based on BMD changes can be informed early. We have applied a bone cycle model to a phase 2 denosumab dose-ranging study in osteopenic women to quantitatively link serum denosumab pharmacokinetics, BTMs, and lumbar spine (LS) BMD. The data from two phase 3 denosumab studies in patients with low bone mass, FREEDOM and DEFEND, were used for external validation. Both internal and external visual predictive checks demonstrated that the model was capable of predicting LS BMD at the denosumab regimen of 60 mg every 6 months. It has been demonstrated that the model, in combination with the changes in BTMs observed from a single-dose study in men, is capable of predicting long-term BMD outcomes (e.g., LS BMD response in men after 1 year of treatment) in different populations. We propose that this model can be used to inform drug development decisions for osteoporosis treatment early via evaluating LS BMD response when BTM data become available in early trials.
Fatemi, Mohammad Hossein; Ghorbanzad'e, Mehdi
2009-11-01
Quantitative structure-property relationship models for the prediction of the nematic transition temperature (T(N)) were developed by using multilinear regression analysis and a feedforward artificial neural network (ANN). A collection of 42 thermotropic liquid crystals was chosen as the data set. The data set was divided into three sets: for training, and an internal and external test set. The training and internal test sets were used for ANN model development, and the external test set was used for evaluation of the predictive power of the model. In order to build the models, a set of six descriptors was selected by the best multilinear regression procedure of the CODESSA program. These descriptors were: atomic charge weighted partial negatively charged surface area, relative negative charged surface area, polarity parameter/square distance, minimum most negative atomic partial charge, molecular volume, and the A component of the moment of inertia, which encode geometrical and electronic characteristics of molecules. These descriptors were used as inputs to the ANN. The optimized ANN model had a 6:6:1 topology. The standard errors in the calculation of T(N) for the training, internal, and external test sets using the ANN model were 1.012, 4.910, and 4.070, respectively. To further evaluate the ANN model, a cross-validation test was performed, which produced the statistic Q(2) = 0.9796 and a standard deviation of 2.67 based on the predicted residual sum of squares. Also, a diversity test was performed to ensure the model's stability and prove its predictive capability. The obtained results reveal the suitability of ANN for the prediction of T(N) for liquid crystals using molecular structural descriptors.
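A minimal scikit-learn sketch of a 6:6:1 feedforward network for this task (six descriptors in, one hidden layer of six units, one output); sklearn's solver and training details differ from the paper's ANN, so this is an illustration, not a reproduction:

    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # X: n_samples x 6 CODESSA-style descriptors; y: T(N) values.
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0),
    )
    # model.fit(X_train, y_train); t_n_pred = model.predict(X_test)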
Pasotti, Lorenzo; Bellato, Massimo; Casanova, Michela; Zucca, Susanna; Cusella De Angelis, Maria Gabriella; Magni, Paolo
2017-01-01
The study of simplified, ad hoc constructed model systems can help to elucidate whether quantitatively characterized biological parts can be effectively re-used in composite circuits to yield predictable functions. Synthetic systems designed from the bottom up can enable the building of complex interconnected devices via a rational approach, supported by mathematical modelling. However, such a process is affected by different, usually non-modelled, sources of unpredictability, like cell burden. Here, we analyzed a set of synthetic transcriptional cascades in Escherichia coli. We aimed to test the predictive power of a simple Hill function activation/repression model (no-burden model, NBM) and of a recently proposed model including Hill functions and the modulation of protein expression by cell load (burden model, BM). To test the bottom-up approach, the circuit collection was divided into training and test sets, used to learn individual component functions and test the predicted output of interconnected circuits, respectively. Among the constructed configurations, two test set circuits showed unexpected logic behaviour. Both NBM and BM were able to predict the quantitative output of interconnected devices with expected behaviour, but only the BM was also able to predict the output of one circuit with unexpected behaviour. Moreover, considering training and test set data together, the BM captures circuit output with higher accuracy than the NBM, which is unable to capture the experimental output exhibited by some of the circuits even qualitatively. Finally, resource usage parameters, estimated via the BM, guided the successful construction of new corrected variants of the two circuits showing unexpected behaviour. Superior descriptive and predictive capabilities were achieved by modelling resource limitation, but further efforts are needed to improve the accuracy of models for biological engineering.
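A schematic Python sketch of the two model classes compared above: a plain Hill activation (NBM-style) and the same response scaled by a cell-load term (BM-style); the burden correction's functional form here is an assumption, not the published model:

    import numpy as np

    def hill_activation(x, k, n, v_max):
        """NBM-style steady-state output of an activated promoter."""
        return v_max * x**n / (k**n + x**n)

    def burden_scaled_output(x, k, n, v_max, load):
        """BM-style output: expression falls as the total resource load
        of the circuit rises (schematic correction)."""
        return hill_activation(x, k, n, v_max) / (1.0 + load)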
A Bayesian explanation of the "Uncanny Valley" effect and related psychological phenomena
NASA Astrophysics Data System (ADS)
Moore, Roger K.
2012-11-01
There are a number of psychological phenomena in which dramatic emotional responses are evoked by seemingly innocuous perceptual stimuli. A well known example is the `uncanny valley' effect whereby a near human-looking artifact can trigger feelings of eeriness and repulsion. Although such phenomena are reasonably well documented, there is no quantitative explanation for the findings and no mathematical model that is capable of predicting such behavior. Here I show (using a Bayesian model of categorical perception) that differential perceptual distortion arising from stimuli containing conflicting cues can give rise to a perceptual tension at category boundaries that could account for these phenomena. The model is not only the first quantitative explanation of the uncanny valley effect, but it may also provide a mathematical explanation for a range of social situations in which conflicting cues give rise to negative, fearful or even violent reactions.
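The following sketch illustrates the general idea, not Moore's exact formulation: two Gaussian category models over a one-dimensional stimulus produce a posterior whose ambiguity, scored here as P(1-P), peaks at the category boundary where conflicting cues place a stimulus.

    # Illustrative Bayesian categorical perception; all parameters assumed.
    import numpy as np

    def posterior_human(x, mu_h=1.0, mu_a=-1.0, sigma=0.5):
        """P(human | x) for Gaussian likelihoods with equal priors."""
        lh = np.exp(-0.5 * ((x - mu_h) / sigma) ** 2)
        la = np.exp(-0.5 * ((x - mu_a) / sigma) ** 2)
        return lh / (lh + la)

    x = np.linspace(-2, 2, 9)          # stimulus dimension: artifact -> human
    p = posterior_human(x)
    tension = p * (1 - p)              # peaks at the category boundary (p = 0.5)
    for xi, pi, ti in zip(x, p, tension):
        print(f"x={xi:+.1f}  P(human)={pi:.3f}  tension={ti:.3f}")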
Zooming in on neutrino oscillations with DUNE
NASA Astrophysics Data System (ADS)
Srivastava, Rahul; Ternes, Christoph A.; Tórtola, Mariam; Valle, José W. F.
2018-05-01
We examine the capabilities of the DUNE experiment as a probe of the neutrino mixing paradigm. Taking the current status of neutrino oscillations and the design specifications of DUNE, we determine the experiment's potential to probe the structure of neutrino mixing and CP violation. We focus on the poorly determined parameters θ23 and δCP and consider both two and seven years of running. We take various benchmarks as our true values, such as the current preferred values of θ23 and δCP, as well as several theory-motivated choices. We determine quantitatively DUNE's potential to perform a precision measurement of θ23, as well as to test the CP violation hypothesis in a model-independent way. We find that, after running for seven years, DUNE will make a substantial step in the precise determination of these parameters, bringing the predictions of various theories of neutrino mixing to quantitative test.
A Bayesian explanation of the ‘Uncanny Valley’ effect and related psychological phenomena
Moore, Roger K.
2012-01-01
There are a number of psychological phenomena in which dramatic emotional responses are evoked by seemingly innocuous perceptual stimuli. A well known example is the ‘uncanny valley’ effect whereby a near human-looking artifact can trigger feelings of eeriness and repulsion. Although such phenomena are reasonably well documented, there is no quantitative explanation for the findings and no mathematical model that is capable of predicting such behavior. Here I show (using a Bayesian model of categorical perception) that differential perceptual distortion arising from stimuli containing conflicting cues can give rise to a perceptual tension at category boundaries that could account for these phenomena. The model is not only the first quantitative explanation of the uncanny valley effect, but it may also provide a mathematical explanation for a range of social situations in which conflicting cues give rise to negative, fearful or even violent reactions. PMID:23162690
Fontaine, Joseph J.; Jorgensen, Christopher; Stuber, Erica F.; Gruber, Lutz F.; Bishop, Andrew A.; Lusk, Jeffrey J.; Zach, Eric S.; Decker, Karie L.
2017-01-01
We know economic and social policy has implications for ecosystems at large, but the consequences for a given geographic area or specific wildlife population are more difficult to conceptualize and communicate. Species distribution models, which extrapolate species-habitat relationships across ecological scales, are capable of predicting population changes in distribution and abundance in response to management and policy, and thus, are an ideal means for facilitating proactive management within a larger policy framework. To illustrate the capabilities of species distribution modeling in scenario planning for wildlife populations, we projected an existing distribution model for ring-necked pheasants (Phasianus colchicus) onto a series of alternative future landscape scenarios for Nebraska, USA. Based on our scenarios, we qualitatively and quantitatively estimated the effects of agricultural policy decisions on pheasant populations across Nebraska, in specific management regions, and at wildlife management areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rutland, Christopher J.
2009-04-26
The Terascale High-Fidelity Simulations of Turbulent Combustion (TSTC) project is a multi-university collaborative effort to develop a high-fidelity turbulent reacting flow simulation capability utilizing terascale, massively parallel computer technology. The main paradigm of the approach is direct numerical simulation (DNS) featuring the highest temporal and spatial accuracy, allowing quantitative observations of the fine-scale physics found in turbulent reacting flows as well as providing a useful tool for development of sub-models needed in device-level simulations. Under this component of the TSTC program the simulation code named S3D, developed and shared with coworkers at Sandia National Laboratories, has been enhanced with new numerical algorithms and physical models to provide predictive capabilities for turbulent liquid fuel spray dynamics. Major accomplishments include improved fundamental understanding of mixing and auto-ignition in multi-phase turbulent reactant mixtures and turbulent fuel injection spray jets.
Measurements and Modeling of Nitric Oxide Formation in Counterflow, Premixed CH4/O2/N2 Flames
NASA Technical Reports Server (NTRS)
Thomsen, D. Douglas; Laurendeau, Normand M.
2000-01-01
Laser-induced fluorescence (LIF) measurements of NO concentration in a variety of CH4/O2/N2 flames are used to evaluate the chemical kinetics of NO formation. The analysis begins with previous measurements in flat, laminar, premixed CH4/O2/N2 flames stabilized on a water-cooled McKenna burner at pressures ranging from 1 to 14.6 atm, equivalence ratios from 0.5 to 1.6, and volumetric nitrogen/oxygen dilution ratios of 2.2, 3.1 and 3.76. These measured results are compared with predictions to determine the capabilities and limitations of the comprehensive kinetic mechanism developed by the Gas Research Institute (GRI), version 2.11. The model is shown to predict the qualitative trends of NO formation in lean premixed flames well, while quantitatively underpredicting NO concentration by 30-50%. For rich flames, the model is unable to match the experimental results even qualitatively. These flames were found to be limited by low temperatures and an inability to separate the flame from the burner surface. In response to these limitations, a counterflow burner was designed for use in opposed premixed flame studies. A new LIF calibration technique was developed and applied to obtain quantitative measurements of NO concentration in laminar, counterflow, premixed CH4/O2/N2 flames at pressures ranging from 1 to 5.1 atm, equivalence ratios of 0.6 to 1.5, and an N2/O2 dilution ratio of 3.76. The counterflow premixed flame measurements are combined with measurements in burner-stabilized premixed flames and counterflow diffusion flames to build a comprehensive database for analysis of the GRI kinetic mechanism. Quantitative reaction-path and sensitivity analyses are applied to the GRI mechanism for these flame conditions. The prompt NO mechanism is found to severely underpredict the amount of NO formed in rich premixed and nitrogen-diluted diffusion flames. This underprediction is traced to uncertainties in the CH kinetics as well as in the nitrogen oxidation chemistry. Suggestions are made which significantly improve the predictive capability of the GRI mechanism in near-stoichiometric, rich, premixed flames and in atmospheric-pressure diffusion flames. However, the modified reaction mechanism is unable to model the formation of NO in ultra-rich premixed or in high-pressure non-premixed flames, thus indicating the need for additional study under these conditions.
NASA Technical Reports Server (NTRS)
Anderson, R. B.; Morris, R. V.; Clegg, S. M.; Bell, J. F., III; Humphries, S. D.; Wiens, R. C.
2011-01-01
The ChemCam instrument selected for the Curiosity rover is capable of remote laser-induced breakdown spectroscopy (LIBS).[1] We used a remote LIBS instrument similar to ChemCam to analyze 197 geologic slab samples and 32 pressed-powder geostandards. The slab samples are well-characterized and have been used to validate the calibration of previous instruments on Mars missions, including CRISM [2], OMEGA [3], the MER Pancam [4], Mini-TES [5], and Moessbauer [6] instruments and the Phoenix SSI [7]. The resulting dataset was used to compare multivariate methods for quantitative LIBS and to determine the effect of grain size on calculations. Three multivariate methods - partial least squares (PLS), multilayer perceptron artificial neural networks (MLP ANNs) and cascade correlation (CC) ANNs - were used to generate models and extract the quantitative composition of unknown samples. PLS can be used to predict one element (PLS1) or multiple elements (PLS2) at a time, as can the neural network methods. Although MLP and CC ANNs were successful in some cases, PLS generally produced the most accurate and precise results.
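For illustration, the sketch below shows the PLS1-versus-PLS2 distinction with scikit-learn on placeholder spectra; the channel, sample, and element counts are arbitrary stand-ins for the LIBS data set.

    # PLS1 vs PLS2 sketch on hypothetical LIBS data.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    spectra = rng.random((229, 2048))    # 229 samples x 2048 channels (placeholder)
    oxides = rng.random((229, 3))        # e.g., three oxide abundances (placeholder)

    # PLS2: a single model predicts all compositions jointly
    pls2 = PLSRegression(n_components=8).fit(spectra, oxides)

    # PLS1: one separate model per element
    pls1_models = [PLSRegression(n_components=8).fit(spectra, oxides[:, i])
                   for i in range(oxides.shape[1])]
    print(pls2.predict(spectra[:1]))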
GLS-Finder: A Platform for Fast Profiling of Glucosinolates in Brassica Vegetables.
Sun, Jianghao; Zhang, Mengliang; Chen, Pei
2016-06-01
Mass spectrometry combined with related tandem techniques has become the most popular method for plant secondary metabolite characterization. We introduce a new strategy based on in-database searching, mass fragmentation behavior study, and formula prediction for fast profiling of glucosinolates, a class of important compounds in brassica vegetables. A MATLAB script-based expert system computer program, "GLS-Finder", was developed. It is capable of qualitative and semi-quantitative analyses of glucosinolates in samples using data generated by ultrahigh-performance liquid chromatography-high-resolution accurate mass with multi-stage mass fragmentation (UHPLC-HRAM/MSn). A suite of bioinformatic tools was integrated into GLS-Finder to perform raw data deconvolution, peak alignment, glucosinolate putative assignment, semi-quantitation, and unsupervised principal component analysis (PCA). GLS-Finder was successfully applied to identify intact glucosinolates in 49 commonly consumed Brassica vegetable samples in the United States. It is believed that this work introduces a new way of fast data processing and interpretation for qualitative and quantitative analyses of glucosinolates, greatly improving efficiency in comparison with manual identification.
Evaluation of CASL boiling model for DNB performance in full scale 5x5 fuel bundle with spacer grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Seung Jun
As one of the main tasks of the FY17 CASL-THM activity, an evaluation study on the applicability of the CASL baseline boiling model to 5x5 DNB applications was conducted, and the predictive capability of the DNB analysis is reported here. While the baseline CASL boiling model (GEN-1A) approach was successfully implemented and validated with a single-pipe application in the previous year's task, the extended DNB validation for realistic sub-channels with detailed spacer grid configurations was tasked in FY17. The focus of the current study is to demonstrate the robustness and feasibility of the CASL baseline boiling model for DNB performance in a full 5x5 fuel bundle application. A quantitative evaluation of the DNB predictive capability is performed by comparing with corresponding experimental measurements (i.e., the reference for the model validation). The reference data are provided by the Westinghouse Electric Company (WEC). The two grid configurations tested here are the Non-Mixing Vane Grid (NMVG) and the Mixing Vane Grid (MVG). Thorough validation studies with the two sub-channel configurations are performed over a wide range of realistic PWR operational conditions.
NASA Astrophysics Data System (ADS)
Galve, J. P.; Gutiérrez, F.; Remondo, J.; Bonachea, J.; Lucha, P.; Cendrero, A.
2009-10-01
Multiple sinkhole susceptibility models have been generated in three study areas of the Ebro Valley evaporite karst (NE Spain) applying different methods (nearest neighbour distance, sinkhole density, heuristic scoring system and probabilistic analysis) for each sinkhole type separately (cover collapse sinkholes, cover and bedrock collapse sinkholes and cover and bedrock sagging sinkholes). The quantitative and independent evaluation of the predictive capability of the models reveals that: (1) The most reliable susceptibility models are those derived from the nearest neighbour distance and sinkhole density. These models can be generated in a simple and rapid way from detailed geomorphological maps. (2) The reliability of the nearest neighbour distance and density models is conditioned by the degree of clustering of the sinkholes. Consequently, the karst areas in which sinkholes show a higher clustering are a priori more favourable for predicting new occurrences. (3) The predictive capability of the best models obtained in this research is significantly higher (12.5-82.5%) than that of the heuristic sinkhole susceptibility model incorporated into the General Urban Plan for the municipality of Zaragoza. Although the probabilistic approach provides lower quality results than the methods based on sinkhole proximity and density, it helps to identify the most significant factors and select the most effective mitigation strategies and may be applied to model susceptibility in different future scenarios.
NASA Astrophysics Data System (ADS)
Valdes, Pablo A.; Angelo, Joseph; Gioux, Sylvain
2015-03-01
Fluorescence imaging has shown promise as an adjunct to improve the extent of resection in neurosurgery and oncologic surgery. Nevertheless, current fluorescence imaging techniques do not account for the heterogeneous attenuation effects of tissue optical properties. In this work, we present a novel imaging system that performs real time quantitative fluorescence imaging using Single Snapshot Optical Properties (SSOP) imaging. We developed the technique and performed initial phantom studies to validate the quantitative capabilities of the system for intraoperative feasibility. Overall, this work introduces a novel real-time quantitative fluorescence imaging method capable of being used intraoperatively for neurosurgical guidance.
Li, Jia; Xia, Yunni; Luo, Xin
2014-01-01
OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether the process meets their quantitative quality requirements. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing the non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive moving-average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as the model input of the NMSPN and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy.
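The ARMA step could look like the following sketch, which fits one component's historical response-time series with statsmodels and converts the forecast to a firing rate; taking the rate as the reciprocal of the forecast response time is an assumption made here for illustration.

    # ARMA(1,1) forecast of service response times; data are placeholders.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(3)
    response_times = 0.2 + 0.05 * rng.standard_normal(200)   # seconds (placeholder)

    model = ARIMA(response_times, order=(1, 0, 1))           # ARMA(1,1) = ARIMA(1,0,1)
    fit = model.fit()
    forecast = fit.forecast(steps=5)                         # next 5 response times
    firing_rates = 1.0 / forecast                            # transitions per second
    print(firing_rates)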
Anomalous transport theory for the reversed field pinch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terry, P.W.; Hegna, C.C; Sovinec, C.R.
1996-09-01
Physically motivated transport models with predictive capabilities and significance beyond the reversed field pinch (RFP) are presented. It is shown that the ambipolar-constrained electron heat loss observed in MST can be quantitatively modeled by taking account of the clumping in parallel-streaming electrons and the resultant self-consistent interaction with collective modes; that the discrete dynamo process is a relaxation oscillation whose dependence on the tearing instability and profile relaxation physics leads to amplitude and period scaling predictions consistent with experiment; that the Lundquist number scaling in relaxed plasmas driven by magnetic turbulence has a weak S^(-1/4) scaling; and that radial E×B shear flow can lead to large reductions in the edge particle flux with little change in the heat flux, as observed in the RFP and tokamak. 24 refs.
Exploration Clinical Decision Support System: Medical Data Architecture
NASA Technical Reports Server (NTRS)
Lindsey, Tony; Shetye, Sandeep; Shaw, Tianna (Editor)
2016-01-01
The Exploration Clinical Decision Support (ECDS) System project is intended to enhance the Exploration Medical Capability (ExMC) Element for extended duration, deep-space mission planning in HRP. A major development guideline is the Risk of "Adverse Health Outcomes & Decrements in Performance due to Limitations of In-flight Medical Conditions". ECDS attempts to mitigate that Risk by providing crew-specific health information, actionable insight, and crew guidance and advice based on computational algorithmic analysis. The availability of in-flight health diagnostic computational methods has been identified as an essential capability for human exploration missions. In-flight electronic health data sources are often heterogeneous, and thus may be isolated or not examined as an aggregate whole. The ECDS System objective is to provide both a data architecture that collects and manages disparate health data, and an active knowledge system that analyzes health evidence to deliver case-specific advice. A single, cohesive space-ready decision support capability that considers all exploration clinical measurements is not commercially available at present. Hence, this Task is a newly coordinated development effort by which ECDS and its supporting data infrastructure will demonstrate the feasibility of intelligent data mining and predictive modeling as a biomedical diagnostic support mechanism on manned exploration missions. The initial step towards ground and flight demonstrations has been the research and development of both image- and clinical text-based computer-aided patient diagnosis. Human anatomical images displaying abnormal/pathological features have been annotated using controlled terminology templates, marked up, and then stored in compliance with the AIM standard. These images have been filtered and disease-characterized based on machine learning of semantic and quantitative feature vectors. The next phase will evaluate disease treatment response via quantitative linear dimension biomarkers that enable image content-based retrieval and criteria assessment. In addition, a data mining engine (DME) is applied to cross-sectional adult surveys for predicting the occurrence of renal calculi, ranked by statistical significance of demographics and specific food ingestion. Beyond this precursor space flight algorithm training, the DME will utilize a feature-engineering capability for unstructured clinical text classification health discovery. The ECDS backbone is a proposed multi-tier modular architecture providing data messaging protocols, storage, management and real-time patient data access. Technology demonstrations and success metrics will be finalized in FY16.
Shuttle Entry Imaging Using Infrared Thermography
NASA Technical Reports Server (NTRS)
Horvath, Thomas; Berry, Scott; Alter, Stephen; Blanchard, Robert; Schwartz, Richard; Ross, Martin; Tack, Steve
2007-01-01
During the Columbia Accident Investigation, imaging teams supporting debris shedding analysis were hampered by poor entry image quality and the general lack of information on optical signatures associated with a nominal Shuttle entry. After the accident, recommendations were made to NASA management to develop and maintain a state-of-the-art imagery database for Shuttle engineering performance assessments and to improve entry imaging capability to support anomaly and contingency analysis during a mission. As a result, the Space Shuttle Program sponsored an observation campaign to qualitatively characterize a nominal Shuttle entry over the widest possible Mach number range. The initial objectives focused on an assessment of capability to identify/resolve debris liberated from the Shuttle during entry, characterization of potential anomalous events associated with RCS jet firings and unusual phenomenon associated with the plasma trail. The aeroheating technical community viewed the Space Shuttle Program sponsored activity as an opportunity to influence the observation objectives and incrementally demonstrate key elements of a quantitative spatially resolved temperature measurement capability over a series of flights. One long-term desire of the Shuttle engineering community is to calibrate boundary layer transition prediction methodologies that are presently part of the Shuttle damage assessment process using flight data provided by a controlled Shuttle flight experiment. Quantitative global imaging may offer a complementary method of data collection to more traditional methods such as surface thermocouples. This paper reviews the process used by the engineering community to influence data collection methods and analysis of global infrared images of the Shuttle obtained during hypersonic entry. Emphasis is placed upon airborne imaging assets sponsored by the Shuttle program during Return to Flight. Visual and IR entry imagery were obtained with available airborne imaging platforms used within DoD along with agency assets developed and optimized for use during Shuttle ascent to demonstrate capability (i.e., tracking, acquisition of multispectral data, spatial resolution) and identify system limitations (i.e., radiance modeling, saturation) using state-of-the-art imaging instrumentation and communication systems. Global infrared intensity data have been transformed to temperature by comparison to Shuttle flight thermocouple data. Reasonable agreement is found between the flight thermography images and numerical prediction. A discussion of lessons learned and potential application to a potential Shuttle boundary layer transition flight test is presented.
NASA Astrophysics Data System (ADS)
Sun, Chi-Kuang; Wei, Ming-Liang; Su, Yu-Hsiang; Weng, Wei-Hung; Liao, Yi-Hua
2017-02-01
Harmonic generation microscopy is a noninvasive repetitive imaging technique that provides real-time 3D microscopic images of human skin with sub-femtoliter resolution and high penetration down to the reticular dermis. In this talk, we show that, with a strong resonance effect, the third-harmonic-generation (THG) modality provides enhanced contrast on melanin and allows not only differential diagnosis of various pigmented skin lesions but also quantitative imaging for long-term tracking. This unique capability makes THG microscopy the only label-free technique capable of identifying the active melanocytes in human skin and of imaging their different dendriticity patterns. In this talk, we will review our recent efforts to image melanin distribution in vivo and to quantitatively diagnose pigmented skin lesions using label-free harmonic generation biopsy. The talk will first cover the spectroscopic study on the melanin-enhanced THG effect in human cells and the calibration strategy inside human skin for quantitative imaging. We will then review our recent clinical trials, including a differential diagnosis capability study on pigmented skin tumors, as well as a quantitative virtual biopsy study on pre- and post-treatment evaluation of melasma and solar lentigo. Our study indicates the unmatched capability of harmonic generation microscopy to perform virtual biopsy for noninvasive histopathological diagnosis of various pigmented skin tumors, as well as its unsurpassed capability to noninvasively reveal the pathological origin of different hyperpigmentary diseases on the human face and to monitor the efficacy of laser depigmentation treatments. This work is sponsored by the National Health Research Institutes.
Hass, Joachim; Hertäg, Loreen; Durstewitz, Daniel
2016-01-01
The prefrontal cortex is centrally involved in a wide range of cognitive functions and their impairment in psychiatric disorders. Yet, the computational principles that govern the dynamics of prefrontal neural networks, and link their physiological, biochemical and anatomical properties to cognitive functions, are not well understood. Computational models can help to bridge the gap between these different levels of description, provided they are sufficiently constrained by experimental data and capable of predicting key properties of the intact cortex. Here, we present a detailed network model of the prefrontal cortex, based on a simple, computationally efficient single neuron model (simpAdEx), with all parameters derived from in vitro electrophysiological and anatomical data. Without additional tuning, this model could be shown to quantitatively reproduce a wide range of measures from in vivo electrophysiological recordings, to a degree where simulated and experimentally observed activities were statistically indistinguishable. These measures include spike train statistics, membrane potential fluctuations, local field potentials, and the transmission of transient stimulus information across layers. We further demonstrate that model predictions are robust against moderate changes in key parameters, and that synaptic heterogeneity is a crucial ingredient for the quantitative reproduction of in vivo-like electrophysiological behavior. Thus, we have produced a physiologically highly valid, in a quantitative sense, yet computationally efficient PFC network model, which helped to identify key properties underlying spike-time dynamics as observed in vivo, and which can be harnessed for in-depth investigation of the links between physiology and cognition. PMID:27203563
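For orientation, the sketch below integrates the standard AdEx equations with a forward-Euler step; the paper's simpAdEx is a simplified variant, so the equations and parameter values here are generic illustrations only.

    # Standard AdEx neuron, forward-Euler integration; parameters are generic.
    import numpy as np

    C, g_L, E_L = 200.0, 10.0, -70.0      # capacitance (pF), leak (nS), rest (mV)
    V_T, D_T = -50.0, 2.0                 # threshold and slope factor (mV)
    a, tau_w, b = 2.0, 150.0, 60.0        # adaptation: coupling (nS), time (ms), jump (pA)
    V_reset, V_spike = -58.0, 0.0         # reset and spike-cut voltages (mV)
    dt, I = 0.1, 500.0                    # step (ms), input current (pA)

    V, w, spikes = E_L, 0.0, []
    for step in range(int(1000 / dt)):    # 1 s of simulated time
        dV = (-g_L * (V - E_L) + g_L * D_T * np.exp((V - V_T) / D_T) - w + I) / C
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_spike:                  # spike: reset V, increment adaptation
            spikes.append(step * dt)
            V, w = V_reset, w + b
    print(f"{len(spikes)} spikes in 1 s")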
Liu, Jinxia; Cao, Yue; Wang, Qiu; Pan, Wenjuan; Ma, Fei; Liu, Changhong; Chen, Wei; Yang, Jianbo; Zheng, Lei
2016-01-01
Water-injected beef has aroused public concern as a major food-safety issue in meat products. In this study, the potential of multispectral imaging analysis in the visible and near-infrared (405-970 nm) regions was evaluated for identifying water-injected beef. A multispectral vision system was used to acquire images of beef injected with up to 21% water, and a partial least squares regression (PLSR) algorithm was employed to establish a prediction model, leading to quantitative estimation of the actual water increase with a correlation coefficient (r) of 0.923. Subsequently, an optimized model was achieved by integrating the spectral data with feature information extracted from ordinary RGB data, yielding better predictions (r = 0.946). Moreover, the prediction equation was applied to each pixel within the images to visualize the distribution of the actual water increase. These results demonstrate the capability of multispectral imaging technology as a rapid and non-destructive tool for the identification of water-injected beef. Copyright © 2015 Elsevier Ltd. All rights reserved.
Nano-QSPR Modelling of Carbon-Based Nanomaterials Properties.
Salahinejad, Maryam
2015-01-01
Evaluation of the chemical and physical properties of nanomaterials is of critical importance in a broad variety of nanotechnology research. There is increasing interest in computational methods capable of predicting the properties of new and modified nanomaterials in the absence of time-consuming and costly experimental studies. Quantitative Structure-Property Relationship (QSPR) approaches are progressive tools for the modelling and prediction of many physicochemical properties of nanomaterials, also known as nano-QSPR. This review provides insight into the concepts, challenges and applications of QSPR modelling of carbon-based nanomaterials. First, we provide a general overview of QSPR implications, focusing on the difficulties and limitations at each step of QSPR modelling of nanomaterials. This is followed by the most significant achievements of QSPR methods in modelling the properties of carbon-based nanomaterials and their recent applications to generate predictive models. The review specifically addresses QSPR modelling of the physicochemical properties of carbon-based nanomaterials including fullerenes, single-walled carbon nanotubes (SWNT), multi-walled carbon nanotubes (MWNT) and graphene.
Testing and analysis of flat and curved panels with multiple cracks
NASA Technical Reports Server (NTRS)
Broek, David; Jeong, David Y.; Thomson, Douglas
1994-01-01
An experimental and analytical investigation of multiple cracking in various types of test specimens is described in this paper. The testing phase is comprised of a flat unstiffened panel series and curved stiffened and unstiffened panel series. The test specimens contained various configurations for initial damage. Static loading was applied to these specimens until ultimate failure, while loads and crack propagation were recorded. This data provides the basis for developing and validating methodologies for predicting linkup of multiple cracks, progression to failure, and overall residual strength. The results from twelve flat coupon and ten full scale curved panel tests are presented. In addition, an engineering analysis procedure was developed to predict multiple crack linkup. Reasonable agreement was found between predictions and actual test results for linkup and residual strength for both flat and curved panels. The results indicate that an engineering analysis approach has the potential to quantitatively assess the effect of multiple cracks in the arrest capability of an aircraft fuselage structure.
Sresht, Vishnu; Lewandowski, Eric P; Blankschtein, Daniel; Jusufi, Arben
2017-08-22
A molecular modeling approach is presented with a focus on quantitative predictions of the surface tension of aqueous surfactant solutions. The approach combines classical Molecular Dynamics (MD) simulations with a molecular-thermodynamic theory (MTT) [Y. J. Nikas, S. Puvvada, D. Blankschtein, Langmuir 1992, 8, 2680]. The MD component is used to calculate thermodynamic and molecular parameters that are needed in the MTT model to determine the surface tension isotherm. The MD/MTT approach provides the important link between the surfactant bulk concentration, the experimental control parameter, and the surfactant surface concentration, the MD control parameter. We demonstrate the capability of the MD/MTT modeling approach on nonionic alkyl polyethylene glycol surfactants at the air-water interface and observe reasonable agreement of the predicted surface tensions and the experimental surface tension data over a wide range of surfactant concentrations below the critical micelle concentration. Our modeling approach can be extended to ionic surfactants and their mixtures with both ionic and nonionic surfactants at liquid-liquid interfaces.
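In the same spirit as the MTT link between bulk concentration and surface tension, the sketch below evaluates the classic Langmuir-Szyszkowski isotherm; it is not the authors' full MD/MTT model, and all parameter values are assumed.

    # Szyszkowski isotherm: surface tension vs bulk concentration below the CMC.
    import numpy as np

    R, T = 8.314, 298.15          # gas constant (J/(mol K)), temperature (K)
    gamma0 = 0.072                # pure-water surface tension (N/m)
    gamma_inf = 4.0e-6            # maximum surface excess (mol/m^2), assumed
    a = 1.0e-2                    # Langmuir constant (mol/m^3), assumed

    def surface_tension(c):
        """gamma(c) = gamma0 - R*T*Gamma_inf*ln(1 + c/a)."""
        return gamma0 - R * T * gamma_inf * np.log1p(c / a)

    for c in (1e-3, 1e-2, 1e-1):  # bulk concentrations (mol/m^3)
        print(c, surface_tension(c))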
Validation metrics for turbulent plasma transport
Holland, C.
2016-06-22
Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations is reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. Furthermore, the utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak, as part of a multi-year transport model validation activity.
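One simple realization of an uncertainty-aware local metric is sketched below: each simulation-observation discrepancy is normalized by the combined uncertainty and averaged over channels. This is an illustrative construction, not the paper's specific metric.

    # Uncertainty-weighted discrepancy metric; all values are placeholders.
    import numpy as np

    def validation_metric(sim, sim_err, obs, obs_err):
        """Mean normalized discrepancy; values near 1 mean agreement within
        the combined uncertainties."""
        sim, sim_err = np.asarray(sim), np.asarray(sim_err)
        obs, obs_err = np.asarray(obs), np.asarray(obs_err)
        d = np.abs(sim - obs) / np.sqrt(sim_err**2 + obs_err**2)
        return d.mean()

    # e.g., heat flux, particle flux, fluctuation amplitude (placeholder numbers)
    m = validation_metric(sim=[2.1, 0.8, 1.5e-2], sim_err=[0.3, 0.2, 0.4e-2],
                          obs=[1.8, 1.1, 1.1e-2], obs_err=[0.2, 0.15, 0.3e-2])
    print(m)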
Analysis, Simulation and Prediction of Cosmetic Defects on Automotive External Panel
NASA Astrophysics Data System (ADS)
Le Port, A.; Thuillier, S.; Borot, C.; Charbonneaux, J.
2011-08-01
The first feeling of quality for a vehicle is linked to its perfect appearance. This has a major impact on the reputation of a car manufacturer. Cosmetic defects are thus more and more taken into account in the process design. Qualifying a part as good or bad from the cosmetic point of view is mainly subjective: the part aspect is considered acceptable if no defect is visible on the vehicle by the final customer. Cosmetic defects that appear during sheet metal forming are checked by visual inspection in light inspection rooms, stoning, or with optical or mechanical sensors or feelers. A lack of cosmetic defect prediction before part production leads to the need for corrective actions, production delays and generates additional costs. This paper first explores the objective description of what cosmetic defects are on a stamped part and where they come from. It then investigates the capability of software to predict these defects, and suggests the use of a cosmetic defects analysis tool developed within PAM-STAMP 2G for its qualitative and quantitative prediction.
Validation metrics for turbulent plasma transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, C.
Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations is reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. Furthermore, the utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak, as part of a multi-year transport model validation activity.
Roff, Derek A; Mostowy, Serge; Fairbairn, Daphne J
2002-01-01
The concept of phenotypic trade-offs is a central element in evolutionary theory. In general, phenotypic models assume a fixed trade-off function, whereas quantitative genetic theory predicts that the trade-off function will change as a result of selection. For a linear trade-off function, selection will readily change the intercept but will have to be relatively stronger to change the slope. We test these predictions by examining the trade-off between fecundity and flight capability, as measured by dorso-longitudinal muscle mass, in four different populations of the sand cricket, Gryllus firmus. Three populations were recently derived from the wild, and the fourth had been in the laboratory for 19 years. We hypothesized that the laboratory population had most likely undergone more and different selection than the three wild populations and therefore should differ from these with respect to both slope and intercept. Because of geographic variation in selection, we predicted a general difference in intercept among the four populations. We further tested the hypothesis that this intercept would be correlated with the proportion macropterous and that this relationship would itself vary with the environmental conditions experienced during both the nymphal and adult periods. Observed variation in the phenotypic trade-off was consistent with the predictions of the quantitative genetic model. These results point to the importance of modeling trade-offs as dynamic rather than static relationships. We discuss how phenotypic models can incorporate such variation. The phenotypic trade-off between fecundity and dorso-longitudinal muscle mass is determined in part by variation in body size, illustrating the necessity of considering trade-offs to be multifactorial rather than simply bivariate relationships.
Lei, Tailong; Sun, Huiyong; Kang, Yu; Zhu, Feng; Liu, Hui; Zhou, Wenfang; Wang, Zhe; Li, Dan; Li, Youyong; Hou, Tingjun
2017-11-06
Xenobiotic chemicals and their metabolites are mainly excreted from our bodies by the urinary tract through the urine. Chemical-induced urinary tract toxicity is one of the main causes of failure during drug development, and it is a common adverse event for medications, natural supplements, and environmental chemicals. Despite its importance, there are only a few in silico models for assessing urinary tract toxicity for large numbers of compounds with diverse chemical structures. Here, we developed a series of qualitative and quantitative structure-activity relationship (QSAR) models for predicting urinary tract toxicity. In our study, the recursive feature elimination method incorporated with random forests (RFE-RF) was used for dimension reduction, and then eight machine learning approaches were used for QSAR modeling, i.e., relevance vector machine (RVM), support vector machine (SVM), regularized random forest (RRF), C5.0 trees, eXtreme gradient boosting (XGBoost), AdaBoost.M1, SVM boosting (SVMBoost), and RVM boosting (RVMBoost). For building classification models, the synthetic minority oversampling technique was used to handle the imbalanced data set problem. Among all the machine learning approaches, SVMBoost based on the RBF kernel achieves both the best quantitative (q²_ext = 0.845) and qualitative predictions for the test set (MCC of 0.787, AUC of 0.893, sensitivity of 89.6%, specificity of 94.1%, and global accuracy of 90.8%). The application domains were then analyzed, and all of the tested chemicals fall within the application domain coverage. We also examined the structural features of the chemicals with large prediction errors. In brief, both the regression and classification models developed with the SVMBoost approach have reliable prediction capability for assessing chemical-induced urinary tract toxicity.
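A sketch of the classification pipeline as described, using SMOTE from imbalanced-learn and an AdaBoost ensemble of RBF-kernel SVMs as a stand-in for SVMBoost; keyword names such as estimator and algorithm vary across scikit-learn versions, and the data are placeholders.

    # SMOTE balancing followed by boosted RBF-SVM classification.
    import numpy as np
    from imblearn.over_sampling import SMOTE
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(4)
    X = rng.normal(size=(300, 50))                  # descriptor matrix (placeholder)
    y = (rng.random(300) < 0.15).astype(int)        # imbalanced toxicity labels

    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

    svm_boost = AdaBoostClassifier(
        estimator=SVC(kernel="rbf", C=1.0),         # RBF-kernel base learner
        n_estimators=25, algorithm="SAMME", random_state=0)
    svm_boost.fit(X_bal, y_bal)
    print(svm_boost.score(X_bal, y_bal))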
Aura Satellite Mission: Oxford/RAL Spring School in Quantitative Earth Observation
NASA Technical Reports Server (NTRS)
Douglass, Anne
2005-01-01
The four instruments on Aura are providing new and exciting measurements of stratospheric and tropospheric ozone, species that contribute to ozone production and loss, and long-lived gases such as nitrous oxide and methane that provide information about atmospheric transport. These discussions of atmospheric chemistry will start with the basic principles of ozone production and loss. Aura data will be used where possible to illustrate the pertinent atmospheric processes. Three-dimensional model simulations will be used both to illustrate present capabilities in constituent modeling and to demonstrate how observations are used to evaluate and improve models and our ability to predict future ozone evolution.
Computer-oriented synthesis of wide-band non-uniform negative resistance amplifiers
NASA Technical Reports Server (NTRS)
Branner, G. R.; Chan, S.-P.
1975-01-01
This paper presents a synthesis procedure which provides design values for broad-band amplifiers using non-uniform negative resistance devices. Employing a weighted least squares optimization scheme, the technique, based on an extension of procedures for uniform negative resistance devices, is capable of providing designs for a variety of matching network topologies. It also provides, for the first time, quantitative results for predicting the effects of parameter element variations on overall amplifier performance. The technique is also unique in that it employs exact partial derivatives for optimization and sensitivity computation. In comparison with conventional procedures, significantly improved broad-band designs are shown to result.
ImatraNMR: Novel software for batch integration and analysis of quantitative NMR spectra
NASA Astrophysics Data System (ADS)
Mäkelä, A. V.; Heikkilä, O.; Kilpeläinen, I.; Heikkinen, S.
2011-08-01
Quantitative NMR spectroscopy is a useful and important tool for the analysis of various mixtures. Recently, in addition to traditional quantitative 1D 1H and 13C NMR methods, a variety of pulse sequences aimed at quantitative or semiquantitative analysis have been developed. To obtain usable results from quantitative spectra, they must be processed and analyzed with suitable software. Currently, there are many processing packages available from spectrometer manufacturers and third-party developers, and most of them are capable of analyzing and integrating quantitative spectra. However, they are mainly aimed at processing single or few spectra, and are slow and difficult to use when large numbers of spectra and signals are being analyzed, even when using pre-saved integration areas or custom scripting features. In this article, we present novel software, ImatraNMR, designed for batch analysis of quantitative spectra. In addition to the capability of analyzing a large number of spectra, it provides results in text and CSV formats, allowing further data analysis using spreadsheet programs or general analysis programs, such as Matlab. The software is written in Java, and thus it should run on any platform capable of providing Java Runtime Environment version 1.6 or newer; however, it has currently only been tested with Windows and Linux (Ubuntu 10.04). The software is free for non-commercial use, and is provided with source code upon request.
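Batch integration in the spirit described can be sketched as follows; this is an illustrative stand-in, not ImatraNMR itself, and the ppm regions and spectra are placeholders.

    # Integrate the same named ppm regions across many 1-D spectra; one CSV row
    # per spectrum.  Region names and limits are hypothetical.
    import csv
    import numpy as np

    regions = {"anomeric": (5.5, 4.5), "methyl": (1.2, 0.8)}   # (high, low) ppm

    def integrate(ppm, intensity, hi_ppm, lo_ppm):
        """Integrate intensity over [lo_ppm, hi_ppm]; abs() handles the
        descending ppm axis."""
        mask = (ppm <= hi_ppm) & (ppm >= lo_ppm)
        return abs(np.trapz(intensity[mask], ppm[mask]))

    ppm = np.linspace(10.0, 0.0, 4096)                          # shared ppm axis
    spectra = {f"sample_{i}": np.random.default_rng(i).random(4096)
               for i in range(3)}                               # placeholder spectra

    with open("integrals.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["spectrum", *regions])
        for name, inten in spectra.items():
            writer.writerow([name] + [integrate(ppm, inten, *r)
                                      for r in regions.values()])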
Hansen, N; Harper, M R; Green, W H
2011-12-07
An automated reaction mechanism generator is used to develop a predictive, comprehensive reaction mechanism for the high-temperature oxidation chemistry of n-butanol. This new kinetic model is an advancement of an earlier model, which had been extensively tested against earlier experimental data (Harper et al., Combust. Flame, 2011, 158, 16-41). In this study, the model's predictive capabilities are improved by targeting isomer-resolved quantitative mole fraction profiles of flame species in low-pressure flames. To this end, a total of three burner-stabilized premixed flames are isomer-selectively analyzed by flame-sampling molecular-beam time-of-flight mass spectrometry using photoionization by tunable vacuum-ultraviolet synchrotron radiation. For most species, the newly developed chemical kinetic model is capable of accurately reproducing the experimental trends in these flames. The results clearly indicate that n-butanol is mainly consumed by H-atom abstraction with H, O, and OH, forming predominantly the α-C4H9O radical (CH3CH2CH2•CHOH). Fission of C-C bonds in n-butanol is only predicted to be significant in a similar, but hotter, flame studied by Oßwald et al. (Combust. Flame, 2011, 158, 2-15). The water-elimination reaction to 1-butene is found to be of no importance under the premixed conditions studied here. The initially formed isomeric C4H9O radicals are predicted to further oxidize by reacting with H and O2 or to decompose to smaller fragments via β-scission. Enols are detected experimentally, with their importance being overpredicted by the model.
A conservative fully implicit algorithm for predicting slug flows
NASA Astrophysics Data System (ADS)
Krasnopolsky, Boris I.; Lukyanov, Alexander A.
2018-02-01
An accurate and predictive modelling of slug flows is required by many industries (e.g., oil and gas, nuclear engineering, chemical engineering) to prevent undesired events potentially leading to serious environmental accidents. For example, hydrodynamic and terrain-induced slugging leads to unwanted unsteady flow conditions. This demands the development of fast and robust numerical techniques for predicting slug flows. The study presented in this paper proposes a multi-fluid model and its implementation method accounting for phase appearance and disappearance. The numerical modelling of phase appearance and disappearance presents a complex numerical challenge for all multi-component and multi-fluid models. Numerical challenges arise from the singular systems of equations when some phases are absent and from the solution discontinuity when some phases appear or disappear. This paper provides a flexible and robust solution to these issues. The fully implicit formulation described in this work enables the governing fluid flow equations to be solved efficiently. The proposed numerical method provides a modelling capability for phase appearance and disappearance processes, based on a switching procedure between various sets of governing equations. These sets of equations are constructed using information about the number of phases present in the computational domain. The proposed scheme does not require an explicit truncation of solutions, leading to a conservative scheme for mass and linear momentum. A transient two-fluid model is used to verify and validate the proposed algorithm for conditions of hydrodynamic and terrain-induced slug flow regimes. The developed modelling capabilities allow all the major features of the experimental data to be predicted, in good quantitative agreement with them.
Social policies related to parenthood and capabilities of Slovenian parents.
Mrčela, Aleksandra Kanjuo; Sadar, Nevenka Černigoj
2011-01-01
We apply Sen's capability approach to evaluate the capabilities of Slovenian parents to reconcile paid work and family in the context of the transition to a market economy. We examine how different levels of capabilities together affect the work–life balance (WLB) of employed parents. We combine both quantitative and qualitative methodological approaches. The results of our quantitative and qualitative research show that increased precariousness of employment and intensification of work create gaps between the legal and normative possibilities for successful reconciliation strategies and actual use of such arrangements in Slovenia. The existing social policies and the acceptance of gender equality in the sphere of paid work enhance capabilities for reconciliation of paid work and parenthood, whereas the intensification of working lives, the dominance of paid work over other parts of life, and the acceptance of gender inequalities in parental and household responsibilities limit parents’ capabilities to achieve WLB.
Cao, Hui; Yan, Xingyu; Li, Yaojiang; Wang, Yanxia; Zhou, Yan; Yang, Sanchun
2014-01-01
Quantitative analysis of the flue gas of a natural gas-fired generator is significant for energy conservation and emission reduction. The traditional partial least squares method may not deal with nonlinear problems effectively. In this paper, a nonlinear partial least squares method with extended input based on a radial basis function neural network (RBFNN) is used for component prediction of flue gas. In the proposed method, the original independent input matrix is the input of the RBFNN, and the outputs of the hidden-layer nodes of the RBFNN form the extension terms of the original independent input matrix. Then, partial least squares regression is performed on the extended input matrix and the output matrix to establish the component prediction model of the flue gas. A near-infrared spectral dataset of flue gas from natural gas combustion is used to estimate the effectiveness of the proposed method compared with PLS. The experimental results show that the root-mean-square errors of the prediction values of the proposed method for methane, carbon monoxide, and carbon dioxide are reduced by 4.74%, 21.76%, and 5.32%, respectively, compared with those of PLS. Hence, the proposed method has higher predictive capability and better robustness.
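The extended-input construction can be sketched as below: Gaussian RBF outputs at k-means centres are appended to the original spectra before ordinary PLS regression. The centre count, kernel width, and data are assumptions.

    # RBF-extended PLS: widen the input matrix with RBF hidden-layer outputs.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(5)
    X = rng.random((120, 300))                  # NIR spectra (placeholder)
    Y = rng.random((120, 3))                    # CH4, CO, CO2 contents (placeholder)

    centres = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
    width = np.median(np.linalg.norm(X[:, None, :] - centres[None], axis=2))

    # Gaussian RBF hidden-layer outputs, one column per centre
    H = np.exp(-np.linalg.norm(X[:, None, :] - centres[None], axis=2)**2
               / (2 * width**2))

    X_ext = np.hstack([X, H])                   # original inputs + RBF extension
    pls = PLSRegression(n_components=8).fit(X_ext, Y)
    print(pls.predict(X_ext[:2]))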
Multimodal quantitative phase and fluorescence imaging of cell apoptosis
NASA Astrophysics Data System (ADS)
Fu, Xinye; Zuo, Chao; Yan, Hao
2017-06-01
Fluorescence microscopy, utilizing fluorescence labeling, has the capability to observe intercellular changes that transmitted- and reflected-light microscopy techniques cannot resolve. However, the parts without fluorescence labeling are not imaged, so processes that happen simultaneously in these parts cannot be revealed. Meanwhile, fluorescence imaging is 2D imaging, in which information along the depth direction is missing; therefore the information in the labeled parts is also incomplete. On the other hand, quantitative phase imaging is capable of imaging cells in 3D in real time through phase calculation. However, its resolution is limited by optical diffraction, and it cannot observe intercellular changes below 200 nanometers. In this work, fluorescence imaging and quantitative phase imaging are combined to build a multimodal imaging system. Such a system has the capability to simultaneously observe detailed intercellular phenomena and 3D cell morphology. In this study the proposed multimodal imaging system is used to observe cell behaviour during apoptosis. The aim is to highlight the limitations of fluorescence microscopy and to point out the advantages of multimodal quantitative phase and fluorescence imaging. The proposed multimodal quantitative phase imaging could be further applied in cell-related biomedical research, such as tumor studies.
3D-QSAR analysis of MCD inhibitors by CoMFA and CoMSIA.
Pourbasheer, Eslam; Aalizadeh, Reza; Ebadi, Amin; Ganjali, Mohammad Reza
2015-01-01
A three-dimensional quantitative structure-activity relationship was developed for a series of compounds acting as malonyl-CoA decarboxylase (MCD) antagonists using the CoMFA and CoMSIA methods. The statistical parameters for the CoMFA (q² = 0.558, r² = 0.841) and CoMSIA (q² = 0.615, r² = 0.870) models were derived based on 38 compounds as the training set, on the basis of the selected alignment. The external predictive abilities of the built models were evaluated using a test set of nine compounds. From the obtained results, the CoMSIA method was found to have higher predictive capability than the CoMFA method. Based on the CoMSIA and CoMFA contour maps, some features that can enhance the activity of compounds as MCD antagonists were identified and used to design new compounds with better inhibitory activity.
Comparison of Aircraft Icing Growth Assessment Software
NASA Technical Reports Server (NTRS)
Wright, William; Potapczuk, Mark G.; Levinson, Laurie H.
2011-01-01
A research project is underway to produce computer software that can accurately predict ice growth under any meteorological conditions for any aircraft surface. An extensive comparison of the results in a quantifiable manner against the database of ice shapes that have been generated in the NASA Glenn Icing Research Tunnel (IRT) has been performed, including additional data taken to extend the database in the Super-cooled Large Drop (SLD) regime. The project shows the differences in ice shape between LEWICE 3.2.2, GlennICE, and experimental data. The project addresses the validation of the software against a recent set of ice-shape data in the SLD regime. This validation effort mirrors a similar effort undertaken for previous validations of LEWICE. Those reports quantified the ice accretion prediction capabilities of the LEWICE software. Several ice geometry features were proposed for comparing ice shapes in a quantitative manner. The resulting analysis showed that LEWICE compared well to the available experimental data.
The Bragg Reflection Polarimeter On the Gravity and Extreme Magnetism Small Explorer Mission
NASA Astrophysics Data System (ADS)
Allured, Ryan; Griffiths, S.; Daly, R.; Prieskorn, Z.; Marlowe, H.; Kaaret, P.; GEMS Team
2011-09-01
The strong gravity associated with black holes warps the spacetime outside of the event horizon, and it is predicted that this will leave characteristic signatures on the polarization of X-ray emission originating in the accretion disk. The Gravity and Extreme Magnetism Small Explorer (GEMS) mission will be the first observatory with the capability to make polarization measurements with enough sensitivity to quantitatively test this prediction. Students at the University of Iowa are currently working on the development of the Bragg Reflection Polarimeter (BRP), a soft X-ray polarimeter sensitive at 500 eV, that is the student experiment on GEMS. The BRP will complement the main experiment by making a polarization measurement from accreting black holes below the main energy band (2-10 keV). This measurement will constrain the inclination of the accretion disk and tighten measurements of black hole spin.
Buske, Peter; Galle, Jörg; Barker, Nick; Aust, Gabriela; Clevers, Hans; Loeffler, Markus
2011-01-06
We introduce a novel dynamic model of stem cell and tissue organisation in murine intestinal crypts. Integrating the molecular, cellular and tissue levels of description, this model links a broad spectrum of experimental observations encompassing spatially confined cell proliferation, directed cell migration, multiple cell lineage decisions and clonal competition. Using computational simulations, we demonstrate that the model is capable of quantitatively describing and predicting the dynamic behaviour of the intestinal tissue during steady state as well as after cell damage and following selective gain- or loss-of-function manipulations affecting Wnt- and Notch-signalling. Our simulation results suggest that reversibility and flexibility of cellular decisions are key elements of robust tissue organisation of the intestine. We predict that the tissue should be able to fully recover after complete elimination of cellular subpopulations, including subpopulations deemed to be functional stem cells. This challenges current views of tissue stem cell organisation.
Quantitative validation of carbon-fiber laminate low velocity impact simulations
English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.
2015-09-26
Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed in conjunction with the simulations and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.
NASA Astrophysics Data System (ADS)
Matthews-Bird, F.; Gosling, W. D.; Brooks, S. J.; Montoya, E.; Coe, A. L.
2014-12-01
Chironomidae (non-biting midges) is a family of two-winged aquatic insects of the order Diptera. They are globally distributed and one of the most diverse families within aquatic ecosystems. The insects are stenotopic, and the rapid turnover of species and their ability to colonise favourable habitats quickly mean that chironomids are extremely sensitive to environmental change, notably temperature. Through the development of quantitative temperature inference models, chironomids have become important palaeoecological tools. Proxies capable of generating independent estimates of past climate are crucial to disentangling climate signals and ecosystem response in the palaeoecological record. This project has developed the first modern environmental calibration data set for using chironomids from the tropical Andes as quantitative climate proxies. Using surface sediments from c. 60 lakes in Bolivia, Peru and Ecuador, we have developed an inference model capable of reconstructing temperatures from fossil assemblages with a prediction error of 1-2°C. Here we present the first Lateglacial and Holocene chironomid-inferred temperature reconstructions from two sites in the tropical Andes. The first record, from a high-elevation (4153 m asl) lake in the Bolivian Andes, shows persistently cool temperatures for the past 15 kyr, punctuated by warm episodes in the early Holocene (9-10 kyr BP). The chironomid-inferred Holocene temperature trends from a lake sediment record on the eastern Andean flank of Ecuador (1248 m asl), spanning the last 5 millennia, are synchronous with temperature changes in the NGRIP ice core record. The temperature estimates suggest that, along the eastern flank of the Andes at lower latitudes (~1°S), climate closely resembled the well-established fluctuations of the Northern Hemisphere for this time period. Late-glacial climate fluctuations across South America are still disputed, with some palaeoecological records suggesting evidence for Younger Dryas-like events. Estimates from quantitative climate proxies such as chironomids will help constrain these patterns and further our understanding of climate teleconnections on Quaternary timescales.
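Such chironomid-temperature inference models are commonly built as weighted-averaging transfer functions; the sketch below shows that generic construction on placeholder data and is not necessarily the exact method used in this project.

    # Weighted-averaging (WA) transfer function on hypothetical calibration data.
    import numpy as np

    rng = np.random.default_rng(6)
    counts = rng.random((60, 40))            # 60 calibration lakes x 40 taxa
    temps = rng.uniform(2.0, 20.0, 60)       # observed lake temperatures (deg C)

    # Taxon optima: abundance-weighted mean of lake temperatures
    optima = counts.T @ temps / counts.sum(axis=0)

    def reconstruct(fossil_counts):
        """Fossil-sample temperature: abundance-weighted mean of taxon optima."""
        return fossil_counts @ optima / fossil_counts.sum()

    fossil = rng.random(40)                  # one fossil assemblage (placeholder)
    print(reconstruct(fossil))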
Rapid monitoring of grape withering using visible near-infrared spectroscopy.
Beghi, Roberto; Giovenzana, Valentina; Marai, Simone; Guidetti, Riccardo
2015-12-01
Wineries need practical, quick, non-destructive instruments able to quantitatively evaluate, during withering, the parameters that impact product quality. The aim of the work was to test an optical portable system (visible near-infrared (NIR) spectrophotometer) in a wavelength range of 400-1000 nm for the prediction of quality parameters of grape berries during withering. A total of 300 red grape samples (Vitis vinifera L., Corvina cultivar) harvested in vintage year 2012 from the Valpolicella area (Verona, Italy) were analyzed. Qualitative (principal component analysis, PCA) and quantitative (partial least squares regression algorithm, PLS) evaluations were performed on grape spectra. PCA showed a clear sample grouping for the different withering stages. PLS models gave encouraging predictive capabilities for soluble solids content (R(2) val = 0.62 and ratio performance deviation, RPD = 1.87) and firmness (R(2) val = 0.56 and RPD = 1.79). The work demonstrated the applicability of visible NIR spectroscopy as a rapid technique for the analysis of grape quality directly in barns, during withering. The sector could be provided with simple and inexpensive optical systems that could be used to monitor the withering degree of grapes for better management of the wine production process. © 2014 Society of Chemical Industry.
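The reported figures of merit, R(2)val and RPD (the standard deviation of the reference values divided by RMSEP), can be computed for any PLS calibration. A hedged sketch with synthetic stand-ins for the spectra and reference values:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 600))                         # stand-in for 400-1000 nm spectra
y = X[:, 50] * 2.0 + rng.normal(scale=0.5, size=300)    # stand-in for soluble solids

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

rmsep = np.sqrt(np.mean((y_te - y_hat) ** 2))
r2_val = 1 - np.sum((y_te - y_hat) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
rpd = y_te.std(ddof=1) / rmsep   # ratio of performance to deviation
print(f"R2_val = {r2_val:.2f}, RPD = {rpd:.2f}")
```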
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thrall, Brian D.; Minard, Kevin R.; Teeguarden, Justin G.
A Cooperative Research and Development Agreement (CRADA) was sponsored by Battelle Memorial Institute (Battelle, Columbus), to initiate a collaborative research program across multiple Department of Energy (DOE) National Laboratories aimed at developing a suite of new capabilities for predictive toxicology. Predicting the potential toxicity of emerging classes of engineered nanomaterials was chosen as one of two focusing problems for this program. PNNL’s focus toward this broader goal was to refine and apply experimental and computational tools needed to provide quantitative understanding of nanoparticle dosimetry for in vitro cell culture systems, which is necessary for comparative risk estimates for different nanomaterials or biological systems. Research conducted using lung epithelial and macrophage cell models successfully adapted magnetic particle detection and fluorescent microscopy technologies to quantify uptake of various forms of engineered nanoparticles, and provided experimental constraints and test datasets for benchmark comparison against results obtained using an in vitro computational dosimetry model, termed the ISSD model. The experimental and computational approaches developed were used to demonstrate how cell dosimetry is applied to aid in interpretation of genomic studies of nanoparticle-mediated biological responses in model cell culture systems. The combined experimental and theoretical approach provides a highly quantitative framework for evaluating relationships between biocompatibility of nanoparticles and their physical form in a controlled manner.
Nonlinear ultrasonics for material state awareness
NASA Astrophysics Data System (ADS)
Jacobs, L. J.
2014-02-01
Predictive health monitoring of structural components will require the development of advanced sensing techniques capable of providing quantitative information on the damage state of structural materials. By focusing on nonlinear acoustic techniques, it is possible to measure absolute, strength based material parameters that can then be coupled with uncertainty models to enable accurate and quantitative life prediction. Starting at the material level, this review will present current research that involves a combination of sensing techniques and physics-based models to characterize damage in metallic materials. In metals, these nonlinear ultrasonic measurements can sense material state, before the formation of micro- and macro-cracks. Typically, cracks of a measurable size appear quite late in a component's total life, while the material's integrity in terms of toughness and strength gradually decreases due to the microplasticity (dislocations) and associated change in the material's microstructure. This review focuses on second harmonic generation techniques. Since these nonlinear acoustic techniques are acoustic wave based, component interrogation can be performed with bulk, surface and guided waves using the same underlying material physics; these nonlinear ultrasonic techniques provide results which are independent of the wave type used. Recent physics-based models consider the evolution of damage due to dislocations, slip bands, interstitials, and precipitates in the lattice structure, which can lead to localized damage.
Validation of Heat Transfer and Film Cooling Capabilities of the 3-D RANS Code TURBO
NASA Technical Reports Server (NTRS)
Shyam, Vikram; Ameri, Ali; Chen, Jen-Ping
2010-01-01
The capabilities of the 3-D unsteady RANS code TURBO have been extended to include heat transfer and film cooling applications. The results of simulations performed with the modified code are compared to experiment and to theory, where applicable. Wilcox's k-ω turbulence model has been implemented to close the RANS equations. Two simulations are conducted: (1) flow over a flat plate and (2) flow over an adiabatic flat plate cooled by one hole inclined at 35° to the free stream. For (1), agreement with theory is found to be excellent for heat transfer, represented by local Nusselt number, and quite good for momentum, as represented by the local skin friction coefficient. This report compares the local skin friction coefficients and Nusselt numbers on a flat plate obtained using Wilcox's k-ω model with the theory of Blasius. The study looks at laminar and turbulent flows over an adiabatic flat plate and over an isothermal flat plate for two different wall temperatures. It is shown that TURBO is able to accurately predict heat transfer on a flat plate. For (2), TURBO shows good qualitative agreement with film cooling experiments performed on a flat plate with one cooling hole. Quantitatively, film effectiveness is underpredicted downstream of the hole.
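The flat-plate theory used for case (1) is classical: the Blasius local skin friction coefficient and the corresponding laminar local Nusselt number. A small sketch of the theory curves such a comparison is made against; the correlations are the standard textbook forms and the Reynolds numbers are illustrative:

```python
import numpy as np

def blasius_cf(re_x):
    """Laminar flat-plate local skin friction (Blasius): Cf = 0.664 / sqrt(Re_x)."""
    return 0.664 / np.sqrt(re_x)

def laminar_nu(re_x, pr):
    """Laminar flat-plate local Nusselt number: Nu_x = 0.332 Re_x^0.5 Pr^(1/3)."""
    return 0.332 * np.sqrt(re_x) * pr ** (1.0 / 3.0)

re_x = np.array([1e4, 1e5, 5e5])   # local Reynolds numbers along the plate
print(blasius_cf(re_x))            # theory curve a RANS solution is checked against
print(laminar_nu(re_x, pr=0.71))   # Pr ~ 0.71 for air
```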
A Revised Validation Process for Ice Accretion Codes
NASA Technical Reports Server (NTRS)
Wright, William B.; Porter, Christopher E.
2017-01-01
A research project is underway at NASA Glenn to produce computer software that can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report will present results from the latest LEWICE release, version 3.5. This program differs from previous releases in its ability to model mixed phase and ice crystal conditions such as those encountered inside an engine. It also has expanded capability to use structured grids and a new capability to use results from unstructured grid flow solvers. A quantitative comparison of the results against a database of ice shapes that have been generated in the NASA Glenn Icing Research Tunnel (IRT) has also been performed. This paper will extend the comparison of ice shapes between LEWICE 3.5 and experimental data from a previous paper. Comparisons of lift and drag are made between experimentally collected data from experimentally obtained ice shapes and simulated (CFD) data on simulated (LEWICE) ice shapes. Comparisons are also made between experimentally collected and simulated performance data on select experimental ice shapes to ensure the CFD solver, FUN3D, is valid within the flight regime. The results show that the predicted results are within the accuracy limits of the experimental data for the majority of cases.
Validation Process for LEWICE by Use of a Navier-Stokes Solver
NASA Technical Reports Server (NTRS)
Wright, William B.; Porter, Christopher E.
2017-01-01
A research project is underway at NASA Glenn to produce computer software that can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report will present results from the latest LEWICE release, version 3.5. This program differs from previous releases in its ability to model mixed phase and ice crystal conditions such as those encountered inside an engine. It also has expanded capability to use structured grids and a new capability to use results from unstructured grid flow solvers. A quantitative comparison of the results against a database of ice shapes that have been generated in the NASA Glenn Icing Research Tunnel (IRT) has also been performed. This paper will extend the comparison of ice shapes between LEWICE 3.5 and experimental data from a previous paper. Comparisons of lift and drag are made between experimentally collected data from experimentally obtained ice shapes and simulated (CFD) data on simulated (LEWICE) ice shapes. Comparisons are also made between experimentally collected and simulated performance data on select experimental ice shapes to ensure the CFD solver, FUN3D, is valid within the flight regime. The results show that the predicted results are within the accuracy limits of the experimental data for the majority of cases.
A Quantitative Study of Oxygen as a Metabolic Regulator
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; LaManna, Joseph C.; Cabera, Marco E.
2000-01-01
An acute reduction in oxygen delivery to a tissue is associated with metabolic changes aimed at maintaining ATP homeostasis. However, given the complexity of the human bioenergetic system, it is difficult to determine quantitatively how cellular metabolic processes interact to maintain ATP homeostasis during stress (e.g., hypoxia, ischemia, and exercise). In particular, we are interested in determining mechanisms relating cellular oxygen concentration to observed metabolic responses at the cellular, tissue, organ, and whole body levels and in quantifying how changes in tissue oxygen availability affect the pathways of ATP synthesis and the metabolites that control these pathways. In this study, we extend a previously developed mathematical model of human bioenergetics to provide a physicochemical framework that permits quantitative understanding of oxygen as a metabolic regulator. Specifically, the enhancement, sensitivity analysis, permits studying the effects of variations in tissue oxygenation and parameters controlling cellular respiration on glycolysis, lactate production, and pyruvate oxidation. The analysis can distinguish between parameters that must be determined accurately and those that require less precision, based on their effects on model predictions. This capability may prove to be important in optimizing experimental design, thus reducing use of animals.
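The sensitivity analysis described amounts to perturbing parameters of an ODE system and normalizing the resulting change in a model output. A generic finite-difference sketch on a toy two-pool model; this is not the authors' bioenergetics model, and the names k1 and k2 are hypothetical:

```python
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, k1, k2):
    # Toy two-pool kinetic system standing in for the bioenergetics model.
    a, b = y
    return [-k1 * a, k1 * a - k2 * b]

def output(params):
    sol = solve_ivp(model, (0, 10), [1.0, 0.0], args=tuple(params), t_eval=[10.0])
    return sol.y[1, -1]   # e.g. a final lactate-like pool size

p0 = np.array([0.5, 0.2])
base = output(p0)
for i, name in enumerate(["k1", "k2"]):
    dp = p0.copy()
    dp[i] *= 1.01                            # +1% perturbation
    s = (output(dp) - base) / base / 0.01    # normalized sensitivity coefficient
    print(f"S_{name} = {s:+.3f}")
# Large |S| marks parameters that must be determined accurately; small |S|
# marks those tolerating less precision -- the basis for experiment design.
```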
Advanced in-production hotspot prediction and monitoring with micro-topography
NASA Astrophysics Data System (ADS)
Fanton, P.; Hasan, T.; Lakcher, A.; Le-Gratiet, B.; Prentice, C.; Simiz, J.-G.; La Greca, R.; Depre, L.; Hunsche, S.
2017-03-01
At 28nm technology node and below, hotspot prediction and process window control across production wafers have become increasingly critical to prevent hotspots from becoming yield-limiting defects. We previously established proof of concept for a systematic approach to identify the most critical pattern locations, i.e. hotspots, in a reticle layout by computational lithography and combining process window characteristics of these patterns with across-wafer process variation data to predict where hotspots may become yield-impacting defects [1,2]. The current paper establishes the impact of micro-topography on a 28nm metal layer, and its correlation with hotspot best focus variations across a production chip layout. Detailed topography measurements are obtained from an offline tool, and pattern-dependent best focus (BF) shifts are determined from litho simulations that include mask-3D effects. We also establish hotspot metrology and defect verification by SEM image contour extraction and contour analysis. This enables detection of catastrophic defects as well as quantitative characterization of pattern variability, i.e. local and global CD uniformity, across a wafer to establish hotspot defect and variability maps. Finally, we combine defect prediction and verification capabilities for process monitoring by on-product, guided hotspot metrology, i.e. with sampling locations being determined from the defect prediction model, and achieved prediction accuracy (capture rate) around 75%.
Reliability Prediction of Ontology-Based Service Compositions Using Petri Net and Time Series Models
Li, Jia; Xia, Yunni; Luo, Xin
2014-01-01
OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether the process meets the quantitative quality requirement. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing the non-Markovian stochastic Petri net (NMSPN) and the time series model. The framework includes the following steps: obtaining the historical response time series of individual service components; fitting these series with an autoregressive moving-average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into a NMSPN model; employing the predicted firing rates as the model input of NMSPN and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy. PMID:24688429
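The ARMA fitting and firing-rate forecasting step can be sketched with statsmodels, since ARMA(p, q) is ARIMA with d = 0. The response-time series below is synthetic, and the reciprocal mapping from forecast response time to firing rate is a simplifying assumption:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
# Stand-in for the historical response times (ms) of one service component.
resp = 100 + 0.1 * np.cumsum(rng.normal(scale=0.5, size=200)) \
       + rng.normal(scale=2.0, size=200)

# Fit an ARMA(2, 1) model (d = 0) and forecast the next five response times.
fit = ARIMA(resp, order=(2, 0, 1)).fit()
forecast = fit.forecast(steps=5)

# Illustrative conversion: firing rate as the reciprocal of response time,
# to be fed into the NMSPN reliability calculation.
firing_rates = 1.0 / forecast
print(firing_rates)
```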
NASA Technical Reports Server (NTRS)
Hoehler, Tori M.; Albert, Daniel B.; Bebout, Brad M.; Turk, Kendra A.; DesMarais, David J.
2004-01-01
The ultimate potential of any microbial ecosystem to contribute chemically to its environment - and therefore, to impact planetary biogeochemistry or to generate recognizable biosignatures - depends not only on the individual metabolic capabilities of constituent organisms, but also on how those capabilities are expressed through interactions with neighboring organisms. This is particularly important for microbial mats, which compress an extremely broad range of metabolic potential into a small and dynamic system. H2 participates in many of these metabolic processes, including the major elemental cycling processes of photosynthesis, nitrogen fixation, sulfate reduction, and fermentation, and may therefore serve as a mediator of microbial interactions within the mat system. Collectively, the requirements of energy, electron transfer, and biomass element stoichiometry suggest quantitative relationships among the major element cycling processes as regards H2 metabolism. We determined experimentally the major contributions to H2 cycling in hypersaline microbial mats from Baja California, Mexico, and compared them to predicted relationships. Fermentation under dark, anoxic conditions is quantitatively the most important mechanism of H2 production, consistent with expectations for non-heterocystous mats such as those under study. Up to 16% of reducing equivalents fixed by photosynthesis during the day may be released by this mechanism. The direct contribution of nitrogen fixation to H2 production is small in comparison, but this process may indirectly stimulate substantial H2 generation, by requiring higher rates of fermentation. Sulfate reduction, aerobic consumption, diffusive and ebullitive loss, and possibly H2-based photoreduction of CO2 serve as the principal H2 sinks. Collectively, these processes interact to create an orders-of-magnitude daily variation in H2 concentrations and fluxes, and thereby in the oxidation-reduction potential that is imposed on microbial processes occurring within the mat matrix.
ImatraNMR: novel software for batch integration and analysis of quantitative NMR spectra.
Mäkelä, A V; Heikkilä, O; Kilpeläinen, I; Heikkinen, S
2011-08-01
Quantitative NMR spectroscopy is a useful and important tool for analysis of various mixtures. Recently, in addition to traditional quantitative 1D (1)H and (13)C NMR methods, a variety of pulse sequences aimed at quantitative or semiquantitative analysis have been developed. To obtain usable results from quantitative spectra, they must be processed and analyzed with suitable software. Currently, there are many processing packages available from spectrometer manufacturers and third-party developers, and most of them are capable of analyzing and integrating quantitative spectra. However, they are mainly aimed at processing single or a few spectra, and are slow and difficult to use when large numbers of spectra and signals are being analyzed, even when using pre-saved integration areas or custom scripting features. In this article, we present novel software, ImatraNMR, designed for batch analysis of quantitative spectra. In addition to the capability of analyzing a large number of spectra, it provides results in text and CSV formats, allowing further data analysis using spreadsheet programs or general analysis programs, such as Matlab. The software is written in Java, and thus it should run on any platform capable of providing Java Runtime Environment version 1.6 or newer; however, it has currently only been tested with Windows and Linux (Ubuntu 10.04). The software is free for non-commercial use, and is provided with source code upon request. Copyright © 2011 Elsevier Inc. All rights reserved.
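ImatraNMR itself is written in Java; purely as an illustration of the batch task it automates, integrating fixed regions across many 1D spectra and exporting CSV, here is a short Python sketch. The two-column (ppm, intensity) file layout, file names, and region table are all hypothetical:

```python
import csv
import numpy as np

# Assumed layout: each spectrum is a (ppm, intensity) two-column text file.
# Integration regions (ppm ranges) would normally come from a saved config.
regions = {"signal_A": (1.20, 1.35), "signal_B": (3.60, 3.75)}
spectra_files = ["sample_01.txt", "sample_02.txt"]   # hypothetical names

with open("integrals.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["spectrum"] + list(regions))
    for path in spectra_files:
        ppm, intensity = np.loadtxt(path, unpack=True)
        row = [path]
        for lo, hi in regions.values():
            mask = (ppm >= lo) & (ppm <= hi)
            # Trapezoidal integral over the region; abs() guards against
            # ppm axes stored in descending order.
            row.append(abs(np.trapz(intensity[mask], ppm[mask])))
        writer.writerow(row)
```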
Pärs, Martti; Gradmann, Michael; Gräf, Katja; Bauer, Peter; Thelakkat, Mukundan; Köhler, Jürgen
2014-01-01
We investigated the capability of molecular triads, consisting of two strong fluorophores covalently linked to a photochromic molecule, for optical gating. To do so, we monitored the fluorescence intensity of the fluorophores as a function of the isomeric state of the photoswitch. From the analysis of our data we develop a kinetic model that allows us to predict quantitatively the degree of the fluorescence modulation as a function of the mutual intensities of the lasers that are used to induce the fluorescence and the switching of the photochromic unit. We find that the achievable contrast for the modulation of the fluorescence depends mainly on the intensity ratio of the two light beams and appears to be very robust against absolute changes of these intensities. The latter result provides valuable information for the development of all-optical circuits, which would require handling different signal strengths for the input and output levels. PMID:24614963
A Unified Theory of Impact Crises and Mass Extinctions: Quantitative Tests
NASA Technical Reports Server (NTRS)
Rampino, Michael R.; Haggerty, Bruce M.; Pagano, Thomas C.
1997-01-01
Several quantitative tests of a general hypothesis linking impacts of large asteroids and comets with mass extinctions of life are possible based on astronomical data, impact dynamics, and geological information. The waiting times of large-body impacts on the Earth, derived from the flux of Earth-crossing asteroids and comets and the estimated size of impacts capable of causing large-scale environmental disasters, predict that impacts of objects greater than or equal to 5 km in diameter (greater than or equal to 10^7 Mt TNT equivalent) could be sufficient to explain the record of approximately 25 extinction pulses in the last 540 Myr, with the 5 recorded major mass extinctions related to impacts of the largest objects of greater than or equal to 10 km in diameter (greater than or equal to 10^8 Mt events). Smaller impacts (approximately 10^6 Mt), with significant regional environmental effects, could be responsible for the lesser boundaries in the geologic record.
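The arithmetic behind the flux argument is easy to check. A back-of-envelope sketch; the cumulative size-distribution exponent b = 2 is an assumed round value for illustration, not the paper's fit:

```python
# Back-of-envelope check of the abstract's argument.
rate_5km_per_myr = 25 / 540          # ~25 events >= 5 km in 540 Myr (from text)
mean_wait_5km = 1 / rate_5km_per_myr
print(f"mean waiting time for a >=5 km impactor: {mean_wait_5km:.0f} Myr")

# For a cumulative size distribution N(>D) ~ D^-b with b ~ 2 (assumed),
# >=10 km impactors are ~4x rarer than >=5 km ones.
b = 2.0
rate_10km_per_myr = rate_5km_per_myr * (10.0 / 5.0) ** (-b)
print(f"expected >=10 km events in 540 Myr: {rate_10km_per_myr * 540:.1f}")
# ~6 events, the same order as the 5 recorded major mass extinctions.
```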
NASA Astrophysics Data System (ADS)
Cederman, L.-E.; Conte, R.; Helbing, D.; Nowak, A.; Schweitzer, F.; Vespignani, A.
2012-11-01
A huge flow of quantitative social, demographic and behavioral data is becoming available that traces the activities and interactions of individuals, social patterns, transportation infrastructures and travel fluxes. Together with innovative computational techniques and methods for modeling social actions in hybrid (natural and artificial) societies, this has caused a qualitative change in the ways we model socio-technical systems. For the first time, society can be studied in a comprehensive fashion that addresses social and behavioral complexity. In other words, we are in a position to envision the development of large data and computational cyber-infrastructure defining an exploratory of society that provides quantitative anticipatory, explanatory and scenario analysis capabilities ranging from emerging infectious diseases to conflict and crime surges. The goal of the exploratory of society is to provide the basic infrastructure embedding the framework of tools and knowledge needed for the design of forecast/anticipatory/crisis management approaches to socio-technical systems, supporting future decision-making procedures by accelerating the scientific cycle that goes from data generation to predictions.
NASA Astrophysics Data System (ADS)
Kaminski, Thomas; Rayner, Peter Julian
2017-10-01
Various observational data streams have been shown to provide valuable constraints on the state and evolution of the global carbon cycle. These observations have the potential to reduce uncertainties in past, current, and predicted natural and anthropogenic surface fluxes. In particular such observations provide independent information for verification of actions as requested by the Paris Agreement. It is, however, difficult to decide which variables to sample, and how, where, and when to sample them, in order to achieve an optimal use of the observational capabilities. Quantitative network design (QND) assesses the impact of a given set of existing or hypothetical observations in a modelling framework. QND has been used to optimise in situ networks and assess the benefit to be expected from planned space missions. This paper describes recent progress and highlights aspects that are not yet sufficiently addressed. It demonstrates the advantage of an integrated QND system that can simultaneously evaluate a multitude of observational data streams and assess their complementarity and redundancy.
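In the usual Gaussian setting, QND reduces to computing how a candidate observation set shrinks the posterior covariance, A = (H^T R^-1 H + B^-1)^-1. A minimal sketch; all matrices are illustrative placeholders, not values from any real network assessment:

```python
import numpy as np

# Gaussian quantitative network design sketch: prior covariance B on two
# flux parameters, candidate observations with Jacobian H and noise R.
B = np.diag([1.0, 4.0])                 # prior covariance (illustrative units)
H = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.9, 0.9]])              # sensitivities of 3 candidate observations
R = np.diag([0.5, 0.5, 2.0])            # observation error covariance

A = np.linalg.inv(H.T @ np.linalg.inv(R) @ H + np.linalg.inv(B))
print("posterior variances:", np.diag(A))
# Dropping a row of H (removing an observation) and recomputing A quantifies
# that data stream's contribution -- its redundancy or complementarity.
```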
Gary D. Falk
1981-01-01
A systematic procedure for predicting the payload capability of running, live, and standing skylines is presented. Three hand-held calculator programs are used to predict payload capability that includes the effect of partial suspension. The programs allow for predictions for downhill yarding and for yarding away from the yarder. The equations and basic principles...
Berchialla, Paola; Scarinzi, Cecilia; Snidero, Silvia; Gregori, Dario
2016-08-01
Risk Assessment is the systematic study of decisions subject to uncertain consequences. Increasing interest has focused on modeling techniques like Bayesian Networks because of their capability of (1) combining in the probabilistic framework different types of evidence, including both expert judgments and objective data; (2) overturning previous beliefs in the light of new information being received; and (3) making predictions even with incomplete data. In this work, we proposed a comparison among Bayesian Networks and other classical Quantitative Risk Assessment techniques such as Neural Networks, Classification Trees, Random Forests and Logistic Regression models. Hybrid approaches, combining both Classification Trees and Bayesian Networks, were also considered. Among Bayesian Networks, a clear distinction between the purely data-driven approach and the combination of expert knowledge with objective data is made. The aim of this paper is to evaluate which of these models can best be applied, in the framework of Quantitative Risk Assessment, to assess the safety of children who are exposed to the risk of inhalation/insertion/aspiration of consumer products. The issue of preventing injuries in children is of paramount importance, in particular where product design is involved: quantifying the risk associated with product characteristics can be very useful in addressing product safety design regulation. Data from the European Registry of Foreign Bodies Injuries formed the starting evidence for risk assessment. Results showed that Bayesian Networks offer both ease of interpretability and accuracy in making predictions, even if simpler models like logistic regression still performed well. © The Author(s) 2013.
A multisite assessment of the quantitative capabilities of the Xpert MTB/RIF assay.
Blakemore, Robert; Nabeta, Pamela; Davidow, Amy L; Vadwai, Viral; Tahirli, Rasim; Munsamy, Vanisha; Nicol, Mark; Jones, Martin; Persing, David H; Hillemann, Doris; Ruesch-Gerdes, Sabine; Leisegang, Felicity; Zamudio, Carlos; Rodrigues, Camilla; Boehme, Catharina C; Perkins, Mark D; Alland, David
2011-11-01
The Xpert MTB/RIF is an automated molecular test for Mycobacterium tuberculosis that estimates bacterial burden by measuring the threshold cycle (Ct) of its M. tuberculosis-specific real-time polymerase chain reaction. Bacterial burden is an important biomarker for disease severity, infection control risk, and response to therapy. Our objective was to evaluate bacterial load quantitation by Xpert MTB/RIF compared with conventional quantitative methods. Xpert MTB/RIF results were compared with smear microscopy, semiquantitative solid culture, and time-to-detection in liquid culture for 741 patients and 2,008 samples tested in a multisite clinical trial. An internal control real-time polymerase chain reaction was evaluated for its ability to identify inaccurate quantitative Xpert MTB/RIF results. Assays with an internal control Ct greater than 34 were likely to be inaccurately quantitated; this represented 15% of M. tuberculosis-positive tests. Excluding these, decreasing M. tuberculosis Ct was associated with increasing smear microscopy grade for smears of concentrated sputum pellets (r(s) = -0.77) and directly from sputum (r(s) = -0.71). A Ct cutoff of approximately 27.7 best predicted smear-positive status. The association between M. tuberculosis Ct and time-to-detection in liquid culture (r(s) = 0.68) and semiquantitative colony counts (r(s) = -0.56) was weaker than smear. Tests of paired same-patient sputum showed that high-viscosity sputum samples contained 32× more M. tuberculosis than nonviscous samples. Comparisons between the grade of the acid-fast bacilli smear and Xpert MTB/RIF quantitative data across study sites enabled us to identify a site outlier in microscopy. Xpert MTB/RIF quantitation offers a new, standardized approach to measuring bacterial burden in the sputum of patients with tuberculosis.
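The reported associations are rank correlations between Ct and conventional quantitation, plus a Ct cutoff classifying smear status. A sketch with synthetic data; the simulated grades are not trial data, and only the 27.7 cutoff comes from the abstract:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
ct = rng.uniform(16, 36, size=200)   # synthetic Xpert M. tuberculosis Ct values
# Synthetic smear grades that decrease with Ct, plus noise (grades 0-3).
smear_grade = np.clip(np.round((34 - ct) / 5 + rng.normal(scale=0.5, size=200)), 0, 3)

rho, _ = stats.spearmanr(ct, smear_grade)
print(f"Spearman r_s = {rho:.2f}")   # the paper reports r_s around -0.77

# Classify smear-positive (grade > 0) by the Ct cutoff from the abstract.
cutoff = 27.7
pred_pos = ct < cutoff
true_pos = smear_grade > 0
sens = (pred_pos & true_pos).sum() / true_pos.sum()
spec = (~pred_pos & ~true_pos).sum() / (~true_pos).sum()
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```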
Sakuraba, Shun; Asai, Kiyoshi; Kameda, Tomoshi
2015-11-05
The dimerization free energies of RNA-RNA duplexes are fundamental values that represent the structural stability of RNA complexes. We report a comparative analysis of RNA-RNA duplex dimerization free-energy changes upon mutations, estimated from a molecular dynamics simulation and experiments. A linear regression for nine pairs of double-stranded RNA sequences, six base pairs each, yielded a mean absolute deviation of 0.55 kcal/mol and an R(2) value of 0.97, indicating quantitative agreement between simulations and experimental data. The observed accuracy indicates that the molecular dynamics simulation with the current molecular force field is capable of estimating the thermodynamic properties of RNA molecules.
The person with a spinal cord injury: an evolving prototype for life care planning.
Stiens, Steven A; Fawber, Heidi L; Yuhas, Steven A
2013-08-01
The sequelae of spinal cord injury (SCI) can provide a prototype for life care planning because the segmental design of the vertebrate body allows assessments to be quantitative, repeatable, and predictive of the injured person's impairments, self-care capabilities, and required assistance. Life care planning for patients with SCI uses a standard method that is comparable between planners, yet individualizes assessment and seeks resources that meet unique patient-centered needs in their communities of choice. Clinical care and rehabilitation needs organized with an SCI problem list promote collaboration by the interdisciplinary team, caregivers, and family in efficient achievement of patient-centered goals and completion of daily care plans. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
1974-01-01
After the simplified version of the 41-Node Stolwijk Metabolic Man Model was implemented on the Sigma 3 and UNIVAC 1110 computers in batch mode, it became desirable to make certain revisions. First, the availability of time-sharing terminals makes it possible to provide the capability and flexibility of conversational interaction between user and model. Second, recent physiological studies show the need to revise certain parameter values contained in the model. Third, it was desired to make quantitative and accurate predictions of evaporative water loss for humans in an orbiting space station. The results of the first phase of this effort are reported.
Cao, Pengxing; Tan, Xiahui; Donovan, Graham; Sanderson, Michael J; Sneyd, James
2014-08-01
The inositol trisphosphate receptor (IP3R) is one of the most important cellular components responsible for oscillations in the cytoplasmic calcium concentration. Over the past decade, two major questions about the IP3R have arisen. Firstly, how best should the IP3R be modeled? In other words, what fundamental properties of the IP3R allow it to perform its function, and what are their quantitative properties? Secondly, although calcium oscillations are caused by the stochastic opening and closing of small numbers of IP3Rs, is it possible for a deterministic model to be a reliable predictor of calcium behavior? Here, we answer these two questions, using airway smooth muscle cells (ASMC) as a specific example. Firstly, we show that periodic calcium waves in ASMC, as well as the statistics of calcium puffs in other cell types, can be quantitatively reproduced by a two-state model of the IP3R, and thus the behavior of the IP3R is essentially determined by its modal structure. The structure within each mode is irrelevant for function. Secondly, we show that, although calcium waves in ASMC are generated by a stochastic mechanism, IP3R stochasticity is not essential for a qualitative prediction of how oscillation frequency depends on model parameters, and thus deterministic IP3R models demonstrate the same level of predictive capability as do stochastic models. We conclude that, firstly, calcium dynamics can be accurately modeled using simplified IP3R models, and, secondly, to obtain qualitative predictions of how oscillation frequency depends on parameters it is sufficient to use a deterministic model.
Optimization of time-course experiments for kinetic model discrimination.
Lages, Nuno F; Cordeiro, Carlos; Sousa Silva, Marta; Ponces Freire, Ana; Ferreira, António E N
2012-01-01
Systems biology relies heavily on the construction of quantitative models of biochemical networks. These models must have predictive power to help unveil the underlying molecular mechanisms of cellular physiology, but it is also paramount that they are consistent with the data resulting from key experiments. Often, it is possible to find several models that describe the data equally well but provide significantly different quantitative predictions regarding particular variables of the network. In those cases, one is faced with a problem of model discrimination: the procedure of rejecting inappropriate models from a set of candidates in order to elect one as the best model to use for prediction. In this work, a method is proposed to optimize the design of enzyme kinetic assays with the goal of selecting a model among a set of candidates. We focus on models with systems of ordinary differential equations as the underlying mathematical description. The method provides a design where an extension of the Kullback-Leibler distance, computed over the time courses predicted by the models, is maximized. Given the asymmetric nature of this measure, a generalized differential evolution algorithm for multi-objective optimization problems was used. The kinetics of yeast glyoxalase I (EC 4.4.1.5) was chosen as a difficult test case to evaluate the method. Although a single-substrate kinetic model is usually considered, a two-substrate mechanism has also been proposed for this enzyme. We designed an experiment capable of discriminating between the two models by optimizing the initial substrate concentrations of glyoxalase I, in the presence of the subsequent pathway enzyme, glyoxalase II (EC 3.1.2.6). This discriminatory experiment was conducted in the laboratory and the results indicate a two-substrate mechanism for the kinetics of yeast glyoxalase I.
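The design criterion, maximizing a Kullback-Leibler-type distance between model-predicted time courses over the choice of initial substrate concentration, can be sketched as follows. Both rate laws below are toy stand-ins rather than the glyoxalase equations, and a single-objective optimizer replaces the generalized multi-objective differential evolution used in the work:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def model_one_substrate(t, s, v, km):
    return [-v * s[0] / (km + s[0])]

def model_two_substrate(t, s, v, km):
    # Crude stand-in for a two-substrate mechanism (not the paper's equations).
    return [-v * s[0] ** 2 / ((km + s[0]) * (km + 0.5 * s[0]))]

def neg_divergence(x):
    s0 = x[0]                                  # candidate initial concentration
    t_eval = np.linspace(0, 60, 50)
    y1 = solve_ivp(model_one_substrate, (0, 60), [s0], args=(1.0, 0.4),
                   t_eval=t_eval).y[0]
    y2 = solve_ivp(model_two_substrate, (0, 60), [s0], args=(1.0, 0.4),
                   t_eval=t_eval).y[0]
    p = y1 / y1.sum() + 1e-12                  # normalize time courses
    q = y2 / y2.sum() + 1e-12
    return -np.sum(p * np.log(p / q))          # negate: optimizer minimizes

res = differential_evolution(neg_divergence, bounds=[(0.1, 5.0)], seed=0)
print(f"most discriminating initial substrate: {res.x[0]:.2f} (a.u.)")
```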
Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models
Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby
2017-01-01
Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
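The sub-model idea can be sketched in a few lines: train restricted-range regressions plus a full-range model, route each unknown by the full-range estimate, and blend near the boundary. Everything below (data, component counts, blending weights) is illustrative, not the ChemCam calibration:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 100))                                  # stand-in LIBS spectra
y = 50 + 30 * np.tanh(X[:, 0]) + rng.normal(scale=2, size=400)   # e.g. wt% of an oxide

full = PLSRegression(n_components=4).fit(X, y)                   # full composition range
lo_mask, hi_mask = y < 50, y >= 50
low = PLSRegression(n_components=4).fit(X[lo_mask], y[lo_mask])  # low-range sub-model
high = PLSRegression(n_components=4).fit(X[hi_mask], y[hi_mask]) # high-range sub-model

def blended_predict(x):
    """Route by the full-range estimate; linearly blend near the boundary."""
    first = full.predict(x.reshape(1, -1)).item()
    w = np.clip((first - 40) / 20, 0, 1)       # 0 -> low model, 1 -> high model
    return (1 - w) * low.predict(x.reshape(1, -1)).item() \
         + w * high.predict(x.reshape(1, -1)).item()

print(blended_predict(X[0]))
```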
Huang, Lixing; Hu, Jiao; Su, Yongquan; Qin, Yingxue; Kong, Wendi; Ma, Ying; Xu, Xiaojin; Lin, Mao; Yan, Qingpi
2015-01-01
The capability of Vibrio alginolyticus to adhere to fish mucus is a key virulence factor of the bacterium. Our previous research showed that stress conditions, such as Cu(2+), Pb(2+), Hg(2+), and low pH, can reduce this adhesion ability. Non-coding (nc) RNAs play a crucial role in regulating bacterial gene expression, affecting the bacteria's pathogenicity. To investigate the mechanism(s) underlying the decline in adhesion ability caused by stressors, we combined high-throughput sequencing with computational techniques to detect stressed ncRNA dynamics. These approaches yielded three commonly altered ncRNAs that are predicted to regulate the bacterial chemotaxis pathway, which plays a key role in the adhesion process of bacteria. We hypothesized that they play a key role in the adhesion process of V. alginolyticus. In this study, we validated the effects of these three ncRNAs on their predicted target genes and their role in the V. alginolyticus adhesion process with RNA interference (RNAi), quantitative real-time polymerase chain reaction (qPCR), northern blot, capillary assay, and in vitro adhesion assays. The expression of these ncRNAs and their predicted target genes was confirmed by qPCR and northern blot, which reinforced the reliability of the sequencing data and the target prediction. Overexpression of these ncRNAs was capable of reducing the chemotactic and adhesion ability of V. alginolyticus, and the expression levels of their target genes were also significantly reduced. Our results indicated that these three ncRNAs: (1) are able to regulate the bacterial chemotaxis pathway, and (2) play a key role in the adhesion process of V. alginolyticus.
Validation metrics for turbulent plasma transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, C., E-mail: chholland@ucsd.edu
Developing accurate models of plasma dynamics is essential for confident predictive modeling of current and future fusion devices. In modern computer science and engineering, formal verification and validation processes are used to assess model accuracy and establish confidence in the predictive capabilities of a given model. This paper provides an overview of the key guiding principles and best practices for the development of validation metrics, illustrated using examples from investigations of turbulent transport in magnetically confined plasmas. Particular emphasis is given to the importance of uncertainty quantification and its inclusion within the metrics, and the need for utilizing synthetic diagnostics to enable quantitatively meaningful comparisons between simulation and experiment. As a starting point, the structure of commonly used global transport model metrics and their limitations is reviewed. An alternate approach is then presented, which focuses upon comparisons of predicted local fluxes, fluctuations, and equilibrium gradients against observation. The utility of metrics based upon these comparisons is demonstrated by applying them to gyrokinetic predictions of turbulent transport in a variety of discharges performed on the DIII-D tokamak [J. L. Luxon, Nucl. Fusion 42, 614 (2002)], as part of a multi-year transport model validation activity.
Artificial neural networks in gynaecological diseases: current and potential future applications.
Siristatidis, Charalampos S; Chrelias, Charalampos; Pouliakis, Abraham; Katsimanis, Evangelos; Kassanos, Dimitrios
2010-10-01
Current (and probably future) practice of medicine is mostly associated with prediction and accurate diagnosis. Especially in clinical practice, there is an increasing interest in constructing and using valid models of diagnosis and prediction. Artificial neural networks (ANNs) are mathematical systems being used as a prospective tool for reliable, flexible and quick assessment. They demonstrate high power in evaluating multifactorial data, assimilating information from multiple sources and detecting subtle and complex patterns. Their capability and difference from other statistical techniques lies in performing nonlinear statistical modelling. They represent a new alternative to logistic regression, which is the most commonly used method for developing predictive models for outcomes resulting from partitioning in medicine. In combination with the other non-algorithmic artificial intelligence techniques, they provide useful software engineering tools for the development of systems in quantitative medicine. Our paper first presents a brief introduction to ANNs, then, using what we consider the best available evidence through paradigms, we evaluate the ability of these networks to serve as first-line detection and prediction techniques in some of the most crucial fields in gynaecology. Finally, through the analysis of their current application, we explore their dynamics for future use.
Ye, Hao; Luo, Heng; Ng, Hui Wen; Meehan, Joe; Ge, Weigong; Tong, Weida; Hong, Huixiao
2016-01-01
ToxCast data have been used to develop models for predicting in vivo toxicity. To predict the in vivo toxicity of a new chemical using a ToxCast data-based model, its ToxCast bioactivity data are needed but not normally available. The capability of predicting ToxCast bioactivity data is necessary to fully utilize ToxCast data in the risk assessment of chemicals. We aimed to understand and elucidate the relationships between the chemicals and bioactivity data of the assays in ToxCast and to develop a network analysis-based method for predicting ToxCast bioactivity data. We conducted modularity analysis on a quantitative network constructed from ToxCast data to explore the relationships between the assays and chemicals. We further developed Nebula (neighbor-edges based and unbiased leverage algorithm) for predicting ToxCast bioactivity data. Modularity analysis on the network constructed from ToxCast data yielded seven modules. Assays and chemicals in the seven modules were distinct. Leave-one-out cross-validation yielded a Q(2) of 0.5416, indicating ToxCast bioactivity data can be predicted by Nebula. Prediction domain analysis showed some types of ToxCast assay data could be more reliably predicted by Nebula than others. Network analysis is a promising approach to understand ToxCast data. Nebula is an effective algorithm for predicting ToxCast bioactivity data, helping fully utilize ToxCast data in the risk assessment of chemicals. Published by Elsevier Ltd.
A method of predicting the energy-absorption capability of composite subfloor beams
NASA Technical Reports Server (NTRS)
Farley, Gary L.
1987-01-01
A simple method of predicting the energy-absorption capability of composite subfloor beam structure was developed. The method is based upon the weighted sum of the energy-absorption capabilities of the constituent elements of a subfloor beam. An empirical database of energy-absorption results from circular and square cross-section tube specimens was used in the prediction. The procedure is applicable to a wide range of subfloor beam structures. The procedure was demonstrated on three subfloor beam concepts. Agreement between test and prediction was within seven percent for all three cases.
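The prediction itself is a weighted sum of constituent-element energy-absorption values. A tiny sketch; the element values and weights are placeholders, not the empirical database from the report:

```python
# Minimal sketch of the weighted-sum prediction.
elements = {            # specific energy absorption of constituent elements (kJ/kg)
    "circular_tube": 45.0,
    "square_tube": 30.0,
}
weights = {             # fraction each element contributes to the beam
    "circular_tube": 0.35,
    "square_tube": 0.65,
}
beam_sea = sum(weights[k] * elements[k] for k in elements)
print(f"predicted beam energy absorption: {beam_sea:.1f} kJ/kg")
```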
A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images
Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.
1986-01-01
The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capabilities to produce the images themselves. This is an ironic paradox: on the one hand, the new 3-D and 4-D imaging capabilities promise significant potential for providing greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever possible before; on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for full exploitation of these capabilities. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigation and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation to which all the information in a large 3-D image data base is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.
Lewis, Richard L; Shvartsman, Michael; Singh, Satinder
2013-07-01
We explore the idea that eye-movement strategies in reading are precisely adapted to the joint constraints of task structure, task payoff, and processing architecture. We present a model of saccadic control that separates a parametric control policy space from a parametric machine architecture, the latter based on a small set of assumptions derived from research on eye movements in reading (Engbert, Nuthmann, Richter, & Kliegl, 2005; Reichle, Warren, & McConnell, 2009). The eye-control model is embedded in a decision architecture (a machine and policy space) that is capable of performing a simple linguistic task integrating information across saccades. Model predictions are derived by jointly optimizing the control of eye movements and task decisions under payoffs that quantitatively express different desired speed-accuracy trade-offs. The model yields distinct eye-movement predictions for the same task under different payoffs, including single-fixation durations, frequency effects, accuracy effects, and list position effects, and their modulation by task payoff. The predictions are compared to, and found to accord with, eye-movement data obtained from human participants performing the same task under the same payoffs, but they are found not to accord as well when the assumptions concerning payoff optimization and processing architecture are varied. These results extend work on rational analysis of oculomotor control and adaptation of reading strategy (Bicknell & Levy; McConkie, Rayner, & Wilson, 1973; Norris, 2009; Wotschack, 2009) by providing evidence for adaptation at low levels of saccadic control that is shaped by quantitatively varying task demands and the dynamics of processing architecture. Copyright © 2013 Cognitive Science Society, Inc.
Assessment of and standardization for quantitative nondestructive test
NASA Technical Reports Server (NTRS)
Neuschaefer, R. W.; Beal, J. B.
1972-01-01
Present capabilities and limitations of nondestructive testing (NDT) as applied to aerospace structures during design, development, production, and operational phases are assessed. The assessment helps determine what useful structural quantitative and qualitative data may be provided from raw materials through vehicle refurbishment. It considers metal alloy systems and bonded composites presently applied in active NASA programs or strong contenders for future use. Quantitative and qualitative data have been summarized from recent literature and in-house information, and presented along with a description of those structures or standards from which the information was obtained. Examples, in tabular form, of NDT technique capabilities and limitations have been provided. NDT techniques discussed and assessed were radiography, ultrasonics, penetrants, thermal, acoustic, and electromagnetic. Quantitative data are sparse; therefore, obtaining statistically reliable flaw detection data must be strongly emphasized. The new requirements for reusable space vehicles have resulted in highly efficient design concepts operating in severe environments. This increases the need for quantitative NDT evaluation of selected structural components, the end-item structure, and refurbishment operations.
Short-range quantitative precipitation forecasting using Deep Learning approaches
NASA Astrophysics Data System (ADS)
Akbari Asanjan, A.; Yang, T.; Gao, X.; Hsu, K. L.; Sorooshian, S.
2017-12-01
Predicting short-range quantitative precipitation is very important for flood forecasting, early flood warning and other hydrometeorological purposes. This study aims to improve precipitation forecasting skill using a recently developed and advanced machine learning technique named Long Short-Term Memory (LSTM). The proposed LSTM learns the changing patterns of clouds from Cloud-Top Brightness Temperature (CTBT) images, retrieved from the infrared channel of the Geostationary Operational Environmental Satellite (GOES), using a sophisticated and effective learning method. After learning the dynamics of clouds, the LSTM model predicts the upcoming rainy CTBT events. The proposed model is then merged with a precipitation estimation algorithm termed Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) to provide precipitation forecasts. The results of the merged LSTM with PERSIANN are compared to the results of an Elman-type Recurrent Neural Network (RNN) merged with PERSIANN and the Final Analysis of the Global Forecast System model over the states of Oklahoma, Florida and Oregon. The performance of each model is investigated during 3 storm events, each located over one of the study regions. The results indicate the outperformance of the merged LSTM forecasts compared to the numerical and statistical baselines in terms of Probability of Detection (POD), False Alarm Ratio (FAR), Critical Success Index (CSI), RMSE and correlation coefficient, especially in convective systems. The proposed method shows superior capabilities in short-term forecasting over the compared methods.
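The categorical verification scores come from a 2x2 contingency table of forecast versus observed rain events: POD = hits/(hits + misses), FAR = false alarms/(hits + false alarms), CSI = hits/(hits + misses + false alarms). A sketch with synthetic fields; the rain threshold and data are illustrative:

```python
import numpy as np

def categorical_scores(forecast, observed, threshold=0.1):
    """POD, FAR, CSI from binary rain/no-rain events at a threshold (mm/h)."""
    f = forecast >= threshold
    o = observed >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return pod, far, csi

rng = np.random.default_rng(5)
obs = rng.gamma(0.5, 2.0, size=10_000)              # synthetic rain field (mm/h)
fcst = obs * rng.normal(1.0, 0.3, size=obs.size)    # imperfect forecast of it
print("POD=%.2f FAR=%.2f CSI=%.2f" % categorical_scores(fcst, obs))
```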
Projected changes to growth and mortality of Hawaiian corals over the next 100 years.
Hoeke, Ron K; Jokiel, Paul L; Buddemeier, Robert W; Brainard, Russell E
2011-03-29
Recent reviews suggest that the warming and acidification of ocean surface waters predicted by most accepted climate projections will lead to mass mortality and declining calcification rates of reef-building corals. This study investigates the use of modeling techniques to quantitatively examine rates of coral cover change due to these effects. Broad-scale probabilities of change in shallow-water scleractinian coral cover in the Hawaiian Archipelago for years 2000-2099 A.D. were calculated assuming a single middle-of-the-road greenhouse gas emissions scenario. These projections were based on ensemble calculations of a growth and mortality model that used sea surface temperature (SST), atmospheric carbon dioxide (CO(2)), observed coral growth (calcification) rates, and observed mortality linked to mass coral bleaching episodes as inputs. SST and CO(2) predictions were derived from the World Climate Research Programme (WCRP) multi-model dataset, statistically downscaled with historical data. The model calculations illustrate a practical approach to systematic evaluation of climate change effects on corals, and also show the effect of uncertainties in current climate predictions and in coral adaptation capabilities on estimated changes in coral cover. Despite these large uncertainties, this analysis quantitatively illustrates that a large decline in coral cover is highly likely in the 21st Century, but that there are significant spatial and temporal variances in outcomes, even under a single climate change scenario.
Projected changes to growth and mortality of Hawaiian corals over the next 100 years
Hoeke, R.K.; Jokiel, P.L.; Buddemeier, R.W.; Brainard, R.E.
2011-01-01
Background: Recent reviews suggest that the warming and acidification of ocean surface waters predicted by most accepted climate projections will lead to mass mortality and declining calcification rates of reef-building corals. This study investigates the use of modeling techniques to quantitatively examine rates of coral cover change due to these effects. Methodology/Principal Findings: Broad-scale probabilities of change in shallow-water scleractinian coral cover in the Hawaiian Archipelago for years 2000-2099 A.D. were calculated assuming a single middle-of-the-road greenhouse gas emissions scenario. These projections were based on ensemble calculations of a growth and mortality model that used sea surface temperature (SST), atmospheric carbon dioxide (CO2), observed coral growth (calcification) rates, and observed mortality linked to mass coral bleaching episodes as inputs. SST and CO2 predictions were derived from the World Climate Research Programme (WCRP) multi-model dataset, statistically downscaled with historical data. Conclusions/Significance: The model calculations illustrate a practical approach to systematic evaluation of climate change effects on corals, and also show the effect of uncertainties in current climate predictions and in coral adaptation capabilities on estimated changes in coral cover. Despite these large uncertainties, this analysis quantitatively illustrates that a large decline in coral cover is highly likely in the 21st Century, but that there are significant spatial and temporal variances in outcomes, even under a single climate change scenario.
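The ensemble approach can be caricatured as Monte Carlo propagation of uncertain growth and bleaching-mortality parameters through a simple cover-update rule. The sketch below is a generic illustration with invented numbers, not the authors' parameterization:

```python
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(2000, 2100)
n_ens = 1000

# Illustrative warming path and uncertain biology: growth declines over time
# (acidification proxy); episodic bleaching mortality when SST anomaly is high.
sst_anom = 0.02 * (years - 2000) + rng.normal(0, 0.15, (n_ens, years.size))
growth = rng.normal(0.04, 0.01, (n_ens, 1)) * (1 - 0.003 * (years - 2000))
bleach_mort = np.where(sst_anom > 1.0, rng.uniform(0.1, 0.4, sst_anom.shape), 0.0)

cover = np.empty((n_ens, years.size))
cover[:, 0] = 0.5                       # initial fractional coral cover
for t in range(1, years.size):
    cover[:, t] = np.clip(
        cover[:, t - 1] * (1 + growth[:, t] - bleach_mort[:, t]), 0, 1)

lo, med, hi = np.percentile(cover[:, -1], [5, 50, 95])
print(f"2099 coral cover: median {med:.2f} (90% range {lo:.2f}-{hi:.2f})")
```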
Bi, Jianjun; Song, Rengang; Yang, Huilan; Li, Bingling; Fan, Jianyong; Liu, Zhongrong; Long, Chaoqin
2011-01-01
Identification of immunodominant epitopes is the first step in the rational design of peptide vaccines aimed at T-cell immunity. To date, however, it remains a great challenge to accurately predict potent epitope peptides from a pool of large-scale candidates in an efficient manner. In this study, a method named StepRank has been developed for the reliable and rapid prediction of binding capabilities/affinities between proteins and genome-wide peptides. In this procedure, instead of the single strategy used in most traditional epitope identification algorithms, four steps with different purposes and thus different computational demands are employed in turn to screen the large-scale peptide candidates that are normally generated from, for example, a pathogenic genome. Steps 1 and 2 aim at qualitative exclusion of typical nonbinders by using an empirical rule and a linear statistical approach, while steps 3 and 4 focus on quantitative examination and prediction of the interaction energy profile and binding affinity of peptide to target protein via quantitative structure-activity relationship (QSAR) and structure-based free energy analysis. We exemplify this method through its application to binding predictions of the peptide segments derived from the 76 known open-reading frames (ORFs) of the herpes simplex virus type 1 (HSV-1) genome with or without affinity to the human major histocompatibility complex class I (MHC I) molecule HLA-A*0201, and find that the predictive results are compatible with the classical anchor residue theory and match well the extended motif pattern of MHC I-binding peptides. The putative epitopes are further confirmed by comparisons with 11 experimentally measured HLA-A*0201-restricted peptides from the HSV-1 glycoproteins D and K. We expect that this well-designed scheme can be applied in the computational screening of other viral genomes as well.
Quantitative imaging of the human upper airway: instrument design and clinical studies
NASA Astrophysics Data System (ADS)
Leigh, M. S.; Armstrong, J. J.; Paduch, A.; Sampson, D. D.; Walsh, J. H.; Hillman, D. R.; Eastwood, P. R.
2006-08-01
Imaging of the human upper airway is widely used in medicine, in both clinical practice and research. Common imaging modalities include video endoscopy, X-ray CT, and MRI. However, no current modality is both quantitative and safe to use for extended periods of time. Such a capability would be particularly valuable for sleep research, which is inherently reliant on long observation sessions. We have developed an instrument capable of quantitative imaging of the human upper airway, based on endoscopic optical coherence tomography. There are no dose limits for optical techniques, and the minimally invasive imaging probe is safe for use in overnight studies. We report on the design of the instrument and its use in preliminary clinical studies, and we present results from a range of initial experiments. The experiments show that the instrument is capable of imaging during sleep, and that it can record dynamic changes in airway size and shape. This information is useful for research into sleep disorders, and potentially for clinical diagnosis and therapies.
NASA Astrophysics Data System (ADS)
Indi Sriprisan, Sirikul; Townsend, Lawrence; Cucinotta, Francis A.; Miller, Thomas M.
Purpose: An analytical knockout-ablation-coalescence model capable of making quantitative predictions of the neutron spectra from high-energy nucleon-nucleus and nucleus-nucleus collisions is being developed for use in space radiation protection studies. The FORTRAN computer code that implements this model is called UBERNSPEC. The knockout or abrasion stage of the model is based on Glauber multiple scattering theory. The ablation part of the model uses the classical evaporation model of Weisskopf-Ewing. In earlier work, the knockout-ablation model was extended to incorporate important coalescence effects into the formalism. Recently, alpha coalescence has been incorporated, adding the ability to predict light ion spectra with the coalescence model. The earlier versions were limited to nuclei with mass numbers less than 69. In this work, the UBERNSPEC code has been extended to make predictions of secondary neutron and light ion production from the interactions of heavy charged particles with higher mass numbers (as large as 238). The predictions are compared with published measurements of neutron spectra and light ion energies for a variety of collision pairs. Furthermore, the predicted spectra from this work are compared with the predictions from the recently developed heavy ion event generator incorporated in the Monte Carlo radiation transport code HETC-HEDS.
The evolution of trade-offs under directional and correlational selection.
Roff, Derek A; Fairbairn, Daphne J
2012-08-01
Using quantitative genetic theory, we develop predictions for the evolution of trade-offs in response to directional and correlational selection. We predict that directional selection favoring an increase in one trait in a trade-off will result in a change in the intercept but not the slope of the trade-off function, with the mean value of the selected trait increasing and that of the correlated trait decreasing. Natural selection will generally favor an increase in some combination of trait values, which can be represented as directional selection on an index value. Such selection induces both directional and correlational selection on the component traits. Theory predicts that selection on an index value will also change the intercept but not the slope of the trade-off function, but because of correlational selection, the changes in component traits may be in the same or opposite directions. We test these predictions using artificial selection on the well-established trade-off between fecundity and flight capability in the cricket Gryllus firmus, and compare the empirical results with a priori predictions made using genetic parameters from a separate half-sibling experiment. Our results support the predictions and illustrate the complexity of trade-off evolution when component traits are subject to both directional and correlational selection. © 2012 The Author(s). Evolution © 2012 The Society for the Study of Evolution.
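The predictions above follow from the multivariate breeder's equation, Δz̄ = Gβ: the response of the trait means equals the genetic (co)variance matrix times the selection gradient. A minimal numerical sketch, assuming a hypothetical G matrix rather than the half-sib estimates for Gryllus firmus:

```python
import numpy as np

# Illustration of the prediction delta_zbar = G @ beta. The G matrix below
# is hypothetical, not the published estimates for fecundity and flight
# capability in Gryllus firmus.
G = np.array([[1.0, -0.6],
              [-0.6, 1.0]])   # genetic (co)variances: fecundity, flight capability

# Directional selection on trait 1 only:
beta_dir = np.array([0.5, 0.0])
print(G @ beta_dir)            # trait 1 rises, trait 2 falls: the intercept shifts

# Selection on an index z1 + z2 induces directional + correlational selection:
beta_index = np.array([0.5, 0.5])
print(G @ beta_index)          # responses set by G; slope of the trade-off unchanged
```

With a negative genetic covariance, direct selection on one trait drags the correlated trait down, moving the intercept of the trade-off function while the slope, which is set by G, stays put.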
Brocher, Thomas M.; Carr, Michael D.; Halsing, David L.; John, David A.; Langenheim, V.E.; Mangan, Margaret T.; Marvin-DiPasquale, Mark C.; Takekawa, John Y.; Tiedeman, Claire
2006-01-01
In the spring of 2004, the U.S. Geological Survey (USGS) Menlo Park Center Council commissioned an interdisciplinary working group to develop a forward-looking science strategy for the USGS Menlo Park Science Center in California (hereafter also referred to as "the Center"). The Center has been the flagship research center for the USGS in the western United States for more than 50 years, and the Council recognizes that science priorities must be the primary consideration guiding critical decisions made about the future evolution of the Center. In developing this strategy, the working group consulted widely within the USGS and with external clients and collaborators, so that most stakeholders had an opportunity to influence the science goals and operational objectives.
The Science Goals are to:
Natural Hazards: Conduct natural-hazard research and assessments critical to effective mitigation planning, short-term forecasting, and event response.
Ecosystem Change: Develop a predictive understanding of ecosystem change that advances ecosystem restoration and adaptive management.
Natural Resources: Advance the understanding of natural resources in a geologic, hydrologic, economic, environmental, and global context.
Modeling Earth System Processes: Increase and improve capabilities for quantitative simulation, prediction, and assessment of Earth system processes.
The strategy presents seven key Operational Objectives with specific actions to achieve the scientific goals. These Operational Objectives are to:
Provide a hub for technology, laboratories, and library services to support science in the Western Region.
Increase advanced computing capabilities and promote sharing of these resources.
Enhance the intellectual diversity, vibrancy, and capacity of the work force through improved recruitment and retention.
Strengthen client and collaborative relationships in the community at an institutional level.
Expand monitoring capability by increasing density, sensitivity, and efficiency and reducing costs of instruments and networks.
Encourage a breadth of scientific capabilities in Menlo Park to foster interdisciplinary science.
Communicate USGS science to a diverse audience.
Kinetic Modeling of Sunflower Grain Filling and Fatty Acid Biosynthesis
Durruty, Ignacio; Aguirrezábal, Luis A. N.; Echarte, María M.
2016-01-01
Grain growth and oil biosynthesis are complex processes that involve various enzymes placed in different sub-cellular compartments of the grain. In order to understand the mechanisms controlling grain weight and composition, we need mathematical models capable of simulating the dynamic behavior of the main components of the grain during the grain filling stage. In this paper, we present a non-structured mechanistic kinetic model developed for sunflower grains. The model was first calibrated for sunflower hybrid ACA855. The calibrated model was able to predict the theoretical amount of carbohydrate equivalents allocated to the grain, grain growth and the dynamics of the oil and non-oil fraction, while considering maintenance requirements and leaf senescence. Incorporating into the model the serial-parallel nature of fatty acid biosynthesis permitted a good representation of the kinetics of palmitic, stearic, oleic, and linoleic acids production. A sensitivity analysis showed that the relative influence of input parameters changed along grain development. Grain growth was mostly affected by the specific growth parameter (μ′) while fatty acid composition strongly depended on their own maximum specific rate parameters. The model was successfully applied to two additional hybrids (MG2 and DK3820). The proposed model can be the first building block toward the development of a more sophisticated model, capable of predicting the effects of environmental conditions on grain weight and composition, in a comprehensive and quantitative way. PMID:27242809
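As a hedged illustration of the serial part of the pathway, the palmitic to stearic to oleic to linoleic chain can be written as first-order kinetics; all rate constants and the substrate supply term below are invented, not the calibrated values for ACA855:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the serial fatty acid chain (palmitic -> stearic -> oleic ->
# linoleic) as first-order steps fed by a carbohydrate-equivalent supply.
# All parameter values are illustrative placeholders.
k = dict(p2s=0.30, s2o=0.50, o2l=0.20)  # 1/day, hypothetical
supply = 1.0                             # substrate influx, hypothetical

def rhs(t, y):
    pal, ste, ole, lin = y
    return [supply - k["p2s"] * pal,
            k["p2s"] * pal - k["s2o"] * ste,
            k["s2o"] * ste - k["o2l"] * ole,
            k["o2l"] * ole]

sol = solve_ivp(rhs, (0, 40), [0.0, 0.0, 0.0, 0.0])  # ~grain-filling window, days
print(sol.y[:, -1])  # final amounts of the four fatty acids
```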
Phenotypic characterization of glioblastoma identified through shape descriptors
NASA Astrophysics Data System (ADS)
Chaddad, Ahmad; Desrosiers, Christian; Toews, Matthew
2016-03-01
This paper proposes quantitatively describing the shape of glioblastoma (GBM) tissue phenotypes as a set of shape features derived from segmentations, for the purposes of discriminating between GBM phenotypes and monitoring tumor progression. GBM patients were identified from the Cancer Genome Atlas, and quantitative MR imaging data were obtained from the Cancer Imaging Archive. Three GBM tissue phenotypes are considered: necrosis, active tumor, and edema/invasion. Volumetric tissue segmentations are obtained from registered T1-weighted (T1-WI) postcontrast and fluid-attenuated inversion recovery (FLAIR) MRI modalities. Shape features are computed from the respective tissue phenotype segmentations, and a Kruskal-Wallis test was employed to select features capable of classification at a significance level of p < 0.05. Several classifier models are employed to distinguish phenotypes, with leave-one-out cross-validation. Eight features were found statistically significant for classifying GBM phenotypes (p < 0.05); orientation was uninformative. Quantitative evaluations show that the SVM achieves the highest classification accuracy of 87.50%, sensitivity of 94.59% and specificity of 92.77%. In summary, the shape descriptors proposed in this work show high performance in predicting GBM tissue phenotypes. They are thus closely linked to morphological characteristics of GBM phenotypes and could potentially be used in a computer assisted labeling system.
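A minimal sketch of the selection-plus-classification pipeline, using synthetic features in place of the computed shape descriptors:

```python
import numpy as np
from scipy.stats import kruskal
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Synthetic stand-in for the shape-feature table of the three GBM phenotypes.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))            # 60 segmentations, 10 shape features
y = np.repeat([0, 1, 2], 20)             # necrosis, active tumor, edema/invasion
X[:, 0] += y                             # make one feature informative

# Kruskal-Wallis screen: keep features that separate the three groups.
keep = [j for j in range(X.shape[1])
        if kruskal(*(X[y == c, j] for c in np.unique(y))).pvalue < 0.05]

# Leave-one-out cross-validated SVM on the surviving features.
acc = cross_val_score(SVC(kernel="rbf"), X[:, keep], y, cv=LeaveOneOut()).mean()
print(f"{len(keep)} significant features, LOO accuracy = {acc:.2f}")
```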
Matamoro-Vidal, Alexis; Salazar-Ciudad, Isaac; Houle, David
2015-01-01
One of the aims of evolutionary developmental biology is to discover the developmental origins of morphological variation. The discipline has mainly focused on qualitative morphological differences (e.g., presence or absence of a structure) between species. Studies addressing subtle, quantitative variation are less common. The Drosophila wing is a model for the study of development and evolution, making it suitable to investigate the developmental mechanisms underlying the subtle quantitative morphological variation observed in nature. Previous reviews have focused on the processes involved in wing differentiation, patterning and growth. Here, we investigate what is known about how the wing achieves its final shape, and what variation in development is capable of generating the variation in wing shape observed in nature. Three major developmental stages need to be considered: larval development, pupariation, and pupal development. The major cellular processes involved in the determination of tissue size and shape are cell proliferation, cell death, oriented cell division and oriented cell intercalation. We review how variation in temporal and spatial distribution of growth and transcription factors affects these cellular mechanisms, which in turn affects wing shape. We then discuss which aspects of the wing morphological variation are predictable on the basis of these mechanisms. PMID:25619644
X-Ray and UV Photoelectron Spectroscopy | Materials Science | NREL
Quantitative XPS analysis of a backsheet material shows excellent agreement between measured and predicted peak area ratios. Subtle differences in polymer functionality are assessed by deviations from stoichiometry; elemental analysis provides quantitative identification.
Effectiveness of Non-Lethal Capabilities in a Maritime Environment
2006-09-01
Demonstrates both the space-filling properties for quantitative factors of the NOLH and the lack of correlation between the factors.
Quantitative Secondary Electron Detector (QSED)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nayak, Subu; Joy, David C.
2013-12-31
Research is proposed to investigate the feasibility of applying recent advances in semiconductor technology to fabricate direct digital Quantitative Secondary Electron Detectors (QSED) for scanning electron microscopes (SEMs). If successful, commercial versions of the QSED would transform the SEM into a quantitative, metrological system with enhanced capabilities that, in turn, would broaden research horizons across industries. This project will be conducted in collaboration with Dr. David C. Joy at the University of Tennessee, who has demonstrated limited (to the 1 keV range) digital collection of the energy from backscattered signals in a SEM using a modified silicon drift detector. Several detector configurations will be fabricated and tested for sensitivity, background noise reduction, DC offset elimination, and metrological capabilities (linearity, accuracy, etc.) against a set of commercially important performance criteria to ascertain concept feasibility. Once feasibility is proven, the solid-state digital device array and its switching frequency will be scaled up, in Phase II, to improve temporal resolution. If successful, this work will produce a crucial advancement in electron microscopy with wide-ranging applications. The following key advantages are anticipated from the direct digital QSED:
1. High signal-to-noise ratio will improve SEM resolution at the nano-scale, which is critical for dimensional metrology in any application.
2. Quantitative measurement will enhance process control and design validation in semiconductors, photovoltaics, bio-medical devices and catalysts, and will improve accuracy in predicting the reliability and the lifecycle of materials across industries.
3. Video and dynamic-imaging capabilities will advance the study of nano-scale phenomena in a variety of industries, including pharmaceutical and semiconductor materials.
4. Lower cost will make high-performing electron microscopes affordable to more researchers.
5. Compact size and ease of integration with imaging software will enable customers to retrofit and upgrade existing SEM equipment.
ScienceTomorrow’s direct digital QSED concept has generated enthusiastic interest among a number of microscope makers, service companies, and microscope users. The company has offers of support from several companies; the roles these companies would play in supporting the project are described in the proposal. The proposed QSED advance sits squarely in the middle of ScienceTomorrow’s mission to provide next-generation technology solutions to today’s critical problems and, if successful, will further the company’s business strategy by launching an advanced, high-margin product that will enable the company and its partners to create at least 17 net-new jobs by the end of 2018.
NASA Astrophysics Data System (ADS)
Boyd-Lee, Ashley; King, Julia
1992-07-01
A discrete statistical model of fatigue crack growth in the nickel-base superalloy Waspaloy, which is quantitative from the start of the short-crack regime to failure, is presented. Instantaneous crack growth rate distributions and persistence-of-arrest distributions are used to compute fatigue lives and worst-case scenarios without extrapolation. The basis of the model is not material-specific, and it provides an improved method of analyzing crack growth rate data. For Waspaloy, the model shows the importance of good bulk fatigue crack growth resistance in resisting early short fatigue crack growth, and the importance of maximizing crack arrest both by the presence of a proportion of small grains and by maximizing grain boundary corrugation.
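A hedged Monte Carlo sketch of the underlying idea: sample growth-rate and persistence-of-arrest distributions block by block until the crack reaches a critical size, then read fatigue-life percentiles directly from the simulated population. All distributions and sizes below are illustrative, not the Waspaloy data:

```python
import numpy as np

# Monte Carlo sketch: fatigue life from sampled growth-rate and
# persistence-of-arrest distributions. Parameters are invented placeholders.
rng = np.random.default_rng(1)

def life_cycles(a0=10e-6, a_crit=2e-3, p_arrest=0.3):
    a, cycles = a0, 0
    while a < a_crit:
        if rng.random() < p_arrest:                  # crack arrested at a barrier
            cycles += rng.geometric(1e-4)            # persistence of arrest, in cycles
        da_dn = rng.lognormal(mean=np.log(1e-9), sigma=0.8)  # m/cycle
        a += da_dn * 1000                            # grow over a 1000-cycle block
        cycles += 1000
    return cycles

lives = [life_cycles() for _ in range(500)]
print(np.percentile(lives, [1, 50, 99]))  # worst-case and median lives, no extrapolation
```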
An engineering design approach to systems biology.
Janes, Kevin A; Chandran, Preethi L; Ford, Roseanne M; Lazzara, Matthew J; Papin, Jason A; Peirce, Shayn M; Saucerman, Jeffrey J; Lauffenburger, Douglas A
2017-07-17
Measuring and modeling the integrated behavior of biomolecular-cellular networks is central to systems biology. Over several decades, systems biology has been shaped by quantitative biologists, physicists, mathematicians, and engineers in different ways. However, the basic and applied versions of systems biology are not typically distinguished, which blurs the separate aspirations of the field and its potential for real-world impact. Here, we articulate an engineering approach to systems biology, which applies educational philosophy, engineering design, and predictive models to solve contemporary problems in an age of biomedical Big Data. A concerted effort to train systems bioengineers will provide a versatile workforce capable of tackling the diverse challenges faced by the biotechnological and pharmaceutical sectors in a modern, information-dense economy.
Comparing modelling techniques when designing VPH gratings for BigBOSS
NASA Astrophysics Data System (ADS)
Poppett, Claire; Edelstein, Jerry; Lampton, Michael; Jelinsky, Patrick; Arns, James
2012-09-01
BigBOSS is a Stage IV Dark Energy instrument based on the Baryon Acoustic Oscillations (BAO) and Redshift Distortions (RSD) techniques, using spectroscopic data of 20 million ELG and LRG galaxies at 0.5<=z<=1.6 in addition to several hundred thousand QSOs at 0.5<=z<=3.5. When designing BigBOSS instrumentation, it is imperative to maximize throughput whilst maintaining a resolving power between R=1500 and 4000 over a wavelength range of 360-980 nm. Volume Phase Holographic (VPH) gratings have been identified as a key technology that will enable the efficiency requirement to be met; however, it is important to be able to accurately predict their performance. In this paper we quantitatively compare different modelling techniques in order to assess the parameter space over which they are most capable of accurately predicting measured performance. Finally, we present baseline parameters for grating designs that are most suitable for the BigBOSS instrument.
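The simplest of the candidate modelling techniques is Kogelnik's coupled-wave theory, which for a lossless transmission grating at the Bragg condition gives a first-order efficiency eta = sin^2(pi * dn * d / (lambda * cos(theta))). A sketch with illustrative parameters (not the BigBOSS design values):

```python
import numpy as np

# First-order diffraction efficiency of a transmission VPH grating from
# Kogelnik's coupled-wave theory (lossless, on-Bragg case). Parameter
# values are illustrative, not an actual grating design.
def kogelnik_eta(dn, d_um, wavelength_um, theta_deg):
    theta = np.radians(theta_deg)
    nu = np.pi * dn * d_um / (wavelength_um * np.cos(theta))
    return np.sin(nu) ** 2

for wl in (0.36, 0.60, 0.98):  # ends and middle of the 360-980 nm range
    print(wl, kogelnik_eta(dn=0.05, d_um=6.0, wavelength_um=wl, theta_deg=15))
```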
Kim, J; Lee, C; Chong, Y
2009-01-01
Influenza endonucleases have appeared as an attractive target of antiviral therapy for influenza infection. With the purpose of designing a novel antiviral agent with enhanced biological activities against influenza endonuclease, a three-dimensional quantitative structure-activity relationship (3D-QSAR) model was generated based on 34 influenza endonuclease inhibitors. The comparative molecular similarity index analysis (CoMSIA) with a steric, electrostatic and hydrophobic (SEH) model showed the best correlative and predictive capability (q² = 0.763, r² = 0.969 and F = 174.785), which provided a pharmacophore composed of the electronegative moiety as well as the bulky hydrophobic group. The CoMSIA model was used as a pharmacophore query in the UNITY search of the ChemDiv compound library to give virtual active compounds. The 3D-QSAR model was then used to predict the activity of the selected compounds, which identified three compounds as the most likely inhibitor candidates.
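A minimal sketch of the statistic being quoted: a PLS regression with a leave-one-out cross-validated q², using synthetic descriptors in place of the aligned CoMSIA fields:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

# QSAR-style PLS with a leave-one-out q^2. The CoMSIA fields require a 3D
# molecular alignment and are not reproduced here; X is synthetic.
rng = np.random.default_rng(2)
X = rng.normal(size=(34, 50))                 # 34 inhibitors, 50 field descriptors
y = X[:, :3] @ np.array([1.0, -0.5, 0.8]) + rng.normal(scale=0.3, size=34)

press, ss = 0.0, np.sum((y - y.mean()) ** 2)
for train, test in LeaveOneOut().split(X):
    model = PLSRegression(n_components=3).fit(X[train], y[train])
    press += ((model.predict(X[test]).ravel() - y[test]) ** 2).item()
print("q2 =", 1 - press / ss)
```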
Fedosov, D. A.; Caswell, B.; Suresh, S.; Karniadakis, G. E.
2011-01-01
The pathogenicity of Plasmodium falciparum (Pf) malaria results from the stiffening of red blood cells (RBCs) and their ability to adhere to endothelial cells (cytoadherence). The dynamics of Pf-parasitized RBCs is studied by three-dimensional mesoscopic simulations of flow in cylindrical capillaries in order to predict the flow resistance enhancement at different parasitemia levels. In addition, the adhesive dynamics of Pf-RBCs is explored for various parameters revealing several types of cell dynamics such as firm adhesion, very slow slipping along the wall, and intermittent flipping. The parasite inside the RBC is modeled explicitly in order to capture phenomena such as “hindered tumbling” motion of the RBC and the sudden transition from firm RBC cytoadherence to flipping on the endothelial surface. These predictions are in quantitative agreement with recent experimental observations, and thus the three-dimensional modeling method presented here provides new capabilities for guiding and interpreting future in vitro and in vivo studies of malaria. PMID:21173269
Kinetic Monte Carlo and cellular particle dynamics simulations of multicellular systems
NASA Astrophysics Data System (ADS)
Flenner, Elijah; Janosi, Lorant; Barz, Bogdan; Neagu, Adrian; Forgacs, Gabor; Kosztin, Ioan
2012-03-01
Computer modeling of multicellular systems has been a valuable tool for interpreting and guiding in vitro experiments relevant to embryonic morphogenesis, tumor growth, angiogenesis and, lately, structure formation following the printing of cell aggregates as bioink particles. Here we formulate two computer simulation methods: (1) a kinetic Monte Carlo (KMC) and (2) a cellular particle dynamics (CPD) method, which are capable of describing and predicting the shape evolution in time of three-dimensional multicellular systems during their biomechanical relaxation. Our work is motivated by the need of developing quantitative methods for optimizing postprinting structure formation in bioprinting-assisted tissue engineering. The KMC and CPD model parameters are determined and calibrated by using an original computational-theoretical-experimental framework applied to the fusion of two spherical cell aggregates. The two methods are used to predict the (1) formation of a toroidal structure through fusion of spherical aggregates and (2) cell sorting within an aggregate formed by two types of cells with different adhesivities.
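A minimal KMC-flavored sketch of adhesion-driven relaxation, assuming a two-type lattice model with Metropolis-accepted neighbor swaps; the contact energies and temperature are illustrative, not the calibrated parameters from the aggregate-fusion experiments:

```python
import numpy as np

# Two cell types on a periodic lattice; swapping neighboring sites is
# accepted by the Metropolis rule, lowering unlike-neighbor contact energy.
rng = np.random.default_rng(3)
L = 20
grid = rng.integers(0, 2, size=(L, L))          # cell types 0 and 1
J = np.array([[0.0, 1.0], [1.0, 0.0]])          # unlike-contact penalty

def site_energy(g, i, j):
    t = g[i, j]
    nbrs = (g[(i + 1) % L, j], g[(i - 1) % L, j], g[i, (j + 1) % L], g[i, (j - 1) % L])
    return sum(J[t, n] for n in nbrs)

beta = 2.0
for _ in range(50_000):
    i, j = rng.integers(0, L, size=2)
    i2 = (i + rng.choice((-1, 1))) % L           # pick a vertical neighbor
    before = site_energy(grid, i, j) + site_energy(grid, i2, j)
    grid[i, j], grid[i2, j] = grid[i2, j], grid[i, j]
    dE = site_energy(grid, i, j) + site_energy(grid, i2, j) - before
    if dE > 0 and rng.random() > np.exp(-beta * dE):
        grid[i, j], grid[i2, j] = grid[i2, j], grid[i, j]   # reject: swap back
print(grid)   # like cells coarsen into domains, mimicking sorting/fusion relaxation
```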
On the Conditioning of Machine-Learning-Assisted Turbulence Modeling
NASA Astrophysics Data System (ADS)
Wu, Jinlong; Sun, Rui; Wang, Qiqi; Xiao, Heng
2017-11-01
Recently, several researchers have demonstrated that machine learning techniques can be used to improve the RANS-modeled Reynolds stress by training on available databases of high-fidelity simulations. However, obtaining an improved mean velocity field remains an unsolved challenge, restricting the predictive capability of current machine-learning-assisted turbulence modeling approaches. In this work we define a condition number to evaluate the model conditioning of data-driven turbulence modeling approaches, and propose a stability-oriented machine learning framework to model the Reynolds stress. Two canonical flows, the flow in a square duct and the flow over periodic hills, are investigated to demonstrate the predictive capability of the proposed framework. The satisfactory prediction of the mean velocity field for both flows demonstrates the capability of the proposed framework for machine-learning-assisted turbulence modeling. By improving the prediction of the mean flow field, the proposed stability-oriented framework bridges the gap between existing machine-learning-assisted turbulence modeling approaches and the predictive capability demanded of turbulence models in real applications.
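A hedged sketch of the data-driven step, with synthetic features standing in for the mean-flow invariants and a comment marking where the conditioning issue enters:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Learn a Reynolds-stress correction from mean-flow features of high-fidelity
# cases, then apply it to a new flow. Features and targets are synthetic
# placeholders for the DNS/RANS training data.
rng = np.random.default_rng(4)
X_train = rng.normal(size=(5000, 6))   # e.g. invariants of strain/rotation tensors
y_train = np.tanh(X_train[:, 0]) + 0.1 * rng.normal(size=5000)  # stress discrepancy

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
tau_correction = model.predict(rng.normal(size=(10, 6)))  # fed back into the RANS solver
print(tau_correction)
# Conditioning caveat (the point of the paper): propagating the corrected
# stresses through the RANS equations can amplify small prediction errors,
# so the mean velocity field may not improve even when the stresses do.
```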
A novel approach for quantitative harmonization in PET.
Namías, M; Bradshaw, T; Menezes, V O; Machado, M A D; Jeraj, R
2018-05-04
Positron emission tomography (PET) imaging allows for measurement of activity concentrations of a given radiotracer in vivo. The quantitative capabilities of PET imaging are particularly important in the context of monitoring response to treatment, where quantitative changes in tracer uptake could be used as a biomarker of treatment response. Reconstruction algorithms and settings have a significant impact on PET quantification. In this work we introduce a novel harmonization methodology requiring only a simple cylindrical phantom and show that it can match the performance of more complex harmonization approaches based on phantoms with spherical inserts. Resolution and noise measurements from cylindrical phantoms are used to simulate the spherical inserts from NEMA image quality phantoms. An optimization algorithm was used to find the optimal smoothing filters for the simulated NEMA phantom images to identify those that best harmonized the PET scanners. Our methodology was tested on seven different PET models from two manufacturers installed at five institutions. Our methodology is able to predict contrast recovery coefficients (CRCs) from NEMA phantoms with errors within ±5.2% for CRCmax and ±3.7% for CRCmean (limits of agreement = 95%). After applying the proposed harmonization protocol, all the CRC values were within the tolerances from EANM. Quantitative harmonization in compliance with the EARL FDG-PET/CT accreditation program is achieved in a simpler way, without the need of NEMA phantoms. This may lead to simplified scanner harmonization workflows more accessible to smaller institutions.
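A sketch of the core computation, assuming the scanner response is approximated by an isotropic Gaussian point-spread function: simulate a hot sphere, blur by the measured resolution plus a candidate harmonization filter, and read off the contrast recovery coefficient. All numbers are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Simulated NEMA-style hot sphere in a warm background, blurred by the
# scanner PSF combined in quadrature with an extra harmonization filter.
def crc(sphere_diam_mm, scanner_fwhm_mm, extra_fwhm_mm, voxel=1.0, contrast=4.0):
    n = 96
    ax = (np.arange(n) - n / 2) * voxel
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    img = np.where(np.sqrt(X**2 + Y**2 + Z**2) < sphere_diam_mm / 2, contrast, 1.0)
    total_fwhm = np.hypot(scanner_fwhm_mm, extra_fwhm_mm)
    img = gaussian_filter(img, sigma=total_fwhm / 2.355 / voxel)
    return (img.max() - 1.0) / (contrast - 1.0)       # CRCmax against background 1.0

# Choose the extra filter so a sharp scanner matches a reference CRC:
for extra in (0.0, 4.0, 6.0):
    print(extra, round(crc(17.0, 4.5, extra), 3))     # 17 mm sphere, hypothetical PSF
```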
Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens
We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these sub-models into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
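A minimal sketch of the sub-model blending idea on synthetic data, with a full-range model's first guess used to weight low- and high-range sub-model predictions (the actual ChemCam blending scheme is not reproduced here):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Train separate PLS regressions on low- and high-concentration subsets of
# synthetic "spectra", then blend their predictions using a full-range
# model's first guess. Data are invented placeholders for LIBS spectra.
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 300))
w = rng.normal(size=300)
y = np.clip(X @ w * 2 + 50, 0, 100)                  # pseudo wt.% composition

full = PLSRegression(n_components=5).fit(X, y)
low = PLSRegression(n_components=5).fit(X[y < 50], y[y < 50])
high = PLSRegression(n_components=5).fit(X[y >= 50], y[y >= 50])

guess = full.predict(X).ravel()
frac_high = np.clip((guess - 25) / 50, 0, 1)         # smooth blending weight
blended = (1 - frac_high) * low.predict(X).ravel() + frac_high * high.predict(X).ravel()
print(np.sqrt(np.mean((blended - y) ** 2)))          # RMSE of the blended prediction
```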
Quantitative Near-field Microscopy of Heterogeneous and Correlated Electron Oxides
NASA Astrophysics Data System (ADS)
McLeod, Alexander Swinton
Scanning near-field optical microscopy (SNOM) is a novel scanning probe microscopy technique capable of circumventing the conventional diffraction limit of light, affording unparalleled optical resolution (down to 10 nanometers) even for radiation in the infrared and terahertz energy regimes, with light wavelengths exceeding 10 micrometers. However, although this technique has been developed and employed for more than a decade to a qualitatively impressive effect, researchers have lacked a practically quantitative grasp of its capabilities, and its application scope has so far remained restricted by implementations limited to ambient atmospheric conditions. The two-fold objective of this dissertation work has been to address both these shortcomings. The first half of the dissertation presents a realistic, semi-analytic, and benchmarked theoretical description of probe-sample near-field interactions that form the basis of SNOM. Owing its name to the efficient nano-focusing of light at a sharp metallic apex, the "lightning rod model" of probe-sample near-field interactions is mathematically developed from a flexible and realistic scattering formalism. Powerful and practical applications are demonstrated through the accurate prediction of spectroscopic near-field optical contrasts, as well as the "inversion" of these spectroscopic contrasts into a quantitative description of material optical properties. Thus enabled, this thesis work proceeds to present quantitative applications of infrared near-field spectroscopy to investigate nano-resolved chemical compositions in a diverse host of samples, including technologically relevant lithium ion battery materials, astrophysical planetary materials, and invaluable returned extraterrestrial samples. The second half of the dissertation presents the design, construction, and demonstration of a sophisticated low-temperature scanning near-field infrared microscope. This instrument operates in an ultra-high vacuum environment suitable for the investigation of nano-scale physics in correlated electron matter at cryogenic temperatures, thus vastly expanding the scope of applications for infrared SNOM. Performance of the microscope is demonstrated through quantitative exploration of the canonical insulator-metal transition occurring in the correlated electron insulator V2O3. The methodology established for this investigation provides a model for ongoing and future nano-optical studies of phase transitions and phase coexistence in correlated electron oxides.
Multi-scale Modeling of Chromosomal DNA in Living Cells
NASA Astrophysics Data System (ADS)
Spakowitz, Andrew
The organization and dynamics of chromosomal DNA play a pivotal role in a range of biological processes, including gene regulation, homologous recombination, replication, and segregation. Establishing a quantitative theoretical model of DNA organization and dynamics would be valuable in bridging the gap between the molecular-level packaging of DNA and genome-scale chromosomal processes. Our research group utilizes analytical theory and computational modeling to establish a predictive theoretical model of chromosomal organization and dynamics. In this talk, I will discuss our efforts to develop multi-scale polymer models of chromosomal DNA that are both sufficiently detailed to address specific protein-DNA interactions while capturing experimentally relevant time and length scales. I will demonstrate how these modeling efforts are capable of quantitatively capturing aspects of behavior of chromosomal DNA in both prokaryotic and eukaryotic cells. This talk will illustrate that capturing dynamical behavior of chromosomal DNA at various length scales necessitates a range of theoretical treatments that accommodate the critical physical contributions that are relevant to in vivo behavior at these disparate length and time scales. National Science Foundation, Physics of Living Systems Program (PHY-1305516).
Vandermolen, Brooke I; Hezelgrave, Natasha L; Smout, Elizabeth M; Abbott, Danielle S; Seed, Paul T; Shennan, Andrew H
2016-10-01
Quantitative fetal fibronectin testing has demonstrated accuracy for prediction of spontaneous preterm birth in asymptomatic women with a history of preterm birth. Predictive accuracy in women with previous cervical surgery (a potentially different risk mechanism) is not known. We sought to compare the predictive accuracy of cervicovaginal fluid quantitative fetal fibronectin and cervical length testing in asymptomatic women with previous cervical surgery to that in women with 1 previous preterm birth. We conducted a prospective blinded secondary analysis of a larger observational study of cervicovaginal fluid quantitative fetal fibronectin concentration in asymptomatic women measured with a Hologic 10Q system (Hologic, Marlborough, MA). Prediction of spontaneous preterm birth (<30, <34, and <37 weeks) with cervicovaginal fluid quantitative fetal fibronectin concentration in primiparous women who had undergone at least 1 invasive cervical procedure (n = 473) was compared with prediction in women who had previous spontaneous preterm birth, preterm prelabor rupture of membranes, or late miscarriage (n = 821). The relationship with cervical length was explored. The rate of spontaneous preterm birth <34 weeks in the cervical surgery group was 3%, compared with 9% in the previous spontaneous preterm birth group. Receiver operating characteristic curves comparing quantitative fetal fibronectin for prediction at all 3 gestational end points were comparable between the cervical surgery and previous spontaneous preterm birth groups (34 weeks: area under the curve, 0.78 [95% confidence interval 0.64-0.93] vs 0.71 [95% confidence interval 0.64-0.78]; P = .39). Cervical length and quantitative fetal fibronectin offered similar prediction of spontaneous preterm birth <34 weeks of gestation (area under the curve, 0.88 [95% confidence interval 0.79-0.96] vs 0.77 [95% confidence interval 0.62-0.92], P = .12 in the cervical surgery group; and 0.77 [95% confidence interval 0.70-0.84] vs 0.74 [95% confidence interval 0.67-0.81], P = .32 in the previous spontaneous preterm birth group). Prediction of spontaneous preterm birth using cervicovaginal fluid quantitative fetal fibronectin in asymptomatic women with cervical surgery is valid, and has accuracy comparable to that in women with a history of spontaneous preterm birth. Copyright © 2016 Elsevier Inc. All rights reserved.
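A minimal sketch of the validation metric, with simulated concentrations standing in for the two cohorts' fetal fibronectin measurements (cohort sizes and event rates loosely follow the abstract; the distributions are invented):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Compare discrimination of a quantitative marker between two cohorts via
# the area under the ROC curve. Data are simulated stand-ins.
rng = np.random.default_rng(6)

def simulate(n, event_rate, shift):
    outcome = rng.random(n) < event_rate
    ffn = rng.lognormal(mean=2.0 + shift * outcome, sigma=1.0)  # ng/mL, hypothetical
    return outcome, ffn

for name, n, rate in [("cervical surgery", 473, 0.03), ("previous sPTB", 821, 0.09)]:
    outcome, ffn = simulate(n, rate, shift=1.2)
    print(name, "AUC =", round(roc_auc_score(outcome, ffn), 2))
```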
Additional extensions to the NASCAP computer code, volume 2
NASA Technical Reports Server (NTRS)
Stannard, P. R.; Katz, I.; Mandell, M. J.
1982-01-01
Particular attention is given to comparison of the actual response of the SCATHA (Spacecraft Charging AT High Altitudes) P78-2 satellite with theoretical (NASCAP) predictions. Extensive comparisons for a variety of environmental conditions confirm the validity of the NASCAP model. A summary of the capabilities and range of validity of NASCAP is presented, with extensive reference to previously published applications. It is shown that NASCAP is capable of providing quantitatively accurate results when the object and environment are adequately represented and fall within the range of conditions for which NASCAP was intended. Three-dimensional electric field effects play an important role in determining the potential of dielectric surfaces and electrically isolated conducting surfaces, particularly in the presence of artificially imposed high voltages. A theory for such phenomena is presented and applied to the active control experiments carried out on SCATHA, as well as other space and laboratory experiments. Finally, some preliminary work toward modeling large spacecraft in polar Earth orbit is presented. An initial physical model is presented, including charge emission. A simple code based upon the model is described, along with code test results.
Quantitative self-assembly prediction yields targeted nanomedicines
NASA Astrophysics Data System (ADS)
Shamay, Yosi; Shah, Janki; Işık, Mehtap; Mizrachi, Aviram; Leibold, Josef; Tschaharganeh, Darjus F.; Roxbury, Daniel; Budhathoki-Uprety, Januka; Nawaly, Karla; Sugarman, James L.; Baut, Emily; Neiman, Michelle R.; Dacek, Megan; Ganesh, Kripa S.; Johnson, Darren C.; Sridharan, Ramya; Chu, Karen L.; Rajasekhar, Vinagolu K.; Lowe, Scott W.; Chodera, John D.; Heller, Daniel A.
2018-02-01
Development of targeted nanoparticle drug carriers often requires complex synthetic schemes involving both supramolecular self-assembly and chemical modification. These processes are generally difficult to predict, execute, and control. We describe herein a targeted drug delivery system that is accurately and quantitatively predicted to self-assemble into nanoparticles based on the molecular structures of precursor molecules, which are the drugs themselves. The drugs assemble with the aid of sulfated indocyanines into particles with ultrahigh drug loadings of up to 90%. We devised quantitative structure-nanoparticle assembly prediction (QSNAP) models to identify and validate electrotopological molecular descriptors as highly predictive indicators of nano-assembly and nanoparticle size. The resulting nanoparticles selectively targeted kinase inhibitors to caveolin-1-expressing human colon cancer and autochthonous liver cancer models to yield striking therapeutic effects while avoiding pERK inhibition in healthy skin. This finding enables the computational design of nanomedicines based on quantitative models for drug payload selection.
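A hedged sketch of the QSNAP-style classification step, with random numbers standing in for the electrotopological descriptors (which in practice would come from a cheminformatics toolkit):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Classify whether a drug/dye pair self-assembles into nanoparticles from
# molecular descriptors. Descriptors and labels are synthetic placeholders,
# not the published QSNAP model.
rng = np.random.default_rng(12)
X = rng.normal(size=(120, 8))                       # 8 descriptors per candidate drug
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)       # 1 = forms nanoparticles

clf = LogisticRegression().fit(X[:90], y[:90])
print("held-out accuracy:", clf.score(X[90:], y[90:]))
```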
Long-distance gene flow and adaptation of forest trees to rapid climate change
Kremer, Antoine; Ronce, Ophélie; Robledo-Arnuncio, Juan J; Guillaume, Frédéric; Bohrer, Gil; Nathan, Ran; Bridle, Jon R; Gomulkiewicz, Richard; Klein, Etienne K; Ritland, Kermit; Kuparinen, Anna; Gerber, Sophie; Schueler, Silvio
2012-01-01
Forest trees are the dominant species in many parts of the world and predicting how they might respond to climate change is a vital global concern. Trees are capable of long-distance gene flow, which can promote adaptive evolution in novel environments by increasing genetic variation for fitness. It is unclear, however, if this can compensate for maladaptive effects of gene flow and for the long-generation times of trees. We critically review data on the extent of long-distance gene flow and summarise theory that allows us to predict evolutionary responses of trees to climate change. Estimates of long-distance gene flow based both on direct observations and on genetic methods provide evidence that genes can move over spatial scales larger than habitat shifts predicted under climate change within one generation. Both theoretical and empirical data suggest that the positive effects of gene flow on adaptation may dominate in many instances. The balance of positive to negative consequences of gene flow may, however, differ for leading edge, core and rear sections of forest distributions. We propose future experimental and theoretical research that would better integrate dispersal biology with evolutionary quantitative genetics and improve predictions of tree responses to climate change. PMID:22372546
Wang, Qingzhi; Zhao, Hongxia; Wang, Yan; Xie, Qing; Chen, Jingwen; Quan, Xie
2017-09-08
Organophosphate flame retardants (OPFRs) are ubiquitous in the environment. To better understand and predict their environmental transport and fate, well-defined physicochemical properties are required. Vapor pressures (P) of 14 OPFRs were estimated as a function of temperature (T) by gas chromatography (GC), with 1,1,1-trichloro-2,2-bis(4-chlorophenyl)ethane (p,p'-DDT) as the reference substance. Their log P_GC values and internal energies of phase transfer (ΔvapH) ranged from -6.17 to -1.25 and from 74.1 to 122 kJ/mol, respectively. Substitution pattern and molar volume (V_M) were found to influence the log P_GC values of the OPFRs. The halogenated alkyl-OPFRs had lower log P_GC values than aryl- or alkyl-OPFRs, and the larger the molar volume, the smaller the log P_GC value. In addition, a quantitative structure-property relationship (QSPR) model of log P_GC versus different relative retention times (RRTs) was developed with a high cross-validated value (Q²cum) of 0.946, indicating good predictive ability and stability. Therefore, the log P_GC values of OPFRs without standard substances can be predicted by using their RRTs on different GC columns.
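A minimal sketch of such a QSPR fit with a leave-one-out Q², assuming an invented linear trend between log P_GC and RRT in place of the measured 14-compound data set:

```python
import numpy as np

# Linear QSPR of log P_GC vs relative retention time with a leave-one-out
# Q^2. The 14 data points are invented placeholders, not the OPFR data.
rng = np.random.default_rng(7)
rrt = np.sort(rng.uniform(0.3, 1.8, size=14))
logP = -1.0 - 3.0 * rrt + rng.normal(scale=0.15, size=14)   # hypothetical trend

press, ss = 0.0, np.sum((logP - logP.mean()) ** 2)
for i in range(len(rrt)):
    mask = np.arange(len(rrt)) != i
    slope, intercept = np.polyfit(rrt[mask], logP[mask], 1)
    press += (slope * rrt[i] + intercept - logP[i]) ** 2
print("Q2_LOO =", round(1 - press / ss, 3))
```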
Forests and Soil Erosion across Europe
NASA Astrophysics Data System (ADS)
Bathurst, J. C.
2012-04-01
Land use and climate change threaten the ability of Europe's forests to provide a vital service in limiting soil erosion, e.g. from forest fires and landslides. However, our ability to define the threat and to propose mitigation measures suffers from two deficiencies concerning the forest/erosion interface: 1) While there have been a considerable number of field studies of the relationship between forest cover and erosion in different parts of Europe, the data sets are scattered among research groups and a range of literature outlets. There is no comprehensive overview of the forest/erosion interface at the European scale, essential for considering regional variations and investigating the effects of future changes in land use and climate. 2) Compared with forest/water studies, we have a poorer quantitative appreciation of forest/erosion interactions. In the forest/water area it is possible to make quantitative statements such as that a 20% change in forest cover across a river catchment is needed for the effect on annual water yield to be measurable or that a forested catchment in upland UK has an annual water yield around 15% lower than an otherwise comparable grassland catchment. Comparable statements are not yet possible for forest/erosion interactions and there are uncertainties in the mathematical representation of forest/erosion interactions which limit our ability to make predictions, for example of the impact of forest loss in a given area. This presentation therefore considers the next step in improving our predictive capability. It proposes the integration of existing research and data to construct the "big picture" across Europe, i.e. erosion rates and sediment yields associated with forest cover and its loss in a range of erosion regimes (e.g. post-forest fire erosion or post-logging landslides). This would provide a basis for generalizations at the European scale. However, such an overview would not form a predictive capability. Therefore it is also necessary to identify a range of predictive methods, from empirical guidelines to computer models, which can be recommended for applications such as extrapolating from the local to the regional scale and for planning mitigation strategies. Such developments could help improve efficiency in the integrated management of forest, soil and water resources, benefit local engineering projects ranging from hazard mitigation plans to road culvert design, contribute to the implementation of the EU Water Framework Directive, form a more objective basis for cost/benefit analysis of proposed management actions and help in putting a value on forest services.
Wang, Chao; Xu, Zhijie; Lai, Canhai; ...
2018-03-27
The standard two-film theory (STFT) is a diffusion-based mechanism that can be used to describe gas mass transfer across liquid film. Fundamental assumptions of the STFT impose serious limitations on its ability to predict mass transfer coefficients. To better understand gas absorption across liquid film in practical situations, a multiphase computational fluid dynamics (CFD) model fully equipped with mass transport and chemistry capabilities has been developed for solvent-based carbon dioxide (CO2) capture to predict the CO2 mass transfer coefficient in a wetted wall column. The hydrodynamics is modeled using a volume of fluid method, and the diffusive and reactive mass transfer between the two phases is modeled by adopting a one-fluid formulation. We demonstrate that the proposed CFD model can naturally account for the influence of many important factors on the overall mass transfer that cannot be quantitatively explained by the STFT, such as the local variation in fluid velocities and properties, flow instabilities, and complex geometries. The CFD model also can predict the local mass transfer coefficient variation along the column height, which the STFT typically does not consider.
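For contrast, the STFT estimate that the CFD model is benchmarked against reduces to two lines: a liquid-side coefficient k_L = D/delta for a stagnant film of thickness delta, and a flux N = k_L (C* - C_bulk). A sketch with assumed property values:

```python
# Standard two-film estimate of CO2 absorption flux. All values are assumed
# (illustrative CO2-into-solvent numbers), not the paper's conditions.
D = 1.5e-9        # m^2/s, CO2 diffusivity in the solvent (assumed)
delta = 1.0e-4    # m, stagnant film thickness (assumed)
c_star, c_bulk = 30.0, 5.0   # mol/m^3, interface and bulk concentrations (assumed)

k_L = D / delta                      # m/s, liquid-side mass-transfer coefficient
N = k_L * (c_star - c_bulk)          # mol/(m^2 s), absorption flux
print(k_L, N)
```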
Bertaux, François; Stoma, Szymon; Drasdo, Dirk; Batt, Gregory
2014-01-01
Isogenic cells sensing identical external signals can take markedly different decisions. Such decisions often correlate with pre-existing cell-to-cell differences in protein levels. When not neglected in signal transduction models, these differences are accounted for in a static manner, by assuming randomly distributed initial protein levels. However, this approach ignores the a priori non-trivial interplay between signal transduction and the source of this cell-to-cell variability: temporal fluctuations of protein levels in individual cells, driven by noisy synthesis and degradation. Thus, modeling protein fluctuations, rather than their consequences on the initial population heterogeneity, would set the quantitative analysis of signal transduction on firmer grounds. Adopting this dynamical view on cell-to-cell differences amounts to recast extrinsic variability into intrinsic noise. Here, we propose a generic approach to merge, in a systematic and principled manner, signal transduction models with stochastic protein turnover models. When applied to an established kinetic model of TRAIL-induced apoptosis, our approach markedly increased model prediction capabilities. One obtains a mechanistic explanation of yet-unexplained observations on fractional killing and non-trivial robust predictions of the temporal evolution of cell resistance to TRAIL in HeLa cells. Our results provide an alternative explanation to survival via induction of survival pathways since no TRAIL-induced regulations are needed and suggest that short-lived anti-apoptotic protein Mcl1 exhibit large and rare fluctuations. More generally, our results highlight the importance of accounting for stochastic protein turnover to quantitatively understand signal transduction over extended durations, and imply that fluctuations of short-lived proteins deserve particular attention. PMID:25340343
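A minimal Gillespie sketch of the stochastic protein turnover being grafted onto the signal transduction model; with bursty synthesis and first-order degradation, short-lived proteins show exactly the kind of large, rare fluctuations highlighted for Mcl1. Rates and burst size are illustrative:

```python
import numpy as np

# Gillespie simulation of a single protein with geometric synthesis bursts
# and first-order degradation. Parameters are illustrative placeholders.
rng = np.random.default_rng(8)

def gillespie(k_syn=5.0, k_deg=0.5, burst=20, t_end=200.0):
    t, n, traj = 0.0, 0, []
    while t < t_end:
        rates = np.array([k_syn, k_deg * n])        # burst synthesis, degradation
        total = rates.sum()
        t += rng.exponential(1 / total)
        if rng.random() < rates[0] / total:
            n += rng.geometric(1 / burst)           # geometric burst of new proteins
        else:
            n -= 1                                  # degrade one molecule
        traj.append((t, n))
    return traj

traj = gillespie()
print(max(n for _, n in traj), traj[-1])            # peak copy number, final state
```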
NASA Astrophysics Data System (ADS)
Tao, Feifei; Mba, Ogan; Liu, Li; Ngadi, Michael
2017-04-01
Polyunsaturated fatty acids (PUFAs) are important nutrients present in Salmon. However, current methods for quantifying the fatty acids (FAs) contents in foods are generally based on gas chromatography (GC) technique, which is time-consuming, laborious and destructive to the tested samples. Therefore, the capability of near-infrared (NIR) hyperspectral imaging to predict the PUFAs contents of C20:2 n-6, C20:3 n-6, C20:5 n-3, C22:5 n-3 and C22:6 n-3 in Salmon fillets in a rapid and non-destructive way was investigated in this work. Mean reflectance spectra were first extracted from the region of interests (ROIs), and then the spectral pre-processing methods of 2nd derivative and Savitzky-Golay (SG) smoothing were performed on the original spectra. Based on the original and the pre-processed spectra, PLSR technique was employed to develop the quantitative models for predicting each PUFA content in Salmon fillets. The results showed that for all the studied PUFAs, the quantitative models developed using the pre-processed reflectance spectra by "2nd derivative + SG smoothing" could improve their modeling results. Good prediction results were achieved with RP and RMSEP of 0.91 and 0.75 mg/g dry weight, 0.86 and 1.44 mg/g dry weight, 0.82 and 3.01 mg/g dry weight for C20:3 n-6, C22:5 n-3 and C20:5 n-3, respectively after pre-processing by "2nd derivative + SG smoothing". The work demonstrated that NIR hyperspectral imaging could be a useful tool for rapid and non-destructive determination of the PUFA contents in fish fillets.
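A minimal sketch of the pre-processing-plus-regression chain, assuming synthetic spectra in place of the mean ROI reflectance spectra:

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

# 2nd-derivative Savitzky-Golay pre-processing followed by PLSR, mirroring
# the "2nd derivative + SG smoothing" pipeline. Spectra and PUFA values are
# synthetic placeholders.
rng = np.random.default_rng(9)
spectra = rng.normal(size=(80, 200)).cumsum(axis=1)      # 80 fillets, 200 bands
pufa = spectra[:, 120] * 0.05 + rng.normal(scale=0.2, size=80)  # mg/g, hypothetical

X = savgol_filter(spectra, window_length=11, polyorder=2, deriv=2, axis=1)
model = PLSRegression(n_components=8).fit(X[:60], pufa[:60])
pred = model.predict(X[60:]).ravel()
rmsep = np.sqrt(np.mean((pred - pufa[60:]) ** 2))
print("RMSEP =", round(rmsep, 3))
```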
NASA Technical Reports Server (NTRS)
Chen, Yongkang; Weislogel, Mark; Schaeffer, Ben; Semerjian, Ben; Yang, Lihong; Zimmerli, Gregory
2012-01-01
The mathematical theory of capillary surfaces has developed steadily over the centuries, but it was not until the last few decades that new technologies have put a more urgent demand on a substantially more qualitative and quantitative understanding of phenomena relating to capillarity in general. So far, the new theory development successfully predicts the behavior of capillary surfaces for special cases. However, an efficient quantitative mathematical prediction of capillary phenomena related to the shape and stability of geometrically complex equilibrium capillary surfaces remains a significant challenge. As one of many numerical tools, the open-source Surface Evolver (SE) algorithm has played an important role over the last two decades. The current effort was undertaken to provide a front-end to enhance the accessibility of SE for the purposes of design and analysis. Like SE, the new code is open-source and will remain under development for the foreseeable future. The ultimate goal of the current Surface Evolver Fluid Interface Tool (SEFIT) development is to build a fully integrated front-end with a set of graphical user interface (GUI) elements. Such a front-end enables the access to functionalities that are developed along with the GUIs to deal with pre-processing, convergence computation operation, and post-processing. In other words, SE-FIT is not just a GUI front-end, but an integrated environment that can perform sophisticated computational tasks, e.g. importing industry standard file formats and employing parameter sweep functions, which are both lacking in SE, and require minimal interaction by the user. These functions are created using a mixture of Visual Basic and the SE script language. These form the foundation for a high-performance front-end that substantially simplifies use without sacrificing the proven capabilities of SE. The real power of SE-FIT lies in its automated pre-processing, pre-defined geometries, convergence computation operation, computational diagnostic tools, and crash-handling capabilities to sustain extensive computations. SE-FIT performance is enabled by its so-called file-layer mechanism. During the early stages of SE-FIT development, it became necessary to modify the original SE code to enable capabilities required for an enhanced and synchronized communication. To this end, a file-layer was created that serves as a command buffer to ensure a continuous and sequential execution of commands sent from the front-end to SE. It also establishes a proper means for handling crashes. The file layer logs input commands and SE output; it also supports user interruption requests, back and forward operation (i.e. undo and redo), and others. It especially enables the batch mode computation of a series of equilibrium surfaces and the searching of critical parameter values in studying the stability of capillary surfaces. In this way, the modified SE significantly extends the capabilities of the original SE.
NASA Astrophysics Data System (ADS)
Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas
2016-06-01
Empirical models are frequently applied to produce landslide susceptibility maps for large areas. Subsequent quantitative validation results are routinely used as the primary criteria to infer the validity and applicability of the final maps or to select one of several models. This study hypothesizes that such direct deductions can be misleading. The main objective was to explore discrepancies between the predictive performance of a landslide susceptibility model and the geomorphic plausibility of subsequent landslide susceptibility maps, with a particular emphasis on the influence of incomplete landslide inventories on modelling and validation results. The study was conducted within the Flysch Zone of Lower Austria (1,354 km2), which is known to be highly susceptible to landslides of the slide-type movement. Sixteen susceptibility models were generated by applying two statistical classifiers (logistic regression and generalized additive model) and two machine learning techniques (random forest and support vector machine) separately for two landslide inventories of differing completeness and two predictor sets. The results were validated quantitatively by estimating the area under the receiver operating characteristic curve (AUROC) with single holdout and spatial cross-validation techniques. The heuristic evaluation of the geomorphic plausibility of the final results was supported by findings of an exploratory data analysis, an estimation of odds ratios and an evaluation of the spatial structure of the final maps. The results showed that maps generated from different inventories, classifiers and predictors differed in appearance, while holdout validation revealed similarly high predictive performances. Spatial cross-validation proved useful to expose spatially varying inconsistencies of the modelling results while additionally providing evidence for slightly overfitted machine learning-based models. However, the highest predictive performances were obtained for maps that explicitly expressed geomorphically implausible relationships, indicating that the predictive performance of a model can be misleading when a predictor systematically relates to a spatially consistent bias of the inventory. Furthermore, we observed that random forest-based maps displayed spatial artifacts. The most plausible susceptibility map of the study area showed smooth prediction surfaces while the underlying model revealed a high predictive capability and was generated with an accurate landslide inventory and predictors that did not directly describe a bias. However, none of the presented models was found to be completely unbiased. This study showed that high predictive performances cannot be equated with a high plausibility and applicability of subsequent landslide susceptibility maps. We suggest that greater emphasis should be placed on identifying confounding factors and biases in landslide inventories. A joint discussion between modelers and decision makers of the spatial pattern of the final susceptibility maps in the field might increase their acceptance and applicability.
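A minimal sketch of the spatial cross-validation design on synthetic data: folds are contiguous spatial blocks rather than random rows, so a predictor that merely tracks a spatially clustered inventory bias is exposed by fold-to-fold AUROC variation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupKFold

# Synthetic landslide presence/absence with a spatially clustered component;
# spatial CV folds are west-east blocks. Data are invented placeholders.
rng = np.random.default_rng(10)
xy = rng.uniform(0, 10, size=(2000, 2))                  # slope-unit coordinates
X = np.column_stack([rng.normal(size=2000), xy[:, 0]])   # feature 2 = "bias" proxy
y = (rng.random(2000) < 1 / (1 + np.exp(-(X[:, 0] + (xy[:, 0] > 5))))).astype(int)

blocks = (xy[:, 0] // 2.5).astype(int)                   # 4 contiguous blocks
aucs = []
for tr, te in GroupKFold(n_splits=4).split(X, y, groups=blocks):
    p = LogisticRegression().fit(X[tr], y[tr]).predict_proba(X[te])[:, 1]
    aucs.append(roc_auc_score(y[te], p))
print("spatial-CV AUROC per block:", np.round(aucs, 2))
```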
Measuring Aircraft Capability for Military and Political Analysis
1976-03-01
challenged in 1932 when a panel of distinguished British scientists discussed the feasibility of quantitatively estimating sensory events... Quantitative Analysis of Social Problems, E.R. Tufte (ed.), p. 407, Addison-Wesley, 1970. "Artificial" boundaries are imposed on the data. Less...of arms transfers in various parts of the world as well. Quantitative research (and hence measurement) contributes to theoretical development by
Christian, Eric L; Anderson, Vernon E.; Harris, Michael E
2011-01-01
Quantitative analysis of metal ion-phosphodiester interactions is a significant experimental challenge due to the complexities introduced by inner-sphere, outer-sphere (H-bonding with coordinated water), and electrostatic interactions that are difficult to isolate in solution studies. Here, we provide evidence that inner-sphere, H-bonding and electrostatic interactions between ions and dimethyl phosphate can be deconvoluted through peak fitting in the region of the Raman spectrum for the symmetric stretch of non-bridging phosphate oxygens (νsPO2-). An approximation of the change in vibrational spectra due to different interaction modes is achieved using ions capable of all or a subset of the three forms of metal ion interaction. The contribution of electrostatic interactions to ion-induced changes in the Raman νsPO2- signal could be modeled by monitoring attenuation of νsPO2- in the presence of tetramethylammonium, while the contributions of H-bonding and inner-sphere coordination could be approximated from the intensities of altered νsPO2- vibrational modes created by an interaction with ammonia, monovalent or divalent ions. A model is proposed in which discrete spectroscopic signals for inner-sphere, H-bonding, and electrostatic interactions are sufficient to account for the total observed change in the νsPO2- signal due to interaction with a specific ion capable of all three modes of interaction. Importantly, the quantitative results are consistent with relative levels of coordination predicted from the absolute electronegativity and absolute hardness of alkali and alkaline earth metals. PMID:21334281
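The peak-fitting deconvolution described above can be sketched as a least-squares fit of overlapping bands in the νsPO2- region. The band positions, widths, and two-component decomposition below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, cen, wid):
    return amp * np.exp(-0.5 * ((x - cen) / wid) ** 2)

def two_bands(x, a1, c1, w1, a2, c2, w2):
    # e.g., an unperturbed component plus an ion-perturbed component
    return gauss(x, a1, c1, w1) + gauss(x, a2, c2, w2)

x = np.linspace(1050, 1140, 300)                 # Raman shift, cm^-1 (illustrative)
y = two_bands(x, 1.0, 1085, 6, 0.4, 1100, 8)
y += np.random.default_rng(1).normal(scale=0.01, size=x.size)

p0 = [1, 1085, 5, 0.5, 1100, 5]                  # initial guesses for the fit
popt, _ = curve_fit(two_bands, x, y, p0=p0)
areas = popt[0] * popt[2], popt[3] * popt[5]     # proportional to band areas
print("fitted centers:", popt[1], popt[4],
      "perturbed fraction:", areas[1] / sum(areas))
```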
NASA Astrophysics Data System (ADS)
Morales-Esteban, A.; Martínez-Álvarez, F.; Reyes, J.
2013-05-01
A method to predict earthquakes in two of the seismogenic areas of the Iberian Peninsula, based on Artificial Neural Networks (ANNs), is presented in this paper. ANNs have been widely used in many fields, but only a few very recent studies have applied them to earthquake prediction. Two kinds of predictions are provided: (a) the probability that an earthquake of magnitude equal to or larger than a preset threshold will occur within the next 7 days; and (b) the probability that an earthquake within a given magnitude interval will occur during the next 7 days. First, the physical fundamentals related to earthquake occurrence are explained. Second, the mathematical model underlying ANNs is explained and the chosen configuration is justified. The ANNs were then trained in both areas, the Alborán Sea and the Western Azores-Gibraltar fault, and tested in both areas for a period immediately following the training period. Statistical tests show meaningful results, and the ANNs compare favourably, quantitatively and qualitatively, with other well-known classifiers. The authors expect these results to encourage further research on this topic. Highlights: development of a system capable of predicting earthquakes for the next seven days; application of ANNs proves particularly reliable for earthquake prediction; use of geophysical information modeling soil behavior as the ANNs' input data; successful analysis of one region with large seismic activity.
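As a rough illustration of prediction type (a), the sketch below trains a small feed-forward network on synthetic seismicity features and outputs a 7-day exceedance probability, testing on the period immediately following the training period. The feature set and network size are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 7))            # seismicity indicators per 7-day window
w_true = rng.normal(size=7)
y = (X @ w_true + rng.normal(scale=0.5, size=1500) > 1.0).astype(int)

# Train on an early period and test on the immediately following period,
# mimicking the forward-testing protocol described above.
X_tr, y_tr, X_te, y_te = X[:1000], y[:1000], X[1000:], y[1000:]
ann = MLPClassifier(hidden_layer_sizes=(15,), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
p_event = ann.predict_proba(X_te)[:, 1]   # P(M >= threshold within 7 days)
print("mean predicted 7-day probability:", round(p_event.mean(), 3))
print("test accuracy:", round(ann.score(X_te, y_te), 3))
```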
NASA Astrophysics Data System (ADS)
Chaljub, E. O.; Bard, P.; Tsuno, S.; Kristek, J.; Moczo, P.; Franek, P.; Hollender, F.; Manakou, M.; Raptakis, D.; Pitilakis, K.
2009-12-01
During the last decades, an important effort has been dedicated to developing accurate and computationally efficient numerical methods to predict earthquake ground motion in heterogeneous 3D media. The progress in methods and the increasing capability of computers have made it technically feasible to calculate realistic seismograms for frequencies of interest in seismic design applications. In order to foster the use of numerical simulation in practical prediction, it is important to (1) evaluate the accuracy of current numerical methods when applied to realistic 3D applications where no reference solution exists (verification) and (2) quantify the agreement between recorded and numerically simulated earthquake ground motion (validation). Here we report the results of the Euroseistest verification and validation project, an ongoing international collaborative work organized jointly by the Aristotle University of Thessaloniki, Greece, the Cashima research project (supported by the French nuclear agency, CEA, and the Laue-Langevin institute, ILL, Grenoble), and the Joseph Fourier University, Grenoble, France. The project involves more than 10 international teams from Europe, Japan and the USA. The teams employ the Finite Difference Method (FDM), the Finite Element Method (FEM), the Global Pseudospectral Method (GPSM), the Spectral Element Method (SEM) and the Discrete Element Method (DEM). The project makes use of a new detailed 3D model of the Mygdonian basin (about 5 km wide, 15 km long, sediments reaching about 400 m depth, surface S-wave velocity of 200 m/s). The prime target is to simulate 8 local earthquakes with magnitudes from 3 to 5. In the verification, numerical predictions for frequencies up to 4 Hz for a series of models with increasing structural and rheological complexity are analyzed and compared using quantitative time-frequency goodness-of-fit criteria. Predictions obtained by one FDM team and the SEM team are close to each other and differ from the other predictions (consistent with the ESG2006 exercise, which targeted the Grenoble Valley). Diffractions off the basin edges and induced surface-wave propagation mainly contribute to differences between predictions. The differences are particularly large in the elastic models but remain important also in models with attenuation. In the validation, predictions are compared with the recordings of a local array of 19 surface and borehole accelerometers. The level of agreement is found to be event-dependent. For the largest-magnitude event the agreement is surprisingly good even at high frequencies.
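A much-simplified stand-in for the quantitative time-frequency goodness-of-fit criteria mentioned above is sketched below: it compares spectrogram envelopes of a reference and a synthetic seismogram and maps the normalized misfit to a 0-10 score. The actual criteria used in such exercises (e.g., Kristekova-style time-frequency misfits) are more elaborate; this is illustrative only.

```python
import numpy as np
from scipy.signal import spectrogram

def tf_goodness_of_fit(ref, syn, fs):
    # Compare time-frequency envelopes and map the normalized misfit
    # to a 0-10 score (10 = perfect agreement). Illustrative convention.
    f, t, S_ref = spectrogram(ref, fs=fs)
    _, _, S_syn = spectrogram(syn, fs=fs)
    misfit = np.abs(S_syn - S_ref).sum() / S_ref.sum()
    return 10.0 * np.exp(-misfit)

fs = 100.0                                   # sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)
ref = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.1 * t)        # "recorded" trace
syn = 0.9 * np.sin(2 * np.pi * 1.6 * t) * np.exp(-0.1 * t)  # "simulated" trace
print(f"TF goodness-of-fit (0-10): {tf_goodness_of_fit(ref, syn, fs):.2f}")
```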
Davatzikos, Christos; Rathore, Saima; Bakas, Spyridon; Pati, Sarthak; Bergman, Mark; Kalarot, Ratheesh; Sridharan, Patmaa; Gastounioti, Aimilia; Jahani, Nariman; Cohen, Eric; Akbari, Hamed; Tunc, Birkan; Doshi, Jimit; Parker, Drew; Hsieh, Michael; Sotiras, Aristeidis; Li, Hongming; Ou, Yangming; Doot, Robert K; Bilello, Michel; Fan, Yong; Shinohara, Russell T; Yushkevich, Paul; Verma, Ragini; Kontos, Despina
2018-01-01
The growth of multiparametric imaging protocols has paved the way for quantitative imaging phenotypes that predict treatment response and clinical outcome, reflect underlying cancer molecular characteristics and spatiotemporal heterogeneity, and can guide personalized treatment planning. This growth has underlined the need for efficient quantitative analytics to derive high-dimensional imaging signatures of diagnostic and predictive value in this emerging era of integrated precision diagnostics. This paper presents cancer imaging phenomics toolkit (CaPTk), a new and dynamically growing software platform for analysis of radiographic images of cancer, currently focusing on brain, breast, and lung cancer. CaPTk leverages the value of quantitative imaging analytics along with machine learning to derive phenotypic imaging signatures, based on two-level functionality. First, image analysis algorithms are used to extract comprehensive panels of diverse and complementary features, such as multiparametric intensity histogram distributions, texture, shape, kinetics, connectomics, and spatial patterns. At the second level, these quantitative imaging signatures are fed into multivariate machine learning models to produce diagnostic, prognostic, and predictive biomarkers. Results from clinical studies in three areas are shown: (i) computational neuro-oncology of brain gliomas for precision diagnostics, prediction of outcome, and treatment planning; (ii) prediction of treatment response for breast and lung cancer; and (iii) risk assessment for breast cancer.
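The two-level functionality described above (feature panels feeding multivariate machine learning models) can be sketched generically. The features and classifier below are simple stand-ins, not CaPTk's algorithms.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def image_features(img):
    # Level 1: a small panel of intensity and gradient features per image.
    grad_r, grad_c = np.gradient(img.astype(float))
    hist, _ = np.histogram(img, bins=8, range=(0, 255), density=True)
    return np.concatenate([hist, [img.mean(), img.std(),
                                  np.abs(grad_r).mean(), np.abs(grad_c).mean()]])

rng = np.random.default_rng(0)
imgs = rng.integers(0, 256, size=(200, 32, 32))        # stand-in image patches
labels = (imgs.mean(axis=(1, 2)) > 127.5).astype(int)  # stand-in outcome

# Level 2: feed the feature panel into a multivariate classifier.
X = np.array([image_features(im) for im in imgs])
clf = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X, labels)
print("predicted probability for first case:", clf.predict_proba(X[:1])[0, 1])
```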
Application of a Snow Growth Model to Radar Remote Sensing
NASA Astrophysics Data System (ADS)
Erfani, E.; Mitchell, D. L.
2014-12-01
Microphysical growth processes of diffusion, aggregation and riming are incorporated analytically in a steady-state snow growth model (SGM) that solves the zeroth- and second-moment conservation equations with respect to mass. The SGM is initialized with radar reflectivity (Zw), supersaturation, temperature, and a vertical profile of liquid water content (LWC), and it uses a gamma size distribution (SD) to predict the vertical evolution of size spectra. Aggregation plays an important role in the evolution of snowfall rates: rates produced by aggregation, diffusion and riming together are considerably greater than those produced by diffusion and riming alone, demonstrating the strong interaction between aggregation and riming. The impact of ice particle shape on particle growth rates and fall speeds is represented in the SGM through ice particle mass-dimension (m-D) power laws (m = αD^β). These growth rates are qualitatively consistent with empirical growth rates, with slower (faster) growth predicted for higher (lower) β values. In most models β is treated as constant for a given ice particle habit, but it is well known that β is larger for smaller crystals. Our recent work quantitatively calculates β and α for cirrus clouds as a function of D, where the m-D expression is a second-order polynomial in log-log space. By adapting this method to the SGM, ice particle growth rates and fall speeds are predicted more accurately. Moreover, the size spectra predicted by the SGM agree well with aircraft measurements from Lagrangian spiral descents through frontal clouds, indicating successful modeling of the microphysical processes. Since the lowest Zw over complex topography is often significantly above cloud base, precipitation is often underestimated by radar quantitative precipitation estimates (QPE). Our SGM can be initialized with Zw at the lowest reliable radar echo and consequently improves QPE at ground level.
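For readers who want the moment arithmetic spelled out, the sketch below combines a gamma size distribution N(D) = N0 D^μ e^(-λD) with the m-D power law m = αD^β to obtain total number and ice water content analytically. Parameter values are illustrative only.

```python
import numpy as np
from scipy.special import gamma as G

N0, mu, lam = 1e6, 2.0, 2000.0     # gamma SD parameters (illustrative, SI units)
alpha, beta = 0.0185, 1.9          # m-D power law, m = alpha * D**beta

def moment(n):
    # n-th moment of the gamma SD: M_n = N0 * Gamma(mu+n+1) / lam**(mu+n+1)
    return N0 * G(mu + n + 1) / lam ** (mu + n + 1)

total_number = moment(0)                   # integral of N(D) dD
ice_water_content = alpha * moment(beta)   # integral of m(D) N(D) dD
print(f"N_t = {total_number:.3e} m^-3, IWC = {ice_water_content:.3e} kg m^-3")
```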
NASA Astrophysics Data System (ADS)
Fakhruddin, S. H. M.; Babel, Mukand S.; Kawasaki, Akiyuki
2014-05-01
Coastal inundations are an increasing threat to the lives and livelihoods of people living in low-lying, highly populated coastal areas. According to a 2005 World Bank report, at least 2.6 million people may have drowned due to coastal inundation, particularly from storm surges, over the last 200 years. Forecasting and prediction of natural events such as tropical and extra-tropical cyclones, inland flooding, and severe winter weather provide critical guidance to emergency managers and decision-makers from the local to the national level, with the goal of minimizing both human and economic losses. This guidance is used to facilitate evacuation route planning, post-disaster response and resource deployment, and critical infrastructure protection and securing, and it must be available within a time window in which decision makers can take appropriate action. Recognizing this extreme vulnerability of coastal areas to inundation and flooding, and with a view to improving safety-related services for the community, research should strongly enhance today's forecasting, prediction and early warning capabilities, improve the assessment of coastal vulnerability and risks, and develop adequate prevention, mitigation and preparedness measures. This paper develops an impact-oriented quantitative coastal inundation forecasting and early warning system with social and economic assessment, addressing the challenges faced by coastal communities, enhancing their safety, and supporting sustainable development through improved coastal inundation forecasting and warning systems.
Antiferromagnetic nano-oscillator in external magnetic fields
NASA Astrophysics Data System (ADS)
Checiński, Jakub; Frankowski, Marek; Stobiecki, Tomasz
2017-11-01
We describe the dynamics of an antiferromagnetic nano-oscillator in an external magnetic field with an arbitrary time dependence. The oscillator is powered by a spin current originating from spin-orbit effects in a neighboring heavy-metal layer and is capable of emitting a THz signal in the presence of an additional easy-plane anisotropy. We derive an analytical formula describing the interaction between such a system and an external field, which can affect the character of the output signal. Interactions with magnetic pulses of different shapes, with a sinusoidal magnetic field, and with a sequence of rapidly changing magnetic fields are discussed. We also perform numerical simulations based on the Landau-Lifshitz-Gilbert equation with spin-transfer torque effects to verify the obtained results and find very good quantitative agreement between analytical and numerical predictions.
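A minimal numerical sketch in the spirit of the verification step is given below: it integrates the Landau-Lifshitz-Gilbert equation for a single macrospin in a sinusoidal field. This is a deliberate simplification; the paper treats two coupled antiferromagnetic sublattices with spin-transfer torque, which is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma, alpha = 1.76e11, 0.01          # gyromagnetic ratio (rad/s/T), damping

def H_ext(t):
    # 1 GHz sinusoidal field along z (illustrative amplitude, in tesla)
    return np.array([0.0, 0.0, 0.1 * np.sin(2 * np.pi * 1e9 * t)])

def llg(t, m):
    # Explicit LLG: dm/dt = -gamma/(1+alpha^2) * (m x H + alpha m x (m x H))
    mxH = np.cross(m, H_ext(t))
    return -gamma / (1 + alpha**2) * (mxH + alpha * np.cross(m, mxH))

m0 = np.array([1.0, 0.0, 0.0])        # initial magnetization direction
sol = solve_ivp(llg, (0, 5e-9), m0, max_step=1e-12, rtol=1e-8)
print("final |m| (should stay ~1):", np.linalg.norm(sol.y[:, -1]))
```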
He, Xiaojia; Aker, Winfred G; Leszczynski, Jerzy; Hwang, Huey-Min
2014-03-01
In this report, we critically reviewed selected intrinsic physicochemical properties of engineered nanomaterials (ENMs) and their role in the interaction of ENMs with their immediate surroundings in representative aquatic environments. The behavior of ENMs with respect to dynamic microenvironments at the nano-bio-eco interface, and the resulting impact on their toxicity, fate, and exposure potential, are elaborated. Based on this literature review, we conclude that a holistic approach is urgently needed to fill the knowledge gaps regarding the safety of discharged ENMs. Such a comparative approach affords the capability to recognize and understand the potential hazards of ENMs and their toxicity mechanisms, and ultimately to establish a quantitative and reliable system to predict such outcomes. Copyright © 2014. Published by Elsevier B.V.
A Man-Machine System for Contemporary Counseling Practice: Diagnosis and Prediction.
ERIC Educational Resources Information Center
Roach, Arthur J.
This paper looks at present and future capabilities for diagnosis and prediction in computer-based guidance efforts and reviews the problems and potentials which will accompany the implementation of such capabilities. In addition to necessary procedural refinement in prediction, future developments in computer-based educational and career…
Assessing physical activity using wearable monitors: measures of physical activity.
Butte, Nancy F; Ekelund, Ulf; Westerterp, Klaas R
2012-01-01
Physical activity may be defined broadly as "all bodily actions produced by the contraction of skeletal muscle that increase energy expenditure above basal level." Physical activity is a complex construct that can be classified into major categories qualitatively, quantitatively, or contextually. The quantitative assessment of physical activity using wearable monitors is grounded in the measurement of energy expenditure. Six main categories of wearable monitors are currently available to investigators: pedometers, load transducers/foot-contact monitors, accelerometers, HR monitors, combined accelerometer and HR monitors, and multiple sensor systems. Currently available monitors are capable of measuring total physical activity as well as components of physical activity that play important roles in human health. The selection of wearable monitors for measuring physical activity will depend on the physical activity component of interest, study objectives, characteristics of the target population, and study feasibility in terms of cost and logistics. Future development of sensors and analytical techniques for assessing physical activity should focus on the dynamic ranges of sensors, comparability for sensor output across manufacturers, and the application of advanced modeling techniques to predict energy expenditure and classify physical activities. New approaches for qualitatively classifying physical activity should be validated using direct observation or recording. New sensors and methods for quantitatively assessing physical activity should be validated in laboratory and free-living populations using criterion methods of calorimetry or doubly labeled water.
NASA Astrophysics Data System (ADS)
Singh, Vijay Raj; Yaqoob, Zahid; So, Peter T. C.
2017-02-01
Quantitative phase microscopy (QPM) techniques developed so far primarily belong to high-speed transmitted-light systems that have enough sensitivity to resolve membrane fluctuations and dynamics but lack depth resolution. Therefore, most biomechanics studies using QPM today are confined to simple cells, such as RBCs, without internal organelles. An important step toward greatly extending the biomedical applications of QPM is to develop a next-generation microscope with 3D capability and sufficient temporal resolution to study the biomechanics of complex eukaryotic cells, including the mechanics of their internal compartments. For eukaryotic cells, the depth-sectioning capability is critical and should be sufficient to distinguish nuclear membrane fluctuations from plasma membrane fluctuations. Further, this microscope must provide high temporal resolution, since typical eukaryotic membranes are substantially stiffer than RBCs. A confocal reflectance quantitative phase microscope based on multi-pinhole scanning is presented, with higher temporal resolution and sensitivity for the nuclear and plasma membranes of eukaryotic cells. The system hardware is built around an array of confocal pinholes generated using the 'ON' state of a subset of micro-mirrors of a digital micro-mirror device (DMD, from Texas Instruments); high-speed raster scanning provides 14 ms imaging speed in wide-field mode. A common-path interferometer is integrated at the imaging arm for detection of the specimens' quantitative phase information. The quantitative phase reconstruction of the system is investigated theoretically, and the system is applied to measurements of dimensional fluctuations of both the plasma and nuclear membranes of embryonic stem cells.
A variable capacitance based modeling and power capability predicting method for ultracapacitor
NASA Astrophysics Data System (ADS)
Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang
2018-01-01
Methods for accurate modeling and power capability prediction of ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error introduced by constant-capacitance models, an improved ultracapacitor model based on variable capacitance is proposed, in which the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. A multi-constraint power capability prediction is then developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed to track the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal-voltage simulation results at different temperatures, and the effectiveness of the designed observer is proved under various test conditions. Additionally, power capability prediction results for different time scales and temperatures are compared to study their effects on the ultracapacitor's power capability.
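The variable-capacitance idea can be made concrete with a small sketch: if the main capacitance varies (piecewise) linearly with voltage, C(v) = C0 + k v, the stored charge follows by integration and the state of charge is the ratio to the charge at rated voltage. The single linear segment and parameter values below are illustrative assumptions, not the paper's identified model.

```python
C0, k = 270.0, 50.0          # F and F/V (illustrative)
v_max = 2.7                  # rated voltage (V)

def charge(v):
    # Q(v) = integral_0^v C(u) du = C0*v + 0.5*k*v^2
    return C0 * v + 0.5 * k * v**2

def soc(v):
    # State of charge as the fraction of charge stored at rated voltage.
    return charge(v) / charge(v_max)

for v in (1.0, 2.0, 2.7):
    print(f"v = {v:.1f} V -> SOC = {soc(v):.2%}")
```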
Confronting uncertainty in flood damage predictions
NASA Astrophysics Data System (ADS)
Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Merz, Bruno
2015-04-01
Reliable flood damage models are a prerequisite for the practical usefulness of model results. Oftentimes, traditional uni-variate damage models, such as depth-damage curves, fail to reproduce the variability of observed flood damage. Innovative multi-variate probabilistic modelling approaches are promising for capturing and quantifying the uncertainty involved, and thus for improving the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks. For model evaluation we use empirical damage data from computer-aided telephone interviews compiled after the floods of 2002, 2005 and 2006 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by sub-setting the damage records: one sub-set is used to derive the models, and the remaining records are used to evaluate the predictive performance of the model. Further, we stratify the sample according to catchments, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out at the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error), and reliability, represented by the proportion of observations that fall within the 5%- to 95%-quantile predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty, which is crucial for assessing the reliability of model predictions and improves the usefulness of model results.
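A hedged sketch of this evaluation protocol is given below: bagged regression trees provide an empirical predictive distribution per building (one prediction per tree), from which mean bias, mean absolute error, and coverage of the 5%- to 95%-quantile interval are computed on held-out records. Data and model settings are synthetic stand-ins, not the study's survey data.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(1200, 4))          # e.g., water depth, duration, ... (stand-ins)
y = np.clip(0.5 * X[:, 0] + 0.2 * X[:, 1]
            + rng.normal(scale=0.1, size=1200), 0, 1)  # relative damage in [0, 1]

# Split-sample test: derive the model on one sub-set, evaluate on the rest.
X_tr, X_te, y_tr, y_te = X[:800], X[800:], y[:800], y[800:]
bag = BaggingRegressor(DecisionTreeRegressor(),
                       n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Per-tree predictions give an empirical predictive distribution per building.
per_tree = np.stack([est.predict(X_te) for est in bag.estimators_])
lo, hi = np.quantile(per_tree, [0.05, 0.95], axis=0)

bias = (bag.predict(X_te) - y_te).mean()
mae = np.abs(bag.predict(X_te) - y_te).mean()
coverage = ((y_te >= lo) & (y_te <= hi)).mean()
print(f"mean bias {bias:+.3f} | MAE {mae:.3f} | 5-95% coverage {coverage:.2%}")
```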
A Deep Space Orbit Determination Software: Overview and Event Prediction Capability
NASA Astrophysics Data System (ADS)
Kim, Youngkwang; Park, Sang-Young; Lee, Eunji; Kim, Minsik
2017-06-01
This paper presents an overview of a deep space orbit determination software (DSODS), as well as validation and verification results for its event prediction capabilities. DSODS was developed in the MATLAB object-oriented programming environment to support the Korea Pathfinder Lunar Orbiter (KPLO) mission. DSODS has three major capabilities: celestial event prediction for spacecraft, orbit determination with deep space network (DSN) tracking data, and DSN tracking data simulation. To achieve its functional requirements, DSODS consists of four modules: orbit propagation (OP), event prediction (EP), data simulation (DS), and orbit determination (OD). This paper explains the highest-level data flows between modules in the event prediction, orbit determination, and tracking data simulation processes. Furthermore, to address the event prediction capability of DSODS, this paper introduces the OP and EP modules. The role of the OP module is to handle time and coordinate system conversions, to propagate spacecraft trajectories, and to handle the ephemerides of spacecraft and celestial bodies. Currently, the OP module utilizes the General Mission Analysis Tool (GMAT) as a third-party software component for high-fidelity deep space propagation, as well as time and coordinate system conversions. The role of the EP module is to predict celestial events, including eclipses and ground-station visibilities, and this paper presents the functionality requirements of the EP module. The validation and verification results show that, for most cases, event prediction errors were less than 10 milliseconds when compared with flight-proven mission analysis tools such as GMAT and Systems Tool Kit (STK). Thus, we conclude that DSODS is capable of predicting events for the KPLO in real mission applications.
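As a toy version of what an event prediction module computes, the sketch below checks ground-station visibility as the elevation of the spacecraft above the station's local horizon against a mask angle. It assumes a spherical Earth and illustrative position vectors; the real EP module works through GMAT with proper time and frame conversions.

```python
import numpy as np

def elevation_deg(r_station, r_sc):
    # Elevation = angle between the station-to-spacecraft line of sight
    # and the local horizon; zenith approximated by the station's radial.
    zenith = r_station / np.linalg.norm(r_station)
    line = r_sc - r_station
    s = line / np.linalg.norm(line)
    return np.degrees(np.arcsin(np.clip(np.dot(s, zenith), -1.0, 1.0)))

R_E = 6378.137e3
station = R_E * np.array([1.0, 0.0, 0.0])        # station on the equator (ECEF, m)
spacecraft = np.array([R_E + 5e5, 2e5, 1e5])     # illustrative spacecraft position

mask = 10.0                                      # elevation mask angle (deg)
el = elevation_deg(station, spacecraft)
print(f"elevation {el:.1f} deg -> visible: {el > mask}")
```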
The ease and rapidity of quantitative DNA sequence detection by real-time PCR instruments promises to make their use increasingly common for the microbial analysis of many different types of environmental samples. To fully exploit the capabilities of these instruments, correspondin...
USDA-ARS?s Scientific Manuscript database
Vibrio parahaemolyticus is a significant human pathogen capable of causing foodborne gastroenteritis associated with the consumption of contaminated raw or undercooked seafood. Quantitative RT-PCR (qRT-PCR) is a useful tool for studying gene expression in V. parahaemolyticus to characterize the viru...
ERIC Educational Resources Information Center
Burton, Hilary D.
TIS (Technology Information System) is an intelligent gateway system capable of performing quantitative evaluation and analysis of bibliographic citations using a set of Process functions. Originally developed by Lawrence Livermore National Laboratory (LLNL) to analyze information retrieved from three major federal databases, DOE/RECON,…
Analysis of the impact of safeguards criteria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mullen, M.F.; Reardon, P.T.
As part of the US Program of Technical Assistance to IAEA Safeguards, the Pacific Northwest Laboratory (PNL) was asked to assist in developing and demonstrating a model for assessing the impact of setting criteria for the application of IAEA safeguards. This report presents the results of PNL's work on the task. The report is in three parts. The first explains the technical approach and methodology. The second contains an example application of the methodology. The third presents the conclusions of the study. PNL used the model and computer programs developed as part of Task C.5 (Estimation of Inspection Efforts) of the Program of Technical Assistance. The example application of the methodology involves low-enriched uranium conversion and fuel fabrication facilities. The effects of variations in seven parameters are considered: false alarm probability, goal probability of detection, detection goal quantity, the plant operator's measurement capability, the inspector's variables measurement capability, the inspector's attributes measurement capability, and annual plant throughput. Among the key results and conclusions of the analysis are the following: the variables with the greatest impact on the probability of detection are the inspector's measurement capability, the goal quantity, and the throughput; the variables with the greatest impact on inspection costs are the throughput, the goal quantity, and the goal probability of detection; and there are important interactions between variables, that is, the effects of a given variable often depend on the level or value of some other variable. With the methodology used in this study, these interactions can be quantitatively analyzed, and reasonably good approximate prediction equations can be developed.
Essential Set of Molecular Descriptors for ADME Prediction in Drug and Environmental Chemical Space
Historically, the disciplines of pharmacology and toxicology have embraced quantitative structure-activity relationships (QSAR) and quantitative structure-property relationships (QSPR) to predict ADME properties or biological activities of untested chemicals. The question arises ...
NASA Astrophysics Data System (ADS)
Nuraeni, E.; Rahmat, A.
2018-05-01
Cognitive schemes of plant anatomy concepts are formed by processing qualitative and quantitative data obtained from microscopic observations. To enhance students' quantitative literacy, the strategy of the plant anatomy course was modified by adding a task to analyze quantitative data produced by quantitative measurement of plant anatomy, guided by the course material. Participants in this study were 24 biology students and 35 biology education students. A quantitative literacy test, a test of complex thinking in plant anatomy, and a questionnaire were used to evaluate the course. Quantitative literacy data were collected with the quantitative literacy test using the rubric from the Association of American Colleges and Universities; complex thinking in plant anatomy was tested following Marzano; and a questionnaire was administered. Quantitative literacy data were categorized according to modified Rhodes and Finley categories. The results showed that the quantitative literacy of biology education students is better than that of biology students.
Band Alignment and Controllable Electron Migration between Rutile and Anatase TiO2
Mi, Yang; Weng, Yuxiang
2015-01-01
TiO2 is the most promising semiconductor for photocatalytic splitting of water for hydrogen and for degradation of pollutants. The most photocatalytically active form is the mixed phase of its two polymorphs, anatase and rutile, rather than either pristine composition. This synergetic effect is understood through a staggered band alignment favorable to spatial charge separation. However, electron migration has been reported in both directions between the two phases, and the reason is still unknown. We determined the band alignment by a novel method, transient infrared absorption-excitation energy scanning spectra, showing that the conduction bands are aligned; the electron migration direction is therefore controlled by dynamical factors, such as varying the particle size of anatase, or putting electron or hole scavengers on the surface of the anatase phase, the rutile phase, or both. A quantitative criterion capable of predicting the migration direction under various conditions, including particle size and surface chemical reactions, is proposed, and its predictions have been verified experimentally in several typical cases. This offers great potential for designing more effective titania photocatalysts. PMID:26169699
NASA Technical Reports Server (NTRS)
Phillips, Dave; Haas, William; Barth, Tim; Benjamin, Perakath; Graul, Michael; Bagatourova, Olga
2005-01-01
Range Process Simulation Tool (RPST) is a computer program that assists managers in rapidly predicting and quantitatively assessing the operational effects of proposed technological additions to, and/or upgrades of, complex facilities and engineering systems such as the Eastern Test Range. Originally designed for application to space transportation systems, RPST is also suitable for assessing effects of proposed changes in industrial facilities and large organizations. RPST follows a model-based approach that includes finite-capacity schedule analysis and discrete-event process simulation. A component-based, scalable, open architecture makes RPST easily and rapidly tailorable for diverse applications. Specific RPST functions include: (1) definition of analysis objectives and performance metrics; (2) selection of process templates from a process-template library; (3) configuration of process models for detailed simulation and schedule analysis; (4) design of operations-analysis experiments; (5) schedule- and simulation-based process analysis; and (6) optimization of performance by use of genetic algorithms and simulated annealing. The main benefits afforded by RPST are provision of information that can be used to reduce costs of operation and maintenance, and the capability for affordable, accurate, and reliable prediction and exploration of the consequences of many alternative proposed decisions.
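The discrete-event core that such a tool builds on can be shown in a few lines: a time-ordered event queue serving arrivals through a facility with finite capacity. Process names, durations, and the FIFO queue discipline below are illustrative, not RPST internals.

```python
import heapq

def simulate(jobs, capacity, duration):
    """Serve `jobs` (arrival times) through a facility with `capacity` servers."""
    events, busy, queue, t_done = [], 0, [], {}
    for i, t_arr in enumerate(jobs):
        heapq.heappush(events, (t_arr, "arrive", i))
    while events:
        t, kind, i = heapq.heappop(events)       # always advance to next event
        if kind == "arrive":
            if busy < capacity:
                busy += 1
                heapq.heappush(events, (t + duration, "depart", i))
            else:
                queue.append(i)                  # all servers busy: wait in FIFO
        else:                                    # a departure frees a server
            t_done[i] = t
            if queue:
                j = queue.pop(0)
                heapq.heappush(events, (t + duration, "depart", j))
            else:
                busy -= 1
    return t_done

# Four jobs, two servers, fixed 5-unit service time.
print(simulate(jobs=[0, 1, 2, 10], capacity=2, duration=5))
```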
In vitro ovine articular chondrocyte proliferation: experiments and modelling.
Mancuso, L; Liuzzo, M I; Fadda, S; Pisu, M; Cincotti, A; Arras, M; La Nasa, G; Concas, A; Cao, G
2010-06-01
This study focuses on analysis of in vitro cultures of chondrocytes from ovine articular cartilage. Isolated cells were seeded in Petri dishes, then expanded to confluence and phenotypically characterized by flow cytometry. The sigmoidal temporal profile of total counts was obtained by classic haemocytometry and corresponding cell size distributions were measured electronically using a Coulter Counter. A mathematical model recently proposed (1) was adopted for quantitative interpretation of these experimental data. The model is based on a 1-D (that is, mass-structured), single-staged population balance approach capable of taking into account contact inhibition at confluence. The model's parameters were determined by fitting measured total cell counts and size distributions. Model reliability was verified by predicting cell proliferation counts and corresponding size distributions at culture times longer than those used when tuning the model's parameters. It was found that adoption of cell mass as the intrinsic characteristic of a growing chondrocyte population enables sigmoidal temporal profiles of total counts in the Petri dish, as well as cell size distributions at 'balanced growth', to be adequately predicted.
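As a simple surrogate for the sigmoidal profiles the population balance model reproduces, the sketch below fits a logistic curve (growth capped by contact inhibition at confluence) to illustrative total-count data; the actual model in the paper is mass-structured and considerably richer.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, N0, K, r):
    # Contact inhibition caps growth at carrying capacity K (confluence).
    return K / (1 + (K / N0 - 1) * np.exp(-r * t))

t_days = np.array([0, 1, 2, 3, 4, 5, 6, 7], dtype=float)
counts = np.array([5e4, 9e4, 1.7e5, 3.0e5, 4.6e5, 5.6e5, 6.0e5, 6.1e5])  # illustrative

popt, _ = curve_fit(logistic, t_days, counts, p0=[5e4, 6e5, 1.0])
N0, K, r = popt
print(f"fitted: N0 = {N0:.2e} cells, K = {K:.2e} cells, r = {r:.2f} / day")
```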
NASA Astrophysics Data System (ADS)
Petković, Dalibor; Shamshirband, Shahaboddin; Saboohi, Hadi; Ang, Tan Fong; Anuar, Nor Badrul; Rahman, Zulkanain Abdul; Pavlović, Nenad T.
2014-07-01
The quantitative assessment of image quality is an important consideration in any type of imaging system. The modulation transfer function (MTF) is a graphical description of the sharpness and contrast of an imaging system or of its individual components; it is also known as the spatial frequency response. The MTF curve has different meanings according to the corresponding frequency. The MTF of an optical system specifies the contrast transmitted by the system as a function of image size and is determined by the inherent optical properties of the system. In this study, the polynomial and radial basis function (RBF) kernels are applied in Support Vector Regression (SVR) to estimate and predict the MTF value of an actual optical system from experimental tests. Instead of minimizing the observed training error, SVR_poly and SVR_rbf attempt to minimize the generalization error bound so as to achieve generalized performance. The experimental results show that the SVR_rbf approach improves predictive accuracy and generalization capability compared with the SVR_poly methodology.
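The kernel comparison can be reproduced generically with scikit-learn, as sketched below on a synthetic Gaussian-shaped MTF curve; the hyperparameters and data are assumptions, not the study's experimental MTF measurements.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
f = np.linspace(0, 1, 120)[:, None]          # normalized spatial frequency
mtf = np.exp(-(f.ravel() / 0.45) ** 2) + rng.normal(scale=0.02, size=120)

tr = rng.permutation(120)[:80]               # random train/test split
te = np.setdiff1d(np.arange(120), tr)

for name, model in [("poly", SVR(kernel="poly", degree=3, C=10.0)),
                    ("rbf", SVR(kernel="rbf", C=10.0, gamma="scale"))]:
    model.fit(f[tr], mtf[tr])
    err = mean_squared_error(mtf[te], model.predict(f[te]))
    print(f"SVR_{name}: test MSE = {err:.5f}")
```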
Multi-Dimensional Calibration of Impact Dynamic Models
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.
2011-01-01
NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is ongoing to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration for impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test data at only a few critical locations. Although this approach provides a direct measure of the model's predictive capability, overall system behavior is only qualitatively assessed using full-vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work, impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time-based metrics and multi-dimensional orthogonality metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters that reconcile test with analysis. The process is illustrated using simulated experiment data.
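One standard orthogonality-style metric for comparing shape sets is the modal assurance criterion (MAC); the sketch below computes a MAC matrix between test-derived and model-derived impact shapes. Whether this exact metric matches the paper's multi-dimensional metric is an assumption, and the shape matrices are illustrative.

```python
import numpy as np

def mac(phi_test, phi_model):
    """MAC matrix: entry (i, j) near 1 means shapes i and j are correlated."""
    num = np.abs(phi_test.T @ phi_model) ** 2
    den = np.outer(np.sum(phi_test**2, axis=0), np.sum(phi_model**2, axis=0))
    return num / den

rng = np.random.default_rng(0)
phi_t = rng.normal(size=(50, 3))                  # 3 impact shapes at 50 sensors
phi_m = phi_t + rng.normal(scale=0.2, size=(50, 3))  # perturbed "model" shapes
print(np.round(mac(phi_t, phi_m), 2))             # diagonal-dominant if calibrated
```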
NASA Astrophysics Data System (ADS)
Asoodeh, Mojtaba; Bagheripour, Parisa
2012-01-01
Measurement of compressional, shear, and Stoneley wave velocities, carried out by dipole sonic imager (DSI) logs, provides invaluable data for geophysical interpretation, geomechanical studies and hydrocarbon reservoir characterization. The present study proposes an improved methodology for making a quantitative formulation between conventional well logs and sonic wave velocities. First, sonic wave velocities were predicted from conventional well logs using artificial neural network, fuzzy logic, and neuro-fuzzy algorithms. Subsequently, a committee machine with intelligent systems was constructed using a hybrid genetic algorithm-pattern search technique, with the outputs of the artificial neural network, fuzzy logic and neuro-fuzzy models used as inputs to the committee machine. It is capable of improving the accuracy of the final prediction by integrating the outputs of the aforementioned intelligent systems. The hybrid genetic algorithm-pattern search tool, embodied in the structure of the committee machine, assigns a weight factor to each individual intelligent system, indicating its contribution to the overall prediction of DSI parameters. This methodology was implemented in the Asmari formation, the major carbonate reservoir rock of an Iranian oil field. A group of 1,640 data points was used to construct the intelligent model, and a group of 800 data points was employed to assess the reliability of the proposed model. The results showed that the committee machine with intelligent systems performed more effectively than the individual intelligent systems alone.
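The committee-machine combination step can be sketched as a constrained weight optimization: find non-negative weights summing to one that minimize the error of the weighted expert outputs. A generic gradient-based optimizer stands in below for the paper's hybrid genetic algorithm-pattern search; the expert outputs are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = rng.normal(size=300)                              # measured velocity (scaled)
experts = np.stack([y + rng.normal(scale=s, size=300) # ANN / fuzzy / neuro-fuzzy
                    for s in (0.3, 0.5, 0.4)], axis=1)

def mse(w):
    # Error of the weighted combination of the three expert predictions.
    return np.mean((experts @ w - y) ** 2)

res = minimize(mse, x0=np.ones(3) / 3,
               bounds=[(0, 1)] * 3,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print("committee weights:", np.round(res.x, 3), "| MSE:", round(res.fun, 4))
```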
Huang, Ri-Bo; Du, Qi-Shi; Wei, Yu-Tuo; Pang, Zong-Wen; Wei, Hang; Chou, Kuo-Chen
2009-02-07
Predicting the bioactivity of peptides and proteins is an important challenge in drug development and protein engineering. In this study we introduce a novel approach, the so-called "physics and chemistry-driven artificial neural network (Phys-Chem ANN)", to deal with such a problem. Unlike the existing ANN approaches, which were designed under the inspiration of biological neural system, the Phys-Chem ANN approach is based on the physical and chemical principles, as well as the structural features of proteins. In the Phys-Chem ANN model the "hidden layers" are no longer virtual "neurons", but real structural units of proteins and peptides. It is a hybridization approach, which combines the linear free energy concept of quantitative structure-activity relationship (QSAR) with the advanced mathematical technique of ANN. The Phys-Chem ANN approach has adopted an iterative and feedback procedure, incorporating both machine-learning and artificial intelligence capabilities. In addition to making more accurate predictions for the bioactivities of proteins and peptides than is possible with the traditional QSAR approach, the Phys-Chem ANN approach can also provide more insights about the relationship between bioactivities and the structures involved than the ANN approach does. As an example of the application of the Phys-Chem ANN approach, a predictive model for the conformational stability of human lysozyme is presented.
Quantitative structure-activity relationships (QSARs) are being developed to predict the toxicological endpoints for untested chemicals similar in structure to chemicals that have known experimental toxicological data. Based on a very large number of predetermined descriptors, a...
NASA Astrophysics Data System (ADS)
Peysson, Y.; Bonoli, P. T.; Chen, J.; Garofalo, A.; Hillairet, J.; Li, M.; Qian, J.; Shiraiwa, S.; Decker, J.; Ding, B. J.; Ekedahl, A.; Goniche, M.; Zhai, X.
2017-10-01
The Lower Hybrid (LH) wave is widely used in existing tokamaks for tailoring the current density profile or extending pulse duration to steady-state regimes. Its high efficiency makes it particularly attractive for a fusion reactor, leading to its consideration for this purpose in the ITER tokamak. Nevertheless, while the basics of the LH wave in tokamak plasmas are well known, quantitative modeling of experimental observations based on first principles remains a highly challenging exercise, despite the considerable numerical effort achieved so far. In this context, a rigorous methodology must be carried out in the simulations to identify the minimum number of physical mechanisms that must be considered to reproduce experimental shot-to-shot observations and also scalings (density, power spectrum). Based on recent simulations carried out for the EAST, Alcator C-Mod and Tore Supra tokamaks, the state of the art in LH modeling is reviewed. The capability of fast electron bremsstrahlung, internal inductance li, and LH driven current at zero loop voltage to constrain LH simulations all together is discussed, as well as the need for further improvements (diagnostics, codes, LH model) for robust interpretative and predictive simulations.
Predicting episodic memory formation for movie events
Tang, Hanlin; Singer, Jed; Ison, Matias J.; Pivazyan, Gnel; Romaine, Melissa; Frias, Rosa; Meller, Elizabeth; Boulin, Adrianna; Carroll, James; Perron, Victoria; Dowcett, Sarah; Arellano, Marlise; Kreiman, Gabriel
2016-01-01
Episodic memories are long lasting and full of detail, yet imperfect and malleable. We quantitatively evaluated recollection of short audiovisual segments from movies as a proxy to real-life memory formation in 161 subjects at 15 minutes up to a year after encoding. Memories were reproducible within and across individuals, showed the typical decay with time elapsed between encoding and testing, were fallible yet accurate, and were insensitive to low-level stimulus manipulations but sensitive to high-level stimulus properties. Remarkably, memorability was also high for single movie frames, even one year post-encoding. To evaluate what determines the efficacy of long-term memory formation, we developed an extensive set of content annotations that included actions, emotional valence, visual cues and auditory cues. These annotations enabled us to document the content properties that showed a stronger correlation with recognition memory and to build a machine-learning computational model that accounted for episodic memory formation in single events for group averages and individual subjects with an accuracy of up to 80%. These results provide initial steps towards the development of a quantitative computational theory capable of explaining the subjective filtering steps that lead to how humans learn and consolidate memories. PMID:27686330
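A minimal stand-in for the annotation-based model is sketched below: a logistic classifier predicting whether an event is remembered from a few content features. The features and data are invented for illustration and are much simpler than the study's annotation set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(0, 2, n),        # action present (illustrative annotation)
    rng.uniform(-1, 1, n),        # emotional valence
    rng.integers(0, 2, n),        # salient visual cue
    rng.integers(0, 2, n),        # salient auditory cue
])
remembered = (0.8 * X[:, 0] + 0.6 * np.abs(X[:, 1]) + 0.4 * X[:, 2]
              + rng.normal(scale=0.5, size=n) > 0.7).astype(int)

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, remembered, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```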
A new method of quantitative cavitation assessment in the field of a lithotripter.
Jöchle, K; Debus, J; Lorenz, W J; Huber, P
1996-01-01
Transient cavitation seems to be a very important effect in the interaction of pulsed high-energy ultrasound with biologic tissues. Using a newly developed laser optical system we are able to determine the life-span of transient cavities (relative error less than ±5%) in the focal region of a lithotripter (Lithostar, Siemens). The laser scattering method is based on the detection of scattered laser light reflected during a bubble's life. This method requires no sensor material in the pathway of the sound field and thus avoids any interference with bubble dynamics during the measurement. Knowledge of the time of bubble decay allows conclusions about the destructive power of the cavities. By combining the results of life-span measurements with the maximum bubble radius obtained from stroboscopic photographs, we found that the measured time of bubble decay and the time predicted using Rayleigh's law differ by only about 13%, even in the case of complex bubble fields. It can be shown that the laser scattering method can assess cavitation events quantitatively. Moreover, it will enable us to compare different medical ultrasound sources that are capable of generating cavitation.
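The Rayleigh comparison uses the classical collapse-time relation t_c = 0.915 R_max sqrt(ρ/Δp). A worked example with illustrative water values (the bubble radius and driving pressure are assumptions, not the paper's measured values):

```python
import math

rho = 998.0            # water density, kg/m^3
delta_p = 1.0e5        # driving pressure difference, Pa (~1 atm)
R_max = 1.0e-3         # maximum bubble radius, m (illustrative)

# Rayleigh collapse time: t_c = 0.915 * R_max * sqrt(rho / delta_p)
t_collapse = 0.915 * R_max * math.sqrt(rho / delta_p)
print(f"Rayleigh collapse time: {t_collapse * 1e6:.1f} microseconds")
```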
Predicting episodic memory formation for movie events.
Tang, Hanlin; Singer, Jed; Ison, Matias J; Pivazyan, Gnel; Romaine, Melissa; Frias, Rosa; Meller, Elizabeth; Boulin, Adrianna; Carroll, James; Perron, Victoria; Dowcett, Sarah; Arellano, Marlise; Kreiman, Gabriel
2016-09-30
Episodic memories are long lasting and full of detail, yet imperfect and malleable. We quantitatively evaluated recollection of short audiovisual segments from movies as a proxy to real-life memory formation in 161 subjects at 15 minutes up to a year after encoding. Memories were reproducible within and across individuals, showed the typical decay with time elapsed between encoding and testing, were fallible yet accurate, and were insensitive to low-level stimulus manipulations but sensitive to high-level stimulus properties. Remarkably, memorability was also high for single movie frames, even one year post-encoding. To evaluate what determines the efficacy of long-term memory formation, we developed an extensive set of content annotations that included actions, emotional valence, visual cues and auditory cues. These annotations enabled us to document the content properties that showed a stronger correlation with recognition memory and to build a machine-learning computational model that accounted for episodic memory formation in single events for group averages and individual subjects with an accuracy of up to 80%. These results provide initial steps towards the development of a quantitative computational theory capable of explaining the subjective filtering steps that lead to how humans learn and consolidate memories.
Mapping surface charge density of lipid bilayers by quantitative surface conductivity microscopy
Klausen, Lasse Hyldgaard; Fuhs, Thomas; Dong, Mingdong
2016-01-01
Local surface charge density of lipid membranes influences membrane–protein interactions leading to distinct functions in all living cells, and it is a vital parameter in understanding membrane-binding mechanisms, liposome design and drug delivery. Despite the significance, no method has so far been capable of mapping surface charge densities under physiologically relevant conditions. Here, we use a scanning nanopipette setup (scanning ion-conductance microscope) combined with a novel algorithm to investigate the surface conductivity near supported lipid bilayers, and we present a new approach, quantitative surface conductivity microscopy (QSCM), capable of mapping surface charge density with high quantitative precision and nanoscale resolution. The method is validated through an extensive theoretical analysis of the ionic current at the nanopipette tip, and we demonstrate the capacity of QSCM by mapping the surface charge density of model cationic, anionic and zwitterionic lipids with results accurately matching theoretical values. PMID:27561322
Mapping surface charge density of lipid bilayers by quantitative surface conductivity microscopy
NASA Astrophysics Data System (ADS)
Klausen, Lasse Hyldgaard; Fuhs, Thomas; Dong, Mingdong
2016-08-01
Local surface charge density of lipid membranes influences membrane-protein interactions leading to distinct functions in all living cells, and it is a vital parameter in understanding membrane-binding mechanisms, liposome design and drug delivery. Despite the significance, no method has so far been capable of mapping surface charge densities under physiologically relevant conditions. Here, we use a scanning nanopipette setup (scanning ion-conductance microscope) combined with a novel algorithm to investigate the surface conductivity near supported lipid bilayers, and we present a new approach, quantitative surface conductivity microscopy (QSCM), capable of mapping surface charge density with high quantitative precision and nanoscale resolution. The method is validated through an extensive theoretical analysis of the ionic current at the nanopipette tip, and we demonstrate the capacity of QSCM by mapping the surface charge density of model cationic, anionic and zwitterionic lipids with results accurately matching theoretical values.
Mapping surface charge density of lipid bilayers by quantitative surface conductivity microscopy.
Klausen, Lasse Hyldgaard; Fuhs, Thomas; Dong, Mingdong
2016-08-26
Local surface charge density of lipid membranes influences membrane-protein interactions leading to distinct functions in all living cells, and it is a vital parameter in understanding membrane-binding mechanisms, liposome design and drug delivery. Despite the significance, no method has so far been capable of mapping surface charge densities under physiologically relevant conditions. Here, we use a scanning nanopipette setup (scanning ion-conductance microscope) combined with a novel algorithm to investigate the surface conductivity near supported lipid bilayers, and we present a new approach, quantitative surface conductivity microscopy (QSCM), capable of mapping surface charge density with high quantitative precision and nanoscale resolution. The method is validated through an extensive theoretical analysis of the ionic current at the nanopipette tip, and we demonstrate the capacity of QSCM by mapping the surface charge density of model cationic, anionic and zwitterionic lipids with results accurately matching theoretical values.
A Quantitative Analysis of the Benefits of Prototyping Fixed-Wing Aircraft
2012-06-14
in then-year dollars. The RDT&E costs through FSD were provided in then-year dollars as a lump sum. Additionally, the cost of full capability ...development was available in then-year dollars as a lump sum. Full capability development was the RDT&E that continued after the completion of the FSD...contract, which ended in July 1984. In [31], the authors stated that full capability development occurred through approximately 1990
What do we gain with Probabilistic Flood Loss Models?
NASA Astrophysics Data System (ADS)
Schroeter, K.; Kreibich, H.; Vogel, K.; Merz, B.; Lüdtke, S.
2015-12-01
The reliability of flood loss models is a prerequisite for their practical usefulness. Oftentimes, traditional uni-variate damage models, such as depth-damage curves, fail to reproduce the variability of observed flood damage. Innovative multi-variate probabilistic modelling approaches are promising for capturing and quantifying the uncertainty involved, and thus for improving the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks, and traditional stage-damage functions cast in a probabilistic framework. For model evaluation we use empirical damage data from computer-aided telephone interviews compiled after the floods of 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by sub-setting the damage records: one sub-set is used to derive the models, and the remaining records are used to evaluate the predictive performance of the model. Further, we stratify the sample according to catchments, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out at the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error), and reliability, represented by the proportion of observations that fall within the 5%- to 95%-quantile predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty, which is crucial for assessing the reliability of model predictions and improves the usefulness of model results.
Liquid Chromatography-Mass Spectrometry-based Quantitative Proteomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Fang; Liu, Tao; Qian, Weijun
2011-07-22
Liquid chromatography-mass spectrometry (LC-MS)-based quantitative proteomics has become increasingly applied for a broad range of biological applications due to growing capabilities for broad proteome coverage and good accuracy in quantification. Herein, we review the current LC-MS-based quantification methods with respect to their advantages and limitations, and highlight their potential applications.
In this study, a quantitative liquid chromatography-mass spectrometry (LC-MS) technique capable of measuring the concentrations of heterocyclic nitrogen compounds in ambient fine aerosols (PM2.5) has been developed. Quadrupole time-of-flight (Q-TOF) MS technology is used to provi...
Building a Predictive Capability for Decision-Making that Supports MultiPEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carmichael, Joshua Daniel
Multi-phenomenological explosion monitoring (multiPEM) is a developing science that uses multiple geophysical signatures of explosions to better identify and characterize their sources. MultiPEM researchers seek to integrate explosion signatures together to provide stronger detection, parameter estimation, or screening capabilities between different sources or processes. This talk will address forming a predictive capability for screening waveform explosion signatures to support multiPEM.
New smoke predictions for Alaska in NOAA’s National Air Quality Forecast Capability
NASA Astrophysics Data System (ADS)
Davidson, P. M.; Ruminski, M.; Draxler, R.; Kondragunta, S.; Zeng, J.; Rolph, G.; Stajner, I.; Manikin, G.
2009-12-01
Smoke from wildfire is an important component of fine particle pollution, which is responsible for tens of thousands of premature deaths each year in the US. In Alaska, wildfire smoke is the leading cause of poor air quality in summer. Smoke forecast guidance helps air quality forecasters and the public take steps to limit exposure to airborne particulate matter. A new smoke forecast guidance tool, built by a cross-NOAA team, leverages the efforts of NOAA's partners at the USFS on wildfire emissions information and of EPA in coordinating with state/local air quality forecasters. Required operational deployment criteria, in the categories of objective verification, subjective feedback, and production readiness, were demonstrated in experimental testing during 2008-2009, for addition to the operational products in NOAA's National Air Quality Forecast Capability. The Alaska smoke forecast tool is an adaptation of NOAA's smoke predictions implemented operationally for the lower 48 states (CONUS) in 2007. The tool integrates satellite information on the location of wildfires with weather (North American mesoscale model) and smoke dispersion (HYSPLIT) models to produce daily predictions of smoke transport for Alaska, in binary and graphical formats. Hour-by-hour predictions at 12 km grid resolution of smoke at the surface and in the column are provided each day by 13 UTC, extending through midnight the next day. Forecast accuracy and reliability are monitored against benchmark criteria. While wildfire activity in the CONUS is year-round, intense wildfire activity in Alaska is limited to the summer. Initial experimental testing during summer 2008 was hindered by unusually limited wildfire activity and very cloudy conditions. In contrast, heavier-than-average wildfire activity during summer 2009 provided a representative basis (more than 60 days of wildfire smoke) for demonstrating the required prediction accuracy. A new satellite observation product was developed for routine near-real-time verification of these predictions. The footprint of the predicted smoke from identified fires is verified with satellite observations of the spatial extent of smoke aerosols (5 km resolution). Based on geostationary aerosol optical depth measurements that provide good time resolution of the horizontal spatial extent of the plumes, these observations do not yield quantitative concentrations of smoke particles at the surface. Predicted surface smoke concentrations are consistent with the limited number of in situ observations of total fine particle mass from all sources; however, they are much higher than predicted for most CONUS fires. To assess the uncertainty associated with fire emissions estimates, sensitivity analyses are in progress.
NASA Astrophysics Data System (ADS)
Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda
2018-05-01
This paper presents an overview of vertically integrated comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall predictive models consist of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model, and a residual stress model, which can be used to predict the mechanical properties of parts additively manufactured by directed energy deposition with blown powder, as well as by other additive manufacturing processes. Critical governing equations of each model and how the various modules are connected are illustrated. Various illustrative results along with corresponding experimental validation results are presented to demonstrate the capabilities and fidelity of the models. The good correlations with experimental results prove that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.
Brackman, Emily H; Morris, Blair W; Andover, Margaret S
2016-01-01
The interpersonal psychological theory of suicide (IPTS) provides a useful framework for considering the relationship between non-suicidal self-injury (NSSI) and suicide. Researchers have proposed that NSSI increases acquired capability for suicide. We predicted that both NSSI frequency and the IPTS acquired capability construct (decreased fear of death and increased pain tolerance) would separately interact with suicidal ideation to predict suicide attempts. Undergraduate students (N = 113) completed self-report questionnaires, and a subsample (n = 66) also completed a pain sensitivity task. NSSI frequency significantly moderated the association between suicidal ideation and suicide attempts. However, in a separate model, acquired capability did not moderate this relationship. Our understanding of the relationship between suicidal ideation and suicidal behavior can be enhanced by factors associated with NSSI that are distinct from the acquired capability construct.
Chen, Kun; Wu, Tao; Wei, Haoyun; Zhou, Tian; Li, Yan
2016-01-01
Coherent anti-Stokes Raman scattering (CARS) microscopy is a quantitative, chemically specific, and label-free optical imaging technique for studying inhomogeneous systems. However, the complicating influence of the nonresonant response on the CARS signal severely limits its sensitivity and specificity, and especially limits the extent to which CARS microscopy has been used as a fully quantitative imaging technique. On the basis of the spectral focusing mechanism, we establish a dual-soliton Stokes based CARS microspectroscopy and microscopy scheme, using a single fiber laser, capable of quantifying the spatial distribution of densities and chemical composition within inhomogeneous samples. The dual-soliton Stokes scheme not only removes the nonresonant background but also allows robust acquisition of multiple characteristic vibrational frequencies. This all-fiber laser source can cover the entire fingerprint region (800-2200 cm−1) with a spectral resolution of 15 cm−1. We demonstrate that quantitative determination of the degree of lipid-chain unsaturation in a fatty acid mixture can be achieved by characterizing the C=C stretching and CH2 deformation vibrations. For microscopy purposes, we show that the spatially inhomogeneous distribution of lipid droplets can be further quantitatively visualized using this quantified degree of lipid unsaturation in the acyl chain as contrast in hyperspectral CARS images. The combination of a compact excitation source and a background-free capability to facilitate the extraction of quantitative composition information from multiple spectral peaks will enable wider applications of quantitative chemical imaging in studying biological and material systems. PMID:27867704
Electric Potential and Electric Field Imaging with Dynamic Applications & Extensions
NASA Technical Reports Server (NTRS)
Generazio, Ed
2017-01-01
The technology and methods for remote quantitative imaging of electrostatic potentials and electrostatic fields in and around objects and in free space are presented. Electric field imaging (EFI) technology may be applied to characterize intrinsic or existing electric potentials and electric fields, or an externally generated electrostatic field may be used for volumes to be inspected with EFI. The baseline sensor technology (e-Sensor) and its construction, optional electric field generation (quasi-static generator), and current e-Sensor enhancements (ephemeral e-Sensor) are discussed. Critical design elements of current linear and real-time two-dimensional (2D) measurement systems are highlighted, and the development of a three-dimensional (3D) EFI system is presented. Demonstrations for structural, electronic, human, and memory applications are shown. Recent work demonstrates that phonons may be used to create and annihilate electric dipoles within structures. Phonon-induced dipoles are ephemeral, and their polarization, strength, and location may be quantitatively characterized by EFI, providing a new subsurface phonon-EFI imaging technology. Results from real-time imaging of combustion and ion flow, and their measurement complications, will be discussed. Extensions to environmental, space, and subterranean applications will be presented, and initial results for quantitatively characterizing material properties are shown. A wearable EFI system has been developed using fundamental EFI concepts. These new EFI capabilities are demonstrated to characterize electric charge distribution, creating a new field of study embracing areas of interest including electrostatic discharge (ESD) mitigation, manufacturing quality control, crime scene forensics, design and materials selection for advanced sensors, combustion science, on-orbit space potential, container inspection, remote characterization of electronic circuits and level of activation, dielectric morphology of structures, tether integrity, organic molecular memory, atmospheric science, weather prediction, earthquake prediction, and medical diagnostic and treatment efficacy applications such as cardiac polarization wave propagation and electromyography imaging.
A direct broadcast satellite-audio experiment
NASA Technical Reports Server (NTRS)
Vaisnys, Arvydas; Abbe, Brian; Motamedi, Masoud
1992-01-01
System studies have been carried out over the past three years at the Jet Propulsion Laboratory (JPL) on digital audio broadcasting (DAB) via satellite. The thrust of the work to date has been on designing power- and bandwidth-efficient systems capable of providing reliable service to fixed, mobile, and portable radios. It is very difficult to predict performance in an environment that produces random periods of signal blockage, such as is encountered in mobile reception, where a vehicle can quickly move from one type of terrain to another. For this reason, some signal blockage mitigation techniques were built into an experimental DAB system, and a satellite experiment was conducted to obtain both qualitative and quantitative measures of performance in a range of reception environments. This paper presents results from the experiment and some conclusions on the effectiveness of these blockage mitigation techniques.
Photoacoustic microscopy of bilirubin in tissue phantoms
NASA Astrophysics Data System (ADS)
Zhou, Yong; Zhang, Chi; Yao, Da-Kang; Wang, Lihong V.
2012-12-01
Determining both the concentration and the spatial distribution of bilirubin is important in disease diagnosis. Here, for the first time, we applied quantitative multiwavelength photoacoustic microscopy (PAM) to detect bilirubin concentration and distribution simultaneously. By measuring tissue-mimicking phantoms with different bilirubin concentrations, we showed that the root-mean-square error of prediction reached 0.52 mg/dL for pure bilirubin and 0.83 mg/dL for blood-mixed bilirubin detection (with 100% oxygen saturation). We further demonstrated the capability of the PAM system to image bilirubin distribution both with and without blood. Finally, by underlaying bilirubin phantoms with mouse skins, we showed that bilirubin can be imaged with consistent accuracy at depths greater than 400 μm. Our results show that PAM has potential for noninvasive bilirubin monitoring in vivo, as well as for further clinical applications.
Application of Deep Learning in Automated Analysis of Molecular Images in Cancer: A Survey
Xue, Yong; Chen, Shihui; Liu, Yong
2017-01-01
Molecular imaging enables the visualization and quantitative analysis of alterations of biological processes at the molecular and/or cellular level, which is of great significance for the early detection of cancer. In recent years, deep learning has been widely used in medical image analysis, as it overcomes the limitations of visual assessment and traditional machine learning techniques by extracting hierarchical features with powerful representation capability. Research on cancer molecular images using deep learning techniques is also increasing rapidly. Hence, in this paper, we review the applications of deep learning in molecular imaging in terms of tumor lesion segmentation, tumor classification, and survival prediction. We also outline some future directions in which researchers may develop more powerful deep learning models for better performance in applications to cancer molecular imaging. PMID:29114182
Coulomb Blockade in a Two-Dimensional Conductive Polymer Monolayer.
Akai-Kasaya, M; Okuaki, Y; Nagano, S; Mitani, T; Kuwahara, Y
2015-11-06
Electronic transport was investigated in poly(3-hexylthiophene-2,5-diyl) monolayers. At low temperatures, nonlinear behavior was observed in the current-voltage characteristics, and a nonzero threshold voltage appeared that increased with decreasing temperature. The current-voltage characteristics could be best fitted using a power law. These results suggest that the nonlinear conductivity can be explained using a Coulomb blockade (CB) mechanism. A model is proposed in which an isotropic extended charge state exists, as predicted by quantum calculations, and percolative charge transport occurs within an array of small conductive islands. Using quantitatively evaluated capacitance values for the islands, this model was found to be capable of explaining the observed experimental data. It is, therefore, suggested that percolative charge transport based on the CB effect is a significant factor giving rise to nonlinear conductivity in organic materials.
Model studies of crosswind landing-gear configurations for STOL aircraft
NASA Technical Reports Server (NTRS)
Stubbs, S. M.; Byrdsong, T. A.
1973-01-01
A dynamic model was used to directly compare four different crosswind landing gear mechanisms. The model was landed as a free body onto a laterally sloping runway used to simulate a crosswind side force. A radio control system was used for steering to oppose the side force as the model rolled to a stop. The configuration in which the landing gears are aligned by the pilot and locked in the direction of motion prior to touchdown gave the smoothest runout behavior, with the vehicle maintaining its crab angle throughout the landing roll. Nose wheel steering was confirmed to be better than steering with nose and main gears differentially or together. Testing is continuing to obtain quantitative data to establish an experimental data base for validation of an analytical program that will be capable of predicting full-scale results.
Prediction of circulation control performance characteristics for Super STOL and STOL applications
NASA Astrophysics Data System (ADS)
Naqvi, Messam Abbas
The rapid growth of air travel during the last three decades has resulted in runway congestion at major airports. The current airport infrastructure will not be able to support the rapid growth expected in the next decade. Changes or upgrades in infrastructure alone will not satisfy the growth requirements, and new airplane concepts, such as the NASA-proposed Super Short Takeoff and Landing (Super STOL) and Extremely Short Takeoff and Landing (ESTOL), are being vigorously pursued. Aircraft noise pollution during takeoff and landing is another serious concern, and efforts are aimed at reducing the airframe noise produced by conventional high-lift devices. Circulation control technology is a promising alternative for resolving both of these issues. Circulation control airfoils are not only capable of producing very high lift (Cl values in excess of 8.0) at zero angle of attack, but also eliminate the noise generated by conventional high-lift devices, their associated weight penalty, and their complex operation and storage. This ensures not only short takeoff and landing distances but also a minimal acoustic signature, in accordance with FAA requirements. Circulation control relies on the tendency of an emanating wall jet to independently control the circulation and lift on an airfoil. Unlike a conventional airfoil, where the rear stagnation point is located at the sharp trailing edge, a circulation control airfoil has a round trailing edge, so the rear stagnation point is free to move. Its location is controlled by the blown jet momentum. This provides a secondary control, the jet momentum, with which the generated lift can be controlled, rather than only the incidence (angle of attack) available with conventional airfoils. Despite its promising potential, the use of circulation control has been limited to research applications due to the lack of a simple prediction capability. This research effort focused on creating a rapid prediction capability for circulation control aerodynamic characteristics that could provide designers with rapid performance estimates for design space exploration. A morphological matrix was created of the available options for building this prediction capability, ranging from purely analytical physics-based modeling to high-fidelity CFD codes. Based on the applicable constraints and the desired accuracy, meta-models were created around two-dimensional circulation control performance results computed using the Navier-Stokes equations (computational fluid dynamics). DSS2, a two-dimensional RANS code written by Professor Lakshmi Sankar, was used to compute circulation control airfoil characteristics. The CFD code was first applied to the NCCR 1510-7607N airfoil to validate the model against available experimental results. It was then used to compute the results of a fractional factorial design-of-experiments array. Metamodels were formulated by fitting neural networks to the results of the design of experiments. Additional validation runs were performed to verify the model predictions. Metamodels are not only capable of rapid performance prediction, but also help generate response trends with respect to the control variables and capture the complex interactions between them.
Quantitative as well as qualitative assessments of the results were performed by computing aerodynamic forces and moments and by flow field visualization. Three-dimensional wing characteristics were obtained by integration over the whole wing using Prandtl's wing theory. The baseline Super STOL configuration [3] was then analyzed with circulation control technology applied. The lift and drag values required to achieve the target takeoff and landing performance were compared with those of the optimal configurations obtained by the model. The same optimal configurations were then subjected to Super STOL cruise conditions to perform a trade-off analysis between takeoff and cruise performance. Supercritical airfoils modified for circulation control were also thoroughly analyzed for takeoff and cruise performance and may constitute a viable option for Super STOL and STOL designs. The prediction capability produced by this research effort can be integrated with current conceptual aircraft modeling and simulation frameworks. The prediction tool is applicable within the selected ranges of each variable, but the methodology and formulation scheme adopted can be applied to any other design space exploration.
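As a rough illustration of the metamodeling step described above, the following sketch fits a small neural-network surrogate to a tabulated design-of-experiments response. The inputs (blowing momentum coefficient and angle of attack) and the synthetic lift data are hypothetical stand-ins for the DSS2 CFD results, not reproductions of them.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical DoE table: columns are blowing coefficient C_mu and angle of
# attack (deg); the response is a synthetic lift coefficient standing in for CFD.
X = rng.uniform([0.0, -4.0], [0.3, 8.0], size=(200, 2))
cl = 0.11 * X[:, 1] + 20.0 * X[:, 0] - 25.0 * X[:, 0] ** 2 + rng.normal(0, 0.02, 200)

X_train, X_test, y_train, y_test = train_test_split(X, cl, random_state=0)

# Small feed-forward network as the surrogate (metamodel).
surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
surrogate.fit(X_train, y_train)
print(f"surrogate R^2 on held-out DoE points: {surrogate.score(X_test, y_test):.3f}")
```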
Modeling of adipose/blood partition coefficient for environmental chemicals.
Papadaki, K C; Karakitsios, S P; Sarigiannis, D A
2017-12-01
A Quantitative Structure Activity Relationship (QSAR) model was developed to predict the adipose/blood partition coefficient of environmental chemical compounds. The first step of QSAR modeling was the collection of inputs. The input data included experimental values of the adipose/blood partition coefficient and two sets of molecular descriptors for 67 organic chemical compounds: (a) Linear Free Energy Relationship (LFER) descriptors and (b) PaDEL descriptors. The dataset was split into training and prediction sets and was analysed using two statistical methods: Genetic Algorithm-based Multiple Linear Regression (GA-MLR) and Artificial Neural Networks (ANN). The models with LFER and PaDEL descriptors, coupled with ANN, produced satisfactory performance results. The fitting performance (R²) of the models using LFER and PaDEL descriptors was 0.94 and 0.96, respectively. The Applicability Domain (AD) of the models was assessed, and the models were then applied to a large number of chemical compounds with unknown values of the adipose/blood partition coefficient. In conclusion, the proposed models were checked for fit, validity, and applicability. It was demonstrated that they are stable, reliable, and capable of predicting the adipose/blood partition coefficient of "data poor" chemical compounds that fall within the applicability domain. Copyright © 2017. Published by Elsevier Ltd.
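As a schematic of the descriptor-regression step (the MLR half of GA-MLR, without the genetic-algorithm descriptor selection), the sketch below fits and evaluates a linear model on synthetic descriptors; the data are stand-ins, not the study's 67 compounds.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Hypothetical stand-in for the study's data: 67 compounds, a handful of
# LFER-style descriptors, and log adipose/blood partition coefficients.
n_compounds, n_descriptors = 67, 5
X = rng.normal(size=(n_compounds, n_descriptors))
log_k = X @ rng.normal(size=n_descriptors) + rng.normal(0, 0.2, n_compounds)

X_train, X_test, y_train, y_test = train_test_split(
    X, log_k, test_size=0.25, random_state=1)

model = LinearRegression().fit(X_train, y_train)  # the MLR step
print(f"training R2:   {r2_score(y_train, model.predict(X_train)):.2f}")
print(f"prediction R2: {r2_score(y_test, model.predict(X_test)):.2f}")
```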
TRAC-PF1/MOD1 pretest predictions of MIST experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyack, B.E.; Steiner, J.L.; Siebe, D.A.
Los Alamos National Laboratory is a participant in the Integral System Test (IST) program initiated in June 1983 to provide integral system test data on specific issues and phenomena relevant to post small-break loss-of-coolant accidents (SBLOCAs) in Babcock and Wilcox plant designs. The Multi-Loop Integral System Test (MIST) facility is the largest single component in the IST program. During Fiscal Year 1986, Los Alamos performed five MIST pretest analyses. The five experiments were chosen on the basis of their potential either to approach the facility limits or to challenge the predictive capability of the TRAC-PF1/MOD1 code. Three SBLOCA tests were examined, which included nominal test conditions, throttled auxiliary feedwater and asymmetric steam-generator cooldown, and reduced high-pressure-injection (HPI) capacity, respectively. Also analyzed were two "feed-and-bleed" cooling tests with reduced HPI and delayed HPI initiation. Results of the tests showed that the MIST facility limits would not be approached in the five tests considered. Early comparisons with preliminary test data indicate that the TRAC-PF1/MOD1 code is correctly calculating the dominant phenomena occurring in the MIST facility during the tests. Posttest analyses are planned to provide a quantitative assessment of the code's ability to predict MIST transients.
NASA Astrophysics Data System (ADS)
Sistaninia, M.; Phillion, A. B.; Drezet, J.-M.; Rappaz, M.
2011-01-01
As a necessary step toward the quantitative prediction of hot tearing defects, a three-dimensional stress-strain simulation based on a combined finite element (FE)/discrete element method (DEM) has been developed that is capable of predicting the mechanical behavior of semisolid metallic alloys during solidification. The solidification model used for generating the initial solid-liquid structure is based on a Voronoi tessellation of randomly distributed nucleation centers and a solute diffusion model for each element of this tessellation. At a given fraction of solid, the deformation is then simulated with the solid grains modeled using an elastoviscoplastic constitutive law, whereas the remaining liquid layers at grain boundaries are approximated by flexible connectors, each consisting of a spring element and a damper element acting in parallel. The model predictions have been validated against Al-Cu alloy experimental data from the literature. The results show that a combined FE/DEM approach is able to express the overall mechanical behavior of semisolid alloys at the macroscale based on the morphology of the grain structure. For the first time, the localization of strain in the intergranular regions is taken into account. Thus, this approach constitutes an indispensable step toward the development of a comprehensive model of hot tearing.
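The flexible connectors described above, a spring and a damper acting in parallel, correspond to a Kelvin-Voigt element. A minimal sketch of that force law follows, with illustrative parameter values that are not taken from the paper.

```python
def kelvin_voigt_force(k: float, c: float, x: float, x_dot: float) -> float:
    """Force in a spring (stiffness k) and damper (coefficient c) acting in parallel.

    x is the opening of the intergranular liquid layer, x_dot its rate of change.
    """
    return k * x + c * x_dot

# Illustrative values only (N/m, N*s/m, m, m/s).
print(kelvin_voigt_force(k=1.0e6, c=2.0e3, x=1.0e-6, x_dot=5.0e-7))
```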
Predictive QSAR modeling workflow, model applicability domains, and virtual screening.
Tropsha, Alexander; Golbraikh, Alexander
2007-01-01
Quantitative Structure Activity Relationship (QSAR) modeling has traditionally been applied as an evaluative approach, i.e., with the focus on developing retrospective and explanatory models of existing data. Model extrapolation was considered, if at all, only in a hypothetical sense, in terms of potential modifications of known biologically active chemicals that could improve compounds' activity. This critical review re-examines the strategy and the output of modern QSAR modeling approaches. We provide examples and arguments suggesting that current methodologies may afford robust and validated models capable of accurate prediction of compound properties for molecules not included in the training sets. We discuss a data-analytical modeling workflow developed in our laboratory that incorporates modules for combinatorial QSAR model development (i.e., using all possible binary combinations of available descriptor sets and statistical data modeling techniques), rigorous model validation, and virtual screening of available chemical databases to identify novel biologically active compounds. Our approach places particular emphasis on model validation as well as on the need to define model applicability domains in chemistry space. We present examples of studies where the application of rigorously validated QSAR models to virtual screening identified computational hits that were confirmed by subsequent experimental investigations. The emerging focus of QSAR modeling on target property forecasting brings it forward as a predictive, as opposed to evaluative, modeling approach.
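The applicability-domain idea discussed here is often implemented as a distance-to-training-set cutoff in descriptor space. The sketch below uses the mean distance to the k nearest training neighbors with one commonly used cutoff convention; the descriptor matrices are synthetic and the threshold constant is illustrative, not the review's specific recipe.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
train = rng.normal(size=(300, 10))      # training-set descriptor matrix
query = rng.normal(size=(5, 10))        # new molecules to screen

k = 5
nn = NearestNeighbors(n_neighbors=k).fit(train)

# Calibrate the cutoff on the training set itself: mean k-NN distance
# (excluding self-matches), then mean + 1.28 * std as the domain boundary.
d_train, _ = nn.kneighbors(train, n_neighbors=k + 1)
mean_d = d_train[:, 1:].mean(axis=1)
cutoff = mean_d.mean() + 1.28 * mean_d.std()

d_query, _ = nn.kneighbors(query)
inside = d_query.mean(axis=1) <= cutoff
print("within applicability domain:", inside)
```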
Recent advances in the in silico modelling of UDP glucuronosyltransferase substrates.
Sorich, Michael J; Smith, Paul A; Miners, John O; Mackenzie, Peter I; McKinnon, Ross A
2008-01-01
UDP-glucuronosyltransferases (UGT) are a superfamily of enzymes that catalyse the conjugation of a range of structurally diverse drugs, environmental chemicals, and endogenous chemicals with glucuronic acid. This process plays a significant role in the clearance and detoxification of many chemicals. Over the last decade, the regulation and substrate profiles of UGT isoforms have been increasingly characterised. The resulting data have facilitated the prototyping of ligand-based in silico models capable of predicting, and of providing insights into, binding affinity and the substrate- and regioselectivity of glucuronidation by UGT isoforms. Pharmacophore modelling has produced particularly insightful models, and quantitative structure-activity relationships based on machine learning algorithms give accurate predictions. Simple structural chemical descriptors were found to capture much of the chemical information relevant to UGT metabolism. However, quantum chemical properties of molecules and of the nucleophilic atoms in the molecule can enhance both the predictivity and the chemical intuitiveness of structure-activity models. Chemical diversity analysis of known substrates has shown some bias towards chemicals with aromatic and aliphatic hydroxyl groups. Future progress in in silico development will depend on larger and more diverse high-quality metabolic datasets. Furthermore, improved protein structure data on UGTs will enable the application of structural modelling techniques, likely leading to greater insight into the binding and reactive processes of UGT-catalysed glucuronidation.
NASA Technical Reports Server (NTRS)
Horvath, Thomas; Splinter, Scott; Daryabeigi, Kamran; Wood, William; Schwartz, Richard; Ross, Martin
2008-01-01
High resolution calibrated infrared imagery of vehicles during hypervelocity atmospheric entry or sustained hypersonic cruise has the potential to provide flight data on the distribution of surface temperature and the state of the airflow over the vehicle. In the early 1980s, NASA sought to obtain high spatial resolution infrared imagery of the Shuttle during entry. Despite mission execution with a technically rigorous pre-planning capability, the single airborne optical system for this attempt was considered developmental, and the scientific return was marginal. In 2005 the Space Shuttle Program again sponsored an effort to obtain imagery of the Orbiter. Imaging requirements were targeted towards Shuttle ascent; companion requirements for entry did not exist. The engineering community was allowed to define observation goals and incrementally demonstrate key elements of a quantitative, spatially resolved measurement capability over a series of flights. These imaging opportunities were extremely beneficial and clearly demonstrated the capability to capture infrared imagery with mature and operational assets of the US Navy and the Missile Defense Agency. While successful, the usefulness of the imagery was, from an engineering perspective, limited. These limitations were mainly associated with uncertainties regarding operational aspects of data acquisition. These uncertainties, in turn, came about because of limited pre-flight mission planning capability and a poor understanding of several factors, including the infrared signature of the Shuttle, optical hardware limitations, atmospheric effects, and detector response characteristics. Operational details of sensor configuration, such as detector integration time and tracking system algorithms, were carried out ad hoc (best practices), which led to a low probability of target acquisition and to detector saturation. Leveraging the qualified success during Return-to-Flight, the NASA Engineering and Safety Center sponsored an assessment study focused on increasing the probability of returning spatially resolved scientific/engineering thermal imagery. This paper provides an overview of the assessment task and the systematic approach designed to establish confidence in the ability of existing assets to reliably acquire, track, and return global quantitative surface temperatures of the Shuttle during entry. A discussion of capability demonstration in support of a potential Shuttle boundary layer transition flight test is presented. Successful demonstration of a quantitative, spatially resolved, global temperature measurement on the proposed Shuttle boundary layer transition flight test could lead to potential future applications with hypersonic flight test programs within the USAF and DARPA, along with flight test opportunities supporting NASA's Project Constellation.
Realtime Data to Enable Earth-Observing Sensor Web Capabilities
NASA Astrophysics Data System (ADS)
Seablom, M. S.
2015-12-01
Over the past decade NASA's Earth Science Technology Office (ESTO) has invested in new technologies for information systems to enhance the Earth-observing capabilities of satellites, aircraft, and ground-based in situ observations. One focus area has been to create a common infrastructure for coordinated measurements from multiple vantage points which could be commanded either manually or through autonomous means, such as from a numerical model. This paradigm became known as the sensor web, formally defined to be "a coherent set of heterogeneous, loosely-coupled, distributed observing nodes interconnected by a communications fabric that can collectively behave as a single dynamically adaptive and reconfigurable observing system". This would allow for adaptive targeting of rapidly evolving, transient, or variable meteorological features to improve our ability to monitor, understand, and predict their evolution. It would also enable measurements earmarked at critical regions of the atmosphere that are highly sensitive to data analysis errors, thus offering the potential for significant improvements in the predictive skill of numerical weather forecasts. ESTO's investment strategy was twofold. Recognizing that implementation of an operational sensor web would not only involve technical cost and risk but also would require changes to the culture of how flight missions were designed and operated, ESTO funded the development of a mission-planning simulator that would quantitatively assess the added value of coordinated observations. The simulator was designed to provide the capability to perform low-cost engineering and design trade studies using synthetic data generated by observing system simulation experiments (OSSEs). The second part of the investment strategy was to invest in prototype applications that implemented key features of a sensor web, with the dual goals of developing a sensor web reference architecture as well as supporting useful science activities that would produce immediate benefit. We briefly discuss three of ESTO's sensor web projects that resulted from solicitations released in 2008 and 2011: the Earth System Sensor Web Simulator, the Earth Phenomena Observing System, and the Sensor Web 3G Namibia Flood Pilot.
National Centers for Environmental Prediction
Ensemble products and data sources: probabilistic forecasts of quantitative precipitation (PQPF) from the NCEP ensembles, including the North American Ensemble Forecast System, and Predictability Research with Indian Monsoon Examples (PDF, 28 Mar 2005). In the PQPF charts, the probability that 24-hour precipitation amounts exceed given thresholds is shown.
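For context, a PQPF of this kind is commonly computed as the exceedance frequency across ensemble members. The sketch below shows that calculation for a synthetic ensemble, with the grid, member count, and accumulation threshold chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical ensemble: 21 members, 24-h precipitation (mm) on a small grid.
members = rng.gamma(shape=0.8, scale=6.0, size=(21, 50, 50))

threshold_mm = 25.0  # illustrative 24-h accumulation threshold

# PQPF: fraction of members exceeding the threshold at each grid point.
pqpf = (members > threshold_mm).mean(axis=0)
print(f"max probability of >{threshold_mm} mm/24 h on this grid: {pqpf.max():.0%}")
```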
NASA Technical Reports Server (NTRS)
Schoeberl, Mark; Rychekewkitsch, Michael; Andrucyk, Dennis; McConaughy, Gail; Meeson, Blanche; Hildebrand, Peter; Einaudi, Franco (Technical Monitor)
2000-01-01
NASA's Earth Science Enterprise's long-range vision is to enable the development of a national proactive environmental predictive capability through targeted scientific research and technological innovation. Proactive environmental prediction means the prediction of environmental events and their secondary consequences. These consequences range from disasters and disease outbreaks to improved food production and reduced transportation, energy, and insurance costs. The economic advantage of this predictive capability will greatly outweigh the cost of development. Developing this predictive capability requires a greatly improved understanding of the earth system and of the interaction of the various components of that system. It also requires a change in our approach to gathering data about the earth and a change in our current methodology for processing that data, including its delivery to customers. Most importantly, it requires a renewed partnership between NASA and its sister agencies. We identify six application themes that summarize the potential of proactive environmental prediction. We also identify four technology themes that articulate our approach to implementing proactive environmental prediction.
Chu, Felicia W.; vanMarle, Kristy; Geary, David C.
2016-01-01
One hundred children (44 boys) participated in a 3-year longitudinal study of the development of basic quantitative competencies and the relation between these competencies and later mathematics and reading achievement. The children's preliteracy knowledge, intelligence, executive functions, and parental educational background were also assessed. The quantitative tasks assessed a broad range of symbolic and nonsymbolic knowledge and were administered four times across 2 years of preschool. Mathematics achievement was assessed at the end of each of 2 years of preschool, and mathematics and word reading achievement were assessed at the end of kindergarten. Our goals were to determine how domain-general abilities contribute to growth in children's quantitative knowledge and to determine how domain-general and domain-specific abilities contribute to children's preschool mathematics achievement and kindergarten mathematics and reading achievement. We first identified four core quantitative competencies (e.g., knowledge of the cardinal value of number words) that predict later mathematics achievement. The domain-general abilities were then used to predict growth in these competencies across 2 years of preschool, and the combination of domain-general abilities, preliteracy skills, and core quantitative competencies were used to predict mathematics achievement across preschool and mathematics and word reading achievement at the end of kindergarten. Both intelligence and executive functions predicted growth in the four quantitative competencies, especially across the first year of preschool. A combination of domain-general and domain-specific competencies predicted preschoolers' mathematics achievement, with a trend for domain-specific skills to be more strongly related to achievement at the beginning of preschool than at the end of preschool. Preschool preliteracy skills, sensitivity to the relative quantities of collections of objects, and cardinal knowledge predicted reading and mathematics achievement at the end of kindergarten. Preliteracy skills were more strongly related to word reading, whereas sensitivity to relative quantity was more strongly related to mathematics achievement. The overall results indicate that a combination of domain-general and domain-specific abilities contribute to development of children's early mathematics and reading achievement. PMID:27252675
NASA Astrophysics Data System (ADS)
Chou, Shuo-Ju
2011-12-01
In recent years the United States has shifted from a threat-based acquisition policy that developed systems for countering specific threats to a capabilities-based strategy that emphasizes the acquisition of systems providing critical national defense capabilities. This shift in policy, in theory, allows for the creation of an "optimal force" that is robust against current and future threats regardless of the tactics and scenario involved. In broad terms, robustness can be defined as the insensitivity of an outcome to "noise," or non-controlled variables. Within this context, the outcome is the successful achievement of defense strategies, and the noise variables are the tactics and scenarios that will be associated with current and future enemies. Unfortunately, a lack of system capability, budget, and schedule robustness against technology performance and development uncertainties has led to major setbacks in recent acquisition programs. This lack of robustness stems from the fact that immature technologies have uncertainties in their expected performance, development cost, and schedule that cause variations in system effectiveness and in program development budget and schedule requirements. The Technology Readiness Assessment (TRA) process currently used by acquisition program managers and decision-makers to measure technology uncertainty at critical program decision junctions does not adequately capture the impact of technology performance and development uncertainty on program capability and development metrics. The Technology Readiness Level metric employed by the TRA to describe the uncertainties of program technology elements provides only a qualitative, non-descript estimate of those uncertainties. In order to assess program robustness, specifically requirements robustness, against technology performance and development uncertainties, a new process is needed. This process should provide acquisition program managers and decision-makers with the ability to assess or measure the robustness of program requirements against such uncertainties. A literature review of techniques for forecasting technology performance and development uncertainties, and their subsequent impacts on capability, budget, and schedule requirements, led to the conclusion that an analysis process coupling a probabilistic technique such as Monte Carlo simulation with quantitative, parametric models of technology performance impact and of technology development time and cost requirements would allow the probabilities of meeting specific constraints on these requirements to be established. These probability-of-requirements-success metrics can then be used as a quantitative, probabilistic measure of program requirements robustness against technology uncertainties. Combined with a multi-objective genetic algorithm optimization process and a computer-based decision support system, critical information regarding requirements robustness against technology uncertainties can be captured and quantified for acquisition decision-makers. This results in a more informed and justifiable selection of program technologies during initial program definition, as well as in the formulation of program development and risk management strategies.
To meet the stated research objective, the ENhanced TEchnology Robustness Prediction and RISk Evaluation (ENTERPRISE) methodology was formulated to provide a structured and transparent process for integrating these enabling techniques into a probabilistic and quantitative assessment of acquisition program requirements robustness against technology performance and development uncertainties. To demonstrate the capabilities of the ENTERPRISE method and test the research hypotheses, a demonstration application of the method was performed on a notional program for acquiring a carrier-based Suppression of Enemy Air Defenses (SEAD) capability using Unmanned Combat Aircraft Systems (UCAS) and their enabling technologies. The results of this implementation provided valuable insights regarding the benefits and inner workings of the methodology, as well as its limitations, which should be addressed in the future to narrow the gap between the current state and the desired state.
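As a toy illustration of the probability-of-requirements-success metric, the sketch below Monte Carlo samples two invented technology uncertainties through a made-up capability model and reports the fraction of samples meeting a range requirement; none of the distributions or numbers come from the ENTERPRISE study.

```python
import numpy as np

rng = np.random.default_rng(4)
n_samples = 100_000

# Invented technology uncertainties: engine efficiency factor and mass growth.
efficiency = rng.triangular(0.85, 0.95, 1.00, n_samples)   # fraction of nominal
mass_growth = rng.normal(1.08, 0.05, n_samples)            # multiplier on dry mass

# Toy capability model: achieved range scales with efficiency, inversely with mass.
nominal_range_nm = 1200.0
achieved_range = nominal_range_nm * efficiency / mass_growth

requirement_nm = 1000.0
p_success = (achieved_range >= requirement_nm).mean()
print(f"P(range requirement met) = {p_success:.1%}")  # requirements-robustness metric
```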
A review on principles, theory and practices of 2D-QSAR.
Roy, Kunal; Das, Rudra Narayan
2014-01-01
The central axiom of science purports the explanation of every natural phenomenon using all possible logic coming from pure as well as mixed scientific backgrounds. Quantitative structure-activity relationship (QSAR) analysis is a study correlating the behavioral manifestation of compounds with their structures, employing the interdisciplinary knowledge of chemistry, mathematics, biology, and physics. Several studies have attempted to mathematically correlate the chemistry and the properties (physicochemical/biological/toxicological) of molecules using various computationally or experimentally derived quantitative parameters termed descriptors. The dimensionality of the descriptors depends on the type of algorithm employed and defines the nature of the QSAR analysis. The most interesting feature of predictive QSAR models is that the behavior of any new or even hypothesized molecule can be predicted by use of the mathematical equations. The phrase "2D-QSAR" signifies the development of QSAR models using 2D descriptors. Such predictor variables are the most widely practised ones because of their simple and direct mathematical algorithmic nature, involving no time-consuming energy computations, and their reproducible operability. 2D descriptors have contributed enormously to extracting chemical attributes, and they are also capable of representing 3D molecular features to some extent; although they should in no case be considered the ultimate descriptors, since they often suffer from problems of intercorrelation, insufficient chemical information, and lack of interpretability. However, by following rational approaches, novel 2D descriptors may be developed to obviate various existing problems, giving potential 2D-QSAR equations and thereby helping to solve the innumerable chemical mysteries still unexplored.
Creasy, John M; Midya, Abhishek; Chakraborty, Jayasree; Adams, Lauryn B; Gomes, Camilla; Gonen, Mithat; Seastedt, Kenneth P; Sutton, Elizabeth J; Cercek, Andrea; Kemeny, Nancy E; Shia, Jinru; Balachandran, Vinod P; Kingham, T Peter; Allen, Peter J; DeMatteo, Ronald P; Jarnagin, William R; D'Angelica, Michael I; Do, Richard K G; Simpson, Amber L
2018-06-19
This study investigates whether quantitative image analysis of pretreatment CT scans can predict volumetric response to chemotherapy for patients with colorectal liver metastases (CRLM). Patients treated with chemotherapy for CRLM (hepatic artery infusion (HAI) combined with systemic chemotherapy, or systemic chemotherapy alone) were included in the study. Patients were imaged at baseline and approximately 8 weeks after treatment. Response was measured as the percentage change in tumour volume from baseline. Quantitative imaging features were derived from the index hepatic tumour on pretreatment CT, and features statistically significant on univariate analysis were included in a linear regression model to predict volumetric response. The regression model was constructed from 70% of the data, while 30% were reserved for testing. Test data were input into the trained model. Model performance was evaluated with the mean absolute prediction error (MAPE) and R². Clinicopathologic factors were assessed for correlation with response. A total of 157 patients were included, split into training (n = 110) and validation (n = 47) sets. MAPE from the multivariate linear regression model was 16.5% (R² = 0.774) and 21.5% in the training and validation sets, respectively. Stratified by HAI utilisation, MAPE in the validation set was 19.6% for HAI and 25.1% for systemic chemotherapy alone. Clinical factors associated with differences in median tumour response were treatment strategy, systemic chemotherapy regimen, age, and KRAS mutation status (p < 0.05). Quantitative imaging features extracted from pretreatment CT are promising predictors of volumetric response to chemotherapy in patients with CRLM. Pretreatment predictors of response have the potential to better select patients for specific therapies. • Colorectal liver metastases (CRLM) are downsized with chemotherapy, but predicting which patients will respond is currently not possible. • Heterogeneity and enhancement patterns of CRLM can be measured with quantitative imaging. • A prediction model was constructed that predicts volumetric response with approximately 20% error, suggesting that quantitative imaging holds promise to better select patients for specific treatments.
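A minimal sketch of the 70/30 split and error metric described above, with synthetic features standing in for the CT-derived ones; the MAPE here is the mean absolute error of the predicted percentage volume change, one plausible reading of the metric.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Synthetic stand-ins: 157 patients, 6 pretreatment image features,
# response = % change in tumour volume from baseline.
X = rng.normal(size=(157, 6))
response = X @ rng.normal(size=6) * 10.0 + rng.normal(0, 8.0, 157)

X_tr, X_va, y_tr, y_va = train_test_split(X, response, test_size=0.30, random_state=5)

model = LinearRegression().fit(X_tr, y_tr)

def mape_points(y_true, y_pred):
    """Mean absolute prediction error, in percentage points of volume change."""
    return np.abs(y_true - y_pred).mean()

print(f"training MAPE:   {mape_points(y_tr, model.predict(X_tr)):.1f}%")
print(f"validation MAPE: {mape_points(y_va, model.predict(X_va)):.1f}%")
```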
NASA Astrophysics Data System (ADS)
Min, Junwei; Yao, Baoli; Ketelhut, Steffi; Kemper, Björn
2017-02-01
The modular combination of optical microscopes with digital holographic microscopy (DHM) has been proven to be a powerful tool for quantitative live cell imaging. The introduction of a condenser and different microscope objectives (MO) simplifies the usage of the technique and makes it easier to measure different kinds of specimens at different magnifications. However, the high flexibility of illumination and imaging also causes variable phase aberrations that need to be eliminated for high resolution quantitative phase imaging. Existing phase aberration compensation methods either require adding additional elements to the reference arm or need specimen-free reference areas or separate reference holograms to build suitable digital phase masks. These inherent requirements make them impractical for use with highly variable illumination and imaging systems and prevent on-line monitoring of living cells. In this paper, we present a simple numerical method for phase aberration compensation based on the analysis of holograms in the spatial frequency domain, with capabilities for on-line quantitative phase imaging. From a single-shot off-axis hologram, the whole phase aberration can be eliminated automatically without numerical fitting or pre-knowledge of the setup. The capabilities and robustness of the method for quantitative phase imaging of living cancer cells are demonstrated.
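For context, the sketch below shows the generic spatial-frequency-domain step that such methods build on: isolating one sideband order of a single off-axis hologram and centering it, after which slowly varying residual aberrations can be estimated and removed. It is a textbook-style illustration, not the paper's specific algorithm; the window size, peak search, and synthetic demo are simplifications that assume the carrier is well separated from the zero order and the spectrum edges.

```python
import numpy as np

def demodulate_off_axis(hologram: np.ndarray, win: int = 64) -> np.ndarray:
    """Recover the wrapped phase from a single off-axis hologram.

    Spectrum centering: isolate one sideband order in the Fourier domain,
    shift it to the DC position, and inverse-transform.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))
    ny, nx = spectrum.shape

    # Search for a sideband peak in the upper half-plane, excluding a band
    # around DC where the zero order dominates.
    search = np.abs(spectrum).copy()
    search[ny // 2 - win:, :] = 0
    py, px = np.unravel_index(np.argmax(search), search.shape)

    # Crop a window around the sideband and place it at the spectrum center.
    order = spectrum[py - win // 2:py + win // 2, px - win // 2:px + win // 2]
    centered = np.zeros_like(spectrum)
    cy, cx = ny // 2, nx // 2
    centered[cy - win // 2:cy + win // 2, cx - win // 2:cx + win // 2] = order

    field = np.fft.ifft2(np.fft.ifftshift(centered))
    return np.angle(field)  # wrapped phase; residual aberrations still present

# Synthetic demo: flat object under a tilted reference beam, so the
# recovered phase should be approximately constant.
y, x = np.mgrid[:256, :256]
holo = np.abs(1 + np.exp(1j * 2 * np.pi * (40 / 256 * x + 50 / 256 * y))) ** 2
phase = demodulate_off_axis(holo)
print(f"residual phase std: {phase.std():.3f} rad")
```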
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Pryputniewicz, Ryszard J.
2002-06-01
Recent technological trends based on the miniaturization of mechanical, electro-mechanical, and photonic devices to the microscopic scale have led to the development of microelectromechanical systems (MEMS). Effective development of MEMS components requires the synergism of advanced design, analysis, and fabrication methodologies, and also of quantitative metrology techniques for characterizing their performance, reliability, and integrity during the electronic packaging cycle. In this paper, we describe opto-electronic techniques for measuring, with sub-micrometer accuracy, the shape and changes in the state of deformation of MEMS structures. With the described opto-electronic techniques, it is possible to characterize MEMS components using display and data modes. In the display mode, interferometric information related to shape and deformation is displayed at video frame rates, providing the capability for adjusting and setting experimental conditions. In the data mode, interferometric information related to shape and deformation is recorded as high-spatial-resolution, high-digital-resolution images, which are further processed to provide quantitative 3D information. Furthermore, the quantitative 3D data are exported to computer-aided design (CAD) environments and utilized for analysis and optimization of MEMS devices. The capabilities of opto-electronic techniques are illustrated with representative applications demonstrating their ability to provide indispensable quantitative information for the effective development and optimization of MEMS devices.
Computational aeroelasticity using a pressure-based solver
NASA Astrophysics Data System (ADS)
Kamakoti, Ramji
A computational methodology for performing fluid-structure interaction computations for three-dimensional elastic wing geometries is presented. The flow solver used is based on an unsteady Reynolds-Averaged Navier-Stokes (RANS) model. A well-validated k-ε turbulence model with wall function treatment for the near-wall region was used to perform turbulent flow calculations. The relative merits of alternative flow solvers were investigated. The predictor-corrector-based Pressure Implicit Splitting of Operators (PISO) algorithm was found to be computationally economical for unsteady flow computations. The wing structure was modeled using Bernoulli-Euler beam theory. A fully implicit time-marching scheme (using the Newmark integration method) was used to integrate the equations of motion for the structure. Bilinear interpolation and linear extrapolation techniques were used to transfer the necessary information between the fluid and structure solvers. Geometry deformation was accounted for by using a moving boundary module. The moving grid capability was based on a master/slave concept and transfinite interpolation techniques. Since computations were performed on a moving mesh system, the geometric conservation law must be preserved. This is achieved by appropriately evaluating the Jacobian values associated with each cell. Accurate computation of contravariant velocities for unsteady flows using the momentum interpolation method on collocated, curvilinear grids was also addressed. Flutter computations were performed for the AGARD 445.6 wing at subsonic, transonic, and supersonic Mach numbers. Unsteady computations were performed at various dynamic pressures to predict the flutter boundary. Results showed favorable agreement with experiment and previous numerical results. The computational methodology exhibited the capability to predict both qualitative and quantitative features of aeroelasticity.
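The Newmark scheme mentioned above advances structural equations of the form m·q″ + c·q′ + k·q = f(t) one step at a time. A minimal constant-average-acceleration (beta = 1/4, gamma = 1/2) sketch for a single mode is given below, with made-up modal constants rather than the AGARD 445.6 wing properties.

```python
def newmark_step(m, c, k, f_next, q, v, a, dt, beta=0.25, gamma=0.5):
    """One step of the Newmark-beta method for m*q'' + c*q' + k*q = f(t)."""
    # Effective stiffness and load at t + dt (standard implicit form).
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    f_eff = (f_next
             + m * (q / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
             + c * (gamma / (beta * dt) * q + (gamma / beta - 1) * v
                    + dt * (gamma / (2 * beta) - 1) * a))
    q_next = f_eff / k_eff
    a_next = (q_next - q) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
    v_next = v + dt * ((1 - gamma) * a + gamma * a_next)
    return q_next, v_next, a_next

# Made-up modal constants: unit mass, light damping, free decay from q0 = 1.
m, c, k, dt = 1.0, 0.05, 4.0, 0.01
q, v = 1.0, 0.0
a = (0.0 - c * v - k * q) / m
for _ in range(1000):
    q, v, a = newmark_step(m, c, k, 0.0, q, v, a, dt)
print(f"displacement after 10 s of free decay: {q:.4f}")
```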
2013-01-01
Background: Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results: We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions: It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
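As a rough illustration of the kernel machinery compared in this study, the sketch below builds a Gaussian kernel from synthetic SNP genotypes (coded 0/1/2) and performs plain kernel ridge regression; the bandwidth, regularization, and data are invented, and the paper's Bayesian treatment and diffusion kernel are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic genotypes (0/1/2 minor-allele counts) and phenotypes.
n, p = 200, 200
G = rng.integers(0, 3, size=(n, p)).astype(float)
y = G[:, :20] @ rng.normal(size=20) * 0.3 + rng.normal(0, 1.0, n)

def gaussian_kernel(A, B, theta):
    """K[i, j] = exp(-||a_i - b_j||^2 / theta)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / theta)

theta = p * 2.0          # bandwidth scaled to the number of markers
lam = 1.0                # ridge regularization

train, test = np.arange(150), np.arange(150, 200)
K = gaussian_kernel(G[train], G[train], theta)
alpha = np.linalg.solve(K + lam * np.eye(len(train)), y[train])

y_hat = gaussian_kernel(G[test], G[train], theta) @ alpha
print(f"predictive correlation: {np.corrcoef(y_hat, y[test])[0, 1]:.2f}")
```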
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valerio, Luis G.; Arvidson, Kirk B.; Chanderbhan, Ronald F.
2007-07-01
Consistent with the U.S. Food and Drug Administration (FDA) Critical Path Initiative, predictive toxicology software programs employing quantitative structure-activity relationship (QSAR) models are currently under evaluation for regulatory risk assessment and scientific decision support for highly sensitive endpoints such as carcinogenicity, mutagenicity, and reproductive toxicity. At the FDA's Center for Food Safety and Applied Nutrition's Office of Food Additive Safety and the Center for Drug Evaluation and Research's Informatics and Computational Safety Analysis Staff (ICSAS), the use of computational SAR tools for both qualitative and quantitative risk assessment applications is being developed and evaluated. One tool of current interest is MDL-QSAR predictive discriminant analysis modeling of rodent carcinogenicity, which has been previously evaluated for pharmaceutical applications by the FDA ICSAS. The study described in this paper aims to evaluate the utility of this software to estimate the carcinogenic potential of small, organic, naturally occurring chemicals found in the human diet. In addition, a group of 19 known synthetic dietary constituents that were positive in rodent carcinogenicity studies served as a control group. In the test group of naturally occurring chemicals, 101 were found to be suitable for predictive modeling using this software's discriminant analysis modeling approach. Predictions performed on these compounds were compared to published experimental evidence of each compound's carcinogenic potential. Experimental evidence included relevant toxicological studies such as rodent cancer bioassays, rodent anti-carcinogenicity studies, genotoxicity studies, and the presence of chemical structural alerts. Statistical indices of predictive performance were calculated to assess the utility of the predictive modeling method. Results revealed good predictive performance for this software's rodent carcinogenicity module, whose training set of over 1200 chemicals, comprised primarily of pharmaceutical and industrial compounds and some natural products, was developed under an FDA-MDL cooperative research and development agreement (CRADA). The predictive performance for this group of dietary natural products and the control group was 97% sensitivity and 80% concordance. Specificity was marginal at 53%. This study finds that in silico QSAR analysis employing this software's rodent carcinogenicity database is capable of identifying the rodent carcinogenic potential of naturally occurring organic molecules found in the human diet with a high degree of sensitivity. It is the first study to demonstrate successful QSAR predictive modeling of naturally occurring carcinogens found in the human diet using an external validation test. Further validation of this software and expansion of the training data set for dietary chemicals will help to support the future use of such QSAR methods for screening and prioritizing the risk of dietary chemicals when actual animal data are inadequate, equivocal, or absent.
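The sensitivity, specificity, and concordance figures quoted above reduce to counts over a 2x2 confusion table. A minimal sketch, with hypothetical prediction and outcome vectors rather than the study's data:

```python
import numpy as np

def classification_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Sensitivity, specificity, and concordance for binary carcinogenicity calls."""
    tp = int(((y_true == 1) & (y_pred == 1)).sum())
    tn = int(((y_true == 0) & (y_pred == 0)).sum())
    fp = int(((y_true == 0) & (y_pred == 1)).sum())
    fn = int(((y_true == 1) & (y_pred == 0)).sum())
    return {
        "sensitivity": tp / (tp + fn),           # positives correctly flagged
        "specificity": tn / (tn + fp),           # negatives correctly cleared
        "concordance": (tp + tn) / len(y_true),  # overall agreement
    }

# Hypothetical calls for 120 compounds (1 = rodent carcinogen).
rng = np.random.default_rng(7)
truth = rng.integers(0, 2, 120)
pred = np.where(rng.random(120) < 0.85, truth, 1 - truth)  # ~85% agreement
print(classification_metrics(truth, pred))
```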
Fu, Yong-Bi; Yang, Mo-Hua; Zeng, Fangqin; Biligetu, Bill
2017-01-01
Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed theoretical reasoning on the effectiveness of these markers in trait-specific prediction and verified the reasoning through computer simulation. In the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. Continuous search for better alternatives is encouraged to enhance marker-based predictions for individual quantitative traits in molecular plant breeding. PMID:28729875
Reproducibility and Prognosis of Quantitative Features Extracted from CT Images12
Balagurunathan, Yoganand; Gu, Yuhua; Wang, Hua; Kumar, Virendra; Grove, Olya; Hawkins, Sam; Kim, Jongphil; Goldgof, Dmitry B; Hall, Lawrence O; Gatenby, Robert A; Gillies, Robert J
2014-01-01
We study the reproducibility of quantitative imaging features that are used to describe tumor shape, size, and texture from computed tomography (CT) scans of non-small cell lung cancer (NSCLC). CT images are dependent on various scanning factors. We focus on characterizing image features that are reproducible in the presence of variations due to patient factors and segmentation methods. Thirty-two NSCLC nonenhanced lung CT scans were obtained from the Reference Image Database to Evaluate Response data set. The tumors were segmented using both manual (radiologist expert) and ensemble (software-automated) methods. A set of features (219 three-dimensional and 110 two-dimensional) was computed, and quantitative image features were statistically filtered to identify a subset of reproducible and nonredundant features. The variability in the repeated experiment was measured by the test-retest concordance correlation coefficient (CCC_TreT). The natural range in the features, normalized to variance, was measured by the dynamic range (DR). In this study, there were 29 features across segmentation methods found with CCC_TreT and DR ≥ 0.9 and R²_Bet ≥ 0.95. These reproducible features were tested for predicting radiologist prognostic score; some texture features (run-length and Laws kernels) had an area under the curve of 0.9. The representative features were tested for their prognostic capabilities using an independent NSCLC data set (59 lung adenocarcinomas), where one of the texture features, run-length gray-level nonuniformity, was statistically significant in separating the samples into survival groups (P ≤ .046). PMID:24772210
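A minimal sketch of the test-retest filter described above is Lin's concordance correlation coefficient (CCC), applied to the same feature measured on repeat scans; the 0.9 threshold follows the text, but the feature values below are placeholders (the paper's DR criterion is applied analogously and is omitted here).

```python
# Lin's concordance correlation coefficient (CCC) between test and retest
# measurements of one image feature. Data are synthetic placeholders.
import numpy as np

def ccc(x, y):
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

rng = np.random.default_rng(1)
x = rng.normal(10, 3, 32)          # feature at test (32 scans, as in the study)
y = x + rng.normal(0, 0.5, 32)     # same feature at retest

print(f"CCC={ccc(x, y):.3f}, reproducible={ccc(x, y) >= 0.9}")
```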
Fan, Lihua; Shuai, Jiangbing; Zeng, Ruoxue; Mo, Hongfei; Wang, Suhua; Zhang, Xiaofeng; He, Yongqiang
2017-12-01
The genome fragment enrichment (GFE) method was applied to identify host-specific bacterial genetic markers that differ among different fecal metagenomes. To enrich for swine-specific DNA fragments, a swine fecal DNA composite (n = 34) was challenged against a DNA composite consisting of cow, human, goat, sheep, chicken, duck and goose fecal DNA extracts (n = 83). Bioinformatic analyses of 384 non-redundant swine-enriched metagenomic sequences indicated a preponderance of Bacteroidales-like regions predicted to encode metabolism-associated, cellular processes and information storage and processing functions. When challenged against fecal DNA extracted from different animal sources, four sequences from the clone libraries, two Bacteroidales-like (genes 1-38 and 3-53), one Clostridia-like (gene 2-109) and one Bacilli-like (gene 2-95), showed high specificity to swine feces based on PCR analysis. Host-specificity and host-sensitivity analysis confirmed that oligonucleotide primers and probes capable of annealing to select Bacteroidales-like sequences (1-38 and 3-53) exhibited high specificity (>90%) in quantitative PCR assays with 71 fecal DNAs from non-target animal sources. The two assays also demonstrated broad distributions of corresponding genetic markers (>94% positive) among 72 swine feces. After evaluation with environmental water samples from different areas, swine-targeted assays based on two Bacteroidales-like GFE sequences appear to be suitable quantitative tracing tools for swine fecal pollution. Copyright © 2017 Elsevier Ltd. All rights reserved.
Quantitative prediction of phase transformations in silicon during nanoindentation
NASA Astrophysics Data System (ADS)
Zhang, Liangchi; Basak, Animesh
2013-08-01
This paper establishes the first quantitative relationship between the phases transformed in silicon and the shape characteristics of nanoindentation curves. Based on an integrated analysis using TEM and the unit cell properties of the phases, the volumes of the phases that emerge in a nanoindentation are formulated as a function of the pop-out size and the depth of the nanoindentation impression. This simple formula enables a fast, accurate and quantitative prediction of the phases in a nanoindentation cycle, which had not previously been possible.
Quantitative analysis of single-molecule superresolution images
Coltharp, Carla; Yang, Xinxing; Xiao, Jie
2014-01-01
This review highlights the quantitative capabilities of single-molecule localization-based superresolution imaging methods. In addition to revealing fine structural details, the molecule coordinate lists generated by these methods provide the critical ability to quantify the number, clustering, and colocalization of molecules with 10 – 50 nm resolution. Here we describe typical workflows and precautions for quantitative analysis of single-molecule superresolution images. These guidelines include potential pitfalls and essential control experiments, allowing critical assessment and interpretation of superresolution images. PMID:25179006
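As one concrete example of the cluster-quantification workflow this review covers, the sketch below applies DBSCAN to a synthetic molecule coordinate list and counts localizations per cluster. The coordinates and the clustering parameters (eps, min_samples) are arbitrary placeholders, not values recommended by the review.

```python
# Cluster a (synthetic) superresolution coordinate list and count molecules
# per cluster. Two clusters plus uniform background localizations, in nm.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
pts = np.vstack([rng.normal(0, 20, (50, 2)),
                 rng.normal(300, 20, (40, 2)),
                 rng.uniform(-200, 500, (30, 2))])

labels = DBSCAN(eps=40, min_samples=5).fit_predict(pts)  # label -1 = noise
n_clusters = labels.max() + 1
counts = [(labels == k).sum() for k in range(n_clusters)]
print(f"{n_clusters} clusters, localizations per cluster: {counts}")
```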
Tranca, D. E.; Stanciu, S. G.; Hristu, R.; Stoichita, C.; Tofail, S. A. M.; Stanciu, G. A.
2015-01-01
A new method for high-resolution quantitative measurement of the dielectric function by using scattering scanning near-field optical microscopy (s-SNOM) is presented. The method is based on a calibration procedure that uses the s-SNOM oscillating dipole model of the probe-sample interaction and quantitative s-SNOM measurements. The nanoscale capabilities of the method have the potential to enable novel applications in various fields such as nano-electronics, nano-photonics, biology or medicine. PMID:26138665
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutmacher, R.; Crawford, R.
This comprehensive guide to the analytical capabilities of Lawrence Livermore Laboratory's General Chemistry Division describes each analytical method in terms of its principle, field of application, and qualitative and quantitative uses. Also described are the state and quantity of sample required for analysis, processing time, available instrumentation, and responsible personnel.
The adverse outcome pathway (AOP) framework can be used to support the use of mechanistic toxicology data as a basis for risk assessment. For certain risk contexts this includes defining, quantitative linkages between the molecular initiating event (MIE) and subsequent key events...
Studying Biology to Understand Risk: Dosimetry Models and Quantitative Adverse Outcome Pathways
Confidence in the quantitative prediction of risk is increased when the prediction is based to as great an extent as possible on the relevant biological factors that constitute the pathway from exposure to adverse outcome. With the first examples now over 40 years old, physiologi...
Dosimetry Modeling of Inhaled Formaldehyde: Binning Nasal Flux Predictions for Quantitative Risk Assessment. Kimbell, J.S., Overton, J.H., Subramaniam, R.P., Schlosser, P.M., Morgan, K.T., Conolly, R.B., and Miller, F.J. (2001). Toxicol. Sci. 000, 000:000.
Interspecies e...
The present study explores the merit of utilizing available pharmaceutical data to construct a quantitative structure-activity relationship (QSAR) for prediction of the fraction of a chemical unbound to plasma protein (Fub) in environmentally relevant compounds. Independent model...
Energy-absorption capability of composite tubes and beams. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Farley, Gary L.; Jones, Robert M.
1989-01-01
In this study the objective was to develop a method of predicting the energy-absorption capability of composite subfloor beam structures. Before it is possible to develop such an analysis capability, an in-depth understanding of the crushing process of composite materials must be achieved. Many variables affect the crushing process of composite structures, such as the constituent materials' mechanical properties, specimen geometry, and crushing speed. A comprehensive experimental evaluation of tube specimens was conducted to develop insight into how composite structural elements crush and to identify the controlling mechanisms. In this study the four characteristic crushing modes (transverse shearing, brittle fracturing, lamina bending, and local buckling) were identified and the mechanisms that control the crushing process were defined. An in-depth understanding was developed of how material properties affect energy-absorption capability. For example, an increase in fiber and matrix stiffness and failure strain can, depending upon the configuration of the tube, increase energy-absorption capability. An analysis to predict the energy-absorption capability of composite tube specimens was developed and verified. Good agreement between experiment and prediction was obtained.
A Quantitative Model of Expert Transcription Typing
1993-03-08
side of pure psychology, several researchers have argued that transcription typing is a particularly good activity for the study of human skilled...phenomenon with a quantitative METT prediction. The first, quick and dirty analysis gives a good prediction of the copy span, in fact, it is even...typing, it should be demonstrated that the mechanism of the model does not get in the way of good predictions. If situations occur where the entire
Measurement with microscopic MRI and simulation of flow in different aneurysm models.
Edelhoff, Daniel; Walczak, Lars; Frank, Frauke; Heil, Marvin; Schmitz, Inge; Weichert, Frank; Suter, Dieter
2015-10-01
The impact and the development of aneurysms depend to a significant degree on the exchange of liquid between the regular vessel and the pathological extension. A better understanding of this process will lead to improved prediction capabilities. The aim of the current study was to investigate fluid exchange in aneurysm models of different complexities by combining microscopic magnetic resonance measurements with numerical simulations. In order to evaluate the accuracy and applicability of these methods, the fluid-exchange process between the unaltered vessel lumen and the aneurysm phantoms was analyzed quantitatively using high spatial resolution. Magnetic resonance flow imaging was used to visualize fluid exchange in two different models produced with a 3D printer. One model of an aneurysm was based on histological findings. The flow distribution in the different models was measured on a microscopic scale using time-of-flight magnetic resonance imaging. The whole experiment was simulated using fast graphics-processing-unit-based numerical simulations. The obtained simulation results were compared qualitatively and quantitatively with the magnetic resonance imaging measurements, taking into account flow and spin-lattice relaxation. The results of both methods agreed well for the aneurysm models and flow distributions used. The fluid-exchange analysis showed comparable characteristics between measurement and simulation, and similar symmetry behavior was observed. Based on these results, the amount of fluid exchange was calculated. Depending on the geometry of the models, 7% to 45% of the liquid was exchanged per second. The results of the numerical simulations coincide well with the experimentally determined velocity field. The rate of fluid exchange between vessel and aneurysm was well predicted. Hence, the results obtained by simulation could be validated by the experiment. The observed deviations can be caused by noise in the measurement and by the limited resolution of the simulation. The resulting differences are small enough to allow reliable predictions of the flow distribution in vessels with stents and for pulsed blood flow.
Escrig-Doménech, Aarón; Simó-Alfonso, Ernesto F; Ramis-Ramos, Guillermo
2016-08-17
A method for the simultaneous determination in cleaning products of the most frequently used surfactant families, linear alkyl benzenesulphonates (LAS), alkyl ether sulphates (AES), fatty alcohol ethoxylates (FAE) and oleins (soaps, fatty acid salts), has been developed. The common reversed-phase octyl (C8), pentafluorophenyl and biphenyl columns were not capable of separating the anionic LAS and AES classes; however, since only LAS absorbs in the UV, these two classes were independently quantified using a C8 column and serially connected UV and ELSD detection. The best compromise to resolve the four surfactant classes and the oligomers within the classes was achieved with a C8 column and an ACN/water gradient. To enhance retention of the anionic surfactants, ammonium acetate was used as an ion-pairing agent compatible with ELSD detection. Also, to shift the olein peaks with respect to those of the FAE oligomers, acetic acid was used. In the optimized method, the mobile phase was modulated, using ammonium acetate during elution of LAS and AES, and acetic acid after their elution. Quantitation of the overlapping LAS and AES classes was achieved by using the UV detector to quantitate LAS and the ELSD to determine AES by difference. Accuracy in the determination of AES was achieved by using a quadratic model, and by correcting the predicted AES concentration according to the LAS concentration previously established using the UV chromatogram. Another approach also leading to accurate predictions of the AES concentration was to increase the AES concentrations in the samples by adding a standard solution. In the samples reinforced with AES, correction of the predicted AES concentration was not required. FAE and olein were also quantified using quadratic calibration. Copyright © 2016 Elsevier B.V. All rights reserved.
Nolte, Tom M; Pinto-Gil, Kevin; Hendriks, A Jan; Ragas, Ad M J; Pastor, Manuel
2018-01-24
Microbial biomass and acclimation can affect the removal of organic chemicals in natural surface waters. In order to account for these effects and develop more robust models for biodegradation, we have compiled and curated removal data for un-acclimated (pristine) surface waters on which we developed quantitative structure-activity relationships (QSARs). Global analysis of the very heterogeneous dataset including neutral, anionic, cationic and zwitterionic chemicals (N = 233) using a random forest algorithm showed that useful predictions were possible (Q²_ext = 0.4-0.5) though relatively large standard errors were associated (SDEP ∼0.7). Classification of the chemicals based on speciation state and metabolic pathway showed that biodegradation is influenced by the two, and that the dependence of biodegradation on chemical characteristics is non-linear. Class-specific QSAR analysis indicated that shape and charge distribution determine the biodegradation of neutral chemicals (R² ∼ 0.6), e.g. through membrane permeation or binding to P450 enzymes, whereas the average biodegradation of charged chemicals is 1 to 2 orders of magnitude lower, for which degradation depends more directly on cellular uptake (R² ∼ 0.6). Further analysis showed that specific chemical classes such as peptides and organic halogens are relatively less biodegradable in pristine surface waters, resulting in the need for the microbial consortia to acclimate. Additional literature data was used to verify an acclimation model (based on Monod-type kinetics) capable of extrapolating QSAR predictions to acclimating conditions such as in water treatment, downstream lakes and large rivers under μg L⁻¹ to mg L⁻¹ concentrations. The framework developed, despite being based on multiple assumptions, is promising and needs further validation using experimentation with more standardised and homogenised conditions as well as adequate characterization of the inoculum used.
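A minimal sketch of the modelling workflow described above: a random forest QSAR fitted on training chemicals and scored on an external set with Q²_ext = 1 - SS_res/SS_tot and the standard deviation error of prediction (SDEP). The descriptor matrix and removal rates below are synthetic stand-ins, not the curated dataset.

```python
# Random forest QSAR with external-set Q2 and SDEP. Synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(233, 15))                          # chemical descriptors
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.7, 233)   # log removal rate

train, test = slice(0, 180), slice(180, 233)
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X[train], y[train])
pred = rf.predict(X[test])

q2_ext = 1 - np.sum((y[test] - pred) ** 2) / np.sum((y[test] - y[test].mean()) ** 2)
sdep = np.sqrt(np.mean((y[test] - pred) ** 2))
print(f"Q2_ext={q2_ext:.2f}, SDEP={sdep:.2f}")
```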
NASA Astrophysics Data System (ADS)
Künneth, Christopher; Materlik, Robin; Kersch, Alfred
2017-05-01
Size effects from surface or interface energy play a pivotal role in stabilizing the ferroelectric phase in recently discovered thin-film Zirconia-Hafnia. However, sufficient quantitative understanding has been lacking due to the interference with the stabilizing effect from dopants. For the important class of undoped Hf1-xZrxO2, a phase stability model based on free energy from density functional theory (DFT) and surface energy values adapted to the sparse experimental and theoretical data has been successful in describing key properties of the available thin-film data. Since surfaces and interfaces are prone to interference, the predictive capability of the model is surprising and points to a hitherto undetected underlying reason. New experimental data hint at the existence of an interlayer on the grain surface, fixed in the tetragonal phase, possibly shielding the grain from external influence. To explore the consequences of such a mechanism, we develop an interface free energy model to include the fixed interlayer, generalize the grain model to include a grain radius distribution, calculate average polarization and permittivity, and compare the model with available experimental data. Since values for interface energies are sparse or uncertain, we obtain them by minimizing the least-squares difference between predicted key parameters and experimental data in a global optimization. Since the detailed values of the DFT energies depend on the chosen method, we repeat the search for different computed data sets and arrive at quantitatively different but qualitatively consistent values for the interface energies. The resulting values are physically very reasonable and the model is able to give qualitative predictions. On the other hand, the optimization reveals that the model is not able to fully capture the experimental data. We discuss possible physical effects and directions of research that may close this gap.
Caetano, Fabiana A; Dirk, Brennan S; Tam, Joshua H K; Cavanagh, P Craig; Goiko, Maria; Ferguson, Stephen S G; Pasternak, Stephen H; Dikeakos, Jimmy D; de Bruyn, John R; Heit, Bryan
2015-12-01
Our current understanding of the molecular mechanisms which regulate cellular processes such as vesicular trafficking has been enabled by conventional biochemical and microscopy techniques. However, these methods often obscure the heterogeneity of the cellular environment, thus precluding a quantitative assessment of the molecular interactions regulating these processes. Herein, we present Molecular Interactions in Super Resolution (MIiSR) software which provides quantitative analysis tools for use with super-resolution images. MIiSR combines multiple tools for analyzing intermolecular interactions, molecular clustering and image segmentation. These tools enable quantification, in the native environment of the cell, of molecular interactions and the formation of higher-order molecular complexes. The capabilities and limitations of these analytical tools are demonstrated using both modeled data and examples derived from the vesicular trafficking system, thereby providing an established and validated experimental workflow capable of quantitatively assessing molecular interactions and molecular complex formation within the heterogeneous environment of the cell.
Widefield quantitative multiplex surface enhanced Raman scattering imaging in vivo
NASA Astrophysics Data System (ADS)
McVeigh, Patrick Z.; Mallia, Rupananda J.; Veilleux, Israel; Wilson, Brian C.
2013-04-01
In recent years numerous studies have shown the potential advantages of molecular imaging in vitro and in vivo using contrast agents based on surface enhanced Raman scattering (SERS); however, the low throughput of traditional point-scanned imaging methodologies has limited their use in biological imaging. In this work we demonstrate that direct widefield Raman imaging based on a tunable filter is capable of quantitative multiplex SERS imaging in vivo, and that this imaging is possible with acquisition times that are orders of magnitude lower than achievable with comparable point-scanned methodologies. The system, designed for small animal imaging, has a linear response from 0.01 to 100 pM, acquires typical in vivo images in <10 s, and with suitable SERS reporter molecules is capable of multiplex imaging without compensation for spectral overlap. To demonstrate the utility of widefield Raman imaging in biological applications, we show quantitative imaging of four simultaneous SERS reporter molecules in vivo with resulting probe quantification that is in excellent agreement with known quantities (R² > 0.98).
NASA Astrophysics Data System (ADS)
Pham, Binh Thai; Tien Bui, Dieu; Pourghasemi, Hamid Reza; Indra, Prakash; Dholakia, M. B.
2017-04-01
The objective of this study is to compare the prediction performance of three techniques, Functional Trees (FT), Multilayer Perceptron Neural Networks (MLP Neural Nets), and Naïve Bayes (NB), for landslide susceptibility assessment in the Uttarakhand Area (India). Firstly, a landslide inventory map with 430 landslide locations in the study area was constructed from various sources. Landslide locations were then randomly split into two parts: (i) 70% of the landslide locations used for training the models and (ii) 30% used for the validation process. Secondly, a total of eleven landslide conditioning factors, including slope angle, slope aspect, elevation, curvature, lithology, soil, land cover, distance to roads, distance to lineaments, distance to rivers, and rainfall, were used in the analysis to elucidate the spatial relationship between these factors and landslide occurrences. Feature selection with the Linear Support Vector Machine (LSVM) algorithm was employed to assess the prediction capability of these conditioning factors in the landslide models. Subsequently, the NB, MLP Neural Nets, and FT models were constructed using the training dataset. Finally, success-rate and prediction-rate curves were employed to validate and compare the predictive capability of the three models. Overall, all three models performed very well for landslide susceptibility assessment. Of these models, the MLP Neural Nets and the FT models had almost the same predictive capability, with the MLP Neural Nets (AUC = 0.850) slightly better than the FT model (AUC = 0.849). The NB model (AUC = 0.838) had the lowest predictive capability. Landslide susceptibility maps were then developed using the three models. These maps would be helpful to planners and engineers for development activities and land-use planning.
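A sketch of the validation step described above compares several fitted susceptibility classifiers by AUC on a 30% hold-out set. The conditioning-factor matrix is synthetic, and since Functional Trees are not available in scikit-learn, a plain decision tree serves as a stand-in.

```python
# Compare classifiers by hold-out AUC, mirroring the 70/30 split above.
# Data and the decision-tree stand-in for FT are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
X = rng.normal(size=(430, 11))                               # 11 conditioning factors
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 1, 430)) > 0    # landslide / stable

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
models = {"MLP": MLPClassifier(max_iter=2000, random_state=0),
          "NB": GaussianNB(),
          "Tree (FT stand-in)": DecisionTreeClassifier(max_depth=5, random_state=0)}
for name, m in models.items():
    auc = roc_auc_score(yte, m.fit(Xtr, ytr).predict_proba(Xte)[:, 1])
    print(f"{name}: AUC={auc:.3f}")
```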
Comparative Analysis of Predictive Models for Liver Toxicity Using ToxCast Assays and Quantitative Structure-Activity Relationships Jie Liu1,2, Richard Judson1, Matthew T. Martin1, Huixiao Hong3, Imran Shah1 1National Center for Computational Toxicology (NCCT), US EPA, RTP, NC...
Investigation of a redox-sensitive predictive model of mouse embryonic stem cell differentiation via quantitative nuclease protection assays and glutathione redox status Chandler KJ,Hansen JM, Knudsen T,and Hunter ES 1. U.S. Environmental Protection Agency, Research Triangl...
NASA Astrophysics Data System (ADS)
Arjunan, A.; Wang, C. J.; Yahiaoui, K.; Mynors, D. J.; Morgan, T.; Nguyen, V. B.; English, M.
2014-11-01
Building standards incorporating quantitative acoustical criteria to ensure adequate sound insulation are now being implemented. Engineers are making great efforts to design acoustically efficient double-wall structures. Accordingly, efficient simulation models to predict the acoustic insulation of double-leaf wall structures are needed. This paper presents the development of a numerical tool that can predict the frequency-dependent sound reduction index R of stud-based double-leaf walls in the one-third-octave band frequency range. A fully vibro-acoustic 3D model, consisting of two rooms partitioned by a double-leaf wall and incorporating the structure-acoustic fluid coupling with existing fluid and structural solvers, is presented. The validity of the finite element (FE) model is assessed by comparison with experimental test results carried out in a certified laboratory. Accurate representation of the structural damping matrix to effectively predict the R values is studied. The possibility of minimising the simulation time using a frequency-dependent mesh model was also investigated. The FEA model presented in this work is capable of predicting the weighted sound reduction index Rw, along with the A-weighted pink noise term C and the A-weighted urban noise term Ctr, within an error of 1 dB. The model developed can also be used to analyse the acoustically induced frequency-dependent geometrical behaviour of the double-leaf wall components to optimise them for best acoustic performance. The FE modelling procedure reported in this paper can be extended to other building components undergoing fluid-structure interaction (FSI) to evaluate their acoustic insulation.
NASA Astrophysics Data System (ADS)
Schultz, L. A.; Smith, M. R.; Fuell, K.; Stano, G. T.; LeRoy, A.; Berndt, E.
2015-12-01
Instruments aboard the Joint Polar Satellite System (JPSS) series of satellites will provide imagery and other data sets relevant to operational weather forecasts. To prepare current and future weather forecasters in application of these data sets, Proving Ground activities have been established that demonstrate future JPSS capabilities through use of similar sensors aboard NASA's Terra and Aqua satellites, and the S-NPP mission. As part of these efforts, NASA's Short-term Prediction Research and Transition (SPoRT) Center in Huntsville, Alabama partners with near real-time providers of S-NPP products (e.g., NASA, UW/CIMSS, UAF/GINA, etc.) to demonstrate future capabilities of JPSS. This includes training materials and product distribution of multi-spectral false color composites of the visible, near-infrared, and infrared bands of MODIS and VIIRS. These are designed to highlight phenomena of interest to help forecasters digest the multispectral data provided by the VIIRS sensor. In addition, forecasters have been trained on the use of the VIIRS day-night band, which provides imagery of moonlit clouds, surface, and lights emitted by human activities. Hyperspectral information from the S-NPP/CrIS instrument provides thermodynamic profiles that aid in the detection of extremely cold air aloft, helping to map specific aviation hazards at high latitudes. Hyperspectral data also support the estimation of ozone concentration, which can highlight the presence of much drier stratospheric air, and map its interaction with mid-latitude or tropical cyclones to improve predictions of their strengthening or decay. Proving Ground activities are reviewed, including training materials and methods that have been provided to forecasters, and forecaster feedback on these products that has been acquired through formal, detailed assessment of their applicability to a given forecast threat or task. Future opportunities for collaborations around the delivery of training are proposed, along with other applications of multispectral data and derived, more quantitative products.
van Rossum, Peter S N; Fried, David V; Zhang, Lifei; Hofstetter, Wayne L; van Vulpen, Marco; Meijer, Gert J; Court, Laurence E; Lin, Steven H
2016-05-01
A reliable prediction of a pathologic complete response (pathCR) to chemoradiotherapy before surgery for esophageal cancer would enable investigators to study the feasibility and outcome of an organ-preserving strategy after chemoradiotherapy. So far no clinical parameters or diagnostic studies are able to accurately predict which patients will achieve a pathCR. The aim of this study was to determine whether subjective and quantitative assessment of baseline and postchemoradiation ¹⁸F-FDG PET can improve the accuracy of predicting pathCR to preoperative chemoradiotherapy in esophageal cancer beyond clinical predictors. This retrospective study was approved by the institutional review board, and the need for written informed consent was waived. Clinical parameters along with subjective and quantitative parameters from baseline and postchemoradiation ¹⁸F-FDG PET were derived from 217 esophageal adenocarcinoma patients who underwent chemoradiotherapy followed by surgery. The associations between these parameters and pathCR were studied in univariable and multivariable logistic regression analysis. Four prediction models were constructed and internally validated using bootstrapping to study the incremental predictive values of subjective assessment of ¹⁸F-FDG PET, conventional quantitative metabolic features, and comprehensive ¹⁸F-FDG PET texture/geometry features, respectively. The clinical benefit of ¹⁸F-FDG PET was determined using decision-curve analysis. A pathCR was found in 59 (27%) patients. A clinical prediction model (corrected c-index, 0.67) was improved by adding ¹⁸F-FDG PET-based subjective assessment of response (corrected c-index, 0.72). This latter model was slightly improved by the addition of 1 conventional quantitative metabolic feature only (i.e., postchemoradiation total lesion glycolysis; corrected c-index, 0.73), and even more by subsequently adding 4 comprehensive ¹⁸F-FDG PET texture/geometry features (corrected c-index, 0.77). However, at a decision threshold of 0.9 or higher, representing a clinically relevant predictive value for pathCR at which one may be willing to omit surgery, there was no clear incremental value. Subjective and quantitative assessment of ¹⁸F-FDG PET provides statistical incremental value for predicting pathCR after preoperative chemoradiotherapy in esophageal cancer. However, the discriminatory improvement beyond clinical predictors does not translate into a clinically relevant benefit that could change decision making. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
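The "corrected c-index" reported above refers to a bootstrap optimism correction of the apparent discrimination. The sketch below illustrates that standard procedure (Harrell-style) on synthetic data; it is not the study's code, and the features and coefficients are invented.

```python
# Bootstrap optimism-corrected c-index (AUC) for a logistic model. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
X = rng.normal(size=(217, 4))                        # clinical + PET features
y = (X @ np.array([1.0, 0.6, 0.0, 0.0]) + rng.normal(0, 1, 217)) > 1.2

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

optimism = []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))            # bootstrap resample
    if y[idx].min() == y[idx].max():
        continue                                     # skip one-class resamples
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(boot - orig)

print(f"corrected c-index = {apparent - np.mean(optimism):.3f}")
```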
Early prediction of coma recovery after cardiac arrest with blinded pupillometry.
Solari, Daria; Rossetti, Andrea O; Carteron, Laurent; Miroz, John-Paul; Novy, Jan; Eckert, Philippe; Oddo, Mauro
2017-06-01
Prognostication studies on comatose cardiac arrest (CA) patients are limited by lack of blinding, potentially causing overestimation of outcome predictors and self-fulfilling prophecy. Using a blinded approach, we analyzed the value of quantitative automated pupillometry to predict neurological recovery after CA. We examined a prospective cohort of 103 comatose adult patients who were unconscious 48 hours after CA and underwent repeated measurements of quantitative pupillary light reflex (PLR) using the Neurolight-Algiscan device. Clinical examination, electroencephalography (EEG), somatosensory evoked potentials (SSEP), and serum neuron-specific enolase were performed in parallel, as part of standard multimodal assessment. Automated pupillometry results were blinded to clinicians involved in patient care. Cerebral Performance Categories (CPC) at 1 year was the outcome endpoint. Survivors (n = 50 patients; 32 CPC 1, 16 CPC 2, 2 CPC 3) had higher quantitative PLR (median = 20 [range = 13-41] vs 11 [0-55] %, p < 0.0001) and constriction velocity (1.46 [0.85-4.63] vs 0.94 [0.16-4.97] mm/s, p < 0.0001) than nonsurvivors. At 48 hours, a quantitative PLR < 13% had 100% specificity and positive predictive value to predict poor recovery (0% false-positive rate), and provided equal performance to that of EEG and SSEP. Reduced quantitative PLR correlated with higher serum neuron-specific enolase (Spearman r = -0.52, p < 0.0001). Reduced quantitative PLR correlates with postanoxic brain injury and, when compared to standard multimodal assessment, is highly accurate in predicting long-term prognosis after CA. This is the first prognostication study to show the value of automated pupillometry using a blinded approach to minimize self-fulfilling prophecy. Ann Neurol 2017;81:804-810. © 2017 American Neurological Association.
Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases
Zhang, Hongpo
2018-01-01
Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and predictions from shallow neural networks show large variance. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. Using the reconstruction error, the network depth is determined independently, and unsupervised training and supervised optimization are combined. This ensures the accuracy of model prediction while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets in the UCI database. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively. The variance of prediction accuracy was 5.78 and 4.46, respectively. PMID:29854369
A Method to Constrain Genome-Scale Models with 13C Labeling Data
García Martín, Héctor; Kumar, Vinay Satish; Weaver, Daniel; Ghosh, Amit; Chubukov, Victor; Mukhopadhyay, Aindrila; Arkin, Adam; Keasling, Jay D.
2015-01-01
Current limitations in quantitatively predicting biological behavior hinder our efforts to engineer biological systems to produce biofuels and other desired chemicals. Here, we present a new method for calculating metabolic fluxes, key targets in metabolic engineering, that incorporates data from 13C labeling experiments and genome-scale models. The data from 13C labeling experiments provide strong flux constraints that eliminate the need to assume an evolutionary optimization principle such as the growth rate optimization assumption used in Flux Balance Analysis (FBA). This effective constraining is achieved by making the simple but biologically relevant assumption that flux flows from core to peripheral metabolism and does not flow back. The new method is significantly more robust than FBA with respect to errors in genome-scale model reconstruction. Furthermore, it can provide a comprehensive picture of metabolite balancing and predictions for unmeasured extracellular fluxes as constrained by 13C labeling data. A comparison shows that the results of this new method are similar to those found through 13C Metabolic Flux Analysis (13C MFA) for central carbon metabolism but, additionally, it provides flux estimates for peripheral metabolism. The extra validation gained by matching 48 relative labeling measurements is used to identify where and why several existing COnstraint Based Reconstruction and Analysis (COBRA) flux prediction algorithms fail. We demonstrate how to use this knowledge to refine these methods and improve their predictive capabilities. This method provides a reliable base upon which to improve the design of biological systems. PMID:26379153
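To make the core idea above concrete, here is a toy linear-programming sketch in the same constraint-based spirit: a flux pinned by hypothetical ¹³C measurements enters as tight bounds on a tiny three-reaction network, and the remaining fluxes follow from the stoichiometric balance. The network, bounds, and objective are invented for illustration and are vastly simpler than a genome-scale model.

```python
# Tiny constraint-based flux calculation: S v = 0 with one flux fixed by
# (hypothetical) 13C data. Network A -> B -> C with a drain on C.
import numpy as np
from scipy.optimize import linprog

S = np.array([[1, -1,  0],     # A: produced by v1, consumed by v2
              [0,  1, -1]])    # B: produced by v2, consumed by v3
b = np.zeros(2)

# Suppose 13C MFA pins v1 to 10 +/- 0.5; other fluxes only loosely bounded.
bounds = [(9.5, 10.5), (0, 100), (0, 100)]

# Maximize v3 (linprog minimizes, so negate the objective coefficient).
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=b, bounds=bounds)
print("fluxes:", np.round(res.x, 2))   # balance forces v1 = v2 = v3
```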
Bioinactivation: Software for modelling dynamic microbial inactivation.
Garre, Alberto; Fernández, Pablo S; Lindqvist, Roland; Egea, Jose A
2017-03-01
This contribution presents the bioinactivation software, which implements functions for the modelling of isothermal and non-isothermal microbial inactivation. This software offers features such as user-friendliness, modelling of dynamic conditions, possibility to choose the fitting algorithm and generation of prediction intervals. The software is offered in two different formats: Bioinactivation core and Bioinactivation SE. Bioinactivation core is a package for the R programming language, which includes features for the generation of predictions and for the fitting of models to inactivation experiments using non-linear regression or a Markov Chain Monte Carlo algorithm (MCMC). The calculations are based on inactivation models common in academia and industry (Bigelow, Peleg, Mafart and Geeraerd). Bioinactivation SE supplies a user-friendly interface to selected functions of Bioinactivation core, namely the model fitting of non-isothermal experiments and the generation of prediction intervals. The capabilities of bioinactivation are presented in this paper through a case study, modelling the non-isothermal inactivation of Bacillus sporothermodurans. This study has provided a full characterization of the response of the bacteria to dynamic temperature conditions, including confidence intervals for the model parameters and a prediction interval of the survivor curve. We conclude that the MCMC algorithm produces a better characterization of the biological uncertainty and variability than non-linear regression. The bioinactivation software can be relevant to the food and pharmaceutical industry, as well as to regulatory agencies, as part of a (quantitative) microbial risk assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
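The bioinactivation package itself is written in R; as a language-neutral illustration of the simplest model it implements, the Python sketch below fits the isothermal Bigelow model, log10 N(t) = log10 N0 - t/D, to fabricated survivor counts. The package's distinctive features (non-isothermal fitting, MCMC, prediction intervals) are not reproduced here.

```python
# Fit the isothermal Bigelow log-linear inactivation model to made-up data.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 2, 4, 6, 8, 10])                  # time, min
logN = np.array([6.0, 5.2, 4.1, 3.2, 2.3, 1.4])    # log10 CFU/mL (fabricated)

def bigelow(t, logN0, D):
    return logN0 - t / D                           # D = decimal reduction time

p, cov = curve_fit(bigelow, t, logN, p0=[6.0, 2.0])
perr = np.sqrt(np.diag(cov))                       # 1-sigma parameter errors
print(f"logN0 = {p[0]:.2f} +/- {perr[0]:.2f}, D = {p[1]:.2f} +/- {perr[1]:.2f} min")
```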
NASA Technical Reports Server (NTRS)
Greenberg, Paul S.; Wernet, Mark P.
1999-01-01
Systems have been developed and demonstrated for performing quantitative velocity measurements in reduced gravity combustion science and fluid physics investigations. The unique constraints and operational environments inherent to reduced-gravity experimental facilities pose special challenges to the development of hardware and software systems. Both point and planar velocimetric capabilities are described, with particular attention being given to the development of systems to support the International Space Station laboratory. Emphasis has been placed on optical methods, primarily arising from the sensitivity of the phenomena of interest to intrusive probes. Limitations on available power, volume, data storage, and attendant expertise have motivated the use of solid-state sources and detectors, as well as efficient analysis capabilities emphasizing interactive data display and parameter control.
NASA Astrophysics Data System (ADS)
Ibrahim, Heide; Wales, Benji; Beaulieu, Samuel; Schmidt, Bruno E.; Thiré, Nicolas; Fowe, Emmanuel P.; Bisson, Éric; Hebeisen, Christoph T.; Wanie, Vincent; Giguére, Mathieu; Kieffer, Jean-Claude; Spanner, Michael; Bandrauk, André D.; Sanderson, Joseph; Schuurman, Michael S.; Légaré, François
2014-07-01
The introduction of femto-chemistry has made it a primary goal to follow the nuclear and electronic evolution of a molecule in time and space as it undergoes a chemical reaction. Using Coulomb Explosion Imaging, we have shot the first high-resolution molecular movie of a to and fro isomerization process in the acetylene cation. So far, this kind of phenomenon could only be observed using vacuum ultraviolet light from a free-electron laser. Here we show that 266 nm ultrashort laser pulses are capable of initiating rich dynamics through multiphoton ionization. With our generally applicable tabletop approach that can be used for other small organic molecules, we have investigated two basic chemical reactions simultaneously: proton migration and C=C bond breaking, triggered by multiphoton ionization. The experimental results are in excellent agreement with the timescales and relaxation pathways predicted by new and quantitative ab initio trajectory simulations.
Photoacoustic microscopy of bilirubin in tissue phantoms
Zhou, Yong; Zhang, Chi; Yao, Da-Kang
2012-01-01
Determining both bilirubin's concentration and its spatial distribution is important in disease diagnosis. Here, for the first time, we applied quantitative multiwavelength photoacoustic microscopy (PAM) to detect bilirubin concentration and distribution simultaneously. By measuring tissue-mimicking phantoms with different bilirubin concentrations, we showed that the root-mean-square error of prediction reached 0.52 and 0.83 mg/dL for pure bilirubin and for blood-mixed bilirubin detection (with 100% oxygen saturation), respectively. We further demonstrated the capability of the PAM system to image bilirubin distribution both with and without blood. Finally, by underlaying bilirubin phantoms with mouse skins, we showed that bilirubin can be imaged with consistent accuracy at depths greater than 400 μm. Our results show that PAM has potential for noninvasive bilirubin monitoring in vivo, as well as for further clinical applications. PMID:23235894
Quantifying Mapping Orbit Performance in the Vicinity of Primitive Bodies
NASA Technical Reports Server (NTRS)
Pavlak, Thomas A.; Broschart, Stephen B.; Lantoine, Gregory
2015-01-01
Predicting and quantifying the capability of mapping orbits in the vicinity of primitive bodies is challenging given the complex orbit geometries that exist and the irregular shape of the bodies themselves. This paper employs various quantitative metrics to characterize the performance and relative effectiveness of various types of mapping orbits including terminator, quasi-terminator, hovering, ping pong, and conic-like trajectories. Metrics of interest include surface area coverage, lighting conditions, and the variety of viewing angles achieved. The metrics discussed in this investigation are intended to enable mission designers and project stakeholders to better characterize candidate mapping orbits during preliminary mission formulation activities. The goal of this investigation is to understand the trade space associated with carrying out remote sensing campaigns at small primitive bodies in the context of a robotic space mission. Specifically, this study seeks to understand the surface viewing geometries, ranges, etc. that are available from several commonly proposed mapping orbit architectures.
Simulating dynamical features of escape panic
NASA Astrophysics Data System (ADS)
Helbing, Dirk; Farkas, Illés; Vicsek, Tamás
2000-09-01
One of the most disastrous forms of collective human behaviour is the kind of crowd stampede induced by panic, often leading to fatalities as people are crushed or trampled. Sometimes this behaviour is triggered in life-threatening situations such as fires in crowded buildings; at other times, stampedes can arise during the rush for seats or seemingly without cause. Although engineers are finding ways to alleviate the scale of such disasters, their frequency seems to be increasing with the number and size of mass events. But systematic studies of panic behaviour and quantitative theories capable of predicting such crowd dynamics are rare. Here we use a model of pedestrian behaviour to investigate the mechanisms of (and preconditions for) panic and jamming by uncoordinated motion in crowds. Our simulations suggest practical ways to prevent dangerous crowd pressures. Moreover, we find an optimal strategy for escape from a smoke-filled room, involving a mixture of individualistic behaviour and collective `herding' instinct.
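The pedestrian model referenced above belongs to the social-force family, in which each agent accelerates toward a desired velocity and is repelled by nearby agents. The sketch below is a heavily simplified, wall-free version with invented parameter values, intended only to show the structure of such a simulation, not to reproduce the paper's results.

```python
# Minimal social-force-style crowd sketch: relaxation toward a desired
# velocity plus exponential pairwise repulsion. All parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(6)
N, dt, tau, v0 = 30, 0.05, 0.5, 1.5
pos = rng.uniform(0, 10, (N, 2))
vel = np.zeros((N, 2))
goal = np.array([20.0, 5.0])                       # exit location

for step in range(200):
    e = goal - pos
    e /= np.linalg.norm(e, axis=1, keepdims=True)  # desired direction
    force = (v0 * e - vel) / tau                   # driving (relaxation) term
    for i in range(N):                             # pairwise repulsion
        d = pos[i] - pos
        r = np.linalg.norm(d, axis=1)
        mask = (r > 0) & (r < 1.0)
        if mask.any():
            force[i] += (2.0 * np.exp(-r[mask, None] / 0.3)
                         * d[mask] / r[mask, None]).sum(axis=0)
    vel += force * dt
    pos += vel * dt

print("mean distance to exit:", np.linalg.norm(goal - pos, axis=1).mean())
```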
Systematic coarse-grained modeling of complexation between small interfering RNA and polycations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Zonghui; Luijten, Erik, E-mail: luijten@northwestern.edu; Department of Materials Science and Engineering, Northwestern University, Evanston, Illinois 60208
All-atom molecular dynamics simulations can provide insight into the properties of polymeric gene-delivery carriers by elucidating their interactions and detailed binding patterns with nucleic acids. However, to explore nanoparticle formation through complexation of these polymers and nucleic acids and study their behavior at experimentally relevant time and length scales, a reliable coarse-grained model is needed. Here, we systematically develop such a model for the complexation of small interfering RNA (siRNA) and grafted polyethyleneimine copolymers, a promising candidate for siRNA delivery. We compare the predictions of this model with all-atom simulations and demonstrate that it is capable of reproducing detailed binding patterns, charge characteristics, and water release kinetics. Since the coarse-grained model accelerates the simulations by one to two orders of magnitude, it will make it possible to quantitatively investigate nanoparticle formation involving multiple siRNA molecules and cationic copolymers.
NASA Astrophysics Data System (ADS)
Zhai, Mengting; Chen, Yan; Li, Jing; Zhou, Jun
2017-12-01
The molecular electronegativity distance vector (MEDV-13) was used in this paper to describe the molecular structure of benzyl ether diamidine derivatives. Based on MEDV-13, a three-parameter (M3, M15, M47) QSAR model of insecticidal activity (pIC50) for 60 benzyl ether diamidine derivatives was constructed by leaps-and-bounds regression (LBR). The traditional correlation coefficient (R) and the cross-validation correlation coefficient (RCV) were 0.975 and 0.971, respectively. The robustness of the regression model was validated by the jackknife method, with correlation coefficients R ranging from 0.971 to 0.983. Meanwhile, the independent variables in the model were tested and showed no autocorrelation. The regression results indicate that the model has good robustness and predictive capability. The research would provide theoretical guidance for the development of a new generation of efficient, low-toxicity drugs against African trypanosomiasis.
Song, Hayoung; Kim, Hyunho; Lee, Eunsung
2018-05-16
Herein, a coumaraz-2-on-4-ylidene (1) is reported as a new example of an ambiphilic N-heterocyclic carbene with finely tunable electronic properties. The N-carbamic and aryl groups on the carbene carbon simultaneously provide exceptionally high electrophilicity and nucleophilicity to the carbene center, as evidenced by the 77Se NMR chemical shifts of the selenoketone derivatives and the CO stretching strengths of the rhodium carbonyl complexes. Since the precursors of 1 could be synthesized from various functionalized Schiff bases in a practical and scalable manner, the electronic properties of 1 can be fine-tuned in a quantitative and predictable way using the Hammett σ constant of the functional groups on the aryl ring. The facile electronic tuning capability of 1 may be further applicable to eliciting novel properties in main-group and transition metal chemistry. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Modeling of solid-state and excimer laser processes for 3D micromachining
NASA Astrophysics Data System (ADS)
Holmes, Andrew S.; Onischenko, Alexander I.; George, David S.; Pedder, James E.
2005-04-01
An efficient simulation method has recently been developed for multi-pulse ablation processes. This is based on pulse-by-pulse propagation of the machined surface according to one of several phenomenological models for the laser-material interaction. The technique allows quantitative predictions to be made about the surface shapes of complex machined parts, given only a minimal set of input data for parameter calibration. In the case of direct-write machining of polymers or glasses with ns-duration pulses, this data set can typically be limited to the surface profiles of a small number of standard test patterns. The use of phenomenological models for the laser-material interaction, calibrated by experimental feedback, allows fast simulation, and can achieve a high degree of accuracy for certain combinations of material, laser and geometry. In this paper, the capabilities and limitations of the approach are discussed, and recent results are presented for structures machined in SU8 photoresist.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franović, Igor, E-mail: franovic@ipb.ac.rs; Todorović, Kristina; Burić, Nikola
We use the mean-field approach to analyze the collective dynamics in macroscopic networks of stochastic Fitzhugh-Nagumo units with delayed couplings. The conditions for validity of the two main approximations behind the model, called the Gaussian approximation and the Quasi-independence approximation, are examined. It is shown that the dynamics of the mean-field model may indicate in a self-consistent fashion the parameter domains where the Quasi-independence approximation fails. Apart from a network of globally coupled units, we also consider the paradigmatic setup of two interacting assemblies to demonstrate how our framework may be extended to hierarchical and modular networks. In both cases, the mean-field model can be used to qualitatively analyze the stability of the system, as well as the scenarios for the onset and the suppression of the collective mode. In quantitative terms, the mean-field model is capable of predicting the average oscillation frequency corresponding to the global variables of the exact system.
Wan, Cai-Feng; Liu, Xue-Song; Wang, Lin; Zhang, Jie; Lu, Jin-Song; Li, Feng-Hua
2018-06-01
To clarify whether the quantitative parameters of contrast-enhanced ultrasound (CEUS) can be used to predict pathological complete response (pCR) in patients with locally advanced breast cancer receiving neoadjuvant chemotherapy (NAC). Fifty-one patients with histologically proved locally advanced breast cancer scheduled for NAC were enrolled. The quantitative data for CEUS and the tumor diameter were collected at baseline and before surgery, and compared with the pathological response. Multiple logistic regression analysis was performed to examine quantitative parameters at CEUS and the tumor diameter to predict the pCR, and receiver operating characteristic (ROC) curve analysis was used as a summary statistic. Multiple logistic regression analysis revealed that PEAK (the maximum intensity of the time-intensity curve during bolus transit), PEAK%, TTP% (time to peak), and diameter% were significant independent predictors of pCR, and the area under the ROC curve was 0.932 (Az1), and the sensitivity and specificity to predict pCR were 93.7% and 80.0%. The area under the ROC curve for the quantitative parameters was 0.927 (Az2), and the sensitivity and specificity to predict pCR were 81.2% and 94.3%. For diameter%, the area under the ROC curve was 0.786 (Az3), and the sensitivity and specificity to predict pCR were 93.8% and 54.3%. The values of Az1 and Az2 were significantly higher than that of Az3 (P = 0.027 and P = 0.034, respectively). However, there was no significant difference between the values of Az1 and Az2 (P = 0.825). Quantitative analysis of tumor blood perfusion with CEUS is superior to diameter% in predicting pCR, and can be used as a functional technique to evaluate tumor response to NAC. Copyright © 2018. Published by Elsevier B.V.
Quantitative prediction of solute strengthening in aluminium alloys.
Leyson, Gerard Paul M; Curtin, William A; Hector, Louis G; Woodward, Christopher F
2010-09-01
Despite significant advances in computational materials science, a quantitative, parameter-free prediction of the mechanical properties of alloys has been difficult to achieve from first principles. Here, we present a new analytic theory that, with input from first-principles calculations, is able to predict the strengthening of aluminium by substitutional solute atoms. Solute-dislocation interaction energies in and around the dislocation core are first calculated using density functional theory and a flexible-boundary-condition method. An analytic model for the strength, or stress to move a dislocation, owing to the random field of solutes, is then presented. The theory, which has no adjustable parameters and is extendable to other metallic alloys, predicts both the energy barriers to dislocation motion and the zero-temperature flow stress, allowing for predictions of finite-temperature flow stresses. Quantitative comparisons with experimental flow stresses at temperature T=78 K are made for Al-X alloys (X=Mg, Si, Cu, Cr) and good agreement is obtained.
Wang, Chia-Chen; Lai, Yin-Hung; Ou, Yu-Meng; Chang, Huan-Tsung; Wang, Yi-Sheng
2016-01-01
Quantitative analysis with mass spectrometry (MS) is important but challenging. Matrix-assisted laser desorption/ionization (MALDI) coupled with time-of-flight (TOF) MS offers superior sensitivity, resolution and speed, but such techniques have numerous disadvantages that hinder quantitative analyses. This review summarizes essential obstacles to analyte quantification with MALDI-TOF MS, including the complex ionization mechanism of MALDI, sensitive characteristics of the applied electric fields and the mass-dependent detection efficiency of ion detectors. General quantitative ionization and desorption interpretations of ion production are described. Important instrument parameters and available methods of MALDI-TOF MS used for quantitative analysis are also reviewed. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644968
Portable, one-step, and rapid GMR biosensor platform with smartphone interface.
Choi, Joohong; Gani, Adi Wijaya; Bechstein, Daniel J B; Lee, Jung-Rok; Utz, Paul J; Wang, Shan X
2016-11-15
Quantitative immunoassay tests in clinical laboratories require trained technicians, take hours to complete with multiple steps, and the instruments used are generally immobile; patient samples have to be sent to the labs for analysis. This prevents quantitative immunoassay tests from being performed outside laboratory settings. A portable, quantitative immunoassay device would be valuable in rural and resource-limited areas, where access to healthcare is scarce or far away. We have invented the Eigen Diagnosis Platform (EDP), a portable quantitative immunoassay platform based on Giant Magnetoresistance (GMR) biosensor technology. The platform does not require a trained technician to operate, and only requires one-step user involvement. It displays quantitative results in less than 15 min after sample insertion, and each test costs less than US$4. The GMR biosensor employed in the EDP is capable of detecting multiple biomarkers in one test, enabling a wide array of immune diagnostics to be performed simultaneously. In this paper, we describe the design of the EDP and demonstrate its capability. Multiplexed assay of human immunoglobulin G and M (IgG and IgM) antibodies with the EDP achieves sensitivities down to 0.07 and 0.33 nanomolar, respectively. The platform will allow lab testing to be performed in remote areas, and open up applications of immunoassay testing in other non-clinical settings, such as home, school, and office. Copyright © 2016 Elsevier B.V. All rights reserved.
A unified theory of impact crises and mass extinctions: quantitative tests.
Rampino, M R; Haggerty, B M; Pagano, T C
1997-05-30
Several quantitative tests of a general hypothesis linking impacts of large asteroids and comets with mass extinctions of life are possible based on astronomical data, impact dynamics, and geological information. The waiting times of large-body impacts on the Earth, derived from the flux of Earth-crossing asteroids and comets and the estimated size of impacts capable of causing large-scale environmental disasters, predict that impacts of objects ≥5 km in diameter (≥10⁷ Mt TNT equivalent) could be sufficient to explain the record of approximately 25 extinction pulses in the last 540 Myr, with the 5 recorded major mass extinctions related to impacts of the largest objects of ≥10 km in diameter (≥10⁸ Mt events). Smaller impacts (approximately 10⁶ Mt), with significant regional environmental effects, could be responsible for the lesser boundaries in the geologic record. Tests of the "kill curve" relationship for impact-induced extinctions, based on new data on extinction intensities and several well-dated large impact craters, also suggest that major mass extinctions require large impacts, and that a step in the kill curve may exist at impacts that produce craters of approximately 100 km diameter, smaller impacts being capable of only relatively weak extinction pulses. Single impact craters less than approximately 60 km in diameter should not be associated with detectable global extinction pulses (although they may explain stage and zone boundaries marked by lesser faunal turnover), but multiple impacts in that size range may produce significant stepped extinction pulses. Statistical tests of the last occurrences of species at mass-extinction boundaries are generally consistent with predictions for abrupt or stepped extinctions, and several boundaries are known to show "catastrophic" signatures of environmental disasters and biomass crash, impoverished postextinction fauna and flora dominated by stress-tolerant and opportunistic species, and gradual ecological recovery and radiation of new taxa. Isotopic and other geochemical signatures are also generally consistent with the expected after-effects of catastrophic impacts. Seven of the recognized extinction pulses seem to be associated with concurrent (in some cases multiple) stratigraphic impact markers (e.g., layers with high iridium, shocked minerals, microtektites), and/or large, dated impact craters. Other less well-studied crisis intervals show elevated iridium, but well below that of the K/T spike, which might be explained by low-Ir impactors, ejecta blowoff, or sedimentary reworking and dilution of impact signatures. The best explanation for a possible periodic component of approximately 30 Myr in mass extinctions and clusters of impacts is the pulselike modulation of the comet flux associated with the solar system's periodic passage through the plane of the Milky Way Galaxy. The quantitative agreement between paleontologic and astronomical data suggests an important underlying unification of the processes involved.
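A back-of-envelope version of the waiting-time argument above: given a power-law cumulative impact rate N(>d) per Myr, the mean interval between impacts of bodies at least d km in diameter is 1/N(>d). The rate normalization and slope below are rough placeholder values chosen only to give plausible orders of magnitude; they are not the paper's fitted numbers.

```python
# Mean waiting time between impacts of bodies >= d_km, assuming a power-law
# cumulative flux N(>d) = rate_1km * d**(-slope) per Myr. Parameters are
# illustrative placeholders, not the study's values.
def waiting_time_myr(d_km, rate_1km_per_myr=1.0, slope=2.0):
    n_per_myr = rate_1km_per_myr * d_km ** (-slope)  # cumulative rate N(>d)
    return 1.0 / n_per_myr

for d in (1, 5, 10):
    print(f">= {d:2d} km impactor: ~{waiting_time_myr(d):.0f} Myr between events")
```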
High Fidelity Ion Beam Simulation of High Dose Neutron Irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Was, Gary; Wirth, Brian; Motta, Arthur
The objective of this proposal is to demonstrate the capability to predict the evolution of microstructure and properties of structural materials in-reactor and at high doses, using ion irradiation as a surrogate for reactor irradiations. “Properties” includes both physical properties (irradiated microstructure) and the mechanical properties of the material. Demonstration of the capability to predict properties has two components. One is ion irradiation of a set of alloys to yield an irradiated microstructure and corresponding mechanical behavior that are substantially the same as results from neutron exposure in the appropriate reactor environment. Second is the capability to predict the irradiated microstructure and corresponding mechanical behavior on the basis of improved models, validated against both ion and reactor irradiations and verified against ion irradiations. Taken together, achievement of these objectives will yield an enhanced capability for simulating the behavior of materials in reactor irradiations.
NASA Astrophysics Data System (ADS)
Qiu, Yuchen; Tan, Maxine; McMeekin, Scott; Thai, Theresa; Moore, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin
2015-03-01
The purpose of this study is to identify and apply quantitative image biomarkers for early prediction of tumor response to chemotherapy among ovarian cancer patients who participated in clinical trials testing new drugs. In the experiment, we retrospectively selected 30 cases from patients who participated in Phase I clinical trials of new drugs or drug agents for ovarian cancer treatment. Each case is composed of two sets of CT images acquired pre- and post-treatment (4-6 weeks after starting treatment). A computer-aided detection (CAD) scheme was developed to extract and analyze the quantitative image features of the metastatic tumors previously tracked by the radiologists using the standard Response Evaluation Criteria in Solid Tumors (RECIST) guideline. The CAD scheme first segmented 3-D tumor volumes from the background using a hybrid tumor segmentation scheme. Then, for each segmented tumor, CAD computed three quantitative image features: the change in tumor volume, tumor CT number (density), and density variance. The feature changes were calculated between the matched tumors tracked on the CT images acquired pre- and post-treatment. Finally, CAD predicted each patient's 6-month progression-free survival (PFS) using a decision-tree based classifier. The performance of the CAD scheme was compared with the RECIST category. The results show that the CAD scheme achieved a prediction accuracy of 76.7% (23/30 cases) with a Kappa coefficient of 0.493, significantly higher than the performance of RECIST prediction, which had a prediction accuracy of 60% (17/30) and a Kappa coefficient of 0.062. This study demonstrated the feasibility of analyzing quantitative image features to improve the accuracy of early prediction of tumor response to new drugs or therapeutic methods for ovarian cancer patients.
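The prediction step described above can be sketched in a few lines of Python: a decision-tree classifier on three per-tumor feature changes, scored with accuracy and Cohen's kappa. The data below are synthetic placeholders, not the study's 30 cases, so this only illustrates the workflow.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))                   # [dVolume, dDensity, dDensityVar]
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy 6-month PFS label

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
y_hat = cross_val_predict(clf, X, y, cv=5)     # cross-validated predictions
print("accuracy:", accuracy_score(y, y_hat))
print("kappa:   ", cohen_kappa_score(y, y_hat))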
Making predictions of mangrove deforestation: a comparison of two methods in Kenya.
Rideout, Alasdair J R; Joshi, Neha P; Viergever, Karin M; Huxham, Mark; Briers, Robert A
2013-11-01
Deforestation of mangroves is of global concern given their importance for carbon storage, biogeochemical cycling and the provision of other ecosystem services, but the links between rates of loss and potential drivers or risk factors are rarely evaluated. Here, we identified key drivers of mangrove loss in Kenya and compared two different approaches to predicting risk. Risk factors tested included various possible predictors of anthropogenic deforestation related to population, suitability for land use change and accessibility. Two approaches were taken to modelling risk: a quantitative statistical approach and a qualitative categorical ranking approach. A quantitative model linking rates of loss to risk factors was constructed based on generalized least squares regression, using mangrove loss data from 1992 to 2000. Population density, soil type and proximity to roads were the most important predictors. To validate this model, it was used to generate a map of losses of Kenyan mangroves predicted to have occurred between 2000 and 2010. The qualitative categorical model was constructed using data from the same selection of variables, with the coincidence of different risk factors in particular mangrove areas combined additively to create a relative risk index, which was then mapped. Quantitative predictions of loss were significantly correlated with the actual loss of mangroves between 2000 and 2010, and the categorical risk index values were also highly correlated with the quantitative predictions. Hence, in this case the relatively simple categorical modelling approach was of similar predictive value to the more complex quantitative model of mangrove deforestation. The advantages and disadvantages of each approach are discussed, and the implications for mangroves are outlined. © 2013 Blackwell Publishing Ltd.
Yao, Mingyin; Yang, Hui; Huang, Lin; Chen, Tianbing; Rao, Gangfu; Liu, Muhua
2017-05-10
In seeking a green analytical method for monitoring toxic heavy-metal residues in fresh leafy vegetables, laser-induced breakdown spectroscopy (LIBS) was applied to demonstrate its capability for this task. The spectra of fresh vegetable samples polluted in the lab were collected with an optimized LIBS experimental setup, and reference concentrations of cadmium (Cd) in the samples were obtained by conventional atomic absorption spectroscopy after wet digestion. Direct calibration employing the intensity of a single Cd line against Cd concentration exposed the weakness of this calibration method. The accuracy of linear calibration improved slightly when three Cd lines were used as characteristic variables, especially after the spectra were pretreated, but this was still insufficient for predicting Cd in samples. Therefore, partial least-squares regression (PLSR) was utilized to enhance the robustness of the quantitative analysis. The results of the PLSR model showed that the prediction accuracy for the Cd target can meet the requirements of determination in food safety. This investigation showed that LIBS is a promising and emerging method for analyzing toxic components in agricultural products, especially when combined with suitable chemometrics.
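A minimal sketch of the PLSR calibration step, assuming scikit-learn and random stand-in arrays for the spectra and the AAS reference concentrations; it shows the shape of the workflow, not the study's actual data handling.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
spectra = rng.normal(size=(60, 500))       # 60 samples x 500 spectral channels
cd_ref = 2.0 * spectra[:, 100] + rng.normal(scale=0.1, size=60)  # toy Cd values

X_tr, X_te, y_tr, y_te = train_test_split(spectra, cd_ref, random_state=1)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
print("test R^2:", r2_score(y_te, pls.predict(X_te).ravel()))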
How Reliable Is the Prediction of Solar Wind Background?
NASA Astrophysics Data System (ADS)
Jian, Lan K.; MacNeice, Peter; Taktakishvili, Aleksandre; Odstrcil, Dusan; Jackson, Bernard; Yu, Hsiu-Shan; Riley, Pete; Sokolov, Igor
2015-04-01
The prediction of the solar wind background is a necessary part of space weather forecasting. Multiple coronal and heliospheric models have been installed at the Community Coordinated Modeling Center (CCMC) to produce the solar wind, including the Wang-Sheeley-Arge (WSA)-Enlil model, the MHD-Around-a-Sphere (MAS)-Enlil model, the Space Weather Modeling Framework (SWMF), and heliospheric tomography using interplanetary scintillation (IPS) data. By comparing the modeling results with the OMNI data over 7 Carrington rotations in 2007, we have conducted a third-party validation of these models for the near-Earth solar wind. This work will help the models get ready for the transition from research to operations. Besides visual comparison, we have quantitatively assessed the models' capabilities in reproducing the time series and statistics of solar wind parameters. Using improved algorithms, we have identified magnetic field sector boundaries (SBs) and slow-to-fast stream interaction regions (SIRs) as focused structures. The success rate in capturing them and the time offset vary widely among models. For this period, the 2014 version of the MAS-Enlil model works best for SBs, and the heliospheric tomography works best for SIRs. General strengths and weaknesses of each model are identified to provide an unbiased reference to model developers and users.
Hou, X; Chen, X; Zhang, M; Yan, A
2016-01-01
Plasmodium falciparum, the most lethal parasite causing malaria, is responsible for over one million deaths per year. P. falciparum dihydroorotate dehydrogenase (PfDHODH) has been validated as a promising drug development target for antimalarial therapy since it catalyzes the rate-limiting step for DNA and RNA biosynthesis. In this study, we investigated the quantitative structure-activity relationships (QSAR) of the antimalarial activity of PfDHODH inhibitors by generating four computational models using multilinear regression (MLR) and a support vector machine (SVM), based on a dataset of 255 PfDHODH inhibitors. All the models display good prediction quality, with a leave-one-out q^2 > 0.66, a correlation coefficient (r) > 0.85 on both training and test sets, and a mean square error (MSE) < 0.32 on training sets and < 0.37 on test sets. The study indicated that hydrogen bonding ability, atom polarizabilities and ring complexity are the predominant factors for the inhibitors' antimalarial activity. The models are capable of predicting inhibitors' antimalarial activity, and the molecular descriptors used to build them could be helpful in the development of new antimalarial drugs.
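The two model families used above (MLR and SVM) and the leave-one-out q^2 criterion can be sketched as follows; the descriptor matrix is synthetic, standing in for the 255-inhibitor dataset.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
X = rng.normal(size=(80, 10))                                 # 10 descriptors
y = X @ rng.normal(size=10) + rng.normal(scale=0.3, size=80)  # toy activities

for model in (LinearRegression(), SVR(kernel="rbf", C=10.0)):
    pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
    print(type(model).__name__, "LOO q^2 =", round(r2_score(y, pred), 3))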
NASA Astrophysics Data System (ADS)
Fedosov, Dmitry
2011-03-01
Computational biophysics is a large and rapidly growing area of computational physics. In this talk, we will focus on a number of biophysical problems related to blood cells and blood flow in health and disease. Blood flow plays a fundamental role in a wide range of physiological processes and pathologies in the organism. To understand and, if necessary, manipulate the course of these processes, it is essential to investigate blood flow under realistic conditions, including the deformability of blood cells, their interactions, and their behavior in the complex microvascular network. Using a multiscale cell model we are able to accurately capture red blood cell mechanics, rheology, and dynamics in agreement with a number of single-cell experiments. Further, this validated model yields accurate predictions of blood rheological properties, cell migration, the cell-free layer, and hemodynamic resistance in microvessels. In addition, we investigate blood-related changes in malaria, which include a considerable stiffening of red blood cells and their cytoadherence to the endothelium. For these biophysical problems, computational modeling is able to provide new physical insights and capabilities for quantitative predictions of blood flow in health and disease.
Shahbazy, Mohammad; Kompany-Zareh, Mohsen; Najafpour, Mohammad Mahdi
2015-11-01
Water oxidation is among the most important reactions in artificial photosynthesis, and nano-sized layered manganese-calcium oxides are efficient catalysts for this reaction. Herein, a quantitative structure-activity relationship (QSAR) model was constructed to predict the catalytic activities of twenty manganese-calcium oxides toward water oxidation, using multiple linear regression (MLR) for multivariate calibration and a genetic algorithm (GA) for feature selection. Although eight parameters are controlled during synthesis of the catalysts (ripening time, temperature, manganese content, calcium content, potassium content, the calcium:manganese ratio, the average manganese oxidation state, and the catalyst surface), the GA selected only three of them (potassium content, the calcium:manganese ratio and the average manganese oxidation state) as the most effective parameters for the catalytic activities of these compounds. The model's accuracy criteria, R^2_test and Q^2_test, for predicting the catalytic rate of external test-set experiments were 0.941 and 0.906, respectively. The model therefore shows acceptable capability to predict catalytic activity. Copyright © 2015 Elsevier B.V. All rights reserved.
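Since only eight candidate parameters exist, an exhaustive search over all three-variable subsets can stand in for the paper's genetic algorithm in this illustrative sketch; the data are synthetic and the cross-validated R^2 replaces the paper's exact scoring.

import itertools
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 8))   # 8 synthesis parameters for 20 toy catalysts
y = X[:, 2] - 0.8 * X[:, 5] + 0.5 * X[:, 6] + rng.normal(scale=0.2, size=20)

best = max(
    itertools.combinations(range(8), 3),
    key=lambda idx: cross_val_score(
        LinearRegression(), X[:, list(idx)], y, cv=5).mean(),
)
print("selected parameter indices:", best)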
Comparing multiple statistical methods for inverse prediction in nuclear forensics applications
Lewis, John R.; Zhang, Adah; Anderson-Cook, Christine Michaela
2017-10-29
Forensic science seeks to predict source characteristics using measured observables. Statistically, this objective can be thought of as an inverse problem where interest is in the unknown source characteristics or factors (X) of some underlying causal model producing the observables or responses (Y = g(X) + error). Here, this paper reviews several statistical methods for use in inverse problems and demonstrates that comparing results from multiple methods can be used to assess predictive capability. Motivation for assessing inverse predictions comes from the desired application to historical and future experiments involving nuclear material production for forensics research, in which inverse predictions, along with an assessment of predictive capability, are desired.
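One common way to operationalize the inverse problem Y = g(X) + error is to fit the forward model g and then numerically invert it for a new observed response; the sketch below uses a toy forward model and scipy's bounded scalar optimizer, and is not one of the paper's specific methods.

import numpy as np
from scipy.optimize import minimize_scalar

def g(x):                       # toy forward model (assumed known or fitted)
    return 1.5 * x + 0.1 * x ** 2

y_obs = 7.0                     # new measured observable
res = minimize_scalar(lambda x: (g(x) - y_obs) ** 2,
                      bounds=(0.0, 10.0), method="bounded")
print("inverse prediction x_hat =", round(res.x, 3))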
Opportunities of probabilistic flood loss models
NASA Astrophysics Data System (ADS)
Schröter, Kai; Kreibich, Heidi; Lüdtke, Stefan; Vogel, Kristin; Merz, Bruno
2016-04-01
Oftentimes, traditional uni-variate damage models, such as depth-damage curves, fail to reproduce the variability of observed flood damage. However, reliable flood damage models are a prerequisite for the practical usefulness of model results. Innovative multi-variate probabilistic modelling approaches are promising for capturing and quantifying the uncertainty involved and thus for improving the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks, against traditional stage-damage functions. For model evaluation we use empirical damage data from computer-aided telephone interviews compiled after the floods of 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by sub-setting the damage records: one sub-set is used to derive the models, and the remaining records are used to evaluate the predictive performance of the models. Further, we stratify the sample by catchment, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out at the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviation (mean bias error, MBE), precision (mean absolute error, MAE), and the sharpness and reliability of the predictions, the latter represented by the proportion of observations that fall within the 5%-95% quantile predictive interval. The comparison of the uni-variable stage-damage function and the multi-variable model approaches emphasizes the importance of quantifying predictive uncertainty. With each explanatory variable, the multi-variable model reveals an additional source of uncertainty. Nevertheless, the predictive performance in terms of bias (MBE), precision (MAE) and reliability (hit rate, HR) is clearly improved in comparison to the uni-variable stage-damage function. Overall, probabilistic models provide quantitative information about prediction uncertainty, which is crucial to assess the reliability of model predictions and improves the usefulness of model results.
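The evaluation metrics named above can be written out compactly: mean bias error (MBE), mean absolute error (MAE), and a hit rate defined as the share of observations inside the central 5%-95% predictive interval. The observations and predictive ensembles below are toy stand-ins.

import numpy as np

obs = np.array([0.10, 0.25, 0.40, 0.15, 0.60])     # observed relative damage
pred = np.random.default_rng(4).normal(            # toy predictive ensembles
    loc=obs, scale=0.08, size=(1000, obs.size))

pred_mean = pred.mean(axis=0)
lo, hi = np.quantile(pred, [0.05, 0.95], axis=0)
print("MBE:", np.mean(pred_mean - obs))
print("MAE:", np.mean(np.abs(pred_mean - obs)))
print("HR: ", np.mean((obs >= lo) & (obs <= hi)))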
Potyrailo, Radislav A; Chisholm, Bret J; Morris, William G; Cawse, James N; Flanagan, William P; Hassib, Lamyaa; Molaison, Chris A; Ezbiansky, Karin; Medford, George; Reitz, Hariklia
2003-01-01
Coupling combinatorial chemistry methods with high-throughput (HT) performance testing and measurement of the resulting properties has provided a powerful set of tools for the 10-fold accelerated discovery of new high-performance coating materials for automotive applications. Our approach replaces labor-intensive steps with automated systems for evaluating the adhesion of 8 x 6 arrays of coating elements discretely deposited on a single 9 x 12 cm plastic substrate. Coating performance is evaluated with respect to resistance to adhesion loss, because this parameter is one of the primary considerations in end-use automotive applications. Our HT adhesion evaluation provides previously unavailable capabilities: high speed and reproducibility of testing through robotic automation, an expanded range of testable coating types through a coating tagging strategy, and improved quantitation through high signal-to-noise automatic imaging. Upon testing, the coatings undergo changes that are impossible to predict quantitatively using existing knowledge. Using our HT methodology, we have developed several coating leads. The HT screening results for the best coating compositions have been validated on the traditional scales of coating formulation and adhesion-loss testing. These validation results have confirmed the superb performance of combinatorially developed coatings over conventional coatings on the traditional scale.
Quantitative dual-probe microdialysis: mathematical model and analysis.
Chen, Kevin C; Höistad, Malin; Kehr, Jan; Fuxe, Kjell; Nicholson, Charles
2002-04-01
Steady-state microdialysis is a widely used technique to monitor the concentration changes and distributions of substances in tissues. To obtain more information about brain tissue properties from microdialysis, a dual-probe approach was applied to infuse and sample the radiotracer, [3H]mannitol, simultaneously both in agar gel and in the rat striatum. Because the molecules released by one probe and collected by the other must diffuse through the interstitial space, the concentration profile exhibits dynamic behavior that permits the assessment of the diffusion characteristics in the brain extracellular space and the clearance characteristics. In this paper a mathematical model for dual-probe microdialysis was developed to study brain interstitial diffusion and clearance processes. Theoretical expressions for the spatial distribution of the infused tracer in the brain extracellular space and the temporal concentration at the probe outlet were derived. A fitting program was developed using the simplex algorithm, which finds local minima of the standard deviations between experiments and theory by adjusting the relevant parameters. The theoretical curves accurately fitted the experimental data and generated realistic diffusion parameters, implying that the mathematical model is capable of predicting the interstitial diffusion behavior of [3H]mannitol and that it will be a valuable quantitative tool in dual-probe microdialysis.
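The fitting strategy described above, simplex minimization of the misfit between theory and measured outlet concentrations, can be sketched as follows. The saturating-exponential model here is a generic placeholder, not the paper's dual-probe diffusion solution.

import numpy as np
from scipy.optimize import minimize

t = np.linspace(1, 60, 30)                       # sampling times (min)
noise = np.random.default_rng(5).normal(0, 0.02, t.size)
c_meas = 1.0 - np.exp(-t / 15.0) + noise         # toy measured concentrations

def model(params, t):
    amp, tau = params
    return amp * (1.0 - np.exp(-t / tau))

def misfit(params):
    return np.sum((model(params, t) - c_meas) ** 2)

fit = minimize(misfit, x0=[0.5, 5.0], method="Nelder-Mead")  # simplex search
print("fitted amplitude, tau:", fit.x)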
Bokulich, Nicholas A.
2013-01-01
Ultra-high-throughput sequencing (HTS) of fungal communities has been restricted by short read lengths and primer amplification bias, slowing the adoption of newer sequencing technologies for fungal community profiling. To address these issues, we evaluated the performance of several common internal transcribed spacer (ITS) primers and designed a novel primer set and work flow for simultaneous quantification and species-level interrogation of fungal consortia. Primer performance was assessed in silico and validated by sequencing a "mock community" of mixed yeast species to explore the challenges of amplicon length and amplification bias for reconstructing defined yeast community structures. The amplicon size and distribution of this primer set are smaller than for all preexisting ITS primer sets, maximizing sequencing coverage of hypervariable ITS domains by very-short-amplicon, high-throughput sequencing platforms. This feature also enables the optional integration of quantitative PCR (qPCR) directly into the HTS preparatory work flow by substituting qPCR with these primers for standard PCR, yielding quantification of individual community members. The complete work flow described here, utilizing any of the qualified primer sets evaluated, can rapidly profile mixed fungal communities, and it capably reconstructed well-characterized beer and wine fermentation fungal communities.
Near infrared spectroscopic evaluation of water in hyaline cartilage.
Padalkar, M V; Spencer, R G; Pleshko, N
2013-11-01
In diseased conditions of cartilage such as osteoarthritis, there is typically an increase in water content from the normal range of 60-85% to greater than 90%. As cartilage has very little capability for self-repair, methods of early detection of degeneration are required, and assessment of water could prove to be a useful diagnostic method. Current assessment methods are either destructive, time consuming, or have limited sensitivity. Here, we investigated the hypotheses that non-destructive near infrared spectroscopy (NIRS) of articular cartilage can be used to differentiate between free and bound water, and to quantitatively assess water content. The absorbances centered at 5200 and 6890 cm^-1 were attributed to a combination of free and bound water, and to free water only, respectively. The integrated areas of both absorbance bands were found to correlate linearly with the absolute water content (R = 0.87 and 0.86) and with percent water content (R = 0.97 and 0.96) of the tissue. Partial least-squares models were also successfully developed and used to predict water content and percent free water. These data demonstrate that NIRS can be utilized to quantitatively determine water content in articular cartilage and may aid in early detection of degenerative tissue changes in a laboratory setting and, with additional validation, possibly in a clinical setting.
Dynamics of individual cilia to external loading - a simple one-dimensional picture
NASA Astrophysics Data System (ADS)
Swaminathan, Vinay; Hill, David; Superfine, R.
2008-10-01
From being called cellular janitors to swinging debauchers, cilia have captured the fascination of researchers for over 200 years. In cystic fibrosis and chronic obstructive pulmonary disease, where cilia lose their function, the protective mucus layer in the lung thickens and mucociliary clearance breaks down, leading to inflammation along the airways and an increased rate of infection. A mechanistic understanding of mucus clearance depends on a quantitative assessment of axoneme dynamics and the maximum force the cilia are capable of generating and imparting to the mucus layer. As in the case of molecular motors, detailed quantitative measurements of dynamics under applied load are expected to be essential in developing predictive models. Based on our measurements of the dynamics of individual ciliary motion in human bronchial epithelial cells under an applied load, we present a simple one-dimensional model for the axoneme dynamics; we quantify the axoneme stiffness, the internal force generated by the axoneme, and the stall force, and we show how the dynamics sheds light on the time dependence of the internal force generation. The internal force generated by the axoneme is related to the ability of cilia to propel fluids and to their potential role in force sensing.
Hattotuwagama, Channa K; Guan, Pingping; Doytchinova, Irini A; Flower, Darren R
2004-11-21
Quantitative structure-activity relationship (QSAR) analysis is a cornerstone of modern informatic disciplines. Predictive computational models of peptide-major histocompatibility complex (MHC) binding affinity, based on QSAR technology, have now become a vital component of modern computational immunovaccinology. Historically, such approaches have been built around semi-qualitative classification methods, but these are now giving way to quantitative regression methods. The additive method, an established immunoinformatics technique for the quantitative prediction of peptide-protein affinity, was used here to identify the sequence dependence of peptide binding specificity for three mouse class I MHC alleles: H2-D(b), H2-K(b) and H2-K(k). As we show, in terms of reliability the resulting models represent a significant advance on existing methods. They can be used for the accurate prediction of T-cell epitopes and are freely available online (http://www.jenner.ac.uk/MHCPred).
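A generic form of an additive position-wise model can be sketched by one-hot encoding each residue at each peptide position, so that a ridge (linear) fit assigns every position/amino-acid pair an additive contribution to affinity. The peptides and affinities below are random placeholders, and this illustrates the additive idea rather than the authors' exact implementation.

import numpy as np
from sklearn.linear_model import Ridge

AA = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(6)
peptides = ["".join(rng.choice(list(AA), size=9)) for _ in range(200)]

def one_hot(pep):
    # Indicator vector: one slot per (position, amino acid) pair.
    v = np.zeros(len(pep) * len(AA))
    for pos, aa in enumerate(pep):
        v[pos * len(AA) + AA.index(aa)] = 1.0
    return v

X = np.array([one_hot(p) for p in peptides])
y = X @ rng.normal(scale=0.5, size=X.shape[1])   # toy log-affinities

model = Ridge(alpha=1.0).fit(X, y)
print("contribution of L at position 2:",
      model.coef_[1 * len(AA) + AA.index("L")])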
Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun
2007-09-01
Near infrared spectroscopy is widely used as a quantitative method, and the main multivariate techniques are regression methods used to build prediction models; however, the accuracy of the results is affected by many factors. In the present paper, the influence of sample roughness on the mathematical model for NIR quantitative analysis of wood density was studied. The experiments showed that when the roughness of the predicted samples was consistent with that of the calibration samples, the predictions were good; otherwise, the error was much higher. A roughness-mixed model was more flexible and adaptable to different sample roughness, and its prediction ability was much better than that of any single-roughness model.
Respiratory trace feature analysis for the prediction of respiratory-gated PET quantification.
Wang, Shouyi; Bowen, Stephen R; Chaovalitwongse, W Art; Sandison, George A; Grabowski, Thomas J; Kinahan, Paul E
2014-02-21
The benefits of respiratory gating in quantitative PET/CT vary tremendously between individual patients. Respiratory pattern is among many patient-specific characteristics that are thought to play an important role in gating-induced imaging improvements. However, the quantitative relationship between patient-specific characteristics of respiratory pattern and improvements in quantitative accuracy from respiratory-gated PET/CT has not been well established. If such a relationship could be estimated, then patient-specific respiratory patterns could be used to prospectively select appropriate motion compensation during image acquisition on a per-patient basis. This study was undertaken to develop a novel statistical model that predicts quantitative changes in PET/CT imaging due to respiratory gating. Free-breathing static FDG-PET images without gating and respiratory-gated FDG-PET images were collected from 22 lung and liver cancer patients on a PET/CT scanner. PET imaging quality was quantified with peak standardized uptake value (SUVpeak) over lesions of interest. Relative differences in SUVpeak between static and gated PET images were calculated to indicate quantitative imaging changes due to gating. A comprehensive multidimensional extraction of the morphological and statistical characteristics of respiratory patterns was conducted, resulting in 16 features that characterize representative patterns of a single respiratory trace. The six most informative features were subsequently extracted using a stepwise feature selection approach. The multiple-regression model was trained and tested based on a leave-one-subject-out cross-validation. The predicted quantitative improvements in PET imaging achieved an accuracy higher than 90% using a criterion with a dynamic error-tolerance range for SUVpeak values. The results of this study suggest that our prediction framework could be applied to determine which patients would likely benefit from respiratory motion compensation when clinicians quantitatively assess PET/CT for therapy target definition and response assessment.
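Leave-one-subject-out cross-validation, as used above, in miniature: train a multiple-regression model on all patients but one, predict the held-out patient, and repeat. The six features and the response are synthetic stand-ins for the selected respiratory-trace features and the SUVpeak change.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(7)
X = rng.normal(size=(22, 6))              # 22 patients x 6 trace features
y = X @ rng.normal(size=6) + rng.normal(scale=0.2, size=22)
groups = np.arange(22)                    # one group per patient

preds = np.empty_like(y)
for tr, te in LeaveOneGroupOut().split(X, y, groups):
    preds[te] = LinearRegression().fit(X[tr], y[tr]).predict(X[te])
print("per-patient absolute error:", np.round(np.abs(preds - y), 2))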
NASA Astrophysics Data System (ADS)
Melchiorre, C.; Castellanos Abella, E. A.; van Westen, C. J.; Matteucci, M.
2011-04-01
This paper describes a procedure for landslide susceptibility assessment based on artificial neural networks, and focuses on the estimation of the prediction capability, robustness, and sensitivity of susceptibility models. The study is carried out in the Guantanamo Province of Cuba, where 186 landslides were mapped using photo-interpretation. Twelve conditioning factors were mapped, including geomorphology, geology, soils, land use, slope angle, slope direction, internal relief, drainage density, distance from roads and faults, rainfall intensity, and ground peak acceleration. The methodology subdivided the database into 3 subsets. A training set was used for updating the weights. A validation set was used to stop the training procedure when the network started losing generalization capability, and a test set was used to calculate the performance of the network. A 10-fold cross-validation was performed to show that the results are repeatable. The prediction capability, the robustness analysis, and the sensitivity analysis were tested on 10 mutually exclusive datasets. The results show that by means of artificial neural networks it is possible to obtain models with high prediction capability and high robustness, and that an exploration of the effect of the individual variables is possible, even though such networks are often considered black-box models.
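The train/validation/test protocol above maps naturally onto an MLP with early stopping: a held-out validation fraction halts training when generalization stops improving, and a separate test set scores the final network. Twelve random features stand in for the mapped conditioning factors, so the numbers are illustrative only.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
X = rng.normal(size=(500, 12))                     # 12 conditioning factors
y = (X[:, 0] + X[:, 3] - X[:, 7] > 0).astype(int)  # toy landslide label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=8)
net = MLPClassifier(hidden_layer_sizes=(16,), early_stopping=True,
                    validation_fraction=0.2, max_iter=1000, random_state=8)
net.fit(X_tr, y_tr)
print("test accuracy:", net.score(X_te, y_te))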
A publicly available toxicogenomics capability for supporting predictive toxicology and meta-analysis depends on availability of gene expression data for chemical treatment scenarios, the ability to locate and aggregate such information by chemical, and broad data coverage within...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casella, Amanda J.; Hylden, Laura R.; Campbell, Emily L.
Knowledge of real-time solution properties and composition is a necessity for any spent nuclear fuel reprocessing method. Metal-ligand speciation in aqueous solutions derived from dissolved commercial spent fuel is highly dependent upon the acid concentration/pH, which influences extraction efficiency and the resulting speciation in the organic phase. Spectroscopic process monitoring capabilities, incorporated in a counter-current centrifugal contactor bank, provide a pathway for on-line, real-time measurement of solution pH. The spectroscopic techniques are process-friendly and can be easily configured for on-line applications, while classic potentiometric pH measurements require frequent calibration/maintenance and have poor long-term stability in aggressive chemical and radiation environments. Our research is focused on developing a general method for on-line determination of the pH of aqueous solutions through chemometric analysis of Raman spectra. Interpretive quantitative models have been developed and validated over the studied range of chemical composition and pH using a lactic acid/lactate buffer system. The developed model was applied to spectra obtained on-line during solvent extractions performed in a centrifugal contactor bank. The model predicted the pH within 11% for pH > 2, thus demonstrating that this technique could provide the capability of monitoring pH on-line in applications such as nuclear fuel reprocessing.
Evaluating Lignocellulosic Biomass, Its Derivatives, and Downstream Products with Raman Spectroscopy
Lupoi, Jason S.; Gjersing, Erica; Davis, Mark F.
2015-01-01
The creation of fuels, chemicals, and materials from plants can aid in replacing products fabricated from non-renewable energy sources. Before using biomass in downstream applications, it must be characterized to assess chemical traits, such as cellulose, lignin, or lignin monomer content, or the sugars released following an acid or enzymatic hydrolysis. The measurement of these traits allows researchers to gauge the recalcitrance of the plants and develop efficient deconstruction strategies to maximize yields. Standard methods for assessing biomass phenotypes often have experimental protocols that limit their use for screening sizeable numbers of plant species. Raman spectroscopy, a non-destructive, non-invasive vibrational spectroscopy technique, is capable of providing qualitative, structural information and quantitative measurements. Applications of Raman spectroscopy have aided in alleviating the constraints of standard methods by coupling spectral data with multivariate analysis to construct models capable of predicting analytes. Hydrolysis and fermentation products, such as glucose and ethanol, can be quantified off-, at-, or on-line. Raman imaging has enabled researchers to develop a visual understanding of reactions, such as different pretreatment strategies, in real-time, while also providing integral chemical information. This review provides an overview of what Raman spectroscopy is, and how it has been applied to the analysis of whole lignocellulosic biomass, its derivatives, and downstream process monitoring.
NASA Astrophysics Data System (ADS)
Linton, Mark; Leake, James; Schuck, Peter W.
2016-05-01
The magnetic field of the solar atmosphere is the primary driver of solar activity. Understanding the magnetic state of the solar atmosphere is therefore of key importance to predicting solar activity. One promising means of studying the magnetic atmosphere is to dynamically build up and evolve this atmosphere from the time evolution of the magnetic field at the photosphere, where it can be measured with current solar vector magnetograms at high temporal and spatial resolution. We report here on a series of numerical experiments investigating the capabilities and limits of magnetohydrodynamical simulations of such a process, where a magnetic corona is dynamically built up and evolved from a time series of synthetic photospheric data. These synthetic data are composed of photospheric slices taken from self-consistent convection-zone-to-corona simulations of flux emergence. The driven coronae are then quantitatively compared against the coronae of the original simulations. We investigate and report on the fidelity of these driven simulations, both as a function of the emergence timescale of the magnetic flux and as a function of the driving cadence of the input data. This work was supported by the Chief of Naval Research and the NASA Living with a Star and Heliophysics Supporting Research programs.
NASA Astrophysics Data System (ADS)
Nikoueeyan, Pourya; Naughton, Jonathan
2016-11-01
Particle Image Velocimetry is a common choice for qualitative and quantitative characterization of unsteady flows associated with moving bodies (e.g. pitching and plunging airfoils). Characterizing the separated flow behavior is of great importance in understanding the flow physics and developing predictive reduced-order models. In most studies, the model under investigation moves within a fixed camera field-of-view, and vector fields are calculated based on this fixed coordinate system. To better characterize the genesis and evolution of vortical structures in these unsteady flows, the velocity fields need to be transformed into the moving-body frame of reference. Data converted to this coordinate system allow for a more detailed analysis of the flow field using advanced statistical tools. In this work, a pitching NACA0015 airfoil has been used to demonstrate the capability of photogrammetry for such an analysis. Photogrammetry has been used first to locate the airfoil within the image and then to determine an appropriate mask for processing the PIV data. The photogrammetry results are then further used to determine the rotation matrix that transforms the velocity fields to airfoil coordinates. Examples of the important capabilities such a process enables are discussed. P. Nikoueeyan is supported by a fellowship from the University of Wyoming's Engineering Initiative.
Cloud Forcing and the Earth's Radiation Budget: New Ideas and New Observations
NASA Technical Reports Server (NTRS)
Barkstrom, Bruce R.
1997-01-01
1. NEW PERSPECTIVES ON CLOUD-RADIATIVE FORCING. When the Earth Radiation Budget Experiment (ERBE) produced the first measurements of cloud-radiative forcing, the climate community interpreted the results from a context in which the atmosphere was a single column, strongly coupled to the Earth's surface. 2. NEW PERSPECTIVES ON CLOUD-RADIATION OBSERVATIONS. The climate community is also on the verge of adding a new dimension to its observational capability. In classic thinking about atmospheric circulation and climate, surface pressure was a readily available quantity. As meteorology developed, it was possible to develop quantitative predictions of future weather by bringing together a network of surface pressure observations and then profiles of temperature and humidity obtained from balloons. 3. ON COMBINING OBSERVATIONS AND THEORY. With this new capability, it is natural to seek recognizable features in the observations we make of the Earth. There are techniques we can use to group the remotely sensed data in the individual footprints into objects that we can track. We will present one such image-processing application to radiation budget data, showing how we can interpret the radiation budget data in terms of cloud systems that are organized into systematic patterns of behavior - an ecosystem-like view of cloud behavior.
SNP by SNP by environment interaction network of alcoholism.
Zollanvari, Amin; Alterovitz, Gil
2017-03-14
Alcoholism has a strong genetic component. Twin studies have demonstrated heritability of a large proportion of the phenotypic variance of alcoholism, ranging from 50-80%. The search for genetic variants associated with this complex behavior has epitomized sequence-based studies for nearly a decade. The limited success of genome-wide association studies (GWAS), possibly precipitated by the polygenic nature of complex traits and behaviors, however, has demonstrated the need for novel, multivariate models capable of quantitatively capturing interactions between a host of genetic variants and their association with non-genetic factors. In this regard, capturing the network of SNP by SNP or SNP by environment interactions has recently gained much interest. Here, we assessed 3,776 individuals to construct a network capable of detecting and quantifying the interactions within and between plausible genetic and environmental factors of alcoholism. In this regard, we propose the use of a first-order dependence tree of maximum weight as a potential statistical learning technique to delineate the pattern of dependencies underpinning such a complex trait. Using a prediction-based analysis, we further rank the genes, demographic factors, biological pathways, and the interactions represented by our SNP × SNP × E network. The proposed framework is quite general and can potentially be applied to the study of other complex traits.
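A first-order dependence tree of maximum weight is the Chow-Liu construction: weight each variable pair by its mutual information, then keep a maximum spanning tree. The sketch below uses random binary variables as placeholders for SNPs and environmental factors.

import numpy as np
from sklearn.metrics import mutual_info_score
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(9)
data = rng.integers(0, 2, size=(500, 6))   # 500 subjects x 6 binary variables

n = data.shape[1]
mi = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        mi[i, j] = mutual_info_score(data[:, i], data[:, j])

# A maximum spanning tree is a minimum spanning tree on negated weights.
tree = minimum_spanning_tree(-mi).toarray()
edges = [(i, j) for i in range(n) for j in range(n) if tree[i, j] != 0]
print("dependence-tree edges:", edges)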
Bomber Deployments: A New Power Projection Strategy
2016-08-21
civilian cargo airlift. The second quantitative analysis will assess the B-52 direct aviation support UTCs containing support equipment. B-52 UTCs...troops and cargo back and forth to the theater of operations. Operation IRAQI FREEDOM tested airlift capabilities when multiple services placed their...the quantitative analysis shows, to move all of the support equipment for one bomber squadron can be expensive and tie up valuable cargo aircraft
Umesh Agarwal; Sally A. Ralph
2003-01-01
With the objective of using FT-Raman to quantitatively analyze ethylenic units in lignin in thermomechanical pulps (TMPs), coniferyl alcohol, coniferin, coniferaldehyde, and G-DHP lignin models were used to first demonstrate that the technique was fully capable of quantifying ring conjugated ethylenic units. Based on this result, the amount of ethylenic units in TMP...
Learning a peptide-protein binding affinity predictor with kernel ridge regression
2013-01-01
Background: The cellular function of a vast majority of proteins is performed through physical interactions with other biomolecules, which, most of the time, are other proteins. Peptides represent templates of choice for mimicking a secondary structure in order to modulate protein-protein interactions. They are thus an interesting class of therapeutics since they also display strong activity, high selectivity, low toxicity and few drug-drug interactions. Furthermore, predicting peptides that would bind to a specific MHC allele would be of tremendous benefit for improving vaccine-based therapy and possibly generating antibodies with greater affinity. Modern computational methods have the potential to accelerate and lower the cost of drug and vaccine discovery by selecting potential compounds for testing in silico prior to biological validation. Results: We propose a specialized string kernel for small biomolecules, peptides and pseudo-sequences of binding interfaces. The kernel incorporates physico-chemical properties of amino acids and elegantly generalizes eight kernels, including the Oligo, the Weighted Degree, the Blended Spectrum, and the Radial Basis Function. We provide a low-complexity dynamic programming algorithm for the exact computation of the kernel and a linear-time algorithm for its approximation. Combined with kernel ridge regression and SupCK, a novel binding pocket kernel, the proposed kernel yields biologically relevant and good prediction accuracy on the PepX database. For the first time, a machine learning predictor is capable of predicting the binding affinity of any peptide to any protein with reasonable accuracy. The method was also applied to both single-target and pan-specific Major Histocompatibility Complex class II benchmark datasets and three Quantitative Structure Affinity Model benchmark datasets. Conclusion: On all benchmarks, our method significantly (p-value ≤ 0.057) outperforms the current state-of-the-art methods at predicting peptide-protein binding affinities. The proposed approach is flexible and can be applied to predict any quantitative biological activity. Moreover, generating reliable peptide-protein binding affinities will also improve systems biology modelling of interaction pathways. Lastly, the method should be of value to a large segment of the research community, with the potential to accelerate the discovery of peptide-based drugs and facilitate vaccine development. The proposed kernel is freely available at http://graal.ift.ulaval.ca/downloads/gs-kernel/.
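Kernel ridge regression with a string kernel can be illustrated with a much simpler kernel than the paper's generalized physico-chemical one; here a plain 3-mer spectrum kernel (counting shared substrings) stands in, and the peptides and affinities are invented.

import numpy as np
from collections import Counter
from sklearn.kernel_ridge import KernelRidge

def spectrum_kernel(a, b, k=3):
    # Count k-mers shared between the two strings.
    ca = Counter(a[i:i + k] for i in range(len(a) - k + 1))
    cb = Counter(b[i:i + k] for i in range(len(b) - k + 1))
    return sum(ca[s] * cb[s] for s in ca)

peps = ["ACDEFGHIK", "ACDEFGHIL", "WWWYYYFFF", "ACDWFGHIK"]
y = np.array([6.2, 6.0, 3.1, 5.5])               # toy binding affinities

K = np.array([[spectrum_kernel(a, b) for b in peps] for a in peps])
model = KernelRidge(alpha=1.0, kernel="precomputed").fit(K, y)
print("fitted affinities:", model.predict(K))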
Resource for the Development of Biomedical Accelerator Mass Spectrometry (AMS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turteltaub, K. W.; Bench, G.; Buchholz, B. A.
2016-04-08
The NIH Research Resource for Biomedical AMS was originally funded at Lawrence Livermore National Laboratory in 1999 to develop and apply the technology of accelerator mass spectrometry (AMS) in broad-based biomedical research. The Resource's niche is to fill needs for ultra-high-sensitivity quantitation when isotope-labeled agents are used. The Research Resource's Technology Research and Development (TR&D) efforts will focus on the needs of the biomedical research community in the context of seven Driving Biomedical Projects (DBPs) that will drive the Center's technical capabilities through three core TR&Ds. We will expand our present capabilities by developing a fully integrated HPLC-AMS to increase our capabilities for metabolic measurements, we will develop methods to understand cellular processes, and we will develop and validate methods for the application of AMS in human studies, which is a growing area of demand by collaborators and service users. In addition, we will continue to support new and ongoing collaborative and service projects that require the capabilities of the Resource. The Center will continue to train researchers in the use of the AMS capabilities being developed, and the results of all efforts will be widely disseminated to advance progress in biomedical research. Towards these goals, our specific aims are to: 1) increase the value and information content of AMS measurements by combining molecular speciation with quantitation of defined macromolecular isolates, specifically by developing and validating methods for macromolecule labeling, characterization and quantitation; 2) develop and validate methods and strategies to enable AMS to become more broadly used in human studies, specifically by demonstrating robust methods for conducting pharmacokinetic/pharmacodynamic studies in humans and model systems; 3) increase the accessibility of AMS to the biomedical research community and the throughput of AMS through direct coupling to separatory instruments; and 4) provide high-throughput 14C BioAMS analysis for collaborative and service clients.
A correlational approach to predicting operator status
NASA Technical Reports Server (NTRS)
Shingledecker, Clark A.
1988-01-01
This paper discusses a research approach for identifying and validating candidate physiological and behavioral parameters which can be used to predict the performance capabilities of aircrew and other system operators. In this methodology, concurrent and advance correlations are computed between predictor values and criterion performance measures. Continuous performance and sleep loss are used as stressors to promote performance variation. Preliminary data are presented which suggest dependence of prediction capability on the resource allocation policy of the operator.
Murrell, Daniel S; Cortes-Ciriano, Isidro; van Westen, Gerard J P; Stott, Ian P; Bender, Andreas; Malliavin, Thérèse E; Glen, Robert C
2015-01-01
In silico predictive models have proved valuable for the optimisation of compound potency, selectivity and safety profiles in the drug discovery process. camb is an R package that provides an environment for the rapid generation of quantitative Structure-Property and Structure-Activity models for small molecules (including QSAR, QSPR, QSAM and PCM) and is aimed at both advanced and beginner R users. camb's capabilities include standardisation of chemical structure representation; computation of 905 one-dimensional and 14 fingerprint-type descriptors for small molecules, 8 types of amino acid descriptors and 13 whole-protein sequence descriptors; filtering methods for feature selection; generation of predictive models (using an interface to the R package caret); and techniques to create model ensembles (using the R package caretEnsemble). Results can be visualised through high-quality, customisable plots (R package ggplot2). Overall, camb constitutes an open-source framework to perform the following steps: (1) compound standardisation, (2) molecular and protein descriptor calculation, (3) descriptor pre-processing and model training, visualisation and validation, and (4) bioactivity/property prediction for new molecules. camb aims to speed model generation, and to provide reproducibility and tests of robustness. QSPR and proteochemometric case studies are included which demonstrate camb's application. Graphical abstract: From compounds and data to models - a complete model building workflow in one package.
Parcell, Benjamin J; Jarchow-MacDonald, Anna A; Seagar, Amie-Louise; Laurenson, Ian F; Prescott, Gordon J; Lockhart, Michael
2017-05-01
Xpert MTB/RIF (Cepheid) is a rapid molecular assay shown to be sensitive and specific for pulmonary tuberculosis (TB) diagnosis in highly endemic countries. We evaluated its diagnostic performance in a low-TB-prevalence setting, examined rifampicin resistance detection, and assessed its quantitative capability to predict graded auramine microscopy and the time to positivity (TTP) of culture. Xpert MTB/RIF was used to test respiratory samples over a 3-year period. Samples underwent graded auramine microscopy, solid/liquid culture, in-house IS6110 real-time PCR, and GenoType MTBDRplus (HAIN Lifescience) to determine rifampicin and/or isoniazid resistance. A total of 2103 Xpert MTB/RIF tests were performed. Compared to culture, sensitivity was 95.8%, specificity 99.5%, positive predictive value (PPV) 82.1%, and negative predictive value (NPV) 99.9%. A positive correlation was found between auramine microscopy grade and Xpert MTB/RIF assay load. We found a clear reduction in the median TTP as Xpert MTB/RIF assay load increased. Rifampicin resistance was detected. Xpert MTB/RIF was rapid and accurate in diagnosing pulmonary TB in a low-prevalence area. Rapid results will influence infection prevention and control as well as treatment measures. The excellent NPV obtained suggests further work should be carried out to assess its role in replacing microscopy. Copyright © 2017 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
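The headline metrics above all follow from a 2x2 table of assay results against culture; the counts below are illustrative stand-ins chosen to land near the reported values, not the study's actual table.

tp, fp, fn, tn = 91, 20, 4, 1988      # hypothetical counts (total 2103 tests)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                  # positive predictive value
npv = tn / (tn + fn)                  # negative predictive value
print(f"sens {sensitivity:.1%}, spec {specificity:.1%}, "
      f"PPV {ppv:.1%}, NPV {npv:.1%}")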
How predictive are photosensitive epilepsy models as proof of principle trials for epilepsy?
Yuen, Eunice S M; Sims, John R
2014-06-01
Human photosensitive epilepsy models have been used as proof of principle (POP) trials for epilepsy. Photosensitive patients are exposed to intermittent photic stimulation, and the reduction in sensitivity across a set of standard visual stimulation frequencies is used as an endpoint. The aim of this research was to quantify the predictive capabilities of photosensitive POP trials through a survey of the current literature. A literature search was undertaken to identify articles describing photosensitive POP trials. Minimally efficacious doses (MEDs) in epilepsy were compared to doses in the POP trials that produced a 50-100% response (ED50-100). Ratios of these doses were calculated and summarised statistically. The search identified ten articles describing a total of 17 anti-epileptic drugs. Of these, data for both MED and ED50-100 were available for 13 anti-epileptic drugs. The average ratio of MED to ED50-100 was 0.95 (95% CI 0.60-1.30). The MED to ED50-100 ratio for partial epilepsy (0.82) was not significantly different from that for generalised epilepsy (1.08) (p=0.51). Photosensitive POP trials are a useful tool for quantitatively predicting efficacy in epilepsy, and can serve as early and informative indicators in anti-epileptic drug discovery and development. Copyright © 2014 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.
Garijo, N; Manzano, R; Osta, R; Perez, M A
2012-12-07
Cell migration and proliferation have been modelled in the literature as processes similar to diffusion. However, using diffusion models to simulate the proliferation and migration of cells tends to create a homogeneous distribution in the cell density that does not correlate with empirical observations. In fact, the mechanism of cell dispersal is not diffusion. Cells disperse by crawling or proliferation, or are transported in a moving fluid. The use of cellular automata, particle models or cell-based models can overcome this limitation. This paper presents a stochastic cellular automata model to simulate the proliferation, migration and differentiation of cells. These processes are treated as completely stochastic as well as discrete. The model developed was applied to predict the behaviour of in vitro cell cultures performed with adult muscle satellite cells. Moreover, a non-homogeneous distribution of cells has been observed inside the culture well, and, using the above-mentioned stochastic cellular automata model, we have been able to predict this heterogeneous cell distribution and compute accurate quantitative results. Differentiation was also incorporated into the computational simulation. The results predicted the myotube formation that typically occurs with adult muscle satellite cells. In conclusion, we have shown how a stochastic cellular automata model can be implemented and is capable of reproducing the in vitro behaviour of adult muscle satellite cells. Copyright © 2012 Elsevier Ltd. All rights reserved.
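As a rough illustration of the modelling approach described (not the authors' calibrated model), the following sketch implements a stochastic cellular automaton in which cells on a lattice migrate and proliferate with assumed per-step probabilities, producing the heterogeneous occupancy that diffusion models miss.

```python
# Minimal stochastic cellular automaton for cell migration/proliferation on a
# square lattice. The per-step probabilities are illustrative assumptions, not
# the calibrated rates of the paper.
import numpy as np

rng = np.random.default_rng(1)
N, steps, p_move, p_div = 50, 100, 0.5, 0.05
grid = np.zeros((N, N), dtype=bool)
grid[N // 2, N // 2] = True  # seed a single cell

def empty_neighbours(g, i, j):
    out = []
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = (i + di) % g.shape[0], (j + dj) % g.shape[1]
        if not g[ni, nj]:
            out.append((ni, nj))
    return out

for _ in range(steps):
    for i, j in rng.permutation(np.argwhere(grid)):
        nb = empty_neighbours(grid, i, j)
        if not nb:
            continue  # contact inhibition: no free site, no move or division
        ni, nj = nb[rng.integers(len(nb))]
        if rng.random() < p_div:
            grid[ni, nj] = True                      # proliferation into a free site
        elif rng.random() < p_move:
            grid[i, j], grid[ni, nj] = False, True   # migration to a free site

print("cells after", steps, "steps:", grid.sum())
```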
Critical thresholds in species' responses to landscape structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
With, K.A.; Crist, T.O.
1995-12-01
Critical thresholds are transition ranges across which small changes in spatial pattern produce abrupt shifts in ecological responses. Habitat fragmentation provides a familiar example of a critical threshold. As the landscape becomes dissected into smaller parcels of habitat, landscape connectivity (the functional linkage among habitat patches) may suddenly become disrupted, which may have important consequences for the distribution and persistence of populations. Landscape connectivity depends not only on the abundance and spatial patterning of habitat, but also on the habitat specificity and dispersal abilities of species. Habitat specialists with limited dispersal capabilities presumably have a much lower threshold to habitat fragmentation than highly vagile species, which may perceive the landscape as functionally connected across a greater range of fragmentation severity. To determine where threshold effects in species' responses to landscape structure are likely to occur, a simulation model modified from percolation theory was developed. Our simulations predicted the distributional patterns of populations in different landscape mosaics, which we tested empirically using two grasshopper species (Orthoptera: Acrididae) that occur in the shortgrass prairie of north-central Colorado. The distribution of these two species in this grassland mosaic matched the predictions from our simulations. By providing quantitative predictions of threshold effects, this modelling approach may prove useful in the formulation of conservation strategies and in assessing the effects of land-use changes on species' distributional patterns and persistence.
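The threshold behaviour described above can be reproduced with a small percolation-style simulation: random landscapes with habitat proportion p are tested for a spanning habitat cluster, and the spanning probability jumps sharply near the site-percolation threshold (about 0.59 for a 4-connected square lattice). This is a generic illustration under assumed lattice rules, not the authors' model.

```python
# Illustration of a critical threshold in landscape connectivity: random maps
# with habitat proportion p are tested for a habitat cluster that spans the
# lattice (site percolation threshold ~0.5928 for 4-connectivity).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
N, trials = 100, 50
for p in (0.4, 0.55, 0.6, 0.65):
    spans = 0
    for _ in range(trials):
        habitat = rng.random((N, N)) < p
        labels, _ = ndimage.label(habitat)      # 4-connected clusters by default
        top, bottom = set(labels[0]) - {0}, set(labels[-1]) - {0}
        spans += bool(top & bottom)             # a cluster touches both edges
    print(f"p={p}: spanning fraction {spans / trials:.2f}")
```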
NASA Astrophysics Data System (ADS)
Mayes, R.; Lyford, M. E.; Myers, J. D.
2009-12-01
The Quantitative Reasoning in STEM (QR STEM) project is a state-level Mathematics and Science Partnership (MSP) project with a focus on the mathematics and statistics that underlie the understanding of complex global scientific issues. This session is a companion session to the QR STEM: The Science presentation. The focus of this session is the quantitative reasoning aspects of the project. As students move from local to global perspectives on issues of energy and environment, there is a significant increase in the need for mathematical and statistical conceptual understanding. These understandings must be accessible to the students within the scientific context, requiring the special understandings that are endemic to quantitative reasoning. The QR STEM project brings together interdisciplinary teams of higher education faculty and middle/high school teachers to explore complex problems in energy and environment. The disciplines include life sciences, physics, chemistry, earth science, statistics, and mathematics. These interdisciplinary teams develop open-ended performance tasks to implement in the classroom, based on scientific concepts that underpin energy and environment. Quantitative reasoning is broken down into three components: Quantitative Literacy, Quantitative Interpretation, and Quantitative Modeling. Quantitative Literacy is composed of arithmetic concepts such as proportional reasoning, numeracy, and descriptive statistics. Quantitative Interpretation includes algebraic and geometric concepts that underlie the ability to interpret a model of natural phenomena which is provided for the student. This model may be a table, graph, or equation from which the student is to make predictions or identify trends, or from which they would use statistics to explore correlations or patterns in data. Quantitative Modeling is the ability to develop the model from data, including the ability to test hypotheses using statistical procedures. We use the term model very broadly, so it includes visual models such as box models, as well as best-fit equation models and hypothesis testing. One of the powerful outcomes of the project is the conversation which takes place between science teachers and mathematics teachers. First, they realize that though they are teaching concepts that cross their disciplines, the barrier of scientific language within their subjects restricts students from applying the concepts across subjects. Second, the mathematics teachers discover the context of science as a means of providing real-world situations that engage students in the utility of mathematics as a tool for solving problems. Third, the science teachers discover the barrier to understanding science that is presented by poor quantitative reasoning ability. Finally, the students are engaged in exploring energy and environment in a manner which exposes the importance of seeing a problem from multiple interdisciplinary perspectives. The outcome is a democratic citizen capable of making informed decisions, and perhaps a future scientist.
Inoue, K; Ochi, H; Taketsuka, M; Saito, H; Sakurai, K; Ichihashi, N; Iwatsuki, K; Kokubo, S
2008-05-01
A systematic analysis was carried out by using response surface methodology to create a quantitative model of the synergistic effects of conditions in a continuous freezer [mix flow rate (L/h), overrun (%), cylinder pressure (kPa), drawing temperature (°C), and dasher speed (rpm)] on the principal constituent parameters of ice cream [rate of fat destabilization (%), mean air cell diameter (µm), and mean ice crystal diameter (µm)]. A central composite face-centered design was used for this study. Thirty-one combinations of the 5 above-mentioned freezer conditions were designed (including replicates at the center point), and ice cream samples were manufactured and examined in a continuous freezer under the selected conditions. The responses were the 3 variables given above. A quadratic model was constructed, with the freezer conditions as the independent variables and the ice cream characteristics as the dependent variables. The coefficients of determination (R(2)) were greater than 0.9 for all 3 responses, but Q(2), the index used here for the capability of the model for predicting future observed values of the responses, was negative for both the mean ice crystal diameter and the mean air cell diameter. Therefore, pruned models were constructed by removing terms that had contributed little to the prediction in the original model and by refitting the regression model. It was demonstrated that these pruned models provided good fits to the data in terms of R(2), Q(2), and ANOVA. The effects of freezer conditions were expressed quantitatively in terms of the 3 responses. The drawing temperature (°C) was found to have a greater effect on ice cream characteristics than any of the other factors.
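A second-order response surface of the kind described can be sketched as a full quadratic least-squares fit in the five coded freezer factors; the data below are synthetic stand-ins for the 31 experimental runs, and the pruning step would simply drop small-coefficient terms and refit.

```python
# Sketch of a second-order response-surface fit like the one described: a full
# quadratic model in five coded freezer factors, fitted by least squares.
# The data here are synthetic stand-ins, not the study's measurements.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(31, 5))   # 5 coded freezer factors, 31 runs
y = 1.0 - 0.8 * X[:, 3] + 0.3 * X[:, 3] ** 2 + rng.normal(0, 0.1, 31)

rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                    LinearRegression())
rsm.fit(X, y)
# analogous to the reported R2; Q2 would instead use cross-validated predictions
print("R^2:", rsm.score(X, y))
```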
Development of the Methodology Needed to Quantify Risks to Groundwater at CO2 Storage Sites
NASA Astrophysics Data System (ADS)
Brown, C. F.; Birkholzer, J. T.; Carroll, S.; Hakala, A.; Keating, E. H.; Lopano, C. L.; Newell, D. L.; Spycher, N.
2011-12-01
The National Risk Assessment Partnership (NRAP) is an effort that harnesses capabilities across five U.S. Department of Energy (DOE) national laboratories into a mission-focused platform to develop a defensible, science-based quantitative methodology for determining risk profiles at CO2 storage sites. NRAP is conducting risk and uncertainty analysis in the areas of reservoir performance, natural leakage pathways, wellbore integrity, groundwater protection, monitoring, and systems level modeling. The mission of NRAP is "to provide the scientific underpinning for risk assessment with respect to the long-term storage of CO2, including assessment of residual risk associated with a site post-closure." Additionally, NRAP will develop a strategic, risk-based monitoring protocol, such that monitoring at all stages of a project effectively minimizes uncertainty in the predicted behavior of the site, thereby increasing confidence in storage integrity. NRAP's research focus in the area of groundwater protection is divided into three main tasks: 1) development of quantitative risk profiles for potential groundwater impacts; 2) filling key science gaps in developing those risk profiles; and 3) field-based confirmation. Within these three tasks, researchers are engaged in collaborative studies to determine metrics to identify system perturbation and their associated risk factors. Reservoir simulations are being performed to understand/predict consequences of hypothetical leakage scenarios, from which reduced order models are being developed to feed risk profile development. Both laboratory-based experiments and reactive transport modeling studies provide estimates of geochemical impacts over a broad range of leakage scenarios. This presentation will provide an overview of the research objectives within NRAP's groundwater protection focus area, as well as select accomplishments achieved to date.
NOAA Climate Program Office Contributions to National ESPC
NASA Astrophysics Data System (ADS)
Higgins, W.; Huang, J.; Mariotti, A.; Archambault, H. M.; Barrie, D.; Lucas, S. E.; Mathis, J. T.; Legler, D. M.; Pulwarty, R. S.; Nierenberg, C.; Jones, H.; Cortinas, J. V., Jr.; Carman, J.
2016-12-01
NOAA is one of five federal agencies (DOD, DOE, NASA, NOAA, and NSF) which signed an updated charter in 2016 to partner on the National Earth System Prediction Capability (ESPC). Situated within NOAA's Office of Oceanic and Atmospheric Research (OAR), NOAA Climate Program Office (CPO) programs contribute significantly to the National ESPC goals and activities. This presentation will provide an overview of CPO contributions to National ESPC. First, we will discuss selected CPO research and transition activities that directly benefit the ESPC coupled model prediction capability, including: the North American Multi-Model Ensemble (NMME) seasonal prediction system; the Subseasonal Experiment (SubX) project to test real-time subseasonal ensemble prediction systems; and improvements to the NOAA operational Climate Forecast System (CFS), including software infrastructure and data assimilation. Next, we will show how CPO's foundational research activities are advancing future ESPC capabilities. Highlights will include: the Tropical Pacific Observing System (TPOS), to provide the basis for predicting climate on subseasonal to decadal timescales; Subseasonal-to-Seasonal (S2S) processes and predictability studies, to improve understanding, modeling and prediction of the MJO; an Arctic Research Program, to address urgent needs for advancing monitoring and prediction capabilities in this major area of concern; and advances towards building an experimental multi-decadal prediction system through studies of the Atlantic Meridional Overturning Circulation (AMOC). Finally, CPO has embraced Integrated Information Systems (IISs) that build on the innovation of programs such as the National Integrated Drought Information System (NIDIS) to develop and deliver end-to-end environmental information for key societal challenges (e.g. extreme heat; coastal flooding). These contributions will help the National ESPC better understand and address societal needs and decision support requirements.
USM3D Analysis of Low Boom Configuration
NASA Technical Reports Server (NTRS)
Carter, Melissa B.; Campbell, Richard L.; Nayani, Sudheer N.
2011-01-01
In the past few years, considerable improvement has been made in NASA's in-house boom prediction capability. As part of this improved capability, the USM3D Navier-Stokes flow solver, when combined with a suitable unstructured grid, went from accurately predicting boom signatures at 1 body length to predicting them at 10 body lengths. Since that time, the research emphasis has shifted from analysis to the design of supersonic configurations with boom signature mitigation. In order to design an aircraft, the techniques for accurately predicting boom and drag need to be determined. This paper compares CFD results with the wind tunnel experimental results conducted on a Gulfstream reduced boom and drag configuration. Two different wind-tunnel models were designed and tested for drag and boom data. The goal of this study was to assess USM3D's capability for predicting both boom and drag characteristics. Overall, USM3D coupled with a grid that was sheared and stretched was able to reasonably predict the boom signature. The computational drag polar matched the experimental results for lift coefficients above 0.1 despite some mismatch in the predicted lift-curve slope.
Towards predicting the encoding capability of MR fingerprinting sequences.
Sommer, K; Amthor, T; Doneva, M; Koken, P; Meineke, J; Börnert, P
2017-09-01
Sequence optimization and appropriate sequence selection is still an unmet need in magnetic resonance fingerprinting (MRF). The main challenge in MRF sequence design is the lack of an appropriate measure of the sequence's encoding capability. To find such a measure, three different candidates for judging the encoding capability have been investigated: local and global dot-product-based measures judging dictionary entry similarity as well as a Monte Carlo method that evaluates the noise propagation properties of an MRF sequence. Consistency of these measures for different sequence lengths as well as the capability to predict actual sequence performance in both phantom and in vivo measurements was analyzed. While the dot-product-based measures yielded inconsistent results for different sequence lengths, the Monte Carlo method was in a good agreement with phantom experiments. In particular, the Monte Carlo method could accurately predict the performance of different flip angle patterns in actual measurements. The proposed Monte Carlo method provides an appropriate measure of MRF sequence encoding capability and may be used for sequence optimization. Copyright © 2017 Elsevier Inc. All rights reserved.
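A toy version of the Monte Carlo measure can clarify the idea: noisy copies of one dictionary entry are matched back to the dictionary by dot product, and the spread of the recovered parameter quantifies noise propagation. The exponential "fingerprints" below are assumed stand-ins for Bloch-simulated MRF dictionary entries.

```python
# Toy Monte Carlo estimate of an MRF-style sequence's encoding capability:
# noisy copies of a dictionary signal are matched back to the dictionary, and
# the spread of the recovered parameter measures noise propagation. The
# exponential "fingerprint" is a stand-in for a Bloch-simulated dictionary.
import numpy as np

t = np.linspace(0.01, 3.0, 100)                 # acquisition time points (s)
T1 = np.linspace(0.2, 2.0, 200)                 # candidate parameter grid (s)
D = np.exp(-t[None, :] / T1[:, None])           # dictionary, one row per T1
D /= np.linalg.norm(D, axis=1, keepdims=True)   # normalised for dot-product match

rng = np.random.default_rng(4)
true_idx, sigma, n_mc = 100, 0.05, 2000
errs = []
for _ in range(n_mc):
    s = D[true_idx] + rng.normal(0, sigma, t.size)
    match = np.argmax(D @ s)                    # dot-product dictionary matching
    errs.append(T1[match] - T1[true_idx])
print("T1 bias %.4f s, std %.4f s" % (np.mean(errs), np.std(errs)))
```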
PLS modelling of structure-activity relationships of catechol O-methyltransferase inhibitors
NASA Astrophysics Data System (ADS)
Lotta, Timo; Taskinen, Jyrki; Bäckström, Reijo; Nissinen, Erkki
1992-06-01
Quantitative structure-activity analysis was carried out for in vitro inhibition of rat brain soluble catechol O-methyltransferase by a series (N=99) of 1,5-substituted-3,4-dihydroxybenzenes using computational chemistry and multivariate PLS modelling of data sets. The molecular structural descriptors (N=19) associated with the electronics of the catecholic ring and the sizes of substituents were derived theoretically. For the whole set of molecules, two separate PLS models had to be used. A PLS model with two significant (cross-validated) model dimensions describing 82.2% of the variance in inhibition activity data was capable of predicting all molecules except those having the largest R1 substituent or having a large R5 substituent compared to the NO2 group. The other PLS model with three significant (cross-validated) model dimensions described 83.3% of the variance in inhibition activity data. This model could not handle compounds having a small R5 substituent, compared to the NO2 group, or the largest R1 substituent. The predictive capability of these PLS models was good. The models reveal that inhibition activity is nonlinearly related to the size of the R5 substituent. The analysis of the PLS models also shows that the binding affinity is greatly dependent on the electronic nature of both the R1 and R5 substituents. The electron-withdrawing nature of the substituents enhances inhibition activity. In addition, the size of the R1 substituent and its lipophilicity are important in the binding of inhibitors. The size of the R1 substituent has an upper limit. On the other hand, ionized R1 substituents decrease inhibition activity.
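For readers unfamiliar with PLS, a minimal sketch of fitting a PLS model and choosing the number of latent dimensions by cross-validation is shown below; the 99 x 19 descriptor matrix is synthetic, not the COMT data set.

```python
# Minimal PLS model with cross-validated choice of the number of latent
# dimensions, echoing the cross-validated PLS models in the abstract. The
# descriptor matrix is synthetic, not the 19 COMT descriptors.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(99, 19))   # 99 inhibitors x 19 structural descriptors
y = X[:, :3] @ np.array([1.0, -0.5, 0.3]) + rng.normal(0, 0.5, 99)

for n in (1, 2, 3, 4):
    q2 = cross_val_score(PLSRegression(n_components=n), X, y, cv=7).mean()
    print(f"{n} latent dims: cross-validated R^2 = {q2:.3f}")
```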
Unraveling the Plant-Soil Interactome
NASA Astrophysics Data System (ADS)
Lipton, M. S.; Hixson, K.; Ahkami, A. H.; Handakumbura, P. P.; Hess, N. J.; Fang, Y.; Fortin, D.; Stanfill, B.; Yabusaki, S.; Engbrecht, K. M.; Baker, E.; Renslow, R.; Jansson, C.
2017-12-01
Plant photosynthesis is the primary conduit of carbon fixation from the atmosphere to the terrestrial ecosystem. While much is known about plant physiology and biochemistry, the interplay between genetic and environmental factors that govern the partitioning of carbon to above- and below-ground plant biomass, to microbes, to the soil, and respired to the atmosphere is not well understood holistically. To address this knowledge gap there is a need to define, study, comprehend, and model the plant ecosystem as an integrated system of biotic and abiotic processes and feedbacks. Local rhizosphere conditions are an important control on plant performance but are in turn affected by plant uptake and rhizodeposition processes. C3 and C4 plants have different CO2 fixation strategies and likely have differential metabolic profiles, resulting in different carbon sources exuding to the rhizosphere. In this presentation, we report on an integrated capability to better understand plant-soil interactions, including modeling tools that address the spatiotemporal hydrobiogeochemistry of the rhizosphere. Comparing Brachypodium distachyon (Brachypodium) as our C3 representative and Setaria viridis (Setaria) as our C4 representative, we designed highly controlled single-plant experimental ecosystems based on these model grasses to enable quantitative prediction of ecosystem traits and responses as a function of plant genotype and environmental variables. A metabolomics survey of 30 Brachypodium genotypes grown under control and drought conditions revealed specific metabolites that correlated with biomass production and drought tolerance. A comparison of Brachypodium and Setaria grown under control and a predicted future elevated CO2 level revealed changes in biomass accumulation and metabolite profiles between the C3 and C4 species in both leaves and roots. Finally, we are building a mechanistic modeling capability that will contribute to a better basis for modeling plant water and nutrient cycling in larger scale models.
NASA Astrophysics Data System (ADS)
Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin
2016-03-01
How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without bevacizumab-based chemotherapy using multivariate statistical models built on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied respectively to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that for all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association for either PFS or OS in the group of patients not receiving maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.
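Of the three analyses, the Cox proportional hazards step can be sketched with the lifelines package on a synthetic data frame; the column names, the single adiposity covariate, and all values below are hypothetical stand-ins for the features and PFS outcomes described.

```python
# Sketch of the Cox proportional-hazards step with the lifelines package on a
# synthetic data frame; the column names and data are hypothetical stand-ins
# for the adiposity features and PFS outcomes described.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 59
df = pd.DataFrame({
    "vfa_sfa_ratio": rng.normal(1.0, 0.3, n),          # adiposity feature (assumed)
    "pfs_months":    rng.exponential(12, n).round(1),  # progression-free survival
    "progressed":    rng.integers(0, 2, n),            # event indicator
})
cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="progressed")
cph.print_summary()   # hazard ratio and p-value for the adiposity feature
```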
An Assessment of Current Fan Noise Prediction Capability
NASA Technical Reports Server (NTRS)
Envia, Edmane; Woodward, Richard P.; Elliott, David M.; Fite, E. Brian; Hughes, Christopher E.; Podboy, Gary G.; Sutliff, Daniel L.
2008-01-01
In this paper, the results of an extensive assessment exercise carried out to establish the current state of the art for predicting fan noise at NASA are presented. Representative codes in the empirical, analytical, and computational categories were exercised and assessed against a set of benchmark acoustic data obtained from wind tunnel tests of three model scale fans. The chosen codes were ANOPP, representing an empirical capability, RSI, representing an analytical capability, and LINFLUX, representing a computational aeroacoustics capability. The selected benchmark fans cover a wide range of fan pressure ratios and fan tip speeds, and are representative of modern turbofan engine designs. The assessment results indicate that the ANOPP code can predict the fan noise spectrum to within 4 dB of the measurement uncertainty band on a third-octave basis for the low and moderate tip speed fans, except at extreme aft emission angles. The RSI code can predict the fan broadband noise spectrum to within 1.5 dB of the experimental uncertainty band provided the rotor-only contribution is taken into account. The LINFLUX code can predict interaction tone power levels to within experimental uncertainties at low and moderate fan tip speeds, but could deviate by as much as 6.5 dB outside the experimental uncertainty band at the highest tip speeds in some cases.
Kashir, Junaid; Jones, Celine; Mounce, Ginny; Ramadan, Walaa M; Lemmon, Bernadette; Heindryckx, Bjorn; de Sutter, Petra; Parrington, John; Turner, Karen; Child, Tim; McVeigh, Enda; Coward, Kevin
2013-01-01
To examine whether similar levels of phospholipase C zeta (PLC-ζ) protein are present in sperm from men whose ejaculates resulted in normal oocyte activation, and to examine whether a predominant pattern of PLC-ζ localization is linked to normal oocyte activation ability. Laboratory study. University laboratory. Control subjects (men with proven oocyte activation capacity; n = 16) and men whose sperm resulted in recurrent intracytoplasmic sperm injection failure (oocyte activation deficient [OAD]; n = 5). Quantitative immunofluorescent analysis of PLC-ζ protein in human sperm. Total levels of PLC-ζ fluorescence, proportions of sperm exhibiting PLC-ζ immunoreactivity, and proportions of PLC-ζ localization patterns in sperm from control and OAD men. Sperm from control subjects presented a significantly higher proportion of sperm exhibiting PLC-ζ immunofluorescence compared with infertile men diagnosed with OAD (82.6% and 27.4%, respectively). Total levels of PLC-ζ in sperm from individual control and OAD patients exhibited significant variance, with sperm from 10 out of 16 (62.5%) exhibiting levels similar to OAD samples. Predominant PLC-ζ localization patterns varied between control and OAD samples with no predictable or consistent pattern. The results indicate that sperm from control men exhibited significant variance in total levels of PLC-ζ protein, as well as significant variance in the predominant localization pattern. Such variance may hinder the diagnostic application of quantitative PLC-ζ immunofluorescent analysis. Copyright © 2013 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
The health effects of climate change: a survey of recent quantitative research.
Grasso, Margherita; Manera, Matteo; Chiabai, Aline; Markandya, Anil
2012-05-01
In recent years there has been a large scientific and public debate on climate change and its direct as well as indirect effects on human health. In particular, a large amount of research on the effects of climate changes on human health has addressed two fundamental questions. First, can historical data be of some help in revealing how short-run or long-run climate variations affect the occurrence of infectious diseases? Second, is it possible to build more accurate quantitative models which are capable of predicting the future effects of different climate conditions on the transmissibility of particularly dangerous infectious diseases? The primary goal of this paper is to review the most relevant contributions which have directly tackled those questions, both with respect to the effects of climate changes on the diffusion of non-infectious and infectious diseases, with malaria as a case study. Specific attention will be drawn on the methodological aspects of each study, which will be classified according to the type of quantitative model considered, namely time series models, panel data and spatial models, and non-statistical approaches. Since many different disciplines and approaches are involved, a broader view is necessary in order to provide a better understanding of the interactions between climate and health. In this respect, our paper also presents a critical summary of the recent literature related to more general aspects of the impacts of climate changes on human health, such as: the economics of climate change; how to manage the health effects of climate change; the establishment of Early Warning Systems for infectious diseases.
Essentiality, toxicity, and uncertainty in the risk assessment of manganese.
Boyes, William K
2010-01-01
Risk assessments of manganese by inhalation or oral routes of exposure typically acknowledge the duality of manganese as an essential element at low doses and a toxic metal at high doses. Previously, however, risk assessors were unable to describe manganese pharmacokinetics quantitatively across dose levels and routes of exposure, to account for mass balance, and to incorporate this information into a quantitative risk assessment. In addition, the prior risk assessment of inhaled manganese conducted by the U.S. Environmental Protection Agency (EPA) identified a number of specific factors that contributed to uncertainty in the risk assessment. In response to a petition regarding the use of a fuel additive containing manganese, methylcyclopentadienyl manganese tricarbonyl (MMT), the U.S. EPA developed a test rule under the U.S. Clean Air Act that required, among other things, the generation of pharmacokinetic information. This information was intended not only to aid in the design of health outcome studies, but also to help address uncertainties in the risk assessment of manganese. To date, the work conducted in response to the test rule has yielded substantial pharmacokinetic data. This information will enable the generation of physiologically based pharmacokinetic (PBPK) models capable of making quantitative predictions of tissue manganese concentrations following inhalation and oral exposure, across dose levels, and accounting for factors such as duration of exposure, different species of manganese, and changes of age, gender, and reproductive status. The work accomplished in response to the test rule, in combination with other scientific evidence, will enable future manganese risk assessments to consider tissue dosimetry more comprehensively than was previously possible.
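The PBPK models anticipated here are far richer than can be shown briefly, but a toy one-compartment kinetic model conveys the basic machinery: first-order uptake from a dose depot and first-order elimination, integrated as an ODE system. The rate constants below are assumed values for illustration only.

```python
# Toy one-compartment kinetic model (not the multi-route PBPK models the
# abstract anticipates): first-order uptake from a dose depot and first-order
# elimination, integrated with scipy. Rate constants are assumed values.
import numpy as np
from scipy.integrate import solve_ivp

k_abs, k_elim, dose = 0.5, 0.1, 100.0   # 1/h, 1/h, arbitrary mass units

def rhs(t, y):
    depot, tissue = y
    return [-k_abs * depot, k_abs * depot - k_elim * tissue]

sol = solve_ivp(rhs, (0, 48), [dose, 0.0], t_eval=np.linspace(0, 48, 7))
for t, c in zip(sol.t, sol.y[1]):
    print(f"t={t:4.0f} h  tissue amount={c:6.2f}")
```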
Developing a predictive tropospheric ozone model for Tabriz
NASA Astrophysics Data System (ADS)
Khatibi, Rahman; Naghipour, Leila; Ghorbani, Mohammad A.; Smith, Michael S.; Karimi, Vahid; Farhoudi, Reza; Delafrouz, Hadi; Arvanaghi, Hadi
2013-04-01
Predictive ozone models are becoming indispensable tools by providing a capability for pollution alerts to serve people who are vulnerable to the risks. We have developed a tropospheric ozone prediction capability for Tabriz, Iran, by using the following five modeling strategies: three regression-type methods: Multiple Linear Regression (MLR), Artificial Neural Networks (ANNs), and Gene Expression Programming (GEP); and two auto-regression-type models: Nonlinear Local Prediction (NLP), to implement chaos theory, and Auto-Regressive Integrated Moving Average (ARIMA) models. The regression-type modeling strategies explain the data in terms of temperature, solar radiation, dew point temperature, and wind speed, while the auto-regression-type strategies regress present ozone values on their past values. The ozone time series are available at various time intervals, including hourly intervals, from August 2010 to March 2011. The results for the MLR, ANN and GEP models are not overly good, but those produced by NLP and ARIMA are promising for establishing a forecasting capability.
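As an illustration of the ARIMA strategy (one of the five listed), the sketch below fits an ARIMA model to a synthetic hourly ozone-like series with statsmodels and issues a short-range forecast; the (2,0,1) order is an assumption, not the order identified for the Tabriz data.

```python
# Sketch of the ARIMA modelling strategy with statsmodels on a synthetic
# hourly ozone-like series; the (2,0,1) order is an illustrative assumption,
# not the order identified for the Tabriz data.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
n = 500
series = np.zeros(n)
for i in range(2, n):   # AR(2) process as a stand-in for hourly ozone
    series[i] = 0.7 * series[i - 1] + 0.2 * series[i - 2] + rng.normal(0, 1)

fit = ARIMA(series, order=(2, 0, 1)).fit()
print(fit.forecast(steps=6))   # six-hour-ahead forecast
```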
A Quantitative Study of Oxygen as a Metabolic Regulator
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; LaManna, Joseph C.; Cabrera, Marco E.
1999-01-01
An acute reduction in oxygen (O2) delivery to a tissue is generally associated with a decrease in phosphocreatine, increases in ADP, NADH/NAD, and inorganic phosphate, increased rates of glycolysis and lactate production, and reduced rates of pyruvate and fatty acid oxidation. However, given the complexity of the human bioenergetic system and its components, it is difficult to determine quantitatively how cellular metabolic processes interact to maintain ATP homeostasis during stress (e.g., hypoxia, ischemia, and exercise). Of special interest is the determination of mechanisms relating tissue oxygenation to observed metabolic responses at the tissue, organ, and whole body levels and the quantification of how changes in tissue O2 availability affect the pathways of ATP synthesis and the metabolites that control these pathways. In this study, we extend a previously developed mathematical model of human bioenergetics to provide a physicochemical framework that permits quantitative understanding of O2 as a metabolic regulator. Specifically, the enhancement permits studying the effects of variations in tissue oxygenation and in parameters controlling the rate of cellular respiration on glycolysis, lactate production, and pyruvate oxidation. The whole body is described as a bioenergetic system consisting of metabolically distinct tissue/organ subsystems that exchange materials with the blood. In order to study the dynamic response of each subsystem to stimuli, we solve the ordinary differential equations describing the temporal evolution of metabolite levels, given the initial concentrations. The solver used in the present study is the packaged code LSODE, as implemented in the NASA Lewis kinetics and sensitivity analysis code, LSENS. A major advantage of LSENS is the efficient procedures supporting systematic sensitivity analysis, which provides the basic methods for studying parameter sensitivities (i.e., changes in model behavior due to parameter variation). Sensitivity analysis establishes relationships between model predictions and problem parameters (i.e., initial concentrations, rate coefficients, etc). It helps determine the effects of uncertainties or changes in these input parameters on the predictions, which ultimately are compared with experimental observations in order to validate the model. Sensitivity analysis can identify parameters that must be determined accurately because of their large effect on the model predictions and parameters that need not be known with great precision because they have little or no effect on the solution. This capability may prove to be important in optimizing the design of experiments, thereby reducing the use of animals. This approach can be applied to study the metabolic effects of reduced oxygen delivery to cardiac muscle due to local myocardial ischemia and the effects of acute hypoxia on brain metabolism. Other important applications of sensitivity analysis include identification of quantitatively relevant pathways and biochemical species within an overall mechanism, when examining the effects of a genetic anomaly or pathological state on energetic system components and whole system behavior.
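LSODE and LSENS are Fortran codes, but the kind of parameter sensitivity analysis described can be sketched with a toy two-metabolite ODE system and finite differences; the reaction scheme and rate constants below are assumed for illustration.

```python
# Finite-difference parameter sensitivity on a toy two-metabolite ODE system,
# illustrating the kind of analysis described (LSODE/LSENS themselves are
# Fortran codes). The reaction scheme and rate constants are assumed.
import numpy as np
from scipy.integrate import solve_ivp

def simulate(k1, k2):
    def rhs(t, y):
        a, b = y
        return [-k1 * a, k1 * a - k2 * b]
    return solve_ivp(rhs, (0, 10), [1.0, 0.0], t_eval=[10]).y[:, -1]

k1, k2, h = 0.8, 0.3, 1e-5
base = simulate(k1, k2)
dk1 = (simulate(k1 + h, k2) - base) / h   # sensitivity of final levels to k1
dk2 = (simulate(k1, k2 + h) - base) / h   # sensitivity of final levels to k2
print("d[final]/dk1 =", dk1, " d[final]/dk2 =", dk2)
```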
Forecasting the Environmental Impacts of New Energetic Materials
2010-11-30
Quantitative structure-activity relationships for chemical reductions of organic contaminants. Environmental Toxicology and Chemistry 22(8): 1733-1742. QSARs ... activity relationships [QSARs]) and the use of these properties to predict the chemical's fate with multimedia assessment models. SERDP has recently ... has several parts, including the prediction of chemical properties (e.g., with quantitative structure-activity relationships [QSARs]) and the use of
Matrix evaluation of science objectives
NASA Technical Reports Server (NTRS)
Wessen, Randii R.
1994-01-01
The most fundamental objective of all robotic planetary spacecraft is to return science data. To accomplish this, a spacecraft is fabricated and built, software is planned and coded, and a ground system is designed and implemented. However, the quantitative analysis required to determine how the collection of science data drives ground system capabilities has received very little attention. This paper defines a process by which science objectives can be quantitatively evaluated. By applying it to the Cassini Mission to Saturn, this paper further illustrates the power of this technique. The results show which science objectives drive specific ground system capabilities. In addition, this process can assist system engineers and scientists in the selection of the science payload during pre-project mission planning; ground system designers during ground system development and implementation; and operations personnel during mission operations.
Fire spread probabilities for experimental beds composed of mixedwood boreal forest fuels
M.B. Dickinson; E.A. Johnson; R. Artiaga
2013-01-01
Although fuel characteristics are assumed to have an important impact on fire regimes through their effects on extinction dynamics, limited capabilities exist for predicting whether a fire will spread in mixedwood boreal forest surface fuels. To improve predictive capabilities, we conducted 347 no-wind, laboratory test burns in surface fuels collected from the mixed-...
Evaluating the habitat capability model for Merriam's turkeys
Mark A. Rumble; Stanley H. Anderson
1995-01-01
Habitat capability (HABCAP) models for wildlife assist land managers in predicting the consequences of their management decisions. Models must be tested and refined prior to using them in management planning. We tested the predicted patterns of habitat selection of the R2 HABCAP model using observed patterns of habitats selected by radio-marked Merriam's turkey (
Image analysis in cytology: DNA-histogramming versus cervical smear prescreening.
Bengtsson, E W; Nordin, B
1993-01-01
The visual inspection of cellular specimens and histological sections through a light microscope plays an important role in clinical medicine and biomedical research. The human visual system is very good at the recognition of various patterns but less efficient at quantitative assessment of these patterns. Some samples are prepared in great numbers, most notably the screening for cervical cancer, the so-called PAP-smears, which results in hundreds of millions of samples each year, creating a tedious mass inspection task. Numerous attempts have been made over the last 40 years to create systems that solve these two tasks, the quantitative supplement to the human visual system and the automation of mass screening. The most difficult task, the total automation, has received the greatest attention with many large scale projects over the decades. In spite of all these efforts, still no generally accepted automated prescreening device exists on the market. The main reason for this failure is the great pattern recognition capabilities needed to distinguish between cancer cells and all other kinds of objects found in the specimens: cellular clusters, debris, degenerate cells, etc. Improved algorithms, the ever-increasing processing power of computers and progress in biochemical specimen preparation techniques make it likely that eventually useful automated prescreening systems will become available. Meanwhile, much less effort has been put into the development of interactive cell image analysis systems. Still, some such systems have been developed and put into use at thousands of laboratories worldwide. In these the human pattern recognition capability is used to select the fields and objects that are to be analysed while the computational power of the computer is used for the quantitative analysis of cellular DNA content or other relevant markers. Numerous studies have shown that the quantitative information about the distribution of cellular DNA content is of prognostic significance in many types of cancer. Several laboratories are therefore putting these techniques into routine clinical use. The more advanced systems can also study many other markers and cellular features, some known to be of clinical interest, others useful in research. The advances in computer technology are making these systems more generally available through decreasing cost, increasing computational power and improved user interfaces. We have been involved in research and development of both automated and interactive cell analysis systems during the last 20 years. Here some experiences and conclusions from this work will be presented as well as some predictions about what can be expected in the near future.
Prediction of Environmental Impact of High-Energy Materials with Atomistic Computer Simulations
2010-11-01
from a training set of compounds. Other methods include Quantitative Structure-Activity Relationship (QSAR) and Quantitative Structure-Property ... the development of QSPR/QSAR models, in contrast to boiling points and critical parameters derived from empirical correlations, to improve ... Quadratic Configuration Interaction Singles Doubles; QSAR: Quantitative Structure-Activity Relationship; QSPR: Quantitative Structure-Property
Quantitative Adverse Outcome Pathways and Their Application to Predictive Toxicology
A quantitative adverse outcome pathway (qAOP) consists of one or more biologically based, computational models describing key event relationships linking a molecular initiating event (MIE) to an adverse outcome. A qAOP provides quantitative, dose–response, and time-course p...
The simultaneous quantitation of ten amino acids in soil extracts by mass fragmentography
NASA Technical Reports Server (NTRS)
Pereira, W. E.; Hoyano, Y.; Reynolds, W. E.; Summons, R. E.; Duffield, A. M.
1972-01-01
A specific and sensitive method for the identification and simultaneous quantitation by mass fragmentography of ten of the amino acids present in soil was developed. The technique uses a computer-driven quadrupole mass spectrometer, and a commercial preparation of deuterated amino acids is used as the internal standard for purposes of quantitation. The results obtained are comparable with those from an amino acid analyzer. In the quadrupole mass spectrometer-computer system, up to 25 pre-selected ions may be monitored sequentially. This allows a maximum of 12 different amino acids (one specific ion in each of the undeuterated and deuterated amino acid spectra) to be quantitated. The method is relatively rapid (analysis time of approximately one hour) and is capable of the quantitation of nanogram quantities of amino acids.
Using ensemble rainfall predictions in a countrywide flood forecasting model in Scotland
NASA Astrophysics Data System (ADS)
Cranston, M. D.; Maxey, R.; Tavendale, A. C. W.; Buchanan, P.
2012-04-01
Improving flood predictions for all sources of flooding is at the centre of flood risk management policy in Scotland. With the introduction of the Flood Risk Management (Scotland) Act providing a new statutory basis for SEPA's flood warning responsibilities, the pressures on delivering hydrological science developments in support of this legislation have increased. Specifically, flood forecasting capabilities need to develop in support of the need to reduce the impact of flooding through the provision of actively disseminated, reliable and timely flood warnings. Flood forecasting in Scotland has developed significantly in recent years (Cranston and Tavendale, 2012). The development of hydrological models to predict flooding at a catchment scale has relied upon the application of rainfall runoff models utilising raingauge, radar and quantitative precipitation forecasts at short lead times (less than 6 hours). Single or deterministic forecasts based on highly uncertain rainfall predictions have led to the greatest operational difficulties when communicating flood risk with emergency responders; therefore the emergence of probability-based estimates offers the greatest opportunity for managing uncertain predictions. This paper presents the operational application of a physical-conceptual distributed hydrological model on a countrywide basis across Scotland. Developed by CEH Wallingford for SEPA in 2011, Grid-to-Grid (G2G) principally runs in deterministic mode and employs radar and raingauge estimates of rainfall together with weather model predictions to produce forecast river flows, as gridded time-series at a resolution of 1 km and for up to 5 days ahead (Cranston et al., 2012). However, the G2G model is now being run operationally using ensemble predictions of rainfall from the MOGREPS-R system to provide probabilistic flood forecasts. By presenting a range of flood predictions on a national scale through this approach, hydrologists are now able to consider an objective measure of the likelihood of flooding impacts to help with risk-based emergency communication.
Assessment of CFD capability for prediction of hypersonic shock interactions
NASA Astrophysics Data System (ADS)
Knight, Doyle; Longo, José; Drikakis, Dimitris; Gaitonde, Datta; Lani, Andrea; Nompelis, Ioannis; Reimann, Bodo; Walpot, Louis
2012-01-01
The aerothermodynamic loadings associated with shock wave boundary layer interactions (shock interactions) must be carefully considered in the design of hypersonic air vehicles. The capability of Computational Fluid Dynamics (CFD) software to accurately predict hypersonic shock wave laminar boundary layer interactions is examined. A series of independent computations performed by researchers in the US and Europe are presented for two generic configurations (double cone and cylinder) and compared with experimental data. The results illustrate the current capabilities and limitations of modern CFD methods for these flows.
Molecular dispersion spectroscopy – new capabilities in laser chemical sensing
Nikodem, Michal; Wysocki, Gerard
2012-01-01
Laser spectroscopic techniques suitable for molecular dispersion sensing enable new applications and strategies in chemical detection. This paper discusses the current state of the art and provides an overview of recently developed chirped laser dispersion spectroscopy (CLaDS) based techniques. CLaDS and its derivatives allow for quantitative spectroscopy of trace gases and enable new capabilities such as an extended dynamic range of concentration measurements, high immunity to photodetected intensity fluctuations, and the capability of direct processing of spectroscopic signals in the optical domain. Several experimental configurations based on quantum cascade lasers and examples of molecular spectroscopic data are presented to demonstrate the capabilities of molecular dispersion spectroscopy in the mid-infrared spectral region. PMID:22809459
A Conceptual Measurement Model for eHealth Readiness: a Team Based Perspective
Phillips, James; Poon, Simon K.; Yu, Dan; Lam, Mary; Hines, Monique; Brunner, Melissa; Power, Emma; Keep, Melanie; Shaw, Tim; Togher, Leanne
2017-01-01
Despite the shift towards collaborative healthcare and the increase in the use of eHealth technologies, there does not currently exist a model for the measurement of eHealth readiness in interdisciplinary healthcare teams. This research aims to address this gap in the literature through the development of a three-phase methodology incorporating qualitative and quantitative methods. We propose a conceptual measurement model consisting of operationalized themes affecting readiness across four factors: (i) Organizational Capabilities, (ii) Team Capabilities, (iii) Patient Capabilities, and (iv) Technology Capabilities. The creation of this model will allow for the measurement of the readiness of interdisciplinary healthcare teams to use eHealth technologies to improve patient outcomes. PMID:29854207
Bruijn, Merel M C; Kamphuis, Esme I; Hoesli, Irene M; Martinez de Tejada, Begoña; Loccufier, Anne R; Kühnert, Maritta; Helmer, Hanns; Franz, Marie; Porath, Martina M; Oudijk, Martijn A; Jacquemyn, Yves; Schulzke, Sven M; Vetter, Grit; Hoste, Griet; Vis, Jolande Y; Kok, Marjolein; Mol, Ben W J; van Baaren, Gert-Jan
2016-12-01
The combination of the qualitative fetal fibronectin test and cervical length measurement has a high negative predictive value for preterm birth within 7 days; however, positive prediction is poor. A new bedside quantitative fetal fibronectin test showed potential additional value over the conventional qualitative test, but there is limited evidence on its combination with cervical length measurement. The purpose of this study was to compare quantitative and qualitative fetal fibronectin testing in the prediction of spontaneous preterm birth within 7 days in symptomatic women who undergo cervical length measurement. We performed a European multicenter cohort study in 10 perinatal centers in 5 countries. Women between 24 and 34 weeks of gestation with signs of active labor and intact membranes underwent quantitative fibronectin testing and cervical length measurement. We assessed the risk of preterm birth within 7 days in predefined strata based on fibronectin concentration and cervical length. Of 455 women who were included in the study, 48 women (11%) delivered within 7 days. A combination of cervical length and qualitative fibronectin resulted in the identification of 246 women who were at low risk: 164 women with a cervix between 15 and 30 mm and a negative fibronectin test (<50 ng/mL; preterm birth rate, 2%) and 82 women with a cervix at >30 mm (preterm birth rate, 2%). Use of quantitative fibronectin alone resulted in a predicted risk of preterm birth within 7 days that ranged from 2% in the group with the lowest fibronectin level (<10 ng/mL) to 38% in the group with the highest fibronectin level (>500 ng/mL), with accuracy similar to that of the combination of cervical length and qualitative fibronectin. Combining cervical length and quantitative fibronectin resulted in the identification of an additional 19 women at low risk (preterm birth rate, 5%), using a threshold of 10 ng/mL in women with a cervix at <15 mm, and 6 women at high risk (preterm birth rate, 33%) using a threshold of >500 ng/mL in women with a cervix at >30 mm. In women with threatened preterm birth, quantitative fibronectin testing alone performs as well as the combination of cervical length and qualitative fibronectin. Possibly, the combination of quantitative fibronectin testing and cervical length increases this predictive capacity. Cost-effectiveness analysis and the availability of these tests in a local setting should determine the final choice. Copyright © 2016 Elsevier Inc. All rights reserved.
USDA-ARS?s Scientific Manuscript database
Classical quantitative genetics aids crop improvement by providing the means to estimate heritability, genetic correlations, and predicted responses to various selection schemes. Genomics has the potential to aid quantitative genetics and applied crop improvement programs via large-scale, high-thro...
Liu, Jian; Gao, Yun-Hua; Li, Ding-Dong; Gao, Yan-Chun; Hou, Ling-Mi; Xie, Ting
2014-01-01
To compare the value of qualitative and quantitative contrast-enhanced ultrasound (CEUS) analysis in the identification of breast tumor lumps. Qualitative and quantitative CEUS indicators for 73 cases of breast tumor lumps were retrospectively analyzed by univariate and multivariate approaches. Logistic regression was applied and ROC curves were drawn for evaluation and comparison. The regression equation generated from the qualitative CEUS indicators contained three indicators, namely enhanced homogeneity, diameter line expansion and peak intensity grading, and demonstrated a prediction accuracy for benign versus malignant breast tumor lumps of 91.8%; the regression equation generated from the quantitative indicators contained only one indicator, namely the relative peak intensity, and its prediction accuracy was 61.5%. The corresponding areas under the ROC curve for the qualitative and quantitative analyses were 91.3% and 75.7%, respectively, a statistically significant difference by the Z test (P<0.05). The ability of CEUS qualitative analysis to identify breast tumor lumps is better than that of quantitative analysis.
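A minimal sketch of this style of comparison (logistic models on the two indicator sets, compared by ROC AUC) is shown below on synthetic data; the formal Z test for comparing correlated AUCs used in such studies is not part of scikit-learn and is omitted here.

```python
# Sketch of the model comparison: logistic regression on qualitative vs
# quantitative CEUS indicators, compared by ROC AUC. Data are synthetic;
# the DeLong-style Z test is not in scikit-learn and is omitted.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
n = 73
y = rng.integers(0, 2, n)                              # benign=0 / malignant=1
X_qual = y[:, None] * 0.8 + rng.normal(size=(n, 3))    # 3 qualitative indicators
X_quan = y[:, None] * 0.3 + rng.normal(size=(n, 1))    # relative peak intensity

for name, X in (("qualitative", X_qual), ("quantitative", X_quan)):
    p = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
    print(name, "AUC:", round(roc_auc_score(y, p), 3))
```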
Challenges and perspectives in quantitative NMR.
Giraudeau, Patrick
2017-01-01
This perspective article summarizes, from the author's point of view at the beginning of 2016, the major challenges and perspectives in the field of quantitative NMR. The key concepts in quantitative NMR are first summarized; then, the most recent evolutions in terms of resolution and sensitivity are discussed, as well as some potential future research directions in this field. A particular focus is made on methodologies capable of boosting the resolution and sensitivity of quantitative NMR, which could open application perspectives in fields where the sample complexity and the analyte concentrations are particularly challenging. These include multi-dimensional quantitative NMR and hyperpolarization techniques such as para-hydrogen-induced polarization or dynamic nuclear polarization. Because quantitative NMR cannot be dissociated from the key concepts of analytical chemistry, i.e. trueness and precision, the methodological developments are systematically described together with their level of analytical performance. Copyright © 2016 John Wiley & Sons, Ltd.
Testing process predictions of models of risky choice: a quantitative model comparison approach
Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard
2013-01-01
This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472
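As a hedged sketch of one of the contrasted views, the following implements the priority heuristic for two-outcome gambles in the gain domain, following the published description: reasons are examined in a fixed order, stopping when a difference exceeds the aspiration level (one tenth of the maximum gain, or 0.1 on the probability scale).

```python
# Sketch of the priority heuristic for two-outcome gambles in the gain domain,
# following the published description (Brandstaetter et al., 2006): examine
# minimum gains, then their probabilities, then maximum gains, stopping when a
# difference exceeds the aspiration level.
def priority_heuristic(g1, g2):
    # each gamble: (low_outcome, p_low, high_outcome, p_high)
    aspiration = max(g1[2], g2[2]) / 10.0
    if abs(g1[0] - g2[0]) >= aspiration:     # reason 1: minimum gains
        return g1 if g1[0] > g2[0] else g2
    if abs(g1[1] - g2[1]) >= 0.1:            # reason 2: p(minimum gain)
        return g1 if g1[1] < g2[1] else g2
    return g1 if g1[2] > g2[2] else g2       # reason 3: maximum gains

A = (0.0, 0.2, 4000.0, 0.8)      # 4000 with p=.8, else nothing
B = (3000.0, 0.0, 3000.0, 1.0)   # 3000 for sure
print(priority_heuristic(A, B))  # picks B: minimum gains differ by > 400
```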
A systematic review of quantitative burn wound microbiology in the management of burns patients.
Halstead, Fenella D; Lee, Kwang Chear; Kwei, Johnny; Dretzke, Janine; Oppenheim, Beryl A; Moiemen, Naiem S
2018-02-01
The early diagnosis of infection or sepsis in burns is important for patient care. Globally, a large number of burn centres advocate quantitative cultures of wound biopsies for patient management, since there is assumed to be a direct link between the bioburden of a burn wound and the risk of microbial invasion. Given the conflicting study findings in this area, a systematic review was warranted. Bibliographic databases were searched with no language restrictions to August 2015. Study selection, data extraction and risk of bias assessment were performed in duplicate using pre-defined criteria. Substantial heterogeneity precluded quantitative synthesis, and findings were described narratively, sub-grouped by clinical question. Twenty-six laboratory and/or clinical studies were included. Substantial heterogeneity hampered comparisons across studies and interpretation of findings. Limited evidence suggests that (i) more than one quantitative microbiology sample is required to obtain reliable estimates of bacterial load; (ii) biopsies are more sensitive than swabs in diagnosing or predicting sepsis; (iii) high bacterial loads may predict worse clinical outcomes, and (iv) both quantitative and semi-quantitative culture reports need to be interpreted with caution and in the context of other clinical risk factors. The evidence base for the utility and reliability of quantitative microbiology for diagnosing or predicting clinical outcomes in burns patients is limited and often poorly reported. Consequently future research is warranted. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Estimation of equivalence ratio distribution in diesel spray using computational fluid dynamics
NASA Astrophysics Data System (ADS)
Suzuki, Yasumasa; Tsujimura, Taku; Kusaka, Jin
2014-08-01
It is important to understand the mechanisms of mixing and atomization in the diesel spray. In addition, computational prediction of the mixing behavior and internal structure of a diesel spray is expected to promote further understanding of the spray and the development of the diesel engine, including fuel-injection devices. In this study, we predicted the formation of diesel fuel spray with a 3D-CFD code and validated the application by comparison with experimental measurements of fuel spray behavior and of the equivalence ratio visualized by Rayleigh-scatter imaging under several ambient, injection, and fuel conditions. Using applicable constants in the KH-RT breakup model, the liquid spray length can be predicted quantitatively under various injection, ambient, and fuel conditions. On the other hand, the changes in vapor penetration and in the fuel mass fraction and equivalence ratio distributions with injection and ambient conditions are not reproduced quantitatively. The 3D-CFD code used in this study overpredicts the spray cone angle and the entrainment of ambient gas; prediction accuracy could therefore be improved by refining the droplet breakup and evaporation models and by quantitative prediction of the spray cone angle.
Ueda, Kazuhiro; Tanaka, Toshiki; Li, Tao-Sheng; Tanaka, Nobuyuki; Hamano, Kimikazu
2009-03-01
The prediction of pulmonary functional reserve is mandatory in therapeutic decision-making for patients with resectable lung cancer, especially those with underlying lung disease. Volumetric analysis in combination with densitometric analysis of the affected lung lobe or segment with quantitative computed tomography (CT) helps to identify residual pulmonary function, although the utility of this modality needs investigation. The subjects of this prospective study were 30 patients with resectable lung cancer. A three-dimensional CT lung model was created with voxels representing normal lung attenuation (-600 to -910 Hounsfield units). Residual pulmonary function was predicted by drawing a boundary line between the lung to be preserved and that to be resected, directly on the lung model. The predicted values were correlated with the postoperative measured values. The predicted and measured values corresponded well (r=0.89, p<0.001). Although the predicted values corresponded with values predicted by simple calculation using a segment-counting method (r=0.98), there were two outliers whose pulmonary functional reserves were predicted more accurately by CT than by segment counting. The measured pulmonary functional reserves were significantly higher than the predicted values in patients with extensive emphysematous areas (<-910 Hounsfield units), but not in patients with chronic obstructive pulmonary disease. Quantitative CT yielded accurate prediction of functional reserve after lung cancer surgery and helped to identify patients whose functional reserves are likely to be underestimated. Hence, this modality should be utilized for patients with marginal pulmonary function.
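As a point of reference for the comparison above, the segment-counting calculation is simple enough to sketch in code. A minimal sketch, assuming the conventional 19-segment formula (the abstract does not state which variant the authors used):

```python
def predicted_postop_fev1(preop_fev1_l, segments_resected, total_segments=19):
    """Predict post-operative FEV1 by segment counting.

    Assumes each of the (conventionally 19) lung segments contributes
    equally to function; the CT-based method in the study above replaces
    this uniform assumption with voxel-level attenuation data.
    """
    remaining_fraction = 1.0 - segments_resected / total_segments
    return preop_fev1_l * remaining_fraction

# Example: 2.4 L preoperative FEV1, right upper lobectomy (3 segments)
print(round(predicted_postop_fev1(2.4, 3), 2))  # -> 2.02
```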
Miyanaga, Yohko; Inoue, Naoko; Ohnishi, Ayako; Fujisawa, Emi; Yamaguchi, Maki; Uchida, Takahiro
2003-12-01
The purpose of the study was to develop a method for the quantitative prediction of the bitterness suppression of elemental diets by various flavors and to predict the optimum composition of such elemental diets for oral administration using a multichannel taste sensor. We examined the effects of varying the volume of water used for dilution and of adding varying quantities of five flavors (pineapple, apple, milky coffee, powdered green tea, and banana) on the bitterness of the elemental diet, Aminoreban EN. Gustatory sensation tests with human volunteers (n = 9) and measurements using the artificial taste sensor were performed on 50 g Aminoreban EN dissolved in various volumes (140, 180, 220, 260, 300, 420, 660, 1140, and 2100 ml) of water, and on 50 g Aminoreban EN dissolved in 180 ml of water with the addition of 3-9 g of various flavors for taste masking. In gustatory sensation tests, the relationship between the logarithmic values of the volumes of water used for dilution and the bitterness intensity scores awarded by the volunteers proved to be linear. The addition of flavors also reduced the bitterness of elemental diets in gustatory sensation tests; the magnitude of this effect was, in decreasing order, apple, pineapple, milky coffee, powdered green tea, and banana. With the artificial taste sensor, large changes of membrane potential in channel 1, caused by adsorption (CPA values, corresponding to a bitter aftertaste), were observed for Aminoreban EN but not for any of the flavors. There was a good correlation between the CPA values in channel 1 and the results of the human gustatory tests, indicating that the taste sensor is capable of evaluating not only the bitterness of Aminoreban EN itself but also the bitterness-suppressing effect of the five flavors, which contained many elements such as organic acids and flavor components, and the effect of dilution (by water) on this bitterness. Using regression analysis of data derived from the taste sensor and from human gustatory data for four representative points, we were able to predict the bitterness of 50 g Aminoreban EN solutions diluted with various volumes of water (14-300 ml), with or without the addition of a selected flavor. Even though this prediction method does not offer perfect simulation of human taste sensations, the artificial taste sensor may be useful for predicting the bitterness intensity of elemental diets containing various flavors in the absence of results from full gustatory sensation tests.
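The reported linear relation between bitterness score and the logarithm of the dilution volume is easy to illustrate. A minimal sketch with hypothetical scores (the study's actual panel data are not reproduced here):

```python
import numpy as np

# Hypothetical bitterness scores for 50 g Aminoreban EN dissolved in
# increasing volumes of water; the abstract reports a linear relation
# between bitterness intensity and log10(volume).
volumes_ml = np.array([140, 180, 220, 260, 300, 420, 660, 1140, 2100])
bitterness = np.array([4.6, 4.2, 3.9, 3.7, 3.5, 3.0, 2.4, 1.7, 0.9])  # illustrative

slope, intercept = np.polyfit(np.log10(volumes_ml), bitterness, 1)
predict = lambda v_ml: slope * np.log10(v_ml) + intercept
print(f"predicted bitterness at 500 ml: {predict(500):.2f}")
```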
Clinical application of a light-pen computer system for quantitative angiography
NASA Technical Reports Server (NTRS)
Alderman, E. L.
1975-01-01
The paper describes an angiographic analysis system which uses a video disk for recording and playback, a light-pen for data input, minicomputer processing, and an electrostatic printer/plotter for hardcopy output. The method is applied to quantitative analysis of ventricular volumes, sequential ventriculography for assessment of physiologic and pharmacologic interventions, analysis of instantaneous time sequence of ventricular systolic and diastolic events, and quantitation of segmental abnormalities. The system is shown to provide the capability for computation of ventricular volumes and other measurements from operator-defined margins by greatly reducing the tedium and errors associated with manual planimetry.
NASA Technical Reports Server (NTRS)
Schubert, Siegfried
2011-01-01
Drought is fundamentally the result of an extended period of reduced precipitation lasting anywhere from a few weeks to decades and even longer. As such, addressing drought predictability and prediction in a changing climate requires foremost that we make progress on the ability to predict precipitation anomalies on subseasonal and longer time scales. From the perspective of the users of drought forecasts and information, drought is however most directly viewed through its impacts (e.g., on soil moisture, streamflow, crop yields). As such, the question of the predictability of drought must extend to those quantities as well. In order to make progress on these issues, the WCRP drought information group (DIG), with the support of WCRP, the Catalan Institute of Climate Sciences, the La Caixa Foundation, the National Aeronautics and Space Administration, the National Oceanic and Atmospheric Administration, and the National Science Foundation, has organized a workshop to focus on:
1. User requirements for drought prediction information on sub-seasonal to centennial time scales
2. Current understanding of the mechanisms and predictability of drought on sub-seasonal to centennial time scales
3. Current drought prediction/projection capabilities on sub-seasonal to centennial time scales
4. Advancing regional drought prediction capabilities for variables and scales most relevant to user needs on sub-seasonal to centennial time scales
This introductory talk provides an overview of these goals, and outlines the occurrence and mechanisms of drought world-wide.
Universality and predictability in molecular quantitative genetics.
Nourmohammad, Armita; Held, Torsten; Lässig, Michael
2013-12-01
Molecular traits, such as gene expression levels or protein binding affinities, are increasingly accessible to quantitative measurement by modern high-throughput techniques. Such traits measure molecular functions and, from an evolutionary point of view, are important as targets of natural selection. We review recent developments in evolutionary theory and experiments that are expected to become building blocks of a quantitative genetics of molecular traits. We focus on universal evolutionary characteristics: these are largely independent of a trait's genetic basis, which is often at least partially unknown. We show that universal measurements can be used to infer selection on a quantitative trait, which determines its evolutionary mode of conservation or adaptation. Furthermore, universality is closely linked to predictability of trait evolution across lineages. We argue that universal trait statistics extend over a range of cellular scales and open new avenues of quantitative evolutionary systems biology. Copyright © 2013. Published by Elsevier Ltd.
Cao, Hui; Li, Yao-Jiang; Zhou, Yan; Wang, Yan-Xia
2014-11-01
To deal with the nonlinear characteristics of spectral data from the thermal power plant flue, a nonlinear partial least squares (PLS) analysis method with a neural-network-based internal model is adopted in this paper. The latent variables of the independent and dependent variables are first extracted by PLS regression and are then used as the inputs and outputs, respectively, of a neural network trained to form the nonlinear internal model. For spectral data of flue gases of the thermal power plant, PLS, the nonlinear PLS with the internal model of a back propagation neural network (BP-NPLS), the nonlinear PLS with the internal model of a radial basis function neural network (RBF-NPLS), and the nonlinear PLS with the internal model of an adaptive fuzzy inference system (ANFIS-NPLS) are compared. The root mean square error of prediction (RMSEP) for sulfur dioxide of BP-NPLS, RBF-NPLS and ANFIS-NPLS is reduced by 16.96%, 16.60% and 19.55% relative to PLS, respectively. The RMSEP for nitric oxide of BP-NPLS, RBF-NPLS and ANFIS-NPLS is reduced by 8.60%, 8.47% and 10.09% relative to PLS, respectively. The RMSEP for nitrogen dioxide of BP-NPLS, RBF-NPLS and ANFIS-NPLS is reduced by 2.11%, 3.91% and 3.97% relative to PLS, respectively. Experimental results show that nonlinear PLS is more suitable than PLS for the quantitative analysis of flue gas. Moreover, because neural networks can closely approximate nonlinear characteristics, the nonlinear PLS method with the internal models described in this paper has good predictive capability and robustness and, to a certain extent, overcomes the limitations of nonlinear PLS with other internal models such as polynomial and spline functions. ANFIS-NPLS performs best, as its adaptive fuzzy inference system internal model is able to learn more and reduce the residuals effectively. Hence, ANFIS-NPLS is an accurate and useful method for quantitative thermal power plant flue gas analysis.
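The two-step structure described above (PLS for latent-variable extraction, a neural network for the nonlinear inner relation) can be sketched with standard tools. A minimal sketch using scikit-learn on synthetic stand-in data; for simplicity the network here regresses the X-scores directly onto the response rather than onto the Y-scores as in the paper's formulation:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor

# Toy stand-ins for flue-gas spectra (X) and a gas concentration (y);
# the real model would use measured SO2/NO/NO2 data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = np.tanh(X[:, :3].sum(axis=1)) + 0.05 * rng.normal(size=200)

# Step 1: PLS extracts latent scores from both data blocks.
pls = PLSRegression(n_components=5)
x_scores, _ = pls.fit_transform(X, y)  # second element: Y-scores (unused here)

# Step 2: a backpropagation network models the nonlinear inner relation.
inner = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
inner.fit(x_scores, y)

y_hat = inner.predict(pls.transform(X))
rmse = np.sqrt(np.mean((y - y_hat) ** 2))  # cf. the RMSEP figures above
print(f"RMSE on training data: {rmse:.3f}")
```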
NASA Astrophysics Data System (ADS)
Taub, Marc Barry
Transdermal drug delivery is an alternative approach to the systemic delivery of pharmaceuticals where drugs are administered through the skin and absorbed percutaneously. This method of delivery offers several advantages over more traditional routes; most notably, the avoidance of the first-pass metabolism of the liver and gut, the ability to offer controlled release rates, and the possibility for novel devices. Pressure sensitive adhesives (PSAs) are used to bond transdermal drug delivery devices to the skin because of their good initial and long-term adhesion, clean removability, and skin and drug compatibility. However, an understanding of the mechanics of adhesion to the dermal layer, together with quantitative and reproducible test methods for measuring adhesion, has been lacking. This study utilizes a mechanics-based approach to quantify the interfacial adhesion of PSAs bonded to selected substrates, including human dermal tissue. The delamination of PSA layers is associated with cavitation in the PSA followed by the formation of an extensive cohesive zone behind the debond tip. A quantitative metrology was developed to assess the adhesion and delamination of PSAs, such that it could be possible to easily distinguish between the adhesive characteristics of different PSA compositions and to provide a quantitative basis from which the reliability of adhesive layers bonded to substrates could be studied. A mechanics-based model was also developed to predict debonding in terms of the relevant energy dissipation mechanisms active during this process. As failure of transdermal devices may occur cohesively within the PSA layer, adhesively at the interface between the PSA and the skin, or cohesively between the corneocytes that comprise the outermost layer of the skin, it was also necessary to explore the mechanical and fracture properties of human skin. The out-of-plane delamination of corneocytes was studied by determining the strain energy release rate during debonding of cantilever-beam specimens containing thin layers of human dermal tissue at their midline. Finally, the interfacial adhesion of PSAs bonded to human skin was studied and the mechanics model that was developed for PSA failure was extended to provide the capability for in vivo reliability predictions for transdermal systems bonded to human skin.
Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding
Xiao, Rui; Gao, Junbin; Bossomaier, Terry
2016-01-01
A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because HS images differ from traditional videos in their spectral characteristics, with each band resembling panchromatic imagery. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard High Efficiency Video Coding (HEVC) for HS images is proposed. An HS image presents a wealth of data where every pixel is considered a vector for different spectral bands. By quantitative comparison and analysis of pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band together with the immediate previous band when we apply the HEVC. Each spectral band of an HS image is treated as an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102
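The central coding idea, predicting the current spectral band from previously decoded bands and handing the prediction to the video coder as an extra reference, can be illustrated without the full HEVC machinery. A sketch using a plain least-squares affine extrapolation in place of the paper's Gaussian-mixture RPM:

```python
import numpy as np

def extrapolate_band(b_prev2, b_prev1):
    """Predict the next spectral band from the two previous ones.

    Simplification of the paper's Gaussian-mixture RPM: fit an affine
    map b_prev1 ~ a*b_prev2 + c by least squares, then apply the same
    map to b_prev1 to extrapolate the current band. Both inputs are
    (H, W) arrays already available to the decoder, so the prediction
    can serve as an extra reference frame.
    """
    a, c = np.polyfit(b_prev2.ravel(), b_prev1.ravel(), 1)
    return a * b_prev1 + c

# Toy 3-band cube with smoothly varying spectra
cube = np.stack([np.full((4, 4), v) + np.arange(16.0).reshape(4, 4)
                 for v in (10.0, 12.0, 14.4)])
pred = extrapolate_band(cube[0], cube[1])
print(np.abs(pred - cube[2]).mean())  # residual handed to the video coder
```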
Li, Chaodi; Kotha, Shiva; Mason, James
2003-01-01
The exothermic polymerization of bone cement may induce thermal necrosis of bone in cemented hip arthroplasty. A finite element formulation was developed to predict the evolution of the temperature with time in the cemented hip replacement system. The developed method is capable of taking into account both the chemical reaction that generates heat during bone cement polymerization (through a kinetic model) and the physical process of heat conduction (with an energy balance equation). The possibility of thermal necrosis of bone was then evaluated based on the temperature history in the bone and an appropriate damage criterion. Specifically, we evaluate the role of implant materials and designs on the thermal response of the system. Results indicated that the peak temperature at the bone/cement interface with a metal prosthesis was lower than that with a polymer or a composite prosthesis in hip replacement systems. Necrosis of bone was predicted to occur with a polymer or a composite prosthesis while no necrosis was predicted with a metal prosthesis in the simulated conditions. When reinforcing osteoporotic hips with injected bone cement in the cancellous core of the femur, the volume of bone cement implanted is increased which may increase the risk of thermal necrosis of bone. We evaluate whether this risk can be decreased through the use of an insulator to contain the bone cement. No thermal necrosis of bone was predicted with a 3 mm thick polyurethane insulator while more damage is predicted for the use of bone cement without the insulator. This method provides a numerical tool for the quantitative simulation of the thermal behavior of bone-cement-prosthesis designs and for examining and refining new designs computationally.
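The coupled formulation described above (a cure-kinetics model supplying the exothermic source term in the heat-conduction energy balance) can be sketched in one dimension with explicit finite differences. All constants below are illustrative placeholders, not the paper's calibrated values:

```python
import numpy as np

L, nx = 0.005, 51                          # 5 mm cement layer, 51 nodes
dx = L / (nx - 1)
k, rho, cp = 0.17, 1100.0, 1700.0          # conductivity, density, specific heat
H = 1.5e5                                  # heat of polymerization, J/kg (illustrative)
A, E, n_ord, R = 1.0e6, 5.0e4, 2.0, 8.314  # nth-order Arrhenius cure kinetics

T = np.full(nx, 310.15)                    # start at body temperature, K
alpha = np.zeros(nx)                       # degree of cure
dt, t_end = 0.02, 600.0                    # dt satisfies the explicit stability limit
peak = T.max()

for _ in range(int(t_end / dt)):
    rate = A * np.exp(-E / (R * T)) * (1.0 - alpha) ** n_ord  # kinetic model
    d_alpha = np.minimum(rate * dt, 1.0 - alpha)
    alpha += d_alpha
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    T = T + dt * k / (rho * cp) * lap + H * d_alpha / cp      # energy balance
    T[0] = T[-1] = 310.15                  # boundaries held at 37 C
    peak = max(peak, T.max())

print(f"peak mid-layer temperature: {peak - 273.15:.1f} C")
```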
NASA Technical Reports Server (NTRS)
Liou, J. C.
2012-01-01
Presentation outline: (1) the NASA Orbital Debris (OD) Engineering Model, a mathematical model capable of predicting OD impact risks for the ISS and other critical space assets; (2) the NASA OD Evolutionary Model, a physical model capable of predicting the future debris environment based on user-specified scenarios; and (3) the NASA Standard Satellite Breakup Model, a model describing the outcome of a satellite breakup (explosion or collision).
NASA Astrophysics Data System (ADS)
Singh, Manpreet; Alabanza, Anginelle; Gonzalez, Lorelis E.; Wang, Weiwei; Reeves, W. Brian; Hahm, Jong-In
2016-02-01
Determining ultratrace amounts of protein biomarkers in patient samples in a straightforward and quantitative manner is extremely important for early disease diagnosis and treatment. Here, we successfully demonstrate the novel use of zinc oxide nanorods (ZnO NRs) in the ultrasensitive and quantitative detection of two acute kidney injury (AKI)-related protein biomarkers, tumor necrosis factor (TNF)-α and interleukin (IL)-8, directly from patient samples. We first validate the ZnO NRs-based IL-8 results via comparison with those obtained from using a conventional enzyme-linked immunosorbent method in samples from 38 individuals. We further assess the full detection capability of the ZnO NRs-based technique by quantifying TNF-α, whose levels in human urine are often below the detection limits of conventional methods. Using the ZnO NR platforms, we determine the TNF-α concentrations of all 46 patient samples tested, down to the fg per mL level. Subsequently, we screen for TNF-α levels in approximately 50 additional samples collected from different patient groups in order to demonstrate a potential use of the ZnO NRs-based assay in assessing cytokine levels useful for further clinical monitoring. Our research efforts demonstrate that ZnO NRs can be straightforwardly employed in the rapid, ultrasensitive, quantitative, and simultaneous detection of multiple AKI-related biomarkers directly in patient urine samples, providing an unparalleled detection capability beyond those of conventional analysis methods. Additional key advantages of the ZnO NRs-based approach include a fast detection speed, low-volume assay condition, multiplexing ability, and easy automation/integration capability to existing fluorescence instrumentation. Therefore, we anticipate that our ZnO NRs-based detection method will be highly beneficial for overcoming the frequent challenges in early biomarker development and treatment assessment, pertaining to the facile and ultrasensitive quantification of hard-to-trace biomolecules. Electronic supplementary information (ESI) available: Typical SEM images of the ZnO NRs used in the biomarker assays are provided in Fig. S1. See DOI: 10.1039/c5nr08706f
Predicting supramolecular self-assembly on reconstructed metal surfaces
NASA Astrophysics Data System (ADS)
Roussel, Thomas J.; Barrena, Esther; Ocal, Carmen; Faraudo, Jordi
2014-06-01
The prediction of supramolecular self-assembly onto solid surfaces is still challenging in many situations of interest for nanoscience. In particular, no previous simulation approach has been capable of simulating large self-assembly patterns of organic molecules over reconstructed surfaces (which have periodicities over large distances) due to the large number of surface atoms and adsorbing molecules involved. Using a novel simulation technique, we report here large scale simulations of the self-assembly patterns of an organic molecule (DIP) over different reconstructions of the Au(111) surface. We show that on particular reconstructions, the molecule-molecule interactions are enhanced in a way that long-range order is promoted. Also, the presence of a distortion in a reconstructed surface pattern not only induces the presence of long-range order but also is able to drive the organization of DIP into two coexisting homochiral domains, in quantitative agreement with STM experiments. On the other hand, only short range order is obtained in other reconstructions of the Au(111) surface. The simulation strategy opens interesting perspectives to tune the supramolecular structure by simulation design and surface engineering if choosing the right molecular building blocks and stabilising the chosen reconstruction pattern.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burley, J.B.; Polakowski, K.J.; Fowler, G.
Surface mine reclamation specialists have been searching for predictive methods to assess the capability of disturbed soils to support vegetation growth. We conducted a study to develop a vegetation productivity equation for reclaiming surface mines in Oliver County, North Dakota, thereby allowing investigators to quantitatively determine the plant growth potential of a reclaimed soil. The study examined the predictive modeling potential for both agronomic crops and woody plants, including: wheat (Triticum aestivum L.), barley (Hordeum vulgare L.), oat (Avena sativa L.), corn (Zea mays L.), grass and legume mixtures, Eastern red cedar (Juniperus virginiana L.), Black Hills spruce (Picea glauca var. densata Bailey), Colorado spruce (Picea pungens Engelm.), ponderosa pine (Pinus ponderosa var. scopulorum Engelm.), green ash (Fraxinus pennsylvanica Marsh.), Eastern cottonwood (Populus deltoides Bart. ex Marsh.), Siberian elm (Ulmus pumila L.), Siberian peashrub (Caragana arborescens Lam.), American plum (Prunus americana Marsh.), and chokecherry (Prunus virginiana L.). An equation was developed which is highly significant (p<0.0001), explaining 81.08% of the variance (coefficient of multiple determination = 0.8108), with all regressors significant (p≤0.048, Type II Sums of Squares). Measurements of seven soil parameters are required to predict soil vegetation productivity: percent slope, available water holding capacity, percent rock fragments, topographic position, electrical conductivity, pH, and percent organic matter. While the equation was developed from data on undisturbed soils, the equation's predictions were positively correlated (0.71424, p≤0.0203) with a small data set (n=10) from reclaimed soils.
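The published equation's coefficients are not reproduced in the abstract, but the model form, a multiple linear regression on the seven listed soil parameters, can be sketched with hypothetical data:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical values for the seven soil parameters named above; the
# actual regression coefficients are in the published study.
rng = np.random.default_rng(1)
n = 40
X = np.column_stack([
    rng.uniform(0, 25, n),     # percent slope
    rng.uniform(5, 25, n),     # available water holding capacity
    rng.uniform(0, 40, n),     # percent rock fragments
    rng.integers(1, 5, n),     # topographic position (coded)
    rng.uniform(0, 8, n),      # electrical conductivity
    rng.uniform(5.5, 8.5, n),  # pH
    rng.uniform(0.5, 6, n),    # percent organic matter
])
y = 50 + X @ [-0.8, 1.5, -0.4, 2.0, -1.2, 3.0, 4.0] + rng.normal(0, 5, n)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.rsquared)  # cf. the study's coefficient of determination, 0.8108
```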
Hsing, Michael; Byler, Kendall; Cherkasov, Artem
2009-01-01
Hub proteins (those engaged in most physical interactions in a protein interaction network (PIN)) have recently gained much research interest due to their essential role in mediating cellular processes and their potential therapeutic value. It is straightforward to identify hubs if the underlying PIN is experimentally determined; however, theoretical hub prediction remains a very challenging task, as physicochemical properties that differentiate hubs from less connected proteins remain mostly uncharacterized. To adequately distinguish hubs from non-hub proteins we have utilized over 1300 protein descriptors, some of which represent QSAR (quantitative structure-activity relationship) parameters, and some reflect sequence-derived characteristics of proteins including domain composition and functional annotations. Those protein descriptors, together with available protein interaction data, have been processed by a machine learning method (boosting trees) and resulted in the development of hub classifiers that are capable of predicting highly interacting proteins for four model organisms: Escherichia coli, Saccharomyces cerevisiae, Drosophila melanogaster and Homo sapiens. More importantly, through the analyses of the most relevant protein descriptors, we are able to demonstrate that hub proteins not only share certain common physicochemical and structural characteristics that make them different from non-hub counterparts, but they also exhibit species-specific characteristics that should be taken into account when analyzing different PINs. The developed prediction models can be used for determining highly interacting proteins in the four studied species to assist future proteomics experiments and PIN analyses. Availability: The source code and executable program of the hub classifier are available for download at http://www.cnbi2.ca/hub-analysis/. PMID:20198194
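The modeling pipeline described above, boosted trees over a large block of protein descriptors followed by inspection of descriptor relevance, can be sketched as follows. The descriptor matrix here is a random stand-in, and scikit-learn's gradient boosting stands in for the authors' boosting-trees implementation:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for the ~1300 protein descriptors (QSAR parameters, domain
# composition, functional annotations); rows are proteins, the label
# marks hubs (highly connected nodes in the PIN).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1300))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

clf = GradientBoostingClassifier(n_estimators=100, max_depth=3,
                                 max_features="sqrt", random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"5-fold ROC AUC: {scores.mean():.2f}")

# Descriptor relevance, used in the paper to characterize what makes
# hubs physicochemically distinct from non-hubs:
clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:10]
print("most informative descriptor indices:", top)
```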
Peridynamic theory for modeling three-dimensional damage growth in metallic and composite structures
NASA Astrophysics Data System (ADS)
Ochoa-Ricoux, Juan Pedro
A recently introduced nonlocal peridynamic theory removes the obstacles present in classical continuum mechanics that limit the prediction of crack initiation and growth in materials. It is also applicable at different length scales. This study presents an alternative approach for the derivation of peridynamic equations of motion based on the principle of virtual work. It also presents solutions for the longitudinal vibration of a bar subjected to an initial stretch, propagation of a pre-existing crack in a plate subjected to velocity boundary conditions, and crack initiation and growth in a plate with a circular cutout. Furthermore, damage growth in composites involves complex and progressive failure modes. Current computational tools are incapable of predicting failure in composite materials mainly due to their mathematical structure. However, the peridynamic theory removes these obstacles by taking into account non-local interactions between material points. Hence, an application of the peridynamic theory to predict how damage propagates in fiber reinforced composite materials subjected to mechanical and thermal loading conditions is presented. Finally, an analysis approach based on a merger of the finite element method and the peridynamic theory is proposed. Its validity is established through qualitative and quantitative comparisons against the test results for a stiffened composite curved panel with a central slot under combined internal pressure and axial tension. The predicted initial and final failure loads, as well as the final failure modes, are in close agreement with the experimental observations. This proposed approach demonstrates the capability of the PD approach to assess the durability of complex composite structures.
Mallon, Dermot H; Bradley, J Andrew; Winn, Peter J; Taylor, Craig J; Kosmoliaptsis, Vasilis
2015-02-01
We have previously shown that qualitative assessment of surface electrostatic potential of HLA class I molecules helps explain serological patterns of alloantibody binding. We have now used a novel computational approach to quantitate differences in surface electrostatic potential of HLA B-cell epitopes and applied this to explain HLA Bw4 and Bw6 antigenicity. Protein structure models of HLA class I alleles expressing either the Bw4 or Bw6 epitope (defined by sequence motifs at positions 77 to 83) were generated using comparative structure prediction. The electrostatic potential in 3-dimensional space encompassing the Bw4/Bw6 epitope was computed by solving the Poisson-Boltzmann equation and quantitatively compared in a pairwise, all-versus-all fashion to produce distance matrices that cluster epitopes with similar electrostatics properties. Quantitative comparison of surface electrostatic potential at the carboxyl terminal of the α1-helix of HLA class I alleles, corresponding to amino acid sequence motif 77 to 83, produced clustering of HLA molecules in 3 principal groups according to Bw4 or Bw6 epitope expression. Remarkably, quantitative differences in electrostatic potential reflected known patterns of serological reactivity better than Bw4/Bw6 amino acid sequence motifs. Quantitative assessment of epitope electrostatic potential allowed the impact of known amino acid substitutions (HLA-B*07:02 R79G, R82L, G83R) that are critical for antibody binding to be predicted. We describe a novel approach for quantitating differences in HLA B-cell epitope electrostatic potential. Proof of principle is provided that this approach enables better assessment of HLA epitope antigenicity than amino acid sequence data alone, and it may allow prediction of HLA immunogenicity.
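The pairwise, all-versus-all comparison step lends itself to a short sketch: given each allele's electrostatic potential sampled on a common grid over the epitope (the output of a Poisson-Boltzmann solver such as APBS, not computed here), build a distance matrix and cluster. Allele names and potentials below are illustrative stand-ins:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

# phi[a] holds allele a's electrostatic potential sampled on a common
# grid over residues 77-83 (random stand-ins for solver output here).
rng = np.random.default_rng(0)
alleles = ["B*44:02 (Bw4)", "B*51:01 (Bw4)", "B*07:02 (Bw6)", "B*08:01 (Bw6)"]
base_bw4, base_bw6 = rng.normal(size=1000), rng.normal(size=1000)
phi = np.stack([base_bw4 + 0.1 * rng.normal(size=1000),
                base_bw4 + 0.1 * rng.normal(size=1000),
                base_bw6 + 0.1 * rng.normal(size=1000),
                base_bw6 + 0.1 * rng.normal(size=1000)])

# All-versus-all Euclidean distance matrix between potentials, then
# hierarchical clustering into electrostatically similar groups.
D = squareform(pdist(phi))
groups = fcluster(linkage(pdist(phi), method="average"), t=2, criterion="maxclust")
for name, g in zip(alleles, groups):
    print(name, "-> cluster", g)
```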
Provocative work experiences predict the acquired capability for suicide in physicians.
Fink-Miller, Erin L
2015-09-30
The interpersonal psychological theory of suicidal behavior (IPTS) offers a potential means to explain suicide in physicians. The IPTS posits three necessary and sufficient precursors to death by suicide: thwarted belongingness, perceived burdensomeness, and acquired capability. The present study sought to examine whether provocative work experiences unique to physicians (e.g., placing sutures, withdrawing life support) would predict levels of acquired capability, while controlling for gender and painful and provocative experiences outside the work environment. Data were obtained from 376 of 7723 recruited physicians. Study measures included the Acquired Capability for Suicide Scale, the Interpersonal Needs Questionnaire, the Painful and Provocative Events Scale, and the Life Events Scale-Medical Doctors Version. Painful and provocative events outside of work predicted acquired capability (β=0.23, t=3.82, p<0.001, f²=0.09) as did provocative work experiences (β=0.12, t=2.05, p<0.05, f²=0.07). This represents the first study assessing the potential impact of unique work experiences on suicidality in physicians. Limitations include over-representation of Caucasian participants, limited representation from various specialties of medicine, and lack of information regarding individual differences. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
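Statistically, the analysis above is a hierarchical regression, with Cohen's f² quantifying the increment due to work experiences. A minimal sketch on simulated stand-in data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the survey data: acquired capability (acs)
# predicted by provocative work experiences, controlling for gender and
# non-work painful/provocative events (ppes).
rng = np.random.default_rng(0)
n = 376
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),
    "ppes": rng.normal(size=n),
    "work": rng.normal(size=n),
})
df["acs"] = 0.23 * df.ppes + 0.12 * df.work + rng.normal(size=n)

full = smf.ols("acs ~ gender + ppes + work", data=df).fit()
reduced = smf.ols("acs ~ gender + ppes", data=df).fit()
f2 = (full.rsquared - reduced.rsquared) / (1 - full.rsquared)
print(full.summary().tables[1])
print(f"Cohen's f^2 for work experiences: {f2:.3f}")  # cf. f^2=0.07 above
```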
Comprehensive Micromechanics-Analysis Code - Version 4.0
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Bednarcyk, B. A.
2005-01-01
Version 4.0 of the Micromechanics Analysis Code With Generalized Method of Cells (MAC/GMC) has been developed as an improved means of computational simulation of advanced composite materials. The previous version of MAC/GMC was described in "Comprehensive Micromechanics-Analysis Code" (LEW-16870), NASA Tech Briefs, Vol. 24, No. 6 (June 2000), page 38. To recapitulate: MAC/GMC is a computer program that predicts the elastic and inelastic thermomechanical responses of continuous and discontinuous composite materials with arbitrary internal microstructures and reinforcement shapes. The predictive capability of MAC/GMC rests on a model known as the generalized method of cells (GMC) - a continuum-based model of micromechanics that provides closed-form expressions for the macroscopic response of a composite material in terms of the properties, sizes, shapes, and responses of the individual constituents or phases that make up the material. Enhancements in version 4.0 include a capability for modeling thermomechanically and electromagnetically coupled ("smart") materials; a more-accurate (high-fidelity) version of the GMC; a capability to simulate discontinuous plies within a laminate; additional constitutive models of materials; expanded yield-surface-analysis capabilities; and expanded failure-analysis and life-prediction capabilities on both the microscopic and macroscopic scales.
Hochard, Kevin D; Heym, Nadja; Townsend, Ellen
2017-06-01
Heightened arousal significantly interacts with acquired capability to predict suicidality. We explore this interaction with insomnia and nightmares independently of waking state arousal symptoms, and test predictions of the Interpersonal Theory of Suicide (IPTS) and Escape Theory in relation to these sleep arousal symptoms. Findings from our e-survey (n = 540) supported the IPTS over models of Suicide as Escape. Sleep-specific measurements of arousal (insomnia and nightmares) showed no main effect, yet interacted with acquired capability to predict increased suicidality. The explained variance in suicidality by the interaction (1%-2%) using sleep-specific measures was comparable to variance explained by interactions previously reported in the literature using measurements composed of a mix of waking and sleep state arousal symptoms. Similarly, when entrapment (inability to escape) was included in models, main effects of sleep symptoms arousal were not detected yet interacted with entrapment to predict suicidality. We discuss findings in relation to treatment options suggesting that sleep-specific interventions be considered for the long-term management of at-risk individuals. © 2016 The American Association of Suicidology.
Chen, Ran; Zhang, Yuntao; Sahneh, Faryad Darabi; Scoglio, Caterina M; Wohlleben, Wendel; Haase, Andrea; Monteiro-Riviere, Nancy A; Riviere, Jim E
2014-09-23
Quantitative characterization of nanoparticle interactions with their surrounding environment is vital for safe nanotechnological development and standardization. A recent quantitative measure, the biological surface adsorption index (BSAI), has demonstrated promising applications in nanomaterial surface characterization and biological/environmental prediction. This paper further advances the approach beyond the application of five descriptors in the original BSAI to address the concentration dependence of the descriptors, enabling better prediction of the adsorption profile and more accurate categorization of nanomaterials based on their surface properties. Statistical analysis on the obtained adsorption data was performed based on three different models: the original BSAI, a concentration-dependent polynomial model, and an infinite dilution model. These advancements in BSAI modeling showed a promising development in the application of quantitative predictive modeling in biological applications, nanomedicine, and environmental safety assessment of nanomaterials.
Murumkar, Prashant R; Giridhar, Rajani; Yadav, Mange Ram
2008-04-01
A set of 29 benzothiadiazepine hydroxamates having selective tumor necrosis factor-alpha converting enzyme inhibitory activity were used to compare the quality and predictive power of 3D-quantitative structure-activity relationship, comparative molecular field analysis, and comparative molecular similarity indices models for the atom-based, centroid/atom-based, data-based, and docked conformer-based alignment. Removal of two outliers from the initial training set of molecules improved the predictivity of models. Among the 3D-quantitative structure-activity relationship models developed using the above four alignments, the database alignment provided the optimal predictive comparative molecular field analysis model for the training set with cross-validated r² (q²) = 0.510, non-cross-validated r² = 0.972, standard error of estimates (s) = 0.098, and F = 215.44 and the optimal comparative molecular similarity indices model with cross-validated r² (q²) = 0.556, non-cross-validated r² = 0.946, standard error of estimates (s) = 0.163, and F = 99.785. These models also showed the best test set prediction for six compounds with predictive r² values of 0.460 and 0.535, respectively. The contour maps obtained from 3D-quantitative structure-activity relationship studies were appraised for activity trends for the molecules analyzed. The comparative molecular similarity indices models exhibited good external predictivity as compared with that of comparative molecular field analysis models. The data generated from the present study helped us to further design and report some novel and potent tumor necrosis factor-alpha converting enzyme inhibitors.
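The q² statistic quoted above is the leave-one-out cross-validated analogue of r², q² = 1 - PRESS/SS. A generic sketch (ridge regression standing in for the PLS model used in CoMFA/CoMSIA):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut

def loo_q2(X, y, model):
    """Leave-one-out cross-validated q^2 = 1 - PRESS / SS, the statistic
    reported above for the 3D-QSAR models."""
    press = 0.0
    for train, test in LeaveOneOut().split(X):
        model.fit(X[train], y[train])
        press += (y[test][0] - model.predict(X[test])[0]) ** 2
    ss = np.sum((y - y.mean()) ** 2)
    return 1.0 - press / ss

# 27 training molecules (29 minus 2 outliers) with synthetic descriptors
rng = np.random.default_rng(0)
X = rng.normal(size=(27, 30))
y = X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=27)
print(f"q^2 = {loo_q2(X, y, Ridge(alpha=1.0)):.3f}")
```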
Wignall, Jessica A; Muratov, Eugene; Sedykh, Alexander; Guyton, Kathryn Z; Tropsha, Alexander; Rusyn, Ivan; Chiu, Weihsueh A
2018-05-01
Human health assessments synthesize human, animal, and mechanistic data to produce toxicity values that are key inputs to risk-based decision making. Traditional assessments are data-, time-, and resource-intensive, and they cannot be developed for most environmental chemicals owing to a lack of appropriate data. As recommended by the National Research Council, we propose a solution for predicting toxicity values for data-poor chemicals through development of quantitative structure-activity relationship (QSAR) models. We used a comprehensive database of chemicals with existing regulatory toxicity values from U.S. federal and state agencies to develop quantitative QSAR models. We compared QSAR-based model predictions to those based on high-throughput screening (HTS) assays. QSAR models for noncancer threshold-based values and cancer slope factors had cross-validation-based Q² of 0.25-0.45, mean model errors of 0.70-1.11 log10 units, and applicability domains covering >80% of environmental chemicals. Toxicity values predicted from QSAR models developed in this study were more accurate and precise than those based on HTS assays or mean-based predictions. A publicly accessible web interface to make predictions for any chemical of interest is available at http://toxvalue.org. An in silico tool that can predict toxicity values with an uncertainty of an order of magnitude or less can be used to quickly and quantitatively assess risks of environmental chemicals when traditional toxicity data or human health assessments are unavailable. This tool can fill a critical gap in the risk assessment and management of data-poor chemicals. https://doi.org/10.1289/EHP2998.
Cloud-Based Numerical Weather Prediction for Near Real-Time Forecasting and Disaster Response
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Case, Jonathan; Venners, Jason; Schroeder, Richard; Checchi, Milton; Zavodsky, Bradley; Limaye, Ashutosh; O'Brien, Raymond
2015-01-01
The use of cloud computing resources continues to grow within the public and private sector components of the weather enterprise as users become more familiar with cloud-computing concepts, and competition among service providers continues to reduce costs and other barriers to entry. Cloud resources can also provide capabilities similar to high-performance computing environments, supporting multi-node systems required for near real-time, regional weather predictions. Referred to as "Infrastructure as a Service", or IaaS, the use of cloud-based computing hardware in an on-demand payment system allows for rapid deployment of a modeling system in environments lacking access to a large, supercomputing infrastructure. Use of IaaS capabilities to support regional weather prediction may be of particular interest to developing countries that have not yet established large supercomputing resources, but would otherwise benefit from a regional weather forecasting capability. Recently, collaborators from NASA Marshall Space Flight Center and Ames Research Center have developed a scripted, on-demand capability for launching the NOAA/NWS Science and Training Resource Center (STRC) Environmental Modeling System (EMS), which includes pre-compiled binaries of the latest version of the Weather Research and Forecasting (WRF) model. The WRF-EMS provides scripting for downloading appropriate initial and boundary conditions from global models, along with higher-resolution vegetation, land surface, and sea surface temperature data sets provided by the NASA Short-term Prediction Research and Transition (SPoRT) Center. This presentation will provide an overview of the modeling system capabilities and benchmarks performed on the Amazon Elastic Compute Cloud (EC2) environment. In addition, the presentation will discuss future opportunities to deploy the system in support of weather prediction in developing countries supported by NASA's SERVIR Project, which provides capacity building activities in environmental monitoring and prediction across a growing number of regional hubs throughout the world. Capacity-building applications that extend numerical weather prediction to developing countries are intended to provide near real-time applications to benefit public health, safety, and economic interests, but may have a greater impact during disaster events by providing a source for local predictions of weather-related hazards, or impacts that local weather events may have during the recovery phase.
Building gene expression signatures indicative of transcription factor activation to predict AOP modulation Adverse outcome pathways (AOPs) are a framework for predicting quantitative relationships between molecular initiatin...
Active illumination using a digital micromirror device for quantitative phase imaging.
Shin, Seungwoo; Kim, Kyoohyun; Yoon, Jonghee; Park, YongKeun
2015-11-15
We present a powerful and cost-effective method for active illumination using a digital micromirror device (DMD) for quantitative phase-imaging techniques. By displaying binary illumination patterns on a DMD and applying appropriate spatial filtering, plane waves with various illumination angles are generated and impinged onto a sample. Complex optical fields of the sample obtained with various incident angles are then measured via Mach-Zehnder interferometry, from which a high-resolution 2D synthetic aperture phase image and a 3D refractive index tomogram of the sample are reconstructed. We demonstrate the fast and stable illumination-control capability of the proposed method by imaging colloidal spheres and biological cells. The capability of high-speed optical diffraction tomography is also demonstrated by measuring 3D Brownian motion of colloidal particles with the tomogram acquisition rate of 100 Hz.
NASA Astrophysics Data System (ADS)
Moser, Stefan; Nau, Siegfried; Salk, Manfred; Thoma, Klaus
2014-02-01
The in situ investigation of dynamic events, ranging from car crash to ballistics, often is key to the understanding of dynamic material behavior. In many cases the important processes and interactions happen on the scale of milli- to microseconds at speeds of 1000 m s⁻¹ or more. Often, 3D information is necessary to fully capture and analyze all relevant effects. High-speed 3D-visualization techniques are thus required for the in situ analysis. 3D-capable optical high-speed methods often are impaired by luminous effects and dust, while flash x-ray based methods usually deliver only 2D data. In this paper, a novel 3D-capable flash x-ray based method, in situ flash x-ray high-speed computed tomography (HSCT), is presented. The method is capable of producing 3D reconstructions of high-speed processes based on an undersampled dataset consisting of only a few (typically 3 to 6) x-ray projections. The major challenges are identified and discussed, and the chosen solution is outlined. The application is illustrated with an exemplary application of a 1000 m s⁻¹ high-speed impact event on the scale of microseconds. A quantitative analysis of the in situ measurement of the material fragments with a 3D reconstruction with 1 mm voxel size is presented and the results are discussed. The results show that the HSCT method allows valuable visual and quantitative mechanical information to be gained for the understanding and interpretation of high-speed events.
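Reconstruction from only a handful of projections is the crux of the method. As a rough illustration of that undersampled regime (not the authors' pipeline), a few SART iterations on a test phantom with four views:

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon_sart

# Few-view tomography sketch: reconstruct from only 4 projections, the
# regime of flash x-ray HSCT described above.
phantom = shepp_logan_phantom()
theta = np.array([0.0, 45.0, 90.0, 135.0])  # 4 flash x-ray directions
sinogram = radon(phantom, theta=theta)

recon = iradon_sart(sinogram, theta=theta)
for _ in range(4):                          # a few more SART iterations
    recon = iradon_sart(sinogram, theta=theta, image=recon)

err = np.sqrt(np.mean((recon - phantom) ** 2))
print(f"RMS reconstruction error with 4 views: {err:.3f}")
```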
The Dopamine Prediction Error: Contributions to Associative Models of Reward Learning
Nasser, Helen M.; Calu, Donna J.; Schoenbaum, Geoffrey; Sharpe, Melissa J.
2017-01-01
Phasic activity of midbrain dopamine neurons is currently thought to encapsulate the prediction-error signal described in Sutton and Barto’s (1981) model-free reinforcement learning algorithm. This phasic signal is thought to contain information about the quantitative value of reward, which transfers to the reward-predictive cue after learning. This is argued to endow the reward-predictive cue with the value inherent in the reward, motivating behavior toward cues signaling the presence of reward. Yet theoretical and empirical research has implicated prediction-error signaling in learning that extends far beyond a transfer of quantitative value to a reward-predictive cue. Here, we review the research which demonstrates the complexity of how dopaminergic prediction errors facilitate learning. After briefly discussing the literature demonstrating that phasic dopaminergic signals can act in the manner described by Sutton and Barto (1981), we consider how these signals may also influence attentional processing across multiple attentional systems in distinct brain circuits. Then, we discuss how prediction errors encode and promote the development of context-specific associations between cues and rewards. Finally, we consider recent evidence that shows dopaminergic activity contains information about causal relationships between cues and rewards that reflect information garnered from rich associative models of the world that can be adapted in the absence of direct experience. In discussing this research we hope to support the expansion of how dopaminergic prediction errors are thought to contribute to the learning process beyond the traditional concept of transferring quantitative value. PMID:28275359
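The Sutton and Barto (1981) style prediction error at the heart of this account is compact enough to state directly. A minimal TD(0) sketch showing the error shrinking at reward delivery as the cue's value grows, the "transfer" from reward to cue discussed above:

```python
import numpy as np

# Minimal temporal-difference sketch of the phasic prediction-error
# signal: delta = r + gamma * V(s') - V(s). One cue state (0) leads
# through a delay state (1) to reward; V(terminal) = 0.
gamma, lr, n_trials = 0.95, 0.1, 200
V = np.zeros(2)

for t in range(n_trials):
    delta_cue = 0.0 + gamma * V[1] - V[0]   # cue -> delay, no reward yet
    V[0] += lr * delta_cue
    delta_rew = 1.0 + gamma * 0.0 - V[1]    # delay -> terminal, reward r=1
    V[1] += lr * delta_rew
    if t in (0, 49, 199):
        print(f"trial {t:3d}: delta at reward={delta_rew:+.2f}, V(cue)={V[0]:.2f}")
```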
Applications of LANCE Data at SPoRT
NASA Technical Reports Server (NTRS)
Molthan, Andrew
2014-01-01
Short term Prediction Research and Transition (SPoRT) Center: Mission: Apply NASA and NOAA measurement systems and unique Earth science research to improve the accuracy of short term weather prediction at the regional/local scale. Goals: Evaluate and assess the utility of NASA and NOAA Earth science data and products and unique research capabilities to address operational weather forecast problems; Provide an environment which enables the development and testing of new capabilities to improve short term weather forecasts on a regional scale; Help ensure successful transition of new capabilities to operational weather entities for the benefit of society
NASA Technical Reports Server (NTRS)
Gardner, Kevin D.; Liu, Jong-Shang; Murthy, Durbha V.; Kruse, Marlin J.; James, Darrell
1999-01-01
AlliedSignal Engines, in cooperation with NASA GRC (National Aeronautics and Space Administration Glenn Research Center), completed an evaluation of recently-developed aeroelastic computer codes using test cases from the AlliedSignal Engines fan blisk and turbine databases. Test data included strain gage, performance, and steady-state pressure information obtained for conditions where synchronous or flutter vibratory conditions were found to occur. Aeroelastic codes evaluated included quasi 3-D UNSFLO (MIT Developed/AE Modified, Quasi 3-D Aeroelastic Computer Code), 2-D FREPS (NASA-Developed Forced Response Prediction System Aeroelastic Computer Code), and 3-D TURBO-AE (NASA/Mississippi State University Developed 3-D Aeroelastic Computer Code). Unsteady pressure predictions for the turbine test case were used to evaluate the forced response prediction capabilities of each of the three aeroelastic codes. Additionally, one of the fan flutter cases was evaluated using TURBO-AE. The UNSFLO and FREPS evaluation predictions showed good agreement with the experimental test data trends, but quantitative improvements are needed. UNSFLO over-predicted turbine blade response reductions, while FREPS under-predicted them. The inviscid TURBO-AE turbine analysis predicted no discernible blade response reduction, indicating the necessity of including viscous effects for this test case. For the TURBO-AE fan blisk test case, significant effort was expended getting the viscous version of the code to give converged steady flow solutions for the transonic flow conditions. Once converged, the steady solutions provided an excellent match with test data and the calibrated DAWES (AlliedSignal 3-D Viscous Steady Flow CFD Solver). However, efforts expended establishing quality steady-state solutions prevented exercising the unsteady portion of the TURBO-AE code during the present program. AlliedSignal recommends that unsteady pressure measurement data be obtained for both test cases examined for use in aeroelastic code validation.
Characterization of Infrastructure Materials using Nonlinear Ultrasonics
NASA Astrophysics Data System (ADS)
Liu, Minghe
In order to improve the safety, reliability, cost, and performance of civil and mechanical structures/components, it is necessary to develop techniques that are capable of characterizing and quantifying the amount of distributed damage in engineering materials before any detectable discontinuities (cracks, delaminations, voids, etc.) appear. In this dissertation, novel nonlinear ultrasonic NDE methods are developed and applied to characterize cumulative damage such as fatigue damage in metallic materials and degradation of cement-based materials due to chemical reactions. First, nonlinear Rayleigh surface waves are used to measure the near-surface residual stresses in shot-peened aluminum alloy (AA 7075) samples. Results show that the nonlinear Rayleigh wave is very sensitive to near-surface residual stresses, and has the potential to quantitatively detect them. Second, a novel two-wave mixing method is theoretically developed and numerically verified. This method is then successfully applied to detect the fatigue damage in aluminum alloy (AA 6061) samples subjected to monotonic compression. In addition to its high sensitivity to fatigue damage, this collinear wave mixing method allows the measurement over a specific region of interest in the specimen, and this capability makes it possible to obtain spatial distribution of fatigue damage through the thickness direction of the sample by simply timing the transducers. Third, the nonlinear wave mixing method is used to characterize the degradation of cement-based materials caused by alkali-silica reaction (ASR). It is found that the nonlinear ultrasonic method is sensitive to detect ASR damage at very early stage, and has the potential to identify the different damage stages. Finally, a micromechanics-based chemo-mechanical model is developed which relates the acoustic nonlinearity parameter to ASR damage. This model provides a way to quantitatively predict the changes in the acoustic nonlinearity parameter due to ASR damage, which can be used to guide experimental measurements for nondestructive evaluation of ASR damage.
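A common thread in these methods is estimating an acoustic nonlinearity parameter from harmonic amplitudes, with β commonly taken as proportional to A2/A1². A sketch on a synthetic waveform (the tone frequencies and distortion level are illustrative):

```python
import numpy as np

# Relative nonlinearity parameter from the fundamental (A1) and second
# harmonic (A2) amplitudes extracted by FFT.
fs, f0, n = 50e6, 5e6, 5000                  # 50 MHz sampling, 5 MHz tone
t = np.arange(n) / fs
a1_true, a2_true = 1.0, 0.02                 # weak quadratic distortion
sig = a1_true * np.sin(2 * np.pi * f0 * t) + a2_true * np.sin(4 * np.pi * f0 * t)

w = np.hanning(n)
spec = np.abs(np.fft.rfft(sig * w)) * 2 / w.sum()
freqs = np.fft.rfftfreq(n, 1 / fs)
A1 = spec[np.argmin(np.abs(freqs - f0))]
A2 = spec[np.argmin(np.abs(freqs - 2 * f0))]

beta_rel = A2 / A1**2                        # relative beta, up to constants
print(f"A1={A1:.3f}, A2={A2:.4f}, beta_rel={beta_rel:.4f}")
```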
Resolving the Aerosol Piece of the Global Climate Picture
NASA Astrophysics Data System (ADS)
Kahn, R. A.
2017-12-01
Factors affecting our ability to calculate climate forcing and estimate model predictive skill include direct radiative effects of aerosols and their indirect effects on clouds. Several decades of Earth-observing satellite observations have produced a global aerosol column-amount (AOD) record, but an aerosol microphysical property record required for climate and many air quality applications is lacking. Surface-based photometers offer qualitative aerosol-type classification, and several space-based instruments map aerosol air-mass types under favorable conditions. However, aerosol hygroscopicity, mass extinction efficiency (MEE), and quantitative light absorption, must be obtained from in situ measurements. Completing the aerosol piece of the climate picture requires three elements: (1) continuing global AOD and qualitative type mapping from space-based, multi-angle imagers and aerosol vertical distribution from near-source stereo imaging and downwind lidar, (2) systematic, quantitative in situ observations of particle properties unobtainable from space, and (3) continuing transport modeling to connect observations to sources, and extrapolate limited sampling in space and time. At present, the biggest challenges to producing the needed aerosol data record are: filling gaps in particle property observations, maintaining global observing capabilities, and putting the pieces together. Obtaining the PDFs of key particle properties, adequately sampled, is now the leading observational deficiency. One simplifying factor is that, for a given aerosol source and season, aerosol amounts often vary, but particle properties tend to be repeatable. SAM-CAAM (Systematic Aircraft Measurements to Characterize Aerosol Air Masses), a modest aircraft payload deployed frequently could fill this gap, adding value to the entire satellite data record, improving aerosol property assumptions in retrieval algorithms, and providing MEEs to translate between remote-sensing optical constraints and aerosol mass book-kept in climate models [Kahn et al., BAMS 2017]. This will also improve connections between remote-sensing particle types and those defined in models. The third challenge, maintaining global observing capabilities, requires continued community effort and good budgetary fortune.
Bekker, Cindy; Voogd, Eef; Fransman, Wouter; Vermeulen, Roel
2016-11-01
Control banding can be used as a first-tier assessment to control worker exposure to nano-objects and their aggregates and agglomerates (NOAA). In a second tier, more advanced modelling approaches are needed to produce quantitative exposure estimates. As currently no general quantitative nano-specific exposure models are available, this study evaluated the validity and applicability of using a generic exposure assessment model (the Advanced REACH Tool-ART) for occupational exposure to NOAA. The predictive capability of ART for occupational exposure to NOAA was tested by calculating the relative bias and correlations (Pearson) between the model estimates and measured concentrations using a dataset of 102 NOAA exposure measurements collected during experimental and workplace exposure studies. Moderate to (very) strong correlations between the ART estimates and measured concentrations were found. Estimates correlated better with measured concentration levels of dust (r = 0.76, P < 0.01) than liquid aerosols (r = 0.51, P = 0.19). However, ART overestimated the measured NOAA concentrations for both the experimental and field measurements (factor 2-127). Overestimation was highest at low concentrations and decreased with increasing concentration. Correlations seemed to be better when looking at the nanomaterials individually compared to combined scenarios, indicating that nanomaterial-specific characteristics are not well captured within the mechanistic model of the ART. Although ART in its current state is not capable of estimating occupational exposure to NOAA, the strong correlations for the individual nanomaterials indicate that the ART (and potentially other generic exposure models) has the potential to be extended or adapted for exposure to NOAA. Future studies investigating the potential to estimate exposure to NOAA should incorporate nanomaterial-specific characteristics more explicitly in their models. © The Author 2016. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
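The evaluation statistics named here, relative bias and Pearson correlation between model estimates and measurements, are straightforward to compute. A hedged Python sketch with made-up concentrations (the study's 102-measurement dataset is not reproduced, and correlating on the log scale is our assumption, a common choice for exposure data):

    import numpy as np
    from scipy import stats

    # Illustrative numbers only, not the study's data.
    measured  = np.array([12.0, 35.0, 80.0, 150.0, 400.0])    # measured concentrations
    estimated = np.array([95.0, 210.0, 330.0, 520.0, 900.0])  # model estimates

    r, p = stats.pearsonr(np.log(estimated), np.log(measured))  # correlation on log scale
    rel_bias = np.mean((estimated - measured) / measured)       # relative bias
    factors = estimated / measured                              # overestimation factor per sample
    print(f"Pearson r = {r:.2f} (p = {p:.3f}); mean relative bias = {rel_bias:.1f}")
    print("overestimation factors:", np.round(factors, 1))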
Aircraft noise prediction program user's manual
NASA Technical Reports Server (NTRS)
Gillian, R. E.
1982-01-01
The Aircraft Noise Prediction Program (ANOPP) predicts aircraft noise using the best methods available. This manual is designed to give the user an understanding of the capabilities of ANOPP and to show how to formulate problems and obtain solutions by using these capabilities. Sections within the manual document basic ANOPP concepts, ANOPP usage, ANOPP functional modules, the ANOPP control statement procedure library, and the ANOPP permanent data base. Appendixes to the manual include information on preparing job decks for the operating systems in use, error diagnostics and recovery techniques, and a glossary of ANOPP terms.
Extensions and evaluations of a general quantitative theory of forest structure and dynamics
Enquist, Brian J.; West, Geoffrey B.; Brown, James H.
2009-01-01
Here, we present the second part of a quantitative theory for the structure and dynamics of forests under demographic and resource steady state. The theory is based on individual-level allometric scaling relations for how trees use resources, fill space, and grow. These scale up to determine emergent properties of diverse forests, including size–frequency distributions, spacing relations, canopy configurations, mortality rates, population dynamics, successional dynamics, and resource flux rates. The theory uniquely makes quantitative predictions for both stand-level scaling exponents and normalizations. We evaluate these predictions by compiling and analyzing macroecological datasets from several tropical forests. The close match between theoretical predictions and data suggests that forests are organized by a set of very general scaling rules. Our mechanistic theory is based on allometric scaling relations, is complementary to “demographic theory,” but is fundamentally different in approach. It provides a quantitative baseline for understanding deviations from predictions due to other factors, including disturbance, variation in branching architecture, asymmetric competition, resource limitation, and other sources of mortality, which are not included in the deliberately simplified theory. The theory should apply to a wide range of forests despite large differences in abiotic environment, species diversity, and taxonomic and functional composition. PMID:19363161
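The stand-level scaling exponents the theory predicts can be checked by a log-log regression on binned size-frequency data. A minimal Python sketch on synthetic stem diameters drawn from the roughly D^-2 size-frequency distribution this family of allometric theories predicts (the compiled tropical-forest datasets themselves are not reproduced):

    import numpy as np

    rng = np.random.default_rng(0)
    diameters = rng.pareto(1.0, 5000) + 1.0      # synthetic stem diameters, pdf ~ D^-2 for D >= 1

    bins = np.logspace(0, 2, 20)                 # logarithmic diameter bins, 1-100 cm say
    counts, edges = np.histogram(diameters, bins=bins)
    centers = np.sqrt(edges[:-1] * edges[1:])    # geometric bin centers
    density = counts / np.diff(edges)            # stems per unit diameter

    mask = counts > 0
    slope, intercept = np.polyfit(np.log(centers[mask]), np.log(density[mask]), 1)
    print(f"fitted size-frequency exponent: {slope:.2f}")   # theory predicts about -2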
Quantitative prediction of oral cancer risk in patients with oral leukoplakia.
Liu, Yao; Li, Yicheng; Fu, Yue; Liu, Tong; Liu, Xiaoyong; Zhang, Xinyan; Fu, Jie; Guan, Xiaobing; Chen, Tong; Chen, Xiaoxin; Sun, Zheng
2017-07-11
Exfoliative cytology has been widely used for early diagnosis of oral squamous cell carcinoma. We have developed an oral cancer risk index using the DNA index value to quantitatively assess cancer risk in patients with oral leukoplakia, but with limited success. In order to improve the performance of the risk index, we collected exfoliative cytology, histopathology, and clinical follow-up data from two independent cohorts of normal, leukoplakia and cancer subjects (training set and validation set). Peaks were defined on the basis of positive first derivatives, and modern machine learning techniques were utilized to build statistical prediction models on the reconstructed data. Random forest was found to be the best model, with high sensitivity (100%) and specificity (99.2%). Using the Peaks-Random Forest model, we constructed an index (OCRI2) as a quantitative measurement of cancer risk. Among 11 leukoplakia patients with an OCRI2 over 0.5, 4 (36.4%) developed cancer during follow-up (23 ± 20 months), whereas 3 (5.3%) of 57 leukoplakia patients with an OCRI2 less than 0.5 developed cancer (32 ± 31 months). OCRI2 is better than other methods in predicting oral squamous cell carcinoma during follow-up. In conclusion, we have developed an exfoliative cytology-based method for quantitative prediction of cancer risk in patients with oral leukoplakia.
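An OCRI2-like index can be read off as the predicted class probability of a fitted random forest, with the 0.5 cut-off quoted above. A sketch assuming hypothetical peak features; the paper's actual DNA-index feature construction is not reproduced:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical feature matrix: peak features extracted from DNA-index histograms.
    rng = np.random.default_rng(1)
    X_train = rng.normal(size=(120, 8))
    y_train = rng.integers(0, 2, size=120)   # 0 = non-progressing, 1 = cancer (synthetic labels)

    clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)

    # An OCRI2-style index: the forest's predicted cancer probability,
    # thresholded at 0.5 as in the abstract.
    X_new = rng.normal(size=(3, 8))
    ocri2 = clf.predict_proba(X_new)[:, 1]
    print(ocri2, ocri2 > 0.5)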
Evaluation of a habitat capability model for nongame birds in the Black Hills, South Dakota
Todd R. Mills; Mark A. Rumble; Lester D. Flake
1996-01-01
Habitat models, used to predict consequences of land management decisions on wildlife, can have considerable economic effect on management decisions. The Black Hills National Forest uses such a habitat capability model (HABCAP), but its accuracy is largely unknown. We tested this model's predictive accuracy for nongame birds in 13 vegetative structural stages of...
Design of the Next Generation Aircraft Noise Prediction Program: ANOPP2
NASA Technical Reports Server (NTRS)
Lopes, Leonard V., Dr.; Burley, Casey L.
2011-01-01
The requirements, constraints, and design of NASA's next generation Aircraft NOise Prediction Program (ANOPP2) are introduced. Similar to its predecessor (ANOPP), ANOPP2 provides the U.S. Government with an independent aircraft system noise prediction capability that can be used as a stand-alone program or within larger trade studies that include performance, emissions, and fuel burn. The ANOPP2 framework is designed to facilitate the combination of acoustic approaches of varying fidelity for the analysis of noise from conventional and unconventional aircraft. ANOPP2 integrates noise prediction and propagation methods, including those found in ANOPP, into a unified system that is compatible for use within general aircraft analysis software. The design of the system is described in terms of its functionality and capability to perform predictions accounting for distributed sources, installation effects, and propagation through a non-uniform atmosphere including refraction and the influence of terrain. The philosophy of mixed fidelity noise prediction through the use of nested Ffowcs Williams and Hawkings surfaces is presented and specific issues associated with its implementation are identified. Demonstrations for a conventional twin-aisle and an unconventional hybrid wing body aircraft configuration are presented to show the feasibility and capabilities of the system. Isolated model-scale jet noise predictions are also presented using high-fidelity and reduced order models, further demonstrating ANOPP2's ability to provide predictions for model-scale test configurations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurtz, Sarah; Repins, Ingrid L; Hacke, Peter L
Continued growth of PV system deployment would be enhanced by quantitative, low-uncertainty predictions of the degradation and failure rates of PV modules and systems. The intended product lifetime (decades) far exceeds the product development cycle (months), limiting our ability to reduce the uncertainty of the predictions for this rapidly changing technology. Yet, business decisions (setting insurance rates, analyzing return on investment, etc.) require quantitative risk assessment. Moving toward more quantitative assessments requires consideration of many factors, including the intended application, consequence of a possible failure, variability in the manufacturing, installation, and operation, as well as uncertainty in the measured acceleration factors, which provide the basis for predictions based on accelerated tests. As the industry matures, it is useful to periodically assess the overall strategy for standards development and prioritization of research to provide a technical basis both for the standards and the analysis related to the application of those. To this end, this paper suggests a tiered approach to creating risk assessments. Recent and planned potential improvements in international standards are also summarized.
Acoustic Prediction State of the Art Assessment
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2007-01-01
The acoustic assessment task for both the Subsonic Fixed Wing and the Supersonic projects under NASA's Fundamental Aeronautics Program was designed to assess the current state-of-the-art in noise prediction capability and to establish baselines for gauging future progress. The documentation of our current capabilities included quantifying the differences between predictions of noise from computer codes and measurements of noise from experimental tests. Quantifying the accuracy of both the computed and experimental results further enhanced the credibility of the assessment. This presentation gives sample results from codes representative of NASA's capabilities in aircraft noise prediction, both for systems and components. These include semi-empirical, statistical, analytical, and numerical codes. System-level results are shown for both aircraft and engines. Component-level results are shown for a landing gear prototype, for fan broadband noise, for jet noise from a subsonic round nozzle, and for propulsion airframe aeroacoustic interactions. Additional results are shown for modeling of the acoustic behavior of duct acoustic lining and the attenuation of sound in lined ducts with flow.
Correlation of admissions statistics to graduate student success in medical physics
McSpadden, Erin; Rakowski, Joseph; Nalichowski, Adrian; Yudelev, Mark; Snyder, Michael
2014-01-01
The purpose of this work is to develop metrics for evaluation of medical physics graduate student performance, assess relationships between success and other quantifiable factors, and determine whether graduate student performance can be accurately predicted by admissions statistics. A cohort of 108 medical physics graduate students from a single institution were rated for performance after matriculation based on final scores in specific courses, first year graduate Grade Point Average (GPA), performance on the program exit exam, performance in oral review sessions, and faculty rating. Admissions statistics including matriculating program (MS vs. PhD); undergraduate degree type, GPA, and country; graduate degree; general and subject GRE scores; traditional vs. nontraditional status; and ranking by admissions committee were evaluated for potential correlation with the performance metrics. GRE verbal and quantitative scores were correlated with higher scores in the most difficult courses in the program and with the program exit exam; however, the GRE section most correlated with overall faculty rating was the analytical writing section. Students with undergraduate degrees in engineering had a higher faculty rating than those from other disciplines and faculty rating was strongly correlated with undergraduate country. Undergraduate GPA was not statistically correlated with any success metrics investigated in this study. However, the high degree of selection on GPA and quantitative GRE scores during the admissions process results in relatively narrow ranges for these quantities. As such, these results do not necessarily imply that one should not strongly consider traditional metrics, such as undergraduate GPA and quantitative GRE score, during the admissions process. They suggest that once applicants have been initially filtered by these metrics, additional selection should be performed via the other metrics shown here to be correlated with success. The parameters used to make admissions decisions for our program are accurate in predicting student success, as illustrated by the very strong statistical correlation between admissions rank and course average, first year graduate GPA, and faculty rating (p<0.002). Overall, this study indicates that an undergraduate degree in physics should not be considered a fundamental requirement for entry into our program and that within the relatively narrow range of undergraduate GPA and quantitative GRE scores of those admitted into our program, additional variations in these metrics are not important predictors of success. While the high degree of selection on particular statistics involved in the admissions process, along with the relatively small sample size, makes it difficult to draw concrete conclusions about the meaning of correlations here, these results suggest that success in medical physics is based on more than quantitative capabilities. Specifically, they indicate that analytical and communication skills play a major role in student success in our program, as well as predicted future success by program faculty members. Finally, this study confirms that our current admissions process is effective in identifying candidates who will be successful in our program and are expected to be successful after graduation, and provides additional insight useful in improving our admissions selection process. PACS number: 01.40.‐d PMID:24423842
A general method for bead-enhanced quantitation by flow cytometry
Montes, Martin; Jaensson, Elin A.; Orozco, Aaron F.; Lewis, Dorothy E.; Corry, David B.
2009-01-01
Flow cytometry provides accurate relative cellular quantitation (percent abundance) of cells from diverse samples, but technical limitations of most flow cytometers preclude accurate absolute quantitation. Several quantitation standards are now commercially available which, when added to samples, permit absolute quantitation of CD4+ T cells. However, these reagents are limited by their cost, technical complexity, requirement for additional software and/or limited applicability. Moreover, few studies have validated the use of such reagents in complex biological samples, especially for quantitation of non-T cells. Here we show that addition to samples of known quantities of polystyrene fluorescence standardization beads permits accurate quantitation of CD4+ T cells from complex cell samples. This procedure, here termed single bead-enhanced cytofluorimetry (SBEC), was equally capable of enumerating eosinophils as well as subcellular fragments of apoptotic cells, moieties with very different optical and fluorescent characteristics. Relative to other proprietary products, SBEC is simple, inexpensive and requires no special software, suggesting that the method is suitable for the routine quantitation of most cells and other particles by flow cytometry. PMID:17067632
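The arithmetic behind bead-enhanced absolute counting is a simple ratio: the known number of spiked beads calibrates events per unit volume. A minimal Python sketch of that calculation (generic bead-counting arithmetic; the exact SBEC gating protocol is described in the paper, not here):

    def absolute_count(cell_events, bead_events, beads_added, sample_volume_ul):
        """Absolute cell concentration (cells/uL) from bead-calibrated cytometry.

        The ratio of gated cell events to gated bead events scales the known
        number of beads spiked into a known sample volume.
        """
        return (cell_events / bead_events) * beads_added / sample_volume_ul

    # e.g. 5,000 CD4+ events, 2,500 bead events, 50,000 beads spiked into 100 uL:
    print(absolute_count(5000, 2500, 50000, 100.0), "cells/uL")   # -> 1000.0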
Impact of implementation choices on quantitative predictions of cell-based computational models
NASA Astrophysics Data System (ADS)
Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.
2017-09-01
'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
Dolled-Filhart, Marisa P; Gustavson, Mark D
2012-11-01
Translational oncology has been improved by using tissue microarrays (TMAs), which facilitate biomarker analysis of large cohorts on a single slide. This has allowed for rapid analysis and validation of potential biomarkers for prognostic and predictive value, as well as for evaluation of biomarker prevalence. Coupled with quantitative analysis of immunohistochemical (IHC) staining, objective and standardized biomarker data from tumor samples can further advance companion diagnostic approaches for the identification of drug-responsive or resistant patient subpopulations. This review covers the advantages, disadvantages and applications of TMAs for biomarker research. Research literature and reviews of TMAs and quantitative image analysis methodology have been surveyed for this review (with an AQUA® analysis focus). Applications such as multi-marker diagnostic development and pathway-based biomarker subpopulation analyses are described. Tissue microarrays are a useful tool for biomarker analyses including prevalence surveys, disease progression assessment and addressing potential prognostic or predictive value. By combining quantitative image analysis with TMAs, analyses will be more objective and reproducible, allowing for more robust IHC-based diagnostic test development. Quantitative multi-biomarker IHC diagnostic tests that can predict drug response will allow for greater success of clinical trials for targeted therapies and provide more personalized clinical decision making.
Crausman, R S; Ferguson, G; Irvin, C G; Make, B; Newell, J D
1995-06-01
We assessed the value of quantitative high-resolution computed tomography (CT) as a diagnostic and prognostic tool in smoking-related emphysema. We performed an inception cohort study of 14 patients referred with emphysema. The diagnosis of emphysema was based on a compatible history, physical examination, chest radiograph, CT scan of the lung, and pulmonary physiologic evaluation. As a group, those who underwent exercise testing were hyperinflated (percentage predicted total lung capacity +/- standard error of the mean = 133 +/- 9%), and there was evidence of air trapping (percentage predicted residual volume = 318 +/- 31%) and airflow limitation (forced expiratory volume in 1 sec [FEV1] = 40 +/- 7%). The exercise performance of the group was severely limited (maximum achievable workload = 43 +/- 6%) and was characterized by prominent ventilatory, gas exchange, and pulmonary vascular abnormalities. The quantitative CT index was markedly elevated in all patients (76 +/- 9; n = 14; normal < 4). There were correlations between this quantitative CT index and measures of airflow limitation (FEV1 r2 = .34, p = .09; FEV1/forced vital capacity r2 = .46, p = .04) and between maximum workload achieved (r2 = .93, p = .0001) and maximum oxygen utilization (r2 = .83, p = .0007). Quantitative chest CT assessment of disease severity is correlated with the degree of airflow limitation and exercise impairment in pulmonary emphysema.
Kernel-based whole-genome prediction of complex traits: a review.
Morota, Gota; Gianola, Daniel
2014-01-01
Prediction of genetic values has been a focus of applied quantitative genetics since the beginning of the 20th century, with renewed interest following the advent of the era of whole genome-enabled prediction. Opportunities offered by the emergence of high-dimensional genomic data fueled by post-Sanger sequencing technologies, especially molecular markers, have driven researchers to extend Ronald Fisher and Sewall Wright's models to confront new challenges. In particular, kernel methods are gaining consideration as a regression method of choice for genome-enabled prediction. Complex traits are presumably influenced by many genomic regions working in concert with others (clearly so when considering pathways), thus generating interactions. Motivated by this view, a growing number of statistical approaches based on kernels attempt to capture non-additive effects, either parametrically or non-parametrically. This review centers on whole-genome regression using kernel methods applied to a wide range of quantitative traits of agricultural importance in animals and plants. We discuss various kernel-based approaches tailored to capturing total genetic variation, with the aim of arriving at an enhanced predictive performance in the light of available genome annotation information. Connections between prediction machines born in animal breeding, statistics, and machine learning are revisited, and their empirical prediction performance is discussed. Overall, while some encouraging results have been obtained with non-parametric kernels, recovering non-additive genetic variation in a validation dataset remains a challenge in quantitative genetics.
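The canonical kernel machine in this literature is kernel (RKHS) ridge regression, with coefficients alpha = (K + lambda*I)^{-1} y. A self-contained Python sketch on synthetic SNP codes with an epistatic term of the kind non-additive kernels are meant to capture; the Gaussian kernel, bandwidth, and regularization are illustrative assumptions:

    import numpy as np

    def gaussian_kernel(X1, X2, bandwidth):
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))

    rng = np.random.default_rng(2)
    X = rng.integers(0, 3, size=(200, 50)).astype(float)          # SNP codes 0/1/2 (synthetic)
    y = X[:, :5].sum(1) + 0.5 * X[:, 0] * X[:, 1] + rng.normal(size=200)  # additive + epistatic

    K = gaussian_kernel(X, X, bandwidth=5.0)
    lam = 1.0                                                     # ridge penalty
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)          # RKHS solution
    y_hat = gaussian_kernel(X[:20], X, 5.0) @ alpha               # predict 20 individuals
    print(np.corrcoef(y_hat, y[:20])[0, 1])

In genome-enabled prediction the predictive performance discussed above would be judged on held-out individuals rather than in-sample, as here.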
NASA Technical Reports Server (NTRS)
Reisel, John R.; Laurendeau, Normand M.
1994-01-01
Laser-induced fluorescence (LIF) has been applied to the quantitative measurement of nitric oxide (NO) in premixed, laminar, high-pressure flames. Flame chemistry was also studied using three current kinetics schemes to determine the predictive capabilities of each mechanism with respect to NO concentrations. The flames studied were low-temperature (1600 < T < 1850 K) C2H6/O2/N2 and C2H6/O2/N2 flames, and high-temperature (2100 < T < 2300 K) C2H6/O2/N2 flames. Laser-saturated fluorescence (LSF) was initially used to measure the NO concentrations. However, while the excitation transition was well saturated at atmospheric pressure, the fluorescence behavior was basically linear with respect to laser power at pressures above 6 atm. Measurements and calculations demonstrated that the fluorescence quenching rate variation is negligible for LIF measurements of NO at a given pressure. Therefore, linear LIF was used to perform quantitative measurements of NO concentration in these high-pressure flames. The transportability of a calibration factor from one set of flame conditions to another was also investigated by considering changes in the absorption and quenching environment for different flame conditions. The feasibility of performing LIF measurements of NO in turbulent flames was studied; the single-shot detection limit was determined to be 2 ppm.
Saiz-Urra, Liane; Racero, Juan C; Macías-Sáchez, Antonio J; Hernández-Galán, Rosario; Hanson, James R; Perez-Gonzalez, Maykel; Collado, Isidro G
2009-03-25
Twenty-three clovane derivatives, nine described here for the first time, bearing substituents on carbon C-2, have been synthesized and evaluated for their in vitro antifungal activity against the phytopathogenic fungus Botrytis cinerea. The results showed that compounds 9, 14, 16, and 18 bearing nitrogen atoms in the chain attached at C-2 displayed potent antifungal activity, whereas mercapto derivatives 13, 19, and 22 displayed low activity. The antifungal activity showed a clear structure-activity relationship (SAR) trend, which confirmed the importance of the nature of the C-2 chain on the antifungal activity. On the basis of these observations, the metabolism of compounds 8 and 14 by the fungus B. cinerea, and the metabolism of other clovanes by this fungus, described previously, a pro-drug action mechanism for 2-alkoxyclovane compounds is proposed. Quantitative structure-activity relationship (QSAR) studies were performed to rationalize the results and to suggest further optimization, using a topological sub-structural molecular design (TOPS-MODE) approach. The model displayed good fit and predictive capability, describing 85.5% of the experimental variance, with a standard deviation of 9.502 and yielding high values of cross-validation determination coefficients (q2CV-LOO = 0.784 and q2boot = 0.673). The most significant variables were the spectral moments weighted by bond dipole moment (Dip), hydrophobicity (Hyd), and the combined dipolarity/polarizability Abraham molecular descriptor (Ab-pi2H).
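The cross-validated determination coefficient quoted above, q2(CV-LOO), is one minus PRESS over the total sum of squares under leave-one-out prediction. A Python sketch of that computation with synthetic stand-ins for the TOPS-MODE spectral-moment descriptors (Dip, Hyd, Ab-pi2H); the linear model form is an assumption for illustration:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(3)
    X = rng.normal(size=(23, 3))                     # stand-ins for Dip, Hyd, Ab-pi2H moments
    y = 50 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=5, size=23)  # synthetic activity

    y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    press = ((y - y_loo) ** 2).sum()                 # predictive residual sum of squares
    tss = ((y - y.mean()) ** 2).sum()
    print("q2_LOO =", 1 - press / tss)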
A quantitative, comprehensive analytical model for "fast" magnetic reconnection in Hall MHD
NASA Astrophysics Data System (ADS)
Simakov, Andrei N.
2008-11-01
Magnetic reconnection in nature usually happens on fast (e.g. dissipation independent) time scales. While such scales have been observed computationally [1], a fundamental analytical model capable of explaining them has been lacking. Here, we propose such a quantitative model for 2D Hall MHD reconnection without a guide field. The model recovers the Sweet-Parker and the electron MHD [2] results in the appropriate limits of the ion inertial length, di, and is valid everywhere in between [3]. The model predicts the dissipation region aspect ratio and the reconnection rate Ez in terms of dissipation and inertial parameters, and has been found to be in excellent agreement with non-linear simulations. It confirms a number of long-standing empirical results and resolves several controversies. In particular, we find that both open X-point and elongated dissipation regions allow "fast" reconnection and that Ez depends on di. Moreover, when applied to electron-positron plasmas, the model demonstrates that fast dispersive waves are not instrumental for "fast" reconnection [4]. [1] J. Birn et al., J. Geophys. Res. 106, 3715 (2001). [2] L. Chacón, A. N. Simakov, and A. Zocco, Phys. Rev. Lett. 99, 235001 (2007). [3] A. N. Simakov and L. Chacón, submitted to Phys. Rev. Lett. [4] L. Chacón, A. N. Simakov, V. Lukin, and A. Zocco, Phys. Rev. Lett. 101, 025003 (2008).
Lomiwes, D; Reis, M M; Wiklund, E; Young, O A; North, M
2010-12-01
The potential of near infrared (NIR) spectroscopy as an on-line method to quantify glycogen and predict ultimate pH (pH(u)) of pre rigor beef M. longissimus dorsi (LD) was assessed. NIR spectra (538 to 1677 nm) of pre rigor LD from steers, cows and bulls were collected early post mortem, and measurements were made for pre rigor glycogen concentration and pH(u). Spectral and measured data were combined to develop models to quantify glycogen and predict the pH(u) of pre rigor LD. NIR spectra and pre rigor predicted values obtained from quantitative models were shown to be poorly correlated with glycogen and pH(u) (r(2)=0.23 and 0.20, respectively). Qualitative models developed to categorize each muscle according to its pH(u) were able to correctly categorize 42% of high pH(u) samples. The optimum qualitative and quantitative models derived from NIR spectra showed low correlation between predicted values and reference measurements. Copyright © 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.
Newman, M C; McCloskey, J T; Tatara, C P
1998-01-01
Ecological risk assessment can be enhanced with predictive models for metal toxicity. Modeling of published data was done under the simplifying assumption that intermetal trends in toxicity reflect relative metal-ligand complex stabilities. This idea has been invoked successfully since 1904 but has yet to be applied widely in quantitative ecotoxicology. Intermetal trends in toxicity were successfully modeled with ion characteristics reflecting metal binding to ligands for a wide range of effects. Most models were useful for predictive purposes based on an F-ratio criterion and cross-validation, but anomalous predictions did occur if speciation was ignored. In general, models for metals with the same valence (i.e., divalent metals) were better than those combining mono-, di-, and trivalent metals. The softness parameter (sigma p) and the absolute value of the log of the first hydrolysis constant (|log KOH|) were especially useful in model construction. Also, delta E0 contributed substantially to several of the two-variable models. In contrast, quantitative attempts to predict metal interactions in binary mixtures based on metal-ligand complex stabilities were not successful. PMID:9860900
Early Transition and Use of VIIRS and GOES-R Products by NWS Forecast Offices
NASA Technical Reports Server (NTRS)
Fuell, Kevin K.; Smith, Mathew; Jedlovec, Gary
2012-01-01
The Visible Infrared Imaging Radiometer Suite (VIIRS) on the NPOESS Preparatory Project (NPP) satellite, part of the Joint Polar Satellite System (JPSS), and the ABI and GLM sensors scheduled for the GOES-R geostationary satellite will bring advanced observing capabilities to the operational weather community. The NASA Short-term Prediction Research and Transition (SPoRT) project at Marshall Space Flight Center has been facilitating the use of real-time experimental and research satellite data by NWS Weather Forecast Offices (WFOs) for a number of years to demonstrate the planned capabilities of future sensors to address particular forecast challenges through improved situational awareness and short-term weather forecasts. For the NOAA GOES-R Proving Ground (PG) activity, SPoRT is developing and disseminating selected GOES-R proxy products to collaborating WFOs and National Centers. SPoRT developed a pseudo-Geostationary Lightning Mapper product and helped in the transition of the Algorithm Working Group (AWG) Convective Initiation (CI) proxy product for the Hazardous Weather Testbed (HWT) Spring Experiment. Along with its partner WFOs, SPoRT is evaluating MODIS/GOES Hybrid products, which bring ABI-like data sets from existing NASA instrumentation in front of the forecaster for everyday use. The Hybrid uses near real-time MODIS imagery to demonstrate future ABI capabilities, while utilizing standard GOES imagery to provide the temporal frequency of geostationary imagery expected by operational forecasters. In addition, SPoRT is collaborating with the GOES-R hydrology AWG to transition a baseline proxy product for rainfall rate / quantitative precipitation estimate (QPE) to the OCONUS regions. For VIIRS, SPoRT is demonstrating multispectral observing capabilities and the utility of low-light channels not previously available on operational weather satellites to address a variety of weather forecast challenges. This presentation will discuss the results of transitioning these products to collaborating WFOs throughout the country.
USDA-ARS?s Scientific Manuscript database
Experimental designs that exploit family information can provide substantial predictive power in quantitative trait variant discovery projects. Concordance between quantitative trait locus genotype as determined by the a posteriori granddaughter design and marker genotype was determined for 29 trai...
Qualitative Versus Quantitative Social Support as a Predictor of Depression in the Elderly.
ERIC Educational Resources Information Center
Chwalisz, Kathleen D.; And Others
This study examined the relationship between qualitative and quantitative indicators of social support in the prediction of depression. Quantitative indicators were examined with regard to their direct effects on depression as well as their indirect effects through their relationship to perceived social support. Subjects were 301…
Xie, Zhengwei; Zhang, Tianyu; Ouyang, Qi
2018-02-01
One of the long-expected goals of genome-scale metabolic modelling is to evaluate the influence of the perturbed enzymes on flux distribution. Both ordinary differential equation (ODE) models and constraint-based models, like flux balance analysis (FBA), lack the capacity to perform metabolic control analysis (MCA) for large-scale networks. In this study, we developed a hyper-cube shrink algorithm (HCSA) to incorporate the enzymatic properties into the FBA model by introducing a pseudo reaction V constrained by enzymatic parameters. Our algorithm uses the enzymatic information quantitatively rather than qualitatively. We first demonstrate the concept by applying HCSA to a simple three-node network, whereby we obtained a good correlation between flux and enzyme abundance. We then validate its prediction by comparison with ODE and with a synthetic network producing violacein and analogues in Saccharomyces cerevisiae. We show that HCSA can mimic the steady-state results of ODE. Finally, we show its capability of predicting the flux distribution in genome-scale networks by applying it to sporulation in yeast. We show the ability of HCSA to operate without biomass flux and perform MCA to determine rate-limiting reactions. The algorithm was implemented in Matlab and C++. The code is available at https://github.com/kekegg/HCSA. xiezhengwei@hsc.pku.edu.cn or qi@pku.edu.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
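The idea of letting enzyme abundance enter an FBA problem quantitatively can be illustrated with a toy linear program in which an enzyme-dependent upper bound constrains one reaction, echoing the three-node demonstration above. This is a simplified sketch in the spirit of HCSA, not the hyper-cube shrink algorithm itself (see the authors' repository for that):

    import numpy as np
    from scipy.optimize import linprog

    # Toy chain: v1 (uptake -> A), v2 (A -> B, enzyme-limited), v3 (B -> out).
    S = np.array([[1, -1,  0],    # metabolite A balance
                  [0,  1, -1]])   # metabolite B balance

    def max_output(enzyme_abundance, kcat=2.0):
        # Enzyme information enters as a quantitative flux bound: v2 <= kcat * E.
        bounds = [(0, 10.0), (0, kcat * enzyme_abundance), (0, None)]
        res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)  # maximize v3
        return res.x

    for e in (1.0, 3.0, 10.0):
        print(e, max_output(e))   # flux tracks enzyme abundance until uptake saturates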
NASA Astrophysics Data System (ADS)
Balankura, Tonnam; Qi, Xin; Zhou, Ya; Fichthorn, Kristen A.
2016-10-01
In the shape-controlled synthesis of colloidal Ag nanocrystals, structure-directing agents, particularly polyvinylpyrrolidone (PVP), are known to be a key additive in making nanostructures with well-defined shapes. Although many Ag nanocrystals have been successfully synthesized using PVP, the mechanism by which PVP actuates shape control remains elusive. Here, we present a multi-scale theoretical framework for kinetic Wulff shape predictions that accounts for the chemical environment, which we used to probe the kinetic influence of the adsorbed PVP film. Within this framework, we use umbrella-sampling molecular dynamics simulations to calculate the potential of mean force and diffusion coefficient profiles of Ag atom deposition onto Ag(100) and Ag(111) in ethylene glycol solution with surface-adsorbed PVP. We use these profiles to calculate the mean-first passage times and implement extensive Brownian dynamics simulations, which allows the kinetic effects to be quantitatively evaluated. Our results show that PVP films can regulate the flux of Ag atoms to be greater towards Ag(111) than Ag(100). PVP's preferential binding towards Ag(100) over Ag(111) gives PVP its flux-regulating capabilities through the lower free-energy barrier of Ag atoms to cross the lower-density PVP film on Ag(111) and enhanced Ag trapping by the extended PVP film on Ag(111). Under kinetic control, {100}-faceted nanocrystals will be formed when the Ag flux is greater towards Ag(111). The predicted kinetic Wulff shapes are in agreement with the analogous experimental system.
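Mean first-passage times follow from a potential of mean force and a diffusivity profile via the standard 1D overdamped formula tau = int_a^b dx e^{F(x)/kT}/D(x) int_a^x dy e^{-F(y)/kT}. A numerical Python sketch with a synthetic 3 kT barrier standing in for the adsorbed-PVP free-energy profiles computed in the paper:

    import numpy as np

    kT = 1.0
    x = np.linspace(0.0, 5.0, 2001)                  # reaction coordinate toward the surface
    F = 3.0 * np.exp(-(x - 2.5) ** 2 / 0.5)          # synthetic 3 kT barrier (PVP-film stand-in)
    D = np.full_like(x, 1.0)                         # diffusion coefficient profile

    dx = x[1] - x[0]
    inner = np.cumsum(np.exp(-F / kT)) * dx          # inner integral, accumulated along x
    tau = np.trapz(np.exp(F / kT) / D * inner, x)    # outer integral
    print(f"MFPT ~ {tau:.1f} (in units of length^2 / D)")

Comparing such passage times across facets is one way the flux regulation described above can be quantified.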
Tests of a habitat suitability model for black-capped chickadees
Schroeder, Richard L.
1990-01-01
The black-capped chickadee (Parus atricapillus) Habitat Suitability Index (HSI) model provides a quantitative rating of the capability of a habitat to support breeding, based on measures related to food and nest site availability. The model assumption that tree canopy volume can be predicted from measures of tree height and canopy closure was tested using data from foliage volume studies conducted in the riparian cottonwood habitat along the South Platte River in Colorado. Least absolute deviations (LAD) regression showed that canopy cover and overstory tree height yielded volume predictions significantly lower than volume estimated by more direct methods. Revisions to these model relations resulted in improved predictions of foliage volume. The relation between the HSI and estimates of black-capped chickadee population densities was examined using LAD regression for both the original model and the model with the foliage volume revisions. Residuals from these models were compared to residuals from both a zero-slope model and an ideal model. The fit model for the original HSI differed significantly from the ideal model, whereas the fit model for the revised HSI did not differ significantly from the ideal model. However, both the fit model for the original HSI and the fit model for the revised HSI did not differ significantly from a model with a zero slope. Although further testing of the revised model is needed, its use is recommended for more realistic estimates of tree canopy volume and habitat suitability.
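Least absolute deviations regression, used twice in this test, is the q = 0.5 case of quantile regression. A Python sketch using statsmodels on synthetic height/cover data (the field measurements are not reproduced):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    height = rng.uniform(5, 25, 40)            # overstory tree height (m), synthetic
    cover = rng.uniform(20, 90, 40)            # canopy closure (%), synthetic
    volume = 0.8 * height * cover / 10 + rng.laplace(scale=3, size=40)  # foliage volume

    X = sm.add_constant(np.column_stack([height, cover]))
    fit = sm.QuantReg(volume, X).fit(q=0.5)    # median regression == LAD
    print(fit.params)                          # intercept, height, cover coefficients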
Cui, Shihong; Gao, Yanan; Zhang, Linlin; Wang, Yuan; Zhang, Lindong; Liu, Pingping; Liu, Ling; Chen, Juan
2017-10-01
Monocyte chemotactic protein-1 (MCP-1, or CCL2) is a member of the chemokine subfamily involved in recruitment of monocytes into inflammatory tissues. IL-10 is a key regulator for maintaining the balance of anti-inflammatory and pro-inflammatory milieu at the feto-maternal interface. Doppler examination has been routinely performed for the monitoring and management of preeclampsia patients. This study evaluates the efficiency of these factors, alone or in combination, for the prediction of preeclampsia. The serum levels of MCP-1 and IL-10 in 78 preeclampsia patients and 143 age-matched normal controls were measured. Doppler ultrasonography was performed, and the Artery Pulsatility Index (PI) and Resistance Index (RI) were calculated for the same subjects. It was found that while the second-trimester serum MCP-1, IL-10, MCP-1/IL-10 ratio, PI, and RI each showed some power in predicting preeclampsia, the combination of MCP-1/IL-10 with PI and RI achieved the highest efficiency, with an AUC of 0.973 (95% CI, 0.000-1.000, P<0.001), a sensitivity of 94%, and a specificity of 80%. The use of the MCP-1/IL-10 ratio in combination with ultrasound findings appears to provide a promising modality for predicting preeclampsia. Future studies using a larger sample can be conducted to construct an algorithm capable of quantitative assessment of the risk of preeclampsia. Copyright © 2016 Elsevier B.V. All rights reserved.
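How the three predictors were combined is not specified in the abstract; one standard approach is a logistic-regression score evaluated by ROC AUC. A Python sketch on synthetic marker values, purely to show the mechanics:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(5)
    n = 200
    y = rng.integers(0, 2, n)                          # 1 = preeclampsia (synthetic labels)
    ratio = rng.normal(loc=1.0 + 0.8 * y, scale=0.5)   # MCP-1/IL-10 ratio
    pi = rng.normal(loc=0.9 + 0.4 * y, scale=0.2)      # pulsatility index
    ri = rng.normal(loc=0.55 + 0.1 * y, scale=0.05)    # resistance index

    X = np.column_stack([ratio, pi, ri])
    score = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]  # combined risk score
    print("AUC =", roc_auc_score(y, score))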
Crosscutting Airborne Remote Sensing Technologies for Oil and Gas and Earth Science Applications
NASA Technical Reports Server (NTRS)
Aubrey, A. D.; Frankenberg, C.; Green, R. O.; Eastwood, M. L.; Thompson, D. R.; Thorpe, A. K.
2015-01-01
Airborne imaging spectroscopy has evolved dramatically since the 1980s as a robust remote sensing technique used to generate 2-dimensional maps of surface properties over large spatial areas. Traditional applications for passive airborne imaging spectroscopy include interrogation of surface composition, such as mapping of vegetation diversity and surface geological composition. Two recent applications are particularly relevant to the needs of both the oil and gas and government sectors: quantification of surficial hydrocarbon thickness in aquatic environments and mapping of atmospheric greenhouse gas components. These techniques provide valuable capabilities for detecting petroleum seepage as well as for detecting and quantifying fugitive emissions. New empirical data that provide insight into the source strength of anthropogenic methane will be reviewed, with particular emphasis on the evolving constraints enabled by new methane remote sensing techniques. Contemporary studies identify high-strength point sources as significant contributors to the national methane inventory and underscore the need for high-performance remote sensing technologies that provide quantitative leak detection. Imaging sensors that map spatial distributions of methane anomalies provide effective techniques to detect, localize, and quantify fugitive leaks. Airborne remote sensing instruments provide the unique combination of high spatial resolution (<1 m) and large coverage required to directly attribute methane emissions to individual emission sources. This capability cannot currently be achieved using spaceborne sensors. In this study, results from recent NASA remote sensing field experiments focused on point-source leak detection will be highlighted. This includes existing quantitative capabilities for oil and methane using state-of-the-art airborne remote sensing instruments. While these capabilities are of interest to NASA for assessment of environmental impact and global climate change, industry similarly seeks to detect and localize leaks of both oil and methane across operating fields. In some cases, the higher sensitivities desired for upstream and downstream applications can only be provided by new airborne remote sensing instruments tailored specifically for a given application. There exists a unique opportunity for alignment of efforts between commercial and government sectors to advance the next generation of instruments to provide more sensitive leak detection capabilities, including those for quantitative source strength determination.
NASA Technical Reports Server (NTRS)
Coppenbarger, Rich; Jung, Yoon; Kozon, Tom; Farrahi, Amir; Malik, Wakar; Lee, Hanbong; Chevalley, Eric; Kistler, Matt
2016-01-01
NASA is collaborating with the FAA and aviation industry to develop and demonstrate new capabilities that integrate arrival, departure, and surface air-traffic operations. The concept relies on trajectory-based departure scheduling and collaborative decision making to reduce delays and uncertainties in taxi and climb operations. The paper describes the concept and benefit mechanisms aimed at improving flight efficiency and predictability while maintaining or improving operational throughput. The potential impact of the technology is studied and discussed through a quantitative analysis of relevant shortfalls at the site identified for initial deployment and demonstration in 2017: Charlotte-Douglas International Airport. Results from trajectory analysis indicate substantial opportunity to reduce taxi delays for both departures and arrivals by metering departures at the gate in a manner that maximizes throughput while adhering to takeoff restrictions due mostly to airspace constraints. Substantial taxi-out delay reduction is shown for flights subject to departure restrictions stemming from traffic flow management initiatives. Opportunities to improve the predictability of taxi, takeoff, and climb operations are examined and their potential impact on airline scheduling decisions and air-traffic forecasting is discussed. In addition, the potential to improve throughput with departure scheduling that maximizes use of available runway and airspace capacity is analyzed.
He, Yixuan; Kodali, Anita; Wallace, Dorothy I
2018-06-14
Neuroblastoma is the leading cause of cancer death in young children. Although treatment for neuroblastoma has improved, the 5-year survival rate of patients still remains less than half. Recent studies have indicated that bevacizumab, an anti-VEGF drug used in treatment of several other cancer types, may be effective for treating neuroblastoma as well. However, its effect on neuroblastoma has not been well characterized. While traditional experiments are costly and time-consuming, mathematical models are capable of simulating complex systems quickly and inexpensively. In this study, we present a model of vascular tumor growth of neuroblastoma IMR-32 that is complex enough to replicate experimental data across a range of tumor cell properties measured in a suite of in vitro and in vivo experiments. The model provides quantitative insight into tumor vasculature, predicting a linear relationship between vasculature and tumor volume. The tumor growth model was coupled with known pharmacokinetics and pharmacodynamics of the VEGF blocker bevacizumab to study its effect on neuroblastoma growth dynamics. The results of our model suggest that total administered bevacizumab concentration per week, as opposed to dosage regimen, is the major determining factor in tumor suppression. Our model also establishes an exponentially decreasing relationship between administered bevacizumab concentration and tumor growth rate.
Watershed Planning within a Quantitative Scenario Analysis Framework.
Merriam, Eric R; Petty, J Todd; Strager, Michael P
2016-07-24
There is a critical need for tools and methodologies capable of managing aquatic systems within heavily impacted watersheds. Current efforts often fall short as a result of an inability to quantify and predict complex cumulative effects of current and future land use scenarios at relevant spatial scales. The goal of this manuscript is to provide methods for conducting a targeted watershed assessment that enables resource managers to produce landscape-based cumulative effects models for use within a scenario analysis management framework. Sites are first selected for inclusion within the watershed assessment by identifying sites that fall along independent gradients and combinations of known stressors. Field and laboratory techniques are then used to obtain data on the physical, chemical, and biological effects of multiple land use activities. Multiple linear regression analysis is then used to produce landscape-based cumulative effects models for predicting aquatic conditions. Lastly, methods for incorporating cumulative effects models within a scenario analysis framework for guiding management and regulatory decisions (e.g., permitting and mitigation) within actively developing watersheds are discussed and demonstrated for 2 sub-watersheds within the mountaintop mining region of central Appalachia. The watershed assessment and management approach provided herein enables resource managers to facilitate economic and development activity while protecting aquatic resources and producing opportunity for net ecological benefits through targeted remediation.
Integrated modeling approach for optimal management of water, energy and food security nexus
NASA Astrophysics Data System (ADS)
Zhang, Xiaodong; Vesselinov, Velimir V.
2017-03-01
Water, energy and food (WEF) are inextricably interrelated. Effective planning and management of limited WEF resources to meet current and future socioeconomic demands for sustainable development is challenging. WEF production/delivery may also produce environmental impacts; as a result, greenhouse-gas emission control will impact WEF nexus management as well. Nexus management for WEF security necessitates integrated tools for predictive analysis that are capable of identifying the tradeoffs among various sectors, generating cost-effective planning and management strategies and policies. To address these needs, we have developed an integrated model analysis framework and tool called WEFO. WEFO provides a multi-period socioeconomic model for predicting how to satisfy WEF demands based on model inputs representing production costs, socioeconomic demands, and environmental controls. WEFO is applied to quantitatively analyze the interrelationships and trade-offs among system components including energy supply, electricity generation, water supply-demand, and food production, as well as mitigation of environmental impacts. WEFO is demonstrated by solving a hypothetical nexus management problem consistent with real-world management scenarios. Model parameters are analyzed using global sensitivity analysis and their effects on total system cost are quantified. The obtained results demonstrate how these types of analyses can be helpful for decision-makers and stakeholders to make cost-effective decisions for optimal WEF management.
Zhang, Huai-zhu; Lin, Jun; Zhang, Huai-Zhu
2014-06-01
In the present paper, outlier detection methods for the determination of oil yield in oil shale using near-infrared (NIR) diffuse reflection spectroscopy were studied. During quantitative analysis with near-infrared spectroscopy, environmental changes and operator error both produce outliers. The presence of outliers affects the overall distribution trend of the samples and leads to a decrease in predictive capability. Thus, the detection of outliers is important for the construction of high-quality calibration models. The methods of principal component analysis-Mahalanobis distance (PCA-MD) and resampling by half-means (RHM) were applied to the discrimination and elimination of outliers in this work. The threshold for PCA-MD and the confidence level for RHM were optimized using the performance of partial least squares (PLS) models constructed after the elimination of outliers. Compared with the model constructed with the data of the full spectrum, the RMSEP values of the models constructed after applying PCA-MD with a threshold equal to the sum of the mean and standard deviation of the MD values, RHM with a confidence level of 85%, and the combination of PCA-MD and RHM were reduced by 48.3%, 27.5% and 44.8%, respectively. The predictive ability of the calibration model was thereby improved effectively.
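The PCA-MD screen described above computes Mahalanobis distances in principal-component space and flags samples beyond the mean plus one standard deviation. A Python sketch on synthetic spectra (the number of retained components is an illustrative assumption):

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(6)
    spectra = rng.normal(size=(60, 300))   # 60 synthetic NIR spectra, 300 wavelengths
    spectra[3] += 5.0                      # plant one obvious outlier

    scores = PCA(n_components=5).fit_transform(spectra)
    cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))
    diff = scores - scores.mean(axis=0)
    md = np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))   # Mahalanobis distances

    threshold = md.mean() + md.std()       # the mean + std rule used in the paper
    print("flagged samples:", np.where(md > threshold)[0])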
Biogenic organic emissions, air quality and climate
NASA Astrophysics Data System (ADS)
Guenther, A. B.
2015-12-01
Living organisms produce copious amounts of a diverse array of metabolites including many volatile organic compounds that are released into the atmosphere. These compounds participate in numerous chemical reactions that influence the atmospheric abundance of important air pollutants and short-lived climate forcers including organic aerosol, ozone and methane. The production and release of these organics are strongly influenced by environmental conditions including air pollution, temperature, solar radiation, and water availability and they are highly sensitive to stress and extreme events. As a result, releases of biogenic organics to the atmosphere have an impact on, and are sensitive to, air quality and climate leading to potential feedback couplings. Their role in linking air quality and climate is conceptually clear but an accurate quantitative representation is needed for predictive models. Progress towards this goal will be presented including numerical model development and assessments of the predictive capability of the Model of Emission of Gases and Aerosols from Nature (MEGAN). Recent studies of processes controlling the magnitude and variations in biogenic organic emissions will be described and observations of their impact on atmospheric composition will be shown. Recent advances and priorities for future research will be discussed including laboratory process studies, long-term measurements, multi-scale regional studies, global satellite observations, and the development of a next generation model for simulating land-atmosphere chemical exchange.
Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping
2011-04-01
In order to eliminate lower-order polynomial interferences, a new quantitative calibration algorithm, "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline-correction constraints into PLS weights selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirement of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. The BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with a RMSECV of 2.04%, w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.
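RMSECV for a conventional PLS calibration, the baseline that BCC-PLS is compared against, can be computed by cross-validated prediction. A Python sketch on synthetic spectra; BCC-PLS's baseline-constrained weight selection itself is not reimplemented here:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(7)
    X = rng.normal(size=(50, 400))                         # synthetic ATR-FTIR absorbances
    y = X[:, 100] * 2.0 + rng.normal(scale=0.1, size=50)   # e.g. moisture content, % w/w

    pls = PLSRegression(n_components=5)
    y_cv = cross_val_predict(pls, X, y, cv=10).ravel()     # cross-validated predictions
    rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
    print(f"RMSECV = {rmsecv:.3f} % w/w")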
NASA Astrophysics Data System (ADS)
Linton, M.; Leake, J. E.; Schuck, P. W.
2016-12-01
The magnetic field of the solar atmosphere is the primary driver of solar activity. Understanding the magnetic state of the solar atmosphere is therefore of key importance to predicting solar activity. One promising means of studying the magnetic atmosphere is to dynamically build up and evolve this atmosphere from the time evolution of emerging magnetic field at the photosphere, where it can be measured with current solar vector magnetograms at high temporal and spatial resolution. We report here on a series of numerical experiments investigating the capabilities and limits of magnetohydrodynamical simulations of such a process, where a magnetic corona is dynamically built up and evolved from a time series of synthetic photospheric data. These synthetic data are composed of photospheric slices taken from self-consistent convection-zone-to-corona simulations of flux emergence. The driven coronae are then quantitatively compared against the coronae of the original simulations. We investigate and report on the fidelity of these driven simulations, both as a function of the emergence timescale of the magnetic flux and as a function of the driving cadence of the input data. These investigations will then be used to outline future prospects and challenges for using observed photospheric data to drive such solar atmospheric simulations. This work was supported by the Chief of Naval Research and the NASA Living with a Star and Heliophysics Supporting Research programs.
NASA Astrophysics Data System (ADS)
Guan, Mingfu; Ahilan, Sangaralingam; Yu, Dapeng; Peng, Yong; Wright, Nigel
2018-01-01
Fine sediment plays crucial and multiple roles in the hydrological, ecological and geomorphological functioning of river systems. This study employs a two-dimensional (2D) numerical model to track the hydro-morphological processes dominated by fine suspended sediment, including the prediction of sediment concentration in flow bodies, and erosion and deposition caused by sediment transport. The model is governed by the 2D full shallow water equations, with which an advection-diffusion equation for fine sediment is coupled. Bed erosion and sedimentation are updated by a bed deformation model based on local sediment entrainment and settling flux in flow bodies. The model is initially validated against three laboratory-scale experiments in which suspended load plays a dominant role. Satisfactory simulation results confirm the model's capability to capture hydro-morphodynamic processes dominated by fine suspended sediment at laboratory scale. Applications to sedimentation in a stormwater pond are conducted to develop a process-based understanding of fine sediment dynamics over a variety of flow conditions. Urban flows with 5-year, 30-year and 100-year return periods and the extreme flood event in 2012 are simulated. The modelled results deliver a step change in understanding fine sediment dynamics in stormwater ponds. The model is capable of quantitatively simulating and qualitatively assessing the performance of a stormwater pond in managing urban water quantity and quality.
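The sediment-transport core of such a model is an advection-diffusion balance with entrainment and settling source terms. A deliberately minimal 1D explicit Python sketch (the paper's model is 2D, coupled to the full shallow-water equations, and also updates the bed; all parameter values below are illustrative):

    import numpy as np

    nx, dx, dt = 200, 1.0, 0.1
    u, K = 0.5, 0.5                       # flow velocity (m/s), diffusivity (m2/s)
    ws, Cb = 0.01, 0.2                    # settling velocity, equilibrium concentration
    C = np.zeros(nx); C[:20] = 1.0        # initial suspended-sediment pulse

    assert u * dt / dx <= 1.0 and 2 * K * dt / dx**2 <= 1.0   # explicit-scheme stability
    for _ in range(500):
        adv = -u * (C - np.roll(C, 1)) / dx                        # upwind advection (u > 0)
        dif = K * (np.roll(C, -1) - 2 * C + np.roll(C, 1)) / dx**2 # diffusion
        src = ws * (Cb - C)                                        # entrainment minus settling
        C = C + dt * (adv + dif + src)
    print(C.max(), C.argmax())            # pulse has advected, spread, and relaxed toward Cb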
Predictions of the electro-mechanical response of conductive CNT-polymer composites
NASA Astrophysics Data System (ADS)
Matos, Miguel A. S.; Tagarielli, Vito L.; Baiz-Villafranca, Pedro M.; Pinho, Silvestre T.
2018-05-01
We present finite element simulations to predict the conductivity, elastic response and strain-sensing capability of conductive composites comprising a polymeric matrix and carbon nanotubes. Realistic representative volume elements (RVE) of the microstructure are generated and both constituents are modelled as linear elastic solids, with resistivity independent of strain; the electrical contact between nanotubes is represented by a new element which accounts for quantum tunnelling effects and captures the sensitivity of conductivity to separation. Monte Carlo simulations are conducted and the sensitivity of the predictions to RVE size is explored. Predictions of modulus and conductivity are found in good agreement with published results. The strain-sensing capability of the material is explored for multiaxial strain states.
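The new contact element's key behavior is conductance that decays exponentially with nanotube separation and vanishes beyond a tunnelling cutoff, which is what makes the composite strain-sensitive. A Python sketch with placeholder constants (the paper's calibrated values are not reproduced):

    import numpy as np

    def tunnel_conductance(gap_nm, g0=1.0, decay_nm=0.4, cutoff_nm=1.8):
        """Contact conductance decays exponentially with gap; zero beyond the cutoff."""
        g = g0 * np.exp(-gap_nm / decay_nm)
        return np.where(gap_nm <= cutoff_nm, g, 0.0)

    gaps = np.linspace(0.0, 2.0, 5)
    print(tunnel_conductance(gaps))   # strain that opens the gaps lowers network conductance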
Material Stream Strategy for Lithium and Inorganics (U)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Safarik, Douglas Joseph; Dunn, Paul Stanton; Korzekwa, Deniece Rochelle
Design Agency Responsibilities: Manufacturing Support to meet Stockpile Stewardship goals for maintaining the nuclear stockpile through experimental and predictive modeling capability. Development and maintenance of Manufacturing Science expertise to assess material specifications and performance boundaries, and their relationship to processing parameters. Production Engineering Evaluations with competence in design requirements, material specifications, and manufacturing controls. Maintenance and enhancement of Aging Science expertise to support Stockpile Stewardship predictive science capability.
Progress in Finite Element Modeling of the Lower Extremities
2015-06-01
bending and subsequent injury, e.g., the distal tibia motion results in bending of the tibia rather than the tibia rotating about the knee joint ... layers, rich anisotropy, and wide variability. Developing a model for predictive injury capability, therefore, needs to be versatile and flexible to ... injury capability presents many challenges, the first of which is identifying the types of conditions where injury prediction is needed. Our focus
Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes
Zhang, Hong; Pei, Yun
2016-01-01
Quantitative prediction of construction noise is crucial to evaluate construction plans to help make decisions to address noise levels. Considering limitations of existing methods for measuring or predicting the construction noise and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting the construction noise in terms of equivalent continuous level. The noise-calculating models regarding synchronization, propagation and equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes to provision of a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions. PMID:27529266
Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes.
Zhang, Hong; Pei, Yun
2016-08-12
Quantitative prediction of construction noise is crucial to evaluate construction plans to help make decisions to address noise levels. Considering limitations of existing methods for measuring or predicting the construction noise and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting the construction noise in terms of equivalent continuous level. The noise-calculating models regarding synchronization, propagation and equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes to provision of a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions.
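For context, the equivalent continuous level the simulation predicts is conventionally defined by energy-averaging time-varying event levels. The sketch below shows that standard calculation applied to (level, duration) pairs such as a discrete-event simulation could emit; the function and example numbers are illustrative, not the paper's.

```python
# Standard equivalent continuous sound level over a period T:
#   Leq = 10 * log10( (1/T) * sum_i t_i * 10^(L_i / 10) ),  T = sum_i t_i,
# where each activity contributes level L_i (dBA) for duration t_i (s).
import math

def leq(events):
    """events: iterable of (level_dBA, duration_s) pairs."""
    T = sum(t for _, t in events)
    energy = sum(t * 10 ** (L / 10) for L, t in events)
    return 10 * math.log10(energy / T)

# e.g. a hypothetical excavator at 85 dBA for 40 min plus idling at 60 dBA
print(f"Leq = {leq([(85, 2400), (60, 1200)]):.1f} dBA")
```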
Finite element analysis of a composite crash box subjected to low velocity impact
NASA Astrophysics Data System (ADS)
Shaik Dawood, M. S. I.; Ghazilan, A. L. Ahmad; Shah, Q. H.
2017-03-01
In this work, finite element analyses using LS-DYNA have been carried out to investigate the energy absorption capability of a composite crash box. The analysed design incorporates grooves in the cross-sectional shape and E-Glass/Epoxy as the design material. The effects of groove depth, ridge lines, plane width, material properties, wall thickness and fibre orientation were quantitatively analysed and found to significantly enhance the energy absorption capability of the crash box.
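The energy absorption capability assessed here is typically quantified as the area under the crush force-displacement curve, often normalised by mass as specific energy absorption (SEA). The sketch below computes these standard crashworthiness metrics from synthetic data; it is illustrative and not taken from the paper's simulations.

```python
# Standard crashworthiness metrics from a crush force-displacement history.
# The synthetic curve and the mass are placeholders, not the paper's data.
import numpy as np

def crash_metrics(force_N, disp_m, mass_kg):
    ea = np.trapz(force_N, disp_m)             # absorbed energy (J)
    return {"EA_J": ea,
            "SEA_J_per_kg": ea / mass_kg,      # specific energy absorption
            "mean_crush_force_N": ea / disp_m[-1]}

disp = np.linspace(0, 0.08, 200)               # 80 mm of crush
force = 30e3 * (1 - np.exp(-disp / 0.005))     # synthetic crush curve
print(crash_metrics(force, disp, mass_kg=0.45))
```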
2012-06-01
record (PoR) to give both a quantitative and qualitative perspective on the rapid cyber acquisitions framework. It also investigates if cyber operations ... acquisition is a complex topic that does not yet have a solidified framework. To scope this research, a comprehensive review of past, present and ... for AT&L is working with the DoD cyberspace community to develop a common framework for Services and Agencies to acquire capabilities for cyberspace
Diamond, James; Anderson, Neil H; Bartels, Peter H; Montironi, Rodolfo; Hamilton, Peter W
2004-09-01
Quantitative examination of prostate histology offers clues in the diagnostic classification of lesions and in the prediction of response to treatment and prognosis. To facilitate the collection of quantitative data, the development of machine vision systems is necessary. This study explored the use of imaging for identifying tissue abnormalities in prostate histology. Medium-power histological scenes were recorded from whole-mount radical prostatectomy sections at ×40 objective magnification and assessed by a pathologist as exhibiting stroma, normal tissue (nonneoplastic epithelial component), or prostatic carcinoma (PCa). A machine vision system was developed that divided the scenes into subregions of 100 × 100 pixels and subjected each to image-processing techniques. Analysis of morphological characteristics allowed the identification of normal tissue. Analysis of image texture demonstrated that Haralick feature 4 was the most suitable for discriminating stroma from PCa. Using these morphological and texture measurements, it was possible to define a classification scheme for each subregion. The machine vision system is designed to integrate these classification rules and generate digital maps of tissue composition from the classification of subregions; 79.3% of subregions were correctly classified. Established classification rates have demonstrated the validity of the methodology on small scenes; a logical extension was to apply the methodology to whole slide images via scanning technology. The machine vision system is capable of classifying these images. The machine vision system developed in this project facilitates the exploration of morphological and texture characteristics in quantifying tissue composition. It also illustrates the potential of quantitative methods to provide highly discriminatory information in the automated identification of prostatic lesions using computer vision.
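Haralick features are statistics of a grey-level co-occurrence matrix (GLCM). The pure-NumPy sketch below builds a GLCM for one 100 × 100 subregion and evaluates the sum-of-squares variance statistic commonly numbered as Haralick feature 4; it is a generic illustration, not the study's system, and the tile, grey-level count and pixel offset are assumptions.

```python
# Generic GLCM texture measurement on one image subregion (illustrative only).
import numpy as np

def glcm(img, levels=16):
    """Symmetric co-occurrence matrix for horizontal neighbour pairs."""
    edges = np.linspace(img.min(), img.max(), levels + 1)[1:-1]
    q = np.digitize(img, edges)                # quantise to 0..levels-1
    i, j = q[:, :-1].ravel(), q[:, 1:].ravel() # neighbouring pixel pairs
    P = np.zeros((levels, levels))
    np.add.at(P, (i, j), 1)
    P += P.T                                   # symmetrise
    return P / P.sum()

def haralick_variance(P):
    """Haralick's 'sum of squares: variance' (feature 4 in his numbering)."""
    i = np.arange(P.shape[0])
    mu = (P.sum(axis=1) * i).sum()             # mean grey level
    return ((i[:, None] - mu) ** 2 * P).sum()

rng = np.random.default_rng(0)
tile = rng.integers(0, 255, size=(100, 100))   # stand-in for a subregion
print(haralick_variance(glcm(tile)))
```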
Chemical Bonding in Sulfide Minerals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaughan, David J.; Rosso, Kevin M.
An understanding of chemical bonding and electronic structure in sulfide minerals is central to any attempt at understanding their crystal structures, stabilities and physical properties. It is also an essential precursor to understanding reactivity through modeling surface structure at the molecular scale. In recent decades, there have been remarkable advances in first principles (ab initio) methods for the quantitative calculation of electronic structure. These advances have been made possible by the very rapid development of high performance computers. Several review volumes that chart the applications of these developments in mineralogy and geochemistry are available (Tossell and Vaughan, 1992; Cygan and Kubicki, 2001). An important feature of the sulfide minerals is the diversity of their electronic structures, as evidenced by their electrical and magnetic properties (see Pearce et al. 2006, this volume). Thus, sulfide minerals range from insulators through semiconductors to metals, and exhibit every type of magnetic behavior. This has presented problems for those attempting to develop bonding models for sulfides, and also led to certain misconceptions regarding the kinds of models that may be appropriate. In this chapter, chemical bonding and electronic structure models for sulfides are reviewed with emphasis on more recent developments. Although the fully ab initio quantitative methods are now capable of a remarkable degree of sophistication in terms of agreement with experiment and potential to interpret and predict behavior with varying conditions, both qualitative and more simplistic quantitative approaches will also be briefly discussed. This is because we believe that the insights which they provide are still helpful to those studying sulfide minerals. In addition to the application of electronic structure models and calculations to solid sulfides, work on sulfide mineral surfaces (Rosso and Vaughan 2006a,b) and on solution complexes and clusters (Rickard and Luther, 2006) is discussed in detail later in this volume.
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four projections, and associated kriging variances, were averaged using the posterior model probabilities as weights. Finally, cross-validation was conducted by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of the model-averaged result with that of each individual model. Using two quantitative measures of comparison, the model-averaged result was superior to any individual geostatistical model of log permeability considered.
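The averaging step has a standard form: the model-averaged mean is the posterior-probability-weighted mean of the per-model predictions, and the total variance adds the within-model variances to the between-model spread. The sketch below shows that generic computation with toy numbers; it is not the report's code.

```python
# Generic Bayesian-model-averaging combination of per-model predictions.
# preds/variances: (n_models, n_points); post_prob: posterior model weights.
import numpy as np

def model_average(preds, variances, post_prob):
    w = np.asarray(post_prob)[:, None]
    mean = (w * preds).sum(axis=0)
    # total variance = weighted within-model variance + between-model spread
    var = (w * (variances + (preds - mean) ** 2)).sum(axis=0)
    return mean, var

preds = np.array([[1.2, 0.8], [1.0, 1.1], [1.4, 0.9]])      # toy values
variances = np.array([[0.2, 0.3], [0.1, 0.2], [0.3, 0.1]])
print(model_average(preds, variances, [0.5, 0.3, 0.2]))
```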
NASA Astrophysics Data System (ADS)
Esposito, Alessandro
2006-05-01
This PhD project aims at the development and evaluation of microscopy techniques for the quantitative detection of molecular interactions and cellular features. The primarily investigated techniques are Förster Resonance Energy Transfer imaging and Fluorescence Lifetime Imaging Microscopy. These techniques have the capability to quantitatively probe the biochemical environment of fluorophores. An automated microscope capable of unsupervised operation has been developed that enables the investigation of molecular and cellular properties at high throughput levels and the analysis of cellular heterogeneity. State-of-the-art Förster Resonance Energy Transfer imaging, Fluorescence Lifetime Imaging Microscopy, Confocal Laser Scanning Microscopy and the newly developed tools have been combined with cellular and molecular biology techniques for the investigation of protein-protein interactions, oligomerization and post-translational modifications of α-Synuclein and Tau, two proteins involved in Parkinson's and Alzheimer's disease, respectively. The high inter-disciplinarity of this project required the merging of the expertise of both the Molecular Biophysics Group at the Debye Institute - Utrecht University and the Cell Biophysics Group at the European Neuroscience Institute - Göttingen University. This project was conducted also with the support and the collaboration of the Center for the Molecular Physiology of the Brain (Göttingen), particularly with the groups associated with the Molecular Quantitative Microscopy and Parkinson's Disease and Aggregopathies areas. This work demonstrates that molecular and cellular quantitative microscopy can be used in combination with high-throughput screening as a powerful tool for the investigation of the molecular mechanisms of complex biological phenomena like those occurring in neurodegenerative diseases.
Kim, Paul; Lee, Ju Kang; Lim, Oh Kyung; Park, Heung Kyu; Park, Ki Deok
2017-12-01
To predict the probability of lymphedema development in breast cancer patients in the early postoperative stage, we investigated the predictive ability of quantitative lymphoscintigraphic assessment. This retrospective study included 201 patients without lymphedema after unilateral breast cancer surgery. Lymphoscintigraphy was performed between 4 and 8 weeks after surgery to evaluate the lymphatic system in the early postoperative stage. Quantitative lymphoscintigraphy was performed using four methods: ratio of radiopharmaceutical clearance rate of the affected to normal hand; ratio of radioactivity of the affected to normal hand; ratio of radiopharmaceutical uptake rate of the affected to normal axilla (RUA); and ratio of radioactivity of the affected to normal axilla (RRA). During a 1-year follow-up, patients with a circumferential interlimb difference of 2 cm at any measurement location and a 200-mL interlimb volume difference were diagnosed with lymphedema. We investigated the difference in quantitative lymphoscintigraphic assessment between the non-lymphedema and lymphedema groups. Quantitative lymphoscintigraphic assessment revealed that the RUA and RRA were significantly lower in the lymphedema group than in the non-lymphedema group. After adjusting the model for all significant variables (body mass index, N-stage, T-stage, type of surgery, and type of lymph node surgery), RRA was associated with lymphedema (odds ratio=0.14; 95% confidence interval, 0.04-0.46; p=0.001). In patients in the early postoperative stage after unilateral breast cancer surgery, quantitative lymphoscintigraphic assessment can be used to predict the probability of developing lymphedema.
Wang, Fei; He, Bei
2013-01-01
To investigate the role of endotracheal aspirate (EA) culture in the diagnosis and antibiotic management of ventilator-associated pneumonia (VAP), we searched the CNKI, Wanfang, PubMed and EMBASE databases for literature published from January 1990 to December 2011 on VAP microbiological diagnostic techniques, including EA and bronchoalveolar lavage fluid (BALF) culture. The following key words were used: ventilator associated pneumonia, diagnosis and adult. Meta-analysis was performed and the sensitivity and specificity of EA for VAP diagnosis were calculated. Our literature search identified 1665 potential articles, 8 of which fulfilled our selection criteria, including 561 patients with paired cultures. Using BALF quantitative culture as the reference standard, the sensitivity and specificity of EA were 72% and 71%. When considering quantitative culture of EA only, the sensitivity and specificity improved to 90% and 65%, while the positive and negative predictive values were 68% and 89%, respectively. However, the sensitivity and specificity of semi-quantitative culture of EA were only 50% and 80%, with a positive predictive value of 77% and a negative predictive value of 58%, respectively. EA culture had relatively poor sensitivity and specificity, although quantitative culture of EA alone could improve the sensitivity. Initiating therapy on the basis of EA quantitative culture may still result in excessive antibiotic usage. Our data suggest that EA can provide some information for clinical decisions but cannot replace BALF quantitative culture in VAP diagnosis.
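The reported predictive values are linked to sensitivity and specificity through the prevalence of VAP in the pooled sample, via Bayes' rule. The check below (an illustration, not part of the paper) shows that the reported PPV of 68% and NPV of 89% for quantitative EA culture are consistent with a prevalence of roughly 45%.

```python
# Positive and negative predictive values from sensitivity, specificity
# and prevalence (Bayes' rule); purely a consistency check on the numbers.
def predictive_values(sens, spec, prev):
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / ((1 - sens) * prev + spec * (1 - prev))
    return ppv, npv

# sens 0.90, spec 0.65 (quantitative EA); prevalence ~0.45 reproduces
# the reported PPV ~0.68 and NPV ~0.89
print(predictive_values(0.90, 0.65, 0.45))
```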
Optofluidic time-stretch quantitative phase microscopy.
Guo, Baoshan; Lei, Cheng; Wu, Yi; Kobayashi, Hirofumi; Ito, Takuro; Yalikun, Yaxiaer; Lee, Sangwook; Isozaki, Akihiro; Li, Ming; Jiang, Yiyue; Yasumoto, Atsushi; Di Carlo, Dino; Tanaka, Yo; Yatomi, Yutaka; Ozeki, Yasuyuki; Goda, Keisuke
2018-03-01
Innovations in optical microscopy have opened new windows onto scientific research, industrial quality control, and medical practice over the last few decades. One such innovation is optofluidic time-stretch quantitative phase microscopy, an emerging method for high-throughput quantitative phase imaging that builds on the interference between temporally stretched signal and reference pulses by using dispersive properties of light in both spatial and temporal domains in an interferometric configuration on a microfluidic platform. It achieves the continuous acquisition of both intensity and phase images with a high throughput of more than 10,000 particles or cells per second by overcoming speed limitations that exist in conventional quantitative phase imaging methods. Applications enabled by such capabilities are versatile and include characterization of cancer cells and microalgal cultures. In this paper, we review the principles and applications of optofluidic time-stretch quantitative phase microscopy and discuss its future perspective. Copyright © 2017 Elsevier Inc. All rights reserved.
Impact of self-healing capability on network robustness
NASA Astrophysics Data System (ADS)
Shang, Yilun
2015-04-01
A wide spectrum of real-life systems ranging from neurons to botnets display spontaneous recovery ability. Using the generating function formalism applied to static uncorrelated random networks with arbitrary degree distributions, the microscopic mechanism underlying the depreciation-recovery process is characterized and the effect of varying self-healing capability on network robustness is revealed. It is found that the self-healing capability of nodes has a profound impact on the phase transition in the emergence of percolating clusters, and that salient difference exists in upholding network integrity under random failures and intentional attacks. The results provide a theoretical framework for quantitatively understanding the self-healing phenomenon in varied complex systems.
Impact of self-healing capability on network robustness.
Shang, Yilun
2015-04-01
A wide spectrum of real-life systems ranging from neurons to botnets display spontaneous recovery ability. Using the generating function formalism applied to static uncorrelated random networks with arbitrary degree distributions, the microscopic mechanism underlying the depreciation-recovery process is characterized and the effect of varying self-healing capability on network robustness is revealed. It is found that the self-healing capability of nodes has a profound impact on the phase transition in the emergence of percolating clusters, and that salient difference exists in upholding network integrity under random failures and intentional attacks. The results provide a theoretical framework for quantitatively understanding the self-healing phenomenon in varied complex systems.
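To make the generating-function statement concrete: for a configuration-model network, a giant component survives random node removal when the occupied fraction exceeds q_c = &lt;k&gt;/(&lt;k^2&gt; - &lt;k&gt;). A crude way to fold in self-healing is to let each failed node recover with probability r, raising the effective occupied fraction to q + (1 - q)r. The sketch below works through that toy version only; the recovery rule and all numbers are assumptions, not the paper's full formalism.

```python
# Toy percolation-with-healing estimate for a configuration-model network.
import numpy as np

def critical_occupation(degree_sample):
    k = np.asarray(degree_sample, dtype=float)
    return k.mean() / (np.mean(k**2) - k.mean())   # Molloy-Reed threshold

rng = np.random.default_rng(1)
degrees = rng.poisson(4, size=100_000)             # ER-like degree distribution
qc = critical_occupation(degrees)
for r in (0.0, 0.3, 0.6):
    # occupied fraction after healing: q_eff = q + (1 - q) * r >= qc
    q_needed = max((qc - r) / (1 - r), 0.0)
    print(f"healing prob r={r:.1f}: giant component needs q >= {q_needed:.3f}")
```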
Nomura, J-I; Uwano, I; Sasaki, M; Kudo, K; Yamashita, F; Ito, K; Fujiwara, S; Kobayashi, M; Ogasawara, K
2017-12-01
Preoperative hemodynamic impairment in the affected cerebral hemisphere is associated with the development of cerebral hyperperfusion following carotid endarterectomy. Cerebral oxygen extraction fraction images generated from 7T MR quantitative susceptibility mapping correlate with oxygen extraction fraction images on positron-emission tomography. The present study aimed to determine whether preoperative oxygen extraction fraction imaging generated from 7T MR quantitative susceptibility mapping could identify patients at risk for cerebral hyperperfusion following carotid endarterectomy. Seventy-seven patients with unilateral internal carotid artery stenosis (≥70%) underwent preoperative 3D T2*-weighted imaging using a multiple dipole-inversion algorithm with a 7T MR imager. Quantitative susceptibility mapping images were then obtained, and oxygen extraction fraction maps were generated. Quantitative brain perfusion single-photon emission CT was also performed before and immediately after carotid endarterectomy. ROIs were automatically placed in the bilateral middle cerebral artery territories in all images using a 3D stereotactic ROI template, and affected-to-contralateral ratios in the ROIs were calculated on quantitative susceptibility mapping-oxygen extraction fraction images. Ten patients (13%) showed post-carotid endarterectomy hyperperfusion (cerebral blood flow increases of ≥100% compared with preoperative values in the ROIs on brain perfusion SPECT). Multivariate analysis showed that a high quantitative susceptibility mapping-oxygen extraction fraction ratio was significantly associated with the development of post-carotid endarterectomy hyperperfusion (95% confidence interval, 33.5-249.7; P = .002). Sensitivity, specificity, and positive- and negative-predictive values of the quantitative susceptibility mapping-oxygen extraction fraction ratio for the prediction of the development of post-carotid endarterectomy hyperperfusion were 90%, 84%, 45%, and 98%, respectively. Preoperative oxygen extraction fraction imaging generated from 7T MR quantitative susceptibility mapping identifies patients at risk for cerebral hyperperfusion following carotid endarterectomy. © 2017 by American Journal of Neuroradiology.
High Order Semi-Lagrangian Advection Scheme
NASA Astrophysics Data System (ADS)
Malaga, Carlos; Mandujano, Francisco; Becerra, Julian
2014-11-01
In most fluid phenomena, advection plays an important role. A numerical scheme capable of making quantitative predictions and simulations must correctly compute the advection terms appearing in the equations governing fluid flow. Here we present a high order forward semi-Lagrangian numerical scheme specifically tailored to compute material derivatives. The scheme relies on the geometrical interpretation of material derivatives to compute the time evolution of fields on grids that deform with the material fluid domain, an interpolating procedure of arbitrary order that preserves the moments of the interpolated distributions, and a nonlinear mapping strategy to perform interpolations between undeformed and deformed grids. Additionally, a discontinuity criterion was implemented to deal with discontinuous fields and shocks. Tests of pure advection, shock formation and nonlinear phenomena are presented to show performance and convergence of the scheme. The high computational cost is considerably reduced when implemented on massively parallel architectures found in graphics cards. The authors acknowledge funding from Fondo Sectorial CONACYT-SENER Grant Number 42536 (DGAJ-SPI-34-170412-217).
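For readers unfamiliar with the approach, the essence of semi-Lagrangian advection is to trace characteristics and interpolate, which keeps the scheme stable even at CFL numbers above one. The sketch below is a deliberately minimal 1D backward variant with linear interpolation; the paper's scheme is a forward, higher-order, moment-preserving version on deforming grids, which this does not reproduce.

```python
# Minimal 1D backward semi-Lagrangian advection step (illustrative variant).
import numpy as np

def semi_lagrangian_step(f, u, dx, dt):
    """Advect field f at constant speed u on a periodic grid."""
    n = f.size
    x = np.arange(n) * dx
    x_dep = (x - u * dt) % (n * dx)           # departure points of characteristics
    idx = np.floor(x_dep / dx).astype(int)
    frac = x_dep / dx - idx
    # linear interpolation at departure points (higher order in the paper)
    return (1 - frac) * f[idx % n] + frac * f[(idx + 1) % n]

n, dx, dt, u = 200, 1.0, 0.8, 1.7             # note: CFL = 1.36 > 1, still stable
f = np.exp(-0.5 * ((np.arange(n) * dx - 50.0) / 5.0) ** 2)
for _ in range(100):
    f = semi_lagrangian_step(f, u, dx, dt)
print(f"pulse now centred near x = {f.argmax() * dx:.0f}")
```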
Ardid, Salva; Wang, Xiao-Jing
2013-12-11
A hallmark of executive control is the brain's agility to shift between different tasks depending on the behavioral rule currently in play. In this work, we propose a "tweaking hypothesis" for task switching: a weak rule signal provides a small bias that is dramatically amplified by reverberating attractor dynamics in neural circuits for stimulus categorization and action selection, leading to an all-or-none reconfiguration of sensory-motor mapping. Based on this principle, we developed a biologically realistic model with multiple modules for task switching. We found that the model quantitatively accounts for complex task-switching behavior: switch cost, congruency effect, and task-response interaction, as well as monkeys' single-neuron activity associated with task switching. The model yields several testable predictions, in particular, that category-selective neurons play a key role in resolving sensory-motor conflict. This work represents a neural circuit model for task switching and sheds insight into the brain mechanisms of a fundamental cognitive capability.
Whole-central nervous system functional imaging in larval Drosophila
Lemon, William C.; Pulver, Stefan R.; Höckendorf, Burkhard; McDole, Katie; Branson, Kristin; Freeman, Jeremy; Keller, Philipp J.
2015-01-01
Understanding how the brain works in tight concert with the rest of the central nervous system (CNS) hinges upon knowledge of coordinated activity patterns across the whole CNS. We present a method for measuring activity in an entire, non-transparent CNS with high spatiotemporal resolution. We combine a light-sheet microscope capable of simultaneous multi-view imaging at volumetric speeds 25-fold faster than the state-of-the-art, a whole-CNS imaging assay for the isolated Drosophila larval CNS and a computational framework for analysing multi-view, whole-CNS calcium imaging data. We image both brain and ventral nerve cord, covering the entire CNS at 2 or 5 Hz with two- or one-photon excitation, respectively. By mapping network activity during fictive behaviours and quantitatively comparing high-resolution whole-CNS activity maps across individuals, we predict functional connections between CNS regions and reveal neurons in the brain that identify type and temporal state of motor programs executed in the ventral nerve cord. PMID:26263051
Modelling the biologic effect of ions with the Local Effect Model
NASA Astrophysics Data System (ADS)
Friedrich, Thomas; Elsässer, Thilo; Durante, Marco; Scholz, Michael
In many radiobiological experiments, as well as in ion beam therapy, the Local Effect Model (LEM) has proven capable of describing the biologic effect of ion irradiation based on the response to X-rays. During recent years, the LEM has been extended to include important processes such as the diffusion of free radicals or the biologic effect enhancement due to clustered lesions of the DNA in a more mechanistic fashion. In its current state, the predictive power of the LEM covers a wide range of ions with good quantitative precision. Hence there is potential to also apply the LEM to problems in radiation protection. In this talk, the development stages of the LEM are illustrated. Emphasis is put on the most recent version of the LEM, where spatial distributions of DNA lesions are considered. Applicability, limits and strategies for advanced model testing are discussed. Finally, planned extensions and applications of the LEM are presented.
NASA Technical Reports Server (NTRS)
Erickson, Gary E.
2007-01-01
An overview is given of selected measurement techniques used in the NASA Langley Research Center (NASA LaRC) Unitary Plan Wind Tunnel (UPWT) to determine the aerodynamic characteristics of aerospace vehicles operating at supersonic speeds. A broad definition of a measurement technique is adopted in this paper and is any qualitative or quantitative experimental approach that provides information leading to the improved understanding of the supersonic aerodynamic characteristics. On-surface and off-surface measurement techniques used to obtain discrete (point) and global (field) measurements and planar and global flow visualizations are described, and examples of all methods are included. The discussion is limited to recent experiences in the UPWT and is, therefore, not an exhaustive review of existing experimental techniques. The diversity and high quality of the measurement techniques and the resultant data illustrate the capabilities of a ground-based experimental facility and the key role that it plays in the advancement of our understanding, prediction, and control of supersonic aerodynamics.
Hauenschild, Till; Reichenwallner, Jörg; Enkelmann, Volker; Hinderberger, Dariush
2016-08-26
Drug binding to human serum albumin (HSA) has been characterized by a spin-labeling and continuous-wave (CW) EPR spectroscopic approach. Specifically, the contribution of functional groups (FGs) in a compound to its albumin-binding capabilities is quantitatively described. Molecules from different drug classes are labeled with EPR-active nitroxide radicals (spin-labeled pharmaceuticals (SLPs)) and in a screening approach CW-EPR spectroscopy is used to investigate HSA binding under physiological conditions and at varying ratios of SLP to protein. Spectral simulations of the CW-EPR spectra allow extraction of association constants (KA) and the maximum number (n) of binding sites per protein. By comparison of data from 23 SLPs, the mechanisms of drug-protein association and the impact of chemical modifications at individual positions on drug uptake can be rationalized. Furthermore, new drug modifications with predictable protein binding tendency may be envisaged. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
From Brain Maps to Cognitive Ontologies: Informatics and the Search for Mental Structure.
Poldrack, Russell A; Yarkoni, Tal
2016-01-01
A major goal of cognitive neuroscience is to delineate how brain systems give rise to mental function. Here we review the increasingly large role informatics-driven approaches are playing in such efforts. We begin by reviewing a number of challenges conventional neuroimaging approaches face in trying to delineate brain-cognition mappings--for example, the difficulty in establishing the specificity of postulated associations. Next, we demonstrate how these limitations can potentially be overcome using complementary approaches that emphasize large-scale analysis--including meta-analytic methods that synthesize hundreds or thousands of studies at a time; latent-variable approaches that seek to extract structure from data in a bottom-up manner; and predictive modeling approaches capable of quantitatively inferring mental states from patterns of brain activity. We highlight the underappreciated but critical role for formal cognitive ontologies in helping to clarify, refine, and test theories of brain and cognitive function. Finally, we conclude with a speculative discussion of what future informatics developments may hold for cognitive neuroscience.
From brain maps to cognitive ontologies: informatics and the search for mental structure
Poldrack, Russell A.; Yarkoni, Tal
2015-01-01
A major goal of cognitive neuroscience is to delineate how brain systems give rise to mental function. Here we review the increasingly large role informatics-driven approaches are playing in such efforts. We begin by reviewing a number of challenges conventional neuroimaging approaches face in trying to delineate brain-cognition mappings—for example, the difficulty in establishing the specificity of postulated associations. Next, we demonstrate how these limitations can potentially be overcome using complementary approaches that emphasize large-scale analysis—including meta-analytic methods that synthesize hundreds or thousands of studies at a time; latent-variable approaches that seek to extract structure from data in a bottom-up manner; and predictive modeling approaches capable of quantitatively inferring mental states from patterns of brain activity. We highlight the underappreciated but critical role for formal cognitive ontologies in helping to clarify, refine, and test theories of brain and cognitive function. Finally, we conclude with a speculative discussion of what future informatics developments may hold for cognitive neuroscience. PMID:26393866
A novel structure-aware sparse learning algorithm for brain imaging genetics.
Du, Lei; Jingwen, Yan; Kim, Sungeun; Risacher, Shannon L; Huang, Heng; Inlow, Mark; Moore, Jason H; Saykin, Andrew J; Shen, Li
2014-01-01
Brain imaging genetics is an emerging research field where the association between genetic variations such as single nucleotide polymorphisms (SNPs) and neuroimaging quantitative traits (QTs) is evaluated. Sparse canonical correlation analysis (SCCA) is a bi-multivariate analysis method that has the potential to reveal complex multi-SNP-multi-QT associations. Most existing SCCA algorithms are designed using the soft threshold strategy, which assumes that the features in the data are independent of each other. This independence assumption usually does not hold in imaging genetic data, and thus inevitably limits the capability of yielding optimal solutions. We propose a novel structure-aware SCCA (denoted as S2CCA) algorithm to not only eliminate the independence assumption for the input data, but also incorporate group-like structure in the model. Empirical comparison with a widely used SCCA implementation, on both simulated and real imaging genetic data, demonstrated that S2CCA could yield improved prediction performance and biologically meaningful findings.
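For orientation, the soft-threshold strategy that S2CCA generalises has a compact alternating form: each canonical weight vector is a soft-thresholded matrix-vector product, renormalised at every pass. The sketch below implements that generic baseline, not S2CCA itself (a structure-aware method would replace the plain soft-threshold with group-structured penalties); data shapes and penalty values are assumptions.

```python
# Generic soft-threshold SCCA baseline (penalized-matrix-decomposition style).
import numpy as np

def soft(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def scca(X, Y, lam_u=0.1, lam_v=0.1, iters=100):
    """Sparse weights u, v roughly maximizing corr(Xu, Yv); X, Y column-centred."""
    C = X.T @ Y
    v = np.ones(Y.shape[1]) / np.sqrt(Y.shape[1])
    for _ in range(iters):
        u = soft(C @ v, lam_u); u /= np.linalg.norm(u) + 1e-12
        v = soft(C.T @ u, lam_v); v /= np.linalg.norm(v) + 1e-12
    return u, v

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 100))   # e.g. 100 SNPs for 50 subjects
Y = rng.standard_normal((50, 20))    # e.g. 20 imaging QTs
X -= X.mean(0); Y -= Y.mean(0)
u, v = scca(X, Y)
print((u != 0).sum(), (v != 0).sum())  # number of selected features per side
```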
A rapid detection method of Escherichia coli by surface enhanced Raman scattering
NASA Astrophysics Data System (ADS)
Tao, Feifei; Peng, Yankun; Xu, Tianfeng
2015-05-01
Conventional microbiological detection and enumeration methods are time-consuming, labor-intensive, and give only retrospective information. The objective of the present work is to study the capability of surface-enhanced Raman scattering (SERS) to detect Escherichia coli (E. coli) using the presented silver colloidal substrate. The results showed that the adaptive iteratively reweighted penalized least squares (airPLS) algorithm could effectively remove the fluorescent background from the original Raman spectra, and that characteristic Raman peaks at 558, 682, 726, 1128, 1210 and 1328 cm-1 could be observed reliably in the baseline-corrected SERS spectra at all studied bacterial concentrations. The detection limit of SERS with the prepared silver colloidal substrate was determined to be as low as 0.73 log CFU/ml for E. coli. Quantitative prediction from the intensities of the characteristic peaks was less successful, with correlation coefficients of 0.99 for the calibration set but only 0.64 for the cross-validation set.
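airPLS is a published algorithm: a Whittaker-type penalized least-squares smoother whose point weights are adaptively re-estimated each pass so that the fit hugs the baseline beneath the peaks. The sketch below is a compact re-implementation of that general scheme for illustration; the smoothing parameter, iteration cap and end-point weighting are assumptions rather than the study's settings.

```python
# Compact airPLS-style baseline estimation (illustrative re-implementation).
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def airpls(y, lam=1e5, max_iter=15):
    n = y.size
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))  # 2nd diff
    H = lam * (D.T @ D)
    w = np.ones(n)
    for t in range(1, max_iter + 1):
        z = spsolve((sparse.diags(w) + H).tocsc(), w * y)  # weighted smooth fit
        d = y - z
        neg = d[d < 0]
        if neg.size == 0 or abs(neg.sum()) < 1e-3 * np.abs(y).sum():
            break
        # points above the fit are treated as peaks and get zero weight;
        # points below keep exponentially increasing weight
        w = np.where(d >= 0, 0.0, np.exp(t * np.abs(d) / abs(neg.sum())))
        w[0] = w[-1] = np.exp(t)                           # anchor the ends
    return z

x = np.linspace(0, 2000, 1000)
spectrum = 0.002 * x + np.exp(-(x - 726.0) ** 2 / 50.0)   # slope + one peak
corrected = spectrum - airpls(spectrum)
```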
NASA Astrophysics Data System (ADS)
McDuffie, E. E.; Brown, S. S.
2017-12-01
The heterogeneous chemistry of N2O5 impacts the budget of tropospheric oxidants, which directly controls air quality at Earth's surface. The reaction between gas-phase N2O5 and aerosol particles occurs largely at night, and is therefore more important during the less-intensively-studied winter season. Though N2O5-aerosol interactions are vital for the accurate understanding and simulation of tropospheric chemistry and air quality, many uncertainties persist in our understanding of how various environmental factors influence the reaction rate and probability. Quantitative and accurate evaluation of these factors directly improves the predictive capabilities of atmospheric models, used to inform mitigation strategies for wintertime air pollution. In an update to last year's presentation, The Wintertime Fate of N2O5: Observations and Box Model Analysis for the 2015 WINTER Aircraft Campaign, this presentation will focus on recent field results regarding new information about N2O5 heterogeneous chemistry and future research directions.
Towards water vapor assimilation into mesoscale models for improved precipitation forecast
NASA Astrophysics Data System (ADS)
Demoz, B.; Whiteman, D.; Venable, D.; Joseph, E.
2006-05-01
Atmospheric water vapor plays a primary role in the life cycle of clouds and precipitation and is crucial to understanding many aspects of the water cycle. It is very important to short-range mesoscale and storm-scale weather prediction. Specifically, accurate characterization of water vapor at low levels is a necessary condition for quantitative precipitation forecast (QPF), the initiation of convection, and various thermodynamic and microphysical processes in mesoscale severe weather systems. However, quantification of its variability (both temporal and spatial) and integration of high-quality, high-frequency water vapor profiles into mesoscale models have been challenging. We report on a conceptual proposal that attempts to 1) define the appropriate lidar-based data and instrumentation required for mesoscale data assimilation, 2) outline a possible federated network of ground-based lidars capable of acquiring such high-resolution water vapor data sets, and 3) sketch a possible framework for assimilating the data into a mesoscale model.
NASA Astrophysics Data System (ADS)
Hirst, Jonathan D.; King, Ross D.; Sternberg, Michael J. E.
1994-08-01
Neural networks and inductive logic programming (ILP) have been compared to linear regression for modelling the QSAR of the inhibition of E. coli dihydrofolate reductase (DHFR) by 2,4-diamino-5-(substituted benzyl)pyrimidines, and, in the subsequent paper [Hirst, J.D., King, R.D. and Sternberg, M.J.E., J. Comput.-Aided Mol. Design, 8 (1994) 421], the inhibition of rodent DHFR by 2,4-diamino-6,6-dimethyl-5-phenyl-dihydrotriazines. Cross-validation trials provide a statistically rigorous assessment of the predictive capabilities of the methods, with training and testing data selected randomly and all the methods developed using identical training data. For the ILP analysis, molecules are represented by attributes other than Hansch parameters. Neural networks and ILP perform better than linear regression using the attribute representation, but the difference is not statistically significant. The major benefit from the ILP analysis is the formulation of understandable rules relating the activity of the inhibitors to their chemical structure.
NASA Astrophysics Data System (ADS)
Chen, Alvin U.; Basaran, Osman A.
2000-11-01
Drop formation from a capillary (the dripping mode) or an ink jet nozzle (the drop-on-demand, or DOD, mode) falls into a class of scientifically challenging yet practically useful free surface flows that exhibit a finite time singularity, i.e. the breakup of an initially single liquid mass into two or more fragments. While computational tools to model such problems have been developed recently, they lack the accuracy needed to quantitatively predict all the dynamics observed in experiments. Here we present a new finite element method (FEM) based on a robust algorithm for elliptic mesh generation and remeshing to handle extremely large interface deformations. The new algorithm allows continuation of computations beyond the first singularity to track the fates of both primary and any satellite drops. The accuracy of the computations is demonstrated by comparison of simulations with experimental measurements made possible with an ultra-high-speed digital imager capable of recording 100 million frames per second.
Dimension-Factorized Range Migration Algorithm for Regularly Distributed Array Imaging
Guo, Qijia; Wang, Jie; Chang, Tianying
2017-01-01
The two-dimensional planar MIMO array is a popular approach for millimeter wave imaging applications. As a promising practical alternative, sparse MIMO arrays have been devised to reduce the number of antenna elements and transmitting/receiving channels with predictable and acceptable loss in image quality. In this paper, a high precision three-dimensional imaging algorithm is proposed for MIMO arrays of the regularly distributed type, especially the sparse varieties. Termed the Dimension-Factorized Range Migration Algorithm, the new imaging approach factorizes the conventional MIMO Range Migration Algorithm into multiple operations across the sparse dimensions. The thinner the sparse dimensions of the array, the more efficient the new algorithm will be. Advantages of the proposed approach are demonstrated by comparison with the conventional MIMO Range Migration Algorithm and its non-uniform fast Fourier transform based variant in terms of all the important characteristics of the approaches, especially the anti-noise capability. The computation cost is analyzed as well to evaluate the efficiency quantitatively. PMID:29113083
Epistemic and aleatory uncertainty in the study of dynamic human-water systems
NASA Astrophysics Data System (ADS)
Di Baldassarre, Giuliano; Brandimarte, Luigia; Beven, Keith
2016-04-01
Here we discuss epistemic and aleatory uncertainty in the study of dynamic human-water systems (e.g. socio-hydrology), which is one of the main topics of Panta Rhei, the current scientific decade of the International Association of Hydrological Sciences (IAHS). In particular, we identify three types of lack of understanding: (i) known unknowns, which are things we know we don't know; (ii) unknown unknowns, which are things we don't know we don't know; and (iii) wrong assumptions, things we think we know but actually don't. We posit that a better understanding of human-water interactions and feedbacks can help cope with wrong assumptions and known unknowns. Moreover, awareness of the existence of unknown unknowns, and of their potential to generate surprises or black swans, suggests the need to rely more on bottom-up approaches, based on social vulnerabilities and possibilities of failure, and less on top-down approaches, based on optimization and quantitative predictions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank, Jonathan H.; Pickett, Lyle M.; Bisson, Scott E.
In this LDRD project, we developed a capability for quantitative high-speed imaging measurements of high-pressure fuel injection dynamics to advance understanding of turbulent mixing in transcritical flows, ignition, and flame stabilization mechanisms, and to provide essential validation data for developing predictive tools for engine combustion simulations. Advanced, fuel-efficient engine technologies rely on fuel injection into a high-pressure, high-temperature environment for mixture preparation and combustion. However, the dynamics of fuel injection are not well understood and pose significant experimental and modeling challenges. To address the need for quantitative high-speed measurements, we developed a Nd:YAG laser that provides a 5 ms burst of pulses at 100 kHz on a robust mobile platform. Using this laser, we demonstrated spatially and temporally resolved Rayleigh scattering imaging and particle image velocimetry measurements of turbulent mixing in high-pressure gas-phase flows and vaporizing sprays. Quantitative interpretation of the high-pressure measurements was advanced by reducing and correcting interferences and imaging artifacts.
Observation of Compressible Plasma Mix in Cylindrically Convergent Implosions
NASA Astrophysics Data System (ADS)
Barnes, Cris W.; Batha, Steven H.; Lanier, Nicholas E.; Magelssen, Glenn R.; Tubbs, David L.; Dunne, A. M.; Rothman, Steven R.; Youngs, David L.
2000-10-01
An understanding of hydrodynamic mix in convergent geometry will be of key importance in the development of a robust ignition/burn capability on NIF, LMJ and future pulsed power machines. We have made use of the OMEGA laser facility at the University of Rochester to investigate directly the mix evolution in a convergent geometry, compressible plasma regime. The experiments comprise a plastic cylindrical shell imploded by direct laser irradiation. The cylindrical shell surrounds a lower density plastic foam which provides sufficient back pressure to allow the implosion to stagnate at a sufficiently high radius to permit quantitative radiographic diagnosis of the interface evolution near turnaround. The susceptibility to mix of the shell-foam interface is varied by choosing different density material for the inner shell surface (thus varying the Atwood number). This allows the study of shock-induced Richtmyer-Meshkov growth during the coasting phase, and Rayleigh-Taylor growth during the stagnation phase. The experimental results will be described along with calculational predictions using various radiation hydrodynamics codes and turbulent mix models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yulan; Hu, Shenyang Y.; Sun, Xin
2011-06-15
Microstructure evolution kinetics in irradiated materials shows strong spatial correlation. For example, voids and second phases prefer to nucleate and grow at pre-existing defects such as dislocations, grain boundaries, and cracks. Inhomogeneous microstructure evolution results in inhomogeneity of microstructure and thermo-mechanical properties. Therefore, the simulation capability for predicting three-dimensional (3-D) microstructure evolution kinetics and its subsequent impact on material properties and performance is crucial for the scientific design of advanced nuclear materials and optimal operating conditions, in order to reduce uncertainty in operational and safety margins. Very recently the meso-scale phase-field (PF) method has been used to predict gas bubble evolution, void swelling, void lattice formation and void migration in irradiated materials. Although most results of phase-field simulations are qualitative, due to the lack of accurate thermodynamic and kinetic properties of defects, the possible omission of important kinetic properties and processes, and the limited capability of current codes and computers for large time- and length-scale modeling, the simulations demonstrate that the PF method is a promising simulation tool for predicting 3-D heterogeneous microstructure and property evolution, and for providing microstructure evolution kinetics to higher-scale simulations of microstructure and property evolution such as mean field methods. This report consists of two parts. In part I, we present a new phase-field model for predicting interstitial loop growth kinetics in irradiated materials. The effects of defect (vacancy/interstitial) generation, diffusion and recombination, sink strength, long-range elastic interaction, and inhomogeneous and anisotropic mobility on microstructure evolution kinetics are taken into account in the model. The model is used to study the effect of elastic interaction on interstitial loop growth kinetics, the interstitial flux, and the sink strength of interstitial loops for interstitials. In part II, we present a generic phase-field model and discuss the thermodynamic and kinetic properties in phase-field models, including the reaction kinetics of radiation defects and the local free energy of irradiated materials. In particular, a two-sublattice thermodynamic model is suggested to describe the local free energy of alloys with irradiation defects. Fe-Cr alloy is taken as an example to explain the thermodynamic and kinetic properties required for quantitative phase-field modeling. Finally, the major challenges in phase-field modeling are discussed.
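Stripped of the spatial resolution the phase-field model adds, the defect processes listed above (generation, vacancy-interstitial recombination, absorption at sinks) reduce to a classic pair of mean-field rate equations. The sketch below integrates that system with placeholder coefficients purely to illustrate the competition among the terms; none of the values come from the report.

```python
# Mean-field rate equations for vacancy (cv) and interstitial (ci) fractions:
#   dcv/dt = G - R*cv*ci - Dv*kv2*cv,   dci/dt = G - R*cv*ci - Di*ki2*ci,
# with generation G, recombination coefficient R, diffusivities Dv, Di and
# sink strengths kv2, ki2. All coefficients are illustrative placeholders.

def defect_step(cv, ci, dt, G=1e-6, R=1e2, Dv=1e-2, Di=1.0, kv2=1e-3, ki2=1e-3):
    recomb = R * cv * ci
    dcv = G - recomb - Dv * kv2 * cv
    dci = G - recomb - Di * ki2 * ci
    return cv + dt * dcv, ci + dt * dci

cv = ci = 0.0
for _ in range(200_000):
    cv, ci = defect_step(cv, ci, dt=1e-2)
# vacancies accumulate to a higher level because interstitials, being more
# mobile here (Di > Dv), reach sinks faster
print(f"steady state: cv = {cv:.3e}, ci = {ci:.3e}")
```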
Ueda, Kazuhiro; Kaneda, Yoshikazu; Sudo, Manabu; Mitsutaka, Jinbo; Li, Tao-Sheng; Suga, Kazuyoshi; Tanaka, Nobuyuki; Hamano, Kimikazu
2005-11-01
Emphysema is a well-known risk factor for developing air leak or persistent air leak after pulmonary resection. Although quantitative computed tomography (CT) and spirometry are used to diagnose emphysema, it remains controversial whether these tests are predictive of the duration of postoperative air leak. Sixty-two consecutive patients who were scheduled to undergo major lung resection for cancer were enrolled in this prospective study to define the best predictor of postoperative air leak duration. Preoperative factors analyzed included spirometric variables and area of emphysema (proportion of the low-attenuation area) that was quantified in a three-dimensional CT lung model. Chest tubes were removed the day after disappearance of the air leak, regardless of pleural drainage. Univariate and multivariate proportional hazards analyses were used to determine the influence of preoperative factors on chest tube time (air leak duration). By univariate analysis, site of resection (upper, lower), forced expiratory volume in 1 second, predicted postoperative forced expiratory volume in 1 second, and area of emphysema (< 1%, 1% to 10%, > 10%) were significant predictors of air leak duration. By multivariate analysis, site of resection and area of emphysema were the best independent determinants of air leak duration. The results were similar for patients with a smoking history (n = 40), but neither forced expiratory volume in 1 second nor predicted postoperative forced expiratory volume in 1 second were predictive of air leak duration. Quantitative CT is superior to spirometry in predicting air leak duration after major lung resection for cancer. Quantitative CT may aid in the identification of patients, particularly among those with a smoking history, requiring additional preventive procedures against air leak.
Bartsch, H; Tomatis, L
1983-01-01
The qualitative relationship between carcinogenicity and mutagenicity (DNA-damaging activity), based on chemicals which are known to be or suspected of being carcinogenic to man and/or to experimental animals, is analyzed using 532 chemicals evaluated in Volumes 1-25 of the IARC Monographs on the Evaluation of the Carcinogenic Risk of Chemicals to Humans. About 40 compounds (industrial processes) were found to be either definitely or probably carcinogenic to man, and 130 chemicals have been adequately tested in rodents and most of them also in various short-term assays. For a comparison between the carcinogenicity of a chemical and its behavior in short-term tests, systems were selected that have a value for predicting carcinogenicity. These were divided into mutagenicity in (A) the S. typhimurium/microsome assay, (B) other submammalian systems and (C) cultured mammalian cells; (D) chromosomal abnormalities in mammalian cells; (E) DNA damage and repair; (F) cell transformation (or altered growth properties) in vitro. The following conclusions can be drawn. In the absence of studies in man, long-term animal tests are still today the only ones capable of providing evidence of the carcinogenic effect of a chemical. The development and application of an appropriate combination of short-term tests (despite current limitations) can significantly contribute to the prediction/confirmation of the carcinogenic effects of chemicals in animals/man. Confidence in positive tests results is increased when they are confirmed in multiple short-term tests using nonrepetitive end points and different activation systems. Assays to detect carcinogens which do not act via electrophiles (promoters) need to be developed. The results of a given short-term test should be interpreted in the context of other toxicological data. Increasing demand for quantitative carcinogenicity data requires further examination of whether or not there is a quantitative relationship between the potency of a carcinogen in experimental animals/man, and its genotoxic activity in short-term tests. At present, such a relationship is not sufficiently established for it to be used for the prediction of the carcinogenic potency of new compounds. PMID:6337827
Measurement with microscopic MRI and simulation of flow in different aneurysm models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelhoff, Daniel, E-mail: daniel.edelhoff@tu-dortmund.de; Frank, Frauke; Heil, Marvin
2015-10-15
Purpose: The impact and the development of aneurysms depend to a significant degree on the exchange of liquid between the regular vessel and the pathological extension. A better understanding of this process will lead to improved prediction capabilities. The aim of the current study was to investigate fluid-exchange in aneurysm models of different complexities by combining microscopic magnetic resonance measurements with numerical simulations. In order to evaluate the accuracy and applicability of these methods, the fluid-exchange process between the unaltered vessel lumen and the aneurysm phantoms was analyzed quantitatively using high spatial resolution. Methods: Magnetic resonance flow imaging was used to visualize fluid-exchange in two different models produced with a 3D printer. One model of an aneurysm was based on histological findings. The flow distribution in the different models was measured on a microscopic scale using time of flight magnetic resonance imaging. The whole experiment was simulated using fast graphics processing unit-based numerical simulations. The obtained simulation results were compared qualitatively and quantitatively with the magnetic resonance imaging measurements, taking into account flow and spin–lattice relaxation. Results: The results of both presented methods compared well for the used aneurysm models and the chosen flow distributions. The results from the fluid-exchange analysis showed comparable characteristics concerning measurement and simulation. Similar symmetry behavior was observed. Based on these results, the amount of fluid-exchange was calculated. Depending on the geometry of the models, 7% to 45% of the liquid was exchanged per second. Conclusions: The result of the numerical simulations coincides well with the experimentally determined velocity field. The rate of fluid-exchange between vessel and aneurysm was well-predicted. Hence, the results obtained by simulation could be validated by the experiment. The observed deviations can be caused by the noise in the measurement and by the limited resolution of the simulation. The resulting differences are small enough to allow reliable predictions of the flow distribution in vessels with stents and for pulsed blood flow.
Flow and Transport in Complex Microporous Carbonates as a Consequence of Separation of Scales
NASA Astrophysics Data System (ADS)
Bijeljic, B.; Raeini, A. Q.; Lin, Q.; Blunt, M. J.
2017-12-01
Some of the most important examples of flow and transport in complex pore structures are found in subsurface applications such as contaminant hydrology, carbon storage and enhanced oil recovery. Carbonate rock structures contain most of the world's oil reserves, a considerable amount of water reserves, and potentially hold storage capacity for carbon dioxide. However, this type of pore space is difficult to represent due to complexities associated with a wide range of pore sizes and variation in connectivity, which poses a considerable challenge for quantitative predictions of transport across multiple scales. A new concept unifying X-ray tomography experiments and direct numerical simulation has been developed that relies on a full description of flow and solute transport at the pore scale. The differential imaging method (Lin et al. 2016) provides rich information on the microporous space, while advective and diffusive mass transport are simulated on micro-CT images of the pore space: Navier-Stokes equations are solved for flow in the image voxels comprising the pore space, streamline-based simulation is used to account for advection, and diffusion is superimposed by random walk. Quantitative validation has been done against analytical solutions for diffusion and by comparing model predictions with experimental NMR measurements in a dual-porosity beadpack. Furthermore, we discriminate signatures of multi-scale transport behaviour for a range of carbonate rocks (Figure 1), dependent on the heterogeneity of the inter- and intra-grain pore space, heterogeneity in the flow field, and the mass transfer characteristics of the porous media. Finally, we demonstrate the predictive capabilities of the model through an analysis that includes a number of probability density function (PDF) measures of non-Fickian flow and transport on the micro-CT images. In complex porous media, a separation of scales exists, leading to flow and transport signatures that need to be described by multiple functions with distinct flow field and transport characteristics. Reference: Lin, Q., Al-Khulaifi Y., Blunt, M.J. and Bijeljic B. (2016). Advances in Water Resources, 96, 306-322, doi:10.1016/j.advwatres.2016.08.002.
Extending Theory-Based Quantitative Predictions to New Health Behaviors.
Brick, Leslie Ann D; Velicer, Wayne F; Redding, Colleen A; Rossi, Joseph S; Prochaska, James O
2016-04-01
Traditional null hypothesis significance testing suffers from many limitations and is poorly adapted to theory testing. A proposed alternative approach, called Testing Theory-based Quantitative Predictions, uses effect size estimates and confidence intervals to directly test predictions based on theory. This paper replicates findings from previous smoking studies and extends the approach to diet and sun protection behaviors using baseline data from a Transtheoretical Model behavioral intervention (N = 5407). Effect size predictions were developed using two methods: (1) applying refined effect size estimates from previous smoking research or (2) using predictions developed by an expert panel. Thirteen of 15 predictions were confirmed for smoking. For diet, 7 of 14 predictions were confirmed using smoking predictions and 6 of 16 using expert panel predictions. For sun protection, 3 of 11 predictions were confirmed using smoking predictions and 5 of 19 using expert panel predictions. Both expert panel and smoking-based predictions predicted effect sizes poorly for diet and sun protection constructs. Future studies should aim to use previous empirical data to generate predictions whenever possible. The best results occur when there have been several iterations of predictions for a behavior, as with smoking, demonstrating that expected values begin to converge on the population effect size. Overall, the study supports the necessity of strengthening and revising theory with empirical data.
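The core test in this approach is whether a theory-based predicted effect size falls inside the confidence interval of the observed effect. A minimal sketch of that logic for a two-group comparison is shown below; the paper works with Transtheoretical Model constructs and its own refined estimates, so treat the effect-size measure (Cohen's d) and the large-sample standard error used here as illustrative assumptions.

```python
import numpy as np

def prediction_confirmed(group_a, group_b, predicted_d, z=1.96):
    """Test a predicted standardized effect size (Cohen's d) against
    the confidence interval of the observed effect.

    Returns (observed_d, ci_low, ci_high, confirmed).
    """
    n1, n2 = len(group_a), len(group_b)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(group_a, ddof=1)
                         + (n2 - 1) * np.var(group_b, ddof=1))
                        / (n1 + n2 - 2))
    d = (np.mean(group_a) - np.mean(group_b)) / pooled_sd
    # Large-sample approximation to the standard error of d.
    se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    lo, hi = d - z * se, d + z * se
    return d, lo, hi, (lo <= predicted_d <= hi)

# Hypothetical construct scores for two stage-of-change groups.
rng = np.random.default_rng(1)
a = rng.normal(0.5, 1.0, 200)
b = rng.normal(0.0, 1.0, 200)
print(prediction_confirmed(a, b, predicted_d=0.5))
```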
A Microfluidic Platform for High-Throughput Multiplexed Protein Quantitation
Volpetti, Francesca; Garcia-Cordero, Jose; Maerkl, Sebastian J.
2015-01-01
We present a high-throughput microfluidic platform capable of quantitating up to 384 biomarkers in 4 distinct samples by immunoassay. The microfluidic device contains 384 unit cells, which can be individually programmed with pairs of capture and detection antibodies. Samples are quantitated in each unit cell by four independent MITOMI detection areas, allowing four samples to be analyzed in parallel for a total of 1,536 assays per device. We show that the device can be pre-assembled and stored for weeks at elevated temperature, and we performed proof-of-concept experiments simultaneously quantitating IL-6, IL-1β, TNF-α, PSA, and GFP. Finally, we show that the platform can be used to identify functional antibody combinations by screening 64 antibody combinations, requiring up to 384 unique assays per device. PMID:25680117
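The assay arithmetic in the platform is straightforward: 384 unit cells, each read by four MITOMI detection areas, gives 1,536 assays per device. The sketch below also lays out one hypothetical way the 64-combination antibody screen could fill the 384 cells (an assumed 8 x 8 capture/detection split with 6 replicates); the actual spotting layout is not specified in the abstract.

```python
from itertools import product

# 384 unit cells, each read by 4 MITOMI detection areas.
N_UNIT_CELLS, N_SAMPLES = 384, 4
print(N_UNIT_CELLS * N_SAMPLES)  # 1536 assays per device

# Hypothetical screen: 8 capture x 8 detection antibodies = 64
# combinations; 6 replicate unit cells each fills all 384 cells.
capture = [f"cap_{i}" for i in range(8)]    # assumed labels
detection = [f"det_{i}" for i in range(8)]  # assumed labels
layout = [(c, d) for c, d in product(capture, detection) for _ in range(6)]
assert len(layout) == N_UNIT_CELLS
```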
Highly linear, sensitive analog-to-digital converter
NASA Technical Reports Server (NTRS)
Cox, J.; Finley, W. R.
1969-01-01
Analog-to-digital converter converts a 10-volt full-scale input signal into a 13-bit digital output. Advantages include high sensitivity, linearity, low quantizing error, high resistance to mechanical shock and vibration loads, and temporary data storage capabilities.
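For a 10 V full-scale, 13-bit converter the quantization step is 10/2^13 ≈ 1.22 mV, bounding the quantizing error at roughly ±0.61 mV. The sketch below models an ideal unipolar mid-tread quantizer to show that arithmetic; it is a generic illustration, not a model of this specific converter's circuitry.

```python
def quantize(v_in, full_scale=10.0, bits=13):
    """Ideal mid-tread quantizer: map an input voltage to a digital code.

    Step size for 10 V full scale at 13 bits is 10 / 2**13 ~= 1.22 mV,
    so the quantizing error is at most half a step (~0.61 mV).
    """
    step = full_scale / (1 << bits)
    code = max(0, min((1 << bits) - 1, round(v_in / step)))
    return code, code * step - v_in  # digital code, quantizing error

code, err = quantize(3.1415)
print(code, f"{err * 1e3:+.3f} mV")
```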
Code of Federal Regulations, 2010 CFR
2010-10-01
... quality measures data; and (v) A description and quantitative data on how its incentive payment program... for quality improvement, reduction of disparities, research or outreach. (ii) Capability to submit...
Code of Federal Regulations, 2011 CFR
2011-10-01
... quality measures data; and (v) A description and quantitative data on how its incentive payment program... for quality improvement, reduction of disparities, research or outreach. (ii) Capability to submit...
A Process for Assessing NASA's Capability in Aircraft Noise Prediction Technology
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2008-01-01
An acoustic assessment is being conducted by NASA that has been designed to assess the current state of the art in NASA's capability to predict aircraft-related noise and to establish baselines for gauging future progress in the field. The process for determining NASA's current capabilities includes quantifying the differences between noise predictions and measurements of noise from experimental tests. The computed noise predictions are obtained from semi-empirical, analytical, statistical, and numerical codes. In addition, errors and uncertainties are identified and quantified both in the predictions and in the measured data to further enhance the credibility of the assessment. Since the assessment project has not been fully completed, this paper presents preliminary results, based on the contributions of many researchers, and shows a select sample of the types of results obtained regarding the prediction of aircraft noise at both the system and component levels. The system-level results are for engines and aircraft. The component-level results are for fan broadband noise, for jet noise from a variety of nozzles, and for airframe noise from flaps and landing gear parts. There are also sample results for sound attenuation in lined ducts with flow and for the behavior of acoustic lining in ducts.
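Quantifying the difference between predicted and measured noise levels reduces, at its simplest, to aggregate error statistics over matched prediction/measurement pairs. The sketch below shows one generic form of such metrics (bias, spread, and RMS error in dB); the assessment's actual metric definitions are not given in the abstract, so the function and sample values here are illustrative assumptions.

```python
import numpy as np

def assessment_metrics(predicted_db, measured_db):
    """Summarize prediction error for a set of noise levels (dB).

    Returns mean error (bias), sample standard deviation, and RMS
    error -- the kind of aggregate statistics used to baseline a
    prediction capability against experimental data.
    """
    diff = np.asarray(predicted_db) - np.asarray(measured_db)
    return diff.mean(), diff.std(ddof=1), np.sqrt((diff**2).mean())

# Hypothetical matched prediction/measurement pairs.
bias, spread, rms = assessment_metrics([92.1, 88.4, 95.0],
                                       [91.0, 89.2, 94.1])
print(f"bias {bias:+.2f} dB, sd {spread:.2f} dB, rms {rms:.2f} dB")
```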