Impact of implementation choices on quantitative predictions of cell-based computational models
NASA Astrophysics Data System (ADS)
Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.
2017-09-01
'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
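Below is a minimal sketch (not the authors' implementation) of where two of the implementation parameters discussed enter a 2D vertex model: the explicit time step of the overdamped vertex update and the edge-length threshold used to trigger T1 rearrangements. All names, forces and values are illustrative.

```python
import numpy as np

def step_vertices(positions, force_fn, dt, drag=1.0):
    """Overdamped explicit Euler update: drag * dx/dt = F(x)."""
    return positions + (dt / drag) * force_fn(positions)

def t1_candidates(positions, edges, l_min):
    """Edges shorter than the rearrangement threshold l_min are candidates
    for a T1 transition (neighbour exchange)."""
    lengths = np.linalg.norm(positions[edges[:, 0]] - positions[edges[:, 1]], axis=1)
    return np.where(lengths < l_min)[0]

# Toy illustration with a spring force pulling vertices towards the origin.
# Changing dt or l_min changes when (and whether) T1 transitions are triggered.
pos = np.array([[1.0, 0.0], [0.0, 1.2], [-0.9, 0.1]])
edges = np.array([[0, 1], [1, 2], [2, 0]])
for _ in range(500):
    pos = step_vertices(pos, lambda x: -x, dt=0.01)
print(t1_candidates(pos, edges, l_min=0.05))
```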
Liu, Chun; Bridges, Melissa E; Kaundun, Shiv S; Glasgow, Les; Owen, Micheal Dk; Neve, Paul
2017-02-01
Simulation models are useful tools for predicting and comparing the risk of herbicide resistance in weed populations under different management strategies. Most existing models assume a monogenic mechanism governing herbicide resistance evolution. However, growing evidence suggests that herbicide resistance is often inherited in a polygenic or quantitative fashion. Therefore, we constructed a generalised modelling framework to simulate the evolution of quantitative herbicide resistance in summer annual weeds. Real-field management parameters based on Amaranthus tuberculatus (Moq.) Sauer (syn. rudis) control with glyphosate and mesotrione in Midwestern US maize-soybean agroecosystems demonstrated that the model can represent evolved herbicide resistance in realistic timescales. Sensitivity analyses showed that genetic and management parameters had a strong influence on the rate of quantitative herbicide resistance evolution, whilst biological parameters such as emergence and seed bank mortality were less important. The simulation model provides a robust and widely applicable framework for predicting the evolution of quantitative herbicide resistance in summer annual weed populations. The sensitivity analyses identified weed characteristics that would favour herbicide resistance evolution, including high annual fecundity, large resistance phenotypic variance and pre-existing herbicide resistance. Implications for herbicide resistance management and potential use of the model are discussed. © 2016 Society of Chemical Industry.
ERIC Educational Resources Information Center
Zhang, Jiabei
2011-01-01
The purpose of this study was to analyze quantitative needs for more adapted physical education (APE) teachers based on both market- and prevalence-based models. The market-based need for more APE teachers was examined based on APE teacher positions funded, while the prevalence-based need for additional APE teachers was analyzed based on students…
Human Spaceflight Architecture Model (HSFAM) Data Dictionary
NASA Technical Reports Server (NTRS)
Shishko, Robert
2016-01-01
HSFAM is a data model based on the DoDAF 2.02 data model with some purpose-built extensions. These extensions are designed to permit quantitative analyses regarding stakeholder concerns about technical feasibility, configuration and interface issues, and budgetary and/or economic viability.
Pargett, Michael; Umulis, David M
2013-07-15
Mathematical modeling of transcription factor and signaling networks is widely used to understand if and how a mechanism works, and to infer regulatory interactions that produce a model consistent with the observed data. Both of these approaches to modeling are informed by experimental data; however, much of the data available or even acquirable are not quantitative. Data that are not strictly quantitative cannot be used by classical, quantitative, model-based analyses that measure a difference between the measured observation and the model prediction for that observation. To bridge the model-to-data gap, a variety of techniques have been developed to measure model "fitness" and provide numerical values that can subsequently be used in model optimization or model inference studies. Here, we discuss a selection of traditional and novel techniques to transform data of varied quality and enable quantitative comparison with mathematical models. This review is intended both to inform the use of these model analysis methods, focused on parameter estimation, and to help guide the choice of method to use for a given study based on the type of data available. Applying techniques such as normalization or optimal scaling may significantly improve the utility of current biological data in model-based study and allow greater integration between disparate types of data. Copyright © 2013 Elsevier Inc. All rights reserved.
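A minimal sketch of one such transformation: when data are known only up to an unknown scale, a least-squares optimal scale factor can be computed in closed form before evaluating the model-data misfit. The function and variable names are illustrative, not taken from the review.

```python
import numpy as np

def optimally_scaled_sse(model, data):
    """The scale s minimising ||s*model - data||^2 has the closed form
    s = <model, data> / <model, model>; return s and the resulting misfit."""
    s = np.dot(model, data) / np.dot(model, model)
    residual = s * model - data
    return s, np.sum(residual ** 2)

model_pred = np.array([0.2, 0.5, 1.0, 0.7])
relative_data = np.array([0.9, 2.2, 4.1, 3.0])   # measurements in arbitrary units
print(optimally_scaled_sse(model_pred, relative_data))
```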
Using enterprise architecture to analyse how organisational structure impact motivation and learning
NASA Astrophysics Data System (ADS)
Närman, Pia; Johnson, Pontus; Gingnell, Liv
2016-06-01
When technology, environment, or strategies change, organisations need to adjust their structures accordingly. These structural changes do not always enhance organisational performance as intended, partly because organisational developers do not understand the consequences that structural changes have for performance. This article presents a model-based analysis framework for quantitative analysis of the effect of organisational structure on organisational performance in terms of employee motivation and learning. The model is based on Mintzberg's work on organisational structure. The quantitative analysis is formalised using the Object Constraint Language (OCL) and the Unified Modelling Language (UML) and implemented in an enterprise architecture tool.
NASA Astrophysics Data System (ADS)
Fettahlıoğlu, Pınar; Aydoğdu, Mustafa
2018-04-01
The purpose of this research is to investigate the effect of using argumentation and problem-based learning approaches on the development of environmentally responsible behaviours among pre-service science teachers. Experimental activities were implemented for 14 weeks for 52 class hours in an environmental education class within a science teaching department. A mixed method was used as a research design; particularly, a special type of Concurrent Nested Strategy was applied. The quantitative portion was based on the one-group pre-test and post-test models, and the qualitative portion was based on the holistic multiple-case study method. The quantitative portion of the research was conducted with 34 third-year pre-service science teachers studying at a state university. The qualitative portion of the study was conducted with six pre-service science teachers selected among the 34 pre-service science teachers based on the pre-test results obtained from an environmentally responsible behaviour scale. t tests for dependent groups were used to analyse quantitative data. Both descriptive and content analyses of the qualitative data were performed. The results of the study showed that the use of the argumentation and problem-based learning approaches significantly contributed to the development of environmentally responsible behaviours among pre-service science teachers.
The analysis of morphometric data on rocky mountain wolves and arctic wolves using statistical methods
NASA Astrophysics Data System (ADS)
Ammar Shafi, Muhammad; Saifullah Rusiman, Mohd; Hamzah, Nor Shamsidah Amir; Nor, Maria Elena; Ahmad, Noor’ani; Azia Hazida Mohamad Azmi, Nur; Latip, Muhammad Faez Ab; Hilmi Azman, Ahmad
2018-04-01
Morphometrics is the quantitative analysis of the shape and size of specimens. Morphometric quantitative analyses are commonly used to analyse the fossil record and the shape and size of specimens, among other applications. The aim of the study is to find the differences between rocky mountain wolves and arctic wolves based on gender. The sample utilised secondary data which included seven variables as independent variables and two dependent variables. Statistical modelling, such as the analysis of variance (ANOVA) and multivariate analysis of variance (MANOVA), was used in the analysis. The results showed that there exist differences between arctic wolves and rocky mountain wolves based on the independent factors and gender.
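A minimal sketch of such a MANOVA on morphometric measurements using statsmodels; the measurements, group labels and formula below are invented for illustration and are not the study's data.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical morphometric measurements (mm) for two wolf groups and both sexes.
df = pd.DataFrame({
    "skull_length": [231, 225, 240, 238, 210, 205, 215, 212],
    "tooth_row":    [104, 101, 108, 107,  95,  93,  97,  96],
    "group":        ["rocky"] * 4 + ["arctic"] * 4,
    "sex":          ["m", "f", "m", "f"] * 2,
})

# Two dependent variables on the left, group and sex as factors on the right.
manova = MANOVA.from_formula("skull_length + tooth_row ~ group + sex", data=df)
print(manova.mv_test())
```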
Quantitative Measures of Immersion in Cloud and the Biogeography of Cloud Forests
NASA Technical Reports Server (NTRS)
Lawton, R. O.; Nair, U. S.; Ray, D.; Regmi, A.; Pounds, J. A.; Welch, R. M.
2010-01-01
Sites described as tropical montane cloud forests differ greatly, in part because observers tend to differ in their opinion as to what constitutes frequent and prolonged immersion in cloud. This definitional difficulty interferes with hydrologic analyses, assessments of environmental impacts on ecosystems, and biogeographical analyses of cloud forest communities and species. Quantitative measurements of cloud immersion can be obtained on site, but the observations are necessarily spatially limited, although well-placed observers can examine 10-50 km of a mountain range under rainless conditions. Regional analyses, however, require observations at a broader scale. This chapter discusses remote sensing and modeling approaches that can provide quantitative measures of the spatiotemporal patterns of cloud cover and cloud immersion in tropical mountain ranges. These approaches integrate remote sensing tools of various spatial resolutions and frequencies of observation, digital elevation models, regional atmospheric models, and ground-based observations to provide measures of cloud cover, cloud base height, and the intersection of cloud and terrain. This combined approach was applied to the Monteverde region of northern Costa Rica to illustrate how the proportion of time the forest is immersed in cloud may vary spatially and temporally. The observed spatial variation was largely due to patterns of airflow over the mountains. The temporal variation reflected the diurnal rise and fall of the orographic cloud base, which was influenced in turn by synoptic weather conditions, the seasonal movement of the Intertropical Convergence Zone, and the north-easterly trade winds. Knowledge of the proportion of the time that sites are immersed in clouds should facilitate ecological comparisons and biogeographical analyses, as well as land use planning and hydrologic assessments in areas where intensive on-site work is not feasible.
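A minimal sketch of the cloud-terrain intersection idea: given a series of orographic cloud-base heights and a digital elevation model, the fraction of time each grid cell lies at or above cloud base approximates its immersion frequency. Arrays and values are illustrative; the actual analyses combine satellite retrievals and regional atmospheric model output.

```python
import numpy as np

def immersion_fraction(dem, cloud_base_series):
    """dem: (ny, nx) terrain elevation [m];
    cloud_base_series: (nt,) orographic cloud-base heights [m].
    Returns the fraction of observation times each cell lies at or above cloud base."""
    immersed = dem[None, :, :] >= cloud_base_series[:, None, None]
    return immersed.mean(axis=0)

dem = np.array([[900.0, 1200.0], [1500.0, 1750.0]])
cloud_base = np.array([1100.0, 1300.0, 1600.0, 1250.0])
print(immersion_fraction(dem, cloud_base))
```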
Wang, Hui; Jiang, Mingyue; Li, Shujun; Hse, Chung-Yun; Jin, Chunde; Sun, Fangli; Li, Zhuo
2017-09-01
Cinnamaldehyde amino acid Schiff base (CAAS) is a new class of safe, bioactive compounds which could be developed as potential antifungal agents for fungal infections. To design new cinnamaldehyde amino acid Schiff base compounds with high bioactivity, the quantitative structure-activity relationships (QSARs) for CAAS compounds against Aspergillus niger (A. niger) and Penicillium citrinum (P. citrinum) were analysed. The QSAR models (R2 = 0.9346 for A. niger, R2 = 0.9590 for P. citrinum) were constructed and validated. The models indicated that the molecular polarity and the Max atomic orbital electronic population had a significant effect on antifungal activity. Based on the best QSAR models, two new compounds were designed and synthesized. Antifungal activity tests proved that both of them have great bioactivity against the selected fungi.
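A minimal sketch of the type of regression that underlies such a QSAR model: an ordinary least-squares fit of antifungal activity on molecular descriptors (e.g., polarity and the maximum atomic orbital electronic population), reporting R2. The descriptor and activity values below are invented for illustration.

```python
import numpy as np

def fit_qsar(X, y):
    """Ordinary least squares with intercept; returns coefficients and R^2."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)
    return coef, r2

descriptors = np.array([[1.2, 0.31], [0.8, 0.42], [1.5, 0.28],
                        [1.1, 0.35], [0.9, 0.40]])   # hypothetical descriptor values
activity = np.array([4.1, 3.2, 4.6, 3.9, 3.4])       # hypothetical activity, e.g. -log(MIC)
print(fit_qsar(descriptors, activity))
```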
Pucher, Katharina K; Candel, Math J J M; Krumeich, Anja; Boot, Nicole M W M; De Vries, Nanne K
2015-07-05
We report on the longitudinal quantitative and qualitative data resulting from a two-year trajectory (2008-2011) based on the DIagnosis of Sustainable Collaboration (DISC) model. This trajectory aimed to support regional coordinators of comprehensive school health promotion (CSHP) in systematically developing change management and project management to establish intersectoral collaboration. Multilevel analyses of quantitative data on the determinants of collaboration according to the DISC model were done, with 90 respondents (response rate 57%) at pretest and 69 respondents (52%) at posttest. NVivo analyses of the qualitative data collected during the trajectory included minutes of monthly/bimonthly personal/telephone interviews (N = 65) with regional coordinators, and documents they produced about their activities. Quantitative data showed major improvements in change management and project management. There were also improvements in consensus development, commitment formation, formalization of the CSHP, and alignment of policies, although organizational problems within the collaboration increased. Content analyses of qualitative data identified five main management styles: (1) facilitating active involvement of relevant parties; (2) informing collaborating parties; (3) controlling and (4) supporting their task accomplishment; and (5) coordinating the collaborative processes. We have contributed to the fundamental understanding of the development of intersectoral collaboration by combining qualitative and quantitative data. Our results support a systematic approach to intersectoral collaboration using the DISC model. They also suggest five main management styles to improve intersectoral collaboration in the initial stage. The outcomes are useful for health professionals involved in similar ventures.
Hollingsworth, T Déirdre; Adams, Emily R; Anderson, Roy M; Atkins, Katherine; Bartsch, Sarah; Basáñez, María-Gloria; Behrend, Matthew; Blok, David J; Chapman, Lloyd A C; Coffeng, Luc; Courtenay, Orin; Crump, Ron E; de Vlas, Sake J; Dobson, Andy; Dyson, Louise; Farkas, Hajnal; Galvani, Alison P; Gambhir, Manoj; Gurarie, David; Irvine, Michael A; Jervis, Sarah; Keeling, Matt J; Kelly-Hope, Louise; King, Charles; Lee, Bruce Y; Le Rutte, Epke A; Lietman, Thomas M; Ndeffo-Mbah, Martial; Medley, Graham F; Michael, Edwin; Pandey, Abhishek; Peterson, Jennifer K; Pinsent, Amy; Porco, Travis C; Richardus, Jan Hendrik; Reimer, Lisa; Rock, Kat S; Singh, Brajendra K; Stolk, Wilma; Swaminathan, Subramanian; Torr, Steve J; Townsend, Jeffrey; Truscott, James; Walker, Martin; Zoueva, Alexandra
2015-12-09
Quantitative analysis and mathematical models are useful tools in informing strategies to control or eliminate disease. Currently, there is an urgent need to develop these tools to inform policy to achieve the 2020 goals for neglected tropical diseases (NTDs). In this paper we give an overview of a collection of novel model-based analyses which aim to address key questions on the dynamics of transmission and control of nine NTDs: Chagas disease, visceral leishmaniasis, human African trypanosomiasis, leprosy, soil-transmitted helminths, schistosomiasis, lymphatic filariasis, onchocerciasis and trachoma. Several common themes resonate throughout these analyses, including: the influence of the epidemiological setting on the success of interventions; targeting groups who are at highest risk of infection or re-infection; and reaching populations who are not accessing interventions and may act as a reservoir for infection. The results also highlight the challenge of maintaining elimination 'as a public health problem' when true elimination is not reached. The models elucidate the factors that may be contributing most to persistence of disease and discuss the requirements for eventually achieving true elimination, if that is possible. Overall this collection presents new analyses to inform current control initiatives. These papers form a base from which further development of the models and more rigorous validation against a variety of datasets can help to give more detailed advice. At the moment, the models' predictions are being considered as the world prepares for a final push towards control or elimination of neglected tropical diseases by 2020.
A quantitative model of optimal data selection in Wason's selection task.
Hattori, Masasi
2002-10-01
The optimal data selection model proposed by Oaksford and Chater (1994) successfully formalized Wason's selection task (Wason, 1966). The model, however, involved some questionable assumptions and was also not sufficient as a model of the task because it could not provide quantitative predictions of the card selection frequencies. In this paper, the model was revised to provide quantitative fits to the data. The model can predict the selection frequencies of cards based on a selection tendency function (STF), or conversely, it enables the estimation of subjective probabilities from data. Past experimental data were first re-analysed based on the model. In Experiment 1, the superiority of the revised model was shown. However, when the relationship between antecedent and consequent was forced to deviate from the biconditional form, the model was not supported. In Experiment 2, it was shown that sufficient emphasis on probabilistic information can affect participants' performance. A detailed experimental method to sort participants by probabilistic strategies was introduced. Here, the model was supported by a subgroup of participants who used the probabilistic strategy. Finally, the results were discussed from the viewpoint of adaptive rationality.
Gamal El-Dien, Omnia; Ratcliffe, Blaise; Klápště, Jaroslav; Porth, Ilga; Chen, Charles; El-Kassaby, Yousry A.
2016-01-01
The open-pollinated (OP) family testing combines the simplest known progeny evaluation and quantitative genetics analyses as candidates’ offspring are assumed to represent independent half-sib families. The accuracy of genetic parameter estimates is often questioned as the assumption of “half-sibling” in OP families may often be violated. We compared the pedigree- vs. marker-based genetic models by analysing 22-yr height and 30-yr wood density for 214 white spruce [Picea glauca (Moench) Voss] OP families represented by 1694 individuals growing on one site in Quebec, Canada. Assuming half-sibling, the pedigree-based model was limited to estimating the additive genetic variances which, in turn, were grossly overestimated as they were confounded by very minor dominance and major additive-by-additive epistatic genetic variances. In contrast, the implemented genomic pairwise realized relationship models allowed the disentanglement of additive from all nonadditive factors through genetic variance decomposition. The marker-based models produced more realistic narrow-sense heritability estimates and, for the first time, allowed estimating the dominance and epistatic genetic variances from OP testing. In addition, the genomic models showed better prediction accuracies compared to pedigree models and were able to predict individual breeding values for new individuals from untested families, which was not possible using the pedigree-based model. Clearly, the use of marker-based relationship approach is effective in estimating the quantitative genetic parameters of complex traits even under simple and shallow pedigree structure. PMID:26801647
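A minimal sketch of the marker-based (realized) relationship matrix that replaces the pedigree-derived half-sib assumption in such models, following a common VanRaden-type construction; the genotype matrix below is illustrative.

```python
import numpy as np

def genomic_relationship(M):
    """M: (n_individuals, n_snps) genotype matrix coded 0/1/2.
    Returns the VanRaden genomic relationship matrix G = ZZ' / (2 * sum p(1-p))."""
    p = M.mean(axis=0) / 2.0        # allele frequencies
    Z = M - 2.0 * p                 # centre by expected genotype
    denom = 2.0 * np.sum(p * (1.0 - p))
    return Z @ Z.T / denom

M = np.array([[0, 1, 2, 1],
              [1, 1, 0, 2],
              [2, 0, 1, 1]])        # toy genotypes for three individuals
print(genomic_relationship(M))
```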
Moeyaert, Mariola; Ugille, Maaike; Ferron, John M; Beretvas, S Natasha; Van den Noortgate, Wim
2014-09-01
The quantitative methods for analyzing single-subject experimental data have expanded during the last decade, including the use of regression models to statistically analyze the data, but still a lot of questions remain. One question is how to specify predictors in a regression model to account for the specifics of the design and estimate the effect size of interest. These quantitative effect sizes are used in retrospective analyses and allow synthesis of single-subject experimental study results which is informative for evidence-based decision making, research and theory building, and policy discussions. We discuss different design matrices that can be used for the most common single-subject experimental designs (SSEDs), namely, the multiple-baseline designs, reversal designs, and alternating treatment designs, and provide empirical illustrations. The purpose of this article is to guide single-subject experimental data analysts interested in analyzing and meta-analyzing SSED data. © The Author(s) 2014.
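A minimal sketch of one possible design matrix for a single case in an AB (baseline-intervention) phase design, coding a baseline trend, an immediate level change, and a change in slope with time recentred at the intervention point. The coding illustrates the general piecewise-regression logic discussed, not a specific matrix from the article.

```python
import numpy as np

def ab_design_matrix(n_obs, t_intervention):
    """Columns: intercept, baseline time trend, level change at intervention,
    post-intervention change in slope (time recentred at the intervention point)."""
    t = np.arange(n_obs)
    phase = (t >= t_intervention).astype(float)
    return np.column_stack([np.ones(n_obs), t, phase, phase * (t - t_intervention)])

X = ab_design_matrix(n_obs=10, t_intervention=5)
print(X)
```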
Principal Components Analyses of the MMPI-2 PSY-5 Scales: Identification of Facet Subscales
ERIC Educational Resources Information Center
Arnau, Randolph C.; Handel, Richard W.; Archer, Robert P.
2005-01-01
The Personality Psychopathology Five (PSY-5) is a five-factor personality trait model designed for assessing personality pathology using quantitative dimensions. Harkness, McNulty, and Ben-Porath developed Minnesota Multiphasic Personality Inventory-2 (MMPI-2) scales based on the PSY-5 model, and these scales were recently added to the standard…
NASA Astrophysics Data System (ADS)
Cook, B.; Anchukaitis, K. J.
2017-12-01
Comparative analyses of paleoclimate reconstructions and climate model simulations can provide valuable insights into past and future climate events. Conducting meaningful and quantitative comparisons, however, can be difficult for a variety of reasons. Here, we use tree-ring based hydroclimate reconstructions to discuss some best practices for paleoclimate-model comparisons, highlighting recent studies that have successfully used this approach. These analyses have improved our understanding of the Medieval-era megadroughts, ocean forcing of large-scale drought patterns, and even climate change contributions to future drought risk. Additional work is needed, however, to better reconcile and formalize uncertainties across observed, modeled, and reconstructed variables. In this regard, process-based forward models of proxy systems will likely be a critical tool moving forward.
Testing process predictions of models of risky choice: a quantitative model comparison approach
Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard
2013-01-01
This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472
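A minimal sketch of the priority heuristic's lexicographic stopping rule for simple gain gambles, as described by Brandstätter et al. (2006): compare minimum gains, then the probabilities of the minimum gains, then maximum gains, stopping as soon as a difference reaches the aspiration level (one tenth of the maximum gain, or 0.1 on the probability scale). This simplified version omits the rounding to prominent numbers used in the original paper and is not the authors' experimental code.

```python
def priority_heuristic(gamble_a, gamble_b):
    """Each gamble: (min_gain, p_min, max_gain). Returns 'A' or 'B'.
    Simplified aspiration levels: 0.1 * max gain and 0.1 on the probability scale."""
    (min_a, pmin_a, max_a), (min_b, pmin_b, max_b) = gamble_a, gamble_b
    aspiration = 0.1 * max(max_a, max_b)
    if abs(min_a - min_b) >= aspiration:      # reason 1: minimum gains
        return 'A' if min_a > min_b else 'B'
    if abs(pmin_a - pmin_b) >= 0.1:           # reason 2: probability of the minimum gain
        return 'A' if pmin_a < pmin_b else 'B'
    return 'A' if max_a > max_b else 'B'      # reason 3: maximum gains

# A: 4000 with p=.8, else 0   vs.   B: 3000 for sure
print(priority_heuristic((0.0, 0.2, 4000.0), (3000.0, 1.0, 3000.0)))
```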
A two-factor error model for quantitative steganalysis
NASA Astrophysics Data System (ADS)
Böhme, Rainer; Ker, Andrew D.
2006-02-01
Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.
NASA Technical Reports Server (NTRS)
Kruse, Fred A.; Dwyer, John L.
1993-01-01
The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) measures reflected light in 224 contiguous spectral bands in the 0.4 to 2.45 micron region of the electromagnetic spectrum. Numerous studies have used these data for mineralogic identification and mapping based on the presence of diagnostic spectral features. Quantitative mapping requires conversion of the AVIRIS data to physical units (usually reflectance) so that analysis results can be compared and validated with field and laboratory measurements. This study evaluated two different AVIRIS calibration techniques to ground reflectance, an empirically based method and an atmospheric model-based method, to determine their effects on quantitative scientific analyses. Expert system analysis and linear spectral unmixing were applied to both calibrated data sets to determine the effect of the calibration on the mineral identification and quantitative mapping results. Comparison of the image-map results and image reflectance spectra indicates that the model-based calibrated data can be used with automated mapping techniques to produce accurate maps showing the spatial distribution and abundance of surface mineralogy. This has positive implications for future operational mapping using AVIRIS or similar imaging spectrometer data sets without requiring a priori knowledge.
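A minimal sketch of the empirically based calibration route evaluated in such studies (an empirical-line style fit): per-band gains and offsets are estimated from field-measured reflectances of bright and dark targets and then applied to the image radiance. All numbers are illustrative.

```python
import numpy as np

def empirical_line(radiance_targets, reflectance_targets):
    """Fit reflectance = gain * radiance + offset per band from calibration targets.
    Both inputs: (n_targets, n_bands)."""
    gains, offsets = [], []
    for band in range(radiance_targets.shape[1]):
        g, o = np.polyfit(radiance_targets[:, band], reflectance_targets[:, band], 1)
        gains.append(g)
        offsets.append(o)
    return np.array(gains), np.array(offsets)

# Two targets (bright, dark) x three bands; made-up numbers.
rad = np.array([[820.0, 640.0, 410.0], [95.0, 70.0, 50.0]])
refl = np.array([[0.55, 0.60, 0.58], [0.04, 0.05, 0.06]])
gains, offsets = empirical_line(rad, refl)
print(gains, offsets)
```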
Larue, Ruben T H M; Defraene, Gilles; De Ruysscher, Dirk; Lambin, Philippe; van Elmpt, Wouter
2017-02-01
Quantitative analysis of tumour characteristics based on medical imaging is an emerging field of research. In recent years, quantitative imaging features derived from CT, positron emission tomography and MR scans were shown to be of added value in the prediction of outcome parameters in oncology, in what is called the radiomics field. However, results might be difficult to compare owing to a lack of standardized methodologies to conduct quantitative image analyses. In this review, we aim to present an overview of the current challenges, technical routines and protocols that are involved in quantitative imaging studies. The first issue that should be overcome is the dependency of several features on the scan acquisition and image reconstruction parameters. Adopting consistent methods in the subsequent target segmentation step is equally crucial. To further establish robust quantitative image analyses, standardization or at least calibration of imaging features based on different feature extraction settings is required, especially for texture- and filter-based features. Several open-source and commercial software packages to perform feature extraction are currently available, all with slightly different functionalities, which makes benchmarking quite challenging. The number of imaging features calculated is typically larger than the number of patients studied, which emphasizes the importance of proper feature selection and prediction model-building routines to prevent overfitting. Even though many of these challenges still need to be addressed before quantitative imaging can be brought into daily clinical practice, radiomics is expected to be a critical component for the integration of image-derived information to personalize treatment in the future.
PRIORITIZING FUTURE RESEARCH ON OFF-LABEL PRESCRIBING: RESULTS OF A QUANTITATIVE EVALUATION
Walton, Surrey M.; Schumock, Glen T.; Lee, Ky-Van; Alexander, G. Caleb; Meltzer, David; Stafford, Randall S.
2015-01-01
Background: Drug use for indications not approved by the Food and Drug Administration exceeds 20% of prescribing. Available compendia indicate that a minority of off-label uses are well supported by evidence. Policy makers, however, lack information to identify where systematic reviews of the evidence or other research would be most valuable. Methods: We developed a quantitative model for prioritizing individual drugs for future research on off-label uses. The base model incorporated three key factors: 1) the volume of off-label use with inadequate evidence, 2) safety, and 3) cost and market considerations. Nationally representative prescribing data were used to estimate the number of off-label drug uses by indication from 1/2005 through 6/2007 in the United States, and these indications were then categorized according to the adequacy of scientific support. Black box warnings and safety alerts were used to quantify drug safety. Drug cost, date of market entry, and marketing expenditures were used to quantify cost and market considerations. Each drug was assigned a relative value for each factor, and the factors were then weighted in the final model to produce a priority score. Sensitivity analyses were conducted by varying the weightings and model parameters. Results: Drugs that were consistently ranked highly in both our base model and sensitivity analyses included quetiapine, warfarin, escitalopram, risperidone, montelukast, bupropion, sertraline, venlafaxine, celecoxib, lisinopril, duloxetine, trazodone, olanzapine, and epoetin alfa. Conclusion: Future research into off-label drug use should focus on drugs used frequently with inadequate supporting evidence, particularly if further concerns are raised by known safety issues, high drug cost, recent market entry, and extensive marketing. Based on quantitative measures of these factors, we have prioritized drugs where targeted research and policy activities have high potential value. PMID:19025425
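A minimal sketch of the weighted-score structure described in the Methods: relative values for the volume of poorly supported off-label use, safety, and cost/market considerations are combined into a single priority score. The weights and drug values below are placeholders, not those used in the study.

```python
def priority_score(volume, safety, cost_market, weights=(0.5, 0.3, 0.2)):
    """Each factor is a relative value (e.g. scaled 0-1); higher = higher priority.
    Weights are illustrative and would be varied in sensitivity analyses."""
    w_v, w_s, w_c = weights
    return w_v * volume + w_s * safety + w_c * cost_market

drugs = {"drug_x": (0.9, 0.7, 0.6), "drug_y": (0.4, 0.9, 0.3)}   # hypothetical values
ranked = sorted(drugs, key=lambda d: priority_score(*drugs[d]), reverse=True)
print(ranked)
```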
Peters, Susan; Kromhout, Hans; Portengen, Lützen; Olsson, Ann; Kendzia, Benjamin; Vincent, Raymond; Savary, Barbara; Lavoué, Jérôme; Cavallo, Domenico; Cattaneo, Andrea; Mirabelli, Dario; Plato, Nils; Fevotte, Joelle; Pesch, Beate; Brüning, Thomas; Straif, Kurt; Vermeulen, Roel
2013-01-01
We describe the elaboration and sensitivity analyses of a quantitative job-exposure matrix (SYN-JEM) for respirable crystalline silica (RCS). The aim was to gain insight into the robustness of the SYN-JEM RCS estimates based on critical decisions taken in the elaboration process. SYN-JEM for RCS exposure consists of three axes (job, region, and year) based on estimates derived from a previously developed statistical model. To elaborate SYN-JEM, several decisions were taken: i.e. the application of (i) a single time trend; (ii) region-specific adjustments in RCS exposure; and (iii) a prior job-specific exposure level (by the semi-quantitative DOM-JEM), with an override of 0 mg/m3 for jobs a priori defined as non-exposed. Furthermore, we assumed that exposure levels reached a ceiling in 1960 and remained constant prior to this date. We applied SYN-JEM to the occupational histories of subjects from a large international pooled community-based case-control study. Cumulative exposure levels derived with SYN-JEM were compared with those from alternative models, described by Pearson correlation (Rp) and differences in unit of exposure (mg/m3-years). Alternative models concerned changes in application of job- and region-specific estimates and exposure ceiling, and omitting the a priori exposure ranking. Cumulative exposure levels for the study subjects ranged from 0.01 to 60 mg/m3-years, with a median of 1.76 mg/m3-years. Exposure levels derived from SYN-JEM and alternative models were overall highly correlated (Rp > 0.90), although somewhat lower when omitting the region estimate (Rp = 0.80) or not taking into account the assigned semi-quantitative exposure level (Rp = 0.65). Modification of the time trend (i.e. exposure ceiling at 1950 or 1970, or assuming a decline before 1960) caused the largest changes in absolute exposure levels (26-33% difference), but without changing the relative ranking (Rp = 0.99). Exposure estimates derived from SYN-JEM appeared to be plausible compared with (historical) levels described in the literature. Decisions taken in the development of SYN-JEM did not critically change the cumulative exposure levels. The influence of region-specific estimates needs to be explored in future risk analyses.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-05
.... This draft document describes the quantitative analyses that are being conducted as part of the review... primary (health-based) CO NAAQS, the Agency is conducting qualitative and quantitative assessments... results, observations, and related uncertainties associated with the quantitative analyses performed. An...
75 FR 79370 - Official Release of the MOVES2010a and EMFAC2007 Motor Vehicle Emissions Models for...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-20
...: This Notice announces the availability of two new EPA guidance documents for: completing quantitative... of the MOVES model (MOVES2010a) for official use for quantitative CO, PM 2.5, and PM 10 hot-spot... emissions model is required to be used in quantitative CO and PM hot-spot analyses for project-level...
Quantification of Microbial Phenotypes
Martínez, Verónica S.; Krömer, Jens O.
2016-01-01
Metabolite profiling technologies have improved to generate close to quantitative metabolomics data, which can be employed to quantitatively describe the metabolic phenotype of an organism. Here, we review the current technologies available for quantitative metabolomics, present their advantages and drawbacks, and the current challenges to generate fully quantitative metabolomics data. Metabolomics data can be integrated into metabolic networks using thermodynamic principles to constrain the directionality of reactions. Here we explain how to estimate Gibbs energy under physiological conditions, including examples of the estimations, and the different methods for thermodynamics-based network analysis. The fundamentals of the methods and how to perform the analyses are described. Finally, an example applying quantitative metabolomics to a yeast model by 13C fluxomics and thermodynamics-based network analysis is presented. The example shows that (1) these two methods are complementary to each other; and (2) there is a need to take into account Gibbs energy errors. Better estimations of metabolic phenotypes will be obtained when further constraints are included in the analysis. PMID:27941694
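A minimal sketch of the thermodynamic constraint described: the reaction Gibbs energy under physiological conditions follows from the transformed standard Gibbs energy and measured metabolite concentrations, and only directions with negative ΔG' are allowed in the network. The reaction and values below are hypothetical.

```python
import numpy as np

R = 8.314e-3   # gas constant, kJ mol^-1 K^-1

def reaction_gibbs(dg0_prime, stoich, conc, T=310.15):
    """dG' = dG0' + RT * sum(nu_i * ln c_i); conc in mol/L, dG in kJ/mol."""
    return dg0_prime + R * T * np.sum(stoich * np.log(conc))

# Hypothetical reaction A -> B with dG0' = +5 kJ/mol.
stoich = np.array([-1.0, 1.0])
conc = np.array([5e-3, 1e-5])          # measured [A], [B] in mol/L
dg = reaction_gibbs(5.0, stoich, conc)
print(dg, "forward direction feasible" if dg < 0 else "forward direction infeasible")
```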
Code of Federal Regulations, 2013 CFR
2013-01-01
... robust analytical methods. The Department seeks to use qualitative and quantitative analytical methods... uncertainties will be carried forward in subsequent analyses. The use of quantitative models will be... manufacturers and other interested parties. The use of quantitative models will be supplemented by qualitative...
Code of Federal Regulations, 2012 CFR
2012-01-01
... robust analytical methods. The Department seeks to use qualitative and quantitative analytical methods... uncertainties will be carried forward in subsequent analyses. The use of quantitative models will be... manufacturers and other interested parties. The use of quantitative models will be supplemented by qualitative...
Code of Federal Regulations, 2014 CFR
2014-01-01
... robust analytical methods. The Department seeks to use qualitative and quantitative analytical methods... uncertainties will be carried forward in subsequent analyses. The use of quantitative models will be... manufacturers and other interested parties. The use of quantitative models will be supplemented by qualitative...
Hadfield, J D; Nakagawa, S
2010-03-01
Although many of the statistical techniques used in comparative biology were originally developed in quantitative genetics, subsequent development of comparative techniques has progressed in relative isolation. Consequently, many of the new and planned developments in comparative analysis already have well-tested solutions in quantitative genetics. In this paper, we take three recent publications that develop phylogenetic meta-analysis, either implicitly or explicitly, and show how they can be considered as quantitative genetic models. We highlight some of the difficulties with the proposed solutions, and demonstrate that standard quantitative genetic theory and software offer solutions. We also show how results from Bayesian quantitative genetics can be used to create efficient Markov chain Monte Carlo algorithms for phylogenetic mixed models, thereby extending their generality to non-Gaussian data. Of particular utility is the development of multinomial models for analysing the evolution of discrete traits, and the development of multi-trait models in which traits can follow different distributions. Meta-analyses often include a nonrandom collection of species for which the full phylogenetic tree has only been partly resolved. Using missing data theory, we show how the presented models can be used to correct for nonrandom sampling and show how taxonomies and phylogenies can be combined to give a flexible framework with which to model dependence.
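The correspondence drawn here can be written compactly as the standard quantitative-genetic ("animal") mixed model, in which the additive relationship matrix A is replaced by the phylogenetic correlation matrix for comparative data (a sketch of the general form, not a formula quoted from the paper):

```latex
\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{a} + \mathbf{e},
\qquad
\mathbf{a} \sim \mathcal{N}\!\left(\mathbf{0},\ \sigma_{a}^{2}\mathbf{A}\right),
\qquad
\mathbf{e} \sim \mathcal{N}\!\left(\mathbf{0},\ \sigma_{e}^{2}\mathbf{I}\right)
```

Here y holds the trait or effect-size values, β the fixed effects, a the phylogenetically (or genetically) correlated random effects, and e the residuals.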
Quantitative Agent Based Model of User Behavior in an Internet Discussion Forum
Sobkowicz, Pawel
2013-01-01
The paper presents the application of an agent-based simulation of opinion evolution, based on nonlinear emotion/information/opinion (E/I/O) individual dynamics, to an actual Internet discussion forum. The goal is to reproduce the results of two-year long observations and analyses of the user communication behavior and of the expressed opinions and emotions, via simulations using an agent based model. The model allowed various characteristics of the forum to be derived, including the distribution of user activity and popularity (outdegree and indegree), the distribution of length of dialogs between the participants, their political sympathies and the emotional content and purpose of the comments. The parameters used in the model have intuitive meanings, and can be translated into psychological observables. PMID:24324606
Analysis of rocket engine injection combustion processes
NASA Technical Reports Server (NTRS)
Salmon, J. W.
1976-01-01
A critique is given of the JANNAF sub-critical propellant injection/combustion process analysis computer models and the application of the models to correlation of well-documented hot-fire engine data bases. These programs are the distributed energy release (DER) model for conventional liquid propellant injectors and the coaxial injection combustion model (CICM) for gaseous annulus/liquid core coaxial injectors. The critique identifies model inconsistencies while the computer analyses provide quantitative data on predictive accuracy. The program comprises three tasks: (1) computer program review and operations; (2) analysis and data correlations; and (3) documentation.
Assessing the risk posed by natural hazards to infrastructures
NASA Astrophysics Data System (ADS)
Eidsvig, Unni; Kristensen, Krister; Vidar Vangelsten, Bjørn
2015-04-01
Modern society is increasingly dependent on infrastructures to maintain its function, and disruption in one of the infrastructure systems may have severe consequences. The Norwegian municipalities have, according to legislation, a duty to carry out a risk and vulnerability analysis and to plan and prepare for emergencies in a short- and long-term perspective. Vulnerability analysis of the infrastructures and their interdependencies is an important part of this analysis. This paper proposes a model for assessing the risk posed by natural hazards to infrastructures. The model prescribes a three-level analysis with increasing level of detail, moving from qualitative to quantitative analysis. This paper focuses on the second level, which consists of a semi-quantitative analysis. The purpose of this analysis is to perform a screening of the scenarios of natural hazards threatening the infrastructures identified in the level 1 analysis and to investigate the need for further analyses, i.e. level 3 quantitative analyses. The proposed level 2 analysis considers the frequency of the natural hazard and different aspects of vulnerability, including the physical vulnerability of the infrastructure itself and the societal dependency on the infrastructure. An indicator-based approach is applied, ranking the indicators on a relative scale. The proposed indicators characterize the robustness of the infrastructure and the importance of the infrastructure, as well as interdependencies between society and infrastructure affecting the potential for cascading effects. Each indicator is ranked on a 1-5 scale based on pre-defined ranking criteria. The aggregated risk estimate is a combination of the semi-quantitative vulnerability indicators, as well as quantitative estimates of the frequency of the natural hazard and the number of users of the infrastructure. Case studies for two Norwegian municipalities are presented, where the risk to primary roads, water supply and the power network posed by storms and landslides is assessed. The application examples show that the proposed model provides a useful tool for the screening of undesirable events, with the ultimate goal of reducing societal vulnerability.
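A minimal sketch of the kind of indicator aggregation used in such a level 2 screening: indicators ranked on a 1-5 scale are combined with the hazard frequency and the number of infrastructure users into a relative risk value. The aggregation rule and scores below are illustrative assumptions, not the authors' exact formula.

```python
def screening_risk(frequency_per_year, users, indicator_scores):
    """indicator_scores: dict of 1-5 rankings (robustness, importance,
    interdependency, ...). Returns a relative risk value for screening."""
    vulnerability = sum(indicator_scores.values()) / (5.0 * len(indicator_scores))
    return frequency_per_year * users * vulnerability

scores = {"robustness": 4, "importance": 5, "interdependency": 3}
print(screening_risk(frequency_per_year=0.1, users=20000, indicator_scores=scores))
```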
Hou, Zhifei; Sun, Guoxiang; Guo, Yong
2016-01-01
The present study demonstrated the use of the Linear Quantitative Profiling Method (LQPM) to evaluate the quality of Alkaloids of Sophora flavescens (ASF) based on chromatographic fingerprints in an accurate, economical and fast way. Both linear qualitative and quantitative similarities were calculated in order to monitor the consistency of the samples. The results indicate that the linear qualitative similarity (LQLS) is not sufficiently discriminating due to the predominant presence of three alkaloid compounds (matrine, sophoridine and oxymatrine) in the test samples; however, the linear quantitative similarity (LQTS) was shown to be able to clearly distinguish the samples based on differences in the quantitative content of all the chemical components. In addition, the fingerprint analysis was also supported by the quantitative analysis of three marker compounds. The LQTS was found to be highly correlated to the contents of the marker compounds, indicating that quantitative analysis of the marker compounds may be substituted with the LQPM based on the chromatographic fingerprints for the purpose of quantifying all chemicals of a complex sample system. Furthermore, once a reference fingerprint (RFP) has been developed directly from a standard preparation and the composition similarities have been calculated, the LQPM can employ the classical mathematical model to effectively quantify the multiple components of ASF samples without any chemical standard.
The Structure of Psychopathology: Toward an Expanded Quantitative Empirical Model
Wright, Aidan G.C.; Krueger, Robert F.; Hobbs, Megan J.; Markon, Kristian E.; Eaton, Nicholas R.; Slade, Tim
2013-01-01
There has been substantial recent interest in the development of a quantitative, empirically based model of psychopathology. However, the majority of pertinent research has focused on analyses of diagnoses, as described in current official nosologies. This is a significant limitation because existing diagnostic categories are often heterogeneous. In the current research, we aimed to redress this limitation of the existing literature, and to directly compare the fit of categorical, continuous, and hybrid (i.e., combined categorical and continuous) models of syndromes derived from indicators more fine-grained than diagnoses. We analyzed data from a large representative epidemiologic sample (the 2007 Australian National Survey of Mental Health and Wellbeing; N = 8,841). Continuous models provided the best fit for each syndrome we observed (Distress, Obsessive Compulsivity, Fear, Alcohol Problems, Drug Problems, and Psychotic Experiences). In addition, the best fitting higher-order model of these syndromes grouped them into three broad spectra: Internalizing, Externalizing, and Psychotic Experiences. We discuss these results in terms of future efforts to refine the emerging empirically based, dimensional-spectrum model of psychopathology, and to use the model to frame psychopathology research more broadly. PMID:23067258
Lackey, Daniel P; Carruth, Eric D; Lasher, Richard A; Boenisch, Jan; Sachse, Frank B; Hitchcock, Robert W
2011-11-01
Gap junctions play a fundamental role in intercellular communication in cardiac tissue. Various types of heart disease, including hypertrophy and ischemia, are associated with alterations of the spatial arrangement of gap junctions. Previous studies applied two-dimensional optical and electron microscopy to visualize gap junction arrangements. In normal cardiomyocytes, gap junctions were primarily found at cell ends, but can also be found in more central regions. In this study, we extended these approaches toward three-dimensional reconstruction of gap junction distributions based on high-resolution scanning confocal microscopy and image processing. We developed methods for quantitative characterization of gap junction distributions based on analysis of intensity profiles along the principal axes of myocytes. The analyses characterized gap junction polarization at cell ends and higher-order statistical image moments of intensity profiles. The methodology was tested in rat ventricular myocardium. Our analysis yielded novel quantitative data on gap junction distributions. In particular, the analysis demonstrated that the distributions exhibit significant variability with respect to polarization, skewness, and kurtosis. We suggest that this methodology provides a quantitative alternative to current approaches based on visual inspection, with applications in particular in characterization of engineered and diseased myocardium. Furthermore, we propose that these data provide improved input for computational modeling of cardiac conduction.
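A minimal sketch of the profile statistics mentioned: the gap-junction label intensity along a myocyte's principal axis is treated as a distribution over position, from which an end-polarization measure and higher-order moments (skewness, kurtosis) are computed. The polarization definition used here (label within the terminal 10% at each end) is an illustrative assumption, not necessarily the authors' exact definition.

```python
import numpy as np

def profile_moments(profile, end_fraction=0.1):
    """Treat the normalised intensity profile as a distribution over position
    and compute end polarization plus weighted skewness and kurtosis."""
    x = np.arange(len(profile), dtype=float)
    w = profile / profile.sum()
    mu = np.sum(w * x)
    var = np.sum(w * (x - mu) ** 2)
    skew = np.sum(w * (x - mu) ** 3) / var ** 1.5
    kurt = np.sum(w * (x - mu) ** 4) / var ** 2
    n_end = max(1, int(end_fraction * len(profile)))
    polarization = w[:n_end].sum() + w[-n_end:].sum()
    return polarization, skew, kurt

profile = np.array([8.0, 1.0, 0.5, 0.3, 0.4, 0.2, 0.6, 0.4, 1.2, 9.0])  # toy profile
print(profile_moments(profile))
```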
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-20
.... Please note that EPA's policy is that public comments, whether submitted electronically or in paper, will... learning to perform quantitative hot-spot analyses; new burden associated with using the MOVES model for..., adjustment for increased burden associated with quantitative hot-spot analyses, an adjustment for the...
Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard
2011-01-01
Image data are increasingly encountered and are of growing importance in many areas of science. Much of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method frequently correspond to subregions of visible spots that may represent post-translational modifications or co-migrating proteins that cannot be visually resolved from adjacent, more abundant proteins on the gel image. Thus, it is possible that this image-based approach may actually improve the realized resolution of the gel, revealing differentially expressed proteins that would not have even been detected as spots by modern spot-based analyses.
Geospace Environment Modeling 2008-2009 Challenge: Ground Magnetic Field Perturbations
NASA Technical Reports Server (NTRS)
Pulkkinen, A.; Kuznetsova, M.; Ridley, A.; Raeder, J.; Vapirev, A.; Weimer, D.; Weigel, R. S.; Wiltberger, M.; Millward, G.; Rastatter, L.;
2011-01-01
Acquiring quantitative, metrics-based knowledge about the performance of various space physics modeling approaches is central for the space weather community. Quantification of the performance helps the users of the modeling products to better understand the capabilities of the models and to choose the approach that best suits their specific needs. Further, metrics-based analyses are important for addressing the differences between various modeling approaches and for measuring and guiding the progress in the field. In this paper, the metrics-based results of the ground magnetic field perturbation part of the Geospace Environment Modeling 2008-2009 Challenge are reported. Predictions made by 14 different models, including an ensemble model, are compared to geomagnetic observatory recordings from 12 different northern hemispheric locations. Five different metrics are used to quantify the model performances for four storm events. It is shown that the ranking of the models is strongly dependent on the type of metric used to evaluate the model performance. None of the models ranks near or at the top systematically for all of the metrics used. Consequently, one cannot pick an absolute winner: the choice of the best model depends on the characteristics of the signal one is interested in. Model performances also vary from event to event. This is particularly clear for root-mean-square difference and utility metric-based analyses. Further, the analyses indicate that for some of the models, increasing the spatial resolution of the global magnetohydrodynamic model and including ring current dynamics improve the models' capability to generate more realistic ground magnetic field fluctuations.
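A minimal sketch of two of the metric types used in such comparisons: root-mean-square difference and linear correlation between modelled and observed ground magnetic perturbation series. The arrays are illustrative.

```python
import numpy as np

def rms_difference(model, obs):
    """Root-mean-square difference between modelled and observed series."""
    return np.sqrt(np.mean((model - obs) ** 2))

def correlation(model, obs):
    """Pearson correlation between modelled and observed series."""
    return np.corrcoef(model, obs)[0, 1]

obs = np.array([5.0, 40.0, 120.0, 60.0, 10.0])     # perturbations, arbitrary units
model = np.array([8.0, 30.0, 100.0, 75.0, 20.0])
print(rms_difference(model, obs), correlation(model, obs))
```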
He, Gu; Qiu, Minghua; Li, Rui; Ouyang, Liang; Wu, Fengbo; Song, Xiangrong; Cheng, Li; Xiang, Mingli; Yu, Luoting
2012-06-01
Aurora-A has been known as one of the most important targets for cancer therapy, and some Aurora-A inhibitors have entered clinical trials. In this study, a combination of ligand-based and structure-based methods is used to clarify the essential quantitative structure-activity relationship of known Aurora-A inhibitors, and a multicomplex-based pharmacophore-guided method has been suggested to generate a comprehensive pharmacophore of Aurora-A kinase based on a collection of crystal structures of Aurora-A-inhibitor complexes. This model has been successfully used to identify the bioactive conformation and align 37 structurally diverse N-substituted 2'-(aminoaryl)benzothiazole derivatives. The quantitative structure-activity relationship analyses have been performed on these Aurora-A inhibitors based on the multicomplex-based pharmacophore-guided alignment. These results may provide important information for further design and virtual screening of novel Aurora-A inhibitors. © 2012 John Wiley & Sons A/S.
SYN-JEM: A Quantitative Job-Exposure Matrix for Five Lung Carcinogens.
Peters, Susan; Vermeulen, Roel; Portengen, Lützen; Olsson, Ann; Kendzia, Benjamin; Vincent, Raymond; Savary, Barbara; Lavoué, Jérôme; Cavallo, Domenico; Cattaneo, Andrea; Mirabelli, Dario; Plato, Nils; Fevotte, Joelle; Pesch, Beate; Brüning, Thomas; Straif, Kurt; Kromhout, Hans
2016-08-01
The use of measurement data in occupational exposure assessment allows more quantitative analyses of possible exposure-response relations. We describe a quantitative exposure assessment approach for five lung carcinogens (i.e. asbestos, chromium-VI, nickel, polycyclic aromatic hydrocarbons (by its proxy benzo(a)pyrene (BaP)) and respirable crystalline silica). A quantitative job-exposure matrix (JEM) was developed based on statistical modeling of large quantities of personal measurements. Empirical linear models were developed using personal occupational exposure measurements (n = 102306) from Europe and Canada, as well as auxiliary information like job (industry), year of sampling, region, an a priori exposure rating of each job (none, low, and high exposed), sampling and analytical methods, and sampling duration. The model outcomes were used to create a JEM with a quantitative estimate of the level of exposure by job, year, and region. Decreasing time trends were observed for all agents between the 1970s and 2009, ranging from -1.2% per year for personal BaP and nickel exposures to -10.7% for asbestos (in the time period before an asbestos ban was implemented). Regional differences in exposure concentrations (adjusted for measured jobs, years of measurement, and sampling method and duration) varied by agent, ranging from a factor 3.3 for chromium-VI up to a factor 10.5 for asbestos. We estimated time-, job-, and region-specific exposure levels for four (asbestos, chromium-VI, nickel, and RCS) out of five considered lung carcinogens. Through statistical modeling of large amounts of personal occupational exposure measurement data we were able to derive a quantitative JEM to be used in community-based studies. © The Author 2016. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
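A minimal sketch of how a JEM of this form returns an exposure estimate: a job-specific baseline is modified by a region factor and a multiplicative annual time trend, with levels held constant before the assumed ceiling year. The function and numbers are placeholders, not SYN-JEM values.

```python
def jem_exposure(baseline, year, region_factor, trend_per_year, ceiling_year=1960):
    """Exposure level for a given job/region/year under a log-linear time trend
    that is held constant before the ceiling year."""
    effective_year = max(year, ceiling_year)
    return baseline * region_factor * (1.0 + trend_per_year) ** (effective_year - ceiling_year)

# Hypothetical job: baseline 0.10 mg/m3 in 1960, -5% per year, region factor 1.5.
print(jem_exposure(0.10, year=1985, region_factor=1.5, trend_per_year=-0.05))
```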
2012-01-01
We compared the reproducibility of multiple reaction monitoring (MRM) mass spectrometry-based peptide quantitation in tryptic digests from formalin-fixed, paraffin-embedded (FFPE) and frozen clear cell renal cell carcinoma tissues. The analyses targeted a candidate set of 114 peptides previously identified in shotgun proteomic analyses, of which 104 were detectable in FFPE and frozen tissue. Although signal intensities for MRM of peptides from FFPE tissue were on average 66% of those in frozen tissue, median coefficients of variation (CV) for measurements in FFPE and frozen tissues were nearly identical (18–20%). Measurements of lysine C-terminal peptides and arginine C-terminal peptides from FFPE tissue were similarly reproducible (19.5% and 18.3% median CV, respectively). We further evaluated the precision of MRM-based quantitation by analysis of peptides from the Her2 receptor in FFPE and frozen tissues from a Her2 overexpressing mouse xenograft model of breast cancer and in human FFPE breast cancer specimens. We obtained equivalent MRM measurements of HER2 receptor levels in FFPE and frozen mouse xenografts derived from HER2-overexpressing BT474 cells and HER2-negative Sum159 cells. MRM analyses of 5 HER2-positive and 5 HER-negative human FFPE breast tumors confirmed the results of immunohistochemical analyses, thus demonstrating the feasibility of HER2 protein quantification in FFPE tissue specimens. The data demonstrate that MRM analyses can be performed with equal precision on FFPE and frozen tissues and that lysine-containing peptides can be selected for quantitative comparisons, despite the greater impact of formalin fixation on lysine residues. The data further illustrate the feasibility of applying MRM to quantify clinically important tissue biomarkers in FFPE specimens. PMID:22530795
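A minimal sketch of the reproducibility summary used above: per-peptide coefficients of variation across replicate MRM measurements, summarised by their median. The peak areas below are invented for illustration.

```python
import numpy as np

def median_cv(peak_areas):
    """peak_areas: (n_peptides, n_replicates) MRM peak areas.
    Returns the median per-peptide coefficient of variation (%)."""
    cv = 100.0 * peak_areas.std(axis=1, ddof=1) / peak_areas.mean(axis=1)
    return np.median(cv)

areas = np.array([[1.0e6, 1.2e6, 0.9e6],
                  [4.5e5, 5.0e5, 4.8e5],
                  [2.1e6, 1.8e6, 2.3e6]])
print(median_cv(areas))
```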
Elsawah, Sondoss; Guillaume, Joseph H A; Filatova, Tatiana; Rook, Josefine; Jakeman, Anthony J
2015-03-15
This paper aims to contribute to developing better ways of incorporating essential human elements in decision making processes for modelling of complex socio-ecological systems. It presents a step-wise methodology for integrating perceptions of stakeholders (qualitative) into formal simulation models (quantitative) with the ultimate goal of improving understanding and communication about decision making in complex socio-ecological systems. The methodology integrates cognitive mapping and agent-based modelling. It cascades through a sequence of qualitative/soft and numerical methods comprising: (1) Interviews to elicit mental models; (2) Cognitive maps to represent and analyse individual and group mental models; (3) Time-sequence diagrams to chronologically structure the decision making process; (4) An all-encompassing conceptual model of decision making; and (5) A computational (in this case agent-based) model. We apply the proposed methodology (labelled ICTAM) in a case study of viticulture irrigation in South Australia. Finally, we use strengths-weaknesses-opportunities-threats (SWOT) analysis to reflect on the methodology. Results show that the methodology leverages the use of cognitive mapping to capture the richness of decision making and mental models, and provides a combination of divergent and convergent analysis methods leading to the construction of an agent-based model. Copyright © 2014 Elsevier Ltd. All rights reserved.
Primdahl, Jørgen; Vesterager, Jens Peter; Finn, John A; Vlahos, George; Kristensen, Lone; Vejre, Henrik
2010-06-01
Agri-Environment Schemes (AES) to maintain or promote environmentally-friendly farming practices were implemented on about 25% of all agricultural land in the EU by 2002. This article analyses and discusses the actual and potential use of impact models in supporting the design, implementation and evaluation of AES. Impact models identify and establish the causal relationships between policy objectives and policy outcomes. We review and discuss the role of impact models at different stages in the AES policy process, and present results from a survey of impact models underlying 60 agri-environmental schemes in seven EU member states. We distinguished among three categories of impact models (quantitative, qualitative or common sense), depending on the degree of evidence in the formal scheme description, additional documents, or key person interviews. The categories of impact models used mainly depended on whether scheme objectives were related to natural resources, biodiversity or landscape. A higher proportion of schemes dealing with natural resources (primarily water) were based on quantitative impact models, compared to those concerned with biodiversity or landscape. Schemes explicitly targeted either on particular parts of individual farms or specific areas tended to be based more on quantitative impact models compared to whole-farm schemes and broad, horizontal schemes. We conclude that increased and better use of impact models has significant potential to improve efficiency and effectiveness of AES. (c) 2009 Elsevier Ltd. All rights reserved.
Unnikrishnan, Ginu U.; Morgan, Elise F.
2011-01-01
Inaccuracies in the estimation of material properties and errors in the assignment of these properties into finite element models limit the reliability, accuracy, and precision of quantitative computed tomography (QCT)-based finite element analyses of the vertebra. In this work, a new mesh-independent, material mapping procedure was developed to improve the quality of predictions of vertebral mechanical behavior from QCT-based finite element models. In this procedure, an intermediate step, called the material block model, was introduced to determine the distribution of material properties based on bone mineral density, and these properties were then mapped onto the finite element mesh. A sensitivity study was first conducted on a calibration phantom to understand the influence of the size of the material blocks on the computed bone mineral density. It was observed that varying the material block size produced only marginal changes in the predictions of mineral density. Finite element (FE) analyses were then conducted on a square column-shaped region of the vertebra and also on the entire vertebra in order to study the effect of material block size on the FE-derived outcomes. The predicted values of stiffness for the column and the vertebra decreased with decreasing block size. When these results were compared to those of a mesh convergence analysis, it was found that the influence of element size on vertebral stiffness was less than that of the material block size. This mapping procedure allows the material properties in a finite element study to be determined based on the block size required for an accurate representation of the material field, while the size of the finite elements can be selected independently and based on the required numerical accuracy of the finite element solution. The mesh-independent, material mapping procedure developed in this study could be particularly helpful in improving the accuracy of finite element analyses of vertebroplasty and spine metastases, as these analyses typically require mesh refinement at the interfaces between distinct materials. Moreover, the mapping procedure is not specific to the vertebra and could thus be applied to many other anatomic sites. PMID:21823740
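A highly simplified sketch of the block-based material mapping idea follows: voxel densities are averaged within coarse material blocks, and each finite element is then assigned a modulus from the block containing its centroid. The density-modulus power law and its coefficients are illustrative placeholders, not the relationship used in the study.

```python
import numpy as np

def block_average_density(qct_values, voxel_xyz, block_size):
    """Average voxel densities (g/cm^3) within cubic material blocks."""
    keys = np.floor(voxel_xyz / block_size).astype(int)
    blocks = {}
    for key, rho in zip(map(tuple, keys), qct_values):
        blocks.setdefault(key, []).append(rho)
    return {k: np.mean(v) for k, v in blocks.items()}

def map_modulus_to_elements(centroids, blocks, block_size, a=8.0e3, b=1.5):
    """Assign each element E = a * rho^b (MPa) from its enclosing block.

    The power-law coefficients a and b are placeholders; each element
    centroid is assumed to lie inside a block that contains QCT voxels.
    """
    keys = map(tuple, np.floor(centroids / block_size).astype(int))
    return np.array([a * blocks[k] ** b for k in keys])
```

The point of the intermediate block structure is that `block_size` can be chosen to represent the density field accurately, while the finite element size is chosen separately for numerical accuracy.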
Modelling Ebola virus dynamics: Implications for therapy.
Martyushev, Alexey; Nakaoka, Shinji; Sato, Kei; Noda, Takeshi; Iwami, Shingo
2016-11-01
Ebola virus (EBOV) causes a severe, often fatal Ebola virus disease (EVD), for which no approved antivirals exist. Recently, some promising anti-EBOV drugs, which are experimentally potent in animal models, have been developed. However, because the quantitative dynamics of EBOV replication in humans is uncertain, it remains unclear how much antiviral suppression of viral replication affects EVD outcome in patients. Here, we developed a novel mathematical model to quantitatively analyse human viral load data obtained during the 2000/01 Uganda EBOV outbreak and evaluated the effects of different antivirals. We found that nucleoside analogue- and siRNA-based therapies are effective if a therapy with a >50% inhibition rate is initiated within a few days post-symptom-onset. In contrast, antibody-based therapy requires not only a higher inhibition rate but also an earlier administration, especially for otherwise fatal cases. Our results demonstrate that an appropriate choice of EBOV-specific drugs is required for effective EVD treatment. Copyright © 2016 Elsevier B.V. All rights reserved.
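The abstract does not reproduce the model equations, so as an illustration here is a generic target-cell-limited viral dynamics model in which an antiviral with inhibition rate eps blocks virion production from a given start time. Parameter values and the therapy assumptions are invented for demonstration and are not the fitted values from the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic target-cell-limited model: target cells T, infected cells I, virus V.
beta, delta, p, c = 3e-8, 0.5, 100.0, 5.0   # illustrative parameters

def ebov_model(t, y, eps, t_start):
    T, I, V = y
    inhibition = eps if t >= t_start else 0.0   # therapy reduces viral production
    dT = -beta * T * V
    dI = beta * T * V - delta * I
    dV = (1.0 - inhibition) * p * I - c * V
    return [dT, dI, dV]

y0 = [1e7, 0.0, 1.0]                           # initial cells and inoculum
sol = solve_ivp(ebov_model, (0, 30), y0, args=(0.6, 3.0))  # 60% inhibition from day 3
print("Peak log10 viral load:", np.log10(sol.y[2].max()))
```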
Hou, Zhifei; Sun, Guoxiang; Guo, Yong
2016-01-01
The present study demonstrated the use of the Linear Quantitative Profiling Method (LQPM) to evaluate the quality of Alkaloids of Sophora flavescens (ASF) based on chromatographic fingerprints in an accurate, economical and fast way. Both linear qualitative and quantitative similarities were calculated in order to monitor the consistency of the samples. The results indicate that the linear qualitative similarity (LQLS) is not sufficiently discriminating due to the predominant presence of three alkaloid compounds (matrine, sophoridine and oxymatrine) in the test samples; however, the linear quantitative similarity (LQTS) was shown to clearly discriminate the samples based on the difference in the quantitative content of all the chemical components. In addition, the fingerprint analysis was also supported by the quantitative analysis of three marker compounds. The LQTS was found to be highly correlated with the contents of the marker compounds, indicating that quantitative analysis of the marker compounds may be substituted with the LQPM based on the chromatographic fingerprints for the purpose of quantifying all chemicals of a complex sample system. Furthermore, once a reference fingerprint (RFP) has been developed from a standard preparation by direct detection and the composition similarities have been calculated, the LQPM can employ a classical mathematical model to quantify the multiple components of ASF samples effectively without any chemical standards. PMID:27529425
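One plausible reading of the qualitative-versus-quantitative similarity distinction is sketched below: a shape-only (cosine-type) similarity between fingerprint vectors, and a content-weighted similarity that additionally reflects the ratio of total peak area. These formulas are illustrative and not necessarily the exact LQPM equations; the peak areas are invented.

```python
import numpy as np

def qualitative_similarity(sample, reference):
    """Cosine similarity between fingerprint vectors (shape information only)."""
    return float(np.dot(sample, reference) /
                 (np.linalg.norm(sample) * np.linalg.norm(reference)))

def quantitative_similarity(sample, reference):
    """Scale the shape similarity by the ratio of total chromatographic content."""
    ratio = sample.sum() / reference.sum()
    return qualitative_similarity(sample, reference) * ratio

reference = np.array([12.0, 30.0, 55.0, 3.0])   # peak areas of a reference fingerprint
sample = np.array([10.0, 28.0, 50.0, 2.5])
print(quantitative_similarity(sample, reference))
```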
Code of Federal Regulations, 2011 CFR
2011-07-01
... (“Localized CO, PM10, and PM2.5 violations”) must be based on quantitative analysis using the applicable air... § 93.116 may be based on either: (i) Quantitative methods that represent reasonable and common... hot-spot analyses. (1) The hot-spot demonstration required by § 93.116 must be based on quantitative...
Code of Federal Regulations, 2013 CFR
2013-07-01
... (“Localized CO, PM10, and PM2.5 violations”) must be based on quantitative analysis using the applicable air... § 93.116 may be based on either: (i) Quantitative methods that represent reasonable and common... hot-spot analyses. (1) The hot-spot demonstration required by § 93.116 must be based on quantitative...
Code of Federal Regulations, 2012 CFR
2012-07-01
... (“Localized CO, PM10, and PM2.5 violations”) must be based on quantitative analysis using the applicable air... § 93.116 may be based on either: (i) Quantitative methods that represent reasonable and common... hot-spot analyses. (1) The hot-spot demonstration required by § 93.116 must be based on quantitative...
Code of Federal Regulations, 2010 CFR
2010-07-01
... (“Localized CO, PM10, and PM2.5 violations”) must be based on quantitative analysis using the applicable air... § 93.116 may be based on either: (i) Quantitative methods that represent reasonable and common... hot-spot analyses. (1) The hot-spot demonstration required by § 93.116 must be based on quantitative...
ERIC Educational Resources Information Center
Brückner, Sebastian; Pellegrino, James W.
2016-01-01
The Standards for Educational and Psychological Testing indicate that validation of assessments should include analyses of participants' response processes. However, such analyses typically are conducted only to supplement quantitative field studies with qualitative data, and seldom are such data connected to quantitative data on student or item…
Naik, P K; Singh, T; Singh, H
2009-07-01
Quantitative structure-activity relationship (QSAR) analyses were performed independently on data sets belonging to two groups of insecticides, namely the organophosphates and carbamates. Several types of descriptors including topological, spatial, thermodynamic, information content, lead likeness and E-state indices were used to derive quantitative relationships between insecticide activities and structural properties of chemicals. A systematic search approach based on missing value, zero value, simple correlation and multi-collinearity tests as well as the use of a genetic algorithm allowed the optimal selection of the descriptors used to generate the models. The QSAR models developed for both organophosphate and carbamate groups revealed good predictability with r(2) values of 0.949 and 0.838 as well as [image omitted] values of 0.890 and 0.765, respectively. In addition, a linear correlation was observed between the predicted and experimental LD(50) values for the test set data with r(2) of 0.871 and 0.788 for both the organophosphate and carbamate groups, indicating that the prediction accuracy of the QSAR models was acceptable. The models were also tested successfully from external validation criteria. QSAR models developed in this study should help further design of novel potent insecticides.
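A minimal sketch of the kind of QSAR workflow summarised above is given below: a multiple linear regression on molecular descriptors, with an internal r2 and a leave-one-out cross-validated q2. The descriptors and activities are random placeholders, and descriptor selection (the genetic-algorithm step) is omitted.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 5))                 # placeholder molecular descriptors
y = X @ np.array([0.8, -0.5, 0.3, 0.0, 0.1]) + rng.normal(scale=0.3, size=40)

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)                       # fit quality on the training set

# Leave-one-out cross-validated q2, a common internal validation metric in QSAR.
y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
q2 = 1.0 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"r2 = {r2:.3f}, LOO q2 = {q2:.3f}")
```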
Dolled-Filhart, Marisa P; Gustavson, Mark D
2012-11-01
Translational oncology has been improved by using tissue microarrays (TMAs), which facilitate biomarker analysis of large cohorts on a single slide. This has allowed for rapid analysis and validation of potential biomarkers for prognostic and predictive value, as well as for evaluation of biomarker prevalence. Coupled with quantitative analysis of immunohistochemical (IHC) staining, objective and standardized biomarker data from tumor samples can further advance companion diagnostic approaches for the identification of drug-responsive or resistant patient subpopulations. This review covers the advantages, disadvantages and applications of TMAs for biomarker research. Research literature and reviews of TMAs and quantitative image analysis methodology have been surveyed for this review (with an AQUA® analysis focus). Applications such as multi-marker diagnostic development and pathway-based biomarker subpopulation analyses are described. Tissue microarrays are a useful tool for biomarker analyses including prevalence surveys, disease progression assessment and addressing potential prognostic or predictive value. By combining quantitative image analysis with TMAs, analyses will be more objective and reproducible, allowing for more robust IHC-based diagnostic test development. Quantitative multi-biomarker IHC diagnostic tests that can predict drug response will allow for greater success of clinical trials for targeted therapies and provide more personalized clinical decision making.
2015-01-01
Changes in glycosylation have been shown to have a profound correlation with development/malignancy in many cancer types. Currently, two major enrichment techniques have been widely applied in glycoproteomics, namely, lectin affinity chromatography (LAC)-based and hydrazide chemistry (HC)-based enrichments. Here we report the LC–MS/MS quantitative analyses of human blood serum glycoproteins and glycopeptides associated with esophageal diseases by LAC- and HC-based enrichment. The separate and complementary qualitative and quantitative data analyses of protein glycosylation were performed using both enrichment techniques. Chemometric and statistical evaluations (PCA plots and ANOVA tests, respectively) were employed to determine and confirm candidate cancer-associated glycoprotein/glycopeptide biomarkers. Of 139 glycoproteins, 59 (a 42% overlap) were observed with both enrichment techniques. This overlap is very similar to that in previously published studies. The quantitation and evaluation of significantly changed glycoproteins/glycopeptides are complementary between LAC and HC enrichments. LC–ESI–MS/MS analyses indicated that 7 glycoproteins enriched by LAC and 11 glycoproteins enriched by HC showed significantly different abundances between disease-free and disease cohorts. Multiple reaction monitoring quantitation showed 13 glycopeptides from LAC enrichment and 10 glycosylation sites from HC enrichment to be statistically different among disease cohorts. PMID:25134008
Airborne electromagnetic mapping of the base of aquifer in areas of western Nebraska
Abraham, Jared D.; Cannia, James C.; Bedrosian, Paul A.; Johnson, Michaela R.; Ball, Lyndsay B.; Sibray, Steven S.
2012-01-01
Airborne geophysical surveys of selected areas of the North and South Platte River valleys of Nebraska, including Lodgepole Creek valley, collected data to map aquifers and bedrock topography and thus improve the understanding of groundwater - surface-water relationships to be used in water-management decisions. Frequency-domain helicopter electromagnetic surveys, using a unique survey flight-line design, collected resistivity data that can be related to lithologic information for refinement of groundwater model inputs. To make the geophysical data useful to multidimensional groundwater models, numerical inversion converted measured data into a depth-dependent subsurface resistivity model. The inverted resistivity model, along with sensitivity analyses and test-hole information, is used to identify hydrogeologic features such as bedrock highs and paleochannels, to improve estimates of groundwater storage. The two- and three-dimensional interpretations provide the groundwater modeler with a high-resolution hydrogeologic framework and a quantitative estimate of framework uncertainty. The new hydrogeologic frameworks improve understanding of the flow-path orientation by refining the location of paleochannels and associated base of aquifer highs. These interpretations provide resource managers high-resolution hydrogeologic frameworks and quantitative estimates of framework uncertainty. The improved base of aquifer configuration represents the hydrogeology at a level of detail not achievable with previously available data.
NASA Astrophysics Data System (ADS)
Schwarz, W.; Schwub, S.; Quering, K.; Wiedmann, D.; Höppel, H. W.; Göken, M.
2011-09-01
During their operational life-time, actively cooled liners of cryogenic combustion chambers are known to exhibit a characteristic so-called doghouse deformation, followed by the formation of axial cracks. The present work aims at developing a model that quantitatively accounts for this failure mechanism. High-temperature material behaviour is characterised in a test programme and it is shown that stress relaxation, strain rate dependence, isotropic and kinematic hardening as well as material ageing have to be taken into account in the model formulation. From fracture surface analyses of a thrust chamber it is concluded that the failure mode of the hot wall ligament at the tip of the doghouse is related to ductile rupture. A material model is proposed that captures all stated effects. Building on the concept of continuum damage mechanics, the model is further extended to incorporate softening effects due to material degradation. The model is assessed on experimental data and quantitative agreement is established for all tests available. A 3D finite element thermo-mechanical analysis is performed on a representative thrust chamber applying the developed material-damage model. The simulation successfully captures the observed progressive thinning of the hot wall and quantitatively reproduces the doghouse deformation.
Witzke, Kathrin E; Rosowski, Kristin; Müller, Christian; Ahrens, Maike; Eisenacher, Martin; Megger, Dominik A; Knobloch, Jürgen; Koch, Andrea; Bracht, Thilo; Sitek, Barbara
2017-01-06
Quantitative secretome analyses are a high-performance tool for the discovery of physiological and pathophysiological changes in cellular processes. However, serum supplements in cell culture media limit secretome analyses, but serum depletion often leads to cell starvation and consequently biased results. To overcome these limiting factors, we investigated a model of T cell activation (Jurkat cells) and performed an approach for the selective enrichment of secreted proteins from conditioned medium utilizing metabolic marking of newly synthesized glycoproteins. Marked glycoproteins were labeled via bioorthogonal click chemistry and isolated by affinity purification. We assessed two labeling compounds conjugated with either biotin or desthiobiotin and the respective secretome fractions. 356 proteins were quantified using the biotin probe and 463 using desthiobiotin. 59 proteins were found differentially abundant (adjusted p-value ≤0.05, absolute fold change ≥1.5) between inactive and activated T cells using the biotin method and 86 using the desthiobiotin approach, with 31 mutual proteins cross-verified by independent experiments. Moreover, we analyzed the cellular proteome of the same model to demonstrate the benefit of secretome analyses and provide comprehensive data sets of both. 336 proteins (61.3%) were quantified exclusively in the secretome. Data are available via ProteomeXchange with identifier PXD004280.
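A minimal sketch of the differential-abundance filter stated in the abstract (Benjamini-Hochberg-adjusted p-value <= 0.05 and absolute fold change >= 1.5) is shown below; the protein intensities are fabricated placeholders and the test used here (a t-test on log intensities) is one common choice, not necessarily the authors' statistical model.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
inactive = rng.lognormal(mean=10, sigma=0.3, size=(400, 3))    # placeholder intensities
activated = inactive * rng.lognormal(mean=0.0, sigma=0.4, size=(400, 3))

log2fc = np.log2(activated.mean(axis=1) / inactive.mean(axis=1))
pvals = stats.ttest_ind(np.log2(activated), np.log2(inactive), axis=1).pvalue
adj_p = multipletests(pvals, method="fdr_bh")[1]               # BH adjustment

# Criteria from the abstract: adjusted p <= 0.05 and absolute fold change >= 1.5.
hits = (adj_p <= 0.05) & (np.abs(log2fc) >= np.log2(1.5))
print(f"{hits.sum()} differentially abundant proteins")
```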
Bridging the divide: a model-data approach to Polar and Alpine microbiology.
Bradley, James A; Anesio, Alexandre M; Arndt, Sandra
2016-03-01
Advances in microbial ecology in the cryosphere continue to be driven by empirical approaches including field sampling and laboratory-based analyses. Although mathematical models are commonly used to investigate the physical dynamics of Polar and Alpine regions, they are rarely applied in microbial studies. Yet integrating modelling approaches with ongoing observational and laboratory-based work is ideally suited to Polar and Alpine microbial ecosystems given their harsh environmental and biogeochemical characteristics, simple trophic structures, distinct seasonality, often difficult accessibility, geographical expansiveness and susceptibility to accelerated climate changes. In this opinion paper, we explain how mathematical modelling ideally complements field and laboratory-based analyses. We thus argue that mathematical modelling is a powerful tool for the investigation of these extreme environments and that fully integrated, interdisciplinary model-data approaches could help the Polar and Alpine microbiology community address some of the great research challenges of the 21st century (e.g. assessing global significance and response to climate change). However, a better integration of field and laboratory work with model design and calibration/validation, as well as a stronger focus on quantitative information is required to advance models that can be used to make predictions and upscale processes and fluxes beyond what can be captured by observations alone. © FEMS 2016.
Bridging the divide: a model-data approach to Polar and Alpine microbiology
Bradley, James A.; Anesio, Alexandre M.; Arndt, Sandra
2016-01-01
Advances in microbial ecology in the cryosphere continue to be driven by empirical approaches including field sampling and laboratory-based analyses. Although mathematical models are commonly used to investigate the physical dynamics of Polar and Alpine regions, they are rarely applied in microbial studies. Yet integrating modelling approaches with ongoing observational and laboratory-based work is ideally suited to Polar and Alpine microbial ecosystems given their harsh environmental and biogeochemical characteristics, simple trophic structures, distinct seasonality, often difficult accessibility, geographical expansiveness and susceptibility to accelerated climate changes. In this opinion paper, we explain how mathematical modelling ideally complements field and laboratory-based analyses. We thus argue that mathematical modelling is a powerful tool for the investigation of these extreme environments and that fully integrated, interdisciplinary model-data approaches could help the Polar and Alpine microbiology community address some of the great research challenges of the 21st century (e.g. assessing global significance and response to climate change). However, a better integration of field and laboratory work with model design and calibration/validation, as well as a stronger focus on quantitative information is required to advance models that can be used to make predictions and upscale processes and fluxes beyond what can be captured by observations alone. PMID:26832206
A Bayesian network model for predicting type 2 diabetes risk based on electronic health records
NASA Astrophysics Data System (ADS)
Xie, Jiang; Liu, Yan; Zeng, Xu; Zhang, Wu; Mei, Zhen
2017-07-01
An extensive, in-depth study of diabetes risk factors (DBRF) is of crucial importance to prevent (or reduce) the chance of suffering from type 2 diabetes (T2D). Accumulation of electronic health records (EHRs) makes it possible to build nonlinear relationships between risk factors and diabetes. However, current DBRF research mainly focuses on qualitative analyses, and inconsistencies among physical examination items mean that risk factors are easily lost, which motivates us to study novel machine learning approaches for risk model development. In this paper, we use Bayesian networks (BNs) to analyze the relationship between physical examination information and T2D, and to quantify the link between risk factors and T2D. Furthermore, with the quantitative analyses of DBRF, we use EHRs and propose a machine learning approach based on BNs to predict the risk of T2D. The experiments demonstrate that our approach can lead to better predictive performance than the classical risk model.
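The abstract does not give the network structure, so as a toy illustration the sketch below estimates P(T2D | risk factors) under a naive-Bayes structure, which is the simplest possible Bayesian network (all risk factors conditionally independent given the outcome). The table, field names, and discretisation are hypothetical and far smaller than a real EHR cohort.

```python
import pandas as pd

# Toy EHR-style table with hypothetical, already-discretised risk factors.
df = pd.DataFrame({
    "t2d":      [0, 0, 0, 1, 1, 0, 1, 1, 0, 1],
    "bmi_high": [0, 0, 1, 1, 1, 0, 1, 0, 0, 1],
    "glu_high": [0, 0, 0, 1, 1, 0, 1, 1, 0, 1],
})

def predict_t2d(evidence, data, smoothing=1.0):
    """P(T2D = 1 | evidence) under a naive-Bayes-structured Bayesian network."""
    scores = {}
    for t in (0, 1):
        subset = data[data["t2d"] == t]
        score = (len(subset) + smoothing) / (len(data) + 2 * smoothing)      # prior
        for factor, value in evidence.items():                               # CPTs
            score *= ((subset[factor] == value).sum() + smoothing) / (len(subset) + 2 * smoothing)
        scores[t] = score
    return scores[1] / (scores[0] + scores[1])

print(predict_t2d({"bmi_high": 1, "glu_high": 1}, df))
```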
Knowledge-based modelling of historical surfaces using lidar data
NASA Astrophysics Data System (ADS)
Höfler, Veit; Wessollek, Christine; Karrasch, Pierre
2016-10-01
Currently, digital elevation models are mainly used in archaeological studies, especially in the form of shaded reliefs, for the prospection of archaeological sites. Hesse (2010) provides a supporting software tool for the determination of local relief models during prospection using LiDAR scans; the search for relicts from WW2 is also a focus of his research. In James et al. (2006) contour lines were used to reconstruct the locations of archaeological artefacts such as buildings. This study goes further and presents an innovative workflow for determining historical high-resolution terrain surfaces using recent high-resolution terrain models and sedimentological expert knowledge. Based on archaeological field studies (Franconian Saale near Bad Neustadt in Germany), the sedimentological analyses show that archaeologically interesting horizons and geomorphological expert knowledge, in combination with particle size analyses (Koehn, DIN ISO 11277), are useful components for reconstructing surfaces of the early Middle Ages. The paper also traces how additional information extracted from a recent digital terrain model can be used to support the determination of historical surfaces. Conceptually, this research is based on the methodology of geomorphometry and geostatistics. The basic idea is that the workflow branches according to the input data: one branch handles the quantitative data and the other processes the qualitative data. The quantitative data are thus available for further processing first and are later combined with the qualitative data to convert them into historical heights. In the final stage of the workflow all gathered information is stored in a large data matrix for spatial interpolation using the geostatistical method of Kriging. Besides the historical surface, the algorithm also provides a first estimate of the accuracy of the modelling. The presented workflow is characterized by high flexibility and the opportunity to include newly available data in the process at any time.
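As a pointer to the final interpolation step, the sketch below performs an ordinary-Kriging interpolation of point estimates of historical surface height using the pykrige package as one possible implementation; the coordinates, heights, and variogram choice are invented, and the kriging variance is used as a first, rough accuracy estimate in the spirit of the abstract.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Invented point estimates of the historical surface height (m a.s.l.).
x = np.array([10.0, 40.0, 75.0, 120.0, 160.0])
y = np.array([20.0, 80.0, 35.0, 150.0, 90.0])
z = np.array([212.4, 213.1, 212.8, 214.0, 213.5])

ok = OrdinaryKriging(x, y, z, variogram_model="spherical")
grid_x = np.arange(0.0, 200.0, 10.0)
grid_y = np.arange(0.0, 200.0, 10.0)

# z_hat is the interpolated historical surface, ss the kriging variance.
z_hat, ss = ok.execute("grid", grid_x, grid_y)
print(z_hat.shape, float(ss.max()))
```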
Dulin-Keita, Akilah; Clay, Olivio; Whittaker, Shannon; Hannon, Lonnie; Adams, Ingrid K; Rogers, Michelle; Gans, Kim
2015-08-01
This study uses a mixed methods approach to 1) identify surrounding residents' perceived expectations for Housing Opportunities for People Everywhere (HOPE VI) policy on physical activity outcomes and to 2) quantitatively examine the odds of neighborhood-based physical activity pre-/post-HOPE VI in a low socioeconomic status, predominantly African American community in Birmingham, Alabama. To address aim one, we used group concept mapping which is a structured approach for data collection and analyses that produces pictures/maps of ideas. Fifty-eight residents developed statements about potential influences of HOPE VI on neighborhood-based physical activity. In the quantitative study, we examined whether these potential influences increased the odds of neighborhood walking/jogging. We computed block entry logistic regression models with a larger cohort of residents at baseline (n = 184) and six-months (n = 142, 77% retention; n = 120 for all informative variables). We examined perceived neighborhood disorder (perceived neighborhood disorder scale), walkability and aesthetics (Neighborhood Environment Walkability Scale) and HOPE VI-related community safety and safety for physical activity as predictors. During concept mapping, residents generated statements that clustered into three distinct concepts, "Increased Leisure Physical Activity," "Safe Play Areas," and "Generating Health Promoting Resources." The quantitative analyses indicated that changes in neighborhood walkability increased the odds of neighborhood-based physical activity (p = 0.04). When HOPE VI-related safety for physical activity was entered into the model, it was associated with increased odds of physical activity (p = 0.04). Walkability was no longer statistically significant. These results suggest that housing policies that create walkable neighborhoods and that improve perceptions of safety for physical activity may increase neighborhood-based physical activity. However, the longer term impacts of neighborhood-level policies on physical activity require more longitudinal evidence to determine whether increased participation in physical activity is sustained. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hu, Kun; Zhu, Qi-zhi; Chen, Liang; Shao, Jian-fu; Liu, Jian
2018-06-01
As confining pressure increases, crystalline rocks of moderate porosity usually undergo a transition in failure mode from localized brittle fracture to diffused damage and ductile failure. This transition has been widely reported experimentally for several decades; however, satisfactory modeling is still lacking. The present paper aims at modeling the brittle-ductile transition process of rocks under conventional triaxial compression. Based on quantitative analyses of experimental results, a clear linear relationship is found between the axial inelastic strain at failure and the prescribed confining pressure. A micromechanics-based frictional damage model is then formulated using an associated plastic flow rule and a strain energy release rate-based damage criterion. The analytical solution to the strong plasticity-damage coupling problem is provided and applied to simulate the nonlinear mechanical behaviors of Tennessee marble, Indiana limestone and Jinping marble, each presenting a brittle-ductile transition in stress-strain curves.
NASA Astrophysics Data System (ADS)
Zangori, Laura; Forbes, Cory T.; Schwarz, Christina V.
2015-10-01
Opportunities to generate model-based explanations are crucial for elementary students, yet are rarely foregrounded in elementary science learning environments despite evidence that early learners can reason from models when provided with scaffolding. We used a quasi-experimental research design to investigate the comparative impact of a scaffold test condition consisting of embedded physical scaffolds within a curricular modeling task on third-grade (age 8-9) students' formulation of model-based explanations for the water cycle. This condition was contrasted with the control condition, in which third-grade students used a curricular modeling task with no embedded physical scaffolds. Students from each condition (n = 60 scaffolded; n = 56 unscaffolded) generated models of the water cycle before and after completion of a 10-week water unit. Results from quantitative analyses suggest that students in the scaffolded condition represented and linked more subsurface water process sequences with surface water process sequences than did students in the unscaffolded condition. However, results of qualitative analyses indicate that students in the scaffolded condition were less likely to build upon these process sequences to generate model-based explanations and experienced difficulties understanding their models as abstracted representations rather than recreations of real-world phenomena. We conclude that embedded curricular scaffolds may support students to consider non-observable components of the water cycle but, alone, may be insufficient for generation of model-based explanations about subsurface water movement.
Beratto, Angelo; Agurto, Cristian; Freer, Juanita; Peña-Farfal, Carlos; Troncoso, Nicolás; Agurto, Andrés; Castillo, Rosario Del P
2017-10-01
Brown algae biomass has been shown to be a highly important industrial source for the production of alginates and different nutraceutical products. The characterization of this biomass is necessary in order to allocate its use to specific applications according to the chemical and biological characteristics of this highly variable resource. The methods commonly used for algae characterization require a long time for the analysis and rigorous pretreatments of samples. In this work, nondestructive and fast analyses of different morphological structures from Lessonia spicata and Macrocystis pyrifera, which were collected during different seasons, were performed using Fourier transform infrared (FT-IR) techniques in combination with chemometric methods. Mid-infrared (IR) and near-infrared (NIR) spectral ranges were tested to evaluate the spectral differences between the species, seasons, and morphological structures of algae using a principal component analysis (PCA). Quantitative analyses of the polyphenol and alginate contents and the anti-oxidant capacity of the samples were performed using partial least squares (PLS) with both spectral ranges in order to build a predictive model for the rapid quantification of these parameters with industrial purposes. The PCA mainly showed differences in the samples based on seasonal sampling, where changes were observed in the bands corresponding to polysaccharides, proteins, and lipids. The obtained PLS models had high correlation coefficients (r) for the polyphenol content and anti-oxidant capacity (r > 0.9) and lower values for the alginate determination (0.7 < r < 0.8). Fourier transform infrared-based techniques were suitable tools for the rapid characterization of algae biomass, in which high variability in the samples was incorporated for the qualitative and quantitative analyses, and have the potential to be used on an industrial scale.
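A minimal sketch of the PLS calibration step described above follows: a partial least squares regression of a constituent (here labelled polyphenol content) against spectra, evaluated by a correlation coefficient. The spectra and reference values are simulated placeholders, and no preprocessing or cross-validation is included.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
spectra = rng.normal(size=(60, 500))                                   # simulated absorbance spectra
polyphenol = spectra[:, 100] * 2.0 + rng.normal(scale=0.1, size=60)    # toy reference values

pls = PLSRegression(n_components=5).fit(spectra, polyphenol)
predicted = pls.predict(spectra).ravel()

r, _ = pearsonr(polyphenol, predicted)   # correlation coefficient, as reported in the abstract
print(f"calibration r = {r:.3f}")
```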
Quantitative structure-activity relationship models that stand the test of time.
Davis, Andrew M; Wood, David J
2013-04-01
The pharmaceutical industry is in a period of intense change. While this has many drivers, attrition through the development process continues to be an important pressure. The emerging definitions of "compound quality" that are based on retrospective analyses of developmental attrition have highlighted a new direction for medicinal chemistry and the paradigm of "quality at the point of design". The time has come for retrospective analyses to catalyze prospective action. Quality at the point of design places pressure on the quality of our predictive models. Empirical QSAR models when built with care provide true predictive control, but their accuracy and precision can be improved. Here we describe AstraZeneca's experience of automation in QSAR model building and validation, and how an informatics system can provide a step-change in predictive power to project design teams, if they choose to use it.
Direct Measurements of the Convective Recycling of the Upper Troposphere
NASA Technical Reports Server (NTRS)
Bertram, Timothy H.; Perring, Anne E.; Wooldridge, Paul J.; Crounse, John D.; Kwan, Alan J.; Wennberg, Paul O.; Scheuer, Eric; Dibb, Jack; Avery, Melody; Sachse, Glen;
2007-01-01
We present a statistical representation of the aggregate effects of deep convection on the chemistry and dynamics of the Upper Troposphere (UT) based on direct aircraft observations of the chemical composition of the UT over the Eastern United States and Canada during summer. These measurements provide new and unique observational constraints on the chemistry occurring downwind of convection and the rate at which air in the UT is recycled, previously only the province of model analyses. These results provide quantitative measures that can be used to evaluate global climate and chemistry models.
Wang, Yi-Shan; Potts, Jonathan R
2017-03-07
Recent advances in animal tracking have allowed us to uncover the drivers of movement in unprecedented detail. This has enabled modellers to construct ever more realistic models of animal movement, which aid in uncovering detailed patterns of space use in animal populations. Partial differential equations (PDEs) provide a popular tool for mathematically analysing such models. However, their construction often relies on simplifying assumptions which may greatly affect the model outcomes. Here, we analyse the effect of various PDE approximations on the analysis of some simple movement models, including a biased random walk, central-place foraging processes and movement in heterogeneous landscapes. Perhaps the most commonly-used PDE method dates back to a seminal paper of Patlak from 1953. However, our results show that this can be a very poor approximation in even quite simple models. On the other hand, more recent methods, based on transport equation formalisms, can provide more accurate results, as long as the kernel describing the animal's movement is sufficiently smooth. When the movement kernel is not smooth, we show that both the older and newer methods can lead to quantitatively misleading results. Our detailed analysis will aid future researchers in the appropriate choice of PDE approximation for analysing models of animal movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
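To make the walk-to-PDE correspondence concrete, the sketch below simulates a one-dimensional biased random walk and compares its mean and variance with the drift and diffusion coefficients of the corresponding advection-diffusion equation. The mapping used (v = b*delta/tau, D = delta^2/(2*tau)) is the standard small-step limit, not a result specific to the paper.

```python
import numpy as np

tau, delta, bias = 1.0, 0.1, 0.2      # step time, step length, probability bias
n_walkers, n_steps = 10000, 500

rng = np.random.default_rng(4)
steps = np.where(rng.random((n_walkers, n_steps)) < 0.5 + bias / 2, delta, -delta)
x = steps.sum(axis=1)                  # final positions of all walkers

# Small-step (Patlak-type) limit: u_t = -v u_x + D u_xx
v = bias * delta / tau                 # advection speed
D = delta ** 2 / (2.0 * tau)           # diffusion coefficient
t = n_steps * tau

print("simulated mean, PDE prediction:", x.mean(), v * t)
print("simulated variance, PDE prediction:", x.var(), 2.0 * D * t)
```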
NASA Astrophysics Data System (ADS)
Dekiff, Markus; Kemper, Björn; Kröger, Elke; Denz, Cornelia; Dirksen, Dieter
2017-03-01
The mechanical loading of dental restorations and hard tissue is often investigated numerically. For validation and optimization of such simulations, comparisons with measured deformations are essential. We combine digital holographic interferometry and digital speckle photography for the determination of microscopic deformations with a photogrammetric method that is based on digital image correlation of a projected laser speckle pattern. This multimodal workstation allows the simultaneous acquisition of the specimen's macroscopic 3D shape and thus a quantitative comparison of measured deformations with simulation data. In order to demonstrate the feasibility of our system, two applications are presented: the quantitative determination of (1) the deformation of a mandible model due to mechanical loading of an inserted dental implant and of (2) the deformation of a (dental) bridge model under mechanical loading. The results were compared with data from finite element analyses of the investigated applications. The experimental results showed close agreement with those of the simulations.
Wolf, Louis; Scheffer-de Gooyert, Jolanda M.; Monedero, Ignacio; Torroja, Laura; Coromina, Lluis; van der Laak, Jeroen A. W. M.; Schenck, Annette
2016-01-01
The morphology of synapses is of central interest in neuroscience because of the intimate relation with synaptic efficacy. Two decades of gene manipulation studies in different animal models have revealed a repertoire of molecules that contribute to synapse development. However, since such studies often assessed only one, or at best a few, morphological features at a given synapse, it remained unaddressed how different structural aspects relate to one another. Furthermore, such focused and sometimes only qualitative approaches likely left many of the more subtle players unnoticed. Here, we present the image analysis algorithm ‘Drosophila_NMJ_Morphometrics’, available as a Fiji-compatible macro, for quantitative, accurate and objective synapse morphometry of the Drosophila larval neuromuscular junction (NMJ), a well-established glutamatergic model synapse. We developed this methodology for semi-automated multiparametric analyses of NMJ terminals immunolabeled for the commonly used markers Dlg1 and Brp and showed that it also works for Hrp, Csp and Syt. We demonstrate that gender, genetic background and identity of abdominal body segment consistently and significantly contribute to variability in our data, suggesting that controlling for these parameters is important to minimize variability in quantitative analyses. Correlation and principal component analyses (PCA) were performed to investigate which morphometric parameters are inter-dependent and which ones are regulated rather independently. Based on nine acquired parameters, we identified five morphometric groups: NMJ size, geometry, muscle size, number of NMJ islands and number of active zones. Based on our finding that the parameters of the first two principal components hardly correlated with each other, we suggest that different molecular processes underlie these two morphometric groups. Our study sets the stage for systems morphometry approaches at the well-studied Drosophila NMJ. PMID:26998933
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisk, William J.; Eliseeva, Ekaterina A.; Mendell, Mark J.
Dampness and mold have been shown in qualitative reviews to be associated with a variety of adverse respiratory health effects, including respiratory tract infections. Several published meta-analyses have provided quantitative summaries for some of these associations, but not for respiratory infections. Demonstrating a causal relationship between dampness-related agents, which are preventable exposures, and respiratory tract infections would suggest important new public health strategies. We report the results of quantitative meta-analyses of published studies that examined the association of dampness or mold in homes with respiratory infections and bronchitis. For primary studies meeting eligibility criteria, we transformed reported odds ratios (ORs) and confidence intervals (CIs) to the log scale. Both fixed and random effects models were applied to the log ORs and their variances. Most studies contained multiple estimated ORs. Models accounted for the correlation between multiple results within the studies analyzed. One set of analyses was performed with all eligible studies, and another set restricted to studies that controlled for age, gender, smoking, and socioeconomic status. Subgroups of studies were assessed to explore heterogeneity. Funnel plots were used to assess publication bias. The resulting summary estimates of ORs from random effects models based on all studies ranged from 1.38 to 1.50, with 95% CIs excluding the null in all cases. Use of different analysis models and restricting analyses based on control of multiple confounding variables changed findings only slightly. ORs (95% CIs) from random effects models using studies adjusting for major confounding variables were, for bronchitis, 1.45 (1.32-1.59); for respiratory infections, 1.44 (1.31-1.59); for respiratory infections excluding nonspecific upper respiratory infections, 1.50 (1.32-1.70), and for respiratory infections in children or infants, 1.48 (1.33-1.65). Little effect of publication bias was evident. Estimated attributable risk proportions ranged from 8% to 20%. Residential dampness and mold are associated with substantial and statistically significant increases in both respiratory infections and bronchitis. If these associations were confirmed as causal, effective control of dampness and mold in buildings would prevent a substantial proportion of respiratory infections.
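A compact sketch of the random-effects pooling step described above (DerSimonian-Laird) is given below: log ORs and their standard errors are reconstructed from reported confidence intervals, a between-study variance is estimated from Cochran's Q, and a pooled OR with its CI is produced. The study results listed are placeholders, not the primary-study values.

```python
import numpy as np

# Placeholder study results: (OR, lower 95% CI, upper 95% CI).
studies = [(1.6, 1.1, 2.3), (1.3, 0.9, 1.9), (1.5, 1.2, 1.9), (1.4, 1.0, 2.0)]

y = np.log([s[0] for s in studies])                             # log odds ratios
se = (np.log([s[2] for s in studies]) - np.log([s[1] for s in studies])) / (2 * 1.96)

w = 1.0 / se ** 2                                               # fixed-effect weights
q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)              # Cochran's Q
tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))

w_star = 1.0 / (se ** 2 + tau2)                                 # random-effects weights
pooled = np.sum(w_star * y) / w_star.sum()
se_pooled = np.sqrt(1.0 / w_star.sum())

print("Pooled OR (95% CI): %.2f (%.2f-%.2f)" %
      (np.exp(pooled), np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled)))
```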
Owens, Douglas K; Whitlock, Evelyn P; Henderson, Jillian; Pignone, Michael P; Krist, Alex H; Bibbins-Domingo, Kirsten; Curry, Susan J; Davidson, Karina W; Ebell, Mark; Gillman, Matthew W; Grossman, David C; Kemper, Alex R; Kurth, Ann E; Maciosek, Michael; Siu, Albert L; LeFevre, Michael L
2016-10-04
The U.S. Preventive Services Task Force (USPSTF) develops evidence-based recommendations about preventive care based on comprehensive systematic reviews of the best available evidence. Decision models provide a complementary, quantitative approach to support the USPSTF as it deliberates about the evidence and develops recommendations for clinical and policy use. This article describes the rationale for using modeling, an approach to selecting topics for modeling, and how modeling may inform recommendations about clinical preventive services. Decision modeling is useful when clinical questions remain about how to target an empirically established clinical preventive service at the individual or program level or when complex determinations of magnitude of net benefit, overall or among important subpopulations, are required. Before deciding whether to use decision modeling, the USPSTF assesses whether the benefits and harms of the preventive service have been established empirically, assesses whether there are key issues about applicability or implementation that modeling could address, and then defines the decision problem and key questions to address through modeling. Decision analyses conducted for the USPSTF are expected to follow best practices for modeling. For chosen topics, the USPSTF assesses the strengths and limitations of the systematically reviewed evidence and the modeling analyses and integrates the results of each to make preventive service recommendations.
The physical and biological basis of quantitative parameters derived from diffusion MRI
2012-01-01
Diffusion magnetic resonance imaging is a quantitative imaging technique that measures the underlying molecular diffusion of protons. Diffusion-weighted imaging (DWI) quantifies the apparent diffusion coefficient (ADC) which was first used to detect early ischemic stroke. However this does not take account of the directional dependence of diffusion seen in biological systems (anisotropy). Diffusion tensor imaging (DTI) provides a mathematical model of diffusion anisotropy and is widely used. Parameters, including fractional anisotropy (FA), mean diffusivity (MD), parallel and perpendicular diffusivity can be derived to provide sensitive, but non-specific, measures of altered tissue structure. They are typically assessed in clinical studies by voxel-based or region-of-interest based analyses. The increasing recognition of the limitations of the diffusion tensor model has led to more complex multi-compartment models such as CHARMED, AxCaliber or NODDI being developed to estimate microstructural parameters including axonal diameter, axonal density and fiber orientations. However these are not yet in routine clinical use due to lengthy acquisition times. In this review, I discuss how molecular diffusion may be measured using diffusion MRI, the biological and physical bases for the parameters derived from DWI and DTI, how these are used in clinical studies and the prospect of more complex tissue models providing helpful micro-structural information. PMID:23289085
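The two workhorse DTI parameters mentioned above can be computed directly from the eigenvalues of the diffusion tensor using their standard definitions; the tensor values below are invented for illustration.

```python
import numpy as np

# Example diffusion tensor (mm^2/s), invented for illustration.
D = np.array([[1.7e-3, 0.1e-3, 0.0],
              [0.1e-3, 0.4e-3, 0.0],
              [0.0,    0.0,    0.3e-3]])

lam = np.linalg.eigvalsh(D)          # eigenvalues of the tensor
md = lam.mean()                      # mean diffusivity (MD)

# Fractional anisotropy: FA = sqrt(3/2) * ||lambda - MD|| / ||lambda||
fa = np.sqrt(1.5) * np.linalg.norm(lam - md) / np.linalg.norm(lam)
print(f"MD = {md:.2e} mm^2/s, FA = {fa:.2f}")
```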
Feng, Taotao; Wang, Hai; Zhang, Xiaojin; Sun, Haopeng; You, Qidong
2014-06-01
Protein lysine methyltransferase G9a, which catalyzes methylation of lysine 9 of histone H3 (H3K9) and lysine 373 (K373) of p53, is overexpressed in human cancers. This suggests that small-molecule inhibitors of G9a might be attractive antitumor agents. Herein we report our efforts on the design of novel G9a inhibitors based on the 3D quantitative structure-activity relationship (3D-QSAR) analysis of a series of 2,4-diamino-7-aminoalkoxyquinazolines as G9a inhibitors. The 3D-QSAR model was generated from 47 compounds using docking-based molecular alignment. The best predictions were obtained with the CoMFA standard model (q2 = 0.700, r2 = 0.952) and the CoMSIA model combined with steric, electrostatic, hydrophobic, hydrogen bond donor and acceptor fields (q2 = 0.724, r2 = 0.960). The structural requirements of substituted 2,4-diamino-7-aminoalkoxyquinazolines for G9a inhibitory activity can be obtained by analysing the CoMSIA plots. Based on this information, six novel follow-up analogs were designed.
NASA Astrophysics Data System (ADS)
Kühn, Michael; Vieth-Hillebrand, Andrea; Wilke, Franziska D. H.
2017-04-01
Black shales are a heterogeneous mixture of minerals, organic matter and formation water, and little is actually known about the fluid-rock interactions during hydraulic fracturing and their effects on the composition of flowback and produced water. Geochemical simulations have been performed based on the analyses of "real" flowback water samples and artificial stimulation fluids from lab experiments, with the aim of setting up a chemical process model for shale gas reservoirs. Prediction of flowback water compositions for potential or already chosen sites requires validated and parameterized geochemical models. For the software "Geochemist's Workbench" (GWB), databases are adapted and amended based on a literature review. Evaluation of the system has been performed in comparison with the results from laboratory experiments. Parameterization was done with regard to the field data provided. Finally, reaction path models are applied for quantitative information about the mobility of compounds in specific settings. Our work leads to quantitative estimates of reservoir compounds in the flowback based on calibrations by laboratory experiments. Such information is crucial for the assessment of environmental impacts as well as for estimating human- and ecotoxicological effects of the flowback waters from a variety of natural gas shales. With a comprehensive knowledge about the potential composition and mobility of flowback water, selection of water treatment techniques will become easier.
NASA Astrophysics Data System (ADS)
Adar, E. M.; Rosenthal, E.; Issar, A. S.; Batelaan, O.
1992-08-01
This paper demonstrates the implementation of a novel mathematical model to quantify subsurface inflows from various sources into the arid alluvial basin of the southern Arava Valley divided between Israel and Jordan. The model is based on the spatial distribution of environmental tracers and is intended for use in basins with a complex hydrogeological structure and/or scarce physical hydrologic information. However, a sufficient number of suitable wells and springs is required to allow water sampling for chemical and isotopic analyses. Environmental tracers are used in a multivariable cluster analysis to define potential sources of recharge, and also to delimit homogeneous mixing compartments within the modeled aquifer. Six mixing cells were identified based on 13 constituents. A quantitative assessment of 11 significant subsurface inflows was obtained. Results revealed that the total recharge into the southern Arava basin is around 12.52 × 10⁶ m³ year⁻¹. The major source of inflow into the alluvial aquifer is the Nubian sandstone aquifer, which comprises 65-75% of the total recharge. Only 19-24% of the recharge, but the most important source of fresh water, originates over the eastern Jordanian mountains and alluvial fans.
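To illustrate the mixing-cell idea on a much smaller scale than the model above, the sketch below solves a tracer mass balance for the mixing fractions of candidate recharge sources using nonnegative least squares, with an appended row forcing the fractions to sum to one. The end-member and mixed-water concentrations are invented, and the actual multi-cell model is far more elaborate.

```python
import numpy as np
from scipy.optimize import nnls

# Columns = candidate end-members (e.g. Nubian sandstone, eastern mountains, local),
# rows = tracers (e.g. Cl, SO4, d18O); all values are invented concentrations.
end_members = np.array([[320.0,  90.0, 150.0],
                        [210.0,  60.0, 100.0],
                        [ -6.5,  -4.0,  -5.2]])
mixed = np.array([250.0, 165.0, -5.8])     # composition observed in the mixing cell

# Append the constraint that mixing fractions sum to 1 (heavily weighted).
A = np.vstack([end_members, 100.0 * np.ones(3)])
b = np.append(mixed, 100.0)

fractions, _ = nnls(A, b)                  # nonnegative mixing fractions
print("estimated mixing fractions:", fractions.round(2))
```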
Microfluidics-based digital quantitative PCR for single-cell small RNA quantification.
Yu, Tian; Tang, Chong; Zhang, Ying; Zhang, Ruirui; Yan, Wei
2017-09-01
Quantitative analyses of small RNAs at the single-cell level have been challenging because of limited sensitivity and specificity of conventional real-time quantitative PCR methods. A digital quantitative PCR (dqPCR) method for miRNA quantification has been developed, but it requires the use of proprietary stem-loop primers and only applies to miRNA quantification. Here, we report a microfluidics-based dqPCR (mdqPCR) method, which takes advantage of the Fluidigm BioMark HD system for both template partition and the subsequent high-throughput dqPCR. Our mdqPCR method demonstrated excellent sensitivity and reproducibility suitable for quantitative analyses of not only miRNAs but also all other small RNA species at the single-cell level. Using this method, we discovered that each sperm has a unique miRNA profile. © The Authors 2017. Published by Oxford University Press on behalf of Society for the Study of Reproduction. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
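The core arithmetic behind any digital PCR readout, including the microfluidic partitioning described above, is the Poisson correction that converts the fraction of positive partitions into a template concentration; the partition counts and volume below are illustrative only.

```python
import numpy as np

n_partitions = 770          # illustrative number of partitions per panel
positives = 212             # partitions showing amplification
partition_volume_nl = 0.85  # illustrative partition volume (nl)

# Poisson correction: mean copies per partition lambda = -ln(1 - p).
p = positives / n_partitions
lam = -np.log(1.0 - p)

copies_per_ul = lam / (partition_volume_nl * 1e-3)   # convert nl to ul
print(f"lambda = {lam:.3f} copies/partition, {copies_per_ul:.0f} copies/uL")
```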
Quantitative Residual Strain Analyses on Strain Hardened Nickel Based Alloy
NASA Astrophysics Data System (ADS)
Yonezawa, Toshio; Maeguchi, Takaharu; Goto, Toru; Juan, Hou
Many papers have reported on the effects of strain hardening by cold rolling, grinding, welding, etc. on the stress corrosion cracking susceptibility of nickel-based alloys and austenitic stainless steels for LWR pipings and components. However, the residual strain introduced by cold rolling, grinding, welding, etc. has rarely been evaluated quantitatively.
Aarons, Gregory A; Green, Amy E; Willging, Cathleen E; Ehrhart, Mark G; Roesch, Scott C; Hecht, Debra B; Chaffin, Mark J
2014-12-10
This study examines sustainment of an EBI implemented in 11 United States service systems across two states, and delivered in 87 counties. The aims are to 1) determine the impact of state and county policies and contracting on EBI provision and sustainment; 2) investigate the role of public, private, and academic relationships and collaboration in long-term EBI sustainment; 3) assess organizational and provider factors that affect EBI reach/penetration, fidelity, and organizational sustainment climate; and 4) integrate findings through a collaborative process involving the investigative team, consultants, and system and community-based organization (CBO) stakeholders in order to further develop and refine a conceptual model of sustainment to guide future research and provide a resource for service systems to prepare for sustainment as the ultimate goal of the implementation process. A mixed-method prospective and retrospective design will be used. Semi-structured individual and group interviews will be used to collect information regarding influences on EBI sustainment including policies, attitudes, and practices; organizational factors and external policies affecting model implementation; involvement of or collaboration with other stakeholders; and outer- and inner-contextual supports that facilitate ongoing EBI sustainment. Document review (e.g., legislation, executive orders, regulations, monitoring data, annual reports, agendas and meeting minutes) will be used to examine the roles of state, county, and local policies in EBI sustainment. Quantitative measures will be collected via administrative data and web surveys to assess EBI reach/penetration, staff turnover, EBI model fidelity, organizational culture and climate, work attitudes, implementation leadership, sustainment climate, attitudes toward EBIs, program sustainment, and level of institutionalization. Hierarchical linear modeling will be used for quantitative analyses. Qualitative analyses will be tailored to each of the qualitative methods (e.g., document review, interviews). Qualitative and quantitative approaches will be integrated through an inclusive process that values stakeholder perspectives. The study of sustainment is critical to capitalizing on and benefiting from the time and fiscal investments in EBI implementation. Sustainment is also critical to realizing broad public health impact of EBI implementation. The present study takes a comprehensive mixed-method approach to understanding sustainment and refining a conceptual model of sustainment.
Pütter, Carolin; Pechlivanis, Sonali; Nöthen, Markus M; Jöckel, Karl-Heinz; Wichmann, Heinz-Erich; Scherag, André
2011-01-01
Genome-wide association studies have identified robust associations between single nucleotide polymorphisms and complex traits. As the proportion of phenotypic variance explained is still limited for most of the traits, larger and larger meta-analyses are being conducted to detect additional associations. Here we investigate the impact of the study design and the underlying assumption about the true genetic effect in a bimodal mixture situation on the power to detect associations. We performed simulations of quantitative phenotypes analysed by standard linear regression and dichotomized case-control data sets from the extremes of the quantitative trait analysed by standard logistic regression. Using linear regression, markers with an effect in the extremes of the traits were almost undetectable, whereas analysing extremes by case-control design had superior power even for much smaller sample sizes. Two real data examples are provided to support our theoretical findings and to explore our mixture and parameter assumption. Our findings support the idea to re-analyse the available meta-analysis data sets to detect new loci in the extremes. Moreover, our investigation offers an explanation for discrepant findings when analysing quantitative traits in the general population and in the extremes. Copyright © 2011 S. Karger AG, Basel.
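A small simulation in the spirit of the comparison above is sketched below: power of linear regression on the full quantitative trait versus logistic regression on extreme-phenotype cases and controls, when the marker acts only in the upper extreme. The generative model is a simplification of the paper's mixture setting, and all effect sizes and thresholds are arbitrary.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(5)
n, n_sims, hits_lin, hits_log = 2000, 200, 0, 0

for _ in range(n_sims):
    g = rng.binomial(2, 0.3, size=n)                      # additive genotype coding
    y = rng.normal(size=n)
    upper = y > np.quantile(y, 0.9)
    y = y + upper * 0.4 * g                               # effect only in the upper extreme

    hits_lin += stats.linregress(g, y).pvalue < 0.05      # full quantitative trait

    extreme = (y < np.quantile(y, 0.1)) | (y > np.quantile(y, 0.9))
    cc = (y[extreme] > np.median(y)).astype(int)          # extremes as cases vs controls
    X = sm.add_constant(g[extreme].astype(float))
    hits_log += sm.Logit(cc, X).fit(disp=0).pvalues[1] < 0.05

print("power, linear regression :", hits_lin / n_sims)
print("power, extremes logistic :", hits_log / n_sims)
```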
The C23A system, an example of quantitative control of plant growth associated with a data base
NASA Technical Reports Server (NTRS)
Andre, M.; Daguenet, A.; Massimino, D.; Gerbaud, A.
1986-01-01
The architecture of the C23A (Chambres de Culture Automatique en Atmosphere Artificielles) system for the controlled study of plant physiology is described. The following components are discussed: modular plant growth chambers and associated instruments (IR CO2 analyser, mass spectrometer and chemical analyser); a network of frontal processors controlling this apparatus; a central computer for periodic control and multiplexed operation of the processors; and a network of terminal computers able to query the data base for data processing and modeling. Examples of present results are given, including a growth curve analysis of CO2 and O2 gas exchange in shoots and roots, and the daily evolution of algal photosynthesis and of the pools of dissolved CO2 in sea water.
J.C. Douma; M. Pautasso; R.C. Venette; C. Robinet; L. Hemerik; M.C.M. Mourits; J. Schans; W. van der Werf
2016-01-01
Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative estimates of introduction risks and effectiveness of management options....
Industrial ecology: Quantitative methods for exploring a lower carbon future
NASA Astrophysics Data System (ADS)
Thomas, Valerie M.
2015-03-01
Quantitative methods for environmental and cost analyses of energy, industrial, and infrastructure systems are briefly introduced and surveyed, with the aim of encouraging broader utilization and development of quantitative methods in sustainable energy research. Material and energy flow analyses can provide an overall system overview. The methods of engineering economics and cost benefit analysis, such as net present values, are the most straightforward approach for evaluating investment options, with the levelized cost of energy being a widely used metric in electricity analyses. Environmental lifecycle assessment has been extensively developed, with both detailed process-based and comprehensive input-output approaches available. Optimization methods provide an opportunity to go beyond engineering economics to develop detailed least-cost or least-impact combinations of many different choices.
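To make two of the metrics named above concrete, here is a small worked Python example of a net present value and a levelized cost of energy calculation; the capital cost, generation, price, discount rate and lifetime are invented placeholder numbers, not values from the article.

```python
# Illustrative sketch of two metrics named in the abstract: net present value
# (NPV) and levelized cost of energy (LCOE). All input numbers are assumptions.
capex = 1_500_000.0          # upfront investment ($)
annual_om = 30_000.0         # yearly operation & maintenance cost ($)
annual_energy = 2_000_000.0  # yearly generation (kWh)
price = 0.11                 # electricity price ($/kWh)
r, years = 0.06, 25          # discount rate and project lifetime

discount = [(1 + r) ** -t for t in range(1, years + 1)]

# NPV: discounted revenues minus discounted costs and the upfront investment.
npv = sum((annual_energy * price - annual_om) * d for d in discount) - capex

# LCOE: discounted lifetime cost divided by discounted lifetime generation.
lcoe = (capex + sum(annual_om * d for d in discount)) / \
       sum(annual_energy * d for d in discount)

print(f"NPV  = ${npv:,.0f}")
print(f"LCOE = {lcoe:.3f} $/kWh")
```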
Automated detection of arterial input function in DSC perfusion MRI in a stroke rat model
NASA Astrophysics Data System (ADS)
Yeh, M.-Y.; Lee, T.-H.; Yang, S.-T.; Kuo, H.-H.; Chyi, T.-K.; Liu, H.-L.
2009-05-01
Quantitative cerebral blood flow (CBF) estimation requires deconvolution of the tissue concentration time curves with an arterial input function (AIF). However, image-based determination of the AIF in rodents is challenging due to limited spatial resolution. We evaluated the feasibility of quantitative analysis using automated AIF detection and compared the results with the commonly applied semi-quantitative analysis. Permanent occlusion of the bilateral or unilateral common carotid artery was used to induce cerebral ischemia in rats. Dynamic susceptibility contrast imaging was performed on a 3-T magnetic resonance scanner with a spin-echo echo-planar imaging sequence (TR/TE = 700/80 ms, FOV = 41 mm, matrix = 64, 3 slices, SW = 2 mm), starting 7 s prior to contrast injection (1.2 ml/kg), at four different time points. For the quantitative analysis, CBF was calculated by deconvolution with an AIF obtained from the 10 voxels showing the greatest contrast enhancement. For the semi-quantitative analysis, relative CBF was estimated as the integral divided by the first moment of the relaxivity time curve. We observed that when the AIFs obtained from three different ROIs (whole brain, hemisphere without lesion, and hemisphere with lesion) were similar, the CBF ratios (lesion/normal) from the quantitative and semi-quantitative analyses showed a similar trend at the different operative time points; when the AIFs differed, the CBF ratios also differed. We conclude that, using local maxima, a proper AIF can be defined without knowing the anatomical location of arteries in a stroke rat model.
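The semi-quantitative estimate mentioned above can be illustrated with a few lines of Python; the relaxivity curve below is synthetic, and "first moment" is taken here as the area-normalised first moment of the curve, which is one common reading of the abstract's description.

```python
# Sketch of the semi-quantitative relative CBF estimate: integral of the
# relaxivity-time curve divided by its (area-normalised) first moment.
# The curve below is synthetic, not study data.
import numpy as np

dt = 0.7                                       # s, sampling interval (TR = 700 ms)
t = np.arange(0, 60, dt)
# synthetic delta-R2* (relaxivity) curve: a gamma-variate-like bolus passage
curve = np.where(t > 7, (t - 7) ** 2 * np.exp(-(t - 7) / 4.0), 0.0)

area = curve.sum() * dt                        # integral of the relaxivity curve
first_moment = (curve * t).sum() * dt / area   # area-normalised first moment (s)
rel_cbf = area / first_moment                  # semi-quantitative relative CBF

print(f"integral = {area:.1f}, first moment = {first_moment:.1f} s, "
      f"relative CBF ~ {rel_cbf:.2f}")
```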
Physically-based in silico light sheet microscopy for visualizing fluorescent brain models
2015-01-01
Background We present a physically-based computational model of the light sheet fluorescence microscope (LSFM). Based on Monte Carlo ray tracing and geometric optics, our method simulates the operational aspects and image formation process of the LSFM. This simulated, in silico LSFM creates synthetic images of digital fluorescent specimens that can resemble those generated by a real LSFM, as opposed to established visualization methods that produce merely visually-plausible images. We also propose an accurate fluorescence rendering model which takes into account the intrinsic characteristics of fluorescent dyes to simulate the light interaction with fluorescent biological specimens. Results We demonstrate first results of our visualization pipeline applied to a simplified brain tissue model reconstructed from the somatosensory cortex of a young rat. The modeling aspects of the LSFM units are qualitatively analysed, and the results of the fluorescence model are quantitatively validated against the fluorescence brightness equation and the characteristic emission spectra of different fluorescent dyes. AMS subject classification: Modelling and simulation. PMID:26329404
Zanzonico, Pat; Carrasquillo, Jorge A; Pandit-Taskar, Neeta; O'Donoghue, Joseph A; Humm, John L; Smith-Jones, Peter; Ruan, Shutian; Divgi, Chaitanya; Scott, Andrew M; Kemeny, Nancy E; Fong, Yuman; Wong, Douglas; Scheinberg, David; Ritter, Gerd; Jungbluth, Achem; Old, Lloyd J; Larson, Steven M
2015-10-01
The molecular specificity of monoclonal antibodies (mAbs) directed against tumor antigens has proven effective for targeted therapy of human cancers, as shown by a growing list of successful antibody-based drug products. We describe a novel, nonlinear compartmental model using PET-derived data to determine the "best-fit" parameters and model-derived quantities for optimizing biodistribution of intravenously injected (124)I-labeled antitumor antibodies. As an example of this paradigm, quantitative image and kinetic analyses of anti-A33 humanized mAb (also known as "A33") were performed in 11 colorectal cancer patients. Serial whole-body PET scans of (124)I-labeled A33 and blood samples were acquired and the resulting tissue time-activity data for each patient were fit to a nonlinear compartmental model using the SAAM II computer code. Excellent agreement was observed between fitted and measured parameters of tumor uptake, "off-target" uptake in bowel mucosa, blood clearance, tumor antigen levels, and percent antigen occupancy. This approach should be generally applicable to antibody-antigen systems in human tumors for which the masses of antigen-expressing tumor and of normal tissues can be estimated and for which antibody kinetics can be measured with PET. Ultimately, based on each patient's resulting "best-fit" nonlinear model, a patient-specific optimum mAb dose (in micromoles, for example) may be derived.
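The study's compartmental analysis was done with SAAM II; as a much simpler stand-in, the sketch below fits a bi-exponential clearance curve to synthetic time-activity data with non-linear least squares, just to show the general fitting step. All parameter values and data points are assumptions.

```python
# A much-simplified sketch (not the SAAM II model used in the study): fitting a
# bi-exponential blood-clearance curve to synthetic time-activity data.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, k1, a2, k2):
    """Two-phase clearance: fast and slow exponential components."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

t = np.array([1, 4, 24, 48, 96, 168, 240], float)        # hours post-injection
rng = np.random.default_rng(0)
truth = biexp(t, 60.0, 0.15, 40.0, 0.01)                  # synthetic %ID/L values
y = truth * rng.normal(1.0, 0.05, t.size)                 # 5% measurement noise

popt, pcov = curve_fit(biexp, t, y, p0=[50, 0.1, 30, 0.02], maxfev=10000)
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip(["a1", "k1", "a2", "k2"], popt, perr):
    print(f"{name} = {val:8.3f} +/- {err:.3f}")
```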
Highly Reproducible Label Free Quantitative Proteomic Analysis of RNA Polymerase Complexes*
Mosley, Amber L.; Sardiu, Mihaela E.; Pattenden, Samantha G.; Workman, Jerry L.; Florens, Laurence; Washburn, Michael P.
2011-01-01
The use of quantitative proteomics methods to study protein complexes has the potential to provide in-depth information on the abundance of different protein components as well as their modification state in various cellular conditions. To interrogate protein complex quantitation using shotgun proteomic methods, we have focused on the analysis of protein complexes using label-free multidimensional protein identification technology and studied the reproducibility of biological replicates. For these studies, we focused on three highly related and essential multi-protein enzymes, RNA polymerase I, II, and III from Saccharomyces cerevisiae. We found that label-free quantitation using spectral counting is highly reproducible at the protein and peptide level when analyzing RNA polymerase I, II, and III. In addition, we show that peptide sampling does not follow a random sampling model, and we show the need for advanced computational models to predict peptide detection probabilities. In order to address these issues, we used the APEX protocol to model the expected peptide detectability based on whole cell lysate acquired using the same multidimensional protein identification technology analysis used for the protein complexes. Neither method was able to predict the peptide sampling levels that we observed using replicate multidimensional protein identification technology analyses. In addition to the analysis of the RNA polymerase complexes, our analysis provides quantitative information about several RNAP associated proteins including the RNAPII elongation factor complexes DSIF and TFIIF. Our data shows that DSIF and TFIIF are the most highly enriched RNAP accessory factors in Rpb3-TAP purifications and demonstrate our ability to measure low level associated protein abundance across biological replicates. In addition, our quantitative data supports a model in which DSIF and TFIIF interact with RNAPII in a dynamic fashion in agreement with previously published reports. PMID:21048197
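As a small illustration of spectral-count-based quantitation (not the exact pipeline of the paper), the sketch below computes normalized spectral abundance factor (NSAF)-style values for a handful of subunits; the protein names are taken loosely from the abstract, and the counts and lengths are invented.

```python
# Sketch of a common label-free quantitation step consistent with spectral
# counting: NSAF-style values for subunits of a purified complex.
# Counts and protein lengths below are made up for illustration.
proteins = {             # protein: (spectral counts, length in residues)
    "Rpb1": (520, 1733),
    "Rpb2": (480, 1224),
    "Rpb3": (150, 318),
    "Spt5_DSIF": (90, 1063),
    "Tfg1_TFIIF": (70, 735),
}

saf = {p: counts / length for p, (counts, length) in proteins.items()}
total = sum(saf.values())
nsaf = {p: v / total for p, v in saf.items()}   # normalise so values sum to 1

for p, v in sorted(nsaf.items(), key=lambda kv: -kv[1]):
    print(f"{p:12s} NSAF = {v:.3f}")
```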
Vesicular stomatitis forecasting based on Google Trends
Lu, Yi; Zhou, GuangYa; Chen, Qin
2018-01-01
Background Vesicular stomatitis (VS) is an important viral disease of livestock. The main feature of VS is irregular blisters that occur on the lips, tongue, oral mucosa, hoof crown and nipple. Humans can also be infected with vesicular stomatitis and develop meningitis. This study analyses the 2014 American VS outbreaks in order to accurately predict vesicular stomatitis outbreak trends. Methods American VS outbreak data were collected from the OIE. Data for VS keywords were obtained by entering 24 disease-related keywords into Google Trends. After calculating Pearson and Spearman correlation coefficients, a relationship was found between outbreaks and keywords derived from Google Trends. Finally, predictive models were constructed based on qualitative classification and quantitative regression. Results For the regression model, the Pearson correlation coefficients between the predicted outbreaks and actual outbreaks are 0.953 and 0.948, respectively. For the qualitative classification model, we constructed five classification predictive models and chose the best one as the result. The SN (sensitivity), SP (specificity) and ACC (prediction accuracy) values of the best classification predictive model are 78.52%, 72.5% and 77.14%, respectively. Conclusion This study applied Google search data to construct a qualitative classification model and a quantitative regression model. The results show that the method is effective and that the two models produce more accurate forecasts. PMID:29385198
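The reported SN, SP and ACC values are standard confusion-matrix quantities; the short sketch below shows how they are computed, using invented labels rather than the study's weekly outbreak classifications.

```python
# Sketch of the evaluation metrics quoted above: sensitivity (SN), specificity
# (SP) and accuracy (ACC) from a binary confusion matrix. Labels are invented.
actual    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1]
predicted = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0]

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))

sn = tp / (tp + fn)            # sensitivity: outbreak periods correctly flagged
sp = tn / (tn + fp)            # specificity: quiet periods correctly flagged
acc = (tp + tn) / len(actual)  # overall prediction accuracy
print(f"SN = {sn:.1%}, SP = {sp:.1%}, ACC = {acc:.1%}")
```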
FDTD-based quantitative analysis of terahertz wave detection for multilayered structures.
Tu, Wanli; Zhong, Shuncong; Shen, Yaochun; Zhou, Qing; Yao, Ligang
2014-10-01
Experimental investigations have shown that terahertz pulsed imaging (TPI) is able to quantitatively characterize a range of multilayered media (e.g., biological tissues, pharmaceutical tablet coatings, layered polymer composites, etc.). Advanced modeling of the interaction of terahertz radiation with a multilayered medium is required to enable the wide application of terahertz technology in a number of emerging fields, including nondestructive testing. Indeed, there have already been many theoretical analyses performed on the propagation of terahertz radiation in various multilayered media. However, to date, most of these studies used 1D or 2D models, and the dispersive nature of the dielectric layers was not considered or was simplified. In the present work, the theoretical framework of using terahertz waves for the quantitative characterization of multilayered media was established. A 3D model based on the finite difference time domain (FDTD) method is proposed. A batch of pharmaceutical tablets with a single coating layer of different coating thicknesses and different refractive indices was modeled. The reflected terahertz wave from such a sample was computed using the FDTD method, assuming that the incident terahertz wave is broadband, covering a frequency range up to 3.5 THz. The simulated results for all of the pharmaceutical-coated tablets considered were found to be in good agreement with the experimental results obtained using a commercial TPI system. In addition, we studied a three-layered medium to mimic the occurrence of defects in the sample.
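The study uses a full 3D dispersive FDTD model; as a hedged, much-reduced illustration of the underlying update scheme, the sketch below runs a 1D FDTD simulation of a pulse reflecting from a single non-dispersive coating layer on a substrate and reports the reflected-to-incident amplitude ratio. The grid size, layer position, permittivities and pulse width are arbitrary assumptions.

```python
# 1-D FDTD sketch (Yee update scheme) of a pulse hitting a coating layer on a
# substrate. This is a simplified illustration, not the paper's 3-D model.
import numpy as np

nz, nsteps, S = 400, 900, 0.5             # grid cells, time steps, Courant number
eps = np.ones(nz)                         # relative permittivity profile
eps[200:240] = 2.56                       # coating layer (n ~ 1.6), assumed
eps[240:] = 4.0                           # substrate (n = 2), assumed

Ez, Hy = np.zeros(nz), np.zeros(nz)
probe = []                                # field recorded between boundary and source

for n in range(nsteps):
    e0_old, e1_old = Ez[0], Ez[1]
    Hy[:-1] += S * (Ez[1:] - Ez[:-1])                        # H-field update
    Ez[1:] += (S / eps[1:]) * (Hy[1:] - Hy[:-1])             # E-field update
    Ez[0] = e1_old + (S - 1) / (S + 1) * (Ez[1] - e0_old)    # 1st-order Mur ABC
    Ez[50] += np.exp(-((n - 80) / 25.0) ** 2)                # soft Gaussian source
    probe.append(Ez[40])

probe = np.array(probe)
incident = np.abs(probe[:250]).max()      # left-going copy of the incident pulse
reflected = np.abs(probe[400:]).max()     # echo returning from the coating layer
print(f"|reflected| / |incident| ~ {reflected / incident:.2f}")
```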
NASA Astrophysics Data System (ADS)
Li, Xuan; Liu, Zhiping; Jiang, Xiaoli; Lodewijks, Gabrol
2018-01-01
Eddy current pulsed thermography (ECPT) is well established for non-destructive testing of electrically conductive materials, featuring the advantages of contactless operation, intuitive detection and efficient heating. The concept of divergence characterization of the damage rate of carbon fibre-reinforced plastic (CFRP)-steel structures can be extended to ECPT thermal pattern characterization. This study found that the use of ECPT technology on CFRP-steel structures generates a sizeable amount of valuable information for comprehensive material diagnostics. The relationship between divergence and transient thermal patterns can be identified and analysed by deploying mathematical models to analyse the information about fibre texture-like orientations, gaps and undulations in these multi-layered materials. The developed algorithm enabled the removal of information about fibre texture and the extraction of damage features. A model of the damaged CFRP-glue-steel structures was established using COMSOL Multiphysics® software, and quantitative non-destructive damage evaluation from the ECPT image areas was derived. The results of the proposed method illustrate that damaged areas are highly affected by the available information about fibre texture. The proposed approach can be applied for the detection of impact-induced damage and the quantitative evaluation of CFRP structures.
Guidelines for a graph-theoretic implementation of structural equation modeling
Grace, James B.; Schoolmaster, Donald R.; Guntenspergen, Glenn R.; Little, Amanda M.; Mitchell, Brian R.; Miller, Kathryn M.; Schweiger, E. William
2012-01-01
Structural equation modeling (SEM) is increasingly being chosen by researchers as a framework for gaining scientific insights from the quantitative analyses of data. New ideas and methods emerging from the study of causality, influences from the field of graphical modeling, and advances in statistics are expanding the rigor, capability, and even purpose of SEM. Guidelines for implementing the expanded capabilities of SEM are currently lacking. In this paper we describe new developments in SEM that we believe constitute a third-generation of the methodology. Most characteristic of this new approach is the generalization of the structural equation model as a causal graph. In this generalization, analyses are based on graph theoretic principles rather than analyses of matrices. Also, new devices such as metamodels and causal diagrams, as well as an increased emphasis on queries and probabilistic reasoning, are now included. Estimation under a graph theory framework permits the use of Bayesian or likelihood methods. The guidelines presented start from a declaration of the goals of the analysis. We then discuss how theory frames the modeling process, requirements for causal interpretation, model specification choices, selection of estimation method, model evaluation options, and use of queries, both to summarize retrospective results and for prospective analyses. The illustrative example presented involves monitoring data from wetlands on Mount Desert Island, home of Acadia National Park. Our presentation walks through the decision process involved in developing and evaluating models, as well as drawing inferences from the resulting prediction equations. In addition to evaluating hypotheses about the connections between human activities and biotic responses, we illustrate how the structural equation (SE) model can be queried to understand how interventions might take advantage of an environmental threshold to limit Typha invasions. The guidelines presented provide for an updated definition of the SEM process that subsumes the historical matrix approach under a graph-theory implementation. The implementation is also designed to permit complex specifications and to be compatible with various estimation methods. Finally, they are meant to foster the use of probabilistic reasoning in both retrospective and prospective considerations of the quantitative implications of the results.
DEMOGRAPHY AND VIABILITY ANALYSES OF A DIAMONDBACK TERRAPIN POPULATION
The diamondback terrapin Malaclemys terrapin is a long-lived species with special management requirements, but quantitative analyses to support management are lacking. I analyzed mark-recapture data and constructed an age-classified matrix population model to determine the status...
Development of in vitro and in vivo neutralization assays based on the pseudotyped H7N9 virus.
Tian, Yabin; Zhao, Hui; Liu, Qiang; Zhang, Chuntao; Nie, Jianhui; Huang, Weijing; Li, Changgui; Li, Xuguang; Wang, Youchun
2018-05-31
H7N9 viral infections pose a great threat to both animal and human health. This avian virus cannot be handled in level 2 biocontainment laboratories, substantially hindering evaluation of prophylactic vaccines and therapeutic agents. Here, we report a high-titer pseudoviral system with a bioluminescent reporter gene, enabling us to visually and quantitatively conduct analyses of virus replications in both tissue cultures and animals. For evaluation of immunogenicity of H7N9 vaccines, we developed an in vitro assay for neutralizing antibody measurement based on the pseudoviral system; results generated by the in vitro assay were found to be strongly correlated with those by either hemagglutination inhibition (HI) or micro-neutralization (MN) assay. Furthermore, we injected the viruses into Balb/c mice and observed dynamic distributions of the viruses in the animals, which provides an ideal imaging model for quantitative analyses of prophylactic and therapeutic monoclonal antibodies. Taken together, the pseudoviral systems reported here could be of great value for both in vitro and in vivo evaluations of vaccines and antiviral agents without the need of wild type H7N9 virus.
Diet behaviour among young people in transition to adulthood (18–25 year olds): a mixed method study
Poobalan, Amudha S.; Aucott, Lorna S.; Clarke, Amanda; Smith, William Cairns S.
2014-01-01
Background: Young people (18–25 years) during the adolescence/adulthood transition are vulnerable to weight gain and notoriously hard to reach. Despite increased levels of overweight/obesity in this age group, diet behaviour, a major contributor to obesity, is poorly understood. The purpose of this study was to explore diet behaviour among 18–25 year olds with influential factors including attitudes, motivators and barriers. Methods: An explanatory mixed method study design, based on health Behaviour Change Theories, was used. Those at University/college and in the community, including those Not in Education, Employment or Training (NEET), were included. An initial quantitative questionnaire survey underpinned by the Theory of Planned Behaviour and Social Cognitive Theory was conducted and the results from this were incorporated into the qualitative phase. Seven focus groups were conducted among similar young people, varying in education and socioeconomic status. Exploratory univariate analysis was followed by multi-staged modelling to analyse the quantitative data. ‘Framework Analysis’ was used to analyse the focus groups. Results: 1313 questionnaires were analysed. Self-reported overweight/obesity prevalence was 22%, increasing with age, particularly in males. Based on the survey, 40% of young people reported eating an adequate amount of fruits and vegetables and 59% eating regular meals, but 32% reported unhealthy snacking. Based on the statistical modelling, positive attitudes towards diet and high intention (89%) did not translate into healthy diet behaviour. From the focus group discussions, the main motivators for diet behaviour were ‘self-appearance’ and having ‘variety of food’. There were mixed opinions on ‘cost’ of food and ‘taste’. Conclusion: Elements deemed really important to young people have been identified. This mixed method study is the largest in this vulnerable and neglected group covering a wide spectrum of the community. It provides an evidence base to inform tailored interventions for a healthy diet within this age group. PMID:25750826
Impact of TRMM and SSM/I Rainfall Assimilation on Global Analysis and QPF
NASA Technical Reports Server (NTRS)
Hou, Arthur; Zhang, Sara; Reale, Oreste
2002-01-01
Evaluation of QPF skills requires quantitatively accurate precipitation analyses. We show that assimilation of surface rain rates derived from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager and Special Sensor Microwave/Imager (SSM/I) improves quantitative precipitation estimates (QPE) and many aspects of global analyses. Short-range forecasts initialized with analyses with satellite rainfall data generally yield significantly higher QPF threat scores and better storm track predictions. These results were obtained using a variational procedure that minimizes the difference between the observed and model rain rates by correcting the moist physics tendency of the forecast model over a 6h assimilation window. In two case studies of Hurricanes Bonnie and Floyd, synoptic analysis shows that this procedure produces initial conditions with better-defined tropical storm features and stronger precipitation intensity associated with the storm.
Sung, Yun Ju; Di, Yanming; Fu, Audrey Q; Rothstein, Joseph H; Sieh, Weiva; Tong, Liping; Thompson, Elizabeth A; Wijsman, Ellen M
2007-01-01
We performed multipoint linkage analyses with multiple programs and models for several gene expression traits in the Centre d'Etude du Polymorphisme Humain families. All analyses provided consistent results for both peak location and shape. Variance-components (VC) analysis gave wider peaks and Bayes factors gave fewer peaks. Among programs from the MORGAN package, lm_multiple performed better than lm_markers, resulting in less Markov-chain Monte Carlo (MCMC) variability between runs, and the program lm_twoqtl provided higher LOD scores by also including either a polygenic component or an additional quantitative trait locus. PMID:18466597
[Clinical research XXIII. From clinical judgment to meta-analyses].
Rivas-Ruiz, Rodolfo; Castelán-Martínez, Osvaldo D; Pérez-Rodríguez, Marcela; Palacios-Cruz, Lino; Noyola-Castillo, Maura E; Talavera, Juan O
2014-01-01
Systematic reviews (SR) are studies designed to answer clinical questions on the basis of original articles. Meta-analysis (MTA) is the mathematical analysis of an SR. These analyses are divided into two groups: those which evaluate the measured results of quantitative variables (for example, the body mass index, BMI) and those which evaluate qualitative variables (for example, whether a patient is alive or dead, or whether the patient is improving or not). Quantitative variables are generally analysed using the mean difference, whereas qualitative variables can be analysed using several measures: the odds ratio (OR), relative risk (RR), absolute risk reduction (ARR) and hazard ratio (HR). These analyses are represented in forest plots, which allow the evaluation of each individual study, as well as the heterogeneity between studies and the overall effect of the intervention. These analyses are mainly based on Student's t test and the chi-squared test. To make appropriate decisions based on an MTA, it is important to understand the characteristics of the statistical methods in order to avoid misinterpretations.
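For readers wanting the arithmetic behind the effect measures listed above, the sketch below computes RR, ARR and OR from a single hypothetical 2×2 table; the counts are invented for illustration.

```python
# Worked sketch of the effect measures named above (RR, ARR, OR) from one
# hypothetical 2x2 table; the counts are invented for illustration only.
events_treat, n_treat = 30, 200       # events / total in the intervention arm
events_ctrl,  n_ctrl  = 50, 200       # events / total in the control arm

risk_treat = events_treat / n_treat
risk_ctrl = events_ctrl / n_ctrl

rr  = risk_treat / risk_ctrl                         # relative risk
arr = risk_ctrl - risk_treat                         # absolute risk reduction
odds_treat = events_treat / (n_treat - events_treat)
odds_ctrl  = events_ctrl  / (n_ctrl  - events_ctrl)
or_ = odds_treat / odds_ctrl                         # odds ratio

print(f"RR = {rr:.2f}, ARR = {arr:.2%}, OR = {or_:.2f}")
```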
mRNA-Based Parallel Detection of Active Methanotroph Populations by Use of a Diagnostic Microarray
Bodrossy, Levente; Stralis-Pavese, Nancy; Konrad-Köszler, Marianne; Weilharter, Alexandra; Reichenauer, Thomas G.; Schöfer, David; Sessitsch, Angela
2006-01-01
A method was developed for the mRNA-based application of microbial diagnostic microarrays to detect active microbial populations. DNA- and mRNA-based analyses of environmental samples were compared and confirmed via quantitative PCR. Results indicated that mRNA-based microarray analyses may provide additional information on the composition and functioning of microbial communities. PMID:16461725
Climate Shocks and Migration: An Agent-Based Modeling Approach
Entwisle, Barbara; Williams, Nathalie E.; Verdery, Ashton M.; Rindfuss, Ronald R.; Walsh, Stephen J.; Malanson, George P.; Mucha, Peter J.; Frizzelle, Brian G.; McDaniel, Philip M.; Yao, Xiaozheng; Heumann, Benjamin W.; Prasartkul, Pramote; Sawangdee, Yothin; Jampaklay, Aree
2016-01-01
This is a study of migration responses to climate shocks. We construct an agent-based model that incorporates dynamic linkages between demographic behaviors, such as migration, marriage, and births, and agriculture and land use, which depend on rainfall patterns. The rules and parameterization of our model are empirically derived from qualitative and quantitative analyses of a well-studied demographic field site, Nang Rong district, Northeast Thailand. With this model, we simulate patterns of migration under four weather regimes in a rice economy: 1) a reference, ‘normal’ scenario; 2) seven years of unusually wet weather; 3) seven years of unusually dry weather; and 4) seven years of extremely variable weather. Results show relatively small impacts on migration. Experiments with the model show that existing high migration rates and strong selection factors, which are unaffected by climate change, are likely responsible for the weak migration response. PMID:27594725
Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared
ERIC Educational Resources Information Center
von Helversen, Bettina; Rieskamp, Jorg
2009-01-01
The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…
Lorenz, Alyson; Dhingra, Radhika; Chang, Howard H; Bisanzio, Donal; Liu, Yang; Remais, Justin V
2014-01-01
Extrapolating landscape regression models for use in assessing vector-borne disease risk and other applications requires thoughtful evaluation of fundamental model choice issues. To examine implications of such choices, an analysis was conducted to explore the extent to which disparate landscape models agree in their epidemiological and entomological risk predictions when extrapolated to new regions. Agreement between six literature-drawn landscape models was examined by comparing predicted county-level distributions of either Lyme disease or Ixodes scapularis vector using Spearman ranked correlation. AUC analyses and multinomial logistic regression were used to assess the ability of these extrapolated landscape models to predict observed national data. Three models based on measures of vegetation, habitat patch characteristics, and herbaceous landcover emerged as effective predictors of observed disease and vector distribution. An ensemble model containing these three models improved precision and predictive ability over individual models. A priori assessment of qualitative model characteristics effectively identified models that subsequently emerged as better predictors in quantitative analysis. Both a methodology for quantitative model comparison and a checklist for qualitative assessment of candidate models for extrapolation are provided; both tools aim to improve collaboration between those producing models and those interested in applying them to new areas and research questions.
SNPassoc: an R package to perform whole genome association studies.
González, Juan R; Armengol, Lluís; Solé, Xavier; Guinó, Elisabet; Mercader, Josep M; Estivill, Xavier; Moreno, Víctor
2007-03-01
The popularization of large-scale genotyping projects has led to the widespread adoption of genetic association studies as the tool of choice in the search for single nucleotide polymorphisms (SNPs) underlying susceptibility to complex diseases. Although the analysis of individual SNPs is a relatively trivial task, when the number of SNPs is large and multiple genetic models need to be explored, a tool to automate the analyses becomes necessary. In order to address this issue, we developed SNPassoc, an R package to carry out the most common analyses in whole genome association studies. These analyses include descriptive statistics and exploratory analysis of missing values, calculation of Hardy-Weinberg equilibrium, analysis of association based on generalized linear models (either for quantitative or binary traits), and analysis of multiple SNPs (haplotype and epistasis analysis). Package SNPassoc is available at CRAN from http://cran.r-project.org. A tutorial is available on Bioinformatics online and at http://davinci.crg.es/estivill_lab/snpassoc.
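SNPassoc itself is an R package; as a language-consistent stand-in, the Python sketch below reproduces two of the analyses named in the abstract, a Hardy-Weinberg equilibrium chi-square test and an additive-model association test for a quantitative trait, on simulated genotypes.

```python
# Python sketch (not the SNPassoc package): Hardy-Weinberg equilibrium test and
# an additive-model association test for a simulated quantitative trait.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
g = rng.binomial(2, 0.25, 800)                 # genotypes coded 0/1/2
y = 0.2 * g + rng.normal(0, 1, 800)            # quantitative trait

# Hardy-Weinberg equilibrium: compare observed genotype counts with the
# expectations implied by the estimated allele frequency.
obs = np.array([(g == k).sum() for k in (0, 1, 2)])
p = (2 * obs[2] + obs[1]) / (2 * obs.sum())    # frequency of the counted allele
exp = obs.sum() * np.array([(1 - p) ** 2, 2 * p * (1 - p), p ** 2])
chi2_stat = ((obs - exp) ** 2 / exp).sum()
p_hwe = stats.chi2.sf(chi2_stat, df=1)

# Additive-model association via simple linear regression.
fit = stats.linregress(g, y)

print(f"HWE chi2 = {chi2_stat:.2f}, p = {p_hwe:.3f}")
print(f"additive beta = {fit.slope:.3f}, p = {fit.pvalue:.2e}")
```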
Fatigue Damage of Collagenous Tissues: Experiment, Modeling and Simulation Studies
Martin, Caitlin; Sun, Wei
2017-01-01
Mechanical fatigue damage is a critical issue for soft tissues and tissue-derived materials, particularly for musculoskeletal and cardiovascular applications; yet, our understanding of the fatigue damage process is incomplete. Soft tissue fatigue experiments are often difficult and time-consuming to perform, which has hindered progress in this area. However, the recent development of soft-tissue fatigue-damage constitutive models has enabled simulation-based fatigue analyses of tissues under various conditions. Computational simulations facilitate highly controlled and quantitative analyses to study the distinct effects of various loading conditions and design features on tissue durability; thus, they are advantageous over complex fatigue experiments. Although significant work to calibrate the constitutive models from fatigue experiments and to validate predictability remains, further development in these areas will add to our knowledge of soft-tissue fatigue damage and will facilitate the design of durable treatments and devices. In this review, the experimental, modeling, and simulation efforts to study collagenous tissue fatigue damage are summarized and critically assessed. PMID:25955007
Numerical framework for the modeling of electrokinetic flows
NASA Astrophysics Data System (ADS)
Deshpande, Manish; Ghaddar, Chahid; Gilbert, John R.; St. John, Pamela M.; Woudenberg, Timothy M.; Connell, Charles R.; Molho, Joshua; Herr, Amy; Mungal, Godfrey; Kenny, Thomas W.
1998-09-01
This paper presents a numerical framework for design-based analyses of electrokinetic flow in interconnects. Electrokinetic effects, which can be broadly divided into electrophoresis and electroosmosis, are of importance in providing a transport mechanism in microfluidic devices for both pumping and separation. Models for the electrokinetic effects can be derived and coupled to the fluid dynamic equations through appropriate source terms. In the design of practical microdevices, however, accurate coupling of the electrokinetic effects requires the knowledge of several material and physical parameters, such as the diffusivity and the mobility of the solute in the solvent. Additionally wall-based effects such as chemical binding sites might exist that affect the flow patterns. In this paper, we address some of these issues by describing a synergistic numerical/experimental process to extract the parameters required. Experiments were conducted to provide the numerical simulations with a mechanism to extract these parameters based on quantitative comparisons with each other. These parameters were then applied in predicting further experiments to validate the process. As part of this research, we have created NetFlow, a tool for micro-fluid analyses. The tool can be validated and applied in existing technologies by first creating test structures to extract representations of the physical phenomena in the device, and then applying them in the design analyses to predict correct behavior.
Lewis, Ari S; Beyer, Leslie A; Zu, Ke
2015-01-01
The inhalation unit risk (IUR) that currently exists in the United States Environmental Protection Agency's (US EPA's) Integrated Risk Information System was developed in 1984 based on studies examining the relationship between respiratory cancer and arsenic exposure in copper smelters from two US locations: the copper smelter in Anaconda, Montana, and the American Smelting And Refining COmpany (ASARCO) smelter in Tacoma, Washington. Since US EPA last conducted its assessment, additional data have become available from epidemiology and mechanistic studies. In addition, the California Air Resources Board, Texas Commission of Environmental Quality, and Dutch Expert Committee on Occupational Safety have all conducted new risk assessments. All three analyses, which calculated IURs based on respiratory/lung cancer mortality, generated IURs that are lower (i.e., less restrictive) than the current US EPA value of 4.3×10⁻³ (μg/m³)⁻¹. The IURs developed by these agencies, which vary more than 20-fold, are based on somewhat different studies and use different methodologies to address uncertainties in the underlying datasets. Despite these differences, all were developed based on a cumulative exposure metric assuming a low-dose linear dose-response relationship. In this paper, we contrast and compare the analyses conducted by these agencies and critically evaluate strengths and limitations inherent in the data and methodologies used to develop quantitative risk estimates. In addition, we consider how these data could be best used to assess risk at much lower levels of arsenic in air, such as those experienced by the general public. Given that the mode of action for arsenic supports a threshold effect, and epidemiological evidence suggests that the arsenic concentration in air is a reliable predictor of lung/respiratory cancer risk, we developed a quantitative cancer risk analysis using a nonlinear threshold model. Applying a nonlinear model to occupational data, we established points of departure based on both cumulative exposure (μg/m³-years) to arsenic and arsenic concentration (μg/m³) via inhalation. Using these values, one can assess the lifetime risk of respiratory cancer mortality associated with ambient air concentrations of arsenic for the general US population. Copyright © 2014 Elsevier Ltd. All rights reserved.
Assessing uncertainty in published risk estimates using ...
Introduction: The National Research Council recommended quantitative evaluation of uncertainty in effect estimates for risk assessment. This analysis considers uncertainty across model forms and model parameterizations with hexavalent chromium [Cr(VI)] and lung cancer mortality as an example. The objective is to characterize model uncertainty by evaluating estimates across published epidemiologic studies of the same cohort. Methods: This analysis was based on 5 studies analyzing a cohort of 2,357 workers employed from 1950-74 in a chromate production plant in Maryland. Cox and Poisson models were the only model forms considered by study authors to assess the effect of Cr(VI) on lung cancer mortality. All models adjusted for smoking and included a 5-year exposure lag; however, other latency periods and model covariates such as age and race were considered. Published effect estimates were standardized to the same units and normalized by their variances to produce a standardized metric to compare variability within and between model forms. A total of 5 similarly parameterized analyses were considered across model form, and 16 analyses with alternative parameterizations were considered within model form (10 Cox; 6 Poisson). Results: Across Cox and Poisson model forms, adjusted cumulative exposure coefficients (betas) for 5 similar analyses ranged from 2.47 to 4.33 (mean = 2.97, σ² = 0.63). Within the 10 Cox models, coefficients ranged from 2.53 to 4.42 (mean = 3.29, σ² = 0.
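A minimal sketch of the summary step described above, comparing the spread of standardized exposure coefficients within and across model forms, is shown below; the coefficient values are invented placeholders, not the published Cr(VI) estimates.

```python
# Sketch of comparing standardized effect estimates within and across model
# forms. The coefficients below are illustrative placeholders only.
import statistics

betas = {
    "Cox":     [2.53, 3.10, 3.40, 2.90, 4.42],
    "Poisson": [2.47, 3.05, 3.60, 2.80],
}

pooled = [b for values in betas.values() for b in values]
print(f"all models : mean = {statistics.mean(pooled):.2f}, "
      f"variance = {statistics.variance(pooled):.2f}")
for form, values in betas.items():
    print(f"{form:10s}: mean = {statistics.mean(values):.2f}, "
          f"variance = {statistics.variance(values):.2f}")
```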
Medland, Sarah E; Loesch, Danuta Z; Mdzewski, Bogdan; Zhu, Gu; Montgomery, Grant W; Martin, Nicholas G
2007-01-01
The finger ridge count (a measure of pattern size) is one of the most heritable complex traits studied in humans and has been considered a model human polygenic trait in quantitative genetic analysis. Here, we report the results of the first genome-wide linkage scan for finger ridge count in a sample of 2,114 offspring from 922 nuclear families. Both univariate linkage to the absolute ridge count (a sum of all the ridge counts on all ten fingers), and multivariate linkage analyses of the counts on individual fingers, were conducted. The multivariate analyses yielded significant linkage to 5q14.1 (Logarithm of odds [LOD] = 3.34, pointwise-empirical p-value = 0.00025) that was predominantly driven by linkage to the ring, index, and middle fingers. The strongest univariate linkage was to 1q42.2 (LOD = 2.04, point-wise p-value = 0.002, genome-wide p-value = 0.29). In summary, the combination of univariate and multivariate results was more informative than simple univariate analyses alone. Patterns of quantitative trait loci factor loadings consistent with developmental fields were observed, and the simple pleiotropic model underlying the absolute ridge count was not sufficient to characterize the interrelationships between the ridge counts of individual fingers. PMID:17907812
Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun
2014-01-01
Differences exist among analysis results for agricultural monitoring and crop production based on remote sensing observations that are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models or methods. These differences can mainly be described quantitatively from three aspects, i.e. the multiple remote sensing observations, the crop parameter estimation models, and the spatial scale effects of surface parameters. We propose a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide a reference for further studies in agricultural applications using multiple remotely sensed observations from different sources. The new method is constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Statistical theory is used to extract the statistical characteristics of the multiple surface reflectance datasets and to quantitatively analyse the spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, properties of the Gaussian distribution are used to correct the multiple surface reflectance datasets on the basis of the obtained physical characteristics, mathematical distribution properties, and their spatial variations. The proposed method was verified with two sets of multiple satellite images, obtained in two experimental fields located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. Experimental results indicate that differences between surface reflectance datasets at multiple spatial scales can be effectively corrected over non-homogeneous underlying surfaces, providing a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and the corresponding consistency analysis and evaluation. PMID:25405760
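One simple reading of the correction step described above is Gaussian moment matching: each coarser-scale reflectance dataset is rescaled so that its mean and standard deviation match the fine-scale baseline. The sketch below illustrates only this simplified reading on synthetic data; the authors' full procedure involves more than this.

```python
# Simplified Gaussian moment-matching correction of a coarse-scale reflectance
# dataset to a fine-scale baseline. Arrays are synthetic, not satellite data.
import numpy as np

rng = np.random.default_rng(3)
baseline = rng.normal(0.23, 0.03, 10_000)   # fine-scale surface reflectance (baseline)
coarse   = rng.normal(0.27, 0.05, 2_500)    # coarser-scale dataset to be corrected

corrected = (coarse - coarse.mean()) / coarse.std() * baseline.std() + baseline.mean()

for name, x in [("baseline", baseline), ("coarse", coarse), ("corrected", corrected)]:
    print(f"{name:9s} mean = {x.mean():.3f}, std = {x.std():.3f}")
```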
Napolitano, Assunta; Akay, Seref; Mari, Angela; Bedir, Erdal; Pizza, Cosimo; Piacente, Sonia
2013-11-01
Astragalus species are widely used as health foods and dietary supplements, as well as drugs in traditional medicine. To rapidly evaluate metabolite similarities and differences among the EtOH extracts of the roots of eight commercial Astragalus spp., an approach based on direct analyses by ESI-MS followed by PCA of the ESI-MS data was carried out. Subsequently, quali-quantitative analyses of cycloartane derivatives in the eight Astragalus spp. by LC-ESI-MS(n) and PCA of the LC-ESI-MS data were performed. This approach made it possible to promptly highlight metabolite similarities and differences among the various Astragalus spp. PCA results from the LC-ESI-MS data of the Astragalus samples were in reasonable agreement with both the PCA results of the ESI-MS data and the quantitative results. This study affords an analytical method for the quali-quantitative determination of cycloartane derivatives in herbal preparations used as health and food supplements. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Niri, Mohammad Emami; Lumley, David E.
2017-10-01
Integration of 3D and time-lapse 4D seismic data into reservoir modelling and history matching processes poses a significant challenge due to the frequent mismatch between the initial reservoir model, the true reservoir geology, and the pre-production (baseline) seismic data. A fundamental step of a reservoir characterisation and performance study is the preconditioning of the initial reservoir model to equally honour both the geological knowledge and seismic data. In this paper we analyse the issues that have a significant impact on the (mis)match of the initial reservoir model with well logs and inverted 3D seismic data. These issues include the constraining methods for reservoir lithofacies modelling, the sensitivity of the results to the presence of realistic resolution and noise in the seismic data, the geostatistical modelling parameters, and the uncertainties associated with quantitative incorporation of inverted seismic data in reservoir lithofacies modelling. We demonstrate that in a geostatistical lithofacies simulation process, seismic constraining methods based on seismic litho-probability curves and seismic litho-probability cubes yield the best match to the reference model, even when realistic resolution and noise is included in the dataset. In addition, our analyses show that quantitative incorporation of inverted 3D seismic data in static reservoir modelling carries a range of uncertainties and should be cautiously applied in order to minimise the risk of misinterpretation. These uncertainties are due to the limited vertical resolution of the seismic data compared to the scale of the geological heterogeneities, the fundamental instability of the inverse problem, and the non-unique elastic properties of different lithofacies types.
Nielsen, Signe Smith; Hempler, Nana Folmann; Krasnik, Allan
2013-01-01
The relationship between migration and health is complex, yet, immigrant-related inequalities in health are largely influenced by socioeconomic position. Drawing upon previous findings, this paper discusses issues to consider when measuring and applying socioeconomic position in quantitative immigrant health research. When measuring socioeconomic position, it is important to be aware of four aspects: (1) there is a lack of clarity about how socioeconomic position should be measured; (2) different types of socioeconomic position may be relevant to immigrants compared with the native-born population; (3) choices of measures of socioeconomic position in quantitative analyses often rely on data availability; and (4) different measures of socioeconomic position have different effects in population groups. Therefore, caution should be used in the collection, presentation, analyses, and interpretation of data and researchers need to display their proposed conceptual models and data limitations as well as apply different approaches for analyses. PMID:24287857
Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J. S.; Tsui, Benjamin M. W.
2011-01-01
Purpose We assessed the quantitation accuracy of small animal pinhole single photon emission computed tomography (SPECT) under the current preclinical settings, where image compensations are not routinely applied. Procedures The effects of several common image-degrading factors and imaging parameters on quantitation accuracy were evaluated using Monte-Carlo simulation methods. Typical preclinical imaging configurations were modeled, and quantitative analyses were performed based on image reconstructions without compensating for attenuation, scatter, and limited system resolution. Results Using mouse-sized phantom studies as examples, attenuation effects alone degraded quantitation accuracy by up to −18% (Tc-99m or In-111) or −41% (I-125). The inclusion of scatter effects changed the above numbers to −12% (Tc-99m or In-111) and −21% (I-125), respectively, indicating the significance of scatter in quantitative I-125 imaging. Region-of-interest (ROI) definitions have greater impacts on regional quantitation accuracy for small sphere sources as compared to attenuation and scatter effects. For the same ROI, SPECT acquisitions using pinhole apertures of different sizes could significantly affect the outcome, whereas the use of different radii-of-rotation yielded negligible differences in quantitation accuracy for the imaging configurations simulated. Conclusions We have systematically quantified the influence of several factors affecting the quantitation accuracy of small animal pinhole SPECT. In order to consistently achieve accurate quantitation within 5% of the truth, comprehensive image compensation methods are needed. PMID:19048346
Assessing the risk posed by natural hazards to infrastructures
NASA Astrophysics Data System (ADS)
Eidsvig, Unni Marie K.; Kristensen, Krister; Vidar Vangelsten, Bjørn
2017-03-01
This paper proposes a model for assessing the risk posed by natural hazards to infrastructures, with a focus on the indirect losses and loss of stability for the population relying on the infrastructure. The model prescribes a three-level analysis with increasing level of detail, moving from qualitative to quantitative analysis. The focus is on a methodology for semi-quantitative analyses to be performed at the second level. The purpose of this type of analysis is to perform a screening of the scenarios of natural hazards threatening the infrastructures, identifying the most critical scenarios and investigating the need for further analyses (third level). The proposed semi-quantitative methodology considers the frequency of the natural hazard, different aspects of vulnerability, including the physical vulnerability of the infrastructure itself, and the societal dependency on the infrastructure. An indicator-based approach is applied, ranking the indicators on a relative scale according to pre-defined ranking criteria. The proposed indicators, which characterise conditions that influence the probability of an infrastructure malfunctioning caused by a natural event, are defined as (1) robustness and buffer capacity, (2) level of protection, (3) quality/level of maintenance and renewal, (4) adaptability and quality of operational procedures and (5) transparency/complexity/degree of coupling. Further indicators describe conditions influencing the socio-economic consequences of the infrastructure malfunctioning, such as (1) redundancy and/or substitution, (2) cascading effects and dependencies, (3) preparedness and (4) early warning, emergency response and measures. The aggregated risk estimate is a combination of the semi-quantitative vulnerability indicators, as well as quantitative estimates of the frequency of the natural hazard, the potential duration of the infrastructure malfunctioning (e.g. depending on the required restoration effort) and the number of users of the infrastructure. Case studies for two Norwegian municipalities are presented for demonstration purposes, where risk posed by adverse weather and natural hazards to primary road, water supply and power networks is assessed. The application examples show that the proposed model provides a useful tool for screening of potential undesirable events, contributing to a targeted reduction of the risk.
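Purely as an illustration of how ranked indicators can be rolled up into a relative risk score (the paper defines its own ranking criteria and aggregation, which are not reproduced here), the sketch below combines assumed indicator ranks with an assumed hazard frequency, restoration time and user count.

```python
# Illustrative indicator aggregation only; the paper's own ranking criteria and
# aggregation scheme are not reproduced here. All values are assumptions.
vulnerability_indicators = {       # ranked 1 (favourable) .. 5 (unfavourable)
    "robustness_and_buffer_capacity": 4,
    "level_of_protection": 3,
    "maintenance_and_renewal": 2,
    "operational_procedures": 3,
    "complexity_and_coupling": 4,
}
dependency_indicators = {
    "redundancy_or_substitution": 4,
    "cascading_effects": 3,
    "preparedness": 2,
    "early_warning_and_response": 3,
}

annual_frequency = 0.1            # hazard events per year for the scenario
restoration_days = 5              # expected duration of the malfunction
users = 20_000                    # population relying on the infrastructure

vuln = sum(vulnerability_indicators.values()) / (5 * len(vulnerability_indicators))
dep = sum(dependency_indicators.values()) / (5 * len(dependency_indicators))

# Relative risk score: expected user-days of lost service per year, scaled by
# the normalised vulnerability and dependency scores.
risk_score = annual_frequency * vuln * dep * restoration_days * users
print(f"vulnerability = {vuln:.2f}, dependency = {dep:.2f}, "
      f"risk score ~ {risk_score:,.0f} user-days/yr")
```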
Analysis of swimming performance: perceptions and practices of US-based swimming coaches.
Mooney, Robert; Corley, Gavin; Godfrey, Alan; Osborough, Conor; Newell, John; Quinlan, Leo Richard; ÓLaighin, Gearóid
2016-01-01
In elite swimming, a broad range of methods are used to assess performance, inform coaching practices and monitor athletic progression. The aim of this paper was to examine the performance analysis practices of swimming coaches and to explore the reasons behind the decisions that coaches take when analysing performance. Survey data were analysed from 298 Level 3 competitive swimming coaches (245 male, 53 female) based in the United States. Results were compiled to provide a generalised picture of practices and perceptions and to examine key emerging themes. It was found that a disparity exists between the importance swim coaches place on biomechanical analysis of swimming performance and the types of analyses that are actually conducted. Video-based methods are most frequently employed, with over 70% of coaches using these methods at least monthly, with analyses being mainly qualitative in nature rather than quantitative. Barriers to the more widespread use of quantitative biomechanical analysis in elite swimming environments were explored. Constraints include time, cost and availability of resources, but other factors such as sources of information on swimming performance and analysis and control over service provision are also discussed, with particular emphasis on video-based methods and emerging sensor-based technologies.
Tang, Min; Zhao, Rui; van de Velde, Helgi; Tross, Jennifer G; Mitsiades, Constantine; Viselli, Suzanne; Neuwirth, Rachel; Esseltine, Dixie-Lee; Anderson, Kenneth; Ghobrial, Irene M; San Miguel, Jesús F; Richardson, Paul G; Tomasson, Michael H; Michor, Franziska
2016-08-15
Since the pioneering work of Salmon and Durie, quantitative measures of tumor burden in multiple myeloma have been used to make clinical predictions and model tumor growth. However, such quantitative analyses have not yet been performed on large datasets from trials using modern chemotherapy regimens. We analyzed a large set of tumor response data from three randomized controlled trials of bortezomib-based chemotherapy regimens (total sample size n = 1,469 patients) to establish and validate a novel mathematical model of multiple myeloma cell dynamics. Treatment dynamics in newly diagnosed patients were most consistent with a model postulating two tumor cell subpopulations, "progenitor cells" and "differentiated cells." Differential treatment responses were observed with significant tumoricidal effects on differentiated cells and less clear effects on progenitor cells. We validated this model using a second trial of newly diagnosed patients and a third trial of refractory patients. When applying our model to data of relapsed patients, we found that a hybrid model incorporating both a differentiation hierarchy and clonal evolution best explains the response patterns. The clinical data, together with mathematical modeling, suggest that bortezomib-based therapy exerts a selection pressure on myeloma cells that can shape the disease phenotype, thereby generating further inter-patient variability. This model may be a useful tool for improving our understanding of disease biology and the response to chemotherapy regimens. Clin Cancer Res; 22(16); 4206-14. ©2016 AACR. ©2016 American Association for Cancer Research.
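The flavour of a two-subpopulation model can be sketched with a pair of ordinary differential equations: a progenitor compartment that is only weakly affected by therapy feeds a differentiated compartment that is strongly depleted. The rates and initial fractions below are illustrative assumptions and are not fitted to the trial data.

```python
# Schematic two-subpopulation dynamics under treatment (illustrative parameters,
# not fitted to the trials): progenitor cells P feed differentiated cells D.
import numpy as np
from scipy.integrate import solve_ivp

growth_p, diff_rate = 0.010, 0.05     # per day: progenitor growth, differentiation
kill_p, kill_d = 0.005, 0.12          # per day: drug kill rates on P and D
clear_d = 0.02                        # per day: natural turnover of D cells

def rhs(t, y):
    p, d = y
    dp = (growth_p - kill_p - diff_rate) * p
    dd = diff_rate * p - (clear_d + kill_d) * d
    return [dp, dd]

sol = solve_ivp(rhs, (0, 120), [0.05, 0.95], t_eval=np.arange(0, 121, 30))
for t, p, d in zip(sol.t, *sol.y):
    print(f"day {t:5.0f}: progenitor = {p:.3f}, differentiated = {d:.3f}, "
          f"total burden = {p + d:.3f}")
```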
Models of Mars' atmosphere (1974)
NASA Technical Reports Server (NTRS)
1974-01-01
Atmospheric models for support of design and mission planning of space vehicles that are to orbit the planet Mars, enter its atmosphere, or land on the surface are presented. Quantitative data for the Martian atmosphere were obtained from Earth-based observations and from spacecraft that have orbited Mars or passed within several planetary radii. These data were used in conjunction with existing theories of planetary atmospheres to predict other characteristics of the Martian atmosphere. Earth-based observations provided information on the composition, temperature, and optical properties of Mars with rather coarse spatial resolution, whereas spacecraft measurements yielded data on composition, temperature, pressure, density, and atmospheric structure with moderately good spatial resolution. The models provide the temperature, pressure, and density profiles required to perform basic aerodynamic analyses. The profiles are supplemented by computed values of viscosity, specific heat, and speed of sound.
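The profile supplements mentioned above (viscosity, specific heat, speed of sound) follow from ideal-gas relations once temperature, pressure and composition are fixed. As a hedged illustration only, the Python sketch below derives density and speed of sound for an assumed CO2-dominated atmosphere; the molar mass, ratio of specific heats and profile values are placeholder assumptions, not figures from the 1974 models.

```python
import numpy as np

# Illustrative only: ideal-gas density and speed of sound for an assumed
# CO2-dominated atmosphere. Constants and profile values are placeholders.
R_UNIV = 8.314        # J mol^-1 K^-1, universal gas constant
M_CO2 = 0.044         # kg mol^-1, assumed mean molar mass (~pure CO2)
GAMMA = 1.3           # assumed ratio of specific heats for CO2

def derived_profile(T, p):
    """Return density (kg m^-3) and speed of sound (m s^-1) from T (K) and p (Pa)."""
    R_spec = R_UNIV / M_CO2
    rho = p / (R_spec * T)            # ideal-gas law
    a = np.sqrt(GAMMA * R_spec * T)   # speed of sound
    return rho, a

T = np.array([210.0, 190.0, 170.0])   # K, illustrative profile points
p = np.array([600.0, 300.0, 100.0])   # Pa
rho, a = derived_profile(T, p)
print(rho, a)
```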
Using computer-based video analysis in the study of fidgety movements.
Adde, Lars; Helbostad, Jorunn L; Jensenius, Alexander Refsum; Taraldsen, Gunnar; Støen, Ragnhild
2009-09-01
Absence of fidgety movements (FM) in high-risk infants is a strong marker for later cerebral palsy (CP). FMs can be classified by the General Movement Assessment (GMA), based on Gestalt perception of the infant's movement pattern. More objective movement analysis may be provided by computer-based technology. The aim of this study was to explore the feasibility of a computer-based video analysis of infants' spontaneous movements in classifying non-fidgety versus fidgety movements. GMA was performed from video material of the fidgety period in 82 term and preterm infants at low and high risk of developing CP. The same videos were analysed using custom-developed software, the General Movement Toolbox (GMT), with visualisation of the infant's movements for qualitative analyses. Variables derived from the calculation of displacement of pixels from one video frame to the next were used for quantitative analyses. Visual representations from GMT showed easily recognisable patterns of FMs. Of the eight quantitative variables derived, the variability in displacement of a spatial centre of active pixels in the image had the highest sensitivity (81.5%) and specificity (70.0%) in classifying FMs. By setting triage thresholds at 90% sensitivity and specificity for FM, the need for further referral was reduced by 70%. Video recordings can be used for qualitative and quantitative analyses of FMs provided by GMT. GMT is easy to implement in clinical practice, and may provide assistance in detecting infants without FMs.
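The quantitative variables above are built from frame-to-frame pixel displacement. The sketch below is a loose, hypothetical analogue of one such variable, the variability of the centroid of "active" pixels obtained by simple frame differencing; the array `frames`, the threshold and the random data are assumptions for illustration and do not reproduce the GMT implementation.

```python
import numpy as np

# Hypothetical analogue of a centroid-variability feature: difference consecutive
# grey-scale frames, locate the centroid of changed pixels, and report how much
# that centroid moves between frames.
def centroid_variability(frames, threshold=10):
    centroids = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        diff = np.abs(curr.astype(float) - prev.astype(float))
        active = diff > threshold                 # pixels that changed noticeably
        if active.any():
            ys, xs = np.nonzero(active)
            centroids.append((xs.mean(), ys.mean()))
    centroids = np.array(centroids)
    step = np.diff(centroids, axis=0)             # centroid displacement per frame
    return np.linalg.norm(step, axis=1).std()

frames = np.random.randint(0, 255, size=(50, 64, 64))   # placeholder video data
print(centroid_variability(frames))
```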
Hertrampf, A; Sousa, R M; Menezes, J C; Herdling, T
2016-05-30
Quality control (QC) in the pharmaceutical industry is a key activity in ensuring medicines have the required quality, safety and efficacy for their intended use. QC departments at pharmaceutical companies are responsible for all release testing of final products but also all incoming raw materials. Near-infrared spectroscopy (NIRS) and Raman spectroscopy are important techniques for fast and accurate identification and qualification of pharmaceutical samples. Tablets containing two different active pharmaceutical ingredients (APIs), bisoprolol and hydrochlorothiazide, in different commercially available dosages were analysed using Raman and NIR spectroscopy. The goal was to define multivariate models based on each vibrational spectroscopy to discriminate between different dosages (identity) and predict their dosage (semi-quantitative). Furthermore, the combination of spectroscopic techniques was investigated. To this end, two different multiblock techniques based on PLS were applied: multiblock PLS (MB-PLS) and sequential-orthogonalised PLS (SO-PLS). NIRS showed better results than Raman spectroscopy for both identification and quantitation. The multiblock techniques investigated showed that each spectroscopy contains information not present or captured with the other spectroscopic technique, thus demonstrating that there is a potential benefit in their combined use for both identification and quantitation purposes. Copyright © 2016 Elsevier B.V. All rights reserved.
Web-based tools for modelling and analysis of multivariate data: California ozone pollution activity
Dinov, Ivo D.; Christou, Nicolas
2014-01-01
This article presents a hands-on web-based activity motivated by the relation between human health and ozone pollution in California. This case study is based on multivariate data collected monthly at 20 locations in California between 1980 and 2006. Several strategies and tools for data interrogation and exploratory data analysis, model fitting and statistical inference on these data are presented. All components of this case study (data, tools, activity) are freely available online at: http://wiki.stat.ucla.edu/socr/index.php/SOCR_MotionCharts_CAOzoneData. Several types of exploratory (motion charts, box-and-whisker plots, spider charts) and quantitative (inference, regression, analysis of variance (ANOVA)) data analyses tools are demonstrated. Two specific human health related questions (temporal and geographic effects of ozone pollution) are discussed as motivational challenges. PMID:24465054
Dinov, Ivo D; Christou, Nicolas
2011-09-01
This article presents a hands-on web-based activity motivated by the relation between human health and ozone pollution in California. This case study is based on multivariate data collected monthly at 20 locations in California between 1980 and 2006. Several strategies and tools for data interrogation and exploratory data analysis, model fitting and statistical inference on these data are presented. All components of this case study (data, tools, activity) are freely available online at: http://wiki.stat.ucla.edu/socr/index.php/SOCR_MotionCharts_CAOzoneData. Several types of exploratory (motion charts, box-and-whisker plots, spider charts) and quantitative (inference, regression, analysis of variance (ANOVA)) data analyses tools are demonstrated. Two specific human health related questions (temporal and geographic effects of ozone pollution) are discussed as motivational challenges.
Evaluating planetary digital terrain models-The HRSC DTM test
Heipke, C.; Oberst, J.; Albertz, J.; Attwenger, M.; Dorninger, P.; Dorrer, E.; Ewe, M.; Gehrke, S.; Gwinner, K.; Hirschmuller, H.; Kim, J.R.; Kirk, R.L.; Mayer, H.; Muller, Jan-Peter; Rengarajan, R.; Rentsch, M.; Schmidt, R.; Scholten, F.; Shan, J.; Spiegel, M.; Wahlisch, M.; Neukum, G.
2007-01-01
The High Resolution Stereo Camera (HRSC) has been orbiting the planet Mars since January 2004 onboard the European Space Agency (ESA) Mars Express mission and delivers imagery which is being used for topographic mapping of the planet. The HRSC team has conducted a systematic inter-comparison of different alternatives for the production of high-resolution digital terrain models (DTMs) from the multi-look HRSC push-broom imagery. Based on carefully chosen test sites, the test participants have produced DTMs which have been subsequently analysed in a quantitative and a qualitative manner. This paper reports on the results obtained in this test. © 2007 Elsevier Ltd. All rights reserved.
Detection of regional air pollution episodes utilizing satellite digital data in the visual range
NASA Technical Reports Server (NTRS)
Burke, H.-H. K.
1982-01-01
Digital analyses of satellite visible data for selected high-sulfate cases over the northeastern U.S., on July 21 and 22, 1978, are compared with ground-based measurements. Quantitative information on total aerosol loading derived from the satellite digitized data using an atmospheric radiative transfer model is found to agree with the ground measurements, and it is shown that the extent and transport of the haze pattern may be monitored from the satellite data over the period of maximum intensity for the episode. Attention is drawn to the potential benefits of satellite monitoring of pollution episodes demonstrated by the model.
Quantitative measures for redox signaling.
Pillay, Ché S; Eagling, Beatrice D; Driscoll, Scott R E; Rohwer, Johann M
2016-07-01
Redox signaling is now recognized as an important regulatory mechanism for a number of cellular processes including the antioxidant response, phosphokinase signal transduction and redox metabolism. While there has been considerable progress in identifying the cellular machinery involved in redox signaling, quantitative measures of redox signals have been lacking, limiting efforts aimed at understanding and comparing redox signaling under normoxic and pathogenic conditions. Here we have outlined some of the accepted principles for redox signaling, including the description of hydrogen peroxide as a signaling molecule and the role of kinetics in conferring specificity to these signaling events. Based on these principles, we then develop a working definition for redox signaling and review a number of quantitative methods that have been employed to describe signaling in other systems. Using computational modeling and published data, we show how time- and concentration-dependent analyses, in particular, could be used to quantitatively describe redox signaling and therefore provide important insights into the functional organization of redox networks. Finally, we consider some of the key challenges with implementing these methods. Copyright © 2016 Elsevier Inc. All rights reserved.
Tissue Sampling Guides for Porcine Biomedical Models.
Albl, Barbara; Haesner, Serena; Braun-Reichhart, Christina; Streckel, Elisabeth; Renner, Simone; Seeliger, Frank; Wolf, Eckhard; Wanke, Rüdiger; Blutke, Andreas
2016-04-01
This article provides guidelines for organ and tissue sampling adapted to porcine animal models in translational medical research. Detailed protocols for the determination of sampling locations and numbers as well as recommendations on the orientation, size, and trimming direction of samples from ∼50 different porcine organs and tissues are provided in the Supplementary Material. The proposed sampling protocols include the generation of samples suitable for subsequent qualitative and quantitative analyses, including cryohistology, paraffin, and plastic histology; immunohistochemistry; in situ hybridization; electron microscopy; and quantitative stereology as well as molecular analyses of DNA, RNA, proteins, metabolites, and electrolytes. With regard to the planned extent of sampling efforts, time, and personnel expenses, and dependent upon the scheduled analyses, different protocols are provided. These protocols are adjusted for (I) routine screenings, as used in general toxicity studies or in analyses of gene expression patterns or histopathological organ alterations, (II) advanced analyses of single organs/tissues, and (III) large-scale sampling procedures to be applied in biobank projects. Providing a robust reference for studies of porcine models, the described protocols will ensure the efficiency of sampling, the systematic recovery of high-quality samples representing the entire organ or tissue as well as the intra-/interstudy comparability and reproducibility of results. © The Author(s) 2016.
USING MICROSOFT OFFICE EXCEL® 2007 TO CONDUCT GENERALIZED MATCHING ANALYSES
Reed, Derek D
2009-01-01
The generalized matching equation is a robust and empirically supported means of analyzing relations between reinforcement and behavior. Unfortunately, no simple task analysis is available to behavior analysts interested in using the matching equation to evaluate data in clinical or applied settings. This technical article presents a task analysis for the use of Microsoft Excel to analyze and plot the generalized matching equation. Using a data-based case example and a step-by-step guide for completing the analysis, these instructions are intended to promote the use of quantitative analyses by researchers with little to no experience in quantitative analyses or the matching law. PMID:20514196
Using Microsoft Office Excel 2007 to conduct generalized matching analyses.
Reed, Derek D
2009-01-01
The generalized matching equation is a robust and empirically supported means of analyzing relations between reinforcement and behavior. Unfortunately, no simple task analysis is available to behavior analysts interested in using the matching equation to evaluate data in clinical or applied settings. This technical article presents a task analysis for the use of Microsoft Excel to analyze and plot the generalized matching equation. Using a data-based case example and a step-by-step guide for completing the analysis, these instructions are intended to promote the use of quantitative analyses by researchers with little to no experience in quantitative analyses or the matching law.
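Both records above describe a spreadsheet task analysis, but the underlying computation is a linear regression of log response ratios on log reinforcer ratios, log(B1/B2) = a*log(R1/R2) + log b. A minimal Python equivalent is sketched below with invented data; it mirrors the form of the analysis rather than the article's Excel procedure.

```python
import numpy as np

# Generalized matching analysis on illustrative data: slope = sensitivity (a),
# intercept = log bias (log b).
R1 = np.array([10, 20, 40, 60, 80])    # reinforcers delivered for alternative 1
R2 = np.array([80, 60, 40, 20, 10])    # reinforcers delivered for alternative 2
B1 = np.array([12, 25, 38, 55, 75])    # responses to alternative 1
B2 = np.array([70, 58, 42, 22, 12])    # responses to alternative 2

x = np.log10(R1 / R2)
y = np.log10(B1 / B2)
a, log_b = np.polyfit(x, y, 1)
r2 = np.corrcoef(x, y)[0, 1] ** 2
print(f"sensitivity a = {a:.2f}, bias log b = {log_b:.2f}, R^2 = {r2:.2f}")
```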
Review of the Primary National Ambient Air Quality Standards ...
The U.S. Environmental Protection Agency (EPA) is conducting a review of the air quality criteria and the primary (health-based) national ambient air quality standards (NAAQS) for nitrogen dioxide (NO2). The major phases of the process for reviewing NAAQS include the following: (1) planning, (2) science assessment, (3) risk and exposure assessment, and (4) policy assessment. As an initial step in the risk and exposure assessment phase, EPA staff has considered the extent to which updated quantitative analyses of NO2 exposures and/or NO2-attributable health risks are warranted in the current review, based on the available scientific evidence and technical information. These considerations focus on the degree to which important uncertainties identified in quantitative analyses from the last review have been addressed by newly available evidence, tools, or information. The purpose of the REA planning document is to present staff's considerations and preliminary conclusions regarding potential updated quantitative analyses in the current review of the primary NO2 NAAQS, and to provide an opportunity for CASAC feedback on EPA's plans for the risk and exposure assessment in the Nitrogen Oxides NAAQS review.
Quantifying patterns of research interest evolution
NASA Astrophysics Data System (ADS)
Jia, Tao; Wang, Dashun; Szymanski, Boleslaw
Changing and shifting research interest is an integral part of a scientific career. Despite extensive investigations of various factors that influence a scientist's choice of research topics, quantitative assessments of mechanisms that give rise to macroscopic patterns characterizing research interest evolution of individual scientists remain limited. Here we perform a large-scale analysis of extensive publication records, finding that research interest change follows a reproducible pattern characterized by an exponential distribution. We identify three fundamental features responsible for the observed exponential distribution, which arise from a subtle interplay between exploitation and exploration in research interest evolution. We develop a random walk based model, which adequately reproduces our empirical observations. Our study presents one of the first quantitative analyses of macroscopic patterns governing research interest change, documenting a high degree of regularity underlying scientific research and individual careers.
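As a hedged illustration of the distributional claim above, the sketch below fits an exponential rate to a synthetic sample of interest-change values using the maximum-likelihood estimator; the data, scale and the notion of "interest change" as a single number are assumptions, not the study's measure.

```python
import numpy as np

# Fit an exponential distribution to synthetic "interest change" values via the
# maximum-likelihood rate estimate (illustrative data only).
rng = np.random.default_rng(42)
changes = rng.exponential(scale=0.3, size=5000)   # placeholder sample

lam_hat = 1.0 / changes.mean()                    # MLE of the exponential rate
print(f"estimated rate = {lam_hat:.2f} (true rate used to simulate: {1/0.3:.2f})")
```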
Effects of Various Architectural Parameters on Six Room Acoustical Measures in Auditoria.
NASA Astrophysics Data System (ADS)
Chiang, Wei-Hwa
The effects of architectural parameters on six room acoustical measures were investigated by means of correlation analyses, factor analyses and multiple regression analyses based on data taken in twenty halls. Architectural parameters were used to estimate acoustical measures taken at individual locations within each room as well as the averages and standard deviations of all measured values in the rooms. The six acoustical measures were Early Decay Time (EDT10), Clarity Index (C80), Overall Level (G), Bass Ratio based on Early Decay Time (BR(EDT)), Treble Ratio based on Early Decay Time (TR(EDT)), and Early Inter-aural Cross Correlation (IACC80). A comprehensive method of quantifying various architectural characteristics of rooms was developed to define a large number of architectural parameters that were hypothesized to affect the acoustical measurements made in the rooms. This study quantitatively confirmed many of the principles used in the design of concert halls and auditoria. Three groups of room architectural parameters, such as those associated with the depth of diffusing surfaces, were significantly correlated with the hall standard deviations of most of the acoustical measures. Significant differences in the statistical relations among architectural parameters and receiver-specific acoustical measures were found between a group of music halls and a group of lecture halls. For example, architectural parameters such as the relative distance from the receiver to the overhead ceiling increased the percentage of the variance of acoustical measures that was explained by Barron's revised theory from approximately 70% to 80% only when data were taken in the group of music halls. This study revealed the major architectural parameters which have strong relations with individual acoustical measures, forming the basis for a more quantitative method for advancing the theoretical design of concert halls and other auditoria. The results of this study provide designers with the information to predict acoustical measures in buildings at very early stages of the design process without using computer models or scale models.
Enterococci are frequently monitored in water samples as indicators of fecal pollution. Attention is now shifting from culture-based methods for enumerating these organisms to more rapid molecular methods such as QPCR. Accurate quantitative analyses by this method require highly...
Tiwari, Anjani K; Ojha, Himanshu; Kaul, Ankur; Dutta, Anupama; Srivastava, Pooja; Shukla, Gauri; Srivastava, Rakesh; Mishra, Anil K
2009-07-01
Nuclear magnetic resonance imaging is a very useful tool in modern medical diagnostics, especially when gadolinium(III)-based contrast agents are administered to the patient with the aim of increasing the image contrast between normal and diseased tissues. Using soft modelling techniques such as quantitative structure-activity relationship/quantitative structure-property relationship (QSAR/QSPR) analysis after a suitable description of their molecular structure, we have studied a series of phosphonic acids for the design of new MRI contrast agents. QSPR studies with multiple linear regression analysis were applied to find correlations between different calculated molecular descriptors of the phosphonic acid-based chelating agents and their stability constants. The final QSPR models were: Model 1 (phosphonic acid series): log K(ML) = 5.00243(±0.7102) − 0.0263(±0.540)·MR, with n = 12, |r| = 0.942, s = 0.183, F = 99.165; Model 2 (phosphonic acid series): log K(ML) = 5.06280(±0.3418) − 0.0252(±0.198)·MR, with n = 12, |r| = 0.956, s = 0.186, F = 99.256.
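A single-descriptor regression of the kind reported above takes only a few lines. The sketch below regresses hypothetical stability constants on molar refractivity (MR) and reports the usual statistics (slope, intercept, |r|, s); the numerical values are placeholders, not the published dataset.

```python
import numpy as np

# Single-descriptor QSPR-style regression on placeholder data.
MR     = np.array([20.1, 25.4, 30.2, 35.8, 40.3, 45.1])   # molar refractivity
logKML = np.array([4.48, 4.35, 4.22, 4.09, 3.98, 3.86])   # stability constants

slope, intercept = np.polyfit(MR, logKML, 1)
residuals = logKML - (intercept + slope * MR)
s = np.sqrt(np.sum(residuals**2) / (len(MR) - 2))          # standard error of estimate
r = np.corrcoef(MR, logKML)[0, 1]
print(f"logK_ML = {intercept:.3f} + ({slope:.4f})*MR, |r| = {abs(r):.3f}, s = {s:.3f}")
```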
Determination of nutritional parameters of yoghurts by FT Raman spectroscopy
NASA Astrophysics Data System (ADS)
Czaja, Tomasz; Baranowska, Maria; Mazurek, Sylwester; Szostak, Roman
2018-05-01
FT-Raman quantitative analysis of nutritional parameters of yoghurts was performed with the help of partial least squares models. The relative standard errors of prediction for fat, lactose and protein determination in the quantified commercial samples equalled 3.9, 3.2 and 3.6%, respectively. Models based on attenuated total reflectance spectra of the liquid yoghurt samples and of dried yoghurt films collected with the single reflection diamond accessory showed relative standard errors of prediction of 1.6-5.0% and 2.7-5.2%, respectively, for the analysed components. Despite a relatively low signal-to-noise ratio in the obtained spectra, Raman spectroscopy, combined with chemometrics, constitutes a fast and powerful tool for macronutrient quantification in yoghurts. Errors obtained for the attenuated total reflectance method were somewhat higher than those for Raman spectroscopy, owing to the inhomogeneity of the analysed samples.
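The figure of merit quoted above, the relative standard error of prediction (RSEP), is easy to compute once a PLS model exists. The sketch below uses scikit-learn's PLSRegression on synthetic spectra; the matrix shapes, number of latent variables and concentration range are assumptions for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Build a PLS calibration on synthetic spectra and report RSEP (%) on a test set.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 200))         # 40 training spectra, 200 spectral points
y_train = rng.uniform(2.0, 5.0, size=40)     # e.g. fat content, g/100 g (placeholder)
X_test = rng.normal(size=(10, 200))
y_test = rng.uniform(2.0, 5.0, size=10)

pls = PLSRegression(n_components=5).fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()
rsep = 100 * np.sqrt(np.sum((y_pred - y_test) ** 2) / np.sum(y_test ** 2))
print(f"RSEP = {rsep:.1f}%")
```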
De Benedetti, Pier G; Fanelli, Francesca
2018-03-21
Simple comparative correlation analyses and quantitative structure-kinetics relationship (QSKR) models highlight the interplay of kinetic rates and binding affinity as an essential feature in drug design and discovery. The choice of the molecular series, and their structural variations, used in QSKR modeling is fundamental to understanding the mechanistic implications of ligand and/or drug-target binding and/or unbinding processes. Here, we discuss the implications of linear correlations between kinetic rates and binding affinity constants and the relevance of the computational approaches to QSKR modeling. Copyright © 2018 Elsevier Ltd. All rights reserved.
Analyser-based phase contrast image reconstruction using geometrical optics.
Kitchen, M J; Pavlov, K M; Siu, K K W; Menk, R H; Tromba, G; Lewis, R A
2007-07-21
Analyser-based phase contrast imaging can provide radiographs of exceptional contrast at high resolution (<100 microm), whilst quantitative phase and attenuation information can be extracted using just two images when the approximations of geometrical optics are satisfied. Analytical phase retrieval can be performed by fitting the analyser rocking curve with a symmetric Pearson type VII function. The Pearson VII function provided at least a 10% better fit to experimentally measured rocking curves than linear or Gaussian functions. A test phantom, a hollow nylon cylinder, was imaged at 20 keV using a Si(1 1 1) analyser at the ELETTRA synchrotron radiation facility. Our phase retrieval method yielded a more accurate object reconstruction than methods based on a linear fit to the rocking curve. Where reconstructions failed to map expected values, calculations of the Takagi number permitted distinction between the violation of the geometrical optics conditions and the failure of curve fitting procedures. The need for synchronized object/detector translation stages was removed by using a large, divergent beam and imaging the object in segments. Our image acquisition and reconstruction procedure enables quantitative phase retrieval for systems with a divergent source and accounts for imperfections in the analyser.
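The analytical phase-retrieval step above rests on fitting a symmetric Pearson type VII function to the analyser rocking curve. A minimal curve-fitting sketch with a synthetic rocking curve follows; the functional form is the standard Pearson VII profile and all numerical values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a symmetric Pearson type VII profile to a synthetic rocking curve.
def pearson_vii(theta, amp, theta0, w, m):
    return amp / (1.0 + ((theta - theta0) / w) ** 2 * (2 ** (1.0 / m) - 1.0)) ** m

theta = np.linspace(-20, 20, 201)                         # angular offset, illustrative units
true = pearson_vii(theta, 1.0, 0.5, 5.0, 1.8)
measured = true + np.random.normal(0, 0.01, theta.size)   # add measurement noise

popt, _ = curve_fit(pearson_vii, theta, measured, p0=[1.0, 0.0, 4.0, 1.5])
print("amplitude, centre, width, shape m:", popt)
```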
Application of Petri net based analysis techniques to signal transduction pathways.
Sackmann, Andrea; Heiner, Monika; Koch, Ina
2006-11-02
Signal transduction pathways are usually modelled using classical quantitative methods, which are based on ordinary differential equations (ODEs). However, some difficulties are inherent in this approach. On the one hand, the kinetic parameters involved are often unknown and have to be estimated. With increasing size and complexity of signal transduction pathways, the estimation of missing kinetic data is not possible. On the other hand, ODEs based models do not support any explicit insights into possible (signal-) flows within the network. Moreover, a huge amount of qualitative data is available due to high-throughput techniques. In order to get information on the systems behaviour, qualitative analysis techniques have been developed. Applications of the known qualitative analysis methods concern mainly metabolic networks. Petri net theory provides a variety of established analysis techniques, which are also applicable to signal transduction models. In this context special properties have to be considered and new dedicated techniques have to be designed. We apply Petri net theory to model and analyse signal transduction pathways first qualitatively before continuing with quantitative analyses. This paper demonstrates how to build systematically a discrete model, which reflects provably the qualitative biological behaviour without any knowledge of kinetic parameters. The mating pheromone response pathway in Saccharomyces cerevisiae serves as case study. We propose an approach for model validation of signal transduction pathways based on the network structure only. For this purpose, we introduce the new notion of feasible t-invariants, which represent minimal self-contained subnets being active under a given input situation. Each of these subnets stands for a signal flow in the system. We define maximal common transition sets (MCT-sets), which can be used for t-invariant examination and net decomposition into smallest biologically meaningful functional units. The paper demonstrates how Petri net analysis techniques can promote a deeper understanding of signal transduction pathways. The new concepts of feasible t-invariants and MCT-sets have been proven to be useful for model validation and the interpretation of the biological system behaviour. Whereas MCT-sets provide a decomposition of the net into disjunctive subnets, feasible t-invariants describe subnets, which generally overlap. This work contributes to qualitative modelling and to the analysis of large biological networks by their fully automatic decomposition into biologically meaningful modules.
Application of Petri net based analysis techniques to signal transduction pathways
Sackmann, Andrea; Heiner, Monika; Koch, Ina
2006-01-01
Background Signal transduction pathways are usually modelled using classical quantitative methods, which are based on ordinary differential equations (ODEs). However, some difficulties are inherent in this approach. On the one hand, the kinetic parameters involved are often unknown and have to be estimated. With increasing size and complexity of signal transduction pathways, the estimation of missing kinetic data is not possible. On the other hand, ODEs based models do not support any explicit insights into possible (signal-) flows within the network. Moreover, a huge amount of qualitative data is available due to high-throughput techniques. In order to get information on the systems behaviour, qualitative analysis techniques have been developed. Applications of the known qualitative analysis methods concern mainly metabolic networks. Petri net theory provides a variety of established analysis techniques, which are also applicable to signal transduction models. In this context special properties have to be considered and new dedicated techniques have to be designed. Methods We apply Petri net theory to model and analyse signal transduction pathways first qualitatively before continuing with quantitative analyses. This paper demonstrates how to build systematically a discrete model, which reflects provably the qualitative biological behaviour without any knowledge of kinetic parameters. The mating pheromone response pathway in Saccharomyces cerevisiae serves as case study. Results We propose an approach for model validation of signal transduction pathways based on the network structure only. For this purpose, we introduce the new notion of feasible t-invariants, which represent minimal self-contained subnets being active under a given input situation. Each of these subnets stands for a signal flow in the system. We define maximal common transition sets (MCT-sets), which can be used for t-invariant examination and net decomposition into smallest biologically meaningful functional units. Conclusion The paper demonstrates how Petri net analysis techniques can promote a deeper understanding of signal transduction pathways. The new concepts of feasible t-invariants and MCT-sets have been proven to be useful for model validation and the interpretation of the biological system behaviour. Whereas MCT-sets provide a decomposition of the net into disjunctive subnets, feasible t-invariants describe subnets, which generally overlap. This work contributes to qualitative modelling and to the analysis of large biological networks by their fully automatic decomposition into biologically meaningful modules. PMID:17081284
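Algebraically, a t-invariant is a non-negative integer vector x with C·x = 0, where C is the place-by-transition incidence matrix; the feasible t-invariants and MCT-sets of the paper add further structure that is not modelled here. The toy sketch below only illustrates the null-space computation on a two-place, two-transition cycle and is not the authors' decomposition algorithm.

```python
import numpy as np
from sympy import Matrix

# Toy illustration of the algebraic core of t-invariant analysis (not the full
# minimal-invariant algorithm). The two-place / two-transition cyclic net is
# purely illustrative.
C = Matrix([[-1,  1],    # place p1: consumed by t1, produced by t2
            [ 1, -1]])   # place p2: produced by t1, consumed by t2

invariants = []
for v in C.nullspace():
    vec = np.array(v.tolist(), dtype=float).ravel()
    if np.all(vec >= 0) or np.all(vec <= 0):      # keep sign-consistent solutions
        invariants.append(np.abs(vec))

print(invariants)   # [array([1., 1.])]: firing t1 and t2 once each reproduces the marking
```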
Lover, Andrew A; Coker, Richard J
2014-05-01
Infections with the malaria parasite Plasmodium vivax are noteworthy for potentially very long incubation periods (6-9 months), which present a major barrier to disease elimination. Increased sporozoite challenge has been reported to be associated with both shorter incubation and pre-patent periods in a range of human challenge studies. However, this evidence base has scant empirical foundation, as these historical analyses were limited by available analytic methods, and it provides no quantitative estimates of effect size. Following a comprehensive literature search, we re-analysed all identified studies using survival and/or logistic models plus contingency tables. We have found very weak evidence for dose-dependence at entomologically plausible inocula levels. These results strongly suggest that sporozoite dosage is not an important driver of long-latency. The evidence presented suggests that parasite strain and vector species have quantitatively greater impacts, and points to the possible existence of a dose threshold in the human dose-response to sporozoites. Greater consideration of the complex interplay between these aspects of vectors and parasites is important for human challenge experiments, vaccine trials, and epidemiology towards global malaria elimination.
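A logistic re-analysis of the kind described above can be framed as a regression of a binary latency outcome on log sporozoite dose. The sketch below uses invented dose and outcome values purely to show the model structure; it does not reproduce the historical challenge data or the survival models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical dose-response data: binary outcome (short vs long pre-patent
# period) regressed on log10 sporozoite dose.
dose = np.array([1, 2, 5, 10, 20, 50, 100, 200, 500, 1000], dtype=float)
short_latency = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])   # invented outcomes

X = np.log10(dose).reshape(-1, 1)
model = LogisticRegression().fit(X, short_latency)
print("log10-dose coefficient:", model.coef_[0][0])
print("P(short latency) at dose 100:", model.predict_proba(np.log10([[100.0]]))[0, 1])
```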
Purnell, Mark; Seehausen, Ole; Galis, Frietson
2012-01-01
Resource polymorphisms and competition for resources are significant factors in speciation. Many examples come from fishes, and cichlids are of particular importance because of their role as model organisms at the interface of ecology, development, genetics and evolution. However, analysis of trophic resource use in fishes can be difficult and time-consuming, and for fossil fish species it is particularly problematic. Here, we present evidence from cichlids that analysis of tooth microwear based on high-resolution (sub-micrometre scale) three-dimensional data and new ISO standards for quantification of surface textures provides a powerful tool for dietary discrimination and investigation of trophic resource exploitation. Our results suggest that three-dimensional approaches to analysis offer significant advantages over two-dimensional operator-scored methods of microwear analysis, including applicability to rough tooth surfaces that lack distinct scratches and pits. Tooth microwear textures develop over a longer period of time than is represented by stomach contents, and analyses based on textures are less prone to biases introduced by opportunistic feeding. They are more sensitive to subtle dietary differences than isotopic analysis. Quantitative textural analysis of tooth microwear has a useful role to play, complementing existing approaches, in trophic analysis of fishes—both extant and extinct. PMID:22491979
Computational simulation of extravehicular activity dynamics during a satellite capture attempt.
Schaffner, G; Newman, D J; Robinson, S K
2000-01-01
A more quantitative approach to the analysis of astronaut extravehicular activity (EVA) tasks is needed because of their increasing complexity, particularly in preparation for the on-orbit assembly of the International Space Station. Existing useful EVA computer analyses produce either high-resolution three-dimensional computer images based on anthropometric representations or empirically derived predictions of astronaut strength based on lean body mass and the position and velocity of body joints but do not provide multibody dynamic analysis of EVA tasks. Our physics-based methodology helps fill the current gap in quantitative analysis of astronaut EVA by providing a multisegment human model and solving the equations of motion in a high-fidelity simulation of the system dynamics. The simulation work described here improves on the realism of previous efforts by including three-dimensional astronaut motion, incorporating joint stops to account for the physiological limits of range of motion, and incorporating use of constraint forces to model interaction with objects. To demonstrate the utility of this approach, the simulation is modeled on an actual EVA task, namely, the attempted capture of a spinning Intelsat VI satellite during STS-49 in May 1992. Repeated capture attempts by an EVA crewmember were unsuccessful because the capture bar could not be held in contact with the satellite long enough for the capture latches to fire and successfully retrieve the satellite.
NASA Astrophysics Data System (ADS)
Bandrowski, D.; Lai, Y.; Bradley, N.; Gaeuman, D. A.; Murauskas, J.; Som, N. A.; Martin, A.; Goodman, D.; Alvarez, J.
2014-12-01
In the field of river restoration sciences there is a growing need for analytical modeling tools and quantitative processes to help identify and prioritize project sites. 2D hydraulic models have become more common in recent years and with the availability of robust data sets and computing technology, it is now possible to evaluate large river systems at the reach scale. The Trinity River Restoration Program is now analyzing a 40 mile segment of the Trinity River to determine priority and implementation sequencing for its Phase II rehabilitation projects. A comprehensive approach and quantitative tool has recently been developed to analyze this complex river system, referred to as 2D-Hydrodynamic Based Logic Modeling (2D-HBLM). This tool utilizes various hydraulic output parameters combined with biological, ecological, and physical metrics at user-defined spatial scales. These metrics and their associated algorithms are the underpinnings of the 2D-HBLM habitat module used to evaluate geomorphic characteristics, riverine processes, and habitat complexity. The habitat metrics are further integrated into a comprehensive Logic Model framework to perform statistical analyses to assess project prioritization. The Logic Model will analyze various potential project sites by evaluating connectivity using principal component methods. The 2D-HBLM tool will help inform management and decision makers by using a quantitative process to optimize desired response variables while balancing important limiting factors when determining the highest priority locations within the river corridor to implement restoration projects. Effective river restoration prioritization starts with well-crafted goals that identify the biological objectives, address underlying causes of habitat change, and recognize that social, economic, and land use limiting factors may constrain restoration options (Beechie et al. 2008). Applying natural resource management actions, like restoration prioritization, is essential for successful project implementation (Conroy and Peterson, 2013). Evaluating tradeoffs and examining alternatives to improve fish habitat through optimization modeling is not just a trend but rather a scientific strategy that management needs to embrace and apply in its decision framework.
Crovelli, R.A.
1988-01-01
The geologic appraisal model that is selected for a petroleum resource assessment depends upon purpose of the assessment, basic geologic assumptions of the area, type of available data, time available before deadlines, available human and financial resources, available computer facilities, and, most importantly, the available quantitative methodology with corresponding computer software and any new quantitative methodology that would have to be developed. Therefore, different resource assessment projects usually require different geologic models. Also, more than one geologic model might be needed in a single project for assessing different regions of the study or for cross-checking resource estimates of the area. Some geologic analyses used in the past for petroleum resource appraisal involved play analysis. The corresponding quantitative methodologies of these analyses usually consisted of Monte Carlo simulation techniques. A probabilistic system of petroleum resource appraisal for play analysis has been designed to meet the following requirements: (1) includes a variety of geologic models, (2) uses an analytic methodology instead of Monte Carlo simulation, (3) possesses the capacity to aggregate estimates from many areas that have been assessed by different geologic models, and (4) runs quickly on a microcomputer. Geologic models consist of four basic types: reservoir engineering, volumetric yield, field size, and direct assessment. Several case histories and present studies by the U.S. Geological Survey are discussed. ?? 1988 International Association for Mathematical Geology.
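The contrast drawn above between Monte Carlo simulation and an analytic methodology can be made concrete with a toy play-analysis aggregation: total resource as the sum of a random number of lognormally distributed field sizes. All distribution parameters below are hypothetical, and the closed-form line uses E[total] = E[N]·E[size].

```python
import numpy as np

# Toy play-analysis aggregation: compare a Monte Carlo estimate of the mean total
# resource with its closed-form (analytic) counterpart. Parameters are invented.
rng = np.random.default_rng(1)
n_fields_mean = 12                       # expected number of fields in the play
mu, sigma = np.log(20.0), 0.8            # lognormal parameters of field size

# Monte Carlo estimate
trials = 20_000
n_fields = rng.poisson(n_fields_mean, size=trials)
totals = np.array([rng.lognormal(mu, sigma, size=n).sum() for n in n_fields])
print("Monte Carlo mean total:", totals.mean())

# Analytic estimate: E[total] = E[N] * E[size], with E[size] = exp(mu + sigma^2/2)
print("Analytic mean total:  ", n_fields_mean * np.exp(mu + sigma**2 / 2))
```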
ERIC Educational Resources Information Center
Mulford, Bill; Silins, Halia
2011-01-01
Purpose: This study aims to present revised models and a reconceptualisation of successful school principalship for improved student outcomes. Design/methodology/approach: The study's approach is qualitative and quantitative, culminating in model building and multi-level statistical analyses. Findings: Principals who promote both capacity building…
Matos-Maraví, Pável
2016-07-01
Different diversification scenarios have been proposed to explain the origin of extant biodiversity. However, most existing meta-analyses of time-calibrated phylogenies rely on approaches that do not quantitatively test alternative diversification processes. Here, I highlight the shortcomings of using species divergence ranks, which is a method widely used in meta-analyses. Divergence ranks consist of categorizing cladogenetic events to certain periods of time, typically to either Pleistocene or to pre-Pleistocene ages. This approach has been claimed to shed light on the origin of most extant species and the timing and dynamics of diversification in any biogeographical region. However, interpretations drawn from such a method often confound two fundamental questions in macroevolutionary studies, tempo (timing of evolutionary rate shifts) and mode ("how" and "why" of speciation). By using simulated phylogenies under four diversification scenarios, constant-rate, diversity-dependence, high extinction, and high speciation rates in the Pleistocene, I showed that interpretations based on species divergence ranks might have been seriously misleading. Future meta-analyses of dated phylogenies need to be aware of the impacts of incomplete taxonomic sampling, tree topology, and divergence time uncertainties, and might benefit from including quantitative tests of alternative diversification models that acknowledge extinction and diversity dependence. © 2016 The Author(s).
ERIC Educational Resources Information Center
Linn, Robert L.
The New Standards Project conducted a pilot test of a series of performance-based assessment tasks in mathematics and English language arts at Grades 4 and 8 in the spring of 1993. This paper reports the results of a series of generalizability analyses conducted for a subset of the 1993 pilot study data in mathematics. Generalizability analyses…
Approaches to developing alternative and predictive toxicology based on PBPK/PD and QSAR modeling.
Yang, R S; Thomas, R S; Gustafson, D L; Campain, J; Benjamin, S A; Verhaar, H J; Mumtaz, M M
1998-01-01
Systematic toxicity testing, using conventional toxicology methodologies, of single chemicals and chemical mixtures is highly impractical because of the immense numbers of chemicals and chemical mixtures involved and the limited scientific resources. Therefore, the development of unconventional, efficient, and predictive toxicology methods is imperative. Using carcinogenicity as an end point, we present approaches for developing predictive tools for toxicologic evaluation of chemicals and chemical mixtures relevant to environmental contamination. Central to the approaches presented is the integration of physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) and quantitative structure-activity relationship (QSAR) modeling with focused mechanistically based experimental toxicology. In this development, molecular and cellular biomarkers critical to the carcinogenesis process are evaluated quantitatively between different chemicals and/or chemical mixtures. Examples presented include the integration of PBPK/PD and QSAR modeling with a time-course medium-term liver foci assay, molecular biology and cell proliferation studies, Fourier transform infrared spectroscopic analyses of DNA changes, and cancer modeling to assess and attempt to predict the carcinogenicity of the series of 12 chlorobenzene isomers. Also presented is an ongoing effort to develop and apply a similar approach to chemical mixtures using in vitro cell culture (Syrian hamster embryo cell transformation assay and human keratinocytes) methodologies and in vivo studies. The promise and pitfalls of these developments are elaborated. When successfully applied, these approaches may greatly reduce animal usage, personnel, resources, and time required to evaluate the carcinogenicity of chemicals and chemical mixtures. PMID:9860897
Shin, S M; Kim, Y-I; Choi, Y-S; Yamaguchi, T; Maki, K; Cho, B-H; Park, S-B
2015-01-01
To evaluate axial cervical vertebral (ACV) shape quantitatively and to build a prediction model for skeletal maturation level using statistical shape analysis for Japanese individuals. The sample included 24 female and 19 male patients with hand-wrist radiographs and CBCT images. Through generalized Procrustes analysis and principal components (PCs) analysis, the meaningful PCs were extracted from each ACV shape and analysed for the estimation regression model. Each ACV shape had meaningful PCs, except for the second axial cervical vertebra. Based on these models, the smallest prediction intervals (PIs) were from the combination of the shape space PCs, age and gender. Overall, the PIs of the male group were smaller than those of the female group. There was no significant correlation between centroid size as a size factor and skeletal maturation level. Our findings suggest that the ACV maturation method, which was applied by statistical shape analysis, could confirm information about skeletal maturation in Japanese individuals as an available quantifier of skeletal maturation and could be as useful a quantitative method as the skeletal maturation index.
Shin, S M; Choi, Y-S; Yamaguchi, T; Maki, K; Cho, B-H; Park, S-B
2015-01-01
Objectives: To evaluate axial cervical vertebral (ACV) shape quantitatively and to build a prediction model for skeletal maturation level using statistical shape analysis for Japanese individuals. Methods: The sample included 24 female and 19 male patients with hand–wrist radiographs and CBCT images. Through generalized Procrustes analysis and principal components (PCs) analysis, the meaningful PCs were extracted from each ACV shape and analysed for the estimation regression model. Results: Each ACV shape had meaningful PCs, except for the second axial cervical vertebra. Based on these models, the smallest prediction intervals (PIs) were from the combination of the shape space PCs, age and gender. Overall, the PIs of the male group were smaller than those of the female group. There was no significant correlation between centroid size as a size factor and skeletal maturation level. Conclusions: Our findings suggest that the ACV maturation method, which was applied by statistical shape analysis, could confirm information about skeletal maturation in Japanese individuals as an available quantifier of skeletal maturation and could be as useful a quantitative method as the skeletal maturation index. PMID:25411713
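The shape-analysis pipeline above (generalized Procrustes alignment followed by principal components of shape space) can be approximated as follows. The sketch simplifies the alignment by registering every specimen to a single reference with SciPy's pairwise Procrustes routine rather than iterating to a consensus, and it uses random landmark coordinates as placeholders for the 43 subjects.

```python
import numpy as np
from scipy.spatial import procrustes
from sklearn.decomposition import PCA

# Simplified shape analysis: align landmark configurations to one reference,
# then extract principal components of the aligned coordinates.
rng = np.random.default_rng(0)
n_specimens, n_landmarks = 43, 10
shapes = rng.normal(size=(n_specimens, n_landmarks, 2))   # placeholder landmarks

reference = shapes[0]
aligned = []
for s in shapes:
    _, s_aligned, _ = procrustes(reference, s)   # removes location, scale, rotation
    aligned.append(s_aligned.ravel())

pcs = PCA(n_components=3).fit_transform(np.array(aligned))
print("shape-space PC scores:", pcs.shape)       # (43, 3): inputs for a regression model
```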
ERIC Educational Resources Information Center
Vázquez-Alonso, Ángel; Manassero-Mas, María-Antonia; García-Carmona, Antonio; Montesano de Talavera, Marisa
2016-01-01
This study applies a new quantitative methodological approach to diagnose epistemology conceptions in a large sample. The analyses use seven multiple-rating items on the epistemology of science drawn from the item pool Views on Science-Technology-Society (VOSTS). The bases of the new methodological diagnostic approach are the empirical…
A traits-based approach for prioritizing species for monitoring and surrogacy selection
Pracheil, Brenda M.; McManamay, Ryan A.; Bevelhimer, Mark S.; ...
2016-11-28
The bar for justifying the use of vertebrate animals for study is being increasingly raised, thus requiring increased rigor for species selection and study design. Although we have power analyses to provide quantitative backing for the numbers of organisms used, quantitative backing for selection of study species is not frequently employed. This can be especially important when measuring the impacts of ecosystem alteration, when study species must be chosen that are both sensitive to the alteration and of sufficient abundance for study. Just as important is providing justification for designation of surrogate species for study, especially when the species of interest is rare or of conservation concern and selection of an appropriate surrogate can have legal implications. In this study, we use a combination of GIS, a fish traits database and multivariate statistical analyses to quantitatively prioritize species for study and to determine potential study surrogate species. We provide two case studies to illustrate our quantitative, traits-based approach for designating study species and surrogate species. In the first case study, we select broadly representative fish species to understand the effects of turbine passage on adult fishes based on traits that suggest sensitivity to turbine passage. In our second case study, we present a framework for selecting a surrogate species for an endangered species. Lastly, we suggest that our traits-based framework can provide quantitative backing and added justification to selection of study species while expanding the inference space of study results.
Standardized protocols for quality control of MRM-based plasma proteomic workflows.
Percy, Andrew J; Chambers, Andrew G; Smith, Derek S; Borchers, Christoph H
2013-01-04
Mass spectrometry (MS)-based proteomics is rapidly emerging as a viable technology for the identification and quantitation of biological samples, such as human plasma, the most complex yet most commonly employed biofluid in clinical analyses. A transition from a qualitative to a quantitative science is required if proteomics is to become a clinically useful technique. MS, however, has been criticized for a lack of reproducibility and interlaboratory transferability. Currently, the MS and plasma proteomics communities lack standardized protocols and reagents to ensure that high-quality quantitative data can be accurately and precisely reproduced by laboratories across the world using different MS technologies. Toward addressing this issue, we have developed standard protocols for multiple reaction monitoring (MRM)-based assays with customized isotopically labeled internal standards for quality control of the sample preparation workflow and the MS platform in quantitative plasma proteomic analyses. The development of reference standards and their application to a single MS platform is discussed herein, along with the results from intralaboratory tests. The tests highlighted the importance of the reference standards in assessing the efficiency and reproducibility of the entire bottom-up proteomic workflow and revealed errors related to sample preparation, performance quality, and deficits of the MS and LC systems. Such evaluations are necessary if MRM-based quantitative plasma proteomics is to be used in verifying and validating putative disease biomarkers across different research laboratories and eventually in clinical laboratories.
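The quantitation principle behind the isotopically labeled internal standards discussed above is a light-to-heavy peak-area ratio referenced to the known spiked amount of the labeled peptide. A single-point version of that calculation is sketched below with invented peak areas; real MRM assays typically rely on multi-point response curves and per-peptide response factors.

```python
# Single-point internal-standard quantitation (assumes equal response factors for
# the endogenous "light" peptide and the spiked "heavy" standard). Values invented.
def light_concentration(area_light: float, area_heavy: float,
                        heavy_conc_fmol_per_ul: float) -> float:
    """Estimate the light-peptide concentration from the light/heavy area ratio."""
    return (area_light / area_heavy) * heavy_conc_fmol_per_ul

# Example: heavy standard spiked at 50 fmol/uL
print(light_concentration(area_light=1.2e6, area_heavy=8.0e5,
                          heavy_conc_fmol_per_ul=50.0))   # -> 75.0 fmol/uL
```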
A traits-based approach for prioritizing species for monitoring and surrogacy selection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pracheil, Brenda M.; McManamay, Ryan A.; Bevelhimer, Mark S.
The bar for justifying the use of vertebrate animals for study is being increasingly raised, thus requiring increased rigor for species selection and study design. Although we have power analyses to provide quantitative backing for the numbers of organisms used, quantitative backing for selection of study species is not frequently employed. This can be especially important when measuring the impacts of ecosystem alteration, when study species must be chosen that are both sensitive to the alteration and of sufficient abundance for study. Just as important is providing justification for designation of surrogate species for study, especially when the species of interest is rare or of conservation concern and selection of an appropriate surrogate can have legal implications. In this study, we use a combination of GIS, a fish traits database and multivariate statistical analyses to quantitatively prioritize species for study and to determine potential study surrogate species. We provide two case studies to illustrate our quantitative, traits-based approach for designating study species and surrogate species. In the first case study, we select broadly representative fish species to understand the effects of turbine passage on adult fishes based on traits that suggest sensitivity to turbine passage. In our second case study, we present a framework for selecting a surrogate species for an endangered species. Lastly, we suggest that our traits-based framework can provide quantitative backing and added justification to selection of study species while expanding the inference space of study results.
Belland, Brian R; Walker, Andrew E; Kim, Nam Ju
2017-12-01
Computer-based scaffolding provides temporary support that enables students to participate in and become more proficient at complex skills like problem solving, argumentation, and evaluation. While meta-analyses have addressed between-subject differences on cognitive outcomes resulting from scaffolding, none has addressed within-subject gains. This leaves much quantitative scaffolding literature not covered by existing meta-analyses. To address this gap, this study used Bayesian network meta-analysis to synthesize within-subjects (pre-post) differences resulting from scaffolding in 56 studies. We generated the posterior distribution using 20,000 Markov Chain Monte Carlo samples. Scaffolding has a consistently strong effect across student populations, STEM (science, technology, engineering, and mathematics) disciplines, and assessment levels, and a strong effect when used with most problem-centered instructional models (exception: inquiry-based learning and modeling visualization) and educational levels (exception: secondary education). Results also indicate some promising areas for future scaffolding research, including scaffolding among students with learning disabilities, for whom the effect size was particularly large (ḡ = 3.13).
Belland, Brian R.; Walker, Andrew E.; Kim, Nam Ju
2017-01-01
Computer-based scaffolding provides temporary support that enables students to participate in and become more proficient at complex skills like problem solving, argumentation, and evaluation. While meta-analyses have addressed between-subject differences on cognitive outcomes resulting from scaffolding, none has addressed within-subject gains. This leaves much quantitative scaffolding literature not covered by existing meta-analyses. To address this gap, this study used Bayesian network meta-analysis to synthesize within-subjects (pre–post) differences resulting from scaffolding in 56 studies. We generated the posterior distribution using 20,000 Markov Chain Monte Carlo samples. Scaffolding has a consistently strong effect across student populations, STEM (science, technology, engineering, and mathematics) disciplines, and assessment levels, and a strong effect when used with most problem-centered instructional models (exception: inquiry-based learning and modeling visualization) and educational levels (exception: secondary education). Results also indicate some promising areas for future scaffolding research, including scaffolding among students with learning disabilities, for whom the effect size was particularly large (ḡ = 3.13). PMID:29200508
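The within-subjects effect sizes synthesized above are standardized pre-post mean changes. The sketch below computes one such effect using a common change-score standardizer with Hedges' small-sample correction; the score vectors are invented, and the Bayesian network meta-analytic model itself is not reproduced.

```python
import numpy as np

# Within-subjects (pre-post) effect size: standardized mean change with Hedges'
# small-sample correction. Scores are illustrative only.
def prepost_hedges_g(pre, post):
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    diff = post - pre
    d = diff.mean() / diff.std(ddof=1)        # standardized mean change
    n = len(diff)
    j = 1.0 - 3.0 / (4.0 * (n - 1) - 1.0)     # Hedges' correction factor
    return j * d

pre_scores = [55, 60, 48, 70, 65, 58, 62, 50]     # illustrative pre-test scores
post_scores = [62, 72, 50, 78, 70, 66, 69, 58]    # illustrative post-test scores
print(f"g = {prepost_hedges_g(pre_scores, post_scores):.2f}")
```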
2010-01-01
Background Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description BioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation systems, and to study the clustering of models based upon their annotations. Model deposition to the database today is advised by several publishers of scientific journals. The models in BioModels Database are freely distributed and reusable; the underlying software infrastructure is also available from SourceForge https://sourceforge.net/projects/biomodels/ under the GNU General Public License. PMID:20587024
Comparison of particular logistic models' adoption in the Czech Republic
NASA Astrophysics Data System (ADS)
Vrbová, Petra; Cempírek, Václav
2016-12-01
Managing inventory is considered one of the most challenging tasks facing supply chain managers and specialists. Decisions related to inventory locations, along with the level of inventory kept throughout the supply chain, have a fundamental impact on the response time, service level, delivery lead time and the total cost of the supply chain. The main objective of this paper is to identify and analyse the share of particular logistic models adopted in the Czech Republic (Consignment stock, Buffer stock, Safety stock) and to compare their usage and adoption across different industries. This paper also aims to identify possible reasons why particular logistic models are preferred over the others. The analysis is based on a quantitative survey conducted in the Czech Republic.
Cypko, Mario A; Stoehr, Matthaeus; Kozniewski, Marcin; Druzdzel, Marek J; Dietz, Andreas; Berliner, Leonard; Lemke, Heinz U
2017-11-01
Oncological treatment is becoming increasingly complex, and decision making in multidisciplinary teams is therefore becoming the key activity in clinical pathways. The increased complexity is related to the number and variability of possible treatment decisions that may be relevant to a patient. In this paper, we describe validation of a multidisciplinary cancer treatment decision model in the clinical domain of head and neck oncology. Probabilistic graphical models and corresponding inference algorithms, in the form of Bayesian networks (BNs), can support complex decision-making processes by providing mathematically reproducible and transparent advice. The quality of BN-based advice depends on the quality of the model. Therefore, it is vital to validate the model before it is applied in practice. For an example BN subnetwork of laryngeal cancer with 303 variables, we evaluated 66 patient records. To validate the model on this dataset, a validation workflow was applied in combination with quantitative and qualitative analyses. In the subsequent analyses, we observed four sources of imprecise predictions: incorrect data, incomplete patient data, outvoting of relevant observations, and an incorrect model. Finally, the four problems were solved by modifying the data and the model. The presented validation effort is related to the model complexity. For simpler models, the validation workflow is the same, although it may require fewer validation methods. The validation success is related to the model's well-founded knowledge base. The remaining laryngeal cancer model may disclose additional sources of imprecise predictions.
Data-based mathematical modeling of vectorial transport across double-transfected polarized cells.
Bartholomé, Kilian; Rius, Maria; Letschert, Katrin; Keller, Daniela; Timmer, Jens; Keppler, Dietrich
2007-09-01
Vectorial transport of endogenous small molecules, toxins, and drugs across polarized epithelial cells contributes to their half-life in the organism and to detoxification. To study vectorial transport in a quantitative manner, an in vitro model was used that includes polarized MDCKII cells stably expressing the recombinant human uptake transporter OATP1B3 in their basolateral membrane and the recombinant ATP-driven efflux pump ABCC2 in their apical membrane. These double-transfected cells enabled mathematical modeling of the vectorial transport of the anionic prototype substance bromosulfophthalein (BSP) that has frequently been used to examine hepatobiliary transport. Time-dependent analyses of (3)H-labeled BSP in the basolateral, intracellular, and apical compartments of cells cultured on filter membranes and efflux experiments in cells preloaded with BSP were performed. A mathematical model was fitted to the experimental data. Data-based modeling was optimized by including endogenous transport processes in addition to the recombinant transport proteins. The predominant contributions to the overall vectorial transport of BSP were mediated by OATP1B3 (44%) and ABCC2 (28%). Model comparison predicted a previously unrecognized endogenous basolateral efflux process as a negative contribution to total vectorial transport, amounting to 19%, which is in line with the detection of the basolateral efflux pump Abcc4 in MDCKII cells. Rate-determining steps in the vectorial transport were identified by calculating control coefficients. Data-based mathematical modeling of vectorial transport of BSP as a model substance resulted in a quantitative description of this process and its components. The same systems biology approach may be applied to other cellular systems and to different substances.
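To make the compartmental structure concrete, the following is a minimal sketch (not the authors' fitted model) of three-compartment vectorial transport with first-order basolateral uptake, apical efflux, and an endogenous basolateral efflux term; the rate constants and initial amounts are illustrative assumptions only.

```python
# Minimal three-compartment sketch of vectorial BSP-like transport, assuming
# first-order rate constants; the values below are illustrative and are NOT
# the fitted parameters from the study.
import numpy as np
from scipy.integrate import solve_ivp

k_uptake = 0.05   # basolateral -> intracellular (OATP1B3-like), 1/min
k_apical = 0.03   # intracellular -> apical (ABCC2-like), 1/min
k_back   = 0.01   # intracellular -> basolateral (endogenous efflux), 1/min

def rhs(t, y):
    basolateral, intracellular, apical = y
    uptake = k_uptake * basolateral
    efflux_apical = k_apical * intracellular
    efflux_back = k_back * intracellular
    return [-uptake + efflux_back,
            uptake - efflux_apical - efflux_back,
            efflux_apical]

sol = solve_ivp(rhs, (0.0, 120.0), [100.0, 0.0, 0.0], dense_output=True)
for ti in np.linspace(0, 120, 7):
    b, i, a = sol.sol(ti)
    print(f"t={ti:5.1f} min  basolateral={b:6.2f}  intracellular={i:6.2f}  apical={a:6.2f}")
```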
NASA Astrophysics Data System (ADS)
Wolter, Andrea; Stead, Doug; Clague, John J.
2014-02-01
The 1963 Vajont Slide in northeast Italy is an important engineering and geological event. Although the landslide has been extensively studied, new insights can be derived by applying modern techniques such as remote sensing and numerical modelling. This paper presents the first digital terrestrial photogrammetric analyses of the failure scar, landslide deposits, and the area surrounding the failure, with a focus on the scar. We processed photogrammetric models to produce discontinuity stereonets, residual maps and profiles, and slope and aspect maps, all of which provide information on the failure scar morphology. Our analyses enabled the creation of a preliminary semi-quantitative morphologic classification of the Vajont failure scar based on the large-scale tectonic folds and step-paths that define it. The analyses and morphologic classification have implications for the kinematics, dynamics, and mechanism of the slide. Metre- and decametre-scale features affected the initiation, direction, and displacement rate of sliding. The most complexly folded and stepped areas occur close to the intersection of orthogonal synclinal features related to the Dinaric and Neoalpine deformation events. Our analyses also highlight, for the first time, the evolution of the Vajont failure scar from 1963 to the present.
Point-by-point compositional analysis for atom probe tomography.
Stephenson, Leigh T; Ceguerra, Anna V; Li, Tong; Rojhirunsakool, Tanaporn; Nag, Soumya; Banerjee, Rajarshi; Cairney, Julie M; Ringer, Simon P
2014-01-01
This new alternative approach to data processing is needed for analyses that traditionally employed grid-based counting methods because it removes a user-imposed coordinate system that not only limits an analysis but may also introduce errors. We have modified the widely used "binomial" analysis for APT data by replacing grid-based counting with coordinate-independent nearest neighbour identification, improving the measurements and the statistics obtained and allowing quantitative analysis of smaller datasets and of datasets from non-dilute solid solutions. It also allows better visualisation of compositional fluctuations in the data. Our modifications include: (i) using spherical k-atom blocks identified by each detected atom's first k nearest neighbours; (ii) 3D data visualisation of block composition and nearest neighbour anisotropy; and (iii) using z-statistics to directly compare experimental and expected composition curves. Similar modifications may be made to other grid-based counting analyses (contingency table, Langer-Bar-on-Miller, sinusoidal model) and could be instrumental in developing novel data visualisation options.
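As a rough illustration of the coordinate-independent idea, the sketch below forms a block from each detected atom's k nearest neighbours and compares block compositions against a binomial expectation; the synthetic coordinates, the choice k = 50 and the simplified z-statistic are assumptions, not the authors' implementation.

```python
# Sketch of point-by-point composition analysis: for every detected atom, form
# a block from its k nearest neighbours and compute the block's solute
# fraction. Synthetic data; simplified statistics for illustration only.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
positions = rng.uniform(0, 50, size=(5000, 3))        # atom coordinates (nm)
is_solute = rng.random(5000) < 0.10                   # 10% nominal solute content

k = 50                                                # atoms per block (assumed)
tree = cKDTree(positions)
_, idx = tree.query(positions, k=k)                   # k nearest neighbours, incl. self
block_fraction = is_solute[idx].mean(axis=1)          # composition of each block

# z-statistic of each block against the binomial expectation (p0, k)
p0 = is_solute.mean()
z = (block_fraction - p0) / np.sqrt(p0 * (1 - p0) / k)
print(f"mean block composition: {block_fraction.mean():.3f}")
print(f"blocks with |z| > 3 (candidate fluctuations): {(np.abs(z) > 3).sum()}")
```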
A simulation-based approach for estimating premining water quality: Red Mountain Creek, Colorado
Runkel, Robert L.; Kimball, Briant A; Walton-Day, Katherine; Verplanck, Philip L.
2007-01-01
Regulatory agencies are often charged with the task of setting site-specific numeric water quality standards for impaired streams. This task is particularly difficult for streams draining highly mineralized watersheds with past mining activity. Baseline water quality data obtained prior to mining are often non-existent and application of generic water quality standards developed for unmineralized watersheds is suspect given the geology of most watersheds affected by mining. Various approaches have been used to estimate premining conditions, but none of the existing approaches rigorously consider the physical and geochemical processes that ultimately determine instream water quality. An approach based on simulation modeling is therefore proposed herein. The approach utilizes synoptic data that provide spatially-detailed profiles of concentration, streamflow, and constituent load along the study reach. This field data set is used to calibrate a reactive stream transport model that considers the suite of physical and geochemical processes that affect constituent concentrations during instream transport. A key input to the model is the quality and quantity of waters entering the study reach. This input is based on chemical analyses available from synoptic sampling and observed increases in streamflow along the study reach. Given the calibrated model, additional simulations are conducted to estimate premining conditions. In these simulations, the chemistry of mining-affected sources is replaced with the chemistry of waters that are thought to be unaffected by mining (proximal, premining analogues). The resultant simulations provide estimates of premining water quality that reflect both the reduced loads that were present prior to mining and the processes that affect these loads as they are transported downstream. This simulation-based approach is demonstrated using data from Red Mountain Creek, Colorado, a small stream draining a heavily-mined watershed. Model application to the premining problem for Red Mountain Creek is based on limited field reconnaissance and chemical analyses; additional field work and analyses may be needed to develop definitive, quantitative estimates of premining water quality.
Use of machine learning methods to reduce predictive error of groundwater models.
Xu, Tianfang; Valocchi, Albert J; Choi, Jaesik; Amir, Eyal
2014-01-01
Quantitative analyses of groundwater flow and transport typically rely on a physically-based model, which is inherently subject to error. Errors in model structure, parameter and data lead to both random and systematic error even in the output of a calibrated model. We develop complementary data-driven models (DDMs) to reduce the predictive error of physically-based groundwater models. Two machine learning techniques, the instance-based weighting and support vector regression, are used to build the DDMs. This approach is illustrated using two real-world case studies of the Republican River Compact Administration model and the Spokane Valley-Rathdrum Prairie model. The two groundwater models have different hydrogeologic settings, parameterization, and calibration methods. In the first case study, cluster analysis is introduced for data preprocessing to make the DDMs more robust and computationally efficient. The DDMs reduce the root-mean-square error (RMSE) of the temporal, spatial, and spatiotemporal prediction of piezometric head of the groundwater model by 82%, 60%, and 48%, respectively. In the second case study, the DDMs reduce the RMSE of the temporal prediction of piezometric head of the groundwater model by 77%. It is further demonstrated that the effectiveness of the DDMs depends on the existence and extent of the structure in the error of the physically-based model. © 2013, National GroundWater Association.
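A minimal sketch of the complementary data-driven idea is given below, assuming synthetic data: a support vector regression model is trained on the residuals of an imperfect physically-based prediction and then used to correct it. This is illustrative only and not the case-study code.

```python
# Sketch of a complementary data-driven model: learn the structured part of a
# groundwater model's head error with support vector regression, then correct
# the physically-based prediction. Synthetic data for illustration only.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 3))          # e.g. pumping, recharge, season features
true_head = 10 + 5 * X[:, 0] - 3 * X[:, 1] ** 2
model_head = 10 + 4 * X[:, 0]                 # imperfect physically-based prediction
observed = true_head + rng.normal(0, 0.1, 500)

residual = observed - model_head              # structured + random error
ddm = SVR(kernel="rbf", C=10.0).fit(X[:400], residual[:400])

corrected = model_head[400:] + ddm.predict(X[400:])
rmse_raw = np.sqrt(mean_squared_error(observed[400:], model_head[400:]))
rmse_corr = np.sqrt(mean_squared_error(observed[400:], corrected))
print(f"RMSE before correction: {rmse_raw:.3f}, after correction: {rmse_corr:.3f}")
```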
Synchrony and motor mimicking in chimpanzee observational learning
Fuhrmann, Delia; Ravignani, Andrea; Marshall-Pescini, Sarah; Whiten, Andrew
2014-01-01
Cumulative tool-based culture underwrote our species' evolutionary success, and tool-based nut-cracking is one of the strongest candidates for cultural transmission in our closest relatives, chimpanzees. However the social learning processes that may explain both the similarities and differences between the species remain unclear. A previous study of nut-cracking by initially naïve chimpanzees suggested that a learning chimpanzee holding no hammer nevertheless replicated hammering actions it witnessed. This observation has potentially important implications for the nature of the social learning processes and underlying motor coding involved. In the present study, model and observer actions were quantified frame-by-frame and analysed with stringent statistical methods, demonstrating synchrony between the observer's and model's movements, cross-correlation of these movements above chance level and a unidirectional transmission process from model to observer. These results provide the first quantitative evidence for motor mimicking underlain by motor coding in apes, with implications for mirror neuron function. PMID:24923651
NASA Astrophysics Data System (ADS)
Li, Peizhen; Tian, Yueli; Zhai, Honglin; Deng, Fangfang; Xie, Meihong; Zhang, Xiaoyun
2013-11-01
Non-purine derivatives have been shown to be promising novel drug candidates as xanthine oxidase inhibitors. Based on three-dimensional quantitative structure-activity relationship (3D-QSAR) methods including comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA), two 3D-QSAR models for a series of non-purine xanthine oxidase (XO) inhibitors were established, and their reliability was supported by statistical parameters. By combining 3D-QSAR modeling with the results of molecular docking between the non-purine inhibitors and XO, the main factors that influence inhibitor activity were investigated, and the obtained results could explain known experimental facts. Furthermore, several new potential inhibitors with higher predicted activity were designed based on our analyses and were supported by molecular docking simulations. This study provided some useful information for the development of non-purine xanthine oxidase inhibitors with novel structures.
Space shuttle’s liftoff: a didactical model
NASA Astrophysics Data System (ADS)
Borghi, Riccardo; Spinozzi, Turi Maria
2017-07-01
The pedagogical aim of the present paper, intended for an undergraduate audience, is to help students to appreciate how the development of elementary models based on physics first principles is a fundamental and necessary preliminary step for the behaviour of complex real systems to be grasped with minimal amounts of math. In some particularly fortunate cases, such models also show reasonably good results when compared to reality. The speed behaviour of the Space Shuttle during its first two minutes of flight from liftoff is here analysed from such a didactical point of view. Only the momentum conservation law is employed to develop the model, which is eventually applied to quantitatively interpret the telemetry of the last launches, in 2011, of Shuttle Discovery and Shuttle Endeavour. To the STS-51-L and STS-107 astronauts, in memoriam.
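A minimal numerical version of such a momentum-conservation model is sketched below, assuming constant propellant mass flow and effective exhaust speed, including gravity and neglecting drag and pitch-over; the Shuttle-class numbers are rough illustrative values, not those used in the paper.

```python
# Minimal numerical sketch of the momentum-conservation liftoff model:
# dv/dt = (mdot * v_e) / m(t) - g, with constant mass flow and exhaust speed,
# drag and pitch-over neglected. The numbers are rough Shuttle-class figures
# chosen for illustration, not the values used in the paper.
m0 = 2.0e6        # initial mass, kg (assumed)
mdot = 1.0e4      # propellant mass flow, kg/s (assumed)
v_e = 2.6e3       # effective exhaust speed, m/s (assumed)
g = 9.81          # m/s^2
dt = 0.1          # s

v = 0.0
steps = 1200                              # first two minutes of flight
for n in range(1, steps + 1):
    t = n * dt
    m = m0 - mdot * (t - dt)              # current vehicle mass
    v += ((mdot * v_e) / m - g) * dt      # explicit Euler step
    if n % 300 == 0:
        print(f"t = {t:5.1f} s   v ≈ {v:6.1f} m/s")
```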
Loo, Lit-Hsin; Laksameethanasan, Danai; Tung, Yi-Ling
2014-03-01
Protein subcellular localization is a major determinant of protein function. However, this important protein feature is often described in terms of discrete and qualitative categories of subcellular compartments, and therefore it has limited applications in quantitative protein function analyses. Here, we present Protein Localization Analysis and Search Tools (PLAST), an automated analysis framework for constructing and comparing quantitative signatures of protein subcellular localization patterns based on microscopy images. PLAST produces human-interpretable protein localization maps that quantitatively describe the similarities in the localization patterns of proteins and major subcellular compartments, without requiring manual assignment or supervised learning of these compartments. Using the budding yeast Saccharomyces cerevisiae as a model system, we show that PLAST is more accurate than existing, qualitative protein localization annotations in identifying known co-localized proteins. Furthermore, we demonstrate that PLAST can reveal protein localization-function relationships that are not obvious from these annotations. First, we identified proteins that have similar localization patterns and participate in closely-related biological processes, but do not necessarily form stable complexes with each other or localize at the same organelles. Second, we found an association between spatial and functional divergences of proteins during evolution. Surprisingly, as proteins with common ancestors evolve, they tend to develop more diverged subcellular localization patterns, but still occupy similar numbers of compartments. This suggests that divergence of protein localization might be more frequently due to the development of more specific localization patterns over ancestral compartments than the occupation of new compartments. PLAST enables systematic and quantitative analyses of protein localization-function relationships, and will be useful to elucidate protein functions and how these functions were acquired in cells from different organisms or species. A public web interface of PLAST is available at http://plast.bii.a-star.edu.sg.
Muñoz-Redondo, José Manuel; Cuevas, Francisco Julián; León, Juan Manuel; Ramírez, Pilar; Moreno-Rojas, José Manuel; Ruiz-Moreno, María José
2017-04-05
A quantitative approach using HS-SPME-GC-MS was applied to investigate the ester changes related to the second fermentation in bottle. The contribution of the type of base wine to the final wine style is detailed. Furthermore, a discriminant model was developed based on ester changes according to the second fermentation (with 100% sensitivity and specificity values). The application of a double-check criterion based on univariate and multivariate analyses allowed the identification of potential volatile markers related to the second fermentation. Some of them showed a synthesis ratio around 3-fold higher after this period, and they are known to play a key role in wine aroma. To date, this is the first study reporting the role of esters as markers of the second fermentation. The methodology described in this study confirmed its suitability for the wine aroma field. The results contribute to enhancing our understanding of this fermentative step.
Refining the quantitative pathway of the Pathways to Mathematics model.
Sowinski, Carla; LeFevre, Jo-Anne; Skwarchuk, Sheri-Lynn; Kamawar, Deepthi; Bisanz, Jeffrey; Smith-Chant, Brenda
2015-03-01
In the current study, we adopted the Pathways to Mathematics model of LeFevre et al. (2010). In this model, there are three cognitive domains--labeled as the quantitative, linguistic, and working memory pathways--that make unique contributions to children's mathematical development. We attempted to refine the quantitative pathway by combining children's (N=141 in Grades 2 and 3) subitizing, counting, and symbolic magnitude comparison skills using principal components analysis. The quantitative pathway was examined in relation to dependent numerical measures (backward counting, arithmetic fluency, calculation, and number system knowledge) and a dependent reading measure, while simultaneously accounting for linguistic and working memory skills. Analyses controlled for processing speed, parental education, and gender. We hypothesized that the quantitative, linguistic, and working memory pathways would account for unique variance in the numerical outcomes; this was the case for backward counting and arithmetic fluency. However, only the quantitative and linguistic pathways (not working memory) accounted for unique variance in calculation and number system knowledge. Not surprisingly, only the linguistic pathway accounted for unique variance in the reading measure. These findings suggest that the relative contributions of quantitative, linguistic, and working memory skills vary depending on the specific cognitive task. Copyright © 2014 Elsevier Inc. All rights reserved.
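As a hedged illustration of collapsing several correlated measures into a single quantitative-pathway score, the sketch below applies principal components analysis to synthetic, standardized scores; the simulated data and loadings are assumptions, not the study's measures.

```python
# Sketch of collapsing three correlated quantitative measures into one
# "quantitative pathway" score with principal components analysis.
# Synthetic, standardized scores for illustration only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
latent = rng.normal(size=141)                       # shared quantitative ability
scores = np.column_stack([
    latent + rng.normal(scale=0.6, size=141),       # subitizing (simulated)
    latent + rng.normal(scale=0.6, size=141),       # counting (simulated)
    latent + rng.normal(scale=0.6, size=141),       # symbolic magnitude comparison (simulated)
])

z = StandardScaler().fit_transform(scores)
pca = PCA(n_components=1).fit(z)
quantitative_pathway = pca.transform(z).ravel()     # one composite score per child
print("variance explained by first component:",
      round(pca.explained_variance_ratio_[0], 3))
```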
Li, Xiang; Basu, Saonli; Miller, Michael B; Iacono, William G; McGue, Matt
2011-01-01
Genome-wide association studies (GWAS) using family data involve association analyses between hundreds of thousands of markers and a trait for a large number of related individuals. The correlations among relatives bring statistical and computational challenges when performing these large-scale association analyses. Recently, several rapid methods accounting for both within- and between-family variation have been proposed. However, these techniques mostly model the phenotypic similarities in terms of genetic relatedness. The familial resemblances in many family-based studies such as twin studies are not only due to the genetic relatedness, but also derive from shared environmental effects and assortative mating. In this paper, we propose 2 generalized least squares (GLS) models for rapid association analysis of family-based GWAS, which accommodate both genetic and environmental contributions to familial resemblance. In our first model, we estimated the joint genetic and environmental variations. In our second model, we estimated the genetic and environmental components separately. Through simulation studies, we demonstrated that our proposed approaches are more powerful and computationally efficient than a number of existing methods are. We show that estimating the residual variance-covariance matrix in the GLS models without SNP effects does not lead to an appreciable bias in the p values as long as the SNP effect is small (i.e. accounting for no more than 1% of trait variance). Copyright © 2011 S. Karger AG, Basel.
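A toy sketch of the GLS idea is given below, assuming an exchangeable within-family residual correlation of 0.5 and a small additive SNP effect; the covariance structure, effect sizes and sample layout are illustrative assumptions, not the authors' fitted models.

```python
# Toy sketch of a generalized least squares association test in which the
# residual covariance encodes familial resemblance (exchangeable correlation
# within each family). Illustrative only; not the authors' exact models.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_families, family_size = 200, 2                 # e.g. twin pairs (assumed)
n = n_families * family_size

snp = rng.binomial(2, 0.3, size=n).astype(float) # additive genotype coding 0/1/2
family = np.repeat(np.arange(n_families), family_size)

# Residual covariance: unit variance, within-family correlation 0.5 (assumed)
sigma = np.eye(n)
for f in range(n_families):
    idx = np.where(family == f)[0]
    sigma[np.ix_(idx, idx)] = 0.5
    sigma[idx, idx] = 1.0

# Simulate a trait with a small SNP effect plus family-correlated noise
chol = np.linalg.cholesky(sigma)
trait = 0.1 * snp + chol @ rng.normal(size=n)

X = sm.add_constant(snp)
fit = sm.GLS(trait, X, sigma=sigma).fit()
print(fit.summary().tables[1])
```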
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collee, R.; Govaerts, J.; Winand, L.
1959-10-31
A brief resume of the classical methods of quantitative determination of thorium in ores and thoriferous products is given to show that a rapid, accurate, and precise physical method based on the radioactivity of thorium would be of great utility. A method based on the utilization of the characteristic spectrum of the thorium gamma radiation is presented. The preparation of the samples and the instruments needed for the measurements are discussed. The experimental results show that the reproducibility is very satisfactory and that it is possible to detect Th contents of 1% or smaller. (J.S.R.)
Ostovaneh, Mohammad R; Vavere, Andrea L; Mehra, Vishal C; Kofoed, Klaus F; Matheson, Matthew B; Arbab-Zadeh, Armin; Fujisawa, Yasuko; Schuijf, Joanne D; Rochitte, Carlos E; Scholte, Arthur J; Kitagawa, Kakuya; Dewey, Marc; Cox, Christopher; DiCarli, Marcelo F; George, Richard T; Lima, Joao A C
To determine the diagnostic accuracy of semi-automatic quantitative metrics compared to expert reading for interpretation of computed tomography perfusion (CTP) imaging. The CORE320 multicenter diagnostic accuracy clinical study enrolled patients between 45 and 85 years of age who were clinically referred for invasive coronary angiography (ICA). Computed tomography angiography (CTA), CTP, single photon emission computed tomography (SPECT), and ICA images were interpreted manually in blinded core laboratories by two experienced readers. Additionally, eight quantitative CTP metrics as continuous values were computed semi-automatically from myocardial and blood attenuation and were combined using logistic regression to derive a final quantitative CTP metric score. For the reference standard, hemodynamically significant coronary artery disease (CAD) was defined as a quantitative ICA stenosis of 50% or greater and a corresponding perfusion defect by SPECT. Diagnostic accuracy was determined by area under the receiver operating characteristic curve (AUC). Of the total 377 included patients, 66% were male, median age was 62 (IQR: 56, 68) years, and 27% had prior myocardial infarction. In the patient-based analysis, the AUC (95% CI) for combined CTA-CTP expert reading and combined CTA-CTP semi-automatic quantitative metrics was 0.87 (0.84-0.91) and 0.86 (0.83-0.9), respectively. In vessel-based analyses, the AUCs were 0.85 (0.82-0.88) and 0.84 (0.81-0.87), respectively. No significant difference in AUC was found between combined CTA-CTP expert reading and CTA-CTP semi-automatic quantitative metrics in patient-based or vessel-based analyses (p > 0.05 for all). Combined CTA-CTP semi-automatic quantitative metrics are as accurate as CTA-CTP expert reading to detect hemodynamically significant CAD. Copyright © 2018 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
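A minimal sketch of combining several continuous perfusion metrics into one score with logistic regression and evaluating it by AUC is shown below; the synthetic metrics and effect sizes are assumptions, and the code does not reproduce the CORE320 analysis.

```python
# Sketch of combining several semi-automatic quantitative perfusion metrics
# into a single score with logistic regression and evaluating it by the area
# under the ROC curve. Synthetic data; not the CORE320 dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 377
metrics = rng.normal(size=(n, 8))                       # eight CTP-derived metrics (simulated)
logit = 0.9 * metrics[:, 0] - 0.7 * metrics[:, 1] + 0.4 * metrics[:, 2]
disease = rng.random(n) < 1 / (1 + np.exp(-logit))      # hemodynamically significant CAD

X_tr, X_te, y_tr, y_te = train_test_split(metrics, disease, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
score = clf.predict_proba(X_te)[:, 1]                   # final combined CTP metric score
print(f"AUC of combined quantitative metric: {roc_auc_score(y_te, score):.2f}")
```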
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timchalk, Chuck; Poet, Torka S.; Kousba, Ahmed A.
2004-04-01
There is a need to develop approaches for assessing risk associated with acute exposures to a broad range of chemical agents and to rapidly determine the potential implications to human health. Non-invasive biomonitoring approaches are being developed using reliable portable analytical systems to quantitate dosimetry utilizing readily obtainable body fluids, such as saliva. Saliva has been used to evaluate a broad range of biomarkers, drugs, and environmental contaminants including heavy metals and pesticides. To advance the application of non-invasive biomonitoring, a microfluidic/electrochemical device has also been developed for the analysis of lead (Pb), using square wave anodic stripping voltammetry. The system demonstrates a linear response over a broad concentration range (1-2000 ppb) and is capable of quantitating saliva Pb in rats orally administered acute doses of Pb-acetate. Appropriate pharmacokinetic analyses have been used to quantitate systemic dosimetry based on determination of saliva Pb concentrations. In addition, saliva has recently been used to quantitate dosimetry following exposure to the organophosphate insecticide chlorpyrifos in a rodent model system by measuring the major metabolite, trichloropyridinol, and saliva cholinesterase inhibition following acute exposures. These results suggest that technology developed for non-invasive biomonitoring can provide a sensitive and portable analytical tool capable of assessing exposure and risk in real-time. By coupling these non-invasive technologies with pharmacokinetic modeling it is feasible to rapidly quantitate acute exposure to a broad range of chemical agents. In summary, it is envisioned that once fully developed, these monitoring and modeling approaches will be useful for assessing acute exposure and health risk.
Topic model-based mass spectrometric data analysis in cancer biomarker discovery studies.
Wang, Minkun; Tsai, Tsung-Heng; Di Poto, Cristina; Ferrarini, Alessia; Yu, Guoqiang; Ressom, Habtom W
2016-08-18
A fundamental challenge in quantitation of biomolecules for cancer biomarker discovery arises from the heterogeneous nature of human biospecimens. Although this issue has been a subject of discussion in cancer genomic studies, it has not yet been rigorously investigated in mass spectrometry-based proteomic and metabolomic studies. Purification of mass spectrometric data is highly desired prior to subsequent analysis, e.g., quantitative comparison of the abundance of biomolecules in biological samples. We investigated topic models to computationally analyze mass spectrometric data considering both integrated peak intensities and scan-level features, i.e., extracted ion chromatograms (EICs). Probabilistic generative models enable flexible representation of data structure and infer sample-specific pure resources. Scan-level modeling helps alleviate information loss during data preprocessing. We evaluated the capability of the proposed models in capturing mixture proportions of contaminants and cancer profiles on LC-MS based serum proteomic and GC-MS based tissue metabolomic datasets acquired from patients with hepatocellular carcinoma (HCC) and liver cirrhosis as well as synthetic data we generated based on the serum proteomic data. The results we obtained by analysis of the synthetic data demonstrated that both intensity-level and scan-level purification models can accurately infer the mixture proportions and the underlying true cancerous sources with small average error ratios (<7%) between estimation and ground truth. By applying the topic model-based purification to mass spectrometric data, we found more proteins and metabolites with significant changes between HCC cases and cirrhotic controls. Candidate biomarkers selected after purification yielded biologically meaningful pathway analysis results and improved disease discrimination power in terms of the area under ROC curve compared to the results found prior to purification. We investigated topic model-based inference methods to computationally address the heterogeneity issue in samples analyzed by LC/GC-MS. We observed that incorporation of scan-level features has the potential to lead to more accurate purification results by alleviating the loss in information as a result of integrating peaks. We believe cancer biomarker discovery studies that use mass spectrometric analysis of human biospecimens can greatly benefit from topic model-based purification of the data prior to statistical and pathway analyses.
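As a loose analogue of the mixture-proportion estimation described above, the sketch below uses non-negative matrix factorization (a deliberate stand-in for the probabilistic topic models used in the study) on synthetic non-negative intensity profiles; all profiles and proportions are simulated.

```python
# The study uses probabilistic topic models; as a loose, simplified analogue,
# this sketch decomposes non-negative peak-intensity profiles into two source
# "topics" and per-sample mixture proportions with non-negative matrix
# factorization. Synthetic data; NMF is a stand-in, not the authors' model.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(5)
n_samples, n_peaks = 60, 200
cancer_profile = rng.gamma(2.0, 1.0, n_peaks)           # latent "cancer" source (simulated)
contaminant_profile = rng.gamma(2.0, 1.0, n_peaks)      # latent "contaminant" source (simulated)
true_mix = rng.uniform(0.2, 0.9, n_samples)             # cancer fraction per sample

intensities = (np.outer(true_mix, cancer_profile)
               + np.outer(1 - true_mix, contaminant_profile)
               + rng.gamma(1.0, 0.05, (n_samples, n_peaks)))

nmf = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=0)
weights = nmf.fit_transform(intensities)                # per-sample source weights
proportions = weights / weights.sum(axis=1, keepdims=True)
print("estimated mixture proportions (first 5 samples):")
print(np.round(proportions[:5], 2))
```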
NASA Astrophysics Data System (ADS)
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods. However, few studies have applied these methods in the field of LIBS technology, particularly for qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum, exploiting its smoothness. Simulation experiments on background correction indicated that the spline interpolation method achieved the largest signal-to-background ratio (SBR) after background correction, exceeding polynomial fitting, Lorentz fitting and the model-free method. All of these background correction methods yield larger SBR values than before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method retains a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods yield better quantitative results for Cu than those obtained before background correction (the linear correlation coefficient value before background correction is 0.9776, whereas the linear correlation coefficient values after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu compared with polynomial fitting, Lorentz fitting and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
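A simplified illustration of spline-based background estimation is sketched below: a cubic spline is interpolated through local minima of a synthetic LIBS-like spectrum and subtracted. The spectrum, the knot-selection rule and the SBR definition are assumptions, not the authors' exact procedure.

```python
# Simplified illustration of spline-based background correction for a LIBS-like
# spectrum: interpolate a smooth baseline through local minima of the signal
# and subtract it. Synthetic spectrum; not the authors' exact procedure.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelmin

wavelength = np.linspace(300, 600, 3000)
background = 50 * np.exp(-(wavelength - 450) ** 2 / (2 * 120 ** 2))     # slow continuum
lines = sum(a * np.exp(-(wavelength - c) ** 2 / (2 * 0.3 ** 2))
            for a, c in [(80, 324.7), (60, 327.4), (40, 510.5)])        # Cu-like lines
rng = np.random.default_rng(6)
spectrum = background + lines + rng.normal(0, 0.5, wavelength.size)

# Knots placed at local minima approximate background-only points
minima = argrelmin(spectrum, order=50)[0]
baseline = CubicSpline(wavelength[minima], spectrum[minima])(wavelength)
corrected = spectrum - baseline

sbr_before = spectrum.max() / np.median(spectrum)
sbr_after = corrected.max() / max(np.median(np.abs(corrected)), 1e-6)
print(f"SBR before: {sbr_before:.1f}, after spline correction: {sbr_after:.1f}")
```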
Arku, Raphael E; Birch, Aaron; Shupler, Matthew; Yusuf, Salim; Hystad, Perry; Brauer, Michael
2018-05-01
Household air pollution (HAP) from combustion of solid fuels is an important contributor to disease burden in low- and middle-income countries (LIC, and MIC). However, current HAP disease burden estimates are based on integrated exposure response curves that are not currently informed by quantitative HAP studies in LIC and MIC. While there is adequate evidence supporting causal relationships between HAP and respiratory disease, large cohort studies specifically examining relationships between quantitative measures of HAP exposure with cardiovascular disease are lacking. We aim to improve upon exposure proxies based on fuel type, and to reduce exposure misclassification by quantitatively measuring exposure across varying cooking fuel types and conditions in diverse geographies and socioeconomic settings. We leverage technology advancements to estimate household and personal PM 2.5 (particles below 2.5 μm in aerodynamic diameter) exposure within the large (N~250,000) multi-country (N~26) Prospective Urban and Rural Epidemiological (PURE) cohort study. Here, we detail the study protocol and the innovative methodologies being used to characterize HAP exposures, and their application in epidemiologic analyses. This study characterizes HAP PM 2.5 exposures for participants in rural communities in ten PURE countries with >10% solid fuel use at baseline (Bangladesh, Brazil, Chile, China, Colombia, India, Pakistan, South Africa, Tanzania, and Zimbabwe). PM 2.5 monitoring includes 48-h cooking area measurements in 4500 households and simultaneous personal monitoring of male and female pairs from 20% of the selected households. Repeat measurements occur in 20% of households to assess impacts of seasonality. Monitoring began in 2017, and will continue through 2019. The Ultrasonic Personal Aerosol Sampler (UPAS), a novel, robust, and inexpensive filter based monitor that is programmable through a dedicated mobile phone application is used for sampling. Pilot study field evaluation of cooking area measurements indicated high correlation between the UPAS and reference Harvard Impactors (r = 0.91; 95% CI: 0.84, 0.95; slope = 0.95). To facilitate tracking and to minimize contamination and analytical error, the samplers utilize barcoded filters and filter cartridges that are weighed pre- and post-sampling using a fully automated weighing system. Pump flow and pressure measurements, temperature and RH, GPS coordinates and semi-quantitative continuous particle mass concentrations based on filter differential pressure are uploaded to a central server automatically whenever the mobile phone is connected to the internet, with sampled data automatically screened for quality control parameters. A short survey is administered during the 48-h monitoring period. Post-weighed filters are further analyzed to estimate black carbon concentrations through a semi-automated, rapid, cost-effective image analysis approach. The measured PM 2.5 data will then be combined with PURE survey information on household characteristics and behaviours collected at baseline and during follow-up to develop quantitative HAP models for PM 2.5 exposures for all rural PURE participants (~50,000) and across different cooking fuel types within the 10 index countries. Both the measured (in the subset) and the modelled exposures will be used in separate longitudinal epidemiologic analyses to assess associations with cardiopulmonary mortality, and disease incidence. 
The collected data and resulting characterization of cooking area and personal PM 2.5 exposures in multiple rural communities from 10 countries will better inform exposure assessment as well as future epidemiologic analyses assessing the relationships between quantitative estimates of chronic HAP exposure with adult mortality and incident cardiovascular and respiratory disease. This will provide refined and more accurate exposure estimates in global CVD related exposure-response analyses. Copyright © 2018 Elsevier Ltd. All rights reserved.
Beckett, Kate; Earthy, Sarah; Sleney, Jude; Barnes, Jo; Kellezi, Blerina; Barker, Marcus; Clarkson, Julie; Coffey, Frank; Elder, Georgina; Kendrick, Denise
2014-01-01
Objective To explore views of service providers caring for injured people on: the extent to which services meet patients’ needs and their perspectives on factors contributing to any identified gaps in service provision. Design Qualitative study nested within a quantitative multicentre longitudinal study assessing longer term impact of unintentional injuries in working age adults. Sampling frame for service providers was based on patient-reported service use in the quantitative study, patient interviews and advice of previously injured lay research advisers. Service providers’ views were elicited through semistructured interviews. Data were analysed using thematic analysis. Setting Participants were recruited from a range of settings and services in acute hospital trusts in four study centres (Bristol, Leicester, Nottingham and Surrey) and surrounding areas. Participants 40 service providers from a range of disciplines. Results Service providers described two distinct models of trauma care: an ‘ideal’ model, informed by professional knowledge of the impact of injury and awareness of best models of care, and a ‘real’ model based on the realities of National Health Service (NHS) practice. Participants’ ‘ideal’ model was consistent with standards of high-quality effective trauma care and while there were examples of services meeting the ideal model, ‘real’ care could also be fragmented and inequitable with major gaps in provision. Service provider accounts provide evidence of comprehensive understanding of patients’ needs, awareness of best practice, compassion and research but reveal significant organisational and resource barriers limiting implementation of knowledge in practice. Conclusions Service providers envisage an ‘ideal’ model of trauma care which is timely, equitable, effective and holistic, but this can differ from the care currently provided. Their experiences provide many suggestions for service improvements to bridge the gap between ‘real’ and ‘ideal’ care. Using service provider views to inform service design and delivery could enhance the quality, patient experience and outcomes of care. PMID:25005598
A Proposed Model of Retransformed Qualitative Data within a Mixed Methods Research Design
ERIC Educational Resources Information Center
Palladino, John M.
2009-01-01
Most models of mixed methods research design provide equal emphasis of qualitative and quantitative data analyses and interpretation. Other models stress one method more than the other. The present article is a discourse about the investigator's decision to employ a mixed method design to examine special education teachers' advocacy and…
NASA Astrophysics Data System (ADS)
Buss, S.; Wernli, H.; Peter, T.; Kivi, R.; Bui, T. P.; Kleinböhl, A.; Schiller, C.
Stratospheric winter temperatures play a key role in the chain of microphysical and chemical processes that lead to the formation of polar stratospheric clouds (PSCs), chlorine activation and eventually to stratospheric ozone depletion. Here the temperature conditions during the Arctic winters 1999/2000 and 2000/2001 are quantitatively investigated using observed profiles of water vapour and nitric acid, and temperatures from high-resolution radiosondes and aircraft observations, global ECMWF and UKMO analyses and mesoscale model simulations over Scandinavia and Greenland. The ECMWF model resolves parts of the gravity wave activity and generally agrees well with the observations. However, for the very cold temperatures near the ice frost point the ECMWF analyses have a warm bias of 1-6 K compared to radiosondes. For the mesoscale model HRM, this bias is generally reduced due to a more accurate representation of gravity waves. Quantitative estimates of the impact of the mesoscale temperature perturbations indicate that over Scandinavia and Greenland the wave-induced stratospheric cooling (as simulated by the HRM) only moderately affects the estimated chlorine activation and homogeneous NAT particle formation, but strongly enhances the potential for ice formation.
Quantifying the effect of forests on frequency and intensity of rockfalls
NASA Astrophysics Data System (ADS)
Moos, Christine; Dorren, Luuk; Stoffel, Markus
2017-02-01
Forests serve as a natural means of protection against small rockfalls. Due to their barrier effect, they reduce the intensity and the propagation probability of falling rocks and thus reduce the occurrence frequency of a rockfall event for a given element at risk. However, despite established knowledge on the protective effect of forests, they are generally neglected in quantitative rockfall risk analyses. Their inclusion in quantitative rockfall risk assessment would, however, be necessary to express their efficiency in monetary terms and to allow comparison of forests with other protective measures, such as nets and dams. The goal of this study is to quantify the effect of forests on the occurrence frequency and intensity of rockfalls. We therefore defined an onset frequency of blocks based on a power-law magnitude-frequency distribution and determined their propagation probabilities on a virtual slope based on rockfall simulations. Simulations were run for different forest and non-forest scenarios under varying forest stand and terrain conditions. We analysed rockfall frequencies and intensities at five different distances from the release area. Based on two multivariate statistical prediction models, we investigated which of the terrain and forest characteristics predominantly drive the role of forest in reducing rockfall occurrence frequency and intensity and whether they are able to predict the effect of forest on rockfall risk. The rockfall occurrence frequency below forested slopes is reduced by approximately 10 to 90% compared to non-forested slope conditions, whereas rockfall intensity is reduced by 10 to 70%. This reduction increases with increasing slope length and decreases with decreasing tree density and tree diameter, with increasing rock volume, and in cases of clustered or gappy forest structures. The statistical prediction models reveal that the cumulative basal area of trees, block volume and horizontal forest structure represent key variables for the prediction of the protective effect of forests. In order to validate these results, models have to be tested on real slopes with a wide variation of terrain and forest conditions.
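The frequency side of such an analysis can be sketched as follows, assuming a truncated power-law block-volume distribution and a made-up forest-dependent propagation probability; the exponent, volume bounds, onset rate and probability shape are illustrative assumptions only.

```python
# Sketch of the frequency part of the analysis: block volumes follow a
# truncated power-law magnitude-frequency distribution, and an assumed
# forest-dependent propagation probability converts onset frequency into
# occurrence frequency at a given distance downslope. Values are illustrative.
import numpy as np

rng = np.random.default_rng(7)
v_min, v_max, b = 0.05, 5.0, 1.7           # volume bounds (m^3) and power-law exponent
onset_per_year = 2.0                        # blocks released per year (assumed)

# Inverse-transform sampling from p(V) ~ V^(-b) on [v_min, v_max]
u = rng.random(100000)
c = v_min ** (1 - b) + u * (v_max ** (1 - b) - v_min ** (1 - b))
volumes = c ** (1 / (1 - b))

def propagation_probability(volume, forested):
    """Assumed shape: larger blocks pass more easily; forest lowers the odds."""
    p_open = np.clip(0.2 + 0.15 * np.log10(volume / v_min), 0, 1)
    return p_open * (0.3 if forested else 1.0)

for forested in (False, True):
    p = propagation_probability(volumes, forested)
    freq = onset_per_year * p.mean()
    print(f"forested={forested!s:5}  occurrence frequency ≈ {freq:.2f} events/year")
```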
Qualitative, semi-quantitative, and quantitative simulation of the osmoregulation system in yeast
Pang, Wei; Coghill, George M.
2015-01-01
In this paper we demonstrate how Morven, a computational framework which can perform qualitative, semi-quantitative, and quantitative simulation of dynamical systems using the same model formalism, is applied to study the osmotic stress response pathway in yeast. First, the Morven framework itself is briefly introduced in terms of the model formalism employed and output format. We then built a qualitative model for the biophysical process of osmoregulation in yeast, and a global qualitative-level picture was obtained through qualitative simulation of this model. Furthermore, we constructed a Morven model based on an existing quantitative model of the osmoregulation system. This model was then simulated qualitatively, semi-quantitatively, and quantitatively. The obtained simulation results are presented with an analysis. Finally, the future development of the Morven framework for modelling dynamic biological systems is discussed. PMID:25864377
Analysis of selected data from the triservice missile data base
NASA Technical Reports Server (NTRS)
Allen, Jerry M.; Shaw, David S.; Sawyer, Wallace C.
1989-01-01
An extremely large, systematic, axisymmetric-body/tail-fin data base has been gathered through tests of an innovative missile model design which is described herein. These data were originally obtained for incorporation into a missile aerodynamics code based on engineering methods (Program MISSILE3), but these data are also valuable as diagnostic test cases for developing computational methods because of the individual-fin data included in the data base. Detailed analyses of four sample cases from these data are presented to illustrate interesting individual-fin force and moment trends. These samples quantitatively show how bow shock, fin orientation, fin deflection, and body vortices can produce strong, unusual, and computationally challenging effects on individual fin loads. Flow-visualization photographs are examined to provide physical insight into the cause of these effects.
A quantitative risk-based model for reasoning over critical system properties
NASA Technical Reports Server (NTRS)
Feather, M. S.
2002-01-01
This position paper suggests the use of a quantitative risk-based model to help support reasoning and decision making that spans many of the critical properties such as security, safety, survivability, fault tolerance, and real-time.
The impact of injector-based contrast agent administration in time-resolved MRA.
Budjan, Johannes; Attenberger, Ulrike I; Schoenberg, Stefan O; Pietsch, Hubertus; Jost, Gregor
2018-05-01
Time-resolved contrast-enhanced MR angiography (4D-MRA), which allows the simultaneous visualization of the vasculature and blood-flow dynamics, is widely used in clinical routine. In this study, the impact of two different contrast agent injection methods on 4D-MRA was examined in a controlled, standardized setting in an animal model. Six anesthetized Goettingen minipigs underwent two identical 4D-MRA examinations at 1.5 T in a single session. The contrast agent (0.1 mmol/kg body weight gadobutrol, followed by 20 ml saline) was injected using either manual injection or an automated injection system. A quantitative comparison of vascular signal enhancement and quantitative renal perfusion analyses were performed. Analysis of signal enhancement revealed higher peak enhancements and shorter time to peak intervals for the automated injection. Significantly different bolus shapes were found: automated injection resulted in a compact first-pass bolus shape clearly separated from the recirculation while manual injection resulted in a disrupted first-pass bolus with two peaks. In the quantitative perfusion analyses, statistically significant differences in plasma flow values were found between the injection methods. The results of both qualitative and quantitative 4D-MRA depend on the contrast agent injection method, with automated injection providing more defined bolus shapes and more standardized examination protocols. • Automated and manual contrast agent injection result in different bolus shapes in 4D-MRA. • Manual injection results in an undefined and interrupted bolus with two peaks. • Automated injection provides more defined bolus shapes. • Automated injection can lead to more standardized examination protocols.
A quantitative dynamic systems model of health-related quality of life among older adults
Roppolo, Mattia; Kunnen, E Saskia; van Geert, Paul L; Mulasso, Anna; Rabaglietti, Emanuela
2015-01-01
Health-related quality of life (HRQOL) is a person-centered concept. The analysis of HRQOL is highly relevant in the aged population, which is generally suffering from health decline. Starting from a conceptual dynamic systems model that describes the development of HRQOL in individuals over time, this study aims to develop and test a quantitative dynamic systems model, in order to reveal the possible dynamic trends of HRQOL among older adults. The model is tested in different ways: first, with a calibration procedure to test whether the model produces theoretically plausible results, and second, with a preliminary validation procedure using empirical data of 194 older adults. This first validation tested the prediction that given a particular starting point (first empirical data point), the model will generate dynamic trajectories that lead to the observed endpoint (second empirical data point). The analyses reveal that the quantitative model produces theoretically plausible trajectories, thus providing support for the calibration procedure. Furthermore, the analyses of validation show a good fit between empirical and simulated data. In fact, no differences were found in the comparison between empirical and simulated final data for the same subgroup of participants, whereas the comparison between different subgroups of people resulted in significant differences. These data provide an initial basis of evidence for the dynamic nature of HRQOL during the aging process. Therefore, these data may give new theoretical and applied insights into the study of HRQOL and its development with time in the aging population. PMID:26604722
Contents of microscopic fungi in dusts coming from cereal analysis laboratories.
Szwajkowska-Michalek, Lidia; Stuper, Kinga; Lakomy, Piotr; Matysiak, Anna; Perkowski, Juliusz
2010-01-01
Microscopic fungi - components of bioaerosol found in the workplace environment of individuals employed in the agricultural sector - constitute a considerable hazard for their health. This study includes quantitative and qualitative analyses of mycobionta contained in 20 samples of dusts collected from laboratories conducting analyses of cereals. A total of 27 species of viable microscopic fungi were isolated. The most frequently isolated genera were Penicillium and Aspergillus, accounting for 27 percent and 26 percent of the analyzed isolates, respectively. The content of fungal biomass was determined quantitatively using a fungal marker, ergosterol (ERG). Concentrations of this metabolite for all samples ranged from 0.48 mg/kg to 212.36 mg/kg. Based on the analyses, it may be stated that the concentration of microfungi in settled dust from laboratories conducting analyses of cereals was varied, and in several cases markedly exceeded admissible concentration levels.
A Coupled Simulation Architecture for Agent-Based/Geohydrological Modelling
NASA Astrophysics Data System (ADS)
Jaxa-Rozen, M.
2016-12-01
The quantitative modelling of social-ecological systems can provide useful insights into the interplay between social and environmental processes, and their impact on emergent system dynamics. However, such models should acknowledge the complexity and uncertainty of both of the underlying subsystems. For instance, the agent-based models which are increasingly popular for groundwater management studies can be made more useful by directly accounting for the hydrological processes which drive environmental outcomes. Conversely, conventional environmental models can benefit from an agent-based depiction of the feedbacks and heuristics which influence the decisions of groundwater users. From this perspective, this work describes a Python-based software architecture which couples the popular NetLogo agent-based platform with the MODFLOW/SEAWAT geohydrological modelling environment. This approach enables users to implement agent-based models in NetLogo's user-friendly platform, while benefiting from the full capabilities of MODFLOW/SEAWAT packages or reusing existing geohydrological models. The software architecture is based on the pyNetLogo connector, which provides an interface between the NetLogo agent-based modelling software and the Python programming language. This functionality is then extended and combined with Python's object-oriented features, to design a simulation architecture which couples NetLogo with MODFLOW/SEAWAT through the FloPy library (Bakker et al., 2016). The Python programming language also provides access to a range of external packages which can be used for testing and analysing the coupled models, which is illustrated for an application of Aquifer Thermal Energy Storage (ATES).
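A skeletal version of the coupling loop might look like the sketch below, assuming the pyNetLogo connector and FloPy are installed (import names vary between versions); the model file names ('abm.nlogo', 'gw_model.nam') and the NetLogo reporters and commands ('total-pumping', 'water-level') are hypothetical placeholders, and rebuilding the MODFLOW stress packages from agent decisions is omitted for brevity.

```python
# Skeletal coupling loop in the spirit of the described architecture: NetLogo
# agents decide pumping, MODFLOW (via FloPy) computes heads, and the heads are
# fed back to the agents. File names, reporters and commands below are
# hypothetical placeholders, not part of any published model.
import pyNetLogo
import flopy

netlogo = pyNetLogo.NetLogoLink(gui=False)
netlogo.load_model("abm.nlogo")                    # placeholder agent-based model
netlogo.command("setup")

mf = flopy.modflow.Modflow.load("gw_model.nam")    # placeholder existing MODFLOW model

for year in range(10):
    netlogo.command("go")                          # agents choose pumping rates
    pumping = netlogo.report("total-pumping")      # aggregate demand (placeholder reporter)

    # In a full implementation the WEL package would be rebuilt from `pumping`
    # before each run; omitted here for brevity.
    mf.write_input()
    mf.run_model(silent=True)

    heads = flopy.utils.HeadFile("gw_model.hds").get_data()  # latest head array
    netlogo.command(f"set water-level {heads.mean():.2f}")   # feedback to agents

netlogo.kill_workspace()
```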
Assessing deep and shallow learning methods for quantitative prediction of acute chemical toxicity.
Liu, Ruifeng; Madore, Michael; Glover, Kyle P; Feasel, Michael G; Wallqvist, Anders
2018-05-02
Animal-based methods for assessing chemical toxicity are struggling to meet testing demands. In silico approaches, including machine-learning methods, are promising alternatives. Recently, deep neural networks (DNNs) were evaluated and reported to outperform other machine-learning methods for quantitative structure-activity relationship modeling of molecular properties. However, most of the reported performance evaluations relied on global performance metrics, such as the root mean squared error (RMSE) between the predicted and experimental values of all samples, without considering the impact of sample distribution across the activity spectrum. Here, we carried out an in-depth analysis of DNN performance for quantitative prediction of acute chemical toxicity using several datasets. We found that the overall performance of DNN models on datasets of up to 30,000 compounds was similar to that of random forest (RF) models, as measured by the RMSE and correlation coefficients between the predicted and experimental results. However, our detailed analyses demonstrated that global performance metrics are inappropriate for datasets with a highly uneven sample distribution, because they show a strong bias for the most populous compounds along the toxicity spectrum. For highly toxic compounds, DNN and RF models trained on all samples performed much worse than the global performance metrics indicated. Surprisingly, our variable nearest neighbor method, which utilizes only structurally similar compounds to make predictions, performed reasonably well, suggesting that information of close near neighbors in the training sets is a key determinant of acute toxicity predictions.
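The point about global versus stratified metrics can be illustrated with the sketch below, in which a random forest is trained on synthetic, unevenly distributed toxicity data and the RMSE is reported both overall and on the sparse highly toxic tail; the data generation and the toxicity threshold are assumptions, not the study's datasets.

```python
# Sketch of global versus stratified metrics: fit a random forest on synthetic,
# unevenly distributed toxicity data and compare the overall RMSE with the RMSE
# restricted to the sparse, highly toxic tail. Illustrative data only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n = 5000
X = rng.normal(size=(n, 20))                      # stand-in molecular descriptors
toxicity = X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n)
keep = rng.random(n) < np.where(toxicity > 2.0, 0.15, 1.0)   # under-sample the toxic tail
X, toxicity = X[keep], toxicity[keep]

X_tr, X_te, y_tr, y_te = train_test_split(X, toxicity, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
tail = y_te > 2.0                                 # "highly toxic" stratum (assumed cutoff)
print(f"global RMSE: {rmse(y_te, pred):.2f}")
print(f"RMSE on highly toxic compounds only: {rmse(y_te[tail], pred[tail]):.2f}")
```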
Quantitative Investigation of the Role of Intra-/Intercellular Dynamics in Bacterial Quorum Sensing.
Leaman, Eric J; Geuther, Brian Q; Behkam, Bahareh
2018-04-20
Bacteria utilize diffusible signals to regulate population density-dependent coordinated gene expression in a process called quorum sensing (QS). While the intracellular regulatory mechanisms of QS are well-understood, the effect of spatiotemporal changes in the population configuration on the sensitivity and robustness of the QS response remains largely unexplored. Using a microfluidic device, we quantitatively characterized the emergent behavior of a population of swimming E. coli bacteria engineered with the lux QS system and a GFP reporter. We show that the QS activation time follows a power law with respect to bacterial population density, but this trend is disrupted significantly by microscale variations in population configuration and genetic circuit noise. We then developed a computational model that integrates population dynamics with genetic circuit dynamics to enable accurate (less than 7% error) quantitation of the bacterial QS activation time. Through modeling and experimental analyses, we show that changes in spatial configuration of swimming bacteria can drastically alter the QS activation time, by up to 22%. The integrative model developed herein also enables examination of the performance robustness of synthetic circuits with respect to growth rate, circuit sensitivity, and the population's initial size and spatial structure. Our framework facilitates quantitative tuning of microbial systems performance through rational engineering of synthetic ribosomal binding sites. We have demonstrated this through modulation of QS activation time over an order of magnitude. Altogether, we conclude that predictive engineering of QS-based bacterial systems requires not only the precise temporal modulation of gene expression (intracellular dynamics) but also accounting for the spatiotemporal changes in population configuration (intercellular dynamics).
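A small sketch of fitting a power-law relation between activation time and population density is given below, using a straight-line fit in log-log space on synthetic data; the density range, coefficients and noise model are illustrative assumptions, not the paper's estimates.

```python
# Sketch of estimating a power-law relation, t_activation ≈ a * density^(-b),
# of the kind reported for QS activation time versus population density, via a
# straight-line fit in log-log space. Synthetic data; illustrative values only.
import numpy as np

rng = np.random.default_rng(9)
density = np.logspace(6, 9, 30)                          # cells/mL (assumed range)
t_true = 500.0 * density ** -0.35                        # assumed relation, minutes
t_obs = t_true * rng.lognormal(0.0, 0.1, density.size)   # multiplicative noise

slope, intercept = np.polyfit(np.log10(density), np.log10(t_obs), 1)
a_hat, b_hat = 10 ** intercept, -slope
print(f"fitted activation time ≈ {a_hat:.1f} * density^(-{b_hat:.2f})")
```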
Mabikwa, Onkabetse V; Greenwood, Darren C; Baxter, Paul D; Fleming, Sarah J
2017-03-14
One aspect to consider when reporting results of observational studies in epidemiology is how quantitative risk factors are analysed. The STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) guidelines recommend that researchers describe how they handle quantitative variables when analysing data. For categorised quantitative variables, the authors are required to provide reasons and justifications informing their practice. We investigated and assessed the practices and reporting of categorised quantitative variables in epidemiology. The assessment was based on five medical journals that publish epidemiological research. Observational studies published between April and June 2015 and investigating the relationships between quantitative exposures (or risk factors) and the outcomes were considered for assessment. A standard form was used to collect the data, and the reporting patterns amongst eligible studies were quantified and described. Out of 61 articles assessed for eligibility, 23 observational studies were included in the assessment. Categorisation of quantitative exposures occurred in 61% of these studies, and reasons informing the practice were rarely provided. Only one article explained the choice of categorisation in the analysis. Transformation of quantitative exposures into four or five groups was common and dominant amongst studies using equally spaced categories. Dichotomisation was not popular; the practice featured in one article. Overall, the majority (86%) of the studies preferred ordered or arbitrary group categories. Other criteria used to decide categorical boundaries were based on established guidelines such as consensus statements and WHO standards. Categorisation of continuous variables remains a dominant practice in epidemiological studies. The reasons informing the practice of categorisation within published work are limited and remain unknown in most articles. The existing STROBE guidelines could provide stronger recommendations on reporting quantitative risk factors in epidemiology.
Jin, Min Jin; Kim, Ji Sun; Kim, Sungkean; Hyun, Myoung Ho; Lee, Seung-Hwan
2017-01-01
Childhood trauma is known to be related to emotional problems, quantitative electroencephalography (EEG) indices, and heart rate variability (HRV) indices in adulthood, whereas the directions among these factors have not yet been reported. This study aimed to evaluate pathway models in young and healthy adults: (1) one with physiological factors first and emotional problems later in adulthood as results of childhood trauma, and (2) one with emotional problems first and physiological factors later. A total of 103 non-clinical volunteers were included. Self-reported psychological scales, including the Childhood Trauma Questionnaire (CTQ), State-Trait Anxiety Inventory, Beck Depression Inventory, and Affective Lability Scale, were administered. For physiological evaluation, EEG was recorded during a resting eyes-closed condition in addition to resting-state HRV, and quantitative power analyses of eight EEG bands and three HRV components were calculated in the frequency domain. After a normality test, Pearson's correlation analyses were conducted to construct path models, and path analyses were conducted to examine them. The CTQ score was significantly correlated with depression, state and trait anxiety, affective lability, and HRV low-frequency (LF) power. LF power was associated with beta2 (18-22 Hz) power, which was related to affective lability. Affective lability was associated with state anxiety, trait anxiety, and depression. Based on the correlations and the hypothesis, two models were composed: a model with pathways from CTQ score to affective lability, and a model with pathways from CTQ score to LF power. The second model showed significantly better fit than the first model (AIC model1 = 63.403 > AIC model2 = 46.003), which indicated that childhood trauma could affect emotion first, and then physiology. The specific directions of the relationships among emotions, the EEG, and HRV in adulthood after childhood trauma were discussed.
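The model comparison above rests on the Akaike information criterion; a minimal, generic sketch of that comparison (assuming hypothetical log-likelihoods and parameter counts, not the study's fitted values) is:

```python
def aic(log_likelihood: float, n_params: int) -> float:
    """Akaike information criterion, 2k - 2*lnL; lower values indicate better fit."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fitted log-likelihoods and parameter counts for the two path models.
aic_model1 = aic(log_likelihood=-21.7, n_params=10)  # physiology first, emotion later
aic_model2 = aic(log_likelihood=-13.0, n_params=10)  # emotion first, physiology later

best = "model 2" if aic_model2 < aic_model1 else "model 1"
print(f"AIC1 = {aic_model1:.3f}, AIC2 = {aic_model2:.3f}; preferred: {best}")
```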
Study of a method for identifying water-injected meat based on low-field nuclear magnetic resonance
NASA Astrophysics Data System (ADS)
Xu, Jianmei; Lin, Qing; Yang, Fang; Zheng, Zheng; Ai, Zhujun
2018-01-01
The aim of this study was to apply low-field nuclear magnetic resonance to characterise the regular variation of the transverse relaxation (T2) spectral parameters of water-injected meat with the proportion of injected water. On this basis, one-way ANOVA and discriminant analysis were used to analyse how well these parameters distinguish the water-injection proportion, and a model for identifying water-injected meat was established. The results show that, except for T21b, T22e and T23b, the parameters of the T2 relaxation spectrum changed regularly with the water-injection proportion. Different parameters differed in their ability to distinguish the water-injection proportion. With S, P22 and T23m as the prediction variables, Fisher and Bayes discriminant models were established, enabling qualitative and quantitative classification of water-injected meat. The correct discrimination rates in both validation and cross-validation were 88%, and the model was stable.
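A minimal sketch of the discriminant-analysis step, assuming synthetic T2-spectrum features rather than the paper's measured parameters (scikit-learn's LinearDiscriminantAnalysis stands in for the Fisher model; the Bayes variant is omitted):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic feature matrix: rows are meat samples, columns stand in for the
# selected T2 parameters (e.g. S, P22, T23m); y encodes the water-injection class.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
y = np.repeat([0, 1, 2], 20)            # e.g. 0%, 10%, 20% injected water
X[y == 1] += 1.0                         # separate the synthetic classes
X[y == 2] += 2.0

lda = LinearDiscriminantAnalysis()       # Fisher-type linear discriminant
acc = cross_val_score(lda, X, y, cv=5)   # cross-validated discrimination rate
print(f"mean cross-validated accuracy: {acc.mean():.2f}")
```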
Etzioni, Ruth; Gulati, Roman
2013-04-01
In our article about limitations of basing screening policy on screening trials, we offered several examples of ways in which modeling, using data from large screening trials and population trends, provided insights that differed somewhat from those based only on empirical trial results. In this editorial, we take a step back and consider the general question of whether randomized screening trials provide the strongest evidence for clinical guidelines concerning population screening programs. We argue that randomized trials provide a process that is designed to protect against certain biases but that this process does not guarantee that inferences based on empirical results from screening trials will be unbiased. Appropriate quantitative methods are key to obtaining unbiased inferences from screening trials. We highlight several studies in the statistical literature demonstrating that conventional survival analyses of screening trials can be misleading and list a number of key questions concerning screening harms and benefits that cannot be answered without modeling. Although we acknowledge the centrality of screening trials in the policy process, we maintain that modeling constitutes a powerful tool for screening trial interpretation and screening policy development.
ERIC Educational Resources Information Center
Lee, Young-Jin
2017-01-01
Purpose: The purpose of this paper is to develop a quantitative model of problem solving performance of students in the computer-based mathematics learning environment. Design/methodology/approach: Regularized logistic regression was used to create a quantitative model of problem solving performance of students that predicts whether students can…
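A minimal sketch of the regularized logistic regression named in the abstract, using scikit-learn on synthetic learning-environment features (the feature names and data are assumptions, not the paper's variables):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic log data: each row is a student, columns are features extracted from
# the learning environment (e.g. hints used, time on task, attempts); the label
# is whether the problem was solved.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X @ np.array([1.2, -0.8, 0.5, 0.0]) + rng.normal(scale=0.5, size=200)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
clf = LogisticRegression(penalty="l2", C=1.0)   # L2-regularized logistic regression
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```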
Qualitative, semi-quantitative, and quantitative simulation of the osmoregulation system in yeast.
Pang, Wei; Coghill, George M
2015-05-01
In this paper we demonstrate how Morven, a computational framework which can perform qualitative, semi-quantitative, and quantitative simulation of dynamical systems using the same model formalism, is applied to study the osmotic stress response pathway in yeast. First, the Morven framework itself is briefly introduced in terms of the model formalism employed and output format. We then built a qualitative model for the biophysical process of osmoregulation in yeast, and a global qualitative-level picture was obtained through qualitative simulation of this model. Furthermore, we constructed a Morven model based on an existing quantitative model of the osmoregulation system. This model was then simulated qualitatively, semi-quantitatively, and quantitatively. The obtained simulation results are presented with an analysis. Finally, the future development of the Morven framework for modelling dynamic biological systems is discussed. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
García-Rodríguez, M. J.; Malpica, J. A.; Benito, B.
2009-04-01
In recent years, interest in landslide hazard assessment studies has increased substantially. Such studies are appropriate for evaluation and mitigation plan development in landslide-prone areas. Several techniques are available for landslide hazard research at a regional scale. Generally, they can be classified into two groups: qualitative and quantitative methods. Most qualitative methods tend to be subjective, since they depend on expert opinion and represent hazard levels in descriptive terms. Quantitative methods, on the other hand, are objective and are commonly used because of the correlation between the instability factors and the location of landslides. Within this group, statistical approaches and newer heuristic techniques based on artificial intelligence (artificial neural networks (ANN), fuzzy logic, etc.) provide rigorous analysis for assessing landslide hazard over large regions. However, they depend on the qualitative and quantitative data, scale, types of movements and characteristic factors used. We analysed and compared approaches for assessing earthquake-triggered landslide hazard using logistic regression (LR) and artificial neural networks (ANN) with a back-propagation learning algorithm. An application was developed for El Salvador, a country in Central America where earthquake-triggered landslides are a common phenomenon. In a first phase, we analysed the susceptibility and hazard associated with the seismic scenario of the 13 January 2001 earthquake, calibrating the models using data from the landslide inventory for this scenario. These analyses require input variables representing physical parameters that contribute to the initiation of slope instability, for example slope gradient, elevation, aspect, mean annual precipitation, lithology, land use, and terrain roughness, while the occurrence or non-occurrence of landslides is treated as the dependent variable. The results of the landslide susceptibility analysis were checked against landslide location data and show a high concordance between the landslide inventory and the estimated high-susceptibility zones, with an agreement of 95.1% for the ANN model and 89.4% for the LR model. In addition, we compared both techniques using the Receiver Operating Characteristic (ROC) curve, a plot of sensitivity vs. (1 - specificity) for a binary classifier as a function of its discrimination threshold, and calculated the Area Under the ROC curve (AUROC) for each model. Finally, the models were used to develop a new probabilistic landslide hazard map for future events, obtained by combining the expected triggering factor (calculated earthquake ground motion) for a return period of 475 years with the susceptibility map.
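A hedged sketch of the LR-versus-ANN comparison with AUROC scoring, using scikit-learn on synthetic instability factors (the factors and data are placeholders, not the El Salvador inventory):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic instability factors per terrain cell (slope, elevation, aspect, ...)
# and a binary landslide / no-landslide label.
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 6))
y = (X[:, 0] * 1.5 + X[:, 1] - 0.5 * X[:, 2] + rng.normal(size=1000)) > 0.5

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)
scaler = StandardScaler().fit(X_tr)               # fit on training data only
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

lr = LogisticRegression().fit(X_tr, y_tr)
ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=2).fit(X_tr, y_tr)

for name, model in [("LR", lr), ("ANN", ann)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUROC = {auc:.3f}")
```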
Probabilistic flood damage modelling at the meso-scale
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno
2014-05-01
Decisions on flood risk management and adaptation are usually based on risk analyses. Such analyses are associated with significant uncertainty, all the more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice for flood risk assessments. Most damage models have in common that complex damaging processes are described by simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood damage models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we show how the model BT-FLEMO (Bagging decision Tree based Flood Loss Estimation MOdel) can be applied on the meso-scale, namely on the basis of ATKIS land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany. The application of BT-FLEMO provides a probability distribution of estimated damage to residential buildings per municipality. Validation is undertaken on the one hand via a comparison with eight other damage models, including stage-damage functions as well as multi-variate models, and on the other hand by comparing the results with official damage data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of damage estimation remain high. The significant advantage of the probabilistic flood loss estimation model BT-FLEMO is that it inherently provides quantitative information about the uncertainty of the prediction. Reference: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64.
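A minimal sketch of the bagging-decision-tree idea behind BT-FLEMO, assuming synthetic predictors and using scikit-learn's BaggingRegressor (whose default base estimator is a decision tree); this illustrates the probabilistic output, not the published model:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

# Synthetic predictors per land-use unit (water depth, building value,
# precaution indicator, ...) and a relative loss in [0, 1].
rng = np.random.default_rng(3)
X = rng.uniform(size=(500, 4))
y = np.clip(0.6 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.05, size=500), 0, 1)

# Default base estimator is a decision tree, i.e. bagging decision trees.
model = BaggingRegressor(n_estimators=100, random_state=3).fit(X, y)

# Per-tree predictions approximate a predictive distribution of damage,
# i.e. the probabilistic output emphasised in the abstract.
unit = X[:1]
per_tree = np.array([tree.predict(unit)[0] for tree in model.estimators_])
print(f"mean loss ratio = {per_tree.mean():.3f}, 90% interval = "
      f"({np.percentile(per_tree, 5):.3f}, {np.percentile(per_tree, 95):.3f})")
```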
Information measures for terrain visualization
NASA Astrophysics Data System (ADS)
Bonaventura, Xavier; Sima, Aleksandra A.; Feixas, Miquel; Buckley, Simon J.; Sbert, Mateu; Howell, John A.
2017-02-01
Many quantitative and qualitative studies in geoscience research are based on digital elevation models (DEMs) and 3D surfaces to aid understanding of natural and anthropogenically-influenced topography. As well as their quantitative uses, the visual representation of DEMs can add valuable information for identifying and interpreting topographic features. However, choice of viewpoints and rendering styles may not always be intuitive, especially when terrain data are augmented with digital image texture. In this paper, an information-theoretic framework for object understanding is applied to terrain visualization and terrain view selection. From a visibility channel between a set of viewpoints and the component polygons of a 3D terrain model, we obtain three polygonal information measures. These measures are used to visualize the information associated with each polygon of the terrain model. In order to enhance the perception of the terrain's shape, we explore the effect of combining the calculated information measures with the supplementary digital image texture. From polygonal information, we also introduce a method to select a set of representative views of the terrain model. Finally, we evaluate the behaviour of the proposed techniques using example datasets. A publicly available framework for both the visualization and the view selection of a terrain has been created in order to provide the possibility to analyse any terrain model.
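One widely used polygonal information measure is the viewpoint entropy of the projected polygon areas; the sketch below illustrates the idea on a toy visibility matrix and is a simplified proxy for the visibility-channel measures used in the paper:

```python
import numpy as np

def viewpoint_entropy(projected_areas: np.ndarray) -> float:
    """Shannon entropy of the normalised projected polygon areas seen from one viewpoint.

    projected_areas[i] is the area (e.g. in pixels) that polygon i covers in the
    rendering from this viewpoint; zero means the polygon is not visible.
    """
    p = projected_areas / projected_areas.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy visibility matrix: rows = candidate viewpoints, columns = terrain polygons.
vis = np.array([
    [120.0, 300.0,  80.0,   0.0, 500.0],
    [250.0, 250.0, 250.0, 250.0,   0.0],
    [ 10.0, 900.0,  30.0,  40.0,  20.0],
])
scores = [viewpoint_entropy(row) for row in vis]
best = int(np.argmax(scores))
print(f"entropies = {np.round(scores, 3)}; most informative viewpoint: {best}")
```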
Nguyen Hoang, Anh Thu; Chen, Puran; Björnfot, Sofia; Högstrand, Kari; Lock, John G.; Grandien, Alf; Coles, Mark; Svensson, Mattias
2014-01-01
This manuscript describes technical advances allowing manipulation and quantitative analyses of human DC migratory behavior in lung epithelial tissue. DCs are hematopoietic cells essential for the maintenance of tissue homeostasis and the induction of tissue-specific immune responses. Important functions include cytokine production and migration in response to infection for the induction of proper immune responses. To design appropriate strategies to exploit human DC functional properties in lung tissue for the purpose of clinical evaluation, e.g., candidate vaccination and immunotherapy strategies, we have developed a live-imaging assay based on our previously described organotypic model of the human lung. This assay allows provocations and subsequent quantitative investigations of DC functional properties under conditions mimicking morphological and functional features of the in vivo parental tissue. We present protocols to set up and prepare tissue models for 4D (x, y, z, time) fluorescence-imaging analysis that allow spatial and temporal studies of human DCs in live epithelial tissue, followed by flow cytometry analysis of DCs retrieved from digested tissue models. This model system can be useful for elucidating incompletely defined pathways controlling DC functional responses to infection and inflammation in lung epithelial tissue, as well as the efficacy of locally administered candidate interventions. PMID:24899587
The U.S. EPA is currently evaluating rapid, real-time quantitative PCR (qPCR) methods for determining recreational water quality based on measurements of fecal indicator bacteria DNA sequences. In order to potentially use qPCR for other Clean Water Act needs, such as updating cri...
Assimilation of ZDR Columns for Improving the Spin-Up and Forecasts of Convective Storms
NASA Astrophysics Data System (ADS)
Carlin, J.; Gao, J.; Snyder, J.; Ryzhkov, A.
2017-12-01
A primary motivation for assimilating radar reflectivity data is the reduction of spin-up time for modeled convection. To accomplish this, cloud analysis techniques seek to induce and sustain convective updrafts in storm-scale models by inserting temperature and moisture increments and hydrometeor mixing ratios into the model analysis from simple relations with reflectivity. Polarimetric radar data provide additional insight into the microphysical and dynamic structure of convection. In particular, the radar meteorology community has known for decades that convective updrafts cause, and are typically co-located with, differential reflectivity (ZDR) columns - vertical protrusions of enhanced ZDR above the environmental 0˚C level. Despite these benefits, limited work has been done thus far to assimilate dual-polarization radar data into numerical weather prediction models. In this study, we explore the utility of assimilating ZDR columns to improve storm-scale model analyses and forecasts of convection. We modify the existing Advanced Regional Prediction System's (ARPS) cloud analysis routine to adjust model temperature and moisture state variables using detected ZDR columns as proxies for convective updrafts, and compare the resultant cycled analyses and forecasts with those from the original reflectivity-based cloud analysis formulation. Results indicate qualitative and quantitative improvements from assimilating ZDR columns, including more coherent analyzed updrafts, forecast updraft helicity swaths that better match radar-derived rotation tracks, more realistic forecast reflectivity fields, and larger equitable threat scores. These findings support the use of dual-polarization radar signatures to improve storm-scale model analyses and forecasts.
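The equitable threat scores mentioned above follow the standard contingency-table definition; a minimal sketch (with made-up verification counts) is:

```python
def equitable_threat_score(hits: int, misses: int, false_alarms: int,
                           correct_negatives: int) -> float:
    """Equitable threat score (Gilbert skill score) for a binary forecast."""
    total = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / total
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

# Made-up verification counts for forecast reflectivity exceeding a threshold.
ets = equitable_threat_score(hits=120, misses=40, false_alarms=60, correct_negatives=780)
print(f"ETS = {ets:.3f}")
```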
Pulverer, Walter; Hofner, Manuela; Preusser, Matthias; Dirnberger, Elisabeth; Hainfellner, Johannes A; Weinhaeusel, Andreas
2014-01-01
MGMT promoter methylation is associated with favorable prognosis and chemosensitivity in glioblastoma multiforme (GBM), especially in elderly patients. We aimed to develop a simple methylation-sensitive restriction enzyme (MSRE)-based quantitative PCR (qPCR) assay, allowing the quantification of MGMT promoter methylation. DNA was extracted from non-neoplastic brain (n = 24) and GBM samples (n = 20) under 3 different sample conservation conditions (-80 °C; formalin-fixed and paraffin-embedded (FFPE); RCL2-fixed). We evaluated the suitability of each fixation method with respect to the MSRE-coupled qPCR methylation analyses. Methylation data were validated by MALDI-TOF. qPCR was used for evaluation of alternative tissue conservation procedures. DNA from FFPE tissue failed reliable testing; DNA from both RCL2-fixed and fresh frozen tissues performed equally well and was further used for validation of the quantitative MGMT methylation assay (limit of detection (LOD): 19.58 pg), using each individual's undigested sample DNA for calibration. MGMT methylation analysis in non-neoplastic brain identified a background methylation of 0.10 ± 0.11%, which we used for defining a cut-off of 0.32% for patient stratification. Of the GBM patients, 9 were MGMT methylation-positive (range: 0.56-91.95%) and 11 tested negative. MALDI-TOF measurements resulted in a concordant classification of 94% of GBM samples in comparison to qPCR. The presented methodology allows quantitative MGMT promoter methylation analyses. An amount of 200 ng DNA is sufficient for triplicate analyses including control reactions and individual calibration curves, thus excluding any DNA quality-derived bias. The combination of RCL2 fixation and quantitative methylation analysis improves pathological routine examination when histological and molecular analyses on limited amounts of tumor sample are necessary for patient stratification.
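A hedged sketch of how percent methylation can be derived from digested and undigested qPCR reactions via the standard delta-Ct relation (assuming roughly 100% PCR efficiency; the exact assay calculation and the Ct values below are assumptions, not the published calibration):

```python
def percent_methylation(ct_digested: float, ct_undigested: float,
                        efficiency: float = 2.0) -> float:
    """Fraction of template surviving MSRE digestion, expressed in percent.

    Methylated CpGs block the methylation-sensitive enzyme, so the surviving
    (amplifiable) template approximates the methylated fraction:
    efficiency**-(Ct_digested - Ct_undigested) * 100.
    """
    return efficiency ** -(ct_digested - ct_undigested) * 100.0

cutoff = 0.32  # percent; background-derived threshold as in the abstract
samples = {"GBM_A": (27.4, 26.9), "GBM_B": (36.0, 26.8)}  # hypothetical Ct pairs
for name, (ct_dig, ct_undig) in samples.items():
    m = percent_methylation(ct_dig, ct_undig)
    print(f"{name}: {m:.2f}% methylated -> {'positive' if m > cutoff else 'negative'}")
```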
Determination of fat and total protein content in milk using conventional digital imaging.
Kucheryavskiy, Sergey; Melenteva, Anastasiia; Bogomolov, Andrey
2014-04-01
The applicability of conventional digital imaging to the quantitative determination of fat and total protein in cow's milk, based on the phenomenon of light scatter, has been demonstrated. A new algorithm for extracting features from digital images of milk samples has been developed. The algorithm takes into account the spatial distribution of light diffusely transmitted through a sample. The proposed method has been tested on two sample sets prepared from industrial raw milk standards with variable fat and protein content. Partial Least-Squares (PLS) regression on the features calculated from images of monochromatically illuminated milk samples resulted in models with high prediction performance when the sets were analysed separately (best models with cross-validated R(2)=0.974 for protein and R(2)=0.973 for fat content). However, when the sets were analysed jointly, the obtained results were significantly worse (best models with cross-validated R(2)=0.890 for fat content and R(2)=0.720 for protein content). The results have been compared with a previously published Vis/SW-NIR spectroscopic study of similar samples. Copyright © 2013 Elsevier B.V. All rights reserved.
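A minimal sketch of the PLS calibration with cross-validated R², using scikit-learn on synthetic image-derived features (the data are placeholders, not the milk-standard measurements):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

# Synthetic image-derived features (columns) for each milk sample (rows)
# and a reference fat content from standard wet chemistry.
rng = np.random.default_rng(4)
X = rng.normal(size=(40, 12))
fat = 3.5 + 0.8 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(scale=0.1, size=40)

pls = PLSRegression(n_components=3)
fat_cv = cross_val_predict(pls, X, fat, cv=5).ravel()   # cross-validated predictions
print(f"cross-validated R2 = {r2_score(fat, fat_cv):.3f}")
```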
Analyses of Mobilization Manpower Supply and Demand.
1982-03-01
AD-A130 148. Analyses of Mobilization Manpower Supply and Demand. Administrative Sciences Corp, Springfield, VA, March 1982. ...79-C-0527 for use in identifying and quantifying issues in the CPAM process, and to employ the model for selected quantitative and qualitative analyses... nurses and corpsmen) to operate on a Commander FX Microcomputer, to be used by the Bureau of Medicine and Surgery to develop inputs for Navy-wide...
Han, Xianlin; Yang, Kui; Gross, Richard W.
2011-01-01
Since our last comprehensive review on multi-dimensional mass spectrometry-based shotgun lipidomics (Mass Spectrom. Rev. 24 (2005), 367), many new developments in the field of lipidomics have occurred. These developments include new strategies and refinements for shotgun lipidomic approaches that use direct infusion, including novel fragmentation strategies, identification of multiple new informative dimensions for mass spectrometric interrogation, and the development of new bioinformatic approaches for enhanced identification and quantitation of the individual molecular constituents that comprise each cell’s lipidome. Concurrently, advances in liquid chromatography-based platforms and novel strategies for quantitative matrix-assisted laser desorption/ionization mass spectrometry for lipidomic analyses have been developed. Through the synergistic use of this repertoire of new mass spectrometric approaches, the power and scope of lipidomics has been greatly expanded to accelerate progress toward the comprehensive understanding of the pleiotropic roles of lipids in biological systems. PMID:21755525
75 FR 29537 - Draft Transportation Conformity Guidance for Quantitative Hot-spot Analyses in PM2.5
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-26
... Quantitative Hot-spot Analyses in PM 2.5 and PM 10 Nonattainment and Maintenance Areas AGENCY: Environmental... finalized, this guidance would help state and local agencies complete quantitative PM 2.5 and PM 10 hot-spot...), EPA stated that quantitative PM 2.5 and PM 10 hot-spot analyses would not be required until EPA...
Lassiter, Jonathan M; Parsons, Jeffrey T
2016-02-01
This paper presents a systematic review of the quantitative HIV research that assessed the relationships between religion, spirituality, HIV syndemics, and individual HIV syndemics-related health conditions (e.g. depression, substance abuse, HIV risk) among men who have sex with men (MSM) in the United States. No quantitative studies were found that assessed the relationships between HIV syndemics, religion, and spirituality. Nine studies, with 13 statistical analyses, were found that examined the relationships between individual HIV syndemics-related health conditions, religion, and spirituality. Among the 13 analyses, religion and spirituality were found to have mixed relationships with HIV syndemics-related health conditions (6 nonsignificant associations; 5 negative associations; 2 positive associations). Given the overall lack of inclusion of religion and spirituality in HIV syndemics research, a conceptual model that hypothesizes the potential interactions of religion and spirituality with HIV syndemics-related health conditions is presented. The implications of the model for MSM's health are outlined.
Enhancing population pharmacokinetic modeling efficiency and quality using an integrated workflow.
Schmidt, Henning; Radivojevic, Andrijana
2014-08-01
Population pharmacokinetic (popPK) analyses are at the core of Pharmacometrics and need to be performed regularly. Although these analyses are relatively standard, a large variability can be observed in both the time (efficiency) and the way they are performed (quality). Main reasons for this variability include the level of experience of a modeler, personal preferences and tools. This paper aims to examine how the process of popPK model building can be supported in order to increase its efficiency and quality. The presented approach to the conduct of popPK analyses is centered around three key components: (1) identification of the most common and important popPK model features, (2) required information content and formatting of the data for modeling, and (3) methodology, workflow and workflow-supporting tools. This approach has been used in several popPK modeling projects and a documented example is provided in the supplementary material. Efficiency of model building is improved by avoiding repetitive coding and other labor-intensive tasks and by putting the emphasis on a fit-for-purpose model. Quality is improved by ensuring that the workflow and tools are in alignment with a popPK modeling guidance which is established within an organization. The main conclusion of this paper is that workflow-based approaches to popPK modeling are feasible and have significant potential to ameliorate its various aspects. However, the implementation of such an approach in a pharmacometric organization requires openness towards innovation and change, the key ingredient for the evolution of integrative and quantitative drug development in the pharmaceutical industry.
Kellie, John F; Kehler, Jonathan R; Karlinsey, Molly Z; Summerfield, Scott G
2017-12-01
Typically, quantitation of biotherapeutics from biological matrices by LC-MS is based on a surrogate peptide approach to determine molecule concentration. Recent efforts have focused on quantitation of the intact protein molecules or larger mass subunits of monoclonal antibodies. To date, there has been limited guidance for large or intact protein mass quantitation for quantitative bioanalysis. Intact- and subunit-level analyses of biotherapeutics from biological matrices are performed at 12-25 kDa mass range with quantitation data presented. Linearity, bias and other metrics are presented along with recommendations made on the viability of existing quantitation approaches. This communication is intended to start a discussion around intact protein data analysis and processing, recognizing that other published contributions will be required.
Valdez-Flores, Ciriaco; Sielken, Robert L; Teta, M Jane
2010-04-01
The most recent epidemiological data on individual workers in the NIOSH and updated UCC occupational studies have been used to characterize the potential excess cancer risks of environmental exposure to ethylene oxide (EO). In addition to refined analyses of the separate cohorts, power has been increased by analyzing the combined cohorts. In previous SMR analyses of the separate studies and the present analyses of the updated and pooled studies of over 19,000 workers, none of the SMRs for any combination of the 12 cancer endpoints and six sub-cohorts analyzed were statistically significantly greater than one, including those of greatest previous interest: leukemia, lymphohematopoietic tissue, lymphoid tumors, NHL, and breast cancer. In our study, no evidence of a positive cumulative exposure-response relationship was found. Fitted Cox proportional hazards models with cumulative EO exposure do not have statistically significant positive slopes. The lack of increasing trends was corroborated by categorical analyses. Cox model estimates of the concentrations corresponding to a 1-in-a-million extra environmental cancer risk are all greater than approximately 1 ppb and are more than 1500-fold greater than the 0.4 ppt estimate in the 2006 EPA draft IRIS risk assessment. The reasons for this difference are identified and discussed. Copyright 2009 Elsevier Inc. All rights reserved.
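A hedged sketch of fitting a Cox proportional hazards model with cumulative exposure as a covariate, using the lifelines package on synthetic cohort data (the column names and values are assumptions, not the NIOSH/UCC data):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic cohort: follow-up time (years), event indicator, cumulative EO
# exposure (ppm-years) and one additional covariate.
rng = np.random.default_rng(5)
n = 500
df = pd.DataFrame({
    "time": rng.exponential(scale=30, size=n),
    "event": rng.integers(0, 2, size=n),
    "cum_exposure": rng.gamma(shape=2.0, scale=10.0, size=n),
    "age_at_hire": rng.normal(30, 5, size=n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
# The sign and significance of the cum_exposure coefficient is the
# exposure-response slope discussed in the abstract.
print(cph.summary[["coef", "p"]])
```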
Linkage disequilibrium interval mapping of quantitative trait loci.
Boitard, Simon; Abdallah, Jihad; de Rochambeau, Hubert; Cierco-Ayrolles, Christine; Mangin, Brigitte
2006-03-16
For many years gene mapping studies have been performed through linkage analyses based on pedigree data. Recently, linkage disequilibrium methods based on unrelated individuals have been advocated as powerful tools to refine estimates of gene location. Many strategies have been proposed to deal with simply inherited disease traits. However, locating quantitative trait loci is statistically more challenging and considerable research is needed to provide robust and computationally efficient methods. Under a three-locus Wright-Fisher model, we derived approximate expressions for the expected haplotype frequencies in a population. We considered haplotypes comprising one trait locus and two flanking markers. Using these theoretical expressions, we built a likelihood-maximization method, called HAPim, for estimating the location of a quantitative trait locus. For each postulated position, the method only requires information from the two flanking markers. Over a wide range of simulation scenarios it was found to be more accurate than a two-marker composite likelihood method. It also performed as well as identity by descent methods, whilst being valuable in a wider range of populations. Our method makes efficient use of marker information, and can be valuable for fine mapping purposes. Its performance is increased if multiallelic markers are available. Several improvements can be developed to account for more complex evolution scenarios or provide robust confidence intervals for the location estimates.
Relationship between nurses' leadership styles and power bases.
García García, Inmaculada; Santa-Bárbara, Emilio Sánchez
2009-01-01
This quantitative study aimed to provide empirical evidence of the relationship between the power bases of the leader and the leadership styles of nurses. The random sample consisted of 204 nursing professionals from a public hospital. The following measurement instruments were used: the SBDQ (Supervisory Behavior Description Questionnaire) to identify leadership styles and the Power Perception Profile to determine the types of power used by leaders. Descriptive, bivariate and multivariate analyses were used. Based on the results, two relationships proposed by the SLT (Situational Leadership Theory) were verified: between coercive power and the S1 leadership style (telling), and between referent power and the S3 leadership style (participating). In other cases, results were opposite to expectations: the use of power proposed by the model decreases the probability of performing the prescribed leadership style.
Recent Advances in Analytical Pyrolysis to Investigate Organic Materials in Heritage Science.
Degano, Ilaria; Modugno, Francesca; Bonaduce, Ilaria; Ribechini, Erika; Colombini, Maria Perla
2018-06-18
The molecular characterization of organic materials in samples from artworks and historical objects traditionally entailed qualitative and quantitative analyses by HPLC and GC. Today innovative approaches based on analytical pyrolysis enable samples to be analysed without any chemical pre-treatment. Pyrolysis, which is often considered as a screening technique, shows previously unexplored potential thanks to recent instrumental developments. Organic materials that are macromolecular in nature, or undergo polymerization upon curing and ageing can now be better investigated. Most constituents of paint layers and archaeological organic substances contain major insoluble and chemically non-hydrolysable fractions that are inaccessible to GC or HPLC. To date, molecular scientific investigations of the organic constituents of artworks and historical objects have mostly focused on the minor constituents of the sample. This review presents recent advances in the qualitative and semi-quantitative analyses of organic materials in heritage objects based on analytical pyrolysis coupled with mass spectrometry. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Rudnick, Paul A.; Clauser, Karl R.; Kilpatrick, Lisa E.; Tchekhovskoi, Dmitrii V.; Neta, Pedatsur; Blonder, Nikša; Billheimer, Dean D.; Blackman, Ronald K.; Bunk, David M.; Cardasis, Helene L.; Ham, Amy-Joan L.; Jaffe, Jacob D.; Kinsinger, Christopher R.; Mesri, Mehdi; Neubert, Thomas A.; Schilling, Birgit; Tabb, David L.; Tegeler, Tony J.; Vega-Montoto, Lorenzo; Variyath, Asokan Mulayath; Wang, Mu; Wang, Pei; Whiteaker, Jeffrey R.; Zimmerman, Lisa J.; Carr, Steven A.; Fisher, Susan J.; Gibson, Bradford W.; Paulovich, Amanda G.; Regnier, Fred E.; Rodriguez, Henry; Spiegelman, Cliff; Tempst, Paul; Liebler, Daniel C.; Stein, Stephen E.
2010-01-01
A major unmet need in LC-MS/MS-based proteomics analyses is a set of tools for quantitative assessment of system performance and evaluation of technical variability. Here we describe 46 system performance metrics for monitoring chromatographic performance, electrospray source stability, MS1 and MS2 signals, dynamic sampling of ions for MS/MS, and peptide identification. Applied to data sets from replicate LC-MS/MS analyses, these metrics displayed consistent, reasonable responses to controlled perturbations. The metrics typically displayed variations less than 10% and thus can reveal even subtle differences in performance of system components. Analyses of data from interlaboratory studies conducted under a common standard operating procedure identified outlier data and provided clues to specific causes. Moreover, interlaboratory variation reflected by the metrics indicates which system components vary the most between laboratories. Application of these metrics enables rational, quantitative quality assessment for proteomics and other LC-MS/MS analytical applications. PMID:19837981
NASA Technical Reports Server (NTRS)
Alexander, J. Iwan D.; Lizee, Arnaud
1996-01-01
The object of this work, started in March 1995, is to approach the problem of determining the transport conditions (and effects of residual acceleration) during the plane-front directional solidification of a tin-bismuth alloy under low gravity conditions. The work involves using a combination of 2- and 3-D numerical models, scaling analyses, 1-D models and the results of ground-based and low-gravity experiments. The experiments were conducted in the MEPHISTO furnace facility during the USMP-3 spaceflight, which took place earlier this year (22 Feb. - 6 Mar. 1996). This experiment represents an unprecedented opportunity to make a quantitative correlation between residual accelerations and the response of an actual experimental solidification system.
NASA Astrophysics Data System (ADS)
Iskandar, Ismed; Satria Gondokaryono, Yudi
2016-02-01
In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that systems are described and explained as simply functioning or failed, whereas in many real situations failures may arise from many causes, depending upon the age and the environment of the system and its components. Another problem in reliability theory is that of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases, because Bayesian estimation allows us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we investigate the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample size. The simulation data are analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that changing one true parameter value relative to another changes the standard deviation in the opposite direction. With perfect information on the prior distribution, the Bayesian estimates are better than the maximum likelihood estimates. The sensitivity analyses show some sensitivity to shifts of the prior locations, and also show the robustness of the Bayesian analysis within the range between the true value and the maximum likelihood estimate.
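A minimal sketch contrasting maximum likelihood with a crude grid-based Bayesian estimate for Weibull failure data (a single failure mode with flat priors is assumed for brevity; the paper's competing-risks structure and priors are not reproduced):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
shape_true, scale_true = 1.8, 100.0
data = stats.weibull_min.rvs(shape_true, scale=scale_true, size=50, random_state=rng)

# Maximum likelihood estimate (location fixed at zero).
shape_mle, _, scale_mle = stats.weibull_min.fit(data, floc=0)

# Crude Bayesian estimate: grid posterior with flat priors on (shape, scale).
shapes = np.linspace(0.5, 4.0, 100)
scales = np.linspace(50.0, 200.0, 100)
S, C = np.meshgrid(shapes, scales, indexing="ij")
loglik = np.array([[stats.weibull_min.logpdf(data, s, scale=c).sum() for c in scales]
                   for s in shapes])
post = np.exp(loglik - loglik.max())
post /= post.sum()
shape_bayes, scale_bayes = (S * post).sum(), (C * post).sum()

print(f"MLE:   shape={shape_mle:.2f}, scale={scale_mle:.1f}")
print(f"Bayes: shape={shape_bayes:.2f}, scale={scale_bayes:.1f}")
```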
THE PRACTICE OF STRUCTURE ACTIVITY RELATIONSHIPS (SAR) IN TOXICOLOGY
Both qualitative and quantitative modeling methods relating chemical structure to biological activity, called structure-activity relationship analyses or SAR, are applied to the prediction and characterization of chemical toxicity. This minireview will discuss some generic issue...
Automated feature extraction and spatial organization of seafloor pockmarks, Belfast Bay, Maine, USA
Andrews, Brian D.; Brothers, Laura L.; Barnhardt, Walter A.
2010-01-01
Seafloor pockmarks occur worldwide and may represent millions of m3 of continental shelf erosion, but few numerical analyses of pockmark morphology and spatial distribution exist. We introduce a quantitative definition of pockmark morphology and, based on this definition, propose a three-step geomorphometric method to identify and extract pockmarks from high-resolution swath bathymetry. We apply this GIS-implemented approach to 25 km2 of bathymetry collected in the Belfast Bay, Maine, USA pockmark field. Our model extracted 1767 pockmarks and found a linear pockmark depth-to-diameter relationship field-wide. Mean pockmark depth is 7.6 m and mean diameter is 84.8 m. Pockmark distribution is non-random, and nearly half of the field's pockmarks occur in chains. The most prominent chains are oriented semi-normal to the steepest gradient in Holocene sediment thickness. A descriptive model yields field-wide spatial statistics indicating that pockmarks are distributed in non-random clusters. Results enable quantitative comparison of pockmarks in fields worldwide as well as of similar concave features, such as impact craters, dolines, or salt pools.
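A hedged sketch of one common way to extract closed depressions (pockmark candidates) from a gridded bathymetric surface, via morphological reconstruction ("fill and difference"); this is a stand-in for the paper's three-step geomorphometric method and runs on a synthetic grid:

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import reconstruction

# Synthetic bathymetry grid (metres, negative down) with two artificial pockmarks.
yy, xx = np.mgrid[0:200, 0:200]
bathy = -20.0 - 0.01 * xx
for cy, cx, depth, radius in [(60, 80, 6.0, 15), (140, 120, 4.0, 10)]:
    bathy -= depth * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * radius ** 2))

# "Fill" the surface from its edges, then subtract: closed depressions remain.
seed = bathy.copy()
seed[1:-1, 1:-1] = bathy.max()
filled = reconstruction(seed, bathy, method="erosion")
depressions = filled - bathy

labels, n = ndimage.label(depressions > 0.5)          # 0.5 m relief threshold
depths = ndimage.maximum(depressions, labels, index=np.arange(1, n + 1))
print(f"extracted {n} pockmarks with maximum relief {np.round(depths, 1)} m")
```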
ERIC Educational Resources Information Center
van Zomeren, Martijn; Postmes, Tom; Spears, Russell
2008-01-01
An integrative social identity model of collective action (SIMCA) is developed that incorporates 3 socio-psychological perspectives on collective action. Three meta-analyses synthesized a total of 182 effects of perceived injustice, efficacy, and identity on collective action (corresponding to these socio-psychological perspectives). Results…
Model-driven meta-analyses for informing health care: a diabetes meta-analysis as an exemplar.
Brown, Sharon A; Becker, Betsy Jane; García, Alexandra A; Brown, Adama; Ramírez, Gilbert
2015-04-01
A relatively novel type of meta-analysis, a model-driven meta-analysis, involves the quantitative synthesis of descriptive, correlational data and is useful for identifying key predictors of health outcomes and informing clinical guidelines. Few such meta-analyses have been conducted and thus, large bodies of research remain unsynthesized and uninterpreted for application in health care. We describe the unique challenges of conducting a model-driven meta-analysis, focusing primarily on issues related to locating a sample of published and unpublished primary studies, extracting and verifying descriptive and correlational data, and conducting analyses. A current meta-analysis of the research on predictors of key health outcomes in diabetes is used to illustrate our main points. © The Author(s) 2014.
A Meta-Analysis of Interventions for Bereaved Children and Adolescents
ERIC Educational Resources Information Center
Rosner, Rita; Kruse, Joachim; Hagl, Maria
2010-01-01
The main objective of this review was to provide a quantitative and methodologically sound evaluation of existing treatments for bereavement and grief reactions in children and adolescents. Two meta-analyses were conducted: 1 on controlled studies and 1 on uncontrolled studies. The 2 meta-analyses were based on a total of 27 treatment studies…
Galofré-Vilà, Gregori
2018-02-01
This paper reviews the current wealth of anthropometric history since the early efforts of Robert Fogel in the 1970s. The survey is based on a quantitative systematic review of the literature and counts a total of 447 peer-reviewed articles being published in the main leading journals in economic history, economics and biology. Data are analysed using network analysis by journal and author and the main contributions of anthropometric history are highlighted, pointing to future areas of inquiry. The contributions of books and book chapters are also quantified and analysed. Copyright © 2017 Elsevier B.V. All rights reserved.
Good practices for quantitative bias analysis.
Lash, Timothy L; Fox, Matthew P; MacLehose, Richard F; Maldonado, George; McCandless, Lawrence C; Greenland, Sander
2014-12-01
Quantitative bias analysis serves several objectives in epidemiological research. First, it provides a quantitative estimate of the direction, magnitude and uncertainty arising from systematic errors. Second, the acts of identifying sources of systematic error, writing down models to quantify them, assigning values to the bias parameters and interpreting the results combat the human tendency towards overconfidence in research results, syntheses and critiques and the inferences that rest upon them. Finally, by suggesting aspects that dominate uncertainty in a particular research result or topic area, bias analysis can guide efficient allocation of sparse research resources. The fundamental methods of bias analyses have been known for decades, and there have been calls for more widespread use for nearly as long. There was a time when some believed that bias analyses were rarely undertaken because the methods were not widely known and because automated computing tools were not readily available to implement the methods. These shortcomings have been largely resolved. We must, therefore, contemplate other barriers to implementation. One possibility is that practitioners avoid the analyses because they lack confidence in the practice of bias analysis. The purpose of this paper is therefore to describe what we view as good practices for applying quantitative bias analysis to epidemiological data, directed towards those familiar with the methods. We focus on answering questions often posed to those of us who advocate incorporation of bias analysis methods into teaching and research. These include the following: When is bias analysis practical and productive? How does one select the biases that ought to be addressed? How does one select a method to model biases? How does one assign values to the parameters of a bias model? How does one present and interpret a bias analysis? We hope that our guide to good practices for conducting and presenting bias analyses will encourage more widespread use of bias analysis to estimate the potential magnitude and direction of biases, as well as the uncertainty in estimates potentially influenced by the biases. © The Author 2014; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
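One of the simplest quantitative bias analyses is correcting a 2x2 table for non-differential exposure misclassification using assumed sensitivity and specificity; a minimal sketch with illustrative counts and bias parameters (not drawn from any particular study) is:

```python
def corrected_exposed(observed_exposed: float, total: float, sens: float, spec: float) -> float:
    """Back-calculate the true number exposed from an observed count,
    given exposure-classification sensitivity and specificity."""
    return (observed_exposed - (1 - spec) * total) / (sens + spec - 1)

# Observed 2x2 table (cases/controls x exposed/unexposed) and assumed bias parameters.
a_obs, b_obs = 150, 350   # cases: exposed, unexposed
c_obs, d_obs = 100, 400   # controls: exposed, unexposed
sens, spec = 0.85, 0.95   # assumed classification accuracy (bias parameters)

a = corrected_exposed(a_obs, a_obs + b_obs, sens, spec)
c = corrected_exposed(c_obs, c_obs + d_obs, sens, spec)
b, d = (a_obs + b_obs) - a, (c_obs + d_obs) - c

or_obs = (a_obs * d_obs) / (b_obs * c_obs)
or_adj = (a * d) / (b * c)
print(f"observed OR = {or_obs:.2f}, bias-adjusted OR = {or_adj:.2f}")
```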
Zhai, Hong Lin; Zhai, Yue Yuan; Li, Pei Zhen; Tian, Yue Li
2013-01-21
A very simple approach to quantitative analysis is proposed based on the technology of digital image processing using three-dimensional (3D) spectra obtained by high-performance liquid chromatography coupled with a diode array detector (HPLC-DAD). As the region-based shape features of a grayscale image, Zernike moments with inherently invariance property were employed to establish the linear quantitative models. This approach was applied to the quantitative analysis of three compounds in mixed samples using 3D HPLC-DAD spectra, and three linear models were obtained, respectively. The correlation coefficients (R(2)) for training and test sets were more than 0.999, and the statistical parameters and strict validation supported the reliability of established models. The analytical results suggest that the Zernike moment selected by stepwise regression can be used in the quantitative analysis of target compounds. Our study provides a new idea for quantitative analysis using 3D spectra, which can be extended to the analysis of other 3D spectra obtained by different methods or instruments.
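A minimal sketch of the feature-extraction-plus-regression idea, computing Zernike moments of grayscale images with the mahotas package and calibrating a linear model (synthetic images and concentrations; the stepwise predictor selection used in the paper is omitted):

```python
import numpy as np
import mahotas
from sklearn.linear_model import LinearRegression

# Synthetic stand-ins for 3D HPLC-DAD spectra rendered as grayscale images
# (retention time x wavelength), one per calibration sample, with known concentrations.
rng = np.random.default_rng(7)
conc = np.linspace(1.0, 10.0, 40)                   # hypothetical analyte concentrations
images = []
for c in conc:
    im = np.zeros((64, 64))
    w = int(5 + 2 * c)                              # "peak" footprint grows with concentration
    im[32 - w:32 + w, 32 - w:32 + w] = 1.0
    images.append(np.clip(im + rng.normal(scale=0.02, size=im.shape), 0, None))

# Region-based shape descriptors: Zernike moments of each grayscale image.
X = np.array([mahotas.features.zernike_moments(im, 32, degree=8) for im in images])

model = LinearRegression().fit(X, conc)
# Calibration (training) R2 only; independent validation is needed in practice.
print(f"calibration R2 = {model.score(X, conc):.3f}")
```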
Ochs, Matthias; Mühlfeld, Christian
2013-07-01
The growing awareness of the importance of accurate morphometry in lung research has recently motivated the publication of guidelines set forth by a combined task force of the American Thoracic Society and the European Respiratory Society (20). This official ATS/ERS Research Policy Statement provides general recommendations on which stereological methods are to be used in quantitative microscopy of the lung. However, to integrate stereology into a particular experimental study design, investigators are left with the problem of how to implement this in practice. Specifically, different animal models of human lung disease require the use of different stereological techniques and may determine the mode of lung fixation, tissue processing, preparation of sections, and other things. Therefore, the present companion articles were designed to allow a short practically oriented introduction into the concepts of design-based stereology (Part 1) and to provide recommendations for choosing the most appropriate methods to investigate a number of important disease models (Part 2). Worked examples with illustrative images will facilitate the practical performance of equivalent analyses. Study algorithms provide comprehensive surveys to ensure that no essential step gets lost during the multistage workflow. Thus, with this review, we hope to close the gap between theory and practice and enhance the use of stereological techniques in pulmonary research.
Quantitative Analysis of Critical Factors for the Climate Impact of Landfill Mining.
Laner, David; Cencic, Oliver; Svensson, Niclas; Krook, Joakim
2016-07-05
Landfill mining has been proposed as an innovative strategy to mitigate environmental risks associated with landfills, to recover secondary raw materials and energy from the deposited waste, and to enable high-valued land uses at the site. The present study quantitatively assesses the importance of specific factors and conditions for the net contribution of landfill mining to global warming using a novel, set-based modeling approach and provides policy recommendations for facilitating the development of projects contributing to global warming mitigation. Building on life-cycle assessment, scenario modeling and sensitivity analysis methods are used to identify critical factors for the climate impact of landfill mining. The net contributions to global warming of the scenarios range from -1550 (saving) to 640 (burden) kg CO2e per Mg of excavated waste. Nearly 90% of the results' total variation can be explained by changes in four factors, namely the landfill gas management in the reference case (i.e., alternative to mining the landfill), the background energy system, the composition of the excavated waste, and the applied waste-to-energy technology. Based on the analyses, circumstances under which landfill mining should be prioritized or not are identified and sensitive parameters for the climate impact assessment of landfill mining are highlighted.
Ribeiro, J S; Ferreira, M M C; Salva, T J G
2011-02-15
Mathematical models based on chemometric analyses of coffee beverage sensory data and NIR spectra of 51 Arabica roasted coffee samples were generated, aiming to predict the scores of acidity, bitterness, flavour, cleanliness, body and overall quality of the coffee beverage. Partial least squares (PLS) regression was used to construct the models. The ordered predictor selection (OPS) algorithm was applied to select the wavelengths for the regression model of each sensory attribute in order to take only significant regions into account. The regions of the spectrum defined as important for sensory quality were closely related to the NIR spectra of pure caffeine, trigonelline, 5-caffeoylquinic acid, cellulose, coffee lipids, sucrose and casein. The NIR analyses indicated that the relationships between the sensory characteristics of the beverage and the chemical composition of the roasted grain were as follows: 1 - the lipids and proteins were closely related to the attribute body; 2 - the caffeine and chlorogenic acids were related to bitterness; 3 - the chlorogenic acids were related to acidity and flavour; 4 - the cleanliness and overall quality were related to caffeine, trigonelline, chlorogenic acid, polysaccharides, sucrose and protein. Copyright © 2010 Elsevier B.V. All rights reserved.
Gianoncelli, Alessandra; Bonini, Sara A; Bertuzzi, Michela; Guarienti, Michela; Vezzoli, Sara; Kumar, Rajesh; Delbarba, Andrea; Mastinu, Andrea; Sigala, Sandra; Spano, Pierfranco; Pani, Luca; Pecorelli, Sergio; Memo, Maurizio
2015-08-01
Authorization to market a biosimilar product by the appropriate institutions is expected based on biosimilarity with its originator product. The analogy between the originator and its biosimilar(s) is assessed through safety, purity, and potency analyses. In this study, we proposed a useful quality control system for rapid and economic primary screening of potential biosimilar drugs. For this purpose, chemical and functional characterization of the originator rhEPO alfa and two of its biosimilars was discussed. Qualitative and quantitative analyses of the originator rhEPO alfa and its biosimilars were performed using reversed-phase high-performance liquid chromatography (RP-HPLC). The identification of proteins and the separation of isoforms were studied using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) and two-dimensional gel electrophoresis (2D-PAGE), respectively. Furthermore, the biological activity of these drugs was measured both in vitro, evaluating the TF-1 cell proliferation rate, and in vivo, using the innovative experimental animal model of the zebrafish embryos. Chemical analyses showed that the quantitative concentrations of rhEPO alfa were in agreement with the labeled claims by the corresponding manufacturers. The qualitative analyses performed demonstrated that the three drugs were pure and that they had the same amino acid sequence. Chemical differences were found only at the level of isoforms containing N-glycosylation; however, functional in vitro and in vivo studies did not show any significant differences from a biosimilar point of view. These rapid and economic structural and functional analyses were effective in the evaluation of the biosimilarity between the originator rhEPO alfa and the biosimilars analyzed.
Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A
2014-01-01
Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
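A hedged sketch of the variance-decomposition sensitivity analysis using the SALib package, with a cheap analytic function standing in for batched ABM runs (the input names and bounds are assumptions, not the farmland-conservation model):

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Stand-in for the ABM: a cheap function mapping inputs to an output of interest
# (e.g. hectares of farmland conserved). Replace with batched model runs in practice.
def model(x):
    return x[:, 0] ** 2 + 0.5 * x[:, 1] + 0.1 * x[:, 1] * x[:, 2]

problem = {
    "num_vars": 3,
    "names": ["adoption_rate", "parcel_size", "neighbour_weight"],  # hypothetical inputs
    "bounds": [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]],
}

X = saltelli.sample(problem, 1024)        # quasi-random (Sobol') design
Y = model(X)
Si = sobol.analyze(problem, Y)            # variance decomposition of the output
print("first-order indices:", np.round(Si["S1"], 3))
print("total-order indices:", np.round(Si["ST"], 3))
```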
Ligmann-Zielinska, Arika; Kramer, Daniel B.; Spence Cheruvelil, Kendra; Soranno, Patricia A.
2014-01-01
Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system. PMID:25340764
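As a rough illustration of the output variance decomposition described above, the following Python sketch estimates first-order Sobol sensitivity indices with a Saltelli-style sampling scheme. The abm_surrogate function and its three inputs are hypothetical stand-ins for the farmland-conservation ABM, and plain pseudo-random draws are used here in place of the quasi-random sequences used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def abm_surrogate(x):
    """Hypothetical stand-in for one ABM run: maps three inputs to an output
    such as total converted farmland. Not the actual model from the study."""
    price, zoning, neighbour_effect = x
    return 100 * price + 20 * zoning ** 2 + 50 * price * neighbour_effect

def first_order_sobol(model, n_inputs, n_samples=4096):
    # Two independent sample matrices on [0, 1]^k. The study uses quasi-random
    # sampling; pseudo-random draws are used here for brevity.
    A = rng.random((n_samples, n_inputs))
    B = rng.random((n_samples, n_inputs))
    fA = np.apply_along_axis(model, 1, A)
    fB = np.apply_along_axis(model, 1, B)
    total_var = np.var(np.concatenate([fA, fB]), ddof=1)
    S = np.empty(n_inputs)
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # replace column i of A with column i of B
        fABi = np.apply_along_axis(model, 1, ABi)
        # Saltelli-style estimator of the first-order index for input i.
        S[i] = np.mean(fB * (fABi - fA)) / total_var
    return S

print(first_order_sobol(abm_surrogate, 3))  # share of output variance per input
```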
Nolan, John P.; Mandy, Francis
2008-01-01
While the term flow cytometry refers to the measurement of cells, the underlying approach of making sensitive multiparameter optical measurements in a flowing sample stream is a very general analytical one. The past few years have seen an explosion in the application of flow cytometry technology for molecular analysis and measurements using micro-particles as solid supports. While microsphere-based molecular analyses using flow cytometry date back three decades, the need for highly parallel quantitative molecular measurements that has arisen from various genomic and proteomic advances has driven the development of particle encoding technology to enable highly multiplexed assays. Multiplexed particle-based immunoassays are now commonplace, and new assays to study genes, protein function, and molecular assembly are emerging. Numerous efforts are underway to extend the multiplexing capabilities of microparticle-based assays through new approaches to particle encoding and analyte reporting. The impact of these developments will be seen in the basic research and clinical laboratories, as well as in drug development. PMID:16604537
A quantitative test of population genetics using spatiogenetic patterns in bacterial colonies.
Korolev, Kirill S; Xavier, João B; Nelson, David R; Foster, Kevin R
2011-10-01
It is widely accepted that population-genetics theory is the cornerstone of evolutionary analyses. Empirical tests of the theory, however, are challenging because of the complex relationships between space, dispersal, and evolution. Critically, we lack quantitative validation of the spatial models of population genetics. Here we combine analytics, on- and off-lattice simulations, and experiments with bacteria to perform quantitative tests of the theory. We study two bacterial species, the gut microbe Escherichia coli and the opportunistic pathogen Pseudomonas aeruginosa, and show that spatiogenetic patterns in colony biofilms of both species are accurately described by an extension of the one-dimensional stepping-stone model. We use one empirical measure, genetic diversity at the colony periphery, to parameterize our models and show that we can then accurately predict another key variable: the degree of short-range cell migration along an edge. Moreover, the model allows us to estimate other key parameters, including effective population size (density) at the expansion frontier. While our experimental system is a simplification of natural microbial communities, we argue that it constitutes proof of principle that the spatial models of population genetics can quantitatively capture organismal evolution.
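The following minimal Python sketch, assuming made-up deme sizes and migration rates, simulates neutral drift with nearest-neighbour migration in a one-dimensional stepping-stone model and tracks average local heterozygosity, the kind of periphery diversity measure used to parameterize the models described above; it is not the authors' on-/off-lattice code.

```python
import numpy as np

rng = np.random.default_rng(1)

def stepping_stone(n_demes=300, deme_size=30, migration=0.1, generations=200):
    """Two neutral alleles in a ring of demes with nearest-neighbour migration."""
    freq = np.full(n_demes, 0.5)          # allele frequency per deme (well-mixed start)
    heterozygosity = []
    for _ in range(generations):
        # Migration: each deme exchanges a fraction of individuals with its neighbours.
        neighbours = 0.5 * (np.roll(freq, 1) + np.roll(freq, -1))
        freq = (1 - migration) * freq + migration * neighbours
        # Genetic drift: binomial resampling within each finite deme.
        freq = rng.binomial(deme_size, freq) / deme_size
        # Mean local heterozygosity 2p(1-p): the periphery diversity measure.
        heterozygosity.append(np.mean(2 * freq * (1 - freq)))
    return np.array(heterozygosity)

H = stepping_stone()
print(H[0], H[-1])   # diversity decays as monoclonal sectors coarsen along the front
```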
Model-Based Linkage Analysis of a Quantitative Trait.
Song, Yeunjoo E; Song, Sunah; Schnell, Audrey H
2017-01-01
Linkage analysis is a family-based method used to examine whether any typed genetic markers cosegregate with a given trait, in this case a quantitative trait. If linkage exists, this is taken as evidence in support of a genetic basis for the trait. Historically, linkage analysis was performed using a binary disease trait, but it has been extended to include quantitative disease measures. Quantitative traits are desirable as they provide more information than binary traits. Linkage analysis can be performed using single-marker methods (one marker at a time) or multipoint methods (using multiple markers simultaneously). In model-based linkage analysis the genetic model for the trait of interest is specified. There are many software options for performing linkage analysis. Here, we use the program package Statistical Analysis for Genetic Epidemiology (S.A.G.E.). S.A.G.E. was chosen because it also includes programs to perform data cleaning procedures and to generate and test genetic models for a quantitative trait, in addition to performing linkage analysis. We demonstrate in detail the process of running the program LODLINK to perform single-marker analysis, and MLOD to perform multipoint analysis using output from SEGREG, where SEGREG was used to determine the best-fitting statistical model for the trait.
From behavioural analyses to models of collective motion in fish schools
Lopez, Ugo; Gautrais, Jacques; Couzin, Iain D.; Theraulaz, Guy
2012-01-01
Fish schooling is a phenomenon of long-lasting interest in ethology and ecology, widely spread across taxa and ecological contexts, and has attracted much interest from statistical physics and theoretical biology as a case of self-organized behaviour. One topic of intense interest is the search for the specific behavioural mechanisms at play at the individual level from which the school properties emerge. This is fundamental for understanding how selective pressure acting at the individual level promotes adaptive properties of schools, and for trying to disambiguate functional properties from non-adaptive epiphenomena. Decades of studies on collective motion by means of individual-based modelling have allowed a qualitative understanding of the self-organization processes leading to collective properties at the school level, and have provided insight into the behavioural mechanisms that result in coordinated motion. Here, we emphasize a set of paradigmatic modelling assumptions whose validity remains unclear, both from a behavioural point of view and in terms of quantitative agreement between model outcome and empirical data. We advocate for a specific and biologically oriented re-examination of these assumptions through experiment-based behavioural analysis and modelling. PMID:24312723
Kim, Young Kwan; Kameo, Yoshitaka; Tanaka, Sakae; Adachi, Taiji
2017-10-01
To understand Wolff's law, bone adaptation by remodeling at the cellular and tissue levels has been discussed extensively through experimental and simulation studies. For the clinical application of a bone remodeling simulation, it is important to establish a macroscopic model that incorporates clarified microscopic mechanisms. In this study, we proposed novel macroscopic models based on the microscopic mechanism of osteocytic mechanosensing, in which the flow of fluid in the lacuno-canalicular porosity generated by fluid pressure gradients plays an important role, and theoretically evaluated the proposed models, taking biological rationales of bone adaptation into account. The proposed models were categorized into two groups according to whether the remodeling equilibrium state was defined globally or locally, i.e., the global or local uniformity models. Each remodeling stimulus in the proposed models was quantitatively evaluated through image-based finite element analyses of swine cancellous bone, according to two criteria, associated with trabecular volume and orientation at remodeling equilibrium, that were introduced on the basis of biological rationales. The evaluation suggested that the nonuniformity of the mean stress gradient in the local uniformity model, one of the proposed stimuli, has high validity. Furthermore, the adaptive potential of each stimulus was discussed based on the spatial distribution of the remodeling stimulus on the trabecular surface. Theoretical consideration of a remodeling stimulus based on the biological rationales of bone adaptation would contribute to the establishment of a clinically applicable and reliable simulation model of bone remodeling.
NASA Astrophysics Data System (ADS)
Schön, Peter; Prokop, Alexander; Naaim-Bouvet, Florence; Nishimura, Kouichi; Vionnet, Vincent; Guyomarc'h, Gilbert
2014-05-01
Wind and the associated snow drift are dominant factors determining snow distribution and accumulation in alpine areas, resulting in a high spatial variability of snow depth that is difficult to evaluate and quantify. The terrain-based parameter Sx characterizes the degree of shelter or exposure of a grid point provided by the upwind terrain, without the computational complexity of numerical wind field models. The parameter has been shown to qualitatively predict snow redistribution with good reproduction of spatial patterns, but it has failed to quantitatively describe the snow redistribution, and correlations with measured snow heights were poor. The objectives of our research were to a) identify the sources of the poor correlations between predicted and measured snow redistribution and b) improve the parameter's ability to qualitatively and quantitatively describe snow redistribution in our research area, the Col du Lac Blanc in the French Alps. The area is at an elevation of 2700 m and is particularly suited to our study because of its constant wind direction and the availability of data from a meteorological station. Our work focused on areas with terrain edges of approximately 10 m height, and we worked with 1-2 m resolution digital terrain and snow surface data. We first compared the results of the terrain-based parameter calculations to measured snow depths obtained by high-accuracy terrestrial laser scan (TLS) measurements. The results were similar to previous studies: the parameter was able to reproduce observed patterns in snow distribution, but regression analyses showed poor correlations between the terrain-based parameter and measured snow depths. We demonstrate how the correlations between measured and calculated snow heights improve if the parameter is calculated based on a snow surface model instead of a digital terrain model. We show how changing the parameter's search distance, and how raster re-sampling and raster smoothing, improve the results. To improve the parameter's quantitative abilities, we modified the parameter based on the comparisons with TLS data and the terrain and wind conditions specific to the research site. The modification takes the linear form f(x) = a * Sx, where a is a newly introduced parameter and f(x) yields the estimate of the snow height. We found that the parameter a depends on the time period between the compared snow surfaces and the intensity of drifting snow events, which are linked to wind velocities. At the Col du Lac Blanc test site, blowing snow flux is recorded with snow particle counters (SPC). Snow flux is the number of drifting snow particles per unit time and area. Hence, the SPC provide data about the duration and intensity of drifting snow events, two important factors not accounted for by the terrain parameter Sx. We analyse how the SPC snow flux data can be used to estimate the magnitude of the new variable parameter a. We were able to improve the parameter's correlations with measured snow heights and its ability to quantitatively describe snow distribution in the Col du Lac Blanc area. We believe that our work is also a prerequisite for further improving the parameter's ability to describe snow redistribution.
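As a sketch of how an upwind shelter/exposure parameter of the Sx type and the linear rescaling f(x) = a * Sx might be computed, the following Python example works on a single wind-aligned 1D transect with a synthetic 10 m terrain edge; the function name, search distance, and the value of a are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def sx_transect(elevation, cell_size=1.0, search_distance=30.0):
    """Maximum upwind slope angle (degrees) seen from each cell of a wind-aligned
    transect: positive values mark sheltered (lee) cells, negative exposed ones."""
    n = len(elevation)
    max_cells = int(search_distance / cell_size)
    sx = np.zeros(n)
    for i in range(n):
        best = -np.inf
        for j in range(max(0, i - max_cells), i):          # cells upwind of cell i
            dist = (i - j) * cell_size
            slope = np.degrees(np.arctan2(elevation[j] - elevation[i], dist))
            best = max(best, slope)
        sx[i] = best if np.isfinite(best) else 0.0
    return sx

# Synthetic transect with a ~10 m terrain edge dropping in the downwind direction.
x = np.arange(200.0)
elev = np.where(x < 100, 10.0, 0.0)
sx = sx_transect(elev)

a = 0.05                       # illustrative scaling, to be fitted against TLS snow depths
snow_depth_estimate = a * sx   # the linear modification f(x) = a * Sx
```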
Measurement of air and VOC vapor fluxes during gas-driven soil remediation: bench-scale experiments.
Kim, Heonki; Kim, Taeyun; Shin, Seungyeop; Annable, Michael D
2012-09-04
In this laboratory study, an experimental method was developed for the quantitative analysis of gas fluxes in soil during advective air flow. One-dimensional column and two- and three-dimensional flow chamber models were used in this study. For the air flux measurement, n-octane vapor was used as a tracer, and it was introduced into the air flow entering the physical models. The tracer (n-octane) in the gas effluent from the models was captured for a finite period of time using a pack of activated carbon, which was then analyzed for the mass of n-octane. The air flux was calculated based on the mass of n-octane captured by the activated carbon and the inflow concentration. The measured air fluxes are in good agreement with the actual values for the one- and two-dimensional model experiments. Using both the two- and three-dimensional models, the distribution of the air flux at the soil surface was measured. The distribution of the air flux was found to be affected by the depth of the saturated zone. The flux and flux distribution of a volatile contaminant (perchloroethene) were also measured using the two-dimensional model. Quantitative information on both air and contaminant fluxes may be very beneficial for analyzing the performance of gas-driven subsurface remediation processes, including soil vapor extraction and air sparging.
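The flux calculation implied above reduces to simple arithmetic: the volume of air that passed the sampler is the captured tracer mass divided by the inflow concentration, and the flux is that volume per unit area and time. A small numerical sketch with made-up values:

```python
# Advective air flux inferred from the n-octane tracer captured on activated carbon.
# All numbers below are illustrative placeholders, not values from the study.
m_octane = 12.5e-3        # g of n-octane recovered from the carbon pack
c_in = 0.5e-3             # g/L tracer concentration in the inflowing air
t_sample = 2.0 * 3600.0   # s, period over which the carbon pack collected effluent
area = 25.0e-4            # m^2, soil-surface patch (or column cross-section) sampled

volume_air = m_octane / c_in              # L of air that passed the sampler
flux = volume_air / (area * t_sample)     # L of air per m^2 per s
print(f"air flux = {flux:.2f} L m^-2 s^-1")
```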
NASA Astrophysics Data System (ADS)
Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin
2016-03-01
How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without bevacizumab-based chemotherapy using multivariate statistical models built from quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that for all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association for either PFS or OS in the group of patients not receiving maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.
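A minimal sketch of the survival-model step, assuming the Python lifelines package and synthetic adiposity features and outcomes (none of these values come from the study):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 59   # cohort size quoted above; every feature and outcome below is synthetic

df = pd.DataFrame({
    "sfa_cm2": rng.normal(180, 40, n),         # subcutaneous fat area
    "vfa_cm2": rng.normal(120, 35, n),         # visceral fat area
    "vfa_sfa_ratio": rng.normal(0.7, 0.2, n),  # one of the derived adiposity features
    "pfs_months": rng.exponential(14, n),      # progression-free survival time
    "progressed": rng.integers(0, 2, n),       # event indicator (1 = progression)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="progressed")
cph.print_summary()   # hazard ratios and p-values for each adiposity feature
```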
Cunningham, Michael R.; Baumeister, Roy F.
2016-01-01
The limited resource model states that self-control is governed by a relatively finite set of inner resources on which people draw when exerting willpower. Once self-control resources have been used up or depleted, they are less available for other self-control tasks, leading to a decrement in subsequent self-control success. The depletion effect has been studied for over 20 years, tested or extended in more than 600 studies, and supported in an independent meta-analysis (Hagger et al., 2010). Meta-analyses are supposed to reduce bias in literature reviews. Carter et al.’s (2015) meta-analysis, by contrast, included a series of questionable decisions involving sampling, methods, and data analysis. We provide quantitative analyses of key sampling issues: exclusion of many of the best depletion studies based on idiosyncratic criteria and the emphasis on mini meta-analyses with low statistical power as opposed to the overall depletion effect. We discuss two key methodological issues: failure to code for research quality, and the quantitative impact of weak studies by novice researchers. We discuss two key data analysis issues: questionable interpretation of the results of trim and fill and Funnel Plot Asymmetry test procedures, and the use and misinterpretation of the untested Precision Effect Test and Precision Effect Estimate with Standard Error (PEESE) procedures. Despite these serious problems, the Carter et al. (2015) meta-analysis results actually indicate that there is a real depletion effect – contrary to their title. PMID:27826272
Tu, Chengjian; Li, Jun; Sheng, Quanhu; Zhang, Ming; Qu, Jun
2014-04-04
While survey-scan-based label-free methods have shown no compelling benefit over fragment-ion (MS2)-based approaches when low-resolution mass spectrometry (MS) was used, the growing prevalence of high-resolution analyzers may have changed the game. This necessitates an updated, comparative investigation of these approaches for data acquired by high-resolution MS. Here, we compared survey-scan-based (ion current, IC) and MS2-based abundance features, including spectral count (SpC) and MS2 total ion current (MS2-TIC), for quantitative analysis using various high-resolution LC/MS data sets. Key discoveries include: (i) a study with seven different biological data sets revealed that only IC achieved high reproducibility for lower-abundance proteins; (ii) evaluation with 5-replicate analyses of a yeast sample showed that IC provided much higher quantitative precision and less missing data; (iii) IC, SpC, and MS2-TIC all showed good quantitative linearity (R(2) > 0.99) over a >1000-fold concentration range; (iv) both MS2-TIC and IC showed a good linear response to various protein loading amounts, but SpC did not; (v) quantification using a well-characterized CPTAC data set showed that IC exhibited markedly higher quantitative accuracy, higher sensitivity, and lower false-positive/false-negative rates than both SpC and MS2-TIC. Therefore, IC achieved overall superior performance compared with the MS2-based strategies in terms of reproducibility, missing data, quantitative dynamic range, quantitative accuracy, and biomarker discovery.
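As an illustration of how the three abundance features compare at the bookkeeping level, the following pandas sketch aggregates a hypothetical peptide-spectrum-match table into per-protein spectral counts, summed survey-scan ion current, and summed MS2 TIC; column names and values are invented.

```python
import pandas as pd

# Hypothetical peptide-spectrum-match table; column names and values are invented.
psms = pd.DataFrame({
    "protein":  ["P1", "P1", "P1", "P2", "P2", "P3"],
    "peptide":  ["AAK", "LLR", "AAK", "GGR", "TTK", "MMR"],
    "ms1_area": [1.2e7, 8.0e6, 1.5e7, 3.0e6, 2.5e6, 9.0e5],   # survey-scan ion current
    "ms2_tic":  [4.0e5, 2.2e5, 5.1e5, 1.1e5, 0.9e5, 3.0e4],
})

per_protein = psms.groupby("protein").agg(
    spc=("peptide", "size"),     # spectral count: number of identified MS2 spectra
    ic=("ms1_area", "sum"),      # ion current: summed survey-scan peak areas
    ms2_tic=("ms2_tic", "sum"),  # summed MS2 total ion current
)
print(per_protein)
```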
Receptor-based 3D-QSAR in Drug Design: Methods and Applications in Kinase Studies.
Fang, Cheng; Xiao, Zhiyan
2016-01-01
Receptor-based 3D-QSAR strategy represents a superior integration of structure-based drug design (SBDD) and three-dimensional quantitative structure-activity relationship (3D-QSAR) analysis. It combines the accurate prediction of ligand poses by the SBDD approach with the good predictability and interpretability of statistical models derived from the 3D-QSAR approach. Extensive efforts have been devoted to the development of receptor-based 3D-QSAR methods, and two alternative approaches have been exploited. One involves computing the binding interactions between a receptor and a ligand to generate structure-based descriptors for QSAR analyses. The other concerns the application of various docking protocols to generate optimal ligand poses so as to provide reliable molecular alignments for conventional 3D-QSAR operations. This review highlights new concepts and methodologies recently developed in the field of receptor-based 3D-QSAR and, in particular, covers its application in kinase studies.
Regional groundwater flow model for C, K, L, and P reactor areas, Savannah River Site, Aiken, SC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G.P.
2000-02-11
A regional groundwater flow model encompassing approximately 100 mi² surrounding the C, K, L, and P reactor areas has been developed. The reactor flow model is designed to meet the planning objectives outlined in the General Groundwater Strategy for Reactor Area Projects by providing a common framework for analyzing groundwater flow, contaminant migration and remedial alternatives within the Reactor Projects team of the Environmental Restoration Department. The model provides a quantitative understanding of groundwater flow on a regional scale within the near-surface aquifers and deeper semi-confined to confined aquifers. The model incorporates historical and current field characterization data up through Spring 1999. Model preprocessing is automated so that future updates and modifications can be performed quickly and efficiently. The CKLP regional reactor model can be used to guide characterization, perform scoping analyses of contaminant transport, and serve as a common base for subsequent finer-scale transport and remedial/feasibility models for each reactor area.
Kölbl, Alexandra C; Hiller, Roman A; Ilmer, Mathias; Liesche, Friederike; Heublein, Sabine; Schröder, Lennard; Hutter, Stefan; Friese, Klaus; Jeschke, Udo; Andergassen, Ulrich
2015-08-01
Altered glycosylation is a predominant feature of tumour cells; it serves for cell adhesion and detachment and facilitates the immune escape of these cells. Therefore, changes in the expression of glycosyltransferase genes could help to identify circulating tumour cells (CTCs) in blood samples of cancer patients using a quantitative polymerase chain reaction (PCR) approach. Blood samples from healthy donors were inoculated with defined numbers of established breast cancer cell line cells, thus creating a model system. These samples were analysed by quantitative PCR for the expression of six different glycosyltransferase genes. The three genes with the best results in the model system were subsequently applied to samples from adjuvant breast cancer patients and from healthy donors. FUT3 and GALNT6 showed the highest increase in relative expression, while GALNT6 and ST3GAL3 were the first to reach statistically significantly different ΔCT values when comparing samples with and without the addition of tumour cells. These three genes were applied to patient samples, but did not show any significant results that would suggest the presence of CTCs in the blood. Although the relative expression of some of the glycosyltransferase genes exhibited reasonable results in the model system, their application to breast cancer patient samples will have to be further improved, e.g. by co-analysis of patient blood samples by gold-standard methods.
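A small sketch of the comparative-Ct arithmetic underlying the reported ΔCT comparisons, using the Livak 2^-ΔΔCt method with invented Ct values (the study reports ΔCT comparisons rather than this exact calculation):

```python
# Comparative-Ct (Livak) calculation with invented Ct values; the gene name and the
# housekeeping reference are placeholders, not the study's data.
ct_target_spiked  = 24.1   # e.g. GALNT6 in donor blood spiked with tumour cells
ct_ref_spiked     = 18.0   # reference gene, same sample
ct_target_control = 27.6   # GALNT6 in unspiked donor blood
ct_ref_control    = 18.2

delta_ct_spiked  = ct_target_spiked - ct_ref_spiked
delta_ct_control = ct_target_control - ct_ref_control
delta_delta_ct   = delta_ct_spiked - delta_ct_control

fold_change = 2.0 ** (-delta_delta_ct)   # relative expression
print(f"ddCt = {delta_delta_ct:.2f}, fold change = {fold_change:.1f}")
```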
An empirical generative framework for computational modeling of language acquisition.
Waterfall, Heidi R; Sandbank, Ben; Onnis, Luca; Edelman, Shimon
2010-06-01
This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of generative grammars from raw CHILDES data and give an account of the generative performance of the acquired grammars. Next, we summarize findings from recent longitudinal and experimental work that suggests how certain statistically prominent structural properties of child-directed speech may facilitate language acquisition. We then present a series of new analyses of CHILDES data indicating that the desired properties are indeed present in realistic child-directed speech corpora. Finally, we suggest how our computational results, behavioral findings, and corpus-based insights can be integrated into a next-generation model aimed at meeting the four requirements of our modeling framework.
Mechanisms underlying anomalous diffusion in the plasma membrane.
Krapf, Diego
2015-01-01
The plasma membrane is a complex fluid where lipids and proteins undergo diffusive motion critical to biochemical reactions. Through quantitative imaging analyses such as single-particle tracking, it is observed that diffusion in the cell membrane is usually anomalous in the sense that the mean squared displacement is not linear with time. This chapter describes the different models that are employed to describe anomalous diffusion, paying special attention to the experimental evidence that supports these models in the plasma membrane. We review models based on anticorrelated displacements, such as fractional Brownian motion and obstructed diffusion, and nonstationary models such as continuous time random walks. We also emphasize evidence for the formation of distinct compartments that transiently form on the cell surface. Finally, we overview heterogeneous diffusion processes in the plasma membrane, which have recently attracted considerable interest. Copyright © 2015. Published by Elsevier Inc.
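A minimal Python sketch of the kind of single-particle-tracking analysis mentioned above: compute the time-averaged mean squared displacement of one synthetic 2D track and fit MSD ∝ t^α to estimate the anomalous exponent; the frame time and trajectory are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def time_averaged_msd(track, max_lag):
    """Time-averaged mean squared displacement of a single (N, 2) trajectory."""
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = track[lag:] - track[:-lag]
        msd[lag - 1] = np.mean(np.sum(disp ** 2, axis=1))
    return msd

# Synthetic trajectory: Brownian steps with crude intermittent "trapping".
steps = rng.normal(0, 0.05, size=(2000, 2))
steps[::5] *= 0.2
track = np.cumsum(steps, axis=0)

dt = 0.03                               # s per frame (assumed)
lags = np.arange(1, 101) * dt
msd = time_averaged_msd(track, 100)

# Fit MSD = K * t^alpha in log-log space; alpha < 1 indicates subdiffusion.
alpha, log_k = np.polyfit(np.log(lags), np.log(msd), 1)
print(f"anomalous exponent alpha = {alpha:.2f}")
```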
Kalinowska, Barbara; Banach, Mateusz; Konieczny, Leszek; Marchewka, Damian; Roterman, Irena
2014-01-01
This work discusses the role of unstructured polypeptide chain fragments in shaping the protein's hydrophobic core. Based on the "fuzzy oil drop" model, which assumes an idealized distribution of hydrophobicity density described by the 3D Gaussian, we can determine which fragments make up the core and pinpoint residues whose location conflicts with theoretical predictions. We show that the structural influence of the water environment determines the positions of disordered fragments, leading to the formation of a hydrophobic core overlaid by a hydrophilic mantle. This phenomenon is further described by studying selected proteins which are known to be unstable and contain intrinsically disordered fragments. Their properties are established quantitatively, explaining the causative relation between the protein's structure and function and facilitating further comparative analyses of various structural models. © 2014 Elsevier Inc. All rights reserved.
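The following sketch illustrates the core comparison in the "fuzzy oil drop" picture under simplifying assumptions: observed per-residue hydrophobicity (here random) is compared against an idealized 3D Gaussian profile centred on the molecule, and a Kullback-Leibler-style divergence quantifies the mismatch. It is a toy, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "protein": effective-atom positions and observed hydrophobicity values.
n_res = 120
coords = rng.normal(0.0, 8.0, size=(n_res, 3))    # centred on the molecule
h_observed = rng.random(n_res)
h_observed /= h_observed.sum()                    # normalise to a distribution

# Theoretical profile: idealized 3D Gaussian centred on the molecule.
sigma = coords.std(axis=0)
h_theory = np.exp(-0.5 * np.sum((coords / sigma) ** 2, axis=1))
h_theory /= h_theory.sum()

# Divergence between observed and theoretical distributions; larger values flag
# fragments whose hydrophobicity conflicts with the idealized core/mantle layout.
eps = 1e-12
divergence = np.sum(h_observed * np.log2((h_observed + eps) / (h_theory + eps)))
print(f"O|T divergence = {divergence:.3f} bits")
```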
Linking agent-based models and stochastic models of financial markets
Feng, Ling; Li, Baowen; Podobnik, Boris; Preis, Tobias; Stanley, H. Eugene
2012-01-01
It is well-known that financial asset returns exhibit fat-tailed distributions and long-term memory. These empirical features are the main objectives of modeling efforts using (i) stochastic processes to quantitatively reproduce these features and (ii) agent-based simulations to understand the underlying microscopic interactions. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that “fat” tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: They are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting. PMID:22586086
Ju, Jin Hyun; Shenoy, Sushila A; Crystal, Ronald G; Mezey, Jason G
2017-05-01
Genome-wide expression Quantitative Trait Loci (eQTL) studies in humans have provided numerous insights into the genetics of both gene expression and complex diseases. While the majority of eQTL identified in genome-wide analyses impact a single gene, eQTL that impact many genes are particularly valuable for network modeling and disease analysis. To enable the identification of such broad impact eQTL, we introduce CONFETI: Confounding Factor Estimation Through Independent component analysis. CONFETI is designed to address two conflicting issues when searching for broad impact eQTL: the need to account for non-genetic confounding factors that can lower the power of the analysis or produce broad impact eQTL false positives, and the tendency of methods that account for confounding factors to model broad impact eQTL as non-genetic variation. The key advance of the CONFETI framework is the use of Independent Component Analysis (ICA) to identify variation likely caused by broad impact eQTL when constructing the sample covariance matrix used for the random effect in a mixed model. We show that CONFETI has better performance than other mixed model confounding factor methods when considering broad impact eQTL recovery from synthetic data. We also used the CONFETI framework and these same confounding factor methods to identify eQTL that replicate between matched twin pair datasets in the Multiple Tissue Human Expression Resource (MuTHER), the Depression Genes Networks study (DGN), the Netherlands Study of Depression and Anxiety (NESDA), and multiple tissue types in the Genotype-Tissue Expression (GTEx) consortium. These analyses identified both cis-eQTL and trans-eQTL impacting individual genes, and CONFETI had better or comparable performance to other mixed model confounding factor analysis methods when identifying such eQTL. In these analyses, we were able to identify and replicate a few broad impact eQTL although the overall number was small even when applying CONFETI. In light of these results, we discuss the broad impact eQTL that have been previously reported from the analysis of human data and suggest that considerable caution should be exercised when making biological inferences based on these reported eQTL.
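A simplified sketch of the general idea, not the CONFETI implementation itself: decompose an expression matrix with ICA (scikit-learn's FastICA), keep a subset of components treated as non-genetic confounders, and build the sample-by-sample covariance used as the random-effect kernel in a mixed model. The matrix, component count, and the choice of which components to keep are all assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)

# Synthetic expression matrix (samples x genes); values are placeholders only.
n_samples, n_genes = 100, 500
expr = rng.normal(size=(n_samples, n_genes))

# Decompose expression into statistically independent components.
ica = FastICA(n_components=10, max_iter=1000, random_state=0)
S = ica.fit_transform(expr)            # (samples, components)

# In the CONFETI framework, components judged to carry broad genetic effects are
# excluded before building the kernel; this toy simply keeps an arbitrary subset.
confounders = S[:, :6]

# Sample-by-sample covariance used as the random-effect kernel in a mixed model.
K = confounders @ confounders.T
K /= np.trace(K) / n_samples           # normalise so the mean diagonal element is 1
print(K.shape)                         # (100, 100)
```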
1990-05-30
phase HPLC using an IBM Instruments Inc. model LC 9533 ternary liquid chromatograph attached to a model F9522 fixed UV module and a model F9523...acid analyses are done by separation and quantitation of phenylthiocarbamyl amino acid derivatives using a second IBM model LC 9533 ternary liquid...computer which controls the HPLC and an IBM Instruments Inc. model LC 9505 automatic sampler. The hemoglobin present in the effluent from large
A Statistical Framework for Protein Quantitation in Bottom-Up MS-Based Proteomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karpievitch, Yuliya; Stanley, Jeffrey R.; Taverner, Thomas
2009-08-15
Motivation: Quantitative mass spectrometry-based proteomics requires protein-level estimates and associated confidence measures. Challenges include the presence of low-quality or incorrectly identified peptides and informative missingness. Furthermore, models are required for rolling peptide-level information up to the protein level. Results: We present a statistical model that carefully accounts for informative missingness in peak intensities and allows unbiased, model-based, protein-level estimation and inference. The model is applicable to both label-based and label-free quantitation experiments. We also provide automated, model-based algorithms for filtering of proteins and peptides as well as imputation of missing values. Two LC/MS datasets are used to illustrate the methods. In simulation studies, our methods are shown to achieve substantially more discoveries than standard alternatives. Availability: The software has been made available in the open-source proteomics platform DAnTE (http://omics.pnl.gov/software/). Contact: adabney@stat.tamu.edu Supplementary information: Supplementary data are available at Bioinformatics online.
GLS-Finder: A Platform for Fast Profiling of Glucosinolates in Brassica Vegetables.
Sun, Jianghao; Zhang, Mengliang; Chen, Pei
2016-06-01
Mass spectrometry combined with related tandem techniques has become the most popular method for plant secondary metabolite characterization. We introduce a new strategy for fast profiling of glucosinolates, a class of important compounds in brassica vegetables, based on in-database searching, mass fragmentation behavior study, and formula prediction. A MATLAB script-based expert system computer program, "GLS-Finder", was developed. It is capable of qualitative and semi-quantitative analyses of glucosinolates in samples using data generated by ultrahigh-performance liquid chromatography-high-resolution accurate mass with multi-stage mass fragmentation (UHPLC-HRAM/MS(n)). A suite of bioinformatic tools was integrated into "GLS-Finder" to perform raw data deconvolution, peak alignment, glucosinolate putative assignment, semi-quantitation, and unsupervised principal component analysis (PCA). GLS-Finder was successfully applied to identify intact glucosinolates in 49 commonly consumed Brassica vegetable samples in the United States. It is believed that this work introduces a new way of fast data processing and interpretation for qualitative and quantitative analyses of glucosinolates, with greatly improved efficiency compared with manual identification.
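A toy sketch of the in-database accurate-mass matching step, assuming a tiny hand-made glucosinolate table and a ppm tolerance; the masses and tolerance are illustrative, not GLS-Finder's actual database or logic.

```python
# Toy in-database search: match observed accurate masses of [M-H]- ions against a
# small glucosinolate table within a ppm tolerance. Masses are illustrative only.
GLS_DB = {
    "glucoraphanin":  436.0406,
    "sinigrin":       358.0276,
    "glucobrassicin": 447.0536,
}

def match_mass(mz_observed, tol_ppm=5.0):
    hits = []
    for name, mz_theory in GLS_DB.items():
        ppm_error = (mz_observed - mz_theory) / mz_theory * 1e6
        if abs(ppm_error) <= tol_ppm:
            hits.append((name, round(ppm_error, 2)))
    return hits

for mz in (436.0410, 358.0290, 500.1234):
    print(mz, "->", match_mass(mz))    # the last mass should return no hit
```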
Mapping human preictal and ictal haemodynamic networks using simultaneous intracranial EEG-fMRI
Chaudhary, Umair J.; Centeno, Maria; Thornton, Rachel C.; Rodionov, Roman; Vulliemoz, Serge; McEvoy, Andrew W.; Diehl, Beate; Walker, Matthew C.; Duncan, John S.; Carmichael, David W.; Lemieux, Louis
2016-01-01
Accurately characterising the brain networks involved in seizure activity may have important implications for our understanding of epilepsy. Intracranial EEG-fMRI can be used to capture focal epileptic events in humans with exquisite electrophysiological sensitivity and allows for identification of brain structures involved in this phenomenon over the entire brain. We investigated ictal BOLD networks using the simultaneous intracranial EEG-fMRI (icEEG-fMRI) in a 30 year-old male undergoing invasive presurgical evaluation with bilateral depth electrode implantations in amygdalae and hippocampi for refractory temporal lobe epilepsy. One spontaneous focal electrographic seizure was recorded. The aims of the data analysis were firstly to map BOLD changes related to the ictal activity identified on icEEG and secondly to compare different fMRI modelling approaches. Visual inspection of the icEEG showed an onset dominated by beta activity involving the right amygdala and hippocampus lasting 6.4 s (ictal onset phase), followed by gamma activity bilaterally lasting 14.8 s (late ictal phase). The fMRI data was analysed using SPM8 using two modelling approaches: firstly, purely based on the visually identified phases of the seizure and secondly, based on EEG spectral dynamics quantification. For the visual approach the two ictal phases were modelled as ‘ON’ blocks convolved with the haemodynamic response function; in addition the BOLD changes during the 30 s preceding the onset were modelled using a flexible basis set. For the quantitative fMRI modelling approach two models were evaluated: one consisting of the variations in beta and gamma bands power, thereby adding a quantitative element to the visually-derived models, and another based on principal components analysis of the entire spectrogram in attempt to reduce the bias associated with the visual appreciation of the icEEG. BOLD changes related to the visually defined ictal onset phase were revealed in the medial and lateral right temporal lobe. For the late ictal phase, the BOLD changes were remote from the SOZ and in deep brain areas (precuneus, posterior cingulate and others). The two quantitative models revealed BOLD changes involving the right hippocampus, amygdala and fusiform gyrus and in remote deep brain structures and the default mode network-related areas. In conclusion, icEEG-fMRI allowed us to reveal BOLD changes within and beyond the SOZ linked to very localised ictal fluctuations in beta and gamma activity measured in the amygdala and hippocampus. Furthermore, the BOLD changes within the SOZ structures were better captured by the quantitative models, highlighting the interest in considering seizure-related EEG fluctuations across the entire spectrum. PMID:27114897
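A simplified sketch of how a quantitative EEG-derived fMRI regressor of the kind described can be built: band-limit a depth-electrode signal, take its power envelope, average it per fMRI volume, and convolve with a haemodynamic response function. The sampling rate, TR, band limits and the single-gamma HRF are simplifying assumptions, not the study's SPM8 pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import gamma

rng = np.random.default_rng(6)

fs, tr = 512.0, 3.0                      # EEG sampling rate (Hz) and fMRI TR (s); assumed
eeg = rng.normal(size=int(fs * 300))     # 5 min of synthetic depth-electrode signal

def band_power(signal, low, high, fs):
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, signal))) ** 2   # power envelope

gamma_power = band_power(eeg, 30.0, 80.0, fs)

# Average power within each TR to obtain one value per fMRI volume.
samples_per_tr = int(fs * tr)
n_vols = len(gamma_power) // samples_per_tr
power_per_vol = gamma_power[: n_vols * samples_per_tr].reshape(n_vols, -1).mean(axis=1)

# Simplified single-gamma HRF (peak around 5-6 s) sampled at the TR.
t = np.arange(0.0, 30.0, tr)
hrf = gamma.pdf(t, a=6.0, scale=1.0)
hrf /= hrf.sum()

regressor = np.convolve(power_per_vol, hrf)[:n_vols]   # band-power regressor for the GLM
print(regressor.shape)
```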
Using Inequality Measures to Incorporate Environmental Justice into Regulatory Analyses
Harper, Sam; Ruder, Eric; Roman, Henry A.; Geggel, Amelia; Nweke, Onyemaechi; Payne-Sturges, Devon; Levy, Jonathan I.
2013-01-01
Formally evaluating how specific policy measures influence environmental justice is challenging, especially in the context of regulatory analyses in which quantitative comparisons are the norm. However, there is a large literature on developing and applying quantitative measures of health inequality in other settings, and these measures may be applicable to environmental regulatory analyses. In this paper, we provide information to assist policy decision makers in determining the viability of using measures of health inequality in the context of environmental regulatory analyses. We conclude that quantification of the distribution of inequalities in health outcomes across social groups of concern, considering both within-group and between-group comparisons, would be consistent with both the structure of regulatory analysis and the core definition of environmental justice. Appropriate application of inequality indicators requires thorough characterization of the baseline distribution of exposures and risks, leveraging data generally available within regulatory analyses. Multiple inequality indicators may be applicable to regulatory analyses, and the choice among indicators should be based on explicit value judgments regarding the dimensions of environmental justice of greatest interest. PMID:23999551
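One concrete way to quantify within- and between-group inequality of a baseline exposure or risk distribution is the Theil index decomposition, sketched below on synthetic data for two groups of concern; the paper discusses a family of such indicators rather than this one specifically.

```python
import numpy as np

rng = np.random.default_rng(7)

def theil(x):
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    return np.mean((x / mu) * np.log(x / mu))

# Synthetic baseline exposure (or risk) values for two population groups of concern.
group_a = rng.lognormal(mean=0.0, sigma=0.4, size=500)
group_b = rng.lognormal(mean=0.3, sigma=0.6, size=300)
groups = [group_a, group_b]
all_x = np.concatenate(groups)
mu = all_x.mean()

t_total = theil(all_x)
# Within-group component: group Theil indices weighted by population and exposure share.
t_within = sum((len(g) / len(all_x)) * (g.mean() / mu) * theil(g) for g in groups)
# Between-group component: inequality among group means only.
t_between = sum((len(g) / len(all_x)) * (g.mean() / mu) * np.log(g.mean() / mu) for g in groups)

print(f"total={t_total:.4f}  within={t_within:.4f}  between={t_between:.4f}")
# The decomposition satisfies total = within + between (up to floating-point error).
```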
Time-series analyses of air pollution and mortality in the United States: a subsampling approach.
Moolgavkar, Suresh H; McClellan, Roger O; Dewanji, Anup; Turim, Jay; Luebeck, E Georg; Edwards, Melanie
2013-01-01
Hierarchical Bayesian methods have been used in previous papers to estimate national mean effects of air pollutants on daily deaths in time-series analyses. We obtained maximum likelihood estimates of the common national effects of the criteria pollutants on mortality based on time-series data from ≤ 108 metropolitan areas in the United States. We used a subsampling bootstrap procedure to obtain the maximum likelihood estimates and confidence bounds for common national effects of the criteria pollutants, as measured by the percentage increase in daily mortality associated with a unit increase in daily 24-hr mean pollutant concentration on the previous day, while controlling for weather and temporal trends. We considered five pollutants [PM10, ozone (O3), carbon monoxide (CO), nitrogen dioxide (NO2), and sulfur dioxide (SO2)] in single- and multipollutant analyses. Flexible ambient concentration-response models for the pollutant effects were considered as well. We performed limited sensitivity analyses with different degrees of freedom for time trends. In single-pollutant models, we observed significant associations of daily deaths with all pollutants. The O3 coefficient was highly sensitive to the degree of smoothing of time trends. Among the gases, SO2 and NO2 were most strongly associated with mortality. The flexible ambient concentration-response curve for O3 showed evidence of nonlinearity and a threshold at about 30 ppb. Differences between the results of our analyses and those reported from using the Bayesian approach suggest that estimates of the quantitative impact of pollutants depend on the choice of statistical approach, although results are not directly comparable because they are based on different data. In addition, the estimate of the O3-mortality coefficient depends on the amount of smoothing of time trends.
Krejci, Caroline C; Stone, Richard T; Dorneich, Michael C; Gilbert, Stephen B
2016-02-01
Factors influencing long-term viability of an intermediated regional food supply network (food hub) were modeled using agent-based modeling techniques informed by interview data gathered from food hub participants. Previous analyses of food hub dynamics focused primarily on financial drivers rather than social factors and have not used mathematical models. Based on qualitative and quantitative data gathered from 22 customers and 11 vendors at a midwestern food hub, an agent-based model (ABM) was created with distinct consumer personas characterizing the range of consumer priorities. A comparison study determined if the ABM behaved differently than a model based on traditional economic assumptions. Further simulation studies assessed the effect of changes in parameters, such as producer reliability and the consumer profiles, on long-term food hub sustainability. The persona-based ABM model produced different and more resilient results than the more traditional way of modeling consumers. Reduced producer reliability significantly reduced trade; in some instances, a modest reduction in reliability threatened the sustainability of the system. Finally, a modest increase in price-driven consumers at the outset of the simulation quickly resulted in those consumers becoming a majority of the overall customer base. Results suggest that social factors, such as desire to support the community, can be more important than financial factors. An ABM of food hub dynamics, based on human factors data gathered from the field, can be a useful tool for policy decisions. Similar approaches can be used for modeling customer dynamics with other sustainable organizations. © 2015, Human Factors and Ergonomics Society.
NASA Astrophysics Data System (ADS)
Kawata, Y.; Niki, N.; Ohmatsu, H.; Satake, M.; Kusumoto, M.; Tsuchida, T.; Aokage, K.; Eguchi, K.; Kaneko, M.; Moriyama, N.
2014-03-01
In this work, we investigate the potential usefulness of a topic model-based categorization of lung cancers as a quantitative CT biomarker for predicting the recurrence risk after curative resection. The elucidation of the subcategorization of pulmonary nodule types in CT images is an important preliminary step towards developing nodule management strategies that are specific to each patient. We categorize lung cancers by analyzing volumetric distributions of CT values within lung cancers via a topic model such as latent Dirichlet allocation. By applying our scheme to 3D CT images of non-small-cell lung cancer (maximum lesion size of 3 cm), we demonstrate the potential usefulness of the topic model-based categorization of lung cancers as quantitative CT biomarkers.
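A minimal sketch of the topic-model step under simplifying assumptions: each nodule is represented as a "bag of CT values" (a histogram of voxel counts per HU bin) and latent Dirichlet allocation yields per-nodule topic proportions that could serve as candidate CT biomarkers. The bin layout, counts and topic number are invented.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(8)

# Each "document" is one nodule: counts of voxels per CT-value bin (invented counts,
# e.g. 24 bins spanning roughly -1000 to 200 HU).
n_bins = 24
ground_glass_like = rng.poisson(lam=np.linspace(40, 5, n_bins), size=(30, n_bins))
solid_like        = rng.poisson(lam=np.linspace(5, 40, n_bins), size=(30, n_bins))
X = np.vstack([ground_glass_like, solid_like])

lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(X)       # per-nodule topic proportions
print(theta[:3].round(2))          # candidate quantitative CT biomarkers per nodule
# The topic proportions could then be related to recurrence in a survival model.
```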
Goya Jorge, Elizabeth; Rayar, Anita Maria; Barigye, Stephen J; Jorge Rodríguez, María Elisa; Sylla-Iyarreta Veitía, Maité
2016-06-07
A quantitative structure-activity relationship (QSAR) study of the 2,2-diphenyl-1-picrylhydrazyl (DPPH•) radical scavenging ability of 1373 chemical compounds was developed using DRAGON molecular descriptors (MD) and a neural network technique based on the multilayer perceptron (MLP). The built model demonstrated satisfactory performance for the training set (R² = 0.713) and the test set (Q²ext = 0.654), respectively. To gain greater insight into the relevance of the MD contained in the MLP model, sensitivity and principal component analyses were performed. Moreover, structural and mechanistic interpretation was carried out to comprehend the relationship between the variables in the model and the modeled property. The constructed MLP model was employed to predict the radical scavenging ability of a group of coumarin-type compounds. Finally, in order to validate the model's predictions, an in vitro assay for one of the compounds (4-hydroxycoumarin) was performed, showing satisfactory proximity between the experimental and predicted pIC50 values.
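A simplified sketch of an MLP-based QSAR workflow with scikit-learn, using random placeholder descriptors in place of the DRAGON MD and a synthetic endpoint; it reproduces the train/test R²-style evaluation pattern, not the published model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score

rng = np.random.default_rng(9)

# Placeholder descriptor matrix standing in for DRAGON MD, and a synthetic endpoint.
n_compounds, n_descriptors = 1373, 20
X = rng.normal(size=(n_compounds, n_descriptors))
y = X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=n_compounds)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)

print("R2 (training):", round(r2_score(y_tr, model.predict(X_tr)), 3))
print("Q2ext (test):", round(r2_score(y_te, model.predict(X_te)), 3))
```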
Hybrid Modeling for Testing Intelligent Software for Lunar-Mars Closed Life Support
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Nicholson, Leonard S. (Technical Monitor)
1999-01-01
Intelligent software is being developed for closed life support systems with biological components, for human exploration of the Moon and Mars. The intelligent software functions include planning/scheduling, reactive discrete control and sequencing, management of continuous control, and fault detection, diagnosis, and management of failures and errors. Four types of modeling information have been essential to system modeling and simulation to develop and test the software and to provide operational model-based what-if analyses: discrete component operational and failure modes; continuous dynamic performance within component modes, modeled qualitatively or quantitatively; configuration of flows and power among components in the system; and operations activities and scenarios. CONFIG, a multi-purpose discrete event simulation tool that integrates all four types of models for use throughout the engineering and operations life cycle, has been used to model components and systems involved in the production and transfer of oxygen and carbon dioxide in a plant-growth chamber and between that chamber and a habitation chamber with physicochemical systems for gas processing.
Calibration factors handbook : safety prediction models calibrated with Texas highway system.
DOT National Transportation Integrated Search
2009-10-01
Highway safety is an ongoing concern to the Texas Department of Transportation (TxDOT). As part of its : proactive commitment to improving highway safety, TxDOT is moving toward including quantitative safety : analyses earlier in the project developm...
High Temperature Degradation Mechanisms in Polymer Matrix Composites
NASA Technical Reports Server (NTRS)
Cunningham, Ronan A.
1996-01-01
Polymer matrix composites are increasingly used in demanding structural applications in which they may be exposed to harsh environments. The durability of such materials is a major concern, potentially limiting both the integrity of the structures and their useful lifetimes. The goal of the current investigation is to develop a mechanism-based model of the chemical degradation which occurs, such that given the external chemical environment and temperatures throughout the laminate, laminate geometry, and ply and/or constituent material properties, we can calculate the concentration of diffusing substances and extent of chemical degradation as functions of time and position throughout the laminate. This objective is met through the development and use of analytical models, coupled to an analysis-driven experimental program which offers both quantitative and qualitative information on the degradation mechanism. Preliminary analyses using a coupled diffusion/reaction model are used to gain insight into the physics of the degradation mechanisms and to identify crucial material parameters. An experimental program is defined based on the results of the preliminary analysis which allows the determination of the necessary material coefficients. Thermogravimetric analyses are carried out in nitrogen, air, and oxygen to provide quantitative information on thermal and oxidative reactions. Powdered samples are used to eliminate diffusion effects. Tests in both inert and oxidative environments allow the separation of thermal and oxidative contributions to specimen mass loss. The concentration dependency of the oxidative reactions is determined from the tests in pure oxygen. Short term isothermal tests at different temperatures are carried out on neat resin and unidirectional macroscopic specimens to identify diffusion effects. Mass loss, specimen shrinkage, the formation of degraded surface layers and surface cracking are recorded as functions of exposure time. Geometry effects in the neat resin, and anisotropic diffusion effects in the composites, are identified through the use of specimens with different aspect ratios. The data is used with the model to determine reaction coefficients and effective diffusion coefficients. The empirical and analytical correlations confirm the preliminary model results which suggest that mass loss at lower temperatures is dominated by oxidative reactions and that these reaction are limited by diffusion of oxygen from the surface. The mechanism-based model is able to successfully capture the basic physics of the degradation phenomena under a wide range of test conditions. The analysis-based test design is successful in separating out oxidative, thermal, and diffusion effects to allow the determination of material coefficients. This success confirms the basic picture of the process; however, a more complete understanding of some aspects of the physics are required before truly predictive capability can be achieved.
Beckett, Kate; Earthy, Sarah; Sleney, Jude; Barnes, Jo; Kellezi, Blerina; Barker, Marcus; Clarkson, Julie; Coffey, Frank; Elder, Georgina; Kendrick, Denise
2014-07-08
To explore views of service providers caring for injured people on: the extent to which services meet patients' needs and their perspectives on factors contributing to any identified gaps in service provision. Qualitative study nested within a quantitative multicentre longitudinal study assessing longer term impact of unintentional injuries in working age adults. Sampling frame for service providers was based on patient-reported service use in the quantitative study, patient interviews and advice of previously injured lay research advisers. Service providers' views were elicited through semistructured interviews. Data were analysed using thematic analysis. Participants were recruited from a range of settings and services in acute hospital trusts in four study centres (Bristol, Leicester, Nottingham and Surrey) and surrounding areas. 40 service providers from a range of disciplines. Service providers described two distinct models of trauma care: an 'ideal' model, informed by professional knowledge of the impact of injury and awareness of best models of care, and a 'real' model based on the realities of National Health Service (NHS) practice. Participants' 'ideal' model was consistent with standards of high-quality effective trauma care and while there were examples of services meeting the ideal model, 'real' care could also be fragmented and inequitable with major gaps in provision. Service provider accounts provide evidence of comprehensive understanding of patients' needs, awareness of best practice, compassion and research but reveal significant organisational and resource barriers limiting implementation of knowledge in practice. Service providers envisage an 'ideal' model of trauma care which is timely, equitable, effective and holistic, but this can differ from the care currently provided. Their experiences provide many suggestions for service improvements to bridge the gap between 'real' and 'ideal' care. Using service provider views to inform service design and delivery could enhance the quality, patient experience and outcomes of care. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Development of year 2020 goals for the National HIV/AIDS Strategy for the United States.
Holtgrave, David R
2014-04-01
In July, 2010, President Barack Obama released the National HIV/AIDS Strategy (NHAS). The NHAS set forth ambitious goals for the year 2015. These goals were potentially achievable had the appropriate level of resources been invested; however, investment at the necessary scale has not been made and the 2015 goals now may well be out of reach. Therefore, we propose that an updated NHAS be developed with goals for the year 2020 clearly articulated. For the purposes of fostering discussion on this important topic, we propose bold yet achievable quantitative 2020 goals based on previously published economic and mathematical modeling analyses.
Machine learning methods for classifying human physical activity from on-body accelerometers.
Mannini, Andrea; Sabatini, Angelo Maria
2010-01-01
The use of on-body wearable sensors is widespread in several academic and industrial domains. Of great interest are their applications in ambulatory monitoring and pervasive computing systems; here, the quantitative analysis of human motion and its automatic classification are the main computational tasks to be pursued. In this paper, we discuss how human physical activity can be classified using on-body accelerometers, with a major emphasis devoted to the computational algorithms employed for this purpose. In particular, we motivate our current interest in classifiers based on Hidden Markov Models (HMMs). An example is illustrated and discussed by analysing a dataset of accelerometer time series.
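As a concrete illustration of HMM-based activity classification (not the authors' implementation; the states, transition matrix, emission probabilities, and observation bins below are invented), this sketch decodes the most likely activity sequence from discretized accelerometer features with the Viterbi algorithm.

```python
# Viterbi decoding of a toy activity HMM over binned accelerometer features.
import numpy as np

states = ["rest", "walk", "run"]                      # hypothetical activity states
A = np.array([[0.90, 0.08, 0.02],                     # state transition probabilities
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
B = np.array([[0.70, 0.25, 0.05],                     # emission probs over 3 feature bins
              [0.20, 0.60, 0.20],
              [0.05, 0.25, 0.70]])
pi = np.array([0.6, 0.3, 0.1])                        # initial state distribution

def viterbi(obs, A, B, pi):
    """Return the most probable hidden-state path for a sequence of observation bins."""
    n_states, T = A.shape[0], len(obs)
    logv = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    logv[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = logv[t - 1][:, None] + np.log(A)     # scores[i, j]: best path ending i -> j
        back[t] = np.argmax(scores, axis=0)
        logv[t] = scores[back[t], np.arange(n_states)] + np.log(B[:, obs[t]])
    path = [int(np.argmax(logv[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 0, 1, 1, 2, 2, 1], A, B, pi))
```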
Using Movies to Analyse Gene Circuit Dynamics in Single Cells
Locke, James CW; Elowitz, Michael B
2010-01-01
Many bacterial systems rely on dynamic genetic circuits to control critical processes. A major goal of systems biology is to understand these behaviours in terms of individual genes and their interactions. However, traditional techniques based on population averages wash out critical dynamics that are either unsynchronized between cells or driven by fluctuations, or ‘noise,’ in cellular components. Recently, the combination of time-lapse microscopy, quantitative image analysis, and fluorescent protein reporters has enabled direct observation of multiple cellular components over time in individual cells. In conjunction with mathematical modelling, these techniques are now providing powerful insights into genetic circuit behaviour in diverse microbial systems. PMID:19369953
Li, Weiguo; Zhang, Zhuoli; Gordon, Andrew C.; Chen, Jeane; Nicolai, Jodi; Lewandowski, Robert J.; Omary, Reed A.
2016-01-01
Purpose: To investigate the qualitative and quantitative impacts of labeling yttrium microspheres with increasing amounts of superparamagnetic iron oxide (SPIO) material for magnetic resonance (MR) imaging in phantom and rodent models. Materials and Methods: Animal model studies were approved by the institutional Animal Care and Use Committee. The r2* relaxivity for each of four microsphere SPIO compositions was determined from 32 phantoms constructed with agarose gel and in eight concentrations from each of the four compositions. Intrahepatic transcatheter infusion procedures were performed in rats by using each of the four compositions before MR imaging to visualize distributions within the liver. For quantitative studies, doses of 5, 10, 15, or 20 mg 2% SPIO-labeled yttrium microspheres were infused into 24 rats (six rats per group). MR imaging R2* measurements were used to quantify the dose delivered to each liver. Pearson correlation, analysis of variance, and intraclass correlation analyses were performed to compare MR imaging measurements in phantoms and animal models. Results: Increased r2* relaxivity was observed with incremental increases of SPIO microsphere content. R2* measurements of the 2% SPIO-labeled yttrium microsphere concentration were well correlated with known phantom concentrations (R2 = 1.00, P < .001) over a broader linear range than observed for the other three compositions. Microspheres were heterogeneously distributed within each liver; increasing microsphere SPIO content produced marked signal voids. R2*-based measurements of 2% SPIO-labeled yttrium microsphere delivery were well correlated with infused dose (intraclass correlation coefficient, 0.98; P < .001). Conclusion: MR imaging R2* measurements of yttrium microspheres labeled with 2% SPIO can quantitatively depict in vivo intrahepatic biodistribution in a rat model. © RSNA, 2015 Online supplemental material is available for this article. PMID:26313619
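A small numeric sketch of the calibration logic described (invented values, not the study's phantom data): fit a line of R2* against known microsphere concentration, then invert it to estimate an unknown concentration from a measured in vivo R2*.

```python
# Linear R2*-vs-concentration calibration and its inversion (illustrative values).
import numpy as np

conc_mg_ml = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])        # phantom concentrations
r2star_s = np.array([18.0, 33.0, 49.0, 80.0, 142.0, 268.0])  # measured R2* (1/s)

relaxivity, baseline = np.polyfit(conc_mg_ml, r2star_s, 1)   # slope = r2* relaxivity

r2star_liver = 95.0                                          # hypothetical in vivo value
estimated_conc = (r2star_liver - baseline) / relaxivity
print(f"r2* relaxivity ~ {relaxivity:.1f} s^-1 per mg/mL; "
      f"estimated concentration ~ {estimated_conc:.2f} mg/mL")
```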
Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models
Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong
2015-01-01
In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955
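For readers unfamiliar with the multivariate test statistics named above, a hedged sketch follows (synthetic data, not the authors' functional linear model, which uses genetic-variant functions over a region rather than a single SNP): a joint test of one variant against several quantitative traits, adjusting for a covariate, reporting Pillai's trace, Hotelling-Lawley trace, and Wilks' lambda.

```python
# Joint multivariate association test of one variant with three traits (illustrative).
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 500
snp = rng.binomial(2, 0.3, n)                 # additive genotype coding 0/1/2
age = rng.normal(50, 10, n)                   # covariate
effect = 0.2 * snp                            # shared (pleiotropic) effect on all traits
df = pd.DataFrame({
    "ldl": effect + 0.01 * age + rng.normal(size=n),
    "hdl": -effect + rng.normal(size=n),
    "tg":  effect + rng.normal(size=n),
    "snp": snp,
    "age": age,
})

# One joint test of the variant across all traits, adjusting for age.
fit = MANOVA.from_formula("ldl + hdl + tg ~ snp + age", data=df)
print(fit.mv_test())                          # Wilks, Pillai, Hotelling-Lawley, Roy
```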
Warmerdam, P G; de Koning, H J; Boer, R; Beemsterboer, P M; Dierks, M L; Swart, E; Robra, B P
1997-01-01
STUDY OBJECTIVE: To estimate quantitatively the impact of the quality of mammographic screening (in terms of sensitivity and specificity) on the effects and costs of nationwide breast cancer screening. DESIGN: Three plausible "quality" scenarios for a biennial breast cancer screening programme for women aged 50-69 in Germany were analysed in terms of costs and effects using the Microsimulation Screening Analysis model on breast cancer screening and the natural history of breast cancer. Firstly, sensitivity and specificity in the expected situation (or "baseline" scenario) were estimated from a model based analysis of empirical data from 35,000 screening examinations in two German pilot projects. In the second "high quality" scenario, these properties were based on the more favourable diagnostic results from breast cancer screening projects and the nationwide programme in The Netherlands. Thirdly, a worst case, "low quality" hypothetical scenario with a 25% lower sensitivity than that experienced in The Netherlands was analysed. SETTING: The epidemiological and social situation in Germany in relation to mass screening for breast cancer. RESULTS: In the "baseline" scenario, an 11% reduction in breast cancer mortality was expected in the total German female population, i.e. 2100 breast cancer deaths would be prevented per year. It was estimated that the "high quality" scenario, based on Dutch experience, would lead to the prevention of an additional 200 deaths per year and would also cut the number of false positive biopsy results by half. The cost per life year gained varied from Deutsche mark (DM) 15,000 in the "high quality" scenario to DM 21,000 in the "low quality" setting. CONCLUSIONS: Up to 20% of the total costs of a screening programme can be spent on quality improvement in order to achieve a substantially higher reduction in mortality and reduce undesirable side effects while retaining the same cost effectiveness ratio as that estimated from the German data. PMID:9196649
Burroughs, Nigel J.; Köhler, Karsten; Miloserdov, Vladimir; Dustin, Michael L.; van der Merwe, P. Anton; Davis, Daniel M.
2011-01-01
Immune synapses formed by T and NK cells both show segregation of the integrin ICAM1 from other proteins such as CD2 (T cell) or KIR (NK cell). However, the mechanism by which these proteins segregate remains unclear; one key hypothesis is a redistribution based on protein size. Simulations of this mechanism qualitatively reproduce observed segregation patterns, but only in certain parameter regimes. Verifying that these parameter constraints in fact hold has not been possible to date, as this requires a quantitative coupling of theory to experimental data. Here, we address this challenge, developing a new methodology for analysing and quantifying image data and its integration with biophysical models. Specifically, we fit a binding kinetics model to 2-colour fluorescence data for cytoskeleton-independent synapses (2 and 3D) and test whether the observed inverse correlation between fluorophores conforms to size-dependent exclusion, and further, whether patterned states are predicted when model parameters are estimated on individual synapses. All synapses analysed satisfy these conditions, demonstrating that the mechanisms of protein redistribution have identifiable signatures in their spatial patterns. We conclude that energy processes implicit in protein size-based segregation can drive the patternation observed in individual synapses, at least for the specific examples tested, such that no additional processes need to be invoked. This implies that biophysical processes within the membrane interface have a crucial impact on cell:cell communication and cell signalling, governing protein interactions and protein aggregation. PMID:21829338
Quantitative Thermochemical Measurements in High-Pressure Gaseous Combustion
NASA Technical Reports Server (NTRS)
Kojima, Jun J.; Fischer, David G.
2012-01-01
We present our strategic experiment and thermochemical analyses of combustion flow using subframe burst gating (SBG) Raman spectroscopy. This unconventional laser diagnostic technique has a promising ability to enhance the accuracy of quantitative scalar measurements in a point-wise single-shot fashion. In the presentation, we briefly describe an experimental methodology that generates a transferable calibration standard for the routine implementation of the diagnostics in hydrocarbon flames. The diagnostic technology was applied to simultaneous measurements of temperature and chemical species in a swirl-stabilized turbulent flame with gaseous methane fuel at elevated pressure (17 atm). Statistical analyses of the space-/time-resolved thermochemical data provide insights into the nature of the mixing process and its impact on the subsequent combustion process in the model combustor.
Hasse, Katelyn; Neylon, John; Sheng, Ke; Santhanam, Anand P
2016-03-01
Breast elastography is a critical tool for improving the targeted radiotherapy treatment of breast tumors. Current breast radiotherapy imaging protocols only involve prone and supine CT scans. There is a lack of knowledge on the quantitative accuracy with which breast elasticity can be systematically measured using only prone and supine CT datasets. The purpose of this paper is to describe a quantitative elasticity estimation technique for breast anatomy using only these supine/prone patient postures. Using biomechanical, high-resolution breast geometry obtained from CT scans, a systematic assessment was performed in order to determine the feasibility of this methodology for clinically relevant elasticity distributions. A model-guided inverse analysis approach is presented in this paper. A graphics processing unit (GPU)-based linear elastic biomechanical model was employed as a forward model for the inverse analysis with the breast geometry in a prone position. The elasticity estimation was performed using a gradient-based iterative optimization scheme and a fast-simulated annealing (FSA) algorithm. Numerical studies were conducted to systematically analyze the feasibility of elasticity estimation. For simulating gravity-induced breast deformation, the breast geometry was anchored at its base, resembling the chest-wall/breast tissue interface. Ground-truth elasticity distributions were assigned to the model, representing tumor presence within breast tissue. Model geometry resolution was varied to estimate its influence on convergence of the system. A priori information was approximated and utilized to record the effect on time and accuracy of convergence. The role of the FSA process was also recorded. A novel error metric that combined elasticity and displacement error was used to quantify the systematic feasibility study. For the authors' purposes, convergence was set to be obtained when each voxel of tissue was within 1 mm of ground-truth deformation. The authors' analyses showed that a ∼97% model convergence was systematically observed with no a priori information. Varying the model geometry resolution showed no significant accuracy improvements. The GPU-based forward model enabled the inverse analysis to be completed within 10-70 min. Using a priori information about the underlying anatomy, the computation time decreased by as much as 50%, while accuracy improved from 96.81% to 98.26%. The use of FSA was observed to allow the iterative estimation methodology to converge more precisely. By utilizing a forward iterative approach to solve the inverse elasticity problem, this work indicates the feasibility and potential of the fast reconstruction of breast tissue elasticity using supine/prone patient postures.
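The following toy sketch (not the authors' GPU pipeline; the one-dimensional forward model, annealing schedule, and all numbers are stand-ins) illustrates the model-guided inverse idea: propose an elasticity map, compare predicted with "observed" displacements, and accept or reject perturbations with a fast-simulated-annealing rule.

```python
# Toy inverse-elasticity estimation by simulated annealing (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n_elem = 20
E_true = np.full(n_elem, 2.0)
E_true[8:12] = 8.0                          # a stiff "tumour-like" inclusion
load = 1.0

def forward(E):
    """Toy forward model: element displacement inversely proportional to stiffness."""
    return load / E

u_obs = forward(E_true)                     # synthetic "observed" displacements

def cost(E):
    return np.sum((forward(E) - u_obs) ** 2)

E = np.full(n_elem, 4.0)                    # initial homogeneous guess
best, best_cost = E.copy(), cost(E)
T0 = 1.0
for k in range(1, 20001):
    T = T0 / k                              # fast-annealing-style schedule ~ 1/k
    idx = rng.integers(n_elem)
    trial = E.copy()
    trial[idx] = np.clip(trial[idx] + rng.normal(scale=0.5), 0.5, 20.0)
    dc = cost(trial) - cost(E)
    if dc < 0 or rng.random() < np.exp(-dc / max(T, 1e-9)):
        E = trial
        if cost(E) < best_cost:
            best, best_cost = E.copy(), cost(E)

print(np.round(best, 2))                    # recovered map shows the stiff inclusion
```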
NASA Astrophysics Data System (ADS)
Blanke, Bruno; Speich, Sabrina; Rusciano, Emanuela
2015-01-01
We use the tracer and velocity fields of a climatological ocean model to investigate the ability of Argo-like data to accurately estimate water mass movements and transformations, in the style of analyses commonly applied to the output of ocean general circulation models. To this end, we introduce an algorithm for the reconstruction of a fully non-divergent three-dimensional velocity field from the simple knowledge of the model vertical density profiles and 1000-m horizontal velocity components. The validation of the technique consists of comparing the resulting pathways for Antarctic Intermediate Water in the South Atlantic Ocean to equivalent reference results based on the full model information available for velocity and tracers. We show that the inclusion of a wind-induced Ekman pumping and of a well-thought-out expression for vertical velocity at the level of the intermediate waters is essential for the reliable reproduction of quantitative Lagrangian analyses. Neglecting the seasonal variability of the velocity and tracer fields is not a significant source of errors, at least well below the permanent thermocline. These results give us confidence in the success of the adaptation of the algorithm to true gridded Argo data for investigating the dynamics of flows in the ocean interior.
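A sketch under stated assumptions (a toy gridded density field and simple thermal-wind shear; not the paper's algorithm, which additionally includes Ekman pumping and a non-divergence constraint): horizontal velocities above a 1000-m reference level are built by integrating geostrophic shear upward and adding the reference velocities.

```python
# Geostrophic shear integration from a 1000-m reference level (toy fields).
import numpy as np

g, rho0, f = 9.81, 1025.0, -1.0e-4           # f < 0 for a Southern Hemisphere box
dz, dx, dy = 50.0, 1.0e5, 1.0e5              # grid spacing in metres

nz, ny, nx = 21, 10, 10                      # level 0 = 1000 m, level nz-1 = near surface
z = np.arange(nz)[:, None, None]
y = np.arange(ny)[None, :, None]
x = np.arange(nx)[None, None, :]
rho = 1027.0 - 0.001 * z + 0.0005 * y + 0.0002 * x   # toy density field (kg/m^3)

u_ref = np.zeros((ny, nx))                   # hypothetical 1000-m drift velocities
v_ref = np.zeros((ny, nx))

drho_dy = np.gradient(rho, dy, axis=1)
drho_dx = np.gradient(rho, dx, axis=2)
du_dz = g / (f * rho0) * drho_dy             # thermal-wind shear
dv_dz = -g / (f * rho0) * drho_dx

u = u_ref + np.cumsum(du_dz, axis=0) * dz    # integrate shear upward from the reference
v = v_ref + np.cumsum(dv_dz, axis=0) * dz
print(u[-1].mean(), v[-1].mean())            # near-surface geostrophic flow estimate
```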
A Comparison of Four Software Programs for Implementing Decision Analytic Cost-Effectiveness Models.
Hollman, Chase; Paulden, Mike; Pechlivanoglou, Petros; McCabe, Christopher
2017-08-01
The volume and technical complexity of both academic and commercial research using decision analytic modelling has increased rapidly over the last two decades. The range of software programs used for their implementation has also increased, but it remains true that a small number of programs account for the vast majority of cost-effectiveness modelling work. We report a comparison of four software programs: TreeAge Pro, Microsoft Excel, R and MATLAB. Our focus is on software commonly used for building Markov models and decision trees to conduct cohort simulations, given their predominance in the published literature around cost-effectiveness modelling. Our comparison uses three qualitative criteria as proposed by Eddy et al.: "transparency and validation", "learning curve" and "capability". In addition, we introduce the quantitative criterion of processing speed. We also consider the cost of each program to academic users and commercial users. We rank the programs based on each of these criteria. We find that, whilst Microsoft Excel and TreeAge Pro are good programs for educational purposes and for producing the types of analyses typically required by health technology assessment agencies, the efficiency and transparency advantages of programming languages such as MATLAB and R become increasingly valuable when more complex analyses are required.
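To show the kind of model each package has to implement, here is a minimal three-state Markov cohort simulation in Python (state names, transition probabilities, costs, and utilities are invented; this is not taken from the paper).

```python
# Three-state Markov cohort model: healthy / sick / dead (illustrative numbers).
import numpy as np

P = np.array([[0.85, 0.10, 0.05],      # annual transition probabilities
              [0.00, 0.70, 0.30],
              [0.00, 0.00, 1.00]])
cost = np.array([500.0, 5000.0, 0.0])  # annual cost per person in each state
qaly = np.array([0.95, 0.60, 0.0])     # annual utility per person in each state
discount = 0.035

cohort = np.array([1000.0, 0.0, 0.0])  # start everyone healthy
total_cost = total_qaly = 0.0
for year in range(40):                 # 40-year time horizon
    df = 1.0 / (1.0 + discount) ** year
    total_cost += df * cohort @ cost
    total_qaly += df * cohort @ qaly
    cohort = cohort @ P                # advance the cohort one cycle

print(f"cost per person:  {total_cost / 1000:.0f}")
print(f"QALYs per person: {total_qaly / 1000:.2f}")
```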
Nguyen Hoang, Anh Thu; Chen, Puran; Björnfot, Sofia; Högstrand, Kari; Lock, John G; Grandien, Alf; Coles, Mark; Svensson, Mattias
2014-09-01
This manuscript describes technical advances allowing manipulation and quantitative analyses of human DC migratory behavior in lung epithelial tissue. DCs are hematopoietic cells essential for the maintenance of tissue homeostasis and the induction of tissue-specific immune responses. Important functions include cytokine production and migration in response to infection for the induction of proper immune responses. To design appropriate strategies to exploit human DC functional properties in lung tissue for the purpose of clinical evaluation, e.g., candidate vaccination and immunotherapy strategies, we have developed a live-imaging assay based on our previously described organotypic model of the human lung. This assay allows provocations and subsequent quantitative investigations of DC functional properties under conditions mimicking morphological and functional features of the in vivo parental tissue. We present protocols to set up and prepare tissue models for 4D (x, y, z, time) fluorescence-imaging analysis that allow spatial and temporal studies of human DCs in live epithelial tissue, followed by flow cytometry analysis of DCs retrieved from digested tissue models. This model system can be useful for elucidating incompletely defined pathways controlling DC functional responses to infection and inflammation in lung epithelial tissue, as well as the efficacy of locally administered candidate interventions. © 2014 Society for Leukocyte Biology.
Quantitative studies on structure-DPPH• scavenging activity relationships of food phenolic acids.
Jing, Pu; Zhao, Shu-Juan; Jian, Wen-Jie; Qian, Bing-Jun; Dong, Ying; Pang, Jie
2012-11-01
Phenolic acids are potent antioxidants, yet the quantitative structure-activity relationships of phenolic acids remain unclear. The purpose of this study was to establish 3D-QSAR models able to predict phenolic acids with high DPPH• scavenging activity and to understand their structure-activity relationships. The model has been established using a training set of compounds with cross-validated q2 = 0.638/0.855, non-cross-validated r2 = 0.984/0.986, standard error of estimate = 0.236/0.216, and F = 139.126/208.320 for the best CoMFA/CoMSIA models. The predictive ability of the models was validated with the correlation coefficient r2(pred) = 0.971/0.996 (>0.6) for each model. Additionally, the contour map results suggested that structural characteristics of phenolic acids favorable for high DPPH• scavenging activity might include: (1) bulky and/or electron-donating substituent groups on the phenol ring; (2) electron-donating groups at the meta-position and/or hydrophobic groups at the meta-/ortho-position; (3) hydrogen-bond donor/electron-donating groups at the ortho-position. The results have been confirmed based on structural analyses of phenolic acids and their DPPH• scavenging data from eight recent publications. The findings may provide deeper insight into the antioxidant mechanisms and provide useful information for selecting phenolic acids for free radical scavenging properties.
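A hedged sketch of how the reported q2 and r2 statistics are computed for a latent-variable regression (random stand-in descriptors and activities; the published models use CoMFA/CoMSIA 3D field descriptors rather than these synthetic features).

```python
# Cross-validated q2 and fitted r2 for a PLS model (synthetic stand-in data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(25, 40))                            # 25 compounds x 40 descriptors
y = X[:, :3].sum(axis=1) + 0.3 * rng.normal(size=25)     # synthetic activity

pls = PLSRegression(n_components=3)
y_fit = pls.fit(X, y).predict(X).ravel()
y_loo = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()

ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - np.sum((y - y_fit) ** 2) / ss_tot               # non-cross-validated r2
q2 = 1 - np.sum((y - y_loo) ** 2) / ss_tot               # leave-one-out cross-validated q2
print(round(r2, 3), round(q2, 3))
```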
Hou, Chen; Amunugama, Kaushalya
2015-07-01
The relationship between energy expenditure and longevity has been a central theme in aging studies. Empirical studies have yielded controversial results, which cannot be reconciled by existing theories. In this paper, we present a simple theoretical model based on first principles of energy conservation and allometric scaling laws. The model takes into considerations the energy tradeoffs between life history traits and the efficiency of the energy utilization, and offers quantitative and qualitative explanations for a set of seemingly contradictory empirical results. We show that oxidative metabolism can affect cellular damage and longevity in different ways in animals with different life histories and under different experimental conditions. Qualitative data and the linearity between energy expenditure, cellular damage, and lifespan assumed in previous studies are not sufficient to understand the complexity of the relationships. Our model provides a theoretical framework for quantitative analyses and predictions. The model is supported by a variety of empirical studies, including studies on the cellular damage profile during ontogeny; the intra- and inter-specific correlations between body mass, metabolic rate, and lifespan; and the effects on lifespan of (1) diet restriction and genetic modification of growth hormone, (2) the cold and exercise stresses, and (3) manipulations of antioxidant. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
Ma, Gao; Xu, Xiao-Quan; Hu, Hao; Su, Guo-Yi; Shen, Jie; Shi, Hai-Bin; Wu, Fei-Yun
2018-01-01
To compare the diagnostic performance of readout-segmented echo-planar imaging (RS-EPI)-based diffusion kurtosis imaging (DKI) and that of diffusion-weighted imaging (DWI) for differentiating malignant from benign masses in the head and neck region. Between December 2014 and April 2016, we retrospectively enrolled 72 consecutive patients with head and neck masses who had undergone an RS-EPI-based DKI scan (b values of 0, 500, 1000, and 1500 s/mm²) for pretreatment evaluation. Imaging data were post-processed using monoexponential and diffusion kurtosis (DK) models for quantitation of the apparent diffusion coefficient (ADC), the apparent diffusion for Gaussian distribution (Dapp), and the apparent kurtosis coefficient (Kapp). Unpaired t tests and Mann-Whitney U tests were used to compare differences in quantitative parameters between the malignant and benign groups. Receiver operating characteristic curve analyses were performed to determine and compare the diagnostic ability of the quantitative parameters in predicting malignancy. The malignant group demonstrated significantly lower ADC (0.754 ± 0.167 vs. 1.222 ± 0.420, p < 0.001) and Dapp (1.029 ± 0.226 vs. 1.640 ± 0.445, p < 0.001) and higher Kapp (1.344 ± 0.309 vs. 0.715 ± 0.249, p < 0.001) than the benign group. Using a combination of Dapp and Kapp as the diagnostic index, significantly better differentiating performance was achieved than using ADC alone (area under the curve: 0.956 vs. 0.876, p = 0.042). Compared to DWI, DKI could provide additional data related to tumor heterogeneity with significantly better differentiating performance. Its derived quantitative metrics could serve as a promising imaging biomarker for differentiating malignant from benign masses in the head and neck region.
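An illustrative fit (synthetic signals, not patient data) of the two models compared in the abstract: the monoexponential model S(b) = S0·exp(-b·ADC) and the diffusion-kurtosis model S(b) = S0·exp(-b·Dapp + b²·Dapp²·Kapp/6), both fitted over the same b-values by non-linear least squares.

```python
# Fitting monoexponential ADC and diffusion-kurtosis models to synthetic signals.
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0., 500., 1000., 1500.])               # s/mm^2
S0, D_true, K_true = 1.0, 1.0e-3, 1.2                # D in mm^2/s
signal = S0 * np.exp(-b * D_true + (b * D_true) ** 2 * K_true / 6.0)
signal *= 1 + 0.01 * np.random.default_rng(0).normal(size=b.size)  # mild noise

def mono(b, s0, adc):
    return s0 * np.exp(-b * adc)

def dki(b, s0, d, k):
    return s0 * np.exp(-b * d + (b * d) ** 2 * k / 6.0)

(s0_m, adc), _ = curve_fit(mono, b, signal, p0=[1.0, 1e-3])
(s0_k, d_app, k_app), _ = curve_fit(dki, b, signal, p0=[1.0, 1e-3, 1.0])
print(f"ADC={adc:.2e}  Dapp={d_app:.2e}  Kapp={k_app:.2f}")
```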
QTest: Quantitative Testing of Theories of Binary Choice.
Regenwetter, Michel; Davis-Stober, Clintin P; Lim, Shiau Hong; Guo, Ying; Popova, Anna; Zwilling, Chris; Cha, Yun-Shil; Messner, William
2014-01-01
The goal of this paper is to make modeling and quantitative testing accessible to behavioral decision researchers interested in substantive questions. We provide a novel, rigorous, yet very general, quantitative diagnostic framework for testing theories of binary choice. This permits the nontechnical scholar to proceed far beyond traditionally rather superficial methods of analysis, and it permits the quantitatively savvy scholar to triage theoretical proposals before investing effort into complex and specialized quantitative analyses. Our theoretical framework links static algebraic decision theory with observed variability in behavioral binary choice data. The paper is supplemented with a custom-designed public-domain statistical analysis package, the QTest software. We illustrate our approach with a quantitative analysis using published laboratory data, including tests of novel versions of "Random Cumulative Prospect Theory." A major asset of the approach is the potential to distinguish decision makers who have a fixed preference and commit errors in observed choices from decision makers who waver in their preferences.
[New method of mixed gas infrared spectrum analysis based on SVM].
Bai, Peng; Xie, Wen-Jun; Liu, Jun-Hua
2007-07-01
A new method of infrared spectrum analysis based on a support vector machine (SVM) for mixture gas was proposed. The kernel function in the SVM was used to map the seriously overlapping absorption spectrum into a high-dimensional space; after this transformation, the high-dimensional data could be processed in the original space, so a regression calibration model was established, and this regression calibration model was then applied to analyze the concentration of each component gas. It was also shown that the regression calibration model with SVM could be used for component recognition of the mixture gas. The method was applied to the analysis of different data samples. Factors that affect the model, such as the scan interval, the range of the wavelength, the kernel function, and the penalty coefficient C, are discussed. Experimental results show that the maximum mean absolute error of the component concentration is 0.132%, and the component recognition accuracy is higher than 94%. The problems of the overlapping absorption spectrum, of using the same method for qualitative and quantitative analysis, and of the limited number of training samples were solved. The method could be used in other mixture gas infrared spectrum analyses, and has both theoretical and application value.
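A hedged sketch of the two uses described (synthetic Gaussian absorption bands, not the paper's spectra or tuning): an RBF-kernel SVM regresses a component concentration from an overlapping spectrum, and a companion classifier recognises which component dominates.

```python
# SVM regression (concentration) and classification (component recognition) on
# synthetic, overlapping absorption spectra.
import numpy as np
from sklearn.svm import SVR, SVC

rng = np.random.default_rng(0)
wav = np.linspace(0, 1, 200)

def spectrum(conc_a, conc_b):
    """Two overlapping absorption bands plus noise."""
    band_a = np.exp(-((wav - 0.45) / 0.08) ** 2)
    band_b = np.exp(-((wav - 0.55) / 0.08) ** 2)
    return conc_a * band_a + conc_b * band_b + 0.01 * rng.normal(size=wav.size)

conc = rng.uniform(0, 1, size=(150, 2))
X = np.array([spectrum(a, b) for a, b in conc])

svr = SVR(kernel="rbf", C=10.0).fit(X, conc[:, 0])        # calibrate component A
labels = (conc[:, 0] > conc[:, 1]).astype(int)             # which component dominates
svc = SVC(kernel="rbf", C=10.0).fit(X, labels)

X_test = np.array([spectrum(0.3, 0.7)])
print(svr.predict(X_test), svc.predict(X_test))            # ~0.3 and class 0
```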
Multiplexed MRM-based assays for the quantitation of proteins in mouse plasma and heart tissue.
Percy, Andrew J; Michaud, Sarah A; Jardim, Armando; Sinclair, Nicholas J; Zhang, Suping; Mohammed, Yassene; Palmer, Andrea L; Hardie, Darryl B; Yang, Juncong; LeBlanc, Andre M; Borchers, Christoph H
2017-04-01
The mouse is the most commonly used laboratory animal, with more than 14 million mice being used for research each year in North America alone. The number and diversity of mouse models is increasing rapidly through genetic engineering strategies, but detailed characterization of these models is still challenging because most phenotypic information is derived from time-consuming histological and biochemical analyses. To expand the biochemists' toolkit, we generated a set of targeted proteomic assays for mouse plasma and heart tissue, utilizing bottom-up LC/MRM-MS with isotope-labeled peptides as internal standards. Protein quantitation was performed using reverse standard curves, with LC-MS platform and curve performance evaluated by quality control standards. The assays comprising the final panel (101 peptides for 81 proteins in plasma; 227 peptides for 159 proteins in heart tissue) have been rigorously developed under a fit-for-purpose approach and utilize stable-isotope labeled peptides for every analyte to provide high-quality, precise relative quantitation. In addition, the peptides have been tested to be interference-free and the assay is highly multiplexed, with reproducibly determined protein concentrations spanning >4 orders of magnitude. The developed assays have been used in a small pilot study to demonstrate their application to molecular phenotyping or biomarker discovery/verification studies. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
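A generic standard-curve sketch (invented numbers, not the published assay values): regress the light/heavy peak-area ratio against known spiked amounts and invert the fitted line to quantify an unknown sample.

```python
# Linear calibration of peak-area ratio vs. spiked amount, then back-calculation.
import numpy as np

spiked = np.array([1, 5, 10, 50, 100, 500, 1000.0])              # fmol on column
ratio = np.array([0.011, 0.052, 0.098, 0.51, 1.02, 4.9, 10.1])   # light/heavy area ratio

slope, intercept = np.polyfit(spiked, ratio, 1)                  # calibration line
unknown_ratio = 0.75
concentration = (unknown_ratio - intercept) / slope
print(f"estimated amount: {concentration:.1f} fmol")
```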
Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.
Blutke, Andreas; Wanke, Rüdiger
2018-03-06
In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often-limited number of available animals of these models, establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analysis methods, including analyses not specified at the time point of sampling, represents a meaningful approach to take full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing and trimming directions, are relevant factors determining the generalizability and usability of the specimen for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for generation of representative, multi-purpose biobank specimens from porcine tissues is presented. The methods described here include determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, determination of the extent of tissue shrinkage related to histological embedding of samples, and generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical uniform random (VUR) sections.
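As an illustration of the point-counting volume estimate that underlies the volume-weighted sampling step (made-up slab thickness, grid spacing, and counts, not values from the protocol):

```python
# Cavalieri / point-counting volume estimate: V = slab thickness x area per point x points hit.
slice_thickness_cm = 1.0                 # distance between parallel organ slabs
grid_spacing_cm = 0.5                    # test-point grid spacing on each cut surface
area_per_point = grid_spacing_cm ** 2
points_hit = [34, 52, 61, 58, 40, 22]    # points falling on tissue, per slab

volume_cm3 = slice_thickness_cm * area_per_point * sum(points_hit)
print(f"estimated organ volume: {volume_cm3:.1f} cm^3")
```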
Socioscientific Argumentation: The effects of content knowledge and morality
NASA Astrophysics Data System (ADS)
Sadler, Troy D.; Donnelly, Lisa A.
2006-10-01
Broad support exists within the science education community for the incorporation of socioscientific issues (SSI) and argumentation in the science curriculum. This study investigates how content knowledge and morality contribute to the quality of SSI argumentation among high school students. We employed a mixed-methods approach: 56 participants completed tests of content knowledge and moral reasoning as well as interviews, related to SSI topics, which were scored based on a rubric for argumentation quality. Multiple regression analyses revealed no statistically significant relationships among content knowledge, moral reasoning, and argumentation quality. Qualitative analyses of the interview transcripts supported the quantitative results in that participants very infrequently revealed patterns of content knowledge application. However, most of the participants did perceive the SSI as moral problems. We propose a “Threshold Model of Knowledge Transfer” to account for the relationship between content knowledge and argumentation quality. Implications for science education are discussed.
DigitalHuman (DH): An Integrative Mathematical Model of Human Physiology
NASA Technical Reports Server (NTRS)
Hester, Robert L.; Summers, Richard L.; lIescu, Radu; Esters, Joyee; Coleman, Thomas G.
2010-01-01
Mathematical models and simulation are important tools in discovering the key causal relationships governing physiological processes and improving medical intervention when physiological complexity is a central issue. We have developed a model of integrative human physiology called DigitalHuman (DH), consisting of ~5000 variables modeling human physiology and describing cardiovascular, renal, respiratory, endocrine, neural and metabolic physiology. Users can view time-dependent solutions and interactively introduce perturbations by altering numerical parameters to investigate new hypotheses. The variables, parameters and quantitative relationships as well as all other model details are described in XML text files. All aspects of the model, including the mathematical equations describing the physiological processes, are written in XML open source, text-readable files. Model structure is based upon empirical data of physiological responses documented within the peer-reviewed literature. The model can be used to understand proposed physiological mechanisms and physiological interactions that may not be otherwise intuitively evident. Some of the current uses of this model include the analyses of renal control of blood pressure, the central role of the liver in creating and maintaining insulin resistance, and the mechanisms causing orthostatic hypotension in astronauts. Additionally, the open source aspect of the modeling environment allows any investigator to add detailed descriptions of human physiology to test new concepts. The model accurately predicts both qualitative and, more importantly, quantitative changes in clinically and experimentally observed responses. DigitalHuman provides scientists a modeling environment to understand the complex interactions of integrative physiology. This research was supported by NIH HL 51971, NSF EPSCoR, and NASA.
NASA Astrophysics Data System (ADS)
Mandayam Doddamane, Prabha
2011-12-01
Considerable research, policy, and programmatic efforts have been dedicated to addressing the participation of particular populations in STEM for decades. Each of these efforts claims equity-related goals; yet, they heavily frame the problem, through pervasive STEM pipeline model discourse, in terms of national needs, workforce supply, and competitiveness. This particular framing of the problem may, indeed, be counter to equity goals, especially when paired with policy that largely relies on statistical significance and broad aggregation of data over exploring the identities and experiences of the populations targeted for equitable outcomes in that policy. In this study, I used the mixed-methods approach of critical discourse and critical quantitative analyses to understand how the pipeline model ideology has become embedded within academic discourse, research, and data surrounding STEM education and work and to provide alternatives for quantitative analysis. Using critical theory as a lens, I first conducted a critical discourse analysis of contemporary STEM workforce studies with a particular eye to pipeline ideology. Next, I used that analysis to inform logistic regression analyses of the 2006 SESTAT data. This quantitative analysis compared and contrasted different ways of thinking about identity and retention. Overall, the findings of this study show that many subjective choices are made in the construction of the large-scale datasets used to inform much national science and engineering policy and that these choices greatly influence likelihood of retention outcomes.
Asynchronous adaptive time step in quantitative cellular automata modeling
Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan
2004-01-01
Background The behaviors of cells in metazoans are context dependent, thus large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, building on this, how to address the heavy time consumption of simulation. Results Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing an asynchronous adaptive time step in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup rate of 4–5 is achieved in the given example. Conclusions Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. A distributed and adaptive time step is a practical solution in a cellular automata environment. PMID:15222901
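A small sketch of the core idea (not the authors' language-based CA system; the ODEs, tolerance, and step-doubling error estimate are illustrative choices): each cell integrates its own dynamics with a locally adapted step, so quiescent cells take few large steps while active cells take many small ones.

```python
# Per-cell adaptive time stepping via step doubling with explicit Euler (illustrative).
import numpy as np

def step_cell(y, t, dt, f, tol=1e-3):
    """Advance one cell by dt, adapting dt via step-doubling error control."""
    while True:
        full = y + dt * f(t, y)                             # one Euler step
        half = y + 0.5 * dt * f(t, y)
        two_half = half + 0.5 * dt * f(t + 0.5 * dt, half)  # two half steps
        err = abs(two_half - full)
        if err <= tol:
            return two_half, t + dt, min(2 * dt, 1.0)       # accept, allow step growth
        dt *= 0.5                                           # reject, retry with smaller step

fast = lambda t, y: -3.0 * y           # a cell with fast dynamics
slow = lambda t, y: -0.01 * y          # a nearly quiescent cell

for name, f in [("fast cell", fast), ("slow cell", slow)]:
    y, t, dt, steps = 1.0, 0.0, 0.1, 0
    while t < 10.0 - 1e-12:
        y, t, dt = step_cell(y, t, min(dt, 10.0 - t), f)
        steps += 1                      # counts accepted steps only
    print(f"{name}: {steps} accepted steps, y(10) ~ {y:.4f}")
```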
Burns, Darren K; Jones, Andrew P; Suhrcke, Marc
2016-03-01
Markets throughout the world have been reducing barriers to international trade and investment in recent years. The resulting increases in levels of international trade and investment have subsequently generated research interest into the potential population health impact. We present a systematic review of quantitative studies investigating the relationship between international trade, foreign direct investment and non-nutritional health outcomes. Articles were systematically collected from the SCOPUS, PubMed, EconLit and Web of Science databases. Due to the heterogeneous nature of the evidence considered, the 16 included articles were subdivided into individual level data analyses, selected country analyses and international panel analyses. Articles were then quality assessed using a tool developed as part of the project. Nine of the studies were assessed to be high quality, six as medium quality, and one as low quality. The evidence from the quantitative literature suggests that overall, there appears to be a beneficial association between international trade and population health. There was also evidence of the importance of foreign direct investment, yet a lack of research considering the direction of causality. Taken together, quantitative research into the relationship between trade and non-nutritional health indicates trade to be beneficial, yet this body of research is still in its infancy. Future quantitative studies based on this foundation will provide a stronger basis on which to inform relevant national and international institutions about the health consequences of trade policies. Copyright © 2016 Elsevier Ltd. All rights reserved.
QUANTITATIVE PROCEDURES FOR NEUROTOXICOLOGY RISK ASSESSMENT
In this project, previously published information on a biologically based dose-response model for brain development was used to quantitatively evaluate critical neurodevelopmental processes, and to assess potential chemical impacts on early brain development. This model has been ex...
How to make predictions about future infectious disease risks
Woolhouse, Mark
2011-01-01
Formal, quantitative approaches are now widely used to make predictions about the likelihood of an infectious disease outbreak, how the disease will spread, and how to control it. Several well-established methodologies are available, including risk factor analysis, risk modelling and dynamic modelling. Even so, predictive modelling is very much the ‘art of the possible’, which tends to drive research effort towards some areas and away from others which may be at least as important. Building on the undoubted success of quantitative modelling of the epidemiology and control of human and animal diseases such as AIDS, influenza, foot-and-mouth disease and BSE, attention needs to be paid to developing a more holistic framework that captures the role of the underlying drivers of disease risks, from demography and behaviour to land use and climate change. At the same time, there is still considerable room for improvement in how quantitative analyses and their outputs are communicated to policy makers and other stakeholders. A starting point would be generally accepted guidelines for ‘good practice’ for the development and the use of predictive models. PMID:21624924
Modeling with Young Students--Quantitative and Qualitative.
ERIC Educational Resources Information Center
Bliss, Joan; Ogborn, Jon; Boohan, Richard; Brosnan, Tim; Mellar, Harvey; Sakonidis, Babis
1999-01-01
A project created tasks and tools to investigate quality and nature of 11- to 14-year-old pupils' reasoning with quantitative and qualitative computer-based modeling tools. Tasks and tools were used in two innovative modes of learning: expressive, where pupils created their own models, and exploratory, where pupils investigated an expert's model.…
Rhoda, Anthea; Cunningham, Natalie; Azaria, Simon; Urimubenshi, Gerard
2015-09-28
The provision of rehabilitation differs between developed and developing countries, and this could impact the outcomes of post-stroke rehabilitation. The aim of this paper is to present the provision of in-patient stroke rehabilitation. In addition, the challenges with participation experienced by individuals post discharge are also presented. Qualitative and quantitative research methods were used to collect data. The quantitative data was collected using a retrospective survey of stroke patients admitted to hospitals over a three- to five-year period. Quantitative data was captured on a validated data capture sheet and analysed descriptively. The qualitative data was collected using interviews from a purposively and conveniently selected sample, audio-taped and analysed thematically. The qualitative data was presented within the participation model. A total of 168 medical folders were reviewed for a South African sample, 139 for a Rwandan sample and 145 for a Tanzanian sample. The mean age ranged from 62.6 (13.78) years in the South African sample to 56.0 (17.4) in the Rwandan sample. While a total of 98 % of South African stroke patients received physiotherapy, only 39.4 % of Rwandan patients received physiotherapy. From the qualitative interviews, it became clear that the stroke patients had participation restrictions. When conceptualised within the Participation Model, the participation restrictions experienced by the stroke patients were a lack of accomplishment, an inability to engage in previous roles and a perception of having health problems. With the exception of Rwanda, stroke patients in the countries studied are admitted to settings early post stroke, allowing for implementation of effective acute interventions. The participants were experiencing challenges which included a lack of transport and the physical geographic surroundings in the rural settings not being conducive to wheelchair use. Stroke patients admitted to hospitals in certain African countries could receive limited in-patient therapeutic interventions. With the exception of barriers in the physical environment, stroke patients in developing countries where resources are limited experience the same participation restrictions as their counterparts in developed countries where resources are more freely available. Rehabilitation interventions in these developing countries should therefore be community-based, focussing on intervening in the physical environment.
Philip Ye, X; Liu, Lu; Hayes, Douglas; Womac, Alvin; Hong, Kunlun; Sokhansanj, Shahab
2008-10-01
The objectives of this research were to determine the variation of chemical composition across botanical fractions of cornstover, and to probe the potential of Fourier transform near-infrared (FT-NIR) techniques in qualitatively classifying separated cornstover fractions and in quantitatively analyzing chemical compositions of cornstover by developing calibration models to predict chemical compositions of cornstover based on FT-NIR spectra. Large variations of cornstover chemical composition, providing the wide calibration ranges required by a reliable calibration model, were achieved by manually separating the cornstover samples into six botanical fractions, and their chemical compositions were determined by conventional wet chemical analyses, which confirmed that chemical composition varies significantly among different botanical fractions of cornstover. In descending order of total saccharide content, the botanical fractions are husk, sheath, pith, rind, leaf, and node. Based on FT-NIR spectra acquired on the biomass, classification by Soft Independent Modeling of Class Analogy (SIMCA) was employed to conduct qualitative classification of cornstover fractions, and partial least squares (PLS) regression was used for quantitative chemical composition analysis. SIMCA was successfully demonstrated in classifying botanical fractions of cornstover. The developed PLS model yielded root mean square errors of prediction (RMSEP, %w/w) of 0.92, 1.03, 0.17, 0.27, 0.21, 1.12, and 0.57 for glucan, xylan, galactan, arabinan, mannan, lignin, and ash, respectively. The results showed the potential of FT-NIR techniques in combination with multivariate analysis to be utilized by biomass feedstock suppliers, bioethanol manufacturers, and bio-power producers in order to better manage bioenergy feedstocks and enhance bioconversion.
Tracking boundary movement and exterior shape modelling in lung EIT imaging.
Biguri, A; Grychtol, B; Adler, A; Soleimani, M
2015-06-01
Electrical impedance tomography (EIT) has shown significant promise for lung imaging. One key challenge for EIT in this application is the movement of electrodes during breathing, which introduces artefacts in reconstructed images. Various approaches have been proposed to compensate for electrode movement, but no comparison of these approaches is available. This paper analyses boundary model mismatch and electrode movement in lung EIT. The aim is to evaluate the extent to which various algorithms tolerate movement, and to determine if a patient-specific model is required for EIT lung imaging. Movement data are simulated from a CT-based model, and image analysis is performed using quantitative figures of merit. The electrode movement is modelled based on expected values of chest movement, and an extended Jacobian method is proposed to make use of exterior boundary tracking. Results show that dynamic boundary tracking is the most robust method against any movement, but is computationally more expensive. Simultaneous electrode movement and conductivity reconstruction algorithms show increased robustness compared to conductivity-only reconstruction. The results of this comparative study can help develop a better understanding of the impact of shape model mismatch and electrode movement in lung EIT.
Evolution of epigenetic regulation in vertebrate genomes
Lowdon, Rebecca F.; Jang, Hyo Sik; Wang, Ting
2016-01-01
Empirical models of sequence evolution have spurred progress in the field of evolutionary genetics for decades. We are now realizing the importance and complexity of the eukaryotic epigenome. While epigenome analysis has been applied to genomes from single cell eukaryotes to human, comparative analyses are still relatively few, and computational algorithms to quantify epigenome evolution remain scarce. Accordingly, a quantitative model of epigenome evolution remains to be established. Here we review the comparative epigenomics literature and synthesize its overarching themes. We also suggest one mechanism, transcription factor binding site turnover, which relates sequence evolution to epigenetic conservation or divergence. Lastly, we propose a framework for how the field can move forward to build a coherent quantitative model of epigenome evolution. PMID:27080453
Low-frequency quantitative ultrasound imaging of cell death in vivo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sadeghi-Naini, Ali; Falou, Omar; Czarnota, Gregory J.
Purpose: Currently, no clinical imaging modality is used routinely to assess tumor response to cancer therapies within hours to days of the delivery of treatment. Here, the authors demonstrate the efficacy of ultrasound at a clinically relevant frequency to quantitatively detect changes in tumors in response to cancer therapies using preclinical mouse models. Methods: Conventional low-frequency and corresponding high-frequency ultrasound (ranging from 4 to 28 MHz) were used along with quantitative spectroscopic and signal envelope statistical analyses on data obtained from xenograft tumors treated with chemotherapy, x-ray radiation, as well as a novel vascular targeting microbubble therapy. Results: Ultrasound-based spectroscopic biomarkers indicated significant changes in cell-death associated parameters in responsive tumors. Specifically, changes in the midband fit, spectral slope, and 0-MHz intercept biomarkers were investigated for different types of treatment and demonstrated cell-death related changes. The midband fit and 0-MHz intercept biomarkers derived from low-frequency data demonstrated increases ranging approximately from 0 to 6 dBr and 0 to 8 dBr, respectively, depending on the treatments administered. These data paralleled results observed for high-frequency ultrasound data. Statistical analysis of the ultrasound signal envelope was performed as an alternative method to obtain histogram-based biomarkers and provided confirmatory results. Histological analysis of tumor specimens indicated up to 61% cell death present in the tumors depending on the treatments administered, consistent with quantitative ultrasound findings indicating cell death. Ultrasound-based spectroscopic biomarkers demonstrated a good correlation with histological morphological findings indicative of cell death (r² = 0.71, 0.82; p < 0.001). Conclusions: In summary, the results provide preclinical evidence, for the first time, that quantitative ultrasound used at a clinically relevant frequency, in addition to high-frequency ultrasound, can detect tissue changes associated with cell death in vivo in response to cancer treatments.
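For orientation, the named spectral biomarkers are conventionally obtained by fitting a line to the calibrated backscatter power spectrum over the analysis bandwidth; the following is a purely illustrative sketch with a synthetic spectrum, not the study's data.

```python
# Spectral slope, 0-MHz intercept, and midband fit from a synthetic power spectrum.
import numpy as np

freq_mhz = np.linspace(3, 7, 81)                       # analysis bandwidth
rng = np.random.default_rng(0)
power_db = -40 + 2.5 * freq_mhz + rng.normal(0, 1.0, freq_mhz.size)  # calibrated spectrum

slope, intercept0 = np.polyfit(freq_mhz, power_db, 1)  # dB/MHz and dB at 0 MHz
midband_fit = slope * freq_mhz.mean() + intercept0     # fitted value at the band centre

print(f"spectral slope  = {slope:.2f} dB/MHz")
print(f"0-MHz intercept = {intercept0:.1f} dB")
print(f"midband fit     = {midband_fit:.1f} dB")
```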
Houssaye, Alexandra; Taverne, Maxime; Cornette, Raphaël
2018-05-01
Long bone inner structure and cross-sectional geometry display a strong functional signal, leading to convergences, and are widely analyzed in comparative anatomy at small and large taxonomic scales. Long bone microanatomical studies have essentially been conducted on transverse sections but also on a few longitudinal ones. Recent studies highlighted the interest in analyzing variations of the inner structure along the diaphysis using a qualitative as well as a quantitative approach. With the development of microtomography, it has become possible to study three-dimensional (3D) bone microanatomy and, in more detail, the form-function relationships of these features. This study focused on the selection of quantitative parameters to describe in detail the cross-sectional shape changes and distribution of the osseous tissue along the diaphysis. Two-dimensional (2D) virtual transverse sections were also performed in the two usual reference planes and results were compared with those obtained based on the whole diaphysis analysis. The sample consisted of 14 humeri and 14 femora of various mammalian taxa that are essentially terrestrial. Comparative quantitative analyses between different datasets made it possible to highlight the parameters that are strongly impacted by size and phylogeny and the redundant ones, and thus to estimate their relevance for use in form-function analyses. The analysis illustrated that results based on 2D transverse sections are similar for both sectional planes; thus, even if a strong bias exists when mixing sections from the two reference planes in the same analysis, it would not be problematic to use either one plane or the other in comparative studies. However, this may no longer hold for taxa showing a much stronger variation in bone microstructure along the diaphysis. Finally, the analysis demonstrated the significant contribution of the parameters describing variations along the diaphysis, and thus the interest in performing 3D analyses; this should be even more fruitful for heterogeneous diaphyses. In addition, covariation analyses showed that there is a strong interest in removing the size effect to assess the differences in the microstructure of the humerus and femur. This methodological study provides a reference for future quantitative analyses on long bone inner structure and should make it possible, through a detailed knowledge of each descriptive parameter, to better interpret results from the multivariate analyses associated with these studies. This will have direct implications for studies in vertebrate anatomy, but also in paleontology and anthropology. © 2018 Anatomical Society.
Yamasaki, Tomoteru; Maeda, Jun; Fujinaga, Masayuki; Nagai, Yuji; Hatori, Akiko; Yui, Joji; Xie, Lin; Nengaki, Nobuki; Zhang, Ming-Rong
2014-01-01
The metabotropic glutamate receptor type 1 (mGluR1) is a novel target protein for the development of new drugs against central nervous system disorders. Recently, we have developed 11C-labeled PET probes 11C-ITMM and 11C-ITDM, which demonstrate similar profiles, for imaging of mGluR1. In the present study, we compared 11C-ITMM and 11C-ITDM PET imaging and quantitative analysis in the monkey brain. Respective PET images showed similar distribution of uptake in the cerebellum, thalamus, and cingulate cortex. Slightly higher uptake was detected with 11C-ITDM than with 11C-ITMM. For the kinetic analysis using the two-tissue compartment model (2-TCM), the distribution volume (VT) in the cerebellum, an mGluR1-rich region in the brain, was 2.5 mL·cm⁻³ for 11C-ITMM and 3.6 mL·cm⁻³ for 11C-ITDM. By contrast, the VT in the pons, a region with negligible mGluR1 expression, was similarly low for both radiopharmaceuticals. Based on these results, we performed noninvasive PET quantitative analysis with general reference tissue models using the time-activity curve of the pons as a reference region. We confirmed the relationship and differences between the reference tissue models and 2-TCM using correlational scatter plots and Bland-Altman plot analyses. Although the scattergrams of both radiopharmaceuticals showed over- or underestimations of the reference tissue model-based binding potentials against 2-TCM, there were no significant differences between the two kinetic analysis models. In conclusion, we demonstrated for the first time the potential of 11C-ITMM and 11C-ITDM for noninvasive PET quantitative analysis using reference tissue models. In addition, our findings suggest that 11C-ITDM may be superior to 11C-ITMM as a PET probe for imaging of mGluR1, because regional VT values in PET with 11C-ITDM were higher than those of 11C-ITMM. Clinical studies of 11C-ITDM in humans will be necessary in the future. PMID:24795840
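The Bland-Altman comparison mentioned in this abstract is a standard way to quantify over- or underestimation of reference-tissue-model binding potentials relative to 2-TCM values. The sketch below shows that comparison step only; it is not the authors' code, and the input arrays are hypothetical regional estimates.

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(bp_2tcm, bp_reference_model):
    """Bland-Altman comparison of binding potentials from two kinetic models.

    bp_2tcm, bp_reference_model : 1D arrays of regional BP estimates
    (hypothetical inputs; one value per brain region or scan).
    """
    bp_2tcm = np.asarray(bp_2tcm, dtype=float)
    bp_ref = np.asarray(bp_reference_model, dtype=float)

    mean = (bp_2tcm + bp_ref) / 2.0
    diff = bp_ref - bp_2tcm                      # over-/underestimation vs 2-TCM
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)                # 95% limits of agreement

    plt.scatter(mean, diff)
    plt.axhline(bias, linestyle="-")
    plt.axhline(bias + loa, linestyle="--")
    plt.axhline(bias - loa, linestyle="--")
    plt.xlabel("Mean BP of the two models")
    plt.ylabel("Reference model minus 2-TCM")
    plt.title("Bland-Altman plot")
    return bias, (bias - loa, bias + loa)
```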
ERIC Educational Resources Information Center
Hammad, Waheed; Hallinger, Philip
2017-01-01
This review of research analyzed topics, conceptual models and research methods employed in 62 EDLM studies from Arab societies published between 2000 and 2016. Systematic review methods were used to identify relevant studies published in nine core international EDLM journals. Quantitative analyses identified patterns within this set of Arab…
Scherr, Rachel E; Linnell, Jessica D; Smith, Martin H; Briggs, Marilyn; Bergman, Jacqueline; Brian, Kelley M; Dharmar, Madan; Feenstra, Gail; Hillhouse, Carol; Keen, Carl L; Nguyen, Lori M; Nicholson, Yvonne; Ontai, Lenna; Schaefer, Sara E; Spezzano, Theresa; Steinberg, Francene M; Sutter, Carolyn; Wright, Janel E; Young, Heather M; Zidenberg-Cherr, Sheri
2014-01-01
To provide a framework for implementation of multicomponent, school-based nutrition interventions. This article describes the research methods for the Shaping Healthy Choices Program, a model to improve nutrition and health-related knowledge and behaviors among school-aged children. Longitudinal, pretest/posttest, randomized, controlled intervention. Four elementary schools in California. Fourth-grade students at intervention (n = 252) and control (n = 238) schools and their parents and teachers. Power analyses demonstrate that a minimum of 159 students per group will be needed to achieve sufficient power. The sample size was determined using the variables of nutrition knowledge, vegetable preference score, and body mass index percentile. A multicomponent school-based nutrition education intervention over 1 academic year, followed by activities to support sustainability of the program. Dietary and nutrition knowledge and behavior, critical thinking skills, healthy food preferences and consumption, and physical activity will be measured using a nutrition knowledge questionnaire, a food frequency questionnaire, a vegetable preferences assessment tool, the Test of Basic Science Process Skills, digital photography of plate waste, PolarActive accelerometers, anthropometrics, a parent questionnaire, and the School and Community Actions for Nutrition survey. Evaluation will include quantitative and qualitative measures. Quantitative data will be analyzed using paired t tests, chi-square tests, Mann-Whitney U tests, and regression modeling, with P = .05 used to determine statistical significance. Copyright © 2014 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
Code of Federal Regulations, 2014 CFR
2014-07-01
... PM2.5 violations”) must be based on quantitative analysis using the applicable air quality models... either: (i) Quantitative methods that represent reasonable and common professional practice; or (ii) A...) The hot-spot demonstration required by § 93.116 must be based on quantitative analysis methods for the...
Multi-Scale Modeling to Improve Single-Molecule, Single-Cell Experiments
NASA Astrophysics Data System (ADS)
Munsky, Brian; Shepherd, Douglas
2014-03-01
Single-cell, single-molecule experiments are producing an unprecedented amount of data to capture the dynamics of biological systems. When integrated with computational models, observations of spatial, temporal and stochastic fluctuations can yield powerful quantitative insight. We concentrate on experiments that localize and count individual molecules of mRNA. These high precision experiments have large imaging and computational processing costs, and we explore how improved computational analyses can dramatically reduce overall data requirements. In particular, we show how analyses of spatial, temporal and stochastic fluctuations can significantly enhance parameter estimation results for small, noisy data sets. We also show how full probability distribution analyses can constrain parameters with far less data than bulk analyses or statistical moment closures. Finally, we discuss how a systematic modeling progression from simple to more complex analyses can reduce total computational costs by orders of magnitude. We illustrate our approach using single-molecule, spatial mRNA measurements of Interleukin 1-alpha mRNA induction in human THP1 cells following stimulation. Our approach could improve the effectiveness of single-molecule gene regulation analyses for many other processes.
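The contrast drawn here between full-distribution and bulk analyses can be illustrated with a deliberately simple stand-in model: constitutive expression, whose steady-state mRNA copy-number distribution is Poisson with mean k/γ. The sketch below fits that distribution by maximum likelihood; it is not the authors' model or data, and the counts are hypothetical.

```python
import numpy as np
from scipy.stats import poisson
from scipy.optimize import minimize_scalar

# Hypothetical single-cell mRNA counts (one integer per imaged cell).
counts = np.array([3, 7, 5, 0, 2, 9, 4, 6, 1, 5])

# Constitutive expression with transcription rate k and degradation rate gamma:
# the birth-death master equation has a Poisson(k/gamma) steady state, so a
# single snapshot only identifies the ratio k/gamma.
def neg_log_likelihood(mean_copy_number):
    return -poisson.logpmf(counts, mean_copy_number).sum()

fit = minimize_scalar(neg_log_likelihood, bounds=(1e-3, 100.0), method="bounded")
print("distribution-based estimate of k/gamma:", fit.x)
print("bulk (mean-only) estimate of k/gamma:  ", counts.mean())
# For richer models (e.g. transcriptional bursting), the full distribution also
# constrains parameters (burst size, burst frequency) that the mean alone cannot
# separate, which is the point the abstract makes about data requirements.
```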
On the influence of tyre and structural properties on the stability of bicycles
NASA Astrophysics Data System (ADS)
Doria, Alberto; Roa Melo, Sergio Daniel
2018-06-01
In recent years the Whipple Carvallo Bicycle Model has been extended to analyse the high-speed stability of bicycles. Various researchers have developed models taking into account the effects of front frame compliance and tyre properties; nonetheless, a systematic analysis has not yet been carried out. This paper aims at analysing parametrically the influence of front frame compliance and tyre properties on the open loop stability of bicycles. Some indexes based on the eigenvalues of the dynamic system are defined to evaluate bicycle stability quantitatively. The parametric analysis is carried out with a factorial design approach to determine the most influential parameters. A commuting and a racing bicycle are considered and numerical results show different effects of the various parameters on each bicycle. In the commuting bicycle, the tyre properties have greater influence than front frame compliance, and the weave mode has the main effect on stability. Conversely, in the racing bicycle, the front frame compliance parameters have greater influence than tyre properties, and the wobble mode has the main effect on stability.
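Eigenvalue-based stability indexes of the kind mentioned above are typically computed from linearized lean/steer equations of the form M q̈ + v C1 q̇ + (g K0 + v² K2) q = 0, scanned over forward speed. The sketch below shows that scan for a Whipple-type two-degree-of-freedom model; the matrix values are placeholders for illustration only and do not correspond to the bicycles studied in the paper.

```python
import numpy as np

# Placeholder 2x2 matrices for the lean/steer equations of a Whipple-type model:
# M*q'' + v*C1*q' + (g*K0 + v^2*K2)*q = 0.  Values are illustrative only.
M  = np.array([[80.8, 2.3], [2.3, 0.3]])
C1 = np.array([[0.0, 33.9], [-0.85, 1.7]])
K0 = np.array([[-80.9, -2.6], [-2.6, -0.8]])
K2 = np.array([[0.0, 76.6], [0.0, 2.65]])
g = 9.81

def eigenvalues(v):
    """Eigenvalues of the linearised model at forward speed v (m/s)."""
    K = g * K0 + v ** 2 * K2
    C = v * C1
    Minv = np.linalg.inv(M)
    # First-order (companion) form of the quadratic eigenvalue problem.
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-Minv @ K, -Minv @ C]])
    return np.linalg.eigvals(A)

speeds = np.linspace(0.5, 12.0, 60)
max_real = np.array([eigenvalues(v).real.max() for v in speeds])
stable = speeds[max_real < 0]
if stable.size:
    print(f"self-stable speed range approx. {stable.min():.1f}-{stable.max():.1f} m/s")
# A scalar stability index can then be defined from this curve, e.g. the width
# of the self-stable speed range or the largest real part over a speed interval.
```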
A Method for Quantifying, Visualising, and Analysing Gastropod Shell Form
Liew, Thor-Seng; Schilthuizen, Menno
2016-01-01
Quantitative analysis of organismal form is an important component for almost every branch of biology. Although generally considered an easily-measurable structure, the quantification of gastropod shell form is still a challenge because many shells lack homologous structures and have a spiral form that is difficult to capture with linear measurements. In view of this, we adopt the idea of theoretical modelling of shell form, in which the shell form is the product of aperture ontogeny profiles in terms of aperture growth trajectory that is quantified as curvature and torsion, and of aperture form that is represented by size and shape. We develop a workflow for the analysis of shell forms based on the aperture ontogeny profile, starting from the procedure of data preparation (retopologising the shell model), via data acquisition (calculation of aperture growth trajectory, aperture form and ontogeny axis), and data presentation (qualitative comparison between shell forms) and ending with data analysis (quantitative comparison between shell forms). We evaluate our methods on representative shells of the genera Opisthostoma and Plectostoma, which exhibit great variability in shell form. The outcome suggests that our method is a robust, reproducible, and versatile approach for the analysis of shell form. Finally, we propose several potential applications of our methods in functional morphology, theoretical modelling, taxonomy, and evolutionary biology. PMID:27280463
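The aperture growth trajectory described above is quantified as curvature and torsion. A minimal numerical version of those two quantities for a discretely sampled 3D curve is sketched below, assuming the trajectory is already available as an ordered array of points; the shell retopologising and axis-extraction steps of the workflow are not reproduced.

```python
import numpy as np

def curvature_torsion(points):
    """Discrete curvature and torsion along an ordered 3D curve.

    points : (N, 3) array of trajectory coordinates (hypothetical input,
    e.g. the aperture centroid sampled along shell ontogeny).
    """
    r1 = np.gradient(points, axis=0)          # first derivative
    r2 = np.gradient(r1, axis=0)              # second derivative
    r3 = np.gradient(r2, axis=0)              # third derivative

    cross12 = np.cross(r1, r2)
    speed = np.linalg.norm(r1, axis=1)
    cross_norm = np.linalg.norm(cross12, axis=1)

    curvature = cross_norm / np.maximum(speed ** 3, 1e-12)
    torsion = np.einsum("ij,ij->i", cross12, r3) / np.maximum(cross_norm ** 2, 1e-12)
    return curvature, torsion

# Sanity check: a helix has constant curvature a/(a^2+c^2) and torsion c/(a^2+c^2).
t = np.linspace(0, 6 * np.pi, 400)
helix = np.column_stack([np.cos(t), np.sin(t), 0.3 * t])
k, tau = curvature_torsion(helix)
print(k[100], tau[100])
```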
Metabolite profiling and quantitative genetics of natural variation for flavonoids in Arabidopsis
Routaboul, Jean-Marc; Dubos, Christian; Beck, Gilles; Marquis, Catherine; Bidzinski, Przemyslaw; Loudet, Olivier; Lepiniec, Loïc
2012-01-01
Little is known about the range and the genetic bases of naturally occurring variation for flavonoids. Using Arabidopsis thaliana seed as a model, the flavonoid content of 41 accessions and two recombinant inbred line (RIL) sets derived from divergent accessions (Cvi-0×Col-0 and Bay-0×Shahdara) were analysed. These accessions and RILs showed mainly quantitative rather than qualitative changes. To dissect the genetic architecture underlying these differences, a quantitative trait locus (QTL) analysis was performed on the two segregating populations. Twenty-two flavonoid QTLs were detected that accounted for 11–64% of the observed trait variations, only one QTL being common to both RIL sets. Sixteen of these QTLs were confirmed and coarsely mapped using heterogeneous inbred families (HIFs). Three genes, namely TRANSPARENT TESTA (TT)7, TT15, and MYB12, were proposed to underlie their variations since the corresponding mutants and QTLs displayed similar specific flavonoid changes. Interestingly, most loci did not co-localize with any gene known to be involved in flavonoid metabolism. This latter result shows that novel functions have yet to be characterized and paves the way for their isolation. PMID:22442426
Shi, Xu; Gao, Weimin; Chao, Shih-hui; Zhang, Weiwen; Meldrum, Deirdre R
2013-03-01
Directly monitoring the stress response of microbes to their environments could be one way to inspect the health of microorganisms themselves, as well as the environments in which the microorganisms live. The ultimate resolution for such an endeavor could be down to a single-cell level. In this study, using the diatom Thalassiosira pseudonana as a model species, we aimed to measure gene expression responses of this organism to various stresses at a single-cell level. We developed a single-cell quantitative real-time reverse transcription-PCR (RT-qPCR) protocol and applied it to determine the expression levels of multiple selected genes under nitrogen, phosphate, and iron depletion stress conditions. The results, for the first time, provided a quantitative measurement of gene expression at single-cell levels in T. pseudonana and demonstrated that significant gene expression heterogeneity was present within the cell population. In addition, different expression patterns between single-cell- and bulk-cell-based analyses were also observed for all genes assayed in this study, suggesting that cell response heterogeneity needs to be taken into consideration in order to obtain accurate information that indicates the environmental stress condition.
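Relative expression from single-cell RT-qPCR Ct values is commonly computed with the 2^-ΔΔCt (Livak) method, normalising first to a reference gene and then to a control condition. The sketch below illustrates that calculation per cell and a simple cell-to-cell heterogeneity summary; the Ct values and the choice of normalisation are hypothetical and may differ from the authors' protocol.

```python
import numpy as np

# Hypothetical Ct values: rows = single cells, columns = (target gene, reference gene).
ct = np.array([
    [24.1, 18.2],
    [27.5, 18.4],
    [23.0, 18.1],
    [26.2, 18.3],
])
ct_control = np.array([25.0, 18.2])   # mean Ct of control-condition cells

# Per-cell 2^-ddCt relative expression.
d_ct = ct[:, 0] - ct[:, 1]                       # normalise to reference gene
dd_ct = d_ct - (ct_control[0] - ct_control[1])   # normalise to control condition
rel_expr = 2.0 ** (-dd_ct)

print("per-cell fold changes:", np.round(rel_expr, 2))
print("bulk-style estimate  :",
      round(2.0 ** (-(d_ct.mean() - (ct_control[0] - ct_control[1]))), 2))
print("heterogeneity (CV)   :", round(rel_expr.std(ddof=1) / rel_expr.mean(), 2))
```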
Patterns and processes: Subaerial lava flow morphologies: A review
NASA Astrophysics Data System (ADS)
Gregg, Tracy K. P.
2017-08-01
Most lava flows have been emplaced away from the watchful eyes of volcanologists, so there is a desire to use solidified lava-flow morphologies to reveal important information about the eruption that formed them. Our current understanding of the relationship between solidified basaltic lava morphology and the responsible eruption and emplacement processes is based on decades of fieldwork, laboratory analyses and simulations, and computer models. These studies have vastly improved our understanding of the complex interactions between the solids, liquids, and gases that comprise cooling lava flows. However, the complex interactions (at millimeter and sub-millimeter scales) between the temperature-dependent abundances of the distinct phases that comprise a lava flow and the final morphology remain challenging to model and to predict. Similarly, the complex behavior of an active pahoehoe flow, although almost ubiquitous on Earth, remains difficult to quantitatively model and precisely predict.
Systemic Analysis Approaches for Air Transportation
NASA Technical Reports Server (NTRS)
Conway, Sheila
2005-01-01
Air transportation system designers have had only limited success using traditional operations research and parametric modeling approaches in their analyses of innovations. They need a systemic methodology for modeling of safety-critical infrastructure that is comprehensive, objective, and sufficiently concrete, yet simple enough to be used with reasonable investment. The methodology must also be amenable to quantitative analysis so issues of system safety and stability can be rigorously addressed. However, air transportation has proven itself an extensive, complex system whose behavior is difficult to describe, let alone predict. There is a wide range of system analysis techniques available, but some are more appropriate for certain applications than others. Specifically in the area of complex system analysis, the literature suggests that both agent-based models and network analysis techniques may be useful. This paper discusses the theoretical basis for each approach in these applications, and explores their historic and potential further use for air transportation analysis.
A Model-Based Approach for the Measurement of Eye Movements Using Image Processing
NASA Technical Reports Server (NTRS)
Sung, Kwangjae; Reschke, Millard F.
1997-01-01
This paper describes a video eye-tracking algorithm which searches for the best fit of the pupil modeled as a circular disk. The algorithm is robust to common image artifacts such as droopy eyelids and light reflections while maintaining the measurement resolution available with the centroid algorithm. The presented algorithm is used to derive the pupil size and center coordinates, and can be combined with iris-tracking techniques to measure ocular torsion. A comparison search method of pupil candidates using pixel coordinate reference lookup tables optimizes the processing requirements for a least-squares fit of the circular disk model. This paper includes quantitative analyses and simulation results for the resolution and the robustness of the algorithm. The algorithm presented in this paper provides a platform for a noninvasive, multidimensional eye measurement system which can be used for clinical and research applications requiring the precise recording of eye movements in three-dimensional space.
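A common building block for fitting a circular-disk pupil model is an algebraic least-squares circle fit to candidate edge pixels. The sketch below shows that step only, under the assumption that edge coordinates have already been extracted; it does not reproduce the paper's lookup-table comparison search or artifact rejection.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit to edge-pixel coordinates.

    Solves x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense and
    converts to centre/radius. Inputs are 1D arrays of candidate pupil-edge
    pixel coordinates (hypothetical; a real pipeline would first reject
    eyelid and reflection artefacts).
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, radius

# Sanity check on synthetic edge points of a circle centred at (120, 80), r = 35.
theta = np.linspace(0, 2 * np.pi, 50)
x = 120 + 35 * np.cos(theta) + np.random.normal(0, 0.5, theta.size)
y = 80 + 35 * np.sin(theta) + np.random.normal(0, 0.5, theta.size)
print(fit_circle(x, y))   # approximately (120, 80, 35)
```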
A Quantitative Cost Effectiveness Model for Web-Supported Academic Instruction
ERIC Educational Resources Information Center
Cohen, Anat; Nachmias, Rafi
2006-01-01
This paper describes a quantitative cost effectiveness model for Web-supported academic instruction. The model was designed for Web-supported instruction (rather than distance learning only) characterizing most of the traditional higher education institutions. It is based on empirical data (Web logs) of students' and instructors' usage…
Assessing Student Openness to Inquiry-Based Learning in Precalculus
ERIC Educational Resources Information Center
Cooper, Thomas; Bailey, Brad; Briggs, Karen; Holliday, John
2017-01-01
The authors have completed a 2-year quasi-experimental study on the use of inquiry-based learning (IBL) in precalculus. This study included six traditional lecture-style courses and seven modified Moore method courses taught by three instructors. Both quantitative and qualitative analyses were used to investigate the attitudes and beliefs of the…
Wignall, Jessica A; Muratov, Eugene; Sedykh, Alexander; Guyton, Kathryn Z; Tropsha, Alexander; Rusyn, Ivan; Chiu, Weihsueh A
2018-05-01
Human health assessments synthesize human, animal, and mechanistic data to produce toxicity values that are key inputs to risk-based decision making. Traditional assessments are data-, time-, and resource-intensive, and they cannot be developed for most environmental chemicals owing to a lack of appropriate data. As recommended by the National Research Council, we propose a solution for predicting toxicity values for data-poor chemicals through development of quantitative structure-activity relationship (QSAR) models. We used a comprehensive database of chemicals with existing regulatory toxicity values from U.S. federal and state agencies to develop quantitative QSAR models. We compared QSAR-based model predictions to those based on high-throughput screening (HTS) assays. QSAR models for noncancer threshold-based values and cancer slope factors had cross-validation-based Q² of 0.25-0.45, mean model errors of 0.70-1.11 log10 units, and applicability domains covering >80% of environmental chemicals. Toxicity values predicted from QSAR models developed in this study were more accurate and precise than those based on HTS assays or mean-based predictions. A publicly accessible web interface to make predictions for any chemical of interest is available at http://toxvalue.org. An in silico tool that can predict toxicity values with an uncertainty of an order of magnitude or less can be used to quickly and quantitatively assess risks of environmental chemicals when traditional toxicity data or human health assessments are unavailable. This tool can fill a critical gap in the risk assessment and management of data-poor chemicals. https://doi.org/10.1289/EHP2998.
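The cross-validated Q² and log10-unit error reported above come from a standard QSAR workflow: descriptors plus a regression learner evaluated by cross-validation. The sketch below is a generic version of that workflow with synthetic data; the descriptor matrix, learner choice, and fold scheme are assumptions and are not the study's actual features or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

# Hypothetical training data: X = chemical descriptor matrix (n_chemicals x
# n_descriptors), y = log10 of existing regulatory toxicity values.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200)

model = RandomForestRegressor(n_estimators=500, random_state=0)
y_cv = cross_val_predict(model, X, y, cv=5)

# Cross-validated Q^2 and mean absolute error in log10 units.
q2 = 1.0 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
mae_log10 = np.mean(np.abs(y - y_cv))
print(f"cross-validated Q^2 = {q2:.2f}, mean error = {mae_log10:.2f} log10 units")

# A final model would be refit on all data; new data-poor chemicals would be
# screened for applicability-domain membership before predicting their values.
model.fit(X, y)
```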
Modeling noisy resonant system response
NASA Astrophysics Data System (ADS)
Weber, Patrick Thomas; Walrath, David Edwin
2017-02-01
In this paper, a theory-based model replicating empirical acoustic resonant signals is presented and studied to understand sources of noise present in acoustic signals. Statistical properties of empirical signals are quantified and a noise amplitude parameter, which models frequency and amplitude-based noise, is created, defined, and presented. This theory-driven model isolates each phenomenon and allows for parameters to be independently studied. Using seven independent degrees of freedom, this model will accurately reproduce qualitative and quantitative properties measured from laboratory data. Results are presented and demonstrate success in replicating qualitative and quantitative properties of experimental data.
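A noise amplitude parameter that drives both frequency- and amplitude-based noise, as described above, can be illustrated by synthesising a damped resonant response. The sketch below is only an illustrative stand-in for the paper's seven-parameter model; the parameter names and the specific noise mechanisms are assumptions.

```python
import numpy as np

def noisy_resonance(t, amp=1.0, f0=5_000.0, zeta=0.02, noise_amp=0.05, seed=0):
    """Damped resonant response with frequency and amplitude noise.

    amp, f0, zeta : nominal amplitude, resonant frequency (Hz), damping ratio.
    noise_amp     : dimensionless noise amplitude parameter scaling both the
                    random frequency jitter and the additive amplitude noise
                    (an illustrative stand-in for the paper's noise parameter).
    """
    rng = np.random.default_rng(seed)
    dt = t[1] - t[0]
    # Frequency-based noise: small random walk in instantaneous frequency.
    f_inst = f0 * (1.0 + noise_amp * np.cumsum(rng.normal(size=t.size)) * np.sqrt(dt))
    phase = 2 * np.pi * np.cumsum(f_inst) * dt
    envelope = amp * np.exp(-zeta * 2 * np.pi * f0 * t)
    # Amplitude-based noise: additive term scaled by the same noise parameter.
    return envelope * np.sin(phase) + noise_amp * amp * rng.normal(size=t.size)

t = np.linspace(0.0, 0.05, 5000)
y = noisy_resonance(t)
```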
McEvoy, Maureen P; Lewis, Lucy K; Luker, Julie
2018-05-11
Dedicated Evidence-Based Practice (EBP) courses are often included in health professional education programs. It is important to understand the effectiveness of this training. This study investigated EBP outcomes in entry-level physiotherapy students from baseline to completion of all EBP training (graduation). Mixed methods with an explanatory sequential design. Physiotherapy students completed two psychometrically-tested health professional EBP instruments at baseline and graduation. The Evidence-Based Practice Profile questionnaire collected self-reported data (Terminology, Confidence, Practice, Relevance, Sympathy), and the Knowledge of Research Evidence Competencies instrument collected objective data (Actual Knowledge). Focus groups with students were conducted at graduation to gain a deeper understanding of the factors impacting changes in students' EBP knowledge, attitudes, behaviour and competency. Descriptive statistics, paired t-tests, 95% CI and effect sizes (ES) were used to examine changes in outcome scores from baseline to graduation. Transcribed focus group data were analysed following a qualitative descriptive approach with thematic analysis. A second stage of merged data analysis for mixed methods studies was undertaken using side-by-side comparisons to explore quantitatively assessed EBP measures with participants' personal perceptions. Data were analysed from 56 participants who completed both instruments at baseline and graduation, and from 21 focus group participants. Large ES were reported across most outcomes: Relevance (ES 2.29, p ≤ 0.001), Practice (1.8, p ≤ 0.001), Confidence (1.67, p ≤ 0.001), Terminology (3.13, p ≤ 0.001) and Actual Knowledge (4.3, p ≤ 0.001). A medium ES was found for Sympathy (0.49, p = 0.008). Qualitative and quantitative findings mostly aligned, except for statistical terminology, where participants' self-reported understanding was at odds with the experiences reported in focus groups. Qualitative findings highlighted the importance of providing relevant context and positive role models for students during EBP training. Following EBP training across an entry-level physiotherapy program, there were qualitative and significant quantitative changes in participants' knowledge and perceptions of EBP. The qualitative and quantitative findings were mainly well-aligned with the exception of the Terminology domain, where the qualitative findings did not support the strength of the effect reported quantitatively. The findings of this study have implications for the timing and content of EBP curricula in entry-level health professional programs.
Pozo, Oscar J; Van Eenoo, Peter; Deventer, Koen; Elbardissy, Hisham; Grimalt, Susana; Sancho, Juan V; Hernandez, Felix; Ventura, Rosa; Delbeke, Frans T
2011-01-17
Triple quadrupole (QqQ), time of flight (TOF) and quadrupole-time of flight (QTOF) analysers have been compared for the detection of anabolic steroids in human urine. Ten anabolic steroids were selected as model compounds based on their ionization and the presence of endogenous interferences. Both qualitative and quantitative analyses were evaluated. QqQ allowed for the detection of all analytes at the minimum required performance limit (MRPL) established by the World Anti-Doping Agency (between 2 and 10 ng mL(-1) in urine). TOF and QTOF approaches were not sensitive enough to detect some of the analytes (3'-hydroxy-stanozolol or the metabolites of boldenone and formebolone) at the established MRPL. Although a suitable accuracy was obtained, the precision was unsatisfactory (RSD typically higher than 20%) for quantitative purposes irrespective of the analyser used. The methods were applied to 30 real samples declared positive for the misuse of boldenone, stanozolol and/or methandienone. Most of the compounds were detected by every technique; however, QqQ was necessary for the detection of some metabolites in a few samples. Finally, the possibility to detect non-target steroids has been explored by the use of TOF and QTOF. The use of this approach revealed that the presence of boldenone and its metabolite in one sample was due to the intake of androsta-1,4,6-triene-3,17-dione. Additionally, the intake of methandienone was confirmed by the post-target detection of a long-term metabolite. Copyright © 2010 Elsevier B.V. All rights reserved.
A unifying theory for genetic epidemiological analysis of binary disease data.
Lipschutz-Powell, Debby; Woolliams, John A; Doeschl-Wilson, Andrea B
2014-02-19
Genetic selection for host resistance offers a desirable complement to chemical treatment to control infectious disease in livestock. Quantitative genetics disease data frequently originate from field studies and are often binary. However, current methods to analyse binary disease data fail to take infection dynamics into account. Moreover, genetic analyses tend to focus on host susceptibility, ignoring potential variation in infectiousness, i.e. the ability of a host to transmit the infection. This stands in contrast to epidemiological studies, which reveal that variation in infectiousness plays an important role in the progression and severity of epidemics. In this study, we aim at filling this gap by deriving an expression for the probability of becoming infected that incorporates infection dynamics and is an explicit function of both host susceptibility and infectiousness. We then validate this expression according to epidemiological theory and by simulating epidemiological scenarios, and explore implications of integrating this expression into genetic analyses. Our simulations show that the derived expression is valid for a range of stochastic genetic-epidemiological scenarios. In the particular case of variation in susceptibility only, the expression can be incorporated into conventional quantitative genetic analyses using a complementary log-log link function (rather than probit or logit). Similarly, if there is moderate variation in both susceptibility and infectiousness, it is possible to use a logarithmic link function, combined with an indirect genetic effects model. However, in the presence of highly infectious individuals, i.e. super-spreaders, the use of any model that is linear in susceptibility and infectiousness causes biased estimates. Thus, in order to identify super-spreaders, novel analytical methods using our derived expression are required. We have derived a genetic-epidemiological function for quantitative genetic analyses of binary infectious disease data, which, unlike current approaches, takes infection dynamics into account and allows for variation in host susceptibility and infectiousness.
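For the susceptibility-only case discussed above, the derived expression reduces to a model with a complementary log-log link, P(infected) = 1 − exp(−exp(η)), which is why that link (rather than probit or logit) fits the infection dynamics. The sketch below is a minimal maximum-likelihood fit of that link to simulated binary records; the covariates, data, and two-coefficient design are hypothetical and do not reproduce the authors' genetic model.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: binary infection outcomes y and a design matrix X whose
# columns could encode fixed effects (e.g. group, exposure level) for hosts.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
eta_true = X @ np.array([-0.5, 0.8])
y = rng.random(300) < 1.0 - np.exp(-np.exp(eta_true))   # cloglog-generated outcomes

def neg_log_lik(beta):
    eta = X @ beta
    p = 1.0 - np.exp(-np.exp(eta))          # complementary log-log inverse link
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=np.zeros(2), method="BFGS")
print("estimated coefficients:", fit.x)
# On this scale an additive effect on the linear predictor acts multiplicatively
# on the host's underlying susceptibility, which is the epidemiological reason
# the cloglog link arises from the probability of escaping infection.
```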
Zeigler, Sara L; Che-Castaldo, Judy P; Neel, Maile C
2013-12-01
Use of population viability analyses (PVAs) in endangered species recovery planning has been met with both support and criticism. Previous reviews promote use of PVA for setting scientifically based, measurable, and objective recovery criteria and recommend improvements to increase the framework's utility. However, others have questioned the value of PVA models for setting recovery criteria and assert that PVAs are more appropriate for understanding relative trade-offs between alternative management actions. We reviewed 258 final recovery plans for 642 plants listed under the U.S. Endangered Species Act to determine the number of plans that used or recommended PVA in recovery planning. We also reviewed 223 publications that describe plant PVAs to assess how these models were designed and whether those designs reflected previous recommendations for improvement of PVAs. Twenty-four percent of listed species had recovery plans that used or recommended PVA. In publications, the typical model was a matrix population model parameterized with ≤5 years of demographic data that did not consider stochasticity, genetics, density dependence, seed banks, vegetative reproduction, dormancy, threats, or management strategies. Population growth rates for different populations of the same species or for the same population at different points in time were often statistically different or varied by >10%. Therefore, PVAs parameterized with underlying vital rates that vary to this degree may not accurately predict recovery objectives across a species' entire distribution or over longer time scales. We assert that PVA, although an important tool as part of an adaptive-management program, can help to determine quantitative recovery criteria only if more long-term data sets that capture spatiotemporal variability in vital rates become available. Lacking this, there is a strong need for viable and comprehensive methods for determining quantitative, science-based recovery criteria for endangered species with minimal data availability. Current and Potential Use of Population Viability Analysis for the Recovery of Plant Species Listed under the U.S. Endangered Species Act. © 2013 Society for Conservation Biology.
NASA Astrophysics Data System (ADS)
Noh, S. J.; Lee, J. H.; Lee, S.; Zhang, Y.; Seo, D. J.
2017-12-01
Hurricane Harvey was one of the most extreme weather events in Texas history and left significant damage in the Houston and adjoining coastal areas. To better understand the relative impacts on urban flooding of the extreme amount and spatial extent of rainfall, unique geography, land use and storm surge, high-resolution water modeling is necessary such that natural and man-made components are fully resolved. In this presentation, we reconstruct the spatiotemporal evolution of inundation during Hurricane Harvey using hyper-resolution modeling and quantitative image reanalysis. The two-dimensional urban flood model used is based on the dynamic wave approximation and 10 m-resolution terrain data, and is forced by radar-based multisensor quantitative precipitation estimates. The model domain includes Buffalo, Brays, Greens and White Oak Bayous in Houston. The model is simulated using hybrid parallel computing. To evaluate dynamic inundation mapping, we combine various qualitative crowdsourced images and video footage with LiDAR-based terrain data.
Chen, Ran; Zhang, Yuntao; Sahneh, Faryad Darabi; Scoglio, Caterina M; Wohlleben, Wendel; Haase, Andrea; Monteiro-Riviere, Nancy A; Riviere, Jim E
2014-09-23
Quantitative characterization of nanoparticle interactions with their surrounding environment is vital for safe nanotechnological development and standardization. A recent quantitative measure, the biological surface adsorption index (BSAI), has demonstrated promising applications in nanomaterial surface characterization and biological/environmental prediction. This paper further advances the approach beyond the application of five descriptors in the original BSAI to address the concentration dependence of the descriptors, enabling better prediction of the adsorption profile and more accurate categorization of nanomaterials based on their surface properties. Statistical analysis on the obtained adsorption data was performed based on three different models: the original BSAI, a concentration-dependent polynomial model, and an infinite dilution model. These advancements in BSAI modeling showed a promising development in the application of quantitative predictive modeling in biological applications, nanomedicine, and environmental safety assessment of nanomaterials.
NMR-based Metabolomics for Cancer Research
Metabolomics is considered as a complementary tool to other omics platforms to provide a snapshot of the cellular biochemistry and physiology taking place at any instant. Metabolomics approaches have been widely used to provide comprehensive and quantitative analyses of the metabo...
The internal structure of ZZ Cet stars using quantitative asteroseismology: The case of R548
NASA Astrophysics Data System (ADS)
Giammichele, N.; Fontaine, G.; Brassard, P.; Charpinet, S.
2014-02-01
We explore quantitatively the low but sufficient sensitivity of oscillation modes to probe both the core composition and the details of the chemical stratification of pulsating white dwarfs. Until recently, applications of asteroseismic methods to pulsating white dwarfs have been few and far between, and have generally suffered from an insufficient exploration of parameter space. To remedy this situation, we apply to white dwarfs the same double-optimization technique that has been used quite successfully in the context of pulsating hot B subdwarfs. Based on the frequency spectrum of the pulsating white dwarf R548, we are able to unravel in a robust way the unique onion-like stratification and the chemical composition of the star. Independent confirmations from both spectroscopic analyses and detailed evolutionary calculations including diffusion provide crucial consistency checks and add to the credibility of the inferred seismic model. More importantly, these results boost our confidence in the reliability of the forward method for sounding white dwarf internal structure with asteroseismology.
Kimball, A M; Wong, K Y; Taneda, K
2005-12-01
When cholera broke out in Mozambique, Kenya, Tanzania and Uganda in 1997, an urgent measure was filed by the European Union with the Sanitary and Phytosanitary Committee of the World Trade Organization, citing the protection of human health, to limit imports of fish products. The authors analysed import data on specified products over time to quantify the trade impact of this measure. Using previous specific trade trends, the authors modelled expected trade flows and compared observed imports with expected imports to calculate the potential cost of lost trade. The conclusion of this analysis was that the impact of European restrictions on fish exports from Mozambique, Kenya, Tanzania and Uganda on the economies of these African countries was at least US$332,217,415 for the years 1998 to 2002. Insights from such quantitative studies will be important in making policy choices under the revised International Health Regulations of the World Health Organization and should inform the discussion about the adoption of these regulations.
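The lost-trade calculation described here (expected flows from pre-restriction trends compared against observed imports) can be reduced to a simple trend extrapolation. The sketch below illustrates that arithmetic only; the import values are hypothetical, and the original analysis used product- and country-specific series rather than a single aggregate.

```python
import numpy as np

# Hypothetical annual import values (million USD) before and after the 1998
# restriction; illustrative only.
years_pre = np.array([1993, 1994, 1995, 1996, 1997])
imports_pre = np.array([41.0, 44.5, 47.9, 52.3, 55.0])
years_post = np.array([1998, 1999, 2000, 2001, 2002])
imports_post = np.array([38.0, 36.5, 40.2, 43.0, 45.5])

# Fit the pre-restriction trend and extrapolate expected imports.
slope, intercept = np.polyfit(years_pre, imports_pre, 1)
expected_post = slope * years_post + intercept

# Lost trade = sum of (expected - observed) over the restriction years.
lost_trade = np.sum(expected_post - imports_post)
print(f"estimated trade impact 1998-2002: {lost_trade:.1f} million USD")
```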
Teaching optical phenomena with Tracker
NASA Astrophysics Data System (ADS)
Rodrigues, M.; Simeão Carvalho, P.
2014-11-01
Since the invention and dissemination of domestic laser pointers, observing optical phenomena is a relatively easy task. Any student can buy a laser and experience at home, in a qualitative way, the reflection, refraction and even diffraction phenomena of light. However, quantitative experiments need instruments of high precision that have a relatively complex setup. Fortunately, nowadays it is possible to analyse optical phenomena in a simple and quantitative way using the freeware video analysis software ‘Tracker’. In this paper, we show the advantages of video-based experimental activities for teaching concepts in optics. We intend to show: (a) how easy the study of such phenomena can be, even at home, because only simple materials are needed, and Tracker provides the necessary measuring instruments; and (b) how we can use Tracker to improve students’ understanding of some optical concepts. We give examples using video modelling to study the laws of reflection, Snell’s laws, focal distances in lenses and mirrors, and diffraction phenomena, which we hope will motivate teachers to implement it in their own classes and schools.
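Once Tracker provides beam coordinates, Snell's law can be checked quantitatively by regressing the sine of the incidence angle against the sine of the refraction angle; the slope estimates the relative refractive index. The angle values below are hypothetical classroom-style measurements, not data from the paper.

```python
import numpy as np

# Hypothetical angles (degrees) read off Tracker's coordinate/protractor tools
# for a light ray entering a transparent block at several incidence angles.
incidence = np.array([10, 20, 30, 40, 50, 60])
refraction = np.array([6.7, 13.3, 19.6, 25.4, 30.7, 35.3])

sin_i = np.sin(np.radians(incidence))
sin_t = np.sin(np.radians(refraction))

# Snell's law: sin(i) = n * sin(t); a through-origin least-squares fit gives n.
n_estimate = np.sum(sin_i * sin_t) / np.sum(sin_t ** 2)
print(f"estimated relative refractive index n = {n_estimate:.2f}")
```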
Kovas, Yulia; Haworth, Claire M. A.; Petrill, Stephen A.; Plomin, Robert
2009-01-01
The genetic and environmental etiologies of 3 aspects of low mathematical performance (math disability) and the full range of variability (math ability) were compared for boys and girls in a sample of 5,348 children age 10 years (members of 2,674 pairs of same-sex and opposite-sex twins) from the United Kingdom (UK). The measures, which we developed for Web-based testing, included problems from 3 domains of mathematics taught as part of the UK National Curriculum. Using quantitative genetic model-fitting analyses, similar results were found for math disabilities and abilities for all 3 measures: Moderate genetic influence and environmental influence were mainly due to nonshared environmental factors that were unique to the individual, with little influence from shared environment. No sex differences were found in the etiologies of math abilities and disabilities. We conclude that low mathematical performance is the quantitative extreme of the same genetic and environmental factors responsible for variation throughout the distribution. PMID:18064980
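The study itself used full quantitative genetic model fitting, but the logic of decomposing variance from twin data can be illustrated with Falconer's shortcut based on monozygotic and dizygotic twin correlations. The correlations below are hypothetical and are shown only to make the A/C/E decomposition concrete.

```python
# Falconer's decomposition from twin correlations (illustrative values only;
# the study fitted full quantitative genetic models, not this shortcut).
r_mz = 0.62   # hypothetical monozygotic twin correlation for a maths measure
r_dz = 0.38   # hypothetical dizygotic twin correlation

a2 = 2 * (r_mz - r_dz)      # additive genetic variance (heritability)
c2 = r_mz - a2              # shared-environment variance
e2 = 1 - r_mz               # nonshared environment (plus measurement error)

print(f"A = {a2:.2f}, C = {c2:.2f}, E = {e2:.2f}")
# A pattern of moderate A, small C and substantial E matches the abstract's
# conclusion that the environmental term is mainly nonshared.
```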
Bayesian B-spline mapping for dynamic quantitative traits.
Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong
2012-04-01
Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expressions in the RR framework, B-splines have proven successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms the interval mapping based on the maximum likelihood; (2) for the simulated dataset with a complicated growth curve simulated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered and (3) for the simulated dataset using Legendre polynomials, the Bayesian B-spline mapping can find the same QTLs as those identified by Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-spline in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.
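In the RR framework described above, a time-varying QTL effect is represented as a linear combination of B-spline basis functions evaluated at the measurement ages. The sketch below constructs such a cubic basis with scipy; the knot placement and coefficient values are hypothetical, and the Bayesian shrinkage machinery of the method is not shown.

```python
import numpy as np
from scipy.interpolate import BSpline

degree = 3
interior_knots = np.array([0.3, 0.6])                      # hypothetical placement
knots = np.concatenate([[0, 0, 0, 0], interior_knots, [1, 1, 1, 1]])
n_basis = len(knots) - degree - 1                           # here: 6 basis functions

times = np.linspace(0, 1, 50)                               # standardised ages

# Evaluate each basis function by giving BSpline a unit coefficient vector.
basis = np.column_stack([
    BSpline(knots, np.eye(n_basis)[j], degree)(times) for j in range(n_basis)
])

# A time-varying QTL effect is then beta(t) = basis @ coeffs, where the
# coefficients would receive shrinkage priors in the Bayesian analysis.
coeffs = np.array([0.0, 0.4, 0.9, 1.1, 0.8, 0.3])           # hypothetical values
qtl_effect_over_time = basis @ coeffs
```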
The application of remote sensing to the development and formulation of hydrologic planning models
NASA Technical Reports Server (NTRS)
Castruccio, P. A.; Loats, H. L., Jr.; Fowler, T. R.
1976-01-01
A hydrologic planning model is developed based on remotely sensed inputs. Data from LANDSAT 1 are used to supply the model's quantitative parameters and coefficients. The use of LANDSAT data as information input to all categories of hydrologic models requiring quantitative surface parameters for their effective functioning is also investigated.
Nursing students' evaluation of quality indicators during learning in clinical practice.
Jansson, Inger; Ene, Kerstin W
2016-09-01
A supportive clinical learning environment is important for nursing students' learning. In this study, a contract between a county and a university involving a preceptor model of clinical education for nursing students is described. The aim of this study was to describe nursing students' clinical education based on quality indicators and to describe the students' experiences of what facilitated or hindered the learning process during their clinical practice. During autumn 2012 and spring 2013, 269 student evaluations with quantitative and qualitative answers were filled out anonymously. Quantitative data from the questionnaires concerning the quality indicators Administration/information, Assessments/examinations and Reflection were processed to generate descriptive statistics that revealed gaps between what the preceptor model demands and what the students reported. The answers from the qualitative questions concerning the quality indicator Learning were analysed using content analysis. Four categories emerged: Independence and responsibility, continuity of learning, time, and the competence and attitudes of the staff. The study underlines that reflection, continuity, communication and feedback were important for the students' learning process, whereas heavy workload among staff and being supervised by many different preceptors were experienced as stressful and hindering by students. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Local Vision on Soil Hydrology (John Dalton Medal Lecture)
NASA Astrophysics Data System (ADS)
Roth, K.
2012-04-01
After looking back briefly at some research trails of the past decades, and touching on the role of soils in our environmental machinery, a vision on the future of soil hydrology is offered. It is local in the sense of being based on limited experience as well as in the sense of focussing on local spatial scales, from 1 m to 1 km. Cornerstones of this vision are (i) rapid developments of quantitative observation technology, illustrated with the example of ground-penetrating radar (GPR), and (ii) the availability of ever more powerful compute facilities, which allow increasingly complicated model representations to be simulated in unprecedented detail. Together, they open a powerful and flexible approach to the quantitative understanding of soil hydrology where two lines are fitted: (i) potentially diverse measurements of the system of interest and their analysis and (ii) a comprehensive model representation, including architecture, material properties, forcings, and potentially unknown aspects, together with the same analysis as for (i). This approach pushes traditional inversion to operate on analyses, not on the underlying state variables, and to become flexible with respect to architecture and unknown aspects. The approach will be demonstrated for simple situations at test sites.
Physical activity attitudes, intentions and behaviour among 18-25 year olds: a mixed method study.
Poobalan, Amudha S; Aucott, Lorna S; Clarke, Amanda; Smith, W Cairns S
2012-08-10
Young people (18-25 years) during the adolescence/adulthood transition are vulnerable to weight gain and notoriously hard to reach. Despite increased levels of overweight/obesity in this age group, physical activity behaviour, a major contributor to obesity, is poorly understood. The purpose of this study was to explore physical activity (PA) behaviour among 18-25 year olds with influential factors including attitudes, motivators and barriers. An explanatory mixed method study design, based on health Behaviour Change Theories, was used. Those at university/college and in the community, including those Not in Education, Employment or Training (NEET), were included. An initial self-reported quantitative questionnaire survey underpinned by the Theory of Planned Behaviour and Social Cognitive Theory was conducted. 1313 questionnaires were analysed. Results from this were incorporated into a qualitative phase also grounded in these theories. Seven focus groups were conducted among similar young people, varying in education and socioeconomic status. Exploratory univariate analysis was followed by multi-stage modelling to analyse the quantitative data. 'Framework Analysis' was used to analyse the focus groups. Only 28% of 18-25 year olds achieved recommended levels of PA, which decreased with age. Self-reported overweight/obesity prevalence was 22%, increasing with age, particularly in males. Based on the statistical modelling, positive attitudes toward PA were strong predictors of physical activity and were associated with being physically active and less sedentary. However, strong intentions to exercise were not associated with actual behaviour. Interactive discussions through focus groups unravelled attitudes and barriers influencing PA behaviour. Doing PA to feel good and to enjoy themselves was more important for young people than the common assumptions of 'winning' and 'pleasing others'. Further, this age group saw traditional health promotion messages as 'empty', and 'fear of their future health' was not a motivating factor to change current behaviour. 18-25 year olds are a difficult group to reach and have low levels of PA. Factors such as 'enjoyment', 'appearance' and 'feeling good' were deemed important by this specific age group. A targeted intervention incorporating these crucial elements should be developed to improve and sustain PA levels.
Exploring and Harnessing Haplotype Diversity to Improve Yield Stability in Crops.
Qian, Lunwen; Hickey, Lee T; Stahl, Andreas; Werner, Christian R; Hayes, Ben; Snowdon, Rod J; Voss-Fels, Kai P
2017-01-01
In order to meet future food, feed, fiber, and bioenergy demands, global yields of all major crops need to be increased significantly. At the same time, the increasing frequency of extreme weather events such as heat and drought necessitates improvements in the environmental resilience of modern crop cultivars. Achieving sustainable yield increases implies rapid improvement of quantitative traits with a very complex genetic architecture and strong environmental interaction. Latest advances in genome analysis technologies today provide molecular information at an ultrahigh resolution, revolutionizing crop genomic research, and paving the way for advanced quantitative genetic approaches. These include highly detailed assessment of population structure and genotypic diversity, facilitating the identification of selective sweeps and signatures of directional selection, dissection of genetic variants that underlie important agronomic traits, and genomic selection (GS) strategies that not only consider major-effect genes. Single-nucleotide polymorphism (SNP) markers today represent the genotyping system of choice for crop genetic studies because they occur abundantly in plant genomes and are easy to detect. However, SNPs are typically biallelic, so their information content is low compared to that of multiallelic markers, limiting the resolution at which SNP-trait relationships can be delineated. An efficient way to overcome this limitation is to construct haplotypes based on linkage disequilibrium, one of the most important features influencing genetic analyses of crop genomes. Here, we give an overview of the latest advances in genomics-based haplotype analyses in crops, highlighting their importance in the context of polyploidy and genome evolution, linkage drag, and co-selection. We provide examples of how haplotype analyses can complement well-established quantitative genetics frameworks, such as quantitative trait analysis and GS, ultimately providing an effective tool to equip modern crops with environment-tailored characteristics.
Skipper, Jeremy I; Devlin, Joseph T; Lametti, Daniel R
2017-01-01
Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region and network based neuroimaging meta-analyses and a novel text mining method to describe relative contributions of nodes in distributed brain networks. Supporting the qualitative review, results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, covertly and overtly producing speech, and the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions are ubiquitously active and form multiple networks whose topologies dynamically change with listening context. Results are inconsistent with motor and acoustic only models of speech perception and classical and contemporary dual-stream models of the organization of language and the brain. Instead, results are more consistent with complex network models in which multiple speech production related networks and subnetworks dynamically self-organize to constrain interpretation of indeterminant acoustic patterns as listening context requires. Copyright © 2016. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia
2017-05-01
A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to find the least squares between the measured and calculated values over time, which may encounter some problems such as the overfitting of model parameters and a lack of reproducibility, especially when handling noisy data or error data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model directly dealing with noisy data but not trying to smooth the noise in the image. Also, due to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which could find a balance between fitting to historical data and to the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.
The Matching Relation and Situation-Specific Bias Modulation in Professional Football Play Selection
Stilling, Stephanie T; Critchfield, Thomas S
2010-01-01
The utility of a quantitative model depends on the extent to which its fitted parameters vary systematically with environmental events of interest. Professional football statistics were analyzed to determine whether play selection (passing versus rushing plays) could be accounted for with the generalized matching equation, and in particular whether variations in play selection across game situations would manifest as changes in the equation's fitted parameters. Statistically significant changes in bias were found for each of five types of game situations; no systematic changes in sensitivity were observed. Further analyses suggested relationships between play selection bias and both turnover probability (which can be described in terms of punishment) and yards-gained variance (which can be described in terms of variable-magnitude reinforcement schedules). The present investigation provides a useful demonstration of association between face-valid, situation-specific effects in a domain of everyday interest, and a theoretically important term of a quantitative model of behavior. Such associations, we argue, are an essential focus in translational extensions of quantitative models. PMID:21119855
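The generalized matching equation referred to above is commonly written as log(B1/B2) = a·log(R1/R2) + log b, with sensitivity a and bias log b estimated by linear regression of the log behaviour ratio on the log reinforcer ratio. The sketch below fits that equation to hypothetical per-game aggregates; the play and yardage numbers are invented for illustration and the study's actual partitioning by game situation is only indicated in a comment.

```python
import numpy as np

# Hypothetical per-game aggregates: B = plays selected of each type,
# R = yards gained ("reinforcers") from each option.
pass_plays = np.array([38, 41, 29, 35, 44, 31])
rush_plays = np.array([22, 19, 30, 26, 17, 28])
pass_yards = np.array([260, 295, 180, 230, 310, 200])
rush_yards = np.array([95, 80, 130, 110, 70, 125])

x = np.log10(pass_yards / rush_yards)          # log reinforcer ratio
y = np.log10(pass_plays / rush_plays)          # log behaviour (play) ratio

# Generalized matching: y = a*x + log(b); slope = sensitivity, intercept = bias.
sensitivity, log_bias = np.polyfit(x, y, 1)
print(f"sensitivity a = {sensitivity:.2f}, bias log b = {log_bias:.2f}")
# Repeating the fit separately for data partitioned by game situation
# (e.g. down, field position) would reveal situation-specific bias shifts
# of the kind the study reports.
```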
Rocha, Ana Cristina; Duarte, Cidália
2015-02-01
To share Portugal's experience with school-based sexuality education, and to describe its implementation at a local level, following an ecological model and using a mixed methodology approach. The study also examines the impact of the latest policies put into effect, identifying potential weaknesses and strengths affecting the effectiveness of sexuality education enforcement. A representative sample of 296 schools in Portugal was analysed. Teachers representing the school completed a questionnaire and were asked to share any kind of official document from their sexuality education project (such as curriculum content). A subsample of these documents was analysed by two coders. Quantitative analysis was carried out using descriptive statistics. The majority of Portuguese schools delivered sexuality education, in line with Portuguese technical guidelines and international recommendations. There were common procedures in planning, implementation and evaluation of sexuality education. Some strengths and weaknesses were identified. Results highlighted the impact of the various systems on the planning, enforcement and evaluation of sexuality education in school. The latest policies introduced valuable changes in school-based sexuality education. A way of assessing effectiveness of sexuality education is still needed.
Assessing model uncertainty using hexavalent chromium and ...
Introduction: The National Research Council recommended quantitative evaluation of uncertainty in effect estimates for risk assessment. This analysis considers uncertainty across model forms and model parameterizations with hexavalent chromium [Cr(VI)] and lung cancer mortality as an example. The objective of this analysis is to characterize model uncertainty by evaluating the variance in estimates across several epidemiologic analyses. Methods: This analysis compared 7 publications analyzing two different chromate production sites in Ohio and Maryland. The Ohio cohort consisted of 482 workers employed from 1940-72, while the Maryland site employed 2,357 workers from 1950-74. Cox and Poisson models were the only model forms considered by study authors to assess the effect of Cr(VI) on lung cancer mortality. All models adjusted for smoking and included a 5-year exposure lag; however, other latency periods and model covariates such as age and race were considered. Published effect estimates were standardized to the same units and normalized by their variances to produce a standardized metric to compare variability in estimates across and within model forms. A total of 7 similarly parameterized analyses were considered across model forms, and 23 analyses with alternative parameterizations were considered within model form (14 Cox; 9 Poisson). Results: Across Cox and Poisson model forms, adjusted cumulative exposure coefficients for 7 similar analyses ranged from 2.47
Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and...
NASA Astrophysics Data System (ADS)
Qiu, Zeyang; Liang, Wei; Wang, Xue; Lin, Yang; Zhang, Meng
2017-05-01
As an important part of the national energy supply system, natural gas transmission pipelines can cause serious environmental pollution and loss of life and property in the event of an accident. Third-party damage is one of the most significant causes of accidents in natural gas pipeline systems, so it is very important to establish an effective quantitative risk assessment model of third-party damage in order to reduce the number of gas pipeline operation accidents. Because third-party damage accidents are characterized by diversity, complexity and uncertainty, this paper establishes a quantitative risk assessment model of third-party damage based on the Analytic Hierarchy Process (AHP) and Fuzzy Comprehensive Evaluation (FCE). First, the risk sources of third-party damage are identified; the weights of the factors are then determined via an improved AHP, and finally the importance of each factor is calculated with the fuzzy comprehensive evaluation model. The results show that the quantitative risk assessment model is suitable for the third-party damage of natural gas pipelines, and improvement measures can be put forward to avoid accidents based on the importance of each factor.
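The AHP-plus-FCE workflow described above can be sketched as follows; the pairwise-comparison matrix and fuzzy membership matrix are invented placeholders rather than the paper's values.

```python
# Hedged sketch of AHP weighting followed by fuzzy comprehensive evaluation (FCE).
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector weights of a reciprocal pairwise-comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

# Example: 3 third-party-damage risk factors compared pairwise (hypothetical).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
W = ahp_weights(A)

# FCE: membership of each factor in 4 risk grades (rows sum to 1, hypothetical).
R = np.array([[0.1, 0.3, 0.4, 0.2],
              [0.2, 0.4, 0.3, 0.1],
              [0.4, 0.4, 0.1, 0.1]])
B = W @ R                 # fuzzy evaluation vector over the risk grades
risk_grade = B.argmax()   # grade with the highest combined membership
```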
A statistical framework for protein quantitation in bottom-up MS-based proteomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karpievitch, Yuliya; Stanley, Jeffrey R.; Taverner, Thomas
2009-08-15
Motivation: Quantitative mass spectrometry-based proteomics requires protein-level estimates and confidence measures. Challenges include the presence of low-quality or incorrectly identified peptides and widespread, informative, missing data. Furthermore, models are required for rolling peptide-level information up to the protein level. Results: We present a statistical model for protein abundance in terms of peptide peak intensities, applicable to both label-based and label-free quantitation experiments. The model allows for both random and censoring missingness mechanisms and provides naturally for protein-level estimates and confidence measures. The model is also used to derive automated filtering and imputation routines. Three LC-MS datasets are used to illustrate the methods. Availability: The software has been made available in the open-source proteomics platform DAnTE (Polpitiya et al. (2008)) (http://omics.pnl.gov/software/). Contact: adabney@stat.tamu.edu
Plant leaf chlorophyll content retrieval based on a field imaging spectroscopy system.
Liu, Bo; Yue, Yue-Min; Li, Ru; Shen, Wen-Jing; Wang, Ke-Lin
2014-10-23
A field imaging spectrometer system (FISS; 380-870 nm and 344 bands) was designed for agriculture applications. In this study, FISS was used to gather spectral information from soybean leaves. The chlorophyll content was retrieved using a multiple linear regression (MLR), partial least squares (PLS) regression and support vector machine (SVM) regression. Our objective was to verify the performance of FISS in a quantitative spectral analysis through the estimation of chlorophyll content and to determine a proper quantitative spectral analysis method for processing FISS data. The results revealed that the derivative reflectance was a more sensitive indicator of chlorophyll content and could extract content information more efficiently than the spectral reflectance, which is more significant for FISS data compared to ASD (analytical spectral devices) data, reducing the corresponding RMSE (root mean squared error) by 3.3%-35.6%. Compared with the spectral features, the regression methods had smaller effects on the retrieval accuracy. A multivariate linear model could be the ideal model to retrieve chlorophyll information with a small number of significant wavelengths used. The smallest RMSE of the chlorophyll content retrieved using FISS data was 0.201 mg/g, a relative reduction of more than 30% compared with the RMSE based on a non-imaging ASD spectrometer, which represents a high estimation accuracy compared with the mean chlorophyll content of the sampled leaves (4.05 mg/g). Our study indicates that FISS could obtain both spectral and spatial detailed information of high quality. Its image-spectrum-in-one merit promotes the good performance of FISS in quantitative spectral analyses, and it can potentially be widely used in the agricultural sector.
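As a rough illustration of the processing chain described (derivative reflectance followed by multivariate regression), the sketch below computes first-derivative spectra and fits a PLS model; the band range matches the FISS description, but the data and number of latent variables are hypothetical.

```python
# Sketch: first-derivative reflectance spectra + PLS regression of chlorophyll.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def derivative_spectra(reflectance, wavelengths):
    """First derivative of reflectance with respect to wavelength, per sample."""
    return np.gradient(reflectance, wavelengths, axis=1)

# Placeholder data: 60 leaves x 344 bands, chlorophyll in mg/g (made up).
rng = np.random.default_rng(0)
wavelengths = np.linspace(380, 870, 344)
reflectance = rng.random((60, 344))
chlorophyll = rng.uniform(2.0, 6.0, 60)

X = derivative_spectra(reflectance, wavelengths)
pls = PLSRegression(n_components=8).fit(X, chlorophyll)
pred = pls.predict(X).ravel()
rmse = np.sqrt(np.mean((chlorophyll - pred) ** 2))   # calibration RMSE
```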
Hou, Ying; Zhou, Shudong; Burkhard, Benjamin; Müller, Felix
2014-08-15
One focus of ecosystem service research is the connection between biodiversity, ecosystem services and human well-being as well as the socioeconomic influences on them. Despite existing investigations, exact impacts from the human system on the dynamics of biodiversity, ecosystem services and human well-being are still uncertain because of the insufficiency of the respective quantitative analyses. Our research aims are discerning the socioeconomic influences on biodiversity, ecosystem services and human well-being and demonstrating mutual impacts between these items. We propose a DPSIR framework coupling ecological integrity, ecosystem services as well as human well-being and suggest DPSIR indicators for the case study area Jiangsu, China. Based on available statistical and surveying data, we revealed the factors significantly impacting biodiversity, ecosystem services and human well-being in the research area through factor analysis and correlation analysis, using the 13 prefecture-level cities of Jiangsu as samples. The results show that urbanization and industrialization in the urban areas have predominant positive influences on regional biodiversity, agricultural productivity and tourism services as well as rural residents' living standards. Additionally, the knowledge, technology and finance inputs for agriculture also have generally positive impacts on these system components. Concerning regional carbon storage, non-cropland vegetation cover obviously plays a significant positive role. Contrarily, the expansion of farming land and the increase of total food production are two important negative influential factors of biodiversity, ecosystem's food provisioning service capacity, regional tourism income and the well-being of the rural population. Our study provides a promising approach based on the DPSIR model to quantitatively capture the socioeconomic influential factors of biodiversity, ecosystem services and human well-being for human-environmental systems at regional scales. Copyright © 2014 Elsevier B.V. All rights reserved.
Retrospective Analysis of a Classical Biological Control Programme
USDA-ARS's Scientific Manuscript database
1. Classical biological control has been a key technology in the management of invasive arthropod pests globally for over 120 years, yet rigorous quantitative evaluations of programme success or failure are rare. Here, I used life table and matrix model analyses, and life table response experiments ...
Qiao, Xue; He, Wen-ni; Xiang, Cheng; Han, Jian; Wu, Li-jun; Guo, De-an; Ye, Min
2011-01-01
Spirodela polyrrhiza (L.) Schleid. is a traditional Chinese herbal medicine for the treatment of influenza. Despite its wide use in Chinese medicine, no report on quality control of this herb is available so far. To establish qualitative and quantitative analytical methods by high-performance liquid chromatography (HPLC) coupled with mass spectrometry (MS) for the quality control of S. polyrrhiza. The methanol extract of S. polyrrhiza was analysed by HPLC/ESI-MS(n). Flavonoids were identified by comparing with reference standards or according to their MS(n) (n = 2-4) fragmentation behaviours. Based on LC/MS data, a standardised HPLC fingerprint was established by analysing 15 batches of commercial herbal samples. Furthermore, quantitative analysis was conducted by determining five major flavonoids, namely luteolin 8-C-glucoside, apigenin 8-C-glucoside, luteolin 7-O-glucoside, apigenin 7-O-glucoside and luteolin. A total of 18 flavonoids were identified by LC/MS, and 14 of them were reported from this herb for the first time. The HPLC fingerprints contained 10 common peaks, and could differentiate good quality batches from counterfeits. The total contents of five major flavonoids in S. polyrrhiza varied significantly from 4.28 to 19.87 mg/g. Qualitative LC/MS and quantitative HPLC analytical methods were established for the comprehensive quality control of S. polyrrhiza. Copyright © 2011 John Wiley & Sons, Ltd.
Miao, Xinyang; Li, Hao; Bao, Rima; Feng, Chengjing; Wu, Hang; Zhan, Honglei; Li, Yizhang; Zhao, Kun
2017-02-01
Understanding the geological units of a reservoir is essential to the development and management of the resource. In this paper, drill cuttings from several depths from an oilfield were studied using terahertz time domain spectroscopy (THz-TDS). Cluster analysis (CA) and principal component analysis (PCA) were employed to classify and analyze the cuttings. The cuttings were clearly classified based on CA and PCA methods, and the results were in agreement with the lithology. Moreover, calcite and dolomite have stronger absorption of a THz pulse than any other minerals, based on an analysis of the PC1 scores. Quantitative analyses of minor minerals were also realized by building a series of linear and non-linear models between contents and PC2 scores. The results prove THz technology to be a promising means for determining reservoir lithology as well as other properties, which will be a significant supplementary method in oil fields.
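A minimal sketch of the CA/PCA pipeline applied to such spectra might look as follows; the spectra, cluster count and calibration data are placeholders, not the paper's measurements.

```python
# Sketch: PCA scores + hierarchical cluster analysis of THz absorption spectra.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
spectra = rng.random((20, 300))          # 20 cuttings x 300 THz frequency points

scores = PCA(n_components=2).fit_transform(spectra)   # PC1/PC2 scores
Z = linkage(scores, method="ward")                     # cluster analysis (CA)
labels = fcluster(Z, t=3, criterion="maxclust")        # e.g. 3 lithology groups

# A linear calibration of a minor-mineral content against PC2 scores (placeholder):
pc2 = scores[:, 1]
content = rng.uniform(0, 10, 20)
slope, intercept = np.polyfit(pc2, content, 1)
```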
NASA Astrophysics Data System (ADS)
Tsai, Chun-Yen; Jack, Brady Michael; Huang, Tai-Chu; Yang, Jin-Tan
2012-08-01
This study investigated how the instruction of argumentation skills could be promoted by using an online argumentation system. This system, entitled 'Cognitive Apprenticeship Web-based Argumentation' (CAWA), was based on the cognitive apprenticeship model. One hundred eighty-nine fifth grade students took part in this study. A quasi-experimental design was adopted, and qualitative and quantitative analyses were used to evaluate the effectiveness of this online system in measuring students' progress in learning argumentation. The results of this study showed that different teaching strategies had effects on students' use of argumentation in the topics of daily life and the concept of 'vision'. When the CAWA system was employed during the instruction and practice of argumentation on these two topics, the students' argumentation performance improved. Suggestions on how the CAWA system could be used to enhance the instruction of argumentation skills in science education were also discussed.
Classification of volcanic ash particles using a convolutional neural network and probability.
Shoji, Daigo; Noguchi, Rina; Otsuki, Shizuka; Hino, Hideitsu
2018-05-25
Analyses of volcanic ash are typically performed either by qualitatively classifying ash particles by eye or by quantitatively parameterizing their shape and texture. While complex shapes can be classified through qualitative analyses, the results are subjective due to the difficulty of categorizing complex shapes into a single class. Although quantitative analyses are objective, selection of shape parameters is required. Here, we applied a convolutional neural network (CNN) for the classification of volcanic ash. First, we defined four basal particle shapes (blocky, vesicular, elongated, rounded) generated by different eruption mechanisms (e.g., brittle fragmentation), and then trained the CNN using particles composed of only one basal shape. The CNN could recognize the basal shapes with over 90% accuracy. Using the trained network, we classified ash particles composed of multiple basal shapes based on the output of the network, which can be interpreted as a mixing ratio of the four basal shapes. Clustering of samples by the averaged probabilities and the intensity is consistent with the eruption type. The mixing ratio output by the CNN can be used to quantitatively classify complex shapes in nature without forcing particles into a single category and without the need for shape parameters, which may lead to a new taxonomy.
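A small CNN of the kind described, whose softmax output over the four basal shapes is read as a mixing ratio, could be sketched as below; the input size, layer sizes and training calls are assumptions, not the authors' architecture.

```python
# Hedged sketch (not the published network): 4-way softmax CNN for ash particles.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_ash_cnn(input_shape=(64, 64, 1), n_classes=4):
    model = models.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),  # read as a mixing ratio
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_ash_cnn()
# model.fit(train_images, train_onehot_labels, epochs=10)  # single-shape training set
# mixing_ratio = model.predict(new_particle_images)        # probability per basal shape
```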
Halogen Bonding versus Hydrogen Bonding: A Molecular Orbital Perspective
Wolters, Lando P; Bickelhaupt, F Matthias
2012-01-01
We have carried out extensive computational analyses of the structure and bonding mechanism in trihalides DX⋅⋅⋅A− and the analogous hydrogen-bonded complexes DH⋅⋅⋅A− (D, X, A=F, Cl, Br, I) using relativistic density functional theory (DFT) at zeroth-order regular approximation ZORA-BP86/TZ2P. One purpose was to obtain a set of consistent data from which reliable trends in structure and stability can be inferred over a large range of systems. The main objective was to achieve a detailed understanding of the nature of halogen bonds, how they resemble, and also how they differ from, the better understood hydrogen bonds. Thus, we present an accurate physical model of the halogen bond based on quantitative Kohn–Sham molecular orbital (MO) theory, energy decomposition analyses (EDA) and Voronoi deformation density (VDD) analyses of the charge distribution. It appears that the halogen bond in DX⋅⋅⋅A− arises not only from classical electrostatic attraction but also receives substantial stabilization from HOMO–LUMO interactions between the lone pair of A− and the σ* orbital of D–X. PMID:24551497
Identifying gene networks underlying the neurobiology of ethanol and alcoholism.
Wolen, Aaron R; Miles, Michael F
2012-01-01
For complex disorders such as alcoholism, identifying the genes linked to these diseases and their specific roles is difficult. Traditional genetic approaches, such as genetic association studies (including genome-wide association studies) and analyses of quantitative trait loci (QTLs) in both humans and laboratory animals already have helped identify some candidate genes. However, because of technical obstacles, such as the small impact of any individual gene, these approaches only have limited effectiveness in identifying specific genes that contribute to complex diseases. The emerging field of systems biology, which allows for analyses of entire gene networks, may help researchers better elucidate the genetic basis of alcoholism, both in humans and in animal models. Such networks can be identified using approaches such as high-throughput molecular profiling (e.g., through microarray-based gene expression analyses) or strategies referred to as genetical genomics, such as the mapping of expression QTLs (eQTLs). Characterization of gene networks can shed light on the biological pathways underlying complex traits and provide the functional context for identifying those genes that contribute to disease development.
NASA Astrophysics Data System (ADS)
Nijzink, Remko C.; Samaniego, Luis; Mai, Juliane; Kumar, Rohini; Thober, Stephan; Zink, Matthias; Schäfer, David; Savenije, Hubert H. G.; Hrachowitz, Markus
2016-03-01
Heterogeneity of landscape features like terrain, soil, and vegetation properties affects the partitioning of water and energy. However, it remains unclear to what extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and the robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated into the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics, (2) the application of semi-quantitative, expert-knowledge-based model constraints reduces model uncertainty, and whether (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge-based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidean distance to the optimal model, used as an overall measure of model performance with respect to the individual signatures, the model improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13 %. The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 and 19 %, respectively, compared to the base case of the unconstrained mHM. Most significant improvements in signature representations were, in particular, achieved for low flow statistics. The application of prior semi-quantitative constraints further improved the partitioning between runoff and evaporative fluxes. In addition, it was shown that suitable semi-quantitative prior constraints in combination with the transfer-function-based regularization approach of mHM can be beneficial for spatial model transferability as the Euclidean distances for the signatures improved on average by 2 %. The effect of semi-quantitative prior constraints combined with topography-guided sub-grid heterogeneity on transferability showed a more variable picture of improvements and deteriorations, but most improvements were observed for low flow statistics.
USDA-ARS's Scientific Manuscript database
Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and human health effect...
Humphries, Stephen M; Yagihashi, Kunihiro; Huckleberry, Jason; Rho, Byung-Hak; Schroeder, Joyce D; Strand, Matthew; Schwarz, Marvin I; Flaherty, Kevin R; Kazerooni, Ella A; van Beek, Edwin J R; Lynch, David A
2017-10-01
Purpose To evaluate associations between pulmonary function and both quantitative analysis and visual assessment of thin-section computed tomography (CT) images at baseline and at 15-month follow-up in subjects with idiopathic pulmonary fibrosis (IPF). Materials and Methods This retrospective analysis of preexisting anonymized data, collected prospectively between 2007 and 2013 in a HIPAA-compliant study, was exempt from additional institutional review board approval. The extent of lung fibrosis at baseline inspiratory chest CT in 280 subjects enrolled in the IPF Network was evaluated. Visual analysis was performed by using a semiquantitative scoring system. Computer-based quantitative analysis included CT histogram-based measurements and a data-driven textural analysis (DTA). Follow-up CT images in 72 of these subjects were also analyzed. Univariate comparisons were performed by using Spearman rank correlation. Multivariate and longitudinal analyses were performed by using a linear mixed model approach, in which models were compared by using asymptotic χ² tests. Results At baseline, all CT-derived measures showed moderate significant correlation (P < .001) with pulmonary function. At follow-up CT, changes in DTA scores showed significant correlation with changes in both forced vital capacity percentage predicted (ρ = -0.41, P < .001) and diffusing capacity for carbon monoxide percentage predicted (ρ = -0.40, P < .001). Asymptotic χ² tests showed that inclusion of DTA score significantly improved fit of both baseline and longitudinal linear mixed models in the prediction of pulmonary function (P < .001 for both). Conclusion When compared with semiquantitative visual assessment and CT histogram-based measurements, DTA score provides additional information that can be used to predict diminished function. Automatic quantification of lung fibrosis at CT yields an index of severity that correlates with visual assessment and functional change in subjects with IPF. © RSNA, 2017.
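The statistical comparisons reported (Spearman correlation and nested linear mixed models compared with asymptotic χ² likelihood-ratio tests) can be sketched as follows; the column names and model structure are assumptions, not the study's code.

```python
# Sketch: Spearman correlation and a likelihood-ratio comparison of nested
# linear mixed models, using assumed column names in a pandas DataFrame.
import pandas as pd
from scipy.stats import spearmanr, chi2
import statsmodels.formula.api as smf

def compare_models(df):
    """df columns (assumed): subject, fvc_pct_pred, visual_score, dta_score."""
    rho, p = spearmanr(df["dta_score"], df["fvc_pct_pred"])
    base = smf.mixedlm("fvc_pct_pred ~ visual_score", df,
                       groups=df["subject"]).fit(reml=False)
    full = smf.mixedlm("fvc_pct_pred ~ visual_score + dta_score", df,
                       groups=df["subject"]).fit(reml=False)
    lr = 2 * (full.llf - base.llf)         # asymptotic chi-square statistic
    p_lr = chi2.sf(lr, df=1)               # one extra parameter
    return rho, p, lr, p_lr
```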
Molecular design of new aggrecanases-2 inhibitors.
Shan, Zhi Jie; Zhai, Hong Lin; Huang, Xiao Yan; Li, Li Na; Zhang, Xiao Yun
2013-10-01
Aggrecanases-2 is a very important potential drug target for the treatment of osteoarthritis. In this study, a series of known aggrecanases-2 inhibitors was analyzed by the technologies of three-dimensional quantitative structure-activity relationships (3D-QSAR) and molecular docking. Two 3D-QSAR models, based on the comparative molecular field analysis (CoMFA) and comparative molecular similarity analysis (CoMSIA) methods, were established. Molecular docking was employed to explore the details of the interaction between the inhibitors and the aggrecanases-2 protein. According to the analyses of these models, several new potential inhibitors with higher predicted activity were designed, and their designs were supported by molecular docking simulations. This work proposes a fast and effective approach to the design and prediction of new potential inhibitors, and the study of the interaction mechanism provides a better understanding of how the inhibitors bind to the target protein, which will be useful for structure-based drug design and modification. Copyright © 2013 Elsevier Ltd. All rights reserved.
Applications of Pharmacometrics in the Clinical Development and Pharmacotherapy of Anti-Infectives
Trivedi, Ashit; Lee, Richard E; Meibohm, Bernd
2013-01-01
With the increased emergence of anti-infective resistance in recent years, much focus has been drawn to the development of new anti-infectives and the optimization of treatment regimens and combination therapies for established antimicrobials. In this context, the field of pharmacometrics, using quantitative numerical modeling and simulation techniques, has emerged as an invaluable tool in the pharmaceutical industry, academia and regulatory agencies to facilitate the integration of preclinical and clinical development data and to provide a scientifically based framework for rational dosage regimen design and treatment optimization. This review highlights the usefulness of pharmacometric analyses in anti-infective drug development and applied pharmacotherapy with select examples. PMID:23473593
Comprehensive Logic Based Analyses of Toll-Like Receptor 4 Signal Transduction Pathway
Padwal, Mahesh Kumar; Sarma, Uddipan; Saha, Bhaskar
2014-01-01
Among the 13 TLRs in vertebrate systems, only TLR4 utilizes both the Myeloid differentiation factor 88 (MyD88) and Toll/Interleukin-1 receptor (TIR)-domain-containing adapter interferon-β-inducing Factor (TRIF) adaptors to transduce signals triggering host-protective immune responses. Earlier studies on the pathway combined various experimental data in the form of one comprehensive map of TLR signaling. But in the absence of adequate kinetic parameters, quantitative mathematical models that reveal emerging systems-level properties and dynamic inter-regulation among the kinases/phosphatases of the TLR4 network are not yet available. So, here we used a reaction stoichiometry-based, parameter-independent logical modeling formalism to build a TLR4 signaling network model that captured the feedback regulations, the interdependencies between signaling kinases and phosphatases, and the outcome of simulated infections. The analyses of the TLR4 signaling network revealed 360 feedback loops, 157 negative and 203 positive, of which 334 loops had the phosphatase PP1 as an essential component. The network elements' interdependencies (positive or negative) under perturbation conditions such as phosphatase knockouts revealed interdependencies between the dual-specificity phosphatases MKP-1 and MKP-3 and the kinases in the MAPK modules, and the role of PP2A in the auto-regulation of Calmodulin kinase-II. Our simulations under specific kinase or phosphatase gene-deficiency or inhibition conditions corroborated several previously reported experimental observations. The simulations mimicking Yersinia pestis and E. coli infections identified the key perturbations in the network and potential drug targets. Thus, our analyses of TLR4 signaling highlight the role of phosphatases as key regulatory factors determining the global interdependencies among the network elements, uncover novel signaling connections, and identify potential drug targets for infections. PMID:24699232
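The parameter-free flavour of such logical analyses can be illustrated with a toy signed interaction graph in which feedback loops are enumerated as directed cycles and classified by the product of edge signs; the three-node network below is illustrative only, not the published TLR4 map.

```python
# Toy sketch: count positive and negative feedback loops in a signed digraph.
import networkx as nx

G = nx.DiGraph()
G.add_edge("TLR4", "MAPK", sign=+1)
G.add_edge("MAPK", "PP1", sign=+1)
G.add_edge("PP1", "MAPK", sign=-1)     # phosphatase-mediated negative feedback
G.add_edge("MAPK", "TLR4", sign=+1)

positive, negative = 0, 0
for cycle in nx.simple_cycles(G):
    sign = 1
    for u, v in zip(cycle, cycle[1:] + cycle[:1]):   # consecutive edges, wrap around
        sign *= G[u][v]["sign"]
    if sign > 0:
        positive += 1
    else:
        negative += 1
```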
Wellenberg, Ruud H H; Boomsma, Martijn F; van Osch, Jochen A C; Vlassenbroek, Alain; Milles, Julien; Edens, Mireille A; Streekstra, Geert J; Slump, Cornelis H; Maas, Mario
To quantify the combined use of iterative model-based reconstruction (IMR) and orthopaedic metal artefact reduction (O-MAR) in reducing metal artefacts and improving image quality in a total hip arthroplasty phantom. Scans acquired at several dose levels and kVps were reconstructed with filtered back-projection (FBP), iterative reconstruction (iDose) and IMR, with and without O-MAR. Computed tomography (CT) numbers, noise levels, signal-to-noise ratios and contrast-to-noise ratios were analysed. Iterative model-based reconstruction results in overall improved image quality compared to iDose and FBP (P < 0.001). Orthopaedic metal artefact reduction is most effective in reducing severe metal artefacts, improving CT number accuracy by 50%, 60%, and 63% (P < 0.05), reducing noise by 1%, 62%, and 85% (P < 0.001), improving signal-to-noise ratios by 27%, 47%, and 46% (P < 0.001), and improving contrast-to-noise ratios by 16%, 25%, and 19% (P < 0.001) with FBP, iDose, and IMR, respectively. The combined use of IMR and O-MAR strongly improves overall image quality and strongly reduces metal artefacts in the CT imaging of a total hip arthroplasty phantom.
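For reference, the image-quality metrics compared across reconstructions (mean CT number, noise, SNR and CNR) can be computed from regions of interest as in the sketch below; the ROI masks and background region are assumptions.

```python
# Sketch: ROI-based CT image-quality metrics.
import numpy as np

def roi_metrics(image, roi_mask, background_mask):
    """image: 2D array of CT numbers (HU); masks: boolean arrays of same shape."""
    roi = image[roi_mask]
    bg = image[background_mask]
    mean_hu, noise = roi.mean(), roi.std()
    snr = mean_hu / noise if noise > 0 else np.inf
    cnr = abs(mean_hu - bg.mean()) / np.sqrt((roi.std() ** 2 + bg.std() ** 2) / 2)
    return {"mean_HU": mean_hu, "noise": noise, "SNR": snr, "CNR": cnr}
```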
Raj, Vinit; Bhadauria, Archana S; Singh, Ashok K; Kumar, Umesh; Rai, Amit; Keshari, Amit K; Kumar, Pranesh; Kumar, Dinesh; Maity, Biswanath; Nath, Sneha; Prakash, Anand; Ansari, Kausar M; Jat, Jawahar L; Saha, Sudipta
2018-03-23
We attempted a preclinical study using a DMH-induced CRC rat model to evaluate the antitumor potential of our recently synthesized 1,3,4-thiadiazoles. The molecular insights were confirmed through ELISA, qRT-PCR and western blot analyses. The CRC condition was produced in response to COX-2 and IL-6 induced activation of JAK2/STAT3 which, in turn, was due to the enhanced phosphorylation of JAK2 and STAT3. Treatment with the 1,3,4-thiadiazole derivatives (VR24 and VR27) caused significant blockade of this signaling pathway. The behavior of STAT3 populations in response to IL-6 and COX-2 stimulation was further confirmed through data-based mathematical modeling using the quantitative western blot data. Finally, VR24 and VR27 restored the perturbed metabolites associated with DMH-induced CRC, as evidenced through 1H NMR-based serum metabolomics. The tumor-protecting ability of VR24 and VR27 was found to be comparable to, or somewhat better than, that of the marketed chemotherapeutic 5-fluorouracil. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Linlin; Wang, Hongrui; Wang, Cheng; ...
2017-05-16
Drought risk analysis is essential for regional water resource management. In this study, the probabilistic relationship between precipitation and meteorological drought in Beijing, China, was calculated under three different precipitation conditions (precipitation equal to, greater than, or less than a threshold) based on copulas. The Standardized Precipitation Evapotranspiration Index (SPEI) was calculated based on monthly total precipitation and monthly mean temperature data. The trends and variations in the SPEI were analysed using Hilbert-Huang Transform (HHT) and Mann-Kendall (MK) trend tests with a running approach. The results of the HHT and MK test indicated a significant decreasing trend in the SPEI. The copula-based conditional probability indicated that the probability of meteorological drought decreased as monthly precipitation increased and that 10 mm can be regarded as the threshold for triggering extreme drought. From a quantitative perspective, when R ≤ 10 mm, the probabilities of moderate drought, severe drought, and extreme drought were 22.1%, 18%, and 13.6%, respectively. This conditional probability distribution not only revealed the occurrence of meteorological drought in Beijing but also provided a quantitative way to analyse the probability of drought under different precipitation conditions. Furthermore, the results provide a useful reference for future drought prediction.
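A hedged sketch of a copula-based conditional drought probability, P(SPEI ≤ threshold | R ≤ r), is shown below using a Gaussian copula with empirical marginals purely for illustration; the paper's copula family, marginals and fitted parameters are not reproduced.

```python
# Sketch: conditional drought probability from a Gaussian copula fitted to
# monthly precipitation (R) and SPEI series.
import numpy as np
from scipy.stats import norm, multivariate_normal, rankdata

def conditional_drought_prob(precip, spei, r_threshold, spei_threshold):
    # Empirical marginal CDF values (pseudo-observations).
    u = rankdata(precip) / (len(precip) + 1)
    v = rankdata(spei) / (len(spei) + 1)
    # Gaussian copula parameter from the correlation of normal scores.
    rho = np.corrcoef(norm.ppf(u), norm.ppf(v))[0, 1]
    biv = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
    u0 = np.mean(precip <= r_threshold)            # P(R <= r)
    v0 = np.mean(spei <= spei_threshold)           # P(SPEI <= threshold)
    joint = biv.cdf([norm.ppf(u0), norm.ppf(v0)])  # copula C(u0, v0)
    return joint / u0                              # P(drought | R <= r)
```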
Real medical benefit assessed by indirect comparison.
Falissard, Bruno; Zylberman, Myriam; Cucherat, Michel; Izard, Valérie; Meyer, François
2009-01-01
Frequently, in data packages submitted for Marketing Approval to the CHMP, there is a lack of relevant head-to-head comparisons of medicinal products that could enable national authorities responsible for the approval of reimbursement to assess the Added Therapeutic Value (ASMR) of new clinical entities or line extensions of existing therapies. Indirect or mixed treatment comparisons (MTC) are methods stemming from the field of meta-analysis that have been designed to tackle this problem. Adjusted indirect comparisons, meta-regressions, mixed models and Bayesian network analyses pool results of randomised controlled trials (RCTs), enabling a quantitative synthesis. The REAL procedure, recently developed by the HAS (French National Authority for Health), is a mixture of an MTC and an effect model based on expert opinions. It is intended to translate the efficacy observed in the trials into the effectiveness expected in day-to-day clinical practice in France.
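As a concrete example of the simplest MTC-type method mentioned, the Bucher adjusted indirect comparison infers the effect of A versus C from A-versus-B and C-versus-B trials that share the common comparator B; the numbers below are hypothetical.

```python
# Sketch: Bucher adjusted indirect comparison on the log odds-ratio scale.
import numpy as np
from scipy.stats import norm

def bucher_indirect(log_or_ab, se_ab, log_or_cb, se_cb):
    log_or_ac = log_or_ab - log_or_cb                  # indirect A vs C effect
    se_ac = np.sqrt(se_ab ** 2 + se_cb ** 2)           # variances add
    ci = (log_or_ac - 1.96 * se_ac, log_or_ac + 1.96 * se_ac)
    p = 2 * norm.sf(abs(log_or_ac / se_ac))
    return log_or_ac, se_ac, ci, p

# Example: drug A vs placebo (logOR -0.40, SE 0.15); drug C vs placebo (-0.25, SE 0.20).
print(bucher_indirect(-0.40, 0.15, -0.25, 0.20))
```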
A methodological investigation of hominoid craniodental morphology and phylogenetics.
Bjarnason, Alexander; Chamberlain, Andrew T; Lockwood, Charles A
2011-01-01
The evolutionary relationships of extant great apes and humans have been largely resolved by molecular studies, yet morphology-based phylogenetic analyses continue to provide conflicting results. In order to further investigate this discrepancy we present bootstrap clade support of morphological data based on two quantitative datasets, one dataset consisting of linear measurements of the whole skull from 5 hominoid genera and the second dataset consisting of 3D landmark data from the temporal bone of 5 hominoid genera, including 11 sub-species. Using similar protocols for both datasets, we were able to 1) compare distance-based phylogenetic methods to cladistic parsimony of quantitative data converted into discrete character states, 2) vary outgroup choice to observe its effect on phylogenetic inference, and 3) analyse male and female data separately to observe the effect of sexual dimorphism on phylogenies. Phylogenetic analysis was sensitive to methodological decisions, particularly outgroup selection, where designation of Pongo as an outgroup and removal of Hylobates resulted in greater congruence with the proposed molecular phylogeny. The performance of distance-based methods also justifies their use in phylogenetic analysis of morphological data. It is clear from our analyses that hominoid phylogenetics ought not to be used as an example of conflict between the morphological and molecular, but as an example of how outgroup and methodological choices can affect the outcome of phylogenetic analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
Nock, Charles A.; Lecigne, Bastien; Taugourdeau, Olivier; Greene, David F.; Dauzat, Jean; Delagrange, Sylvain; Messier, Christian
2016-01-01
Background and Aims Despite a longstanding interest in variation in tree species vulnerability to ice storm damage, quantitative analyses of the influence of crown structure on within-crown variation in ice accretion are rare. In particular, the effect of prior interception by higher branches on lower branch accumulation remains unstudied. The aim of this study was to test the hypothesis that intra-crown ice accretion can be predicted by a measure of the degree of sheltering by neighbouring branches. Methods Freezing rain was artificially applied to Acer platanoides L., and in situ branch-ice thickness was measured directly and from LiDAR point clouds. Two models of freezing rain interception were developed: ‘IceCube’, which uses point clouds to relate ice accretion to a voxel-based index (sheltering factor; SF) of the sheltering effect of branch elements above a measurement point; and ‘IceTree’, a simulation model for in silico evaluation of the interception pattern of freezing rain in virtual tree crowns. Key Results Intra-crown radial ice accretion varied strongly, declining from the tips to the bases of branches and from the top to the base of the crown. SF for branches varied strongly within the crown, and differences among branches were consistent for a range of model parameters. Intra-crown variation in ice accretion on branches was related to SF (R2 = 0·46), with in silico results from IceTree supporting empirical relationships from IceCube. Conclusions Empirical results and simulations confirmed a key role for crown architecture in determining intra-crown patterns of ice accretion. As suspected, the concentration of freezing rain droplets is attenuated by passage through the upper crown, and thus higher branches accumulate more ice than lower branches. This is the first step in developing a model that can provide a quantitative basis for investigating intra-crown and inter-specific variation in freezing rain damage. PMID:27107412
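A simplified sketch of a voxel-based sheltering factor of the kind described (occupied voxels above a measurement point in the LiDAR cloud) is given below; the voxel size and the column-only geometry are assumptions, not the IceCube model itself.

```python
# Sketch: count occupied voxels in a vertical column above a branch point.
import numpy as np

def sheltering_factor(points, query_xyz, voxel=0.05, radius=0.25):
    """points: (N, 3) LiDAR returns; query_xyz: (3,) branch measurement point."""
    above = points[points[:, 2] > query_xyz[2]]                  # returns higher up
    lateral = np.linalg.norm(above[:, :2] - query_xyz[:2], axis=1)
    column = above[lateral <= radius]                            # within a column
    occupied = np.unique(np.floor(column / voxel).astype(int), axis=0)
    return len(occupied)                                         # SF proxy
```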
Quantitative self-assembly prediction yields targeted nanomedicines
NASA Astrophysics Data System (ADS)
Shamay, Yosi; Shah, Janki; Işık, Mehtap; Mizrachi, Aviram; Leibold, Josef; Tschaharganeh, Darjus F.; Roxbury, Daniel; Budhathoki-Uprety, Januka; Nawaly, Karla; Sugarman, James L.; Baut, Emily; Neiman, Michelle R.; Dacek, Megan; Ganesh, Kripa S.; Johnson, Darren C.; Sridharan, Ramya; Chu, Karen L.; Rajasekhar, Vinagolu K.; Lowe, Scott W.; Chodera, John D.; Heller, Daniel A.
2018-02-01
Development of targeted nanoparticle drug carriers often requires complex synthetic schemes involving both supramolecular self-assembly and chemical modification. These processes are generally difficult to predict, execute, and control. We describe herein a targeted drug delivery system that is accurately and quantitatively predicted to self-assemble into nanoparticles based on the molecular structures of precursor molecules, which are the drugs themselves. The drugs assemble with the aid of sulfated indocyanines into particles with ultrahigh drug loadings of up to 90%. We devised quantitative structure-nanoparticle assembly prediction (QSNAP) models to identify and validate electrotopological molecular descriptors as highly predictive indicators of nano-assembly and nanoparticle size. The resulting nanoparticles selectively targeted kinase inhibitors to caveolin-1-expressing human colon cancer and autochthonous liver cancer models to yield striking therapeutic effects while avoiding pERK inhibition in healthy skin. This finding enables the computational design of nanomedicines based on quantitative models for drug payload selection.
Butalid, Ligaya; Verhaak, Peter F M; Boeije, Hennie R; Bensing, Jozien M
2012-08-08
Doctor-patient communication has been influenced over time by factors such as the rise of evidence-based medicine and a growing emphasis on patient-centred care. Despite disputes in the literature on the tension between evidence-based medicine and patient-centered medicine, patients' views on what constitutes high quality of doctor-patient communication are seldom an explicit topic for research. The aim of this study is to examine whether analogue patients (lay people judging videotaped consultations) perceive shifts in the quality of doctor-patient communication over a twenty-year period. Analogue patients (N = 108) assessed 189 videotaped general practice consultations from two periods (1982-1984 and 2000-2001). They provided ratings on three dimensions (scale 1-10) and gave written feedback. With a mixed-methods research design, we examined these assessments quantitatively (in relation to observed communication coded with RIAS) and qualitatively. 1) The quantitative analyses showed that biomedical communication and rapport building were positively associated with the quality assessments of videotaped consultations from the first period, but not from the second. Psychosocial communication and personal remarks were related to positive quality assessments of both periods; 2) the qualitative analyses showed that in both periods, participants provided the same balance between positive and negative comments. Listening, giving support, and showing respect were considered equally important in both periods. We identified shifts in the participants' observations on how GPs explained things to the patient, the division of roles and responsibilities, and the emphasis on problem-focused communication (first period) versus solution-focused communication (last period). Analogue patients recognize shifts in the quality of doctor-patient communication from two different periods, including a shift from problem-focused communication to solution-focused communication, and they value an egalitarian doctor-patient relationship. The two research methods were complementary; based on the quantitative analyses we found shifts in communication, which we confirmed and specified in our qualitative analyses.
QTest: Quantitative Testing of Theories of Binary Choice
Regenwetter, Michel; Davis-Stober, Clintin P.; Lim, Shiau Hong; Guo, Ying; Popova, Anna; Zwilling, Chris; Cha, Yun-Shil; Messner, William
2014-01-01
The goal of this paper is to make modeling and quantitative testing accessible to behavioral decision researchers interested in substantive questions. We provide a novel, rigorous, yet very general, quantitative diagnostic framework for testing theories of binary choice. This permits the nontechnical scholar to proceed far beyond traditionally rather superficial methods of analysis, and it permits the quantitatively savvy scholar to triage theoretical proposals before investing effort into complex and specialized quantitative analyses. Our theoretical framework links static algebraic decision theory with observed variability in behavioral binary choice data. The paper is supplemented with a custom-designed public-domain statistical analysis package, the QTest software. We illustrate our approach with a quantitative analysis using published laboratory data, including tests of novel versions of “Random Cumulative Prospect Theory.” A major asset of the approach is the potential to distinguish decision makers who have a fixed preference and commit errors in observed choices from decision makers who waver in their preferences. PMID:24999495
Shafqat-Abbasi, Hamdah; Kowalewski, Jacob M; Kiss, Alexa; Gong, Xiaowei; Hernandez-Varas, Pablo; Berge, Ulrich; Jafari-Mamaghani, Mehrdad; Lock, John G; Strömblad, Staffan
2016-01-01
Mesenchymal (lamellipodial) migration is heterogeneous, although whether this reflects progressive variability or discrete, 'switchable' migration modalities, remains unclear. We present an analytical toolbox, based on quantitative single-cell imaging data, to interrogate this heterogeneity. Integrating supervised behavioral classification with multivariate analyses of cell motion, membrane dynamics, cell-matrix adhesion status and F-actin organization, this toolbox here enables the detection and characterization of two quantitatively distinct mesenchymal migration modes, termed 'Continuous' and 'Discontinuous'. Quantitative mode comparisons reveal differences in cell motion, spatiotemporal coordination of membrane protrusion/retraction, and how cells within each mode reorganize with changed cell speed. These modes thus represent distinctive migratory strategies. Additional analyses illuminate the macromolecular- and cellular-scale effects of molecular targeting (fibronectin, talin, ROCK), including 'adaptive switching' between Continuous (favored at high adhesion/full contraction) and Discontinuous (low adhesion/inhibited contraction) modes. Overall, this analytical toolbox now facilitates the exploration of both spontaneous and adaptive heterogeneity in mesenchymal migration. DOI: http://dx.doi.org/10.7554/eLife.11384.001 PMID:26821527
An evidential reasoning extension to quantitative model-based failure diagnosis
NASA Technical Reports Server (NTRS)
Gertler, Janos J.; Anderson, Kenneth C.
1992-01-01
The detection and diagnosis of failures in physical systems characterized by continuous-time operation are studied. A quantitative diagnostic methodology has been developed that utilizes the mathematical model of the physical system. On the basis of the latter, diagnostic models are derived, each of which comprises a set of orthogonal parity equations. To improve the robustness of the algorithm, several models may be used in parallel, providing potentially incomplete and/or conflicting inferences. Dempster's rule of combination is used to integrate evidence from the different models. The basic probability measures are assigned utilizing quantitative information extracted from the mathematical model and from online computation performed therewith.
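For illustration, Dempster's rule of combination for two basic probability assignments over a small frame of discernment can be implemented as below; the fault hypotheses and masses are placeholders, not the paper's diagnostic models.

```python
# Sketch: Dempster's rule of combination for two basic probability assignments (BPAs).
def dempster_combine(m1, m2):
    """m1, m2: dicts mapping frozenset hypotheses to masses summing to 1."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb                  # mass assigned to the empty set
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

m_model1 = {frozenset({"sensor_fault"}): 0.6,
            frozenset({"sensor_fault", "actuator_fault"}): 0.4}
m_model2 = {frozenset({"actuator_fault"}): 0.3,
            frozenset({"sensor_fault", "actuator_fault"}): 0.7}
print(dempster_combine(m_model1, m_model2))
```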
Bryce A. Richardson; Gerald E. Rehfeldt; Mee-Sook Kim
2009-01-01
Analyses of molecular and quantitative genetic data demonstrate the existence of congruent climate-related patterns in western white pine (Pinus monticola). Two independent studies allowed comparisons of amplified fragment length polymorphism (AFLP) markers with quantitative variation in adaptive traits. Principal component analyses...
NASA Astrophysics Data System (ADS)
Le Gall, C.; Geiger, E.; Gallais-During, A.; Pontillon, Y.; Lamontagne, J.; Hanus, E.; Ducros, G.
2017-11-01
Qualitative and quantitative analyses on the VERDON-1 sample made it possible to obtain valuable information on fission product behaviour in the fuel during the test. A promising methodology based on the quantitative results of post-test characterisations has been implemented to assess the release fraction of non γ-emitter fission products. The order of magnitude of the estimated release fractions for each fission product was consistent with their class of volatility.
Complex networks untangle competitive advantage in Australian football
NASA Astrophysics Data System (ADS)
Braham, Calum; Small, Michael
2018-05-01
We construct player-based complex network models of Australian football teams for the 2014 Australian Football League season; modelling the passes between players as weighted, directed edges. We show that analysis of these measures can give an insight into the underlying structure and strategy of Australian football teams, quantitatively distinguishing different playing styles. The relationships observed between network properties and match outcomes suggest that successful teams exhibit well-connected passing networks with the passes distributed between all 22 players as evenly as possible. Linear regression models of team scores and match margins show significant improvements in R2 and Bayesian information criterion when network measures are added to models that use conventional measures, demonstrating that network analysis measures contain useful, extra information. Several measures, particularly the mean betweenness centrality, are shown to be useful in predicting the outcomes of future matches, suggesting they measure some aspect of the intrinsic strength of teams. In addition, several local centrality measures are shown to be useful in analysing individual players' differing contributions to the team's structure.
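The passing-network analysis can be sketched as follows: build a weighted directed graph of passes, compute mean betweenness centrality, and compare score models with and without network measures via BIC; all data below are hypothetical placeholders.

```python
# Sketch: passing-network centrality and BIC comparison of score models.
import networkx as nx
import numpy as np
import statsmodels.api as sm

def mean_betweenness(pass_counts):
    """pass_counts: dict {(passer, receiver): n_passes} for one team-match."""
    G = nx.DiGraph()
    for (u, v), n in pass_counts.items():
        G.add_edge(u, v, weight=n, distance=1.0 / n)   # more passes = 'shorter' edge
    bc = nx.betweenness_centrality(G, weight="distance")
    return np.mean(list(bc.values()))

# y = team score; X_conv = conventional measures; net = a network measure
# (placeholder random data, rows = team-matches).
rng = np.random.default_rng(2)
X_conv = rng.random((50, 2)); net = rng.random(50); y = rng.random(50) * 100
m0 = sm.OLS(y, sm.add_constant(X_conv)).fit()
m1 = sm.OLS(y, sm.add_constant(np.column_stack([X_conv, net]))).fit()
improved = m1.bic < m0.bic   # lower BIC despite the extra regressor
```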
Fiocco, Ugo; Stramare, Roberto; Martini, Veronica; Coran, Alessandro; Caso, Francesco; Costa, Luisa; Felicetti, Mara; Rizzo, Gaia; Tonietto, Matteo; Scanu, Anna; Oliviero, Francesca; Raffeiner, Bernd; Vezzù, Maristella; Lunardi, Francesca; Scarpa, Raffaele; Sacerdoti, David; Rubaltelli, Leopoldo; Punzi, Leonardo; Doria, Andrea; Grisan, Enrico
2017-02-01
To develop quantitative imaging biomarkers of synovial tissue perfusion by pixel-based contrast-enhanced ultrasound (CEUS), we studied the relationship between CEUS synovial vascular perfusion and the frequencies of pathogenic T helper (Th)-17 cells in psoriatic arthritis (PsA) joints. Eight consecutive patients with PsA were enrolled in this study. Gray scale CEUS evaluation was performed on the same joint immediately after joint aspiration, with automatic assessment of perfusion data using a new pixel-based quantification approach and the gamma-variate model. The set of perfusion parameters derived from the time-intensity curve includes the maximum value (peak) of the signal intensity curve, the blood volume index or area under the curve (BVI, AUC) and the contrast mean transit time (MTT). The frequencies of SF IL17A-F+CD161+IL23+CD4+ T cell subsets were quantified directly ex vivo by fluorescence-activated cell sorting (FACS). In cross-sectional analyses, when tested for multiple comparisons at a false discovery rate of 10%, a common pattern of correlations of the CEUS peak, AUC (BVI) and MTT parameters with the IL17A-F+IL23+, IL17A-F+CD161+ and IL17A-F+CD161+IL23+CD4+ T cell subsets, as well as a lack of correlation between both peak and AUC values and both CD4+ T and CD4+IL23+ T cells, was observed. The pixel-based CEUS assessment thus truly measures synovial inflammation and is a useful tool for developing quantitative imaging biomarkers for monitoring targeted therapeutics in PsA.
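A hedged sketch of a pixel-wise gamma-variate fit to a time-intensity curve, from which peak, AUC (blood volume index) and a mean-transit-time estimate are derived, is given below; the parameterization and synthetic data are assumptions, not the study's implementation.

```python
# Sketch: gamma-variate fit of a contrast-enhanced time-intensity curve (TIC).
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    dt = np.clip(t - t0, 0.0, None)
    return A * dt ** alpha * np.exp(-dt / beta)

def tic_parameters(t, intensity):
    p0 = (1.0, t[np.argmax(intensity)] / 2.0, 2.0, 2.0)
    bounds = ([0.0, 0.0, 0.1, 0.1], [np.inf, t.max(), 10.0, 60.0])
    popt, _ = curve_fit(gamma_variate, t, intensity, p0=p0, bounds=bounds)
    fit = gamma_variate(t, *popt)
    peak = fit.max()                      # maximum of the fitted curve
    auc = np.trapz(fit, t)                # blood volume index (area under the curve)
    mtt = np.trapz(t * fit, t) / auc      # intensity-weighted mean time as MTT estimate
    return peak, auc, mtt

t = np.linspace(0.0, 60.0, 240)
tic = gamma_variate(t, 1.0, 5.0, 2.5, 3.0) + np.random.default_rng(3).normal(0, 0.02, t.size)
print(tic_parameters(t, tic))
```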
Geomorphology and landscape organization of a northern peatland complex
NASA Astrophysics Data System (ADS)
Richardson, M. C.
2012-12-01
The geomorphic evolution of northern peatlands is governed by complex ecohydrological feedback mechanisms and associated hydro-climatic drivers. For example, prevailing models of bog development (i.e. Ingram's groundwater mounding hypothesis and variants) attempt to explicitly link bog dome characteristics to the regional climate based on analytical and numerical models of lateral groundwater flow and the first-order control of water table position on rates of peat accumulation. In this talk I will present new results from quantitative geomorphic analyses of a northern peatland complex at the De Beers Victor diamond mine site in the Hudson Bay Lowlands of northern Ontario. This work capitalizes on spatially-extensive, high-resolution topographic (LiDAR) data to rigorously test analytical and numerical models of bog dome development in this landscape. The analysis and discussion are then expanded beyond individual bog formations to more broadly consider ecohydrological drivers of landscape organization, with implications for understanding and modeling catchment-scale runoff response. Results show that in this landscape, drainage patterns exhibit relatively well-organized characteristics consistent with observed runoff responses in six gauged research catchments. Interpreted together, the results of these geomorphic and hydrologic analyses help refine our understanding of water balance partitioning among different landcover types within northern peatland complexes. These findings can be used to help guide the development of appropriate numerical model structures for hydrologic prediction in ungauged peatland basins of northern Canada.
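For context, the groundwater mounding idea referred to above is often written, under Dupuit assumptions, as h(x) = sqrt(U(L² − x²)/K) for a bog of half-width L with steady recharge U and hydraulic conductivity K; the sketch below evaluates such a profile with illustrative parameter values and is not the talk's model.

```python
# Sketch: idealized groundwater-mound (bog dome) profile under Dupuit assumptions.
import numpy as np

def mound_profile(x, half_width, recharge, conductivity):
    """Height above the drainage base at distance x from the bog centre."""
    arg = recharge * (half_width ** 2 - x ** 2) / conductivity
    return np.sqrt(np.clip(arg, 0.0, None))

x = np.linspace(-500.0, 500.0, 101)                     # metres from bog centre
profile = mound_profile(x, half_width=500.0, recharge=0.3, conductivity=30.0)
crest_height = profile.max()                            # predicted dome height at the centre
```

Such an analytical profile can then be compared against LiDAR-derived dome surfaces to test the mounding hypothesis.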
Quantitative Adverse Outcome Pathways and Their Application to Predictive Toxicology
A quantitative adverse outcome pathway (qAOP) consists of one or more biologically based, computational models describing key event relationships linking a molecular initiating event (MIE) to an adverse outcome. A qAOP provides quantitative, dose–response, and time-course p...
Lawless, Craig; Hubbard, Simon J.; Fan, Jun; Bessant, Conrad; Hermjakob, Henning; Jones, Andrew R.
2012-01-01
New methods for performing quantitative proteome analyses based on differential labeling protocols or label-free techniques are reported in the literature on an almost monthly basis. In parallel, a correspondingly vast number of software tools for the analysis of quantitative proteomics data has also been described in the literature and produced by private companies. In this article we focus on the review of some of the most popular techniques in the field and present a critical appraisal of several software packages available to process and analyze the data produced. We also describe the importance of community standards to support the wide range of software, which may assist researchers in the analysis of data using different platforms and protocols. It is intended that this review will serve bench scientists both as a useful reference and a guide to the selection and use of different pipelines to perform quantitative proteomics data analysis. We have produced a web-based tool (http://www.proteosuite.org/?q=other_resources) to help researchers find appropriate software for their local instrumentation, available file formats, and quantitative methodology. PMID:22804616
McIlvane, William J; Kledaras, Joanne B; Gerard, Christophe J; Wilde, Lorin; Smelson, David
2018-07-01
A few noteworthy exceptions notwithstanding, quantitative analyses of relational learning are most often simple descriptive measures of study outcomes. For example, studies of stimulus equivalence have made much progress using measures such as percentage consistent with equivalence relations, discrimination ratio, and response latency. Although procedures may have ad hoc variations, they remain fairly similar across studies. Comparison studies of training variables that lead to different outcomes are few. Yet to be developed are tools designed specifically for dynamic and/or parametric analyses of relational learning processes. This paper will focus on recent studies to develop (1) quality computer-based programmed instruction for supporting relational learning in children with autism spectrum disorders and intellectual disabilities and (2) formal algorithms that permit ongoing, dynamic assessment of learner performance and procedure changes to optimize instructional efficacy and efficiency. Because these algorithms have a strong basis in evidence and in theories of stimulus control, they may have utility also for basic and translational research. We present an overview of the research program, details of algorithm features, and summary results that illustrate their possible benefits. It also presents arguments that such algorithm development may encourage parametric research, help in integrating new research findings, and support in-depth quantitative analyses of stimulus control processes in relational learning. Such algorithms may also serve to model control of basic behavioral processes that is important to the design of effective programmed instruction for human learners with and without functional disabilities. Copyright © 2018 Elsevier B.V. All rights reserved.
Ozaki, Yu-ichi; Uda, Shinsuke; Saito, Takeshi H; Chung, Jaehoon; Kubota, Hiroyuki; Kuroda, Shinya
2010-04-01
Modeling of cellular functions on the basis of experimental observation is increasingly common in the field of cellular signaling. However, such modeling requires a large amount of quantitative data of signaling events with high spatio-temporal resolution. A novel technique which allows us to obtain such data is needed for systems biology of cellular signaling. We developed a fully automatable assay technique, termed quantitative image cytometry (QIC), which integrates a quantitative immunostaining technique and a high precision image-processing algorithm for cell identification. With the aid of an automated sample preparation system, this device can quantify protein expression, phosphorylation and localization with subcellular resolution at one-minute intervals. The signaling activities quantified by the assay system showed good correlation with, as well as comparable reproducibility to, western blot analysis. Taking advantage of the high spatio-temporal resolution, we investigated the signaling dynamics of the ERK pathway in PC12 cells. The QIC technique appears as a highly quantitative and versatile technique, which can be a convenient replacement for the most conventional techniques including western blot, flow cytometry and live cell imaging. Thus, the QIC technique can be a powerful tool for investigating the systems biology of cellular signaling.
Quantitative non-destructive testing
NASA Technical Reports Server (NTRS)
Welch, C. S.
1985-01-01
The work undertaken during this period included two primary efforts. The first is a continuation of the previous year's theoretical development of models and data analyses for NDE using the Optical Thermal Infra-Red Measurement System (OPTITHIRMS), which involves heat injection with a laser and observation of the resulting thermal pattern with an infrared imaging system. The second is an investigation into the use of the thermoelastic effect as an effective tool for NDE. As in the past, the effort is aimed towards NDE techniques applicable to composite materials in structural applications. The theoretical development described produced several models of temperature patterns over several geometries and material types. Agreement between model data and temperature observations was obtained. A model study with one of these models investigated some fundamental difficulties with the proposed method (the primitive equation method) for obtaining diffusivity values in plates of finite thickness and supplied guidelines for avoiding these difficulties. A wide range of computing speeds was found among the various models, with a one-dimensional model based on Laplace's integral solution being both very fast and very accurate.
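The one-dimensional treatment referred to at the end can be illustrated with the classic flash-heating result for an insulated plate: after an instantaneous surface heat pulse Q per unit area, the front-surface temperature rise is dT(t) = Q/(e*sqrt(pi*t)) * [1 + 2*sum_n exp(-n^2 L^2/(alpha*t))], where alpha is the thermal diffusivity and e = sqrt(k*rho*c) the thermal effusivity. The sketch below evaluates this series with assumed, order-of-magnitude material properties; it is not the OPTITHIRMS model itself.

```python
import numpy as np

def front_face_rise(t, q, k, rho, c, thickness, n_terms=20):
    """Front-surface temperature rise of an insulated plate after an instantaneous
    surface heat pulse q (J/m^2), using the image-source series."""
    alpha = k / (rho * c)                        # thermal diffusivity, m^2/s
    effusivity = np.sqrt(k * rho * c)            # thermal effusivity
    n = np.arange(1, n_terms + 1)[:, None]
    images = 2.0 * np.exp(-(n**2) * thickness**2 / (alpha * t)).sum(axis=0)
    return q / (effusivity * np.sqrt(np.pi * t)) * (1.0 + images)

# Assumed order-of-magnitude properties for a polymer-matrix composite plate.
t = np.linspace(0.05, 30.0, 600)                 # seconds (avoid t = 0)
dT = front_face_rise(t, q=2000.0, k=0.6, rho=1600.0, c=1200.0, thickness=3e-3)
print(f"rise at 0.05 s: {dT[0]:.2f} K; rise at 30 s: {dT[-1]:.2f} K")
```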
NASA Astrophysics Data System (ADS)
Wu, C.; Nittrouer, J. A.; Burmeister, K. C.
2017-12-01
River hydrodynamic conditions are modified where a system approaches its terminal basin, characterized by the onset of non-uniform "backwater" flow. A decrease in boundary shear stress in the backwater region reduces transport capacity and results in sediment deposition on the channel bed. Although such morphodynamic conditions are common in modern fluvial-deltaic channels, the extent to which these processes are prevalent in the stratigraphic record remains unclear. For example, a few studies documenting changes in fluvial sandstone channel dimensions and grain size distributions near a river terminus attributed this variability to backwater hydrodynamics. However, quantitative tests using morphodynamic models bolstered by a variety of field observations, which could then be linked to sediment depositional patterns and stratigraphy, have yet to be produced. Here we calibrate a one-dimensional river flow model with measurements of paleo-slope and channel depth, and use the output to constrain a sediment transport model, with data from the Tullig Sandstone in the Western Irish Namurian Basin. Based on the model results, our analyses indicate that: (1) backwater hydrodynamics influence the spatial variation of sandstone dimensions and grain size across the delta, and (2) backwater hydrodynamics drive channel bed aggradation and progradation of the river mouth for conditions of constant sea level. Field data indicate that the reach-average story thickness increases, and then decreases, progressing downstream over the backwater reach. Based on the inferred transport and depositional processes, the measured deltaic stratigraphy patterns shown here are assumed to be associated with backwater hydrodynamics, and are therefore largely autogenic in origin. These analyses indicate that non-uniform hydrodynamics can generate stratigraphic patterns that could be misattributed to allogenic effects when interpreted with traditional geometric or diffusion-based depositional models. Moreover, the signals of river hydrodynamics preserved in the stratigraphic record can be a useful tool for differentiating between short-term autogenic and long-term allogenic processes.
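The one-dimensional backwater model invoked here is the gradually varied flow equation dh/dx = (S0 - Sf)/(1 - Fr^2), with the friction slope Sf supplied by a resistance closure. The sketch below integrates it upstream from a fixed base level for a wide rectangular channel with a Chezy closure; the discharge, slope and roughness are placeholders, not values calibrated for the Tullig Sandstone.

```python
import numpy as np

G = 9.81  # m/s^2

def backwater_profile(q, s0, chezy, h_mouth, reach_length, dx=100.0):
    """March dh/dx = (S0 - Sf)/(1 - Fr^2) upstream from the river mouth for a
    wide rectangular channel; q is the discharge per unit width (m^2/s)."""
    n_steps = int(reach_length / dx)
    h = np.empty(n_steps + 1)
    h[0] = h_mouth                                # base-level boundary condition
    for i in range(n_steps):
        fr2 = q**2 / (G * h[i]**3)                # Froude number squared
        sf = q**2 / (chezy**2 * h[i]**3)          # Chezy friction slope
        dhdx = (s0 - sf) / (1.0 - fr2)
        h[i + 1] = h[i] - dhdx * dx               # step in the upstream direction
    return h

# Placeholder values: q = 5 m^2/s, slope 1e-4, Chezy 50, 12 m depth at the mouth, 100 km reach.
h = backwater_profile(q=5.0, s0=1e-4, chezy=50.0, h_mouth=12.0, reach_length=100e3)
normal_depth = (5.0**2 / (50.0**2 * 1e-4)) ** (1.0 / 3.0)
print(f"normal depth ~ {normal_depth:.2f} m; depth 100 km upstream: {h[-1]:.2f} m")
```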
Murumkar, Prashant R; Giridhar, Rajani; Yadav, Mange Ram
2008-04-01
A set of 29 benzothiadiazepine hydroxamates having selective tumor necrosis factor-alpha converting enzyme inhibitory activity was used to compare the quality and predictive power of 3D-quantitative structure-activity relationship, comparative molecular field analysis, and comparative molecular similarity indices models for the atom-based, centroid/atom-based, database, and docked conformer-based alignments. Removal of two outliers from the initial training set of molecules improved the predictivity of the models. Among the 3D-quantitative structure-activity relationship models developed using the above four alignments, the database alignment provided the optimal predictive comparative molecular field analysis model for the training set, with cross-validated r(2) (q(2)) = 0.510, non-cross-validated r(2) = 0.972, standard error of estimate (s) = 0.098, and F = 215.44, and the optimal comparative molecular similarity indices model, with cross-validated r(2) (q(2)) = 0.556, non-cross-validated r(2) = 0.946, standard error of estimate (s) = 0.163, and F = 99.785. These models also showed the best test-set prediction for six compounds, with predictive r(2) values of 0.460 and 0.535, respectively. The contour maps obtained from the 3D-quantitative structure-activity relationship studies were appraised for activity trends for the molecules analyzed. The comparative molecular similarity indices models exhibited good external predictivity compared with that of the comparative molecular field analysis models. The data generated from the present study helped us to further design and report some novel and potent tumor necrosis factor-alpha converting enzyme inhibitors.
NASA Astrophysics Data System (ADS)
Ortolano, Gaetano; Visalli, Roberto; Godard, Gaston; Cirrincione, Rosolino
2018-06-01
We present a new ArcGIS®-based tool developed in the Python programming language for calibrating EDS/WDS X-ray element maps, with the aim of acquiring quantitative information of petrological interest. The calibration procedure is based on a multiple linear regression technique that takes into account interdependence among elements and is constrained by the stoichiometry of minerals. The procedure requires an appropriate number of spot analyses for use as internal standards and provides several test indexes for a rapid check of calibration accuracy. The code is based on an earlier image-processing tool designed primarily for classifying minerals in X-ray element maps; the original Python code has now been enhanced to yield calibrated maps of mineral end-members or the chemical parameters of each classified mineral. The semi-automated procedure can be used to extract a dataset that is automatically stored within queryable tables. As a case study, the software was applied to an amphibolite-facies garnet-bearing micaschist. The calibrated images obtained for both anhydrous (i.e., garnet and plagioclase) and hydrous (i.e., biotite) phases show a good fit with corresponding electron microprobe analyses. This new GIS-based tool package can thus find useful application in petrology and materials science research. Moreover, the huge quantity of data extracted opens new opportunities for the development of a thin-section microchemical database that, using a GIS platform, can be linked with other major global geoscience databases.
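The calibration step described (multiple linear regression of spot-analysis concentrations on raw count rates, then applied pixel by pixel) can be sketched generically as below. This is not the authors' ArcGIS tool; the array shapes, channel count and data are invented, and the stoichiometric constraints used in the actual procedure are omitted.

```python
import numpy as np

def calibrate_element(spot_counts, spot_wt_pct, map_counts):
    """Fit wt% = b0 + sum_j b_j * counts_j on internal-standard spots and apply
    the fitted coefficients to every pixel of the stacked count maps.

    spot_counts: (n_spots, n_channels); spot_wt_pct: (n_spots,);
    map_counts:  (n_channels, rows, cols)."""
    design = np.column_stack([np.ones(len(spot_wt_pct)), spot_counts])
    coef, *_ = np.linalg.lstsq(design, spot_wt_pct, rcond=None)
    resid = design @ coef - spot_wt_pct
    r2 = 1.0 - np.sum(resid**2) / np.sum((spot_wt_pct - spot_wt_pct.mean())**2)
    pixels = map_counts.reshape(map_counts.shape[0], -1).T
    calibrated = np.column_stack([np.ones(pixels.shape[0]), pixels]) @ coef
    return calibrated.reshape(map_counts.shape[1:]), r2

# Invented example: 3 count channels, 25 calibration spots, a 200 x 200 pixel map.
rng = np.random.default_rng(1)
spots = rng.uniform(50.0, 500.0, size=(25, 3))
wt = 0.02 * spots[:, 0] + 0.005 * spots[:, 1] + rng.normal(0.0, 0.1, 25)
maps = rng.uniform(50.0, 500.0, size=(3, 200, 200))
cal_map, r2 = calibrate_element(spots, wt, maps)
print(f"calibration R^2 on the spots: {r2:.3f}; mean calibrated value: {cal_map.mean():.2f} wt%")
```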
Comparison of satellite-derived dynamical quantities for the stratosphere of the Southern Hemisphere
NASA Technical Reports Server (NTRS)
Miles, Thomas (Editor); Oneill, Alan (Editor)
1989-01-01
As part of the international Middle Atmosphere Program (MAP), a project was instituted to study the dynamics of the Middle Atmosphere in the Southern Hemisphere (MASH). A pre-MASH workshop was held with two aims: comparison of Southern Hemisphere dynamical quantities derived from various archives of satellite data; and assessing the impact of different base-level height information on such derived quantities. The dynamical quantities examined included geopotential height, zonal wind, potential vorticity, eddy heat and momentum fluxes, and Eliassen-Palm fluxes. It was found that while there was usually qualitative agreement between the different sets of fields, substantial quantitative differences were evident, particularly in high latitudes. The fidelity of the base-level analysis was found to be of prime importance in calculating derived quantities - especially the Eliassen-Palm flux divergence and potential vorticity. Improvements in base-level analyses are recommended. In particular, quality controls should be introduced to remove spurious localized features from analyses, and information from all Antarctic radiosondes should be utilized where possible. Caution in drawing quantitative inferences from satellite data for the middle atmosphere of the Southern Hemisphere is advised.
Jin, Yan; Huang, Jing-feng; Peng, Dai-liang
2009-01-01
Ecological compensation is becoming one of the key, multidisciplinary issues in the field of resources and environmental management. Considering the changing relationship between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among the counties and districts. This model fills a gap in the quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile conflicting benefits across the economic, social, and ecological sectors. PMID:19353749
Saarman, Emily T.; Owens, Brian; Murray, Steven N.; Weisberg, Stephen B.; Field, John C.; Nielsen, Karina J.
2018-01-01
There are numerous reasons to conduct scientific research within protected areas, but research activities may also negatively impact organisms and habitats, and thus conflict with a protected area’s conservation goals. We developed a quantitative ecological decision-support framework that estimates these potential impacts so managers can weigh costs and benefits of proposed research projects and make informed permitting decisions. The framework generates quantitative estimates of the ecological impacts of the project and the cumulative impacts of the proposed project and all other projects in the protected area, and then compares the estimated cumulative impacts of all projects with policy-based acceptable impact thresholds. We use a series of simplified equations (models) to assess the impacts of proposed research to: a) the population of any targeted species, b) the major ecological assemblages that make up the community, and c) the physical habitat that supports protected area biota. These models consider both targeted and incidental impacts to the ecosystem and include consideration of the vulnerability of targeted species, assemblages, and habitats, based on their recovery time and ecological role. We parameterized the models for a wide variety of potential research activities that regularly occur in the study area using a combination of literature review and expert judgment with a precautionary approach to uncertainty. We also conducted sensitivity analyses to examine the relationships between model input parameters and estimated impacts to understand the dominant drivers of the ecological impact estimates. Although the decision-support framework was designed for and adopted by the California Department of Fish and Wildlife for permitting scientific studies in the state-wide network of marine protected areas (MPAs), the framework can readily be adapted for terrestrial and freshwater protected areas. PMID:29920527
Deployment of e-health services - a business model engineering strategy.
Kijl, Björn; Nieuwenhuis, Lambert J M; Huis in 't Veld, Rianne M H A; Hermens, Hermie J; Vollenbroek-Hutten, Miriam M R
2010-01-01
We designed a business model for deploying a myofeedback-based teletreatment service. An iterative and combined qualitative and quantitative action design approach was used for developing the business model and the related value network. Insights from surveys, desk research, expert interviews, workshops and quantitative modelling were combined to produce the first business model and then to refine it in three design cycles. The business model engineering strategy provided important insights which led to an improved, more viable and feasible business model and related value network design. Based on this experience, we conclude that the process of early stage business model engineering reduces risk and produces substantial savings in costs and resources related to service deployment.
Planning bioinformatics workflows using an expert system.
Chen, Xiaoling; Chang, Jeffrey T
2017-04-15
Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprised of a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. Availability: https://github.com/jefftc/changlab. Contact: jeffrey.t.chang@uth.tmc.edu. © The Author 2017. Published by Oxford University Press. All rights reserved.
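The backwards-chaining idea at the heart of BETSY can be illustrated in a few lines: starting from the requested data type, look for rules whose output matches the goal and recurse on their inputs until only data the user already has remain. The toy below is only a sketch of that search; the rule set and data-type names are invented, and BETSY's actual data model is far richer.

```python
# Toy backward-chaining workflow planner (illustrative only; invented rules).
RULES = [
    # (produced data type, required input data types, tool)
    ("normalized_counts", ["raw_counts"], "normalize"),
    ("raw_counts", ["aligned_reads"], "count_reads"),
    ("aligned_reads", ["fastq", "genome_index"], "align"),
    ("genome_index", ["genome_fasta"], "build_index"),
]

def plan(goal, available, steps=None):
    """Return an ordered list of tools that produces `goal` from the data types
    in `available`, or None if no rule chain reaches it."""
    steps = [] if steps is None else steps
    if goal in available:
        return steps
    for produced, inputs, tool in RULES:
        if produced != goal:
            continue
        candidate = list(steps)
        feasible = True
        for needed in inputs:
            result = plan(needed, available, candidate)
            if result is None:
                feasible = False
                break
            candidate = result
        if feasible:
            candidate.append(tool)
            return candidate
    return None

workflow = plan("normalized_counts", available={"fastq", "genome_fasta"})
print(workflow)   # ['build_index', 'align', 'count_reads', 'normalize']
```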
Lorjaroenphon, Yaowapa; Cadwallader, Keith R
2015-01-28
Thirty aroma-active components of a cola-flavored carbonated beverage were quantitated by stable isotope dilution assays, and their odor activity values (OAVs) were calculated. The OAV results revealed that 1,8-cineole, (R)-(-)-linalool, and octanal made the greatest contribution to the overall aroma of the cola. A cola aroma reconstitution model was constructed by adding 20 high-purity standards to an aqueous sucrose-phosphoric acid solution. The results of headspace solid-phase microextraction and sensory analyses were used to adjust the model to better match authentic cola. The rebalanced model was used as a complete model for the omission study. Sensory results indicated that omission of a group consisting of methyleugenol, (E)-cinnamaldehyde, eugenol, and (Z)- and (E)-isoeugenols differed from the complete model, while omission of the individual components of this group did not differ from the complete model. These results indicate that a balance of numerous odorants is responsible for the characteristic aroma of cola-flavored carbonated beverages.
Schaid, Daniel J
2010-01-01
Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
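As a concrete illustration of the kernel requirement stated above (any pairwise similarity function is admissible provided the resulting matrix is positive semidefinite), the sketch below builds a Gaussian kernel from simulated genotype-like data and checks the eigenvalues numerically. The data and bandwidth choice are assumptions for illustration only.

```python
import numpy as np

def gaussian_kernel(geno, bandwidth):
    """Similarity kernel K_ij = exp(-||x_i - x_j||^2 / bandwidth)."""
    sq_dists = ((geno[:, None, :] - geno[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dists / bandwidth)

# Simulated genotypes: 30 subjects x 200 SNPs coded 0/1/2 (illustration only).
rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=(30, 200)).astype(float)

K = gaussian_kernel(geno, bandwidth=geno.shape[1])
eigenvalues = np.linalg.eigvalsh(K)
# All eigenvalues should be >= 0 up to round-off, i.e. K is positive semidefinite.
print(f"smallest eigenvalue: {eigenvalues.min():.3e}")
```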
NASA Astrophysics Data System (ADS)
Wang, Yujie; Zhang, Xu; Liu, Chang; Pan, Rui; Chen, Zonghai
2018-06-01
The power capability and maximum charge and discharge energy are key indicators for energy management systems, which can help the energy storage devices work in a suitable area and prevent them from over-charging and over-discharging. In this work, a model based power and energy assessment approach is proposed for the lithium-ion battery and supercapacitor hybrid system. The model framework of the lithium-ion battery and supercapacitor hybrid system is developed based on the equivalent circuit model, and the model parameters are identified by regression method. Explicit analyses of the power capability and maximum charge and discharge energy prediction with multiple constraints are elaborated. Subsequently, the extended Kalman filter is employed for on-board power capability and maximum charge and discharge energy prediction to overcome estimation error caused by system disturbance and sensor noise. The charge and discharge power capability, and the maximum charge and discharge energy are quantitatively assessed under both the dynamic stress test and the urban dynamometer driving schedule. The maximum charge and discharge energy prediction of the lithium-ion battery and supercapacitor hybrid system with different time scales are explored and discussed.
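For the simplest series-resistance equivalent circuit, the voltage-limited discharge power capability reduces to P = V_min * (OCV - V_min) / R0, with the current limit as a second possible binding constraint. The snippet below shows only that single-cell, single-time-step calculation with invented parameters; it is not the authors' multi-constraint extended-Kalman-filter scheme for the hybrid system.

```python
def discharge_power_capability(ocv, r0, v_min, i_max):
    """Peak discharge power for a series-resistance cell model, limited by either
    the terminal-voltage floor or the current ceiling, whichever binds first."""
    i_voltage_limited = (ocv - v_min) / r0       # current that pulls the terminal to v_min
    if i_voltage_limited <= i_max:
        return v_min * i_voltage_limited         # voltage constraint binds
    return (ocv - r0 * i_max) * i_max            # current constraint binds

# Invented cell parameters: 3.6 V OCV, 2 mOhm resistance, 2.8 V cut-off, 200 A limit.
p = discharge_power_capability(ocv=3.6, r0=0.002, v_min=2.8, i_max=200.0)
print(f"discharge power capability: {p:.0f} W per cell")
```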
Single-case synthesis tools II: Comparing quantitative outcome measures.
Zimmerman, Kathleen N; Pustejovsky, James E; Ledford, Jennifer R; Barton, Erin E; Severini, Katherine E; Lloyd, Blair P
2018-03-07
Varying methods for evaluating the outcomes of single case research designs (SCD) are currently used in reviews and meta-analyses of interventions. Quantitative effect size measures are often presented alongside visual analysis conclusions. Six measures across two classes, overlap measures (percentage non-overlapping data, improvement rate difference, and Tau) and parametric within-case effect sizes (standardized mean difference and log response ratio [increasing and decreasing]), were compared to determine if the choice of synthesis method within and across classes impacts conclusions regarding effectiveness. The effectiveness of sensory-based interventions (SBI), a commonly used class of treatments for young children, was evaluated. Separately from evaluations of rigor and quality, the authors evaluated behavior change between baseline and SBI conditions. SBI were unlikely to result in positive behavior change across all measures except the improvement rate difference (IRD). However, subgroup analyses resulted in variable conclusions, indicating that the choice of measures for SCD meta-analyses can impact conclusions. Suggestions for using the log response ratio in SCD meta-analyses and considerations for understanding variability in SCD meta-analysis conclusions are discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.
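Two of the compared effect sizes have simple closed forms: the log response ratio is ln(mean of the intervention phase / mean of the baseline phase), and percentage non-overlapping data is the share of intervention-phase points beyond the most extreme baseline point (above the baseline maximum for a behaviour expected to increase). A minimal sketch with made-up single-case data:

```python
import numpy as np

def log_response_ratio(baseline, treatment):
    """LRR (increasing): positive values indicate higher levels during treatment."""
    return np.log(np.mean(treatment) / np.mean(baseline))

def percentage_nonoverlapping(baseline, treatment):
    """PND (increasing): share of treatment points above the baseline maximum."""
    return 100.0 * np.mean(np.asarray(treatment) > np.max(baseline))

baseline = [2, 3, 2, 4, 3]            # made-up session counts
treatment = [5, 6, 4, 7, 8, 6]
print(f"LRR = {log_response_ratio(baseline, treatment):.2f}, "
      f"PND = {percentage_nonoverlapping(baseline, treatment):.0f}%")
```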
Flightdeck Automation Problems (FLAP) Model for Safety Technology Portfolio Assessment
NASA Technical Reports Server (NTRS)
Ancel, Ersin; Shih, Ann T.
2014-01-01
NASA's Aviation Safety Program (AvSP) develops and advances methodologies and technologies to improve air transportation safety. The Safety Analysis and Integration Team (SAIT) conducts a safety technology portfolio assessment (PA) to analyze the program content, to examine the benefits and risks of products with respect to program goals, and to support programmatic decision making. The PA process includes systematic identification of current and future safety risks as well as tracking several quantitative and qualitative metrics to ensure the program goals are addressing prominent safety risks accurately and effectively. One of the metrics within the PA process involves using quantitative aviation safety models to gauge the impact of the safety products. This paper demonstrates the role of aviation safety modeling by providing model outputs and evaluating a sample of portfolio elements using the Flightdeck Automation Problems (FLAP) model. The model enables not only ranking of the quantitative relative risk reduction impact of all portfolio elements, but also highlighting the areas with high potential impact via sensitivity and gap analyses in support of the program office. Although the model outputs are preliminary and products are notional, the process shown in this paper is essential to a comprehensive PA of NASA's safety products in the current program and future programs/projects.
NASA Astrophysics Data System (ADS)
Lykkegaard, Eva; Ulriksen, Lars
2016-03-01
During the past 30 years, Eccles' comprehensive social-psychological Expectancy-Value Model of Motivated Behavioural Choices (EV-MBC model) has been proven suitable for studying educational choices related to Science, Technology, Engineering and/or Mathematics (STEM). The reflections of 15 students in their last year in upper-secondary school concerning their choice of tertiary education were examined using quantitative EV-MBC surveys and repeated qualitative interviews. This article presents the analyses of three cases in detail. The analytical focus was whether the factors indicated in the EV-MBC model could be used to detect significant changes in the students' educational choice processes. An important finding was that the quantitative EV-MBC surveys and the qualitative interviews gave quite different results concerning the students' considerations about the choice of tertiary education, and that significant changes in the students' reflections were not captured by the factors of the EV-MBC model. This questions the validity of the EV-MBC surveys. Moreover, the quantitative factors from the EV-MBC model did not sufficiently explain students' dynamical educational choice processes where students in parallel considered several different potential educational trajectories. We therefore call for further studies of the EV-MBC model's use in describing longitudinal choice processes and especially in investigating significant changes.
Bondi, Robert W; Igne, Benoît; Drennen, James K; Anderson, Carl A
2012-12-01
Near-infrared spectroscopy (NIRS) is a valuable tool in the pharmaceutical industry, presenting opportunities for online analyses to achieve real-time assessment of intermediates and finished dosage forms. The purpose of this work was to investigate the effect of experimental designs on prediction performance of quantitative models based on NIRS using a five-component formulation as a model system. The following experimental designs were evaluated: five-level, full factorial (5-L FF); three-level, full factorial (3-L FF); central composite; I-optimal; and D-optimal. The factors for all designs were acetaminophen content and the ratio of microcrystalline cellulose to lactose monohydrate. Other constituents included croscarmellose sodium and magnesium stearate (content remained constant). Partial least squares-based models were generated using data from individual experimental designs that related acetaminophen content to spectral data. The effect of each experimental design was evaluated by determining the statistical significance of the difference in bias and standard error of the prediction for that model's prediction performance. The calibration model derived from the I-optimal design had similar prediction performance as did the model derived from the 5-L FF design, despite containing 16 fewer design points. It also outperformed all other models estimated from designs with similar or fewer numbers of samples. This suggested that experimental-design selection for calibration-model development is critical, and optimum performance can be achieved with efficient experimental designs (i.e., optimal designs).
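The criteria used to compare the designs, prediction bias and the standard error of prediction (SEP), are straightforward to compute from reference and NIR-predicted values: bias is the mean residual and SEP is the bias-corrected standard deviation of the residuals. A generic sketch with placeholder numbers (not data from the study):

```python
import numpy as np

def bias_and_sep(reference, predicted):
    """Bias = mean residual; SEP = bias-corrected standard deviation of the residuals."""
    residuals = np.asarray(predicted) - np.asarray(reference)
    bias = residuals.mean()
    sep = np.sqrt(np.sum((residuals - bias) ** 2) / (residuals.size - 1))
    return bias, sep

reference = np.array([9.8, 10.1, 10.5, 9.9, 10.3, 10.0])   # placeholder reference values, % w/w
predicted = np.array([9.9, 10.0, 10.6, 10.1, 10.2, 10.1])  # placeholder NIR predictions
bias, sep = bias_and_sep(reference, predicted)
print(f"bias = {bias:.3f}, SEP = {sep:.3f}")
```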
Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Laksar, Sarbani; Tozzi, Angelo; Scorsetti, Marta; Cozzi, Luca
2015-10-31
To evaluate the performance of a broad scope model-based optimisation process for volumetric modulated arc therapy applied to esophageal cancer. A set of 70 previously treated patients from two different institutions was selected to train a model for the prediction of dose-volume constraints. The model was built with a broad-scope purpose, aiming to be effective for different dose prescriptions and tumour localisations. It was validated on three groups of patients from the same institution and from another clinic not providing patients for the training phase. Comparison of the automated plans was done against reference cases given by the clinically accepted plans. Quantitative improvements (statistically significant for the majority of the analysed dose-volume parameters) were observed between the benchmark and the test plans. Of 624 dose-volume objectives assessed for plan evaluation, in 21 cases (3.3 %) the reference plans failed to respect the constraints while the model-based plans succeeded. In only 3 cases (<0.5 %) did the reference plans pass the criteria while the model-based plans failed. In 5.3 % of the cases both groups of plans failed, and in the remaining cases both passed the tests. Plans were optimised using a broad scope knowledge-based model to determine the dose-volume constraints. The results showed dosimetric improvements when compared to the benchmark data. In particular, the plans optimised for patients from the third centre, which did not participate in the training, were of superior quality. The data suggest that the new engine is reliable and encourage its application in clinical practice.
Directivity analysis of meander-line-coil EMATs with a wholly analytical method.
Xie, Yuedong; Liu, Zenghua; Yin, Liyuan; Wu, Jiande; Deng, Peng; Yin, Wuliang
2017-01-01
This paper presents a simulation and experimental study of the radiation pattern of a meander-line-coil EMAT. A wholly analytical method, which involves the coupling of two models (an analytical EM model and an analytical UT model), has been developed to build EMAT models and analyse the Rayleigh waves' beam directivity. For a specific sensor configuration, Lorentz forces are calculated using the EM analytical method, which is adapted from the classic Deeds and Dodd solution. The calculated Lorentz force densities are imported into an analytical ultrasonic model as driving point sources, which produce the Rayleigh waves within a layered medium. The effect of the length of the meander-line-coil on the Rayleigh waves' beam directivity is analysed quantitatively and verified experimentally. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Schön, Peter; Prokop, Alexander; Naaim-Bouvet, Florence; Vionnet, Vincent; Guyomarc'h, Gilbert; Heiser, Micha; Nishimura, Kouichi
2015-04-01
Wind and the associated snow drift are dominant factors determining the snow distribution and accumulation in alpine areas, resulting in a high spatial variability of snow depth that is difficult to evaluate and quantify. The terrain-based parameter Sx characterizes the degree of shelter or exposure of a grid point provided by the upwind terrain, without the computational complexity of numerical wind field models. The parameter has been shown to qualitatively predict snow redistribution with good reproduction of spatial patterns. It does not, however, provide a quantitative estimate of changes in snow depths. The objective of our research was to introduce a new parameter to quantify changes in snow depths in our research area, the Col du Lac Blanc in the French Alps. The area is at an elevation of 2700 m and particularly suited for our study due to its consistently bi-modal wind directions. Our work focused on two pronounced, approximately 10 m high terrain breaks, and we worked with 1 m resolution digital snow surface models (DSM). The DSM and measured changes in snow depths were obtained with high-accuracy terrestrial laser scan (TLS) measurements. First we calculated the terrain-based parameter Sx on a digital snow surface model and correlated Sx with measured changes in snow depth (ΔSH). Results showed that ΔSH can be approximated by ΔSH_estimated = α * Sx, where α is a newly introduced parameter. The parameter α has been shown to be linked to the amount of snow deposited, which is influenced by the blowing snow flux. At the Col du Lac Blanc test site, blowing snow flux is recorded with snow particle counters (SPC). Snow flux is the number of drifting snow particles per unit time and area. Hence, the SPC provide data about the duration and intensity of drifting snow events, two important factors not accounted for by the terrain parameter Sx. We analyse how the SPC snow flux data can be used to estimate the magnitude of the new variable parameter α. To simulate the development of the snow surface as a function of Sx, SPC flux and time, we apply a simple cellular automata system. The system consists of raster cells that develop through discrete time steps according to a set of rules. The rules are based on the states of neighbouring cells. Our model assumes snow transport as a function of Sx gradients between neighbouring cells, and the cells evolve based on difference quotients between neighbouring cells. Our analyses and results are steps towards using the terrain-based parameter Sx, coupled with SPC data, to quantitatively estimate changes in snow depths at a high raster resolution of 1 m.
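The terrain parameter Sx of a grid cell is, in essence, the maximum upwind slope angle to any cell within a search distance along the wind direction, and the empirical relation above then gives ΔSH ≈ α * Sx. The sketch below computes Sx along a single upwind ray (the operational definition uses an angular window around the wind direction) and applies that relation; the DEM, wind convention, search distance and α are all invented for illustration.

```python
import numpy as np

def sx_at_cell(dem, cell, wind_dir_deg, dmax, cell_size=1.0):
    """Maximum upwind slope angle (degrees) from `cell` to any cell within dmax
    along a single upwind ray; 0 deg is assumed to mean wind from grid north."""
    rows, cols = dem.shape
    r0, c0 = cell
    theta = np.deg2rad(wind_dir_deg)
    dr, dc = -np.cos(theta), np.sin(theta)       # unit step towards the upwind side
    best = -np.inf
    for step in range(1, int(dmax / cell_size) + 1):
        r, c = int(round(r0 + dr * step)), int(round(c0 + dc * step))
        if not (0 <= r < rows and 0 <= c < cols):
            break
        slope = np.degrees(np.arctan2(dem[r, c] - dem[r0, c0], step * cell_size))
        best = max(best, slope)
    return best

rng = np.random.default_rng(2)
dem = np.cumsum(rng.normal(0.0, 0.2, size=(200, 200)), axis=0)   # invented 1 m snow surface
sx = sx_at_cell(dem, cell=(100, 100), wind_dir_deg=0.0, dmax=60.0)
alpha = 0.05   # invented scaling parameter (m of depth change per degree of Sx)
print(f"Sx = {sx:.1f} deg; estimated snow-depth change = {alpha * sx:+.2f} m")
```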
Yin, Anyue; Yamada, Akihiro; Stam, Wiro B; van Hasselt, Johan G C; van der Graaf, Piet H
2018-06-02
Development of combination therapies has received significant interest in recent years. Previously a two-receptor one-transducer (2R-1T) model was proposed to characterize drug interactions with two receptors that lead to the same phenotypic response through a common transducer pathway. We applied, for the first time, the 2R-1T model to characterize the interaction of noradrenaline and arginine-vasopressin on vasoconstriction, and performed inter-species scaling to humans using this mechanism-based model. Contractile data was obtained from in vitro rat small mesenteric arteries after exposure to single or combined challenges of noradrenaline and arginine-vasopressin with or without pre-treatment with the irreversible α-adrenoceptor antagonist, phenoxybenzamine. Data was analysed using the 2R-1T model to characterize the observed exposure-response relationships and drug-drug interaction. The model was then scaled to humans by accounting for differences in receptor density. With receptor affinities set to literature values, the 2R-1T model satisfactorily characterized the interaction between noradrenaline and arginine-vasopressin in rat small mesenteric arteries (relative standard error ≤ 20%), as well as the effect of phenoxybenzamine. Furthermore, after scaling the model to human vascular tissue, the model also adequately predicted the interaction between both agents on human renal arteries. The 2R-1T model can be of relevance to quantitatively characterize the interaction between two drugs that interact via different receptors and a common transducer pathway. Its mechanistic properties are valuable for scaling the model across species. This approach is therefore of significant value to rationally optimize novel combination treatments. This article is protected by copyright. All rights reserved.
Multidimensional quantitative analysis of mRNA expression within intact vertebrate embryos.
Trivedi, Vikas; Choi, Harry M T; Fraser, Scott E; Pierce, Niles A
2018-01-08
For decades, in situ hybridization methods have been essential tools for studies of vertebrate development and disease, as they enable qualitative analyses of mRNA expression in an anatomical context. Quantitative mRNA analyses typically sacrifice the anatomy, relying on embryo microdissection, dissociation, cell sorting and/or homogenization. Here, we eliminate the trade-off between quantitation and anatomical context, using quantitative in situ hybridization chain reaction (qHCR) to perform accurate and precise relative quantitation of mRNA expression with subcellular resolution within whole-mount vertebrate embryos. Gene expression can be queried in two directions: read-out from anatomical space to expression space reveals co-expression relationships in selected regions of the specimen; conversely, read-in from multidimensional expression space to anatomical space reveals those anatomical locations in which selected gene co-expression relationships occur. As we demonstrate by examining gene circuits underlying somitogenesis, quantitative read-out and read-in analyses provide the strengths of flow cytometry expression analyses, but by preserving subcellular anatomical context, they enable bi-directional queries that open a new era for in situ hybridization. © 2018. Published by The Company of Biologists Ltd.
Natal, Rodrigo A; Vassallo, José; Paiva, Geisilene R; Pelegati, Vitor B; Barbosa, Guilherme O; Mendonça, Guilherme R; Bondarik, Caroline; Derchain, Sophie F; Carvalho, Hernandes F; Lima, Carmen S; Cesar, Carlos L; Sarian, Luís Otávio
2018-04-01
Second-harmonic generation microscopy represents an important tool to evaluate extracellular matrix collagen structure, which undergoes changes during cancer progression. Thus, it is potentially relevant to assess breast cancer development. We propose the use of second-harmonic generation images of tumor stroma selected on hematoxylin and eosin-stained slides to evaluate the prognostic value of collagen fibers analyses in peri and intratumoral areas in patients diagnosed with invasive ductal breast carcinoma. Quantitative analyses of collagen parameters were performed using ImageJ software. These parameters presented significantly higher values in peri than in intratumoral areas. Higher intratumoral collagen uniformity was associated with high pathological stages and with the presence of axillary lymph node metastasis. In patients with immunohistochemistry-based luminal subtype, higher intratumoral collagen uniformity and quantity were independently associated with poorer relapse-free and overall survival, respectively. A multivariate response recursive partitioning model determined 12.857 and 11.894 as the best cut-offs for intratumoral collagen quantity and uniformity, respectively. These values have shown high sensitivity and specificity to differentiate distinct outcomes. Values of intratumoral collagen quantity and uniformity exceeding the cut-offs were strongly associated with poorer relapse-free and overall survival. Our findings support a promising prognostic value of quantitative evaluation of intratumoral collagen by second-harmonic generation imaging mainly in the luminal subtype breast cancer.
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-07
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate if heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs), cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices, and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regressions. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity. Copyright © 2016 Elsevier B.V. All rights reserved.
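The recommended power model treats the variance as var(y) ≈ a * signal^b, and the corresponding weights w = 1/var feed directly into a weighted least-squares calibration. A compact sketch with synthetic heteroskedastic data and arbitrarily chosen variance coefficients:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic calibration data: signal = 2 + 50*conc, with noise that grows with the signal.
conc = np.linspace(0.1, 10.0, 40)
true_signal = 2.0 + 50.0 * conc
signal = true_signal + rng.normal(0.0, 0.05 * true_signal**0.9)

# Power model of the variance: var ~ a * signal^b (a and b would normally be fitted
# from replicate measurements); the weights are the reciprocal variances.
a, b = 0.05**2, 1.8
weights = 1.0 / (a * signal**b)

# Weighted least squares for signal = c0 + c1 * conc.
X = np.column_stack([np.ones_like(conc), conc])
W = np.diag(weights)
coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ signal)
print(f"weighted fit: intercept = {coef[0]:.2f}, slope = {coef[1]:.2f}")
```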
Goeminne, Ludger J E; Gevaert, Kris; Clement, Lieven
2018-01-16
Label-free shotgun proteomics is routinely used to assess proteomes. However, extracting relevant information from the massive amounts of generated data remains difficult. This tutorial provides a strong foundation on analysis of quantitative proteomics data. We provide key statistical concepts that help researchers to design proteomics experiments and we showcase how to analyze quantitative proteomics data using our recent free and open-source R package MSqRob, which was developed to implement the peptide-level robust ridge regression method for relative protein quantification described by Goeminne et al. MSqRob can handle virtually any experimental proteomics design and outputs proteins ordered by statistical significance. Moreover, its graphical user interface and interactive diagnostic plots provide easy inspection and also detection of anomalies in the data and flaws in the data analysis, allowing deeper assessment of the validity of results and a critical review of the experimental design. Our tutorial discusses interactive preprocessing, data analysis and visualization of label-free MS-based quantitative proteomics experiments with simple and more complex designs. We provide well-documented scripts to run analyses in bash mode on GitHub, enabling the integration of MSqRob in automated pipelines on cluster environments (https://github.com/statOmics/MSqRob). The concepts outlined in this tutorial aid in designing better experiments and analyzing the resulting data more appropriately. The two case studies using the MSqRob graphical user interface will contribute to a wider adaptation of advanced peptide-based models, resulting in higher quality data analysis workflows and more reproducible results in the proteomics community. We also provide well-documented scripts for experienced users that aim at automating MSqRob on cluster environments. Copyright © 2017 Elsevier B.V. All rights reserved.
METHODS TO CLASSIFY ENVIRONMENTAL SAMPLES BASED ON MOLD ANALYSES BY QPCR
Quantitative PCR (QPCR) analysis of molds in indoor environmental samples produces highly accurate speciation and enumeration data. In a number of studies, eighty of the most common or potentially problematic indoor molds were identified and quantified in dust samples from homes...
Highway-railway at-grade crossing structures : rideability measurements and assessments.
DOT National Transportation Integrated Search
2009-05-01
This report provides two analyses for obtaining a quantitative means of rating the condition of railroad-highway at-grade crossings based on their measured roughness. Phase One of this report examined 11 crossings in the Lexington area by use of a la...
Meta-analysis is not an exact science: Call for guidance on quantitative synthesis decisions.
Haddaway, Neal R; Rytwinski, Trina
2018-05-01
Meta-analysis is becoming increasingly popular in the field of ecology and environmental management. It increases the effective power of analyses relative to single studies, and allows researchers to investigate effect modifiers and sources of heterogeneity that could not be easily examined within single studies. Many systematic reviewers will set out to conduct a meta-analysis as part of their synthesis, but meta-analysis requires a niche set of skills that are not widely held by the environmental research community. Each step in the process of carrying out a meta-analysis requires decisions that have both scientific and statistical implications. Reviewers are likely to be faced with a plethora of decisions over which effect size to choose, how to calculate variances, and how to build statistical models. Some of these decisions may be simple based on appropriateness of the options. At other times, reviewers must choose between equally valid approaches given the information available to them. This presents a significant problem when reviewers are attempting to conduct a reliable synthesis, such as a systematic review, where subjectivity is minimised and all decisions are documented and justified transparently. We propose three urgent, necessary developments within the evidence synthesis community. Firstly, we call on quantitative synthesis experts to improve guidance on how to prepare data for quantitative synthesis, providing explicit detail to support systematic reviewers. Secondly, we call on journal editors and evidence synthesis coordinating bodies (e.g. CEE) to ensure that quantitative synthesis methods are adequately reported in a transparent and repeatable manner in published systematic reviews. Finally, where faced with two or more broadly equally valid alternative methods or actions, reviewers should conduct multiple analyses, presenting all options, and discussing the implications of the different analytical approaches. We believe it is vital to tackle the possible subjectivity in quantitative synthesis described herein to ensure that the extensive efforts expended in producing systematic reviews and other evidence synthesis products is not wasted because of a lack of rigour or reliability in the final synthesis step. Copyright © 2018 Elsevier Ltd. All rights reserved.
Visualizing dispersive features in 2D image via minimum gradient method
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Yu; Wang, Yan; Shen, Zhi-Xun
2017-07-24
Here, we developed a minimum gradient based method to track ridge features in a 2D image plot, which is a typical data representation in many momentum resolved spectroscopy experiments. Through both analytic formulation and numerical simulation, we compare this new method with existing DC (distribution curve) based and higher order derivative based analyses. We find that the new method has good noise resilience and enhanced contrast especially for weak intensity features and meanwhile preserves the quantitative local maxima information from the raw image. An algorithm is proposed to extract 1D ridge dispersion from the 2D image plot, whose quantitative application to angle-resolved photoemission spectroscopy measurements on high temperature superconductors is demonstrated.
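The principle of the minimum gradient method can be sketched simply: along each column of the image (e.g. each momentum slice), the ridge position is taken where the magnitude of the intensity gradient is smallest in a window around the local intensity maximum, since the gradient vanishes on a ridge. The toy below applies this to a synthetic dispersing peak; it illustrates the idea only and is not the authors' algorithm.

```python
import numpy as np

# Synthetic "band map": a Lorentzian peak whose centre disperses with column index.
cols = np.arange(200)
rows = np.arange(300)[:, None]
centre = 150.0 - 0.4 * cols                       # true dispersion (row index vs column)
image = 1.0 / (1.0 + ((rows - centre) / 8.0) ** 2)
image += np.random.default_rng(4).normal(0.0, 0.02, image.shape)

# The ridge is where the vertical gradient vanishes, i.e. |dI/drow| is minimal in a
# window around the coarse intensity maximum of each column.
grad = np.abs(np.gradient(image, axis=0))
half_window = 10
extracted = np.empty(cols.size)
for j in cols:
    peak_guess = np.argmax(image[:, j])
    lo = max(peak_guess - half_window, 1)
    hi = min(peak_guess + half_window, image.shape[0] - 1)
    extracted[j] = lo + np.argmin(grad[lo:hi, j])

rms = np.sqrt(np.mean((extracted - centre) ** 2))
print(f"RMS deviation of the extracted ridge from the true dispersion: {rms:.2f} rows")
```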
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Frisk, Erik; Jung, Daniel; Krysander, Mattias; Pianese, Cesare
2017-07-01
The present paper proposes an advanced approach to fault detection and isolation for Polymer Electrolyte Membrane Fuel Cell (PEMFC) systems through a model-based diagnostic algorithm. The considered algorithm is developed upon a lumped parameter model simulating a whole PEMFC system oriented towards automotive applications. This model is inspired by other models available in the literature, with further attention to stack thermal dynamics and water management. The developed model is analysed by means of Structural Analysis, to identify the correlations among involved physical variables, defined equations and a set of faults which may occur in the system (related to both auxiliary component malfunctions and stack degradation phenomena). Residual generators are designed by means of Causal Computation analysis, and the maximum theoretical fault isolability, achievable with a minimal number of installed sensors, is investigated. The achieved results proved the capability of the algorithm to theoretically detect and isolate almost all faults using only stack voltage and temperature sensors, with significant advantages from an industrial point of view. The effective fault isolability is proved through fault simulations at a specific fault magnitude with an advanced residual evaluation technique, which considers quantitative residual deviations from normal conditions and achieves univocal fault isolation.
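The residual-evaluation step mentioned at the end can be illustrated generically: a residual is the difference between a measured signal and its model-based reconstruction, and a fault is declared when a filtered residual leaves a band calibrated on fault-free behaviour. The following is a deliberately simple moving-average test with an assumed noise level and threshold, not the structural-analysis-based residual generators of the paper.

```python
import numpy as np

def evaluate_residual(measured, predicted, window=20, threshold=4.0, noise_sd=0.05):
    """Flag samples where the moving-average residual leaves a +/- threshold*sigma band."""
    residual = np.asarray(measured) - np.asarray(predicted)
    filtered = np.convolve(residual, np.ones(window) / window, mode="same")
    sigma = noise_sd / np.sqrt(window)            # std of the averaged fault-free residual
    return np.abs(filtered) > threshold * sigma

# Synthetic cell-voltage trace: fault-free noise until sample 600, then a slow drift.
rng = np.random.default_rng(5)
predicted = np.full(1000, 0.68)                   # assumed model-predicted cell voltage, V
measured = predicted + rng.normal(0.0, 0.05, 1000)
measured[600:] -= np.linspace(0.0, 0.08, 400)     # injected degradation-like fault

alarm = evaluate_residual(measured, predicted)
print(f"first alarmed sample: {np.argmax(alarm)}")
```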
Yuan, Yinyin; Failmezger, Henrik; Rueda, Oscar M; Ali, H Raza; Gräf, Stefan; Chin, Suet-Feung; Schwarz, Roland F; Curtis, Christina; Dunning, Mark J; Bardwell, Helen; Johnson, Nicola; Doyle, Sarah; Turashvili, Gulisa; Provenzano, Elena; Aparicio, Sam; Caldas, Carlos; Markowetz, Florian
2012-10-24
Solid tumors are heterogeneous tissues composed of a mixture of cancer and normal cells, which complicates the interpretation of their molecular profiles. Furthermore, tissue architecture is generally not reflected in molecular assays, rendering this rich information underused. To address these challenges, we developed a computational approach based on standard hematoxylin and eosin-stained tissue sections and demonstrated its power in a discovery and validation cohort of 323 and 241 breast tumors, respectively. To deconvolute cellular heterogeneity and detect subtle genomic aberrations, we introduced an algorithm based on tumor cellularity to increase the comparability of copy number profiles between samples. We next devised a predictor for survival in estrogen receptor-negative breast cancer that integrated both image-based and gene expression analyses and significantly outperformed classifiers that use single data types, such as microarray expression signatures. Image processing also allowed us to describe and validate an independent prognostic factor based on quantitative analysis of spatial patterns between stromal cells, which are not detectable by molecular assays. Our quantitative, image-based method could benefit any large-scale cancer study by refining and complementing molecular assays of tumor samples.
Zhang, Xuan; Li, Wei; Yin, Bin; Chen, Weizhong; Kelly, Declan P; Wang, Xiaoxin; Zheng, Kaiyi; Du, Yiping
2013-10-01
Coffee is the most heavily consumed beverage in the world after water, and quality is a key consideration in its commercial trade. Therefore, caffeine content, which has a significant effect on the final quality of coffee products, needs to be determined rapidly and reliably by new analytical techniques. The main purpose of this work was to establish a powerful and practical analytical method based on near infrared spectroscopy (NIRS) and chemometrics for quantitative determination of caffeine content in roasted Arabica coffees. Ground coffee samples spanning a wide range of roast levels were analyzed by NIRS, while their caffeine contents were determined quantitatively by the most commonly used HPLC-UV method to provide reference values. Calibration models based on chemometric analyses of the NIR spectral data and the reference concentrations of the coffee samples were then developed. Partial least squares (PLS) regression was used to construct the models. Furthermore, diverse spectral pretreatment and variable selection techniques were applied in order to obtain robust and reliable reduced-spectrum regression models. Comparing the respective quality of the different models constructed, the combination of second derivative pretreatment and stability competitive adaptive reweighted sampling (SCARS) variable selection provided a notably improved regression model, with a root mean square error of cross validation (RMSECV) of 0.375 mg/g and a correlation coefficient (R) of 0.918 at a PLS factor of 7. An independent test set was used to assess the model, giving a root mean square error of prediction (RMSEP) of 0.378 mg/g, a mean relative error of 1.976% and a mean relative standard deviation (RSD) of 1.707%. Thus, the results provided by the high-quality calibration model demonstrated the feasibility of NIR spectroscopy for at-line prediction of the caffeine content of unknown roasted coffee samples, thanks to the short analysis time of a few seconds and the non-destructive nature of NIRS. Copyright © 2013 Elsevier B.V. All rights reserved.
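The model-building loop described (PLS regression against HPLC-UV reference values, with RMSECV used to select the number of latent variables) can be sketched with scikit-learn as below. The spectra and caffeine values are simulated stand-ins, not the coffee data, and no pretreatment or SCARS variable selection is included.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(6)

# Simulated stand-ins: 60 samples x 500 wavelengths, caffeine tied to one latent factor.
n_samples, n_wavelengths = 60, 500
latent = rng.normal(size=(n_samples, 3))
spectra = latent @ rng.normal(size=(3, n_wavelengths))
spectra += rng.normal(0.0, 0.05, spectra.shape)
caffeine = 15.0 + 2.0 * latent[:, 0] + rng.normal(0.0, 0.2, n_samples)   # mg/g, invented

# Pick the number of PLS latent variables by minimising RMSECV.
best = None
for n_lv in range(1, 11):
    pred = cross_val_predict(PLSRegression(n_components=n_lv), spectra, caffeine, cv=10)
    rmsecv = float(np.sqrt(np.mean((pred.ravel() - caffeine) ** 2)))
    if best is None or rmsecv < best[1]:
        best = (n_lv, rmsecv)

print(f"selected model: {best[0]} latent variables, RMSECV = {best[1]:.3f} mg/g")
```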
DRIFTSEL: an R package for detecting signals of natural selection in quantitative traits.
Karhunen, M; Merilä, J; Leinonen, T; Cano, J M; Ovaskainen, O
2013-07-01
Approaches and tools to differentiate between natural selection and genetic drift as causes of population differentiation are in frequent demand in evolutionary biology. Based on the approach of Ovaskainen et al. (2011), we have developed an R package (DRIFTSEL) that can be used to differentiate between stabilizing selection, diversifying selection and random genetic drift as causes of population differentiation in quantitative traits when neutral marker and quantitative genetic data are available. Apart from illustrating the use of this method and the interpretation of results using simulated data, we apply the package to data from three-spined sticklebacks (Gasterosteus aculeatus) to highlight its virtues. DRIFTSEL can also be used to perform standard quantitative genetic analyses in common-garden study designs. © 2013 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Liu, Cheng-Lin; Sun, Ze; Lu, Gui-Min; Yu, Jian-Guo
2018-05-01
The gas-evolving vertical electrode system is a typical electrochemical industrial reactor. Gas bubbles are released from the surfaces of the anode and affect the electrolyte flow pattern and even the cell performance. In the current work, the hydrodynamics induced by the air bubbles in a cold model was investigated experimentally and numerically. Particle image velocimetry and volumetric three-component velocimetry techniques were applied to experimentally visualize the hydrodynamic characteristics and flow fields in a two-dimensional (2D) plane and a three-dimensional (3D) space, respectively. Measurements were performed at different gas rates. Furthermore, the corresponding mathematical model was developed under identical conditions for qualitative and quantitative analyses. The experimental measurements were compared with the numerical results based on the mathematical model. The study of the time-averaged flow field, the three velocity components, the instantaneous velocity and the turbulent intensity indicates that the numerical model qualitatively reproduces the liquid motion. The 3D model predictions capture the flow behaviour more accurately than the 2D model in this study.
Application of Mixed-Methods Approaches to Higher Education and Intersectional Analyses
ERIC Educational Resources Information Center
Griffin, Kimberly A.; Museus, Samuel D.
2011-01-01
In this article, the authors discuss the utility of combining quantitative and qualitative methods in conducting intersectional analyses. First, they discuss some of the paradigmatic underpinnings of qualitative and quantitative research, and how these methods can be used in intersectional analyses. They then consider how paradigmatic pragmatism…
Role Of Social Networks In Resilience Of Naval Recruits: A Quantitative Analysis
2016-06-01
The study comprises 1,297 total surveys from a total of eight divisions of recruits at two different time periods. Quantitative analyses using surveys and network data examine the effects… (Thesis by Andrea M. Watling, June 2016; Thesis Advisor: Edward H. Powley.)
NASA Astrophysics Data System (ADS)
Ragno, Rino; Ballante, Flavio; Pirolli, Adele; Wickersham, Richard B.; Patsilinakos, Alexandros; Hesse, Stéphanie; Perspicace, Enrico; Kirsch, Gilbert
2015-08-01
Vascular endothelial growth factor receptor-2 (VEGFR-2) is a key element in angiogenesis, the process by which new blood vessels are formed, and is thus an important pharmaceutical target. Here, three-dimensional quantitative structure-activity relationship (3-D QSAR) analyses were used to build a quantitative screening and pharmacophore model of the VEGFR-2 receptor for the design of inhibitors with improved activities. Most of the available experimental data were used as a training set to derive eight optimized and fully cross-validated mono-probe models and one multi-probe quantitative model. Notable is the use of 262 molecules, aligned following both structure-based and ligand-based protocols, as an external test set, confirming the 3-D QSAR models' predictive capability and their usefulness in designing new VEGFR-2 inhibitors. From a survey of the literature, this is the first wide-ranging computational medicinal chemistry application to VEGFR-2 inhibitors.
de Croon, E M; Blonk, R; de Zwart, B C H; Frings-Dresen, M; Broersen, J
2002-01-01
Objectives: Building on Karasek's model of job demands and control (JD-C model), this study examined the effects of job control, quantitative workload, and two occupation specific job demands (physical demands and supervisor demands) on fatigue and job dissatisfaction in Dutch lorry drivers. Methods: From 1181 lorry drivers (adjusted response 63%) self reported information was gathered by questionnaire on the independent variables (job control, quantitative workload, physical demands, and supervisor demands) and the dependent variables (fatigue and job dissatisfaction). Stepwise multiple regression analyses were performed to examine the main effects of job demands and job control and the interaction effect between job control and job demands on fatigue and job dissatisfaction. Results: The inclusion of physical and supervisor demands in the JD-C model explained a significant amount of variance in fatigue (3%) and job dissatisfaction (7%) over and above job control and quantitative workload. Moreover, in accordance with Karasek's interaction hypothesis, job control buffered the positive relation between quantitative workload and job dissatisfaction. Conclusions: Despite methodological limitations, the results suggest that the inclusion of (occupation) specific job control and job demand measures is a fruitful elaboration of the JD-C model. The occupation specific JD-C model gives occupational stress researchers better insight into the relation between the psychosocial work environment and wellbeing. Moreover, the occupation specific JD-C model may give practitioners more concrete and useful information about risk factors in the psychosocial work environment. Therefore, this model may provide points of departure for effective stress reducing interventions at work. PMID:12040108
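The moderated-regression logic of the JD-C analysis (main effects of demands and control, then a demand × control interaction) can be sketched as follows; the variable names and data are hypothetical and this is not the study's analysis script.

```python
# Illustrative moderated regression: workload, control, and their interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "workload": rng.normal(0, 1, n),       # quantitative workload (standardised, synthetic)
    "control": rng.normal(0, 1, n),        # job control (standardised, synthetic)
})
# Synthetic outcome with a buffering interaction: control weakens the workload effect
df["dissatisfaction"] = (0.4 * df.workload - 0.3 * df.control
                         - 0.2 * df.workload * df.control + rng.normal(0, 1, n))

# Step 1: main effects only; Step 2: add the interaction term
step1 = smf.ols("dissatisfaction ~ workload + control", data=df).fit()
step2 = smf.ols("dissatisfaction ~ workload * control", data=df).fit()
print(f"R² step 1: {step1.rsquared:.3f}, R² step 2: {step2.rsquared:.3f}")
print(step2.params[["workload:control"]])  # a negative coefficient indicates buffering
```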
NASA Technical Reports Server (NTRS)
Manney, Gloria L.; Sabutis, Joseph L.; Pawson, Steven; Santee, Michelle L.; Naujokat, Barbara; Swinbank, Richard; Gelman, Melvyn E.; Ebisuzaki, Wesley; Atlas, Robert (Technical Monitor)
2001-01-01
A quantitative intercomparison of six meteorological analyses is presented for the cold 1999-2000 and 1995-1996 Arctic winters. The impacts of using different analyzed temperatures in calculations of polar stratospheric cloud (PSC) formation potential, and of different winds in idealized trajectory-based temperature histories, are substantial. The area with temperatures below a PSC formation threshold commonly varies by approximately 25% among the analyses, with differences of over 50% at some times/locations. Freie Universität Berlin analyses are often colder than the others at temperatures less than or approximately 205 K. Biases between analyses vary from year to year; in January 2000, U.K. Met Office analyses were coldest and National Centers for Environmental Prediction (NCEP) analyses warmest, while NCEP analyses were usually coldest in 1995-1996 and Met Office or NCEP/National Center for Atmospheric Research Reanalysis (REAN) analyses warmest. European Centre for Medium-Range Weather Forecasts (ECMWF) temperatures agreed better with other analyses in 1999-2000, after improvements in the assimilation model, than in 1995-1996. Case studies of temperature histories show substantial differences using Met Office, NCEP, REAN and NASA Data Assimilation Office (DAO) analyses. In January 2000 (when a large cold region was centered in the polar vortex), qualitatively similar results were obtained for all analyses. However, in February 2000 (a much warmer period) and in January and February 1996 (comparably cold to January 2000 but with large cold regions near the polar vortex edge), distributions of "potential PSC lifetimes" and total time spent below a PSC formation threshold varied significantly among the analyses. The largest peaks in "PSC lifetime" distributions in January 2000 were at 4-6 and 11-14 days, while in the 1996 periods they were at 1-3 days. Thus different meteorological conditions in comparably cold winters had a large impact on expectations for PSC formation and on the discrepancies between different meteorological analyses. Met Office, NCEP, REAN, ECMWF and DAO analyses are commonly used for trajectory calculations and in chemical transport models; the choice of which analysis to use can strongly influence the results of such studies.
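A minimal sketch of how one might estimate the fractional area of an analysis temperature field lying below a PSC formation threshold, with cosine-latitude weighting; the grid, temperatures and the 195 K threshold are illustrative assumptions, not the paper's exact procedure.

```python
# Area fraction of a lat-lon temperature grid below a PSC threshold, cos(lat)-weighted.
import numpy as np

rng = np.random.default_rng(2)
lats = np.linspace(50, 90, 41)                    # degrees north
lons = np.linspace(0, 357.5, 144)
T = 200.0 + 8.0 * rng.standard_normal((lats.size, lons.size))  # synthetic temperatures, K

T_NAT = 195.0                                     # approximate type-I PSC (NAT) threshold, K (assumed)
weights = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(T)

area_fraction = np.sum(weights[T < T_NAT]) / np.sum(weights)
print(f"Fraction of polar-cap area below {T_NAT} K: {area_fraction:.2%}")
```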
Metzger, Gregory J; Kalavagunta, Chaitanya; Spilseth, Benjamin; Bolan, Patrick J; Li, Xiufeng; Hutter, Diane; Nam, Jung W; Johnson, Andrew D; Henriksen, Jonathan C; Moench, Laura; Konety, Badrinath; Warlick, Christopher A; Schmechel, Stephen C; Koopmeiners, Joseph S
2016-06-01
Purpose: To develop multiparametric magnetic resonance (MR) imaging models to generate a quantitative, user-independent, voxel-wise composite biomarker score (CBS) for detection of prostate cancer by using coregistered correlative histopathologic results, and to compare performance of CBS-based detection with that of single quantitative MR imaging parameters. Materials and Methods: Institutional review board approval and informed consent were obtained. Patients with a diagnosis of prostate cancer underwent multiparametric MR imaging before surgery for treatment. All MR imaging voxels in the prostate were classified as cancer or noncancer on the basis of coregistered histopathologic data. Predictive models were developed by using more than one quantitative MR imaging parameter to generate CBS maps. Model development and evaluation of quantitative MR imaging parameters and CBS were performed separately for the peripheral zone and the whole gland. Model accuracy was evaluated by using the area under the receiver operating characteristic curve (AUC), and confidence intervals were calculated with the bootstrap procedure. The improvement in classification accuracy was evaluated by comparing the AUC for the multiparametric model and the single best-performing quantitative MR imaging parameter at the individual level and in aggregate. Results: Quantitative T2, apparent diffusion coefficient (ADC), volume transfer constant (Ktrans), reflux rate constant (kep), and area under the gadolinium concentration curve at 90 seconds (AUGC90) were significantly different between cancer and noncancer voxels (P < .001), with ADC showing the best accuracy (peripheral zone AUC, 0.82; whole gland AUC, 0.74). Four-parameter models demonstrated the best performance in both the peripheral zone (AUC, 0.85; P = .010 vs ADC alone) and whole gland (AUC, 0.77; P = .043 vs ADC alone). Individual-level analysis showed statistically significant improvement in AUC in 82% (23 of 28) and 71% (24 of 34) of patients with peripheral-zone and whole-gland models, respectively, compared with ADC alone. Model-based CBS maps for cancer detection showed improved visualization of cancer location and extent. Conclusion: Quantitative multiparametric MR imaging models developed by using coregistered correlative histopathologic data yielded a voxel-wise CBS that outperformed single quantitative MR imaging parameters for detection of prostate cancer, especially when the models were assessed at the individual level. © RSNA, 2016. Online supplemental material is available for this article.
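A hedged sketch of the general idea of a voxel-wise composite biomarker score: several quantitative MR parameters combined by logistic regression, with an AUC and a bootstrap confidence interval; the data are synthetic and this is not the authors' model.

```python
# Composite biomarker score from multiple quantitative MR parameters via logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 2000
X = rng.normal(0, 1, (n, 4))                       # columns: ADC, T2, Ktrans, kep (standardised, synthetic)
p_cancer = 1 / (1 + np.exp(-(-1.0 - 1.2 * X[:, 0] + 0.5 * X[:, 2] + 0.3 * X[:, 3])))
y = rng.binomial(1, p_cancer)                      # voxel-wise cancer / non-cancer labels

model = LogisticRegression().fit(X, y)
cbs = model.predict_proba(X)[:, 1]                 # composite biomarker score per voxel
print(f"AUC (CBS): {roc_auc_score(y, cbs):.3f} vs ADC alone: {roc_auc_score(y, -X[:, 0]):.3f}")

boot = [roc_auc_score(y[idx], cbs[idx])
        for idx in (rng.integers(0, n, n) for _ in range(500))]
print("Bootstrap 95% CI:", np.percentile(boot, [2.5, 97.5]).round(3))
```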
Zaritsky, Assaf; Natan, Sari; Horev, Judith; Hecht, Inbal; Wolf, Lior; Ben-Jacob, Eshel; Tsarfaty, Ilan
2011-01-01
Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image-segmentation of multi-cellular regions in bright field images demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate between multi-cellular and background regions for bright field images, which is based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameters method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. The proposed approach is generic and can be used alone or alongside traditional fluorescence single-cell processing to perform objective, accurate quantitative analyses for various biological applications. PMID:22096600
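The patch-classification idea can be sketched as below, using a single SVM on simple intensity features; this is a simplified stand-in, not the MultiCellSeg implementation, and it omits the classifier cascade and graph-cut refinement.

```python
# Patch-level classification of "cellular" vs background regions with an SVM (toy example).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)

def patch_features(img, size=16):
    """Split an image into non-overlapping patches and return simple intensity features."""
    h, w = img.shape
    feats, coords = [], []
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            p = img[i:i + size, j:j + size]
            feats.append([p.mean(), p.std(), np.abs(np.diff(p, axis=0)).mean()])
            coords.append((i, j))
    return np.array(feats), coords

# Synthetic bright-field-like image: textured "cellular" left half, flat background right half
img = rng.normal(0.5, 0.02, (128, 128))
img[:, :64] += 0.1 * rng.standard_normal((128, 64))

X, coords = patch_features(img)
y = np.array([1 if j < 64 else 0 for _, j in coords])   # ground-truth labels per patch

clf = SVC(kernel="rbf").fit(X, y)
print("Patch classification accuracy:", (clf.predict(X) == y).mean())
```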
Baeßler, Bettina; Schaarschmidt, Frank; Treutlein, Melanie; Stehning, Christian; Schnackenburg, Bernhard; Michels, Guido; Maintz, David; Bunck, Alexander C
2017-12-01
To re-evaluate a recently suggested approach for quantifying myocardial oedema and increased tissue inhomogeneity in myocarditis by T2-mapping. Cardiac magnetic resonance data of 99 patients with myocarditis were retrospectively analysed. Thirty healthy volunteers served as controls. T2-mapping data were acquired at 1.5 T using a gradient-spin-echo T2-mapping sequence. T2-maps were segmented according to the 16-segment AHA model. Segmental T2-values, segmental pixel standard deviation (SD) and the derived parameters maxT2, maxSD and madSD were analysed and compared to the established Lake Louise criteria (LLC). A re-estimation of logistic regression models revealed that all models containing an SD parameter were superior to any model containing global myocardial T2. Using a combined cut-off of 1.8 ms for madSD + 68 ms for maxT2 resulted in a diagnostic sensitivity of 75% and specificity of 80%, and showed a diagnostic performance similar to LLC in receiver-operating-curve analyses. Combining madSD, maxT2 and late gadolinium enhancement (LGE) in a model resulted in a superior diagnostic performance compared to LLC (sensitivity 93%, specificity 83%). The results show that the novel T2-mapping-derived parameters exhibit an additional diagnostic value over LGE, with the inherent potential to overcome the current limitations of T2-mapping. • A novel quantitative approach to myocardial oedema imaging in myocarditis was re-evaluated. • The T2-mapping-derived parameters maxT2 and madSD were compared to the traditional Lake Louise criteria. • Using maxT2 and madSD with dedicated cut-offs performs similarly to the Lake Louise criteria. • Adding maxT2 and madSD to LGE results in further increased diagnostic performance. • This novel approach has the potential to overcome the limitations of T2-mapping.
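A rough numerical sketch of segment-wise T2 summary statistics and the quoted combined cut-off; the exact definitions of maxT2 and madSD, and the use of a logical OR to combine the two thresholds, are assumptions on my part, and the pixel values are synthetic.

```python
# Segmental T2 summaries and a combined madSD/maxT2 decision rule (illustrative only).
import numpy as np

rng = np.random.default_rng(5)
n_segments = 16
# Synthetic per-segment pixel T2 values (ms) for one patient; segment 4 is made slightly abnormal
segments = [rng.normal(60 + 3 * (s == 4), 4 + 2 * (s == 4), 200) for s in range(n_segments)]

seg_mean_t2 = np.array([seg.mean() for seg in segments])
seg_pixel_sd = np.array([seg.std(ddof=1) for seg in segments])

max_t2 = seg_mean_t2.max()                                           # highest segmental mean T2
mad_sd = np.median(np.abs(seg_pixel_sd - np.median(seg_pixel_sd)))   # spread of segmental SDs (assumed definition)

suspected_myocarditis = (mad_sd > 1.8) or (max_t2 > 68.0)            # combination rule assumed (OR)
print(f"maxT2 = {max_t2:.1f} ms, madSD = {mad_sd:.2f} ms -> positive: {suspected_myocarditis}")
```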
Predictive value of EEG in postanoxic encephalopathy: A quantitative model-based approach.
Efthymiou, Evdokia; Renzel, Roland; Baumann, Christian R; Poryazova, Rositsa; Imbach, Lukas L
2017-10-01
The majority of comatose patients after cardiac arrest do not regain consciousness due to severe postanoxic encephalopathy. Early and accurate outcome prediction is therefore essential in determining further therapeutic interventions. The electroencephalogram is a standardized and commonly available tool used to estimate prognosis in postanoxic patients. The identification of pathological EEG patterns with poor prognosis relies however primarily on visual EEG scoring by experts. We introduced a model-based approach of EEG analysis (state space model) that allows for an objective and quantitative description of spectral EEG variability. We retrospectively analyzed standard EEG recordings in 83 comatose patients after cardiac arrest between 2005 and 2013 in the intensive care unit of the University Hospital Zürich. Neurological outcome was assessed one month after cardiac arrest using the Cerebral Performance Category. For a dynamic and quantitative EEG analysis, we implemented a model-based approach (state space analysis) to quantify EEG background variability independent from visual scoring of EEG epochs. Spectral variability was compared between groups and correlated with clinical outcome parameters and visual EEG patterns. Quantitative assessment of spectral EEG variability (state space velocity) revealed significant differences between patients with poor and good outcome after cardiac arrest: Lower mean velocity in temporal electrodes (T4 and T5) was significantly associated with poor prognostic outcome (p<0.005) and correlated with independently identified visual EEG patterns such as generalized periodic discharges (p<0.02). Receiver operating characteristic (ROC) analysis confirmed the predictive value of lower state space velocity for poor clinical outcome after cardiac arrest (AUC 80.8, 70% sensitivity, 15% false positive rate). Model-based quantitative EEG analysis (state space analysis) provides a novel, complementary marker for prognosis in postanoxic encephalopathy. Copyright © 2017 Elsevier B.V. All rights reserved.
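One loosely analogous way to quantify spectral EEG background variability as a trajectory "velocity" is sketched below; it is an assumption-laden stand-in for the published state space model, with synthetic data.

```python
# Mean step size of the trajectory traced by consecutive log-spectra of one EEG channel.
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(6)
fs = 250
eeg = rng.standard_normal(fs * 300)                  # 5 minutes of synthetic single-channel EEG

f, t, Sxx = spectrogram(eeg, fs=fs, nperseg=fs * 2, noverlap=fs)
logS = np.log(Sxx + 1e-12)                           # log-power in each frequency bin over time

# "Velocity": Euclidean distance between consecutive log-spectra, averaged over time
velocity = np.linalg.norm(np.diff(logS, axis=1), axis=0).mean()
print(f"Mean spectral step size (arbitrary units): {velocity:.2f}")
```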
Kennedy, Jacob J.; Whiteaker, Jeffrey R.; Schoenherr, Regine M.; Yan, Ping; Allison, Kimberly; Shipley, Melissa; Lerch, Melissa; Hoofnagle, Andrew N.; Baird, Geoffrey Stuart; Paulovich, Amanda G.
2016-01-01
Despite a clinical, economic, and regulatory imperative to develop companion diagnostics, precious few new biomarkers have been successfully translated into clinical use, due in part to inadequate protein assay technologies to support large-scale testing of hundreds of candidate biomarkers in formalin-fixed paraffin embedded (FFPE) tissues. While the feasibility of using targeted, multiple reaction monitoring-mass spectrometry (MRM-MS) for quantitative analyses of FFPE tissues has been demonstrated, protocols have not been systematically optimized for robust quantification across a large number of analytes, nor has the performance of peptide immuno-MRM been evaluated. To address this gap, we used a test battery approach coupled to MRM-MS with the addition of stable isotope labeled standard peptides (targeting 512 analytes) to quantitatively evaluate the performance of three extraction protocols in combination with three trypsin digestion protocols (i.e. 9 processes). A process based on RapiGest buffer extraction and urea-based digestion was identified to enable similar quantitation results from FFPE and frozen tissues. Using the optimized protocols for MRM-based analysis of FFPE tissues, median precision was 11.4% (across 249 analytes). There was excellent correlation between measurements made on matched FFPE and frozen tissues, both for direct MRM analysis (R2 = 0.94) and immuno-MRM (R2 = 0.89). The optimized process enables highly reproducible, multiplex, standardizable, quantitative MRM in archival tissue specimens. PMID:27462933
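A small illustrative calculation of the kind of precision summary quoted above (median per-analyte coefficient of variation); the replicate data are synthetic, not the study's measurements.

```python
# Median per-analyte CV across replicate injections (toy data).
import numpy as np

rng = np.random.default_rng(7)
n_analytes, n_reps = 249, 5
# Synthetic peak-area ratios (light/heavy) per analyte and replicate
areas = rng.lognormal(mean=0.0, sigma=0.1, size=(n_analytes, n_reps))

cv_percent = areas.std(axis=1, ddof=1) / areas.mean(axis=1) * 100
print(f"Median CV across {n_analytes} analytes: {np.median(cv_percent):.1f}%")
```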
USDA-ARS's Scientific Manuscript database
As a first step towards the genetic mapping of quantitative trait loci (QTL) affecting stress response variation in rainbow trout, we performed complex segregation analyses (CSA) fitting mixed inheritance models of plasma cortisol using Bayesian methods in large full-sib families of rainbow trout. ...
Innovation from a Computational Social Science Perspective: Analyses and Models
ERIC Educational Resources Information Center
Casstevens, Randy M.
2013-01-01
Innovation processes are critical for preserving and improving our standard of living. While innovation has been studied by many disciplines, the focus has been on qualitative measures that are specific to a single technological domain. I adopt a quantitative approach to investigate underlying regularities that generalize across multiple domains.…
Chen, Y; Mao, J; Lin, J; Yu, H; Peters, S; Shebley, M
2016-01-01
This subteam under the Drug Metabolism Leadership Group (Innovation and Quality Consortium) investigated the quantitative role of circulating inhibitory metabolites in drug–drug interactions using physiologically based pharmacokinetic (PBPK) modeling. Three drugs with major circulating inhibitory metabolites (amiodarone, gemfibrozil, and sertraline) were systematically evaluated in addition to the literature review of recent examples. The application of PBPK modeling in drug interactions by inhibitory parent–metabolite pairs is described and guidance on strategic application is provided. PMID:27642087
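For orientation only, a basic static competitive-inhibition estimate that adds a circulating metabolite's contribution is sketched below; this simple ratio is not the PBPK approach used in the paper, and all concentrations, Ki values and the fm value are invented.

```python
# Static competitive-inhibition estimate of victim-drug AUC change (parent + metabolite).
I_parent, Ki_parent = 0.5, 2.0      # unbound inhibitor concentration and Ki (µM), hypothetical
I_metab, Ki_metab = 0.8, 1.0        # circulating metabolite concentration and Ki (µM), hypothetical
fm = 0.7                            # fraction of victim clearance via the inhibited enzyme, hypothetical

inhibition_term = 1 + I_parent / Ki_parent + I_metab / Ki_metab
auc_ratio = 1 / (fm / inhibition_term + (1 - fm))
print(f"Predicted AUC ratio (victim drug): {auc_ratio:.2f}")
```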
Whole-brain ex-vivo quantitative MRI of the cuprizone mouse model
Hurley, Samuel A.; Vernon, Anthony C.; Torres, Joel; Dell’Acqua, Flavio; Williams, Steve C.R.; Cash, Diana
2016-01-01
Myelin is a critical component of the nervous system and a major contributor to contrast in Magnetic Resonance (MR) images. However, the precise contribution of myelination to multiple MR modalities is still under debate. The cuprizone mouse is a well-established model of demyelination that has been used in several MR studies, but these have often imaged only a single slice and analysed a small region of interest in the corpus callosum. We imaged and analyzed the whole brain of the cuprizone mouse ex-vivo using high-resolution quantitative MR methods (multi-component relaxometry, Diffusion Tensor Imaging (DTI) and morphometry) and found changes in multiple regions, including the corpus callosum, cerebellum, thalamus and hippocampus. The presence of inflammation, confirmed with histology, presents difficulties in isolating the sensitivity and specificity of these MR methods to demyelination using this model. PMID:27833805
NASA Astrophysics Data System (ADS)
Elez, Javier; Silva, Pablo G.; Huerta, Pedro; Perucha, M. Ángeles; Civis, Jorge; Roquero, Elvira; Rodríguez-Pascua, Miguel A.; Bardají, Teresa; Giner-Robles, Jorge L.; Martínez-Graña, Antonio
2016-12-01
The Malaga basin contains an important geological record documenting the complex paleogeographic evolution of the Gibraltar Arc before, during and after the closure and desiccation of the Mediterranean Sea triggered by the "Messinian Salinity crisis" (MSC). Proxy paleo-elevation data, estimated from the stratigraphic and geomorphological records, allow the building of quantitative paleogeoid, paleotopographic and paleogeographic models for the three main paleogeographic stages: pre-MSC (Tortonian-early Messinian), syn-MSC (late Messinian) and post-MSC (early Pliocene). The methodological workflow combines classical contouring procedures used in geology and isobase map models from geomorphometric analyses and proxy data overprinted on present Digital Terrain Models. The resulting terrain quantitative models have been arranged, managed and computed in a GIS environment. The computed terrain models enable the exploration of past landscapes usually beyond the reach of classical geomorphological analyses and strongly improve the paleogeographic and paleotopographic knowledge of the study area. The resulting models suggest the occurrence of a set of uplifted littoral erosive and paleokarstic landforms that evolved during pre-MSC times. These uplifted landform assemblages can explain the origin of key elements of the present landscape, such as the Torcal de Antequera and the large amount of mogote-like relict hills present in the zone, in terms of ancient uplifted tropical islands. The most prominent landform is the extensive erosional platform dominating the Betic frontal zone that represents the relic Atlantic wave cut platform elaborated during late-Tortonian to early Messinian times. The amount of uplift derived from paleogeoid models suggests that the area rose by about 340 m during the MSC. This points to isostatic uplift triggered by differential erosional unloading (towards the Mediterranean) as the main factor controlling landscape evolution in the area during and after the MSC. Former littoral landscapes in the old emergent axis of the Gibraltar Arc were uplifted to form the main water-divide of the present Betic Cordillera in the zone.
A relative quantitative assessment of myocardial perfusion by first-pass technique: animal study
NASA Astrophysics Data System (ADS)
Chen, Jun; Zhang, Zhang; Yu, Xuefang; Zhou, Kenneth J.
2015-03-01
The purpose of this study is to quantitatively assess myocardial perfusion by the first-pass technique in a swine model. Numerous techniques based on the analysis of Computed Tomography (CT) Hounsfield Unit (HU) density have emerged. Although these methods have been proposed as able to assess haemodynamically significant coronary artery stenosis, their limitations are recognised, and there is still a need to develop new techniques. Experiments were performed on five (5) closed-chest swine. Balloon catheters were placed into the coronary artery to simulate different degrees of luminal stenosis. Myocardial Blood Flow (MBF) was measured using the color microsphere technique. Fractional Flow Reserve (FFR) was measured using a pressure wire. CT examinations were performed twice during the first-pass phase under adenosine-stress conditions. CT HU Density (HUDCT) and CT HU Density Ratio (HUDRCT) were calculated using the acquired CT images. Our study shows that HUDRCT correlates well (y = 0.07245 + 0.09963x, r² = 0.898) with MBF and FFR. In receiver operating characteristic (ROC) curve analyses, HUDRCT provides excellent diagnostic performance for the detection of significant ischemia during adenosine stress as defined by FFR, with an area under the curve (AUC) of 0.927. HUDRCT has the potential to be developed as a useful indicator for the quantitative assessment of myocardial perfusion.
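An illustrative sketch of the type of analysis described (a linear relation between an HU-density-ratio index and FFR, plus an ROC AUC for FFR-defined ischaemia); the numbers are synthetic and the 0.8 FFR cut-off is a common convention, not necessarily the study's.

```python
# Relate a synthetic HU density ratio to FFR and summarise discrimination with an ROC AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
n = 40
ffr = rng.uniform(0.4, 1.0, n)                       # pressure-wire FFR values (synthetic)
hudr = 0.072 + 0.100 * ffr + rng.normal(0, 0.01, n)  # synthetic HU density ratio

# Simple least-squares fit, analogous to the reported regression y = a + b*x
b, a = np.polyfit(ffr, hudr, 1)
r2 = np.corrcoef(ffr, hudr)[0, 1] ** 2
print(f"HUDR = {a:.3f} + {b:.3f}*FFR, r^2 = {r2:.3f}")

ischaemia = (ffr < 0.8).astype(int)                  # common FFR cut-off for significant ischaemia (assumed)
print(f"AUC for detecting ischaemia from HUDR: {roc_auc_score(ischaemia, -hudr):.2f}")
```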
Fielding-Miller, Rebecca; Dunkle, Kristin L; Cooper, Hannah L F; Windle, Michael; Hadley, Craig
2016-01-01
Transactional sex is associated with increased risk of HIV and gender based violence in southern Africa and around the world. However the typical quantitative operationalization, "the exchange of gifts or money for sex," can be at odds with a wide array of relationship types and motivations described in qualitative explorations. To build on the strengths of both qualitative and quantitative research streams, we used cultural consensus models to identify distinct models of transactional sex in Swaziland. The process allowed us to build and validate emic scales of transactional sex, while identifying key informants for qualitative interviews within each model to contextualize women's experiences and risk perceptions. We used logistic and multinomial logistic regression models to measure associations with condom use and social status outcomes. Fieldwork was conducted between November 2013 and December 2014 in the Hhohho and Manzini regions. We identified three distinct models of transactional sex in Swaziland based on 124 Swazi women's emic valuation of what they hoped to receive in exchange for sex with their partners. In a clinic-based survey (n = 406), consensus model scales were more sensitive to condom use than the etic definition. Model consonance had distinct effects on social status for the three different models. Transactional sex is better measured as an emic spectrum of expectations within a relationship, rather than an etic binary relationship type. Cultural consensus models allowed us to blend qualitative and quantitative approaches to create an emicly valid quantitative scale grounded in qualitative context. Copyright © 2015 Elsevier Ltd. All rights reserved.
Zhang, Xin-Ke; Lan, Yi-Bin; Zhu, Bao-Qing; Xiang, Xiao-Feng; Duan, Chang-Qing; Shi, Ying
2018-01-01
Monosaccharides, organic acids and amino acids are important flavour-related components in wines. The aim of this article is to develop and validate a method that can simultaneously analyse these compounds in wine based on silylation derivatisation and gas chromatography-mass spectrometry (GC-MS), apply this method to the investigation of the changes of these compounds, and speculate upon their related influences on Cabernet Sauvignon wine flavour during wine ageing. This work presents a new approach for wine analysis and provides more information concerning red wine ageing. The method can simultaneously and quantitatively analyse 2 monosaccharides, 8 organic acids and 13 amino acids in wine. A validation experiment showed good linearity, sensitivity, reproducibility and recovery. Multiple derivatives of five amino acids were found, but their effects on quantitative analysis were negligible, except for methionine. The evolution pattern of each category was different, and we speculate that the corresponding mechanisms, involving microorganism activities, physical interactions and chemical reactions, are closely related to red wine flavours during ageing. Simultaneous quantitative analysis of monosaccharides, organic acids and amino acids in wine is feasible and reliable, and this method has extensive application prospects. © 2017 Society of Chemical Industry.
Nguyen, Thai Phuong; Chang, Wei-Chang; Lai, Yen-Chih; Hsiao, Ta-Chih; Tsai, De-Hao
2017-10-01
In this work, we develop an aerosol-based, time-resolved ion mobility-coupled mass characterization method to investigate colloidal assembly of graphene oxide (GO)-silver nanoparticle (AgNP) hybrid nanostructure on a quantitative basis. Transmission electron microscopy (TEM) and zeta potential (ZP) analysis were used to provide visual information and elemental-based particle size distributions, respectively. Results clearly show a successful controlled assembly of GO-AgNP by electrostatic-directed heterogeneous aggregation between GO and bovine serum albumin (BSA)-functionalized AgNP under an acidic environment. Additionally, physical size, mass, and conformation (i.e., number of AgNP per nanohybrid) of GO-AgNP were shown to be proportional to the number concentration ratio of AgNP to GO (R) and the selected electrical mobility diameter. An analysis of colloidal stability of GO-AgNP indicates that the stability increased with its absolute ZP, which was dependent on R and environmental pH. The work presented here provides a proof of concept for systematically synthesizing hybrid colloidal nanomaterials through the tuning of surface chemistry in aqueous phase with the ability in quantitative characterization. Graphical Abstract Colloidal assembly of graphene oxide-silver nanoparticle hybrids characterized by aerosol differential mobility-coupled mass analyses.
Gokduman, Kurtulus; Avsaroglu, M Dilek; Cakiris, Aris; Ustek, Duran; Gurakan, G Candan
2016-03-01
The aim of the current study was to develop a new, rapid, sensitive and quantitative Salmonella detection method using a Real-Time PCR technique based on an inexpensive, easy to produce, convenient and standardized recombinant plasmid positive control. To achieve this, two recombinant plasmids were constructed as reference molecules by cloning the two most commonly used Salmonella-specific target gene regions, invA and ttrRSBC. The more rapid detection enabled by the developed method (21 h) compared to the traditional culture method (90 h) allows the quantitative evaluation of Salmonella (quantification limits of 10^1 CFU/ml and 10^0 CFU/ml for the invA target and the ttrRSBC target, respectively), as illustrated using milk samples. Three advantages illustrated by the current study demonstrate the potential of the newly developed method to be used in routine analyses in the medical, veterinary, food and water/environmental sectors: (I) the method provides fast analyses, including the simultaneous detection and determination of correct pathogen counts; (II) the method is applicable to challenging samples, such as milk; (III) the method's positive controls (recombinant plasmids) are reproducible in large quantities without the need to construct new calibration curves. Copyright © 2016 Elsevier B.V. All rights reserved.
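A minimal sketch of quantification against a plasmid standard curve, assuming the usual linear Cq versus log10(copies) relationship; the dilution-series values and the helper function are hypothetical, not taken from the paper.

```python
# qPCR standard-curve quantification from a plasmid dilution series (toy numbers).
import numpy as np

# Hypothetical plasmid standard curve: Cq measured for 10-fold dilutions
log10_copies = np.array([6, 5, 4, 3, 2, 1], dtype=float)
cq = np.array([15.1, 18.5, 21.9, 25.3, 28.8, 32.2])

slope, intercept = np.polyfit(log10_copies, cq, 1)
efficiency = 10 ** (-1 / slope) - 1
print(f"Slope {slope:.2f}, efficiency {efficiency:.1%}")

def quantify(sample_cq):
    """Convert a sample Cq to estimated target copies using the standard curve."""
    return 10 ** ((sample_cq - intercept) / slope)

print(f"Sample at Cq 27.0 ≈ {quantify(27.0):.0f} copies per reaction")
```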
Wu, Zujian; Pang, Wei; Coghill, George M
2015-01-01
Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework makes it feasible to learn the relationships between biochemical reactants qualitatively and to make the model replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned from the proposed framework, biologists can further perform experimental studies in the wet laboratory. In this way, natural biochemical systems can be better understood.
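As a conceptual illustration of the quantitative step (fitting kinetic rates by simulated annealing once a model structure is fixed), the toy example below fits the two rate constants of a reversible reaction; it is my own example, not the paper's framework.

```python
# Fit kinetic rates of a toy reversible reaction A <-> B to target time courses by annealing.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import dual_annealing

def simulate(rates, t_eval):
    k_f, k_r = rates
    rhs = lambda t, y: [-k_f * y[0] + k_r * y[1], k_f * y[0] - k_r * y[1]]
    sol = solve_ivp(rhs, (0, t_eval[-1]), [1.0, 0.0], t_eval=t_eval)
    return sol.y

t = np.linspace(0, 10, 50)
target = simulate((0.8, 0.3), t)                      # "observed" behaviour of the target system

def cost(rates):
    return np.sum((simulate(rates, t) - target) ** 2)

result = dual_annealing(cost, bounds=[(0.01, 5.0), (0.01, 5.0)], seed=0, maxiter=200)
print("Recovered rates:", result.x.round(2))
```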
NASA Astrophysics Data System (ADS)
Yingst, R. A.; Head, J. W.
1996-03-01
Lunar volcanic history has been examined in light of geomorphological and stratigraphic constraints placed upon the surface features. Compositional and petrological analyses have provided models for the conditions of mare parent magma generation. The connection between lunar magma source regions and volcanic surface features remains unclear, however, both conceptually and quantitatively with respect to our understanding of transport mechanisms. It has been suggested that mare emplacement was controlled by propagation of dikes driven by the overpressurization of diapir-like source regions stalled below the cooling lunar highland crust. Recent analyses of the characteristics of lava ponds in the South Pole/Aitken and Orientale/Mendel-Rydberg basins based on Clementine, Lunar Orbiter and Zond data have provided evidence that supports this theory. In this contribution we report on an analysis of the areas, volumes, modes of occurrence and crustal thicknesses for mare deposits in the Marginis and Smythii basins, and investigate implications for magma transport mechanisms.
Shinkins, Bethany; Yang, Yaling; Abel, Lucy; Fanshawe, Thomas R
2017-04-14
Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health-economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy. We assessed all UK NIHR HTA reports published May 2009-July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: 1) what evidence aside from test accuracy was searched for and synthesised, 2) which methods were used to synthesise test accuracy evidence and how did the results inform the economic model, 3) how/whether threshold effects were explored, 4) how the potential dependency between multiple tests in a pathway was accounted for, and 5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated. The bivariate or HSROC model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling was obtained from meta-analyses completely in four reports, partially in fourteen reports and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters but most of those that used multiple tests did not allow for dependence between test results. 7/22 tests were potentially suitable for primary care but the majority found limited evidence on test accuracy in primary care settings. The uptake of appropriate meta-analysis methods for synthesising evidence on diagnostic test accuracy in UK NIHR HTAs has improved in recent years. Future research should focus on other evidence requirements for cost-effectiveness assessment, threshold effects for quantitative tests and the impact of multiple diagnostic tests.
Cardiff, Robert D; Hubbard, Neil E; Engelberg, Jesse A; Munn, Robert J; Miller, Claramae H; Walls, Judith E; Chen, Jane Q; Velásquez-García, Héctor A; Galvez, Jose J; Bell, Katie J; Beckett, Laurel A; Li, Yue-Ju; Borowsky, Alexander D
2013-01-01
Quantitative Image Analysis (QIA) of digitized whole slide images for morphometric parameters and immunohistochemistry of breast cancer antigens was used to evaluate the technical reproducibility, biological variability, and intratumoral heterogeneity in three transplantable mouse mammary tumor models of human breast cancer. The relative preservation of structure and immunogenicity of the three mouse models and three human breast cancers was also compared when fixed with representatives of four distinct classes of fixatives. The three mouse mammary tumor cell models were an ER+/PR+ model (SSM2), a Her2+ model (NDL), and a triple negative model (MET1). The four breast cancer antigens were ER, PR, Her2, and Ki67. The fixatives included examples of (1) strong cross-linkers, (2) weak cross-linkers, (3) coagulants, and (4) combination fixatives. Each parameter was quantitatively analyzed using modified Aperio Technologies ImageScope algorithms. Careful pre-analytical adjustments to the algorithms were required to provide accurate results. The QIA permitted rigorous statistical analysis of results and grading by rank order. The analyses suggested excellent technical reproducibility and confirmed biological heterogeneity within each tumor. The strong cross-linker fixatives, such as formalin, consistently ranked higher than the weak cross-linker, coagulant and combination fixatives in both the morphometric and immunohistochemical parameters. PMID:23399853
The Role of Recurrence Plots in Characterizing the Output-Unemployment Relationship: An Analysis
Caraiani, Petre; Haven, Emmanuel
2013-01-01
We analyse the output-unemployment relationship using an approach based on cross-recurrence plots and quantitative recurrence analysis. We use post-war period quarterly U.S. data. The results obtained show the emergence of a complex and interesting relationship. PMID:23460814
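A bare-bones sketch of a cross-recurrence plot and its simplest quantification statistic, the recurrence rate; real analyses would use delay embedding and tuned thresholds, and the series below are synthetic stand-ins for output and unemployment.

```python
# Cross-recurrence matrix and recurrence rate for two synthetic macroeconomic series.
import numpy as np

rng = np.random.default_rng(9)
n = 200
output = np.cumsum(rng.normal(0, 1, n))
unemployment = -0.6 * output + np.cumsum(rng.normal(0, 1, n))

# Standardise and build the cross-recurrence matrix with a fixed distance threshold
x = (output - output.mean()) / output.std()
y = (unemployment - unemployment.mean()) / unemployment.std()
dist = np.abs(x[:, None] - y[None, :])
eps = 0.5
crp = (dist < eps).astype(int)

recurrence_rate = crp.mean()
print(f"Cross-recurrence rate at eps={eps}: {recurrence_rate:.3f}")
```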
Science and Engineering Indicators 2010
ERIC Educational Resources Information Center
National Science Foundation, 2010
2010-01-01
The Science Indicators series was designed to provide a broad base of quantitative information about U.S. science, engineering, and technology for use by policymakers, researchers, and the general public. "Science and Engineering Indicators 2010" contains analyses of key aspects of the scope, quality, and vitality of the Nation's science…
NASA Technical Reports Server (NTRS)
Case, Jonathan L.
2014-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center has been running a real-time version of the Land Information System (LIS) since summer 2010 (hereafter, SPoRTLIS). The real-time SPoRT-LIS runs the Noah land surface model (LSM) in an offline capacity apart from a numerical weather prediction model, using input atmospheric and precipitation analyses (i.e., "forcings") to drive the Noah LSM integration at 3-km resolution. Its objectives are to (1) produce local-scale information about the soil state for NOAA/National Weather Service (NWS) situational awareness applications such as drought monitoring and assessing flood potential, and (2) provide land surface initialization fields for local modeling initiatives. The current domain extent has been limited by the input atmospheric analyses that drive the Noah LSM integration within SPoRT-LIS, specifically the National Centers for Environmental Prediction (NCEP) Stage IV precipitation analyses. Due to the nature of the geographical edges of the Stage IV precipitation grid and its limitations in the western U.S., the SPoRT-LIS was originally confined to a domain fully nested within the Stage IV grid, over the southeastern half of the Conterminous United States (CONUS). In order to expand the real-time SPoRT-LIS to a full CONUS domain, alternative precipitation forcing datasets were explored in year-long, offline comparison runs of the Noah LSM. Based on results of these comparison simulations, we chose to implement the radar/gauge-based precipitation analyses from the National Severe Storms Laboratory as a replacement to the Stage IV product. The Multi-Radar Multi-Sensor (MRMS; formerly known as the National Mosaic and multi-sensor Quantitative precipitation estimate) product has full CONUS coverage at higher-resolution, thereby providing better coverage and greater detail than that of the Stage IV product. This paper will describe the expanded/upgraded SPoRT-LIS, present comparisons between the original and upgraded SPoRT-LIS, and discuss the path forward for future collaboration opportunities with SPoRT partners in the NWS.
Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.
2015-01-01
While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726
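To make the calibration idea concrete, the sketch below fits a single-degree-of-freedom mass-spring-damper to reference apparent-mass magnitudes using weighted least squares, with the weights playing the role of a simple reliability weighting; the model, reference values and weights are my own assumptions, not the authors' data or scheme.

```python
# Weighted least-squares calibration of a 1-DOF model against apparent-mass references.
import numpy as np
from scipy.optimize import least_squares

freqs = np.array([2.0, 4.0, 6.0, 8.0, 12.0])                # Hz, reference frequencies (assumed)
ref_app_mass = np.array([60.0, 75.0, 55.0, 35.0, 20.0])     # |apparent mass| references, kg (made up)
weights = np.array([1.0, 1.0, 0.8, 0.6, 0.4])               # relative reliability of each reference (assumed)

def apparent_mass(params, f):
    m, c, k = params
    w = 2 * np.pi * f
    # Driving-point apparent mass of a base-excited mass-spring-damper
    return np.abs(m * (k + 1j * c * w) / (k - m * w ** 2 + 1j * c * w))

def residuals(params):
    return weights * (apparent_mass(params, freqs) - ref_app_mass)

fit = least_squares(residuals, x0=[50.0, 1000.0, 40000.0], bounds=([1, 10, 1e3], [200, 1e4, 1e6]))
print("Calibrated m, c, k:", fit.x.round(1))
```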
Afenya, Evans K; Ouifki, Rachid; Camara, Baba I; Mundle, Suneel D
2016-04-01
Stemming from current emerging paradigms related to the cancer stem cell hypothesis, an existing mathematical model is expanded and used to study cell interaction dynamics in the bone marrow and peripheral blood. The proposed mathematical model is described by a system of nonlinear differential equations with delay, to quantify the dynamics in abnormal hematopoiesis. The steady states of the model are analytically and numerically obtained. Some conditions for the local asymptotic stability of such states are investigated. Model analyses suggest that malignancy may be irreversible once it evolves from a nonmalignant state into a malignant one and no intervention takes place. This leads to the proposition that a great deal of emphasis be placed on cancer prevention. Nevertheless, should malignancy arise, treatment programs for its containment or curtailment may have to include a maximum and extensive level of effort to protect normal cells from eventual destruction. Further model analyses and simulations predict that in the untreated disease state, there is an evolution towards a situation in which malignant cells dominate the entire bone marrow - peripheral blood system. Arguments are then advanced regarding requirements for quantitatively understanding cancer stem cell behavior. Among the suggested requirements are, mathematical frameworks for describing the dynamics of cancer initiation and progression, the response to treatment, the evolution of resistance, and malignancy prevention dynamics within the bone marrow - peripheral blood architecture. Copyright © 2016 Elsevier Inc. All rights reserved.
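A toy sketch of integrating a single delay differential equation with delayed feedback, of the general type used for such models; the equation, parameters and Euler scheme are illustrative choices, not the model analysed in the paper.

```python
# Fixed-step Euler integration of a delay differential equation with delayed feedback.
import numpy as np

beta, delta, tau = 1.5, 1.0, 2.0        # production rate, loss rate, maturation delay (arbitrary units)
dt, T = 0.01, 40.0
steps = int(T / dt)
lag = int(tau / dt)

x = np.empty(steps + 1)
x[0] = 0.5

for n in range(steps):
    delayed = x[n - lag] if n >= lag else 0.5   # constant history for t < tau
    # Delayed negative feedback on production (Hill-type), linear loss
    x[n + 1] = x[n] + dt * (beta * delayed / (1 + delayed ** 4) - delta * x[n])

print(f"Approximate steady state: {x[-1]:.3f}")
```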
Spain, S L; Pedroso, I; Kadeva, N; Miller, M B; Iacono, W G; McGue, M; Stergiakouli, E; Smith, G D; Putallaz, M; Lubinski, D; Meaburn, E L; Plomin, R; Simpson, M A
2016-01-01
Although individual differences in intelligence (general cognitive ability) are highly heritable, molecular genetic analyses to date have had limited success in identifying specific loci responsible for its heritability. This study is the first to investigate exome variation in individuals of extremely high intelligence. Under the quantitative genetic model, sampling from the high extreme of the distribution should provide increased power to detect associations. We therefore performed a case–control association analysis with 1409 individuals drawn from the top 0.0003 (IQ >170) of the population distribution of intelligence and 3253 unselected population-based controls. Our analysis focused on putative functional exonic variants assayed on the Illumina HumanExome BeadChip. We did not observe any individual protein-altering variants that are reproducibly associated with extremely high intelligence and within the entire distribution of intelligence. Moreover, no significant associations were found for multiple rare alleles within individual genes. However, analyses using genome-wide similarity between unrelated individuals (genome-wide complex trait analysis) indicate that the genotyped functional protein-altering variation yields a heritability estimate of 17.4% (s.e. 1.7%) based on a liability model. In addition, investigation of nominally significant associations revealed fewer rare alleles associated with extremely high intelligence than would be expected under the null hypothesis. This observation is consistent with the hypothesis that rare functional alleles are more frequently detrimental than beneficial to intelligence. PMID:26239293
Martínez-Lavanchy, P M; Chen, Z; Lünsmann, V; Marin-Cevada, V; Vilchez-Vargas, R; Pieper, D H; Reiche, N; Kappelmeyer, U; Imparato, V; Junca, H; Nijenhuis, I; Müller, J A; Kuschk, P; Heipieper, H J
2015-09-01
In the present study, microbial toluene degradation in controlled constructed wetland model systems, planted fixed-bed reactors (PFRs), was queried with DNA-based methods in combination with stable isotope fractionation analysis and characterization of toluene-degrading microbial isolates. Two PFR replicates were operated with toluene as the sole external carbon and electron source for 2 years. The bulk redox conditions in these systems were hypoxic to anoxic. The autochthonous bacterial communities, as analyzed by Illumina sequencing of 16S rRNA gene amplicons, were mainly comprised of the families Xanthomonadaceae, Comamonadaceae, and Burkholderiaceae, plus Rhodospirillaceae in one of the PFR replicates. DNA microarray analyses of the catabolic potentials for aromatic compound degradation suggested the presence of the ring monooxygenation pathway in both systems, as well as the anaerobic toluene pathway in the PFR replicate with a high abundance of Rhodospirillaceae. The presence of catabolic genes encoding the ring monooxygenation pathway was verified by quantitative PCR analysis, utilizing the obtained toluene-degrading isolates as references. Stable isotope fractionation analysis showed low-level of carbon fractionation and only minimal hydrogen fractionation in both PFRs, which matches the fractionation signatures of monooxygenation and dioxygenation. In combination with the results of the DNA-based analyses, this suggests that toluene degradation occurs predominantly via ring monooxygenation in the PFRs. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Imbriaco, Massimo; Nappi, Carmela; Puglia, Marta; De Giorgi, Marco; Dell'Aversana, Serena; Cuocolo, Renato; Ponsiglione, Andrea; De Giorgi, Igino; Polito, Maria Vincenza; Klain, Michele; Piscione, Federico; Pace, Leonardo; Cuocolo, Alberto
2017-10-26
To compare cardiac magnetic resonance (CMR) qualitative and quantitative analysis methods for the noninvasive assessment of myocardial inflammation in patients with suspected acute myocarditis (AM). A total of 61 patients with suspected AM underwent coronary angiography and CMR. Qualitative analysis was performed applying Lake-Louise Criteria (LLC), followed by quantitative analysis based on the evaluation of edema ratio (ER) and global relative enhancement (RE). Diagnostic performance was assessed for each method by measuring the area under the curves (AUC) of the receiver operating characteristic analyses. The final diagnosis of AM was based on symptoms and signs suggestive of cardiac disease, evidence of myocardial injury as defined by electrocardiogram changes, elevated troponin I, exclusion of coronary artery disease by coronary angiography, and clinical and echocardiographic follow-up at 3 months after admission to the chest pain unit. In all patients, coronary angiography did not show significant coronary artery stenosis. Troponin I levels and creatine kinase were higher in patients with AM compared to those without (both P < .001). There were no significant differences among LLC, T2-weighted short inversion time inversion recovery (STIR) sequences, early (EGE), and late (LGE) gadolinium-enhancement sequences for diagnosis of AM. The AUC for qualitative (T2-weighted STIR 0.92, EGE 0.87 and LGE 0.88) and quantitative (ER 0.89 and global RE 0.80) analyses were also similar. Qualitative and quantitative CMR analysis methods show similar diagnostic accuracy for the diagnosis of AM. These findings suggest that a simplified approach using a shortened CMR protocol including only T2-weighted STIR sequences might be useful to rule out AM in patients with acute coronary syndrome and normal coronary angiography.
Oxidative DNA damage background estimated by a system model of base excision repair
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokhansanj, B A; Wilson, III, D M
Human DNA can be damaged by natural metabolism through free radical production. It has been suggested that the equilibrium between innate damage and cellular DNA repair results in an oxidative DNA damage background that potentially contributes to disease and aging. Efforts to quantitatively characterize the human oxidative DNA damage background level based on measuring 8-oxoguanine lesions as a biomarker have led to estimates varying over 3-4 orders of magnitude, depending on the method of measurement. We applied a previously developed and validated quantitative pathway model of human DNA base excision repair, integrating experimentally determined endogenous damage rates and model parameters from multiple sources. Our estimates of at most 100 8-oxoguanine lesions per cell are consistent with the low end of data from biochemical and cell biology experiments, a result robust to model limitations and parameter variation. Our results show the power of quantitative system modeling to interpret composite experimental data and make biologically and physiologically relevant predictions for complex human DNA repair pathway mechanisms and capacity.
Zhang, Shuqun; Hou, Bo; Yang, Huaiyu; Zuo, Zhili
2016-05-01
Acetylcholinesterase (AChE) is an important enzyme in the pathogenesis of Alzheimer's disease (AD). Comparative quantitative structure-activity relationship (QSAR) analyses of huprine inhibitors of AChE were carried out using comparative molecular field analysis (CoMFA), comparative molecular similarity indices analysis (CoMSIA), and hologram QSAR (HQSAR) methods. Three highly predictive QSAR models were constructed successfully based on the training set. The CoMFA, CoMSIA, and HQSAR models have values of r² = 0.988, q² = 0.757, ONC = 6; r² = 0.966, q² = 0.645, ONC = 5; and r² = 0.957, q² = 0.736, ONC = 6, respectively. The predictabilities were validated using an external test set, and the predictive r² values obtained by the three models were 0.984, 0.973, and 0.783, respectively. The analysis was performed by combining the CoMFA and CoMSIA field distributions with the active sites of the AChE to further understand the vital interactions between huprines and the protease. On the basis of the QSAR study, 14 new potent molecules have been designed and six of them are predicted to be more active than the best active compound 24 described in the literature. The final QSAR models could be helpful in the design and development of novel active AChE inhibitors.
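For readers unfamiliar with the cross-validated q² statistic quoted above, the following Python sketch shows how a leave-one-out q² can be computed for a PLS-based QSAR model. It is only a minimal illustration: the descriptor matrix, activity values and number of components are invented placeholders, not data from the huprine study.

```python
# Hedged sketch: leave-one-out cross-validated q^2 for a PLS-based QSAR model,
# analogous to the q^2 values reported for the CoMFA/CoMSIA models above.
# X (field descriptors) and y (activities) are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 200))           # hypothetical molecular field descriptors
y = rng.normal(size=30)                  # hypothetical activities (e.g. pIC50)

def loo_q2(X, y, n_components=6):
    """q^2 = 1 - PRESS / SS, with PRESS from leave-one-out predictions."""
    preds = np.empty_like(y)
    for train, test in LeaveOneOut().split(X):
        model = PLSRegression(n_components=n_components).fit(X[train], y[train])
        preds[test] = model.predict(X[test]).ravel()
    press = np.sum((y - preds) ** 2)
    ss = np.sum((y - y.mean()) ** 2)
    return 1.0 - press / ss

print(f"q2 = {loo_q2(X, y):.3f}")
```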
Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases
Zhang, Hongpo
2018-01-01
Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and predictions from shallow neural networks show large variance. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. Using the reconstruction error, the network depth is determined automatically, and unsupervised training is combined with supervised optimization. This ensures the accuracy of model prediction while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets from the UCI repository. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively, and the variance of prediction accuracy was 5.78 and 4.46, respectively. PMID:29854369
Towards quantitative mass spectrometry-based metabolomics in microbial and mammalian systems.
Kapoore, Rahul Vijay; Vaidyanathan, Seetharaman
2016-10-28
Metabolome analyses are a suite of analytical approaches that enable us to capture changes in the metabolome (small molecular weight components, typically less than 1500 Da) in biological systems. Mass spectrometry (MS) has been widely used for this purpose. The key challenge here is to be able to capture changes in a reproducible and reliable manner that is representative of the events that take place in vivo. Typically, the analysis is carried out in vitro, by isolating the system and extracting the metabolome. MS-based approaches enable us to capture metabolomic changes with high sensitivity and resolution. When developing the technique for different biological systems, there are similarities in the challenges, as well as differences that are specific to the system under investigation. Here, we review some of the challenges in capturing quantitative changes in the metabolome with MS-based approaches, primarily in microbial and mammalian systems. This article is part of the themed issue 'Quantitative mass spectrometry'. © 2016 The Author(s).
Napolitano, José G.; Gödecke, Tanja; Lankin, David C.; Jaki, Birgit U.; McAlpine, James B.; Chen, Shao-Nong; Pauli, Guido F.
2013-01-01
The development of analytical methods for parallel characterization of multiple phytoconstituents is essential to advance the quality control of herbal products. While chemical standardization is commonly carried out by targeted analysis using gas or liquid chromatography-based methods, more universal approaches based on quantitative 1H NMR (qHNMR) measurements are being used increasingly in the multi-targeted assessment of these complex mixtures. The present study describes the development of a 1D qHNMR-based method for simultaneous identification and quantification of green tea constituents. This approach utilizes computer-assisted 1H iterative Full Spin Analysis (HiFSA) and enables rapid profiling of seven catechins in commercial green tea extracts. The qHNMR results were cross-validated against quantitative profiles obtained with an orthogonal LC-MS/MS method. The relative strengths and weaknesses of both approaches are discussed, with special emphasis on the role of identical reference standards in qualitative and quantitative analyses. PMID:23870106
Comprehensive Evaluation and Analysis of China's Mainstream Online Map Service Websites
NASA Astrophysics Data System (ADS)
Zhang, H.; Jiang, J.; Huang, W.; Wang, Q.; Gu, X.
2012-08-01
With the flourishing development of China's Internet market, demand for map services from all kinds of users is rising continually, and the market contains tremendous commercial interests. Many Internet giants have become involved in the field of online map services and have defined it as an important strategic product of the company. The main purpose of this research is to evaluate these online map service websites comprehensively with a model, and to analyse the problems revealed by the evaluation results. Corresponding solutions are then proposed, providing theoretical and practical guidance for the future development of fiercely competitive online map websites. The research consists of three stages: (a) the mainstream online map service websites in China are introduced and their present situation is analysed through site visits, investigation, consultation, analysis and research; (b) a comprehensive evaluation index system for online map service websites is constructed, covering functionality, layout, interaction design, colour and positioning, combined with data indexes such as timeliness, accuracy, objectivity and authority; (c) a comprehensive evaluation of these online map service websites is carried out based on a fuzzy mathematical evaluation model, which resolves the difficulty of measuring map websites quantitatively.
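As a rough illustration of the fuzzy comprehensive evaluation used in stage (c), the sketch below combines an assumed criterion weight vector with an assumed membership matrix to produce an overall score for one website. All criteria names, weights and membership degrees are hypothetical, not values from the study.

```python
# Hedged sketch of a fuzzy comprehensive evaluation of one map website.
# Criteria, weights, and membership degrees are illustrative assumptions.
import numpy as np

criteria = ["functionality", "layout", "interaction", "timeliness", "accuracy"]
weights = np.array([0.30, 0.15, 0.15, 0.20, 0.20])       # must sum to 1

# Membership of each criterion in the rating levels
# (excellent, good, fair, poor), e.g. from expert questionnaires.
R = np.array([
    [0.5, 0.3, 0.2, 0.0],
    [0.4, 0.4, 0.1, 0.1],
    [0.3, 0.4, 0.2, 0.1],
    [0.6, 0.2, 0.1, 0.1],
    [0.5, 0.4, 0.1, 0.0],
])

B = weights @ R                       # composite membership vector
grades = np.array([95, 80, 65, 50])   # scores attached to the rating levels
score = B @ grades
print("membership:", np.round(B, 3), "overall score:", round(score, 1))
```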
Genome-Wide Association Mapping of Acid Soil Resistance in Barley (Hordeum vulgare L.)
Zhou, Gaofeng; Broughton, Sue; Zhang, Xiao-Qi; Ma, Yanling; Zhou, Meixue; Li, Chengdao
2016-01-01
Genome-wide association studies (GWAS) based on linkage disequilibrium (LD) have been used to detect QTLs underlying complex traits in major crops. In this study, we collected 218 barley (Hordeum vulgare L.) lines including wild barley and cultivated barley from China, Canada, Australia, and Europe. A total of 408 polymorphic markers were used for population structure and LD analysis. GWAS for acid soil resistance were performed on the population using a general linear model (GLM) and a mixed linear model (MLM), respectively. A total of 22 QTLs (quantitative trait loci) were detected with the GLM and MLM analyses. Two QTLs, close to markers bPb-1959 (133.1 cM) and bPb-8013 (86.7 cM), localized on chromosomes 1H and 4H respectively, were consistently detected in two different trials with both the GLM and MLM analyses. Furthermore, bPb-8013, the closest marker to the major Al3+ resistance gene HvAACT1 in barley, was identified as QTL5. The QTLs could be used in marker-assisted selection to identify and pyramid different loci for improved acid soil resistance in barley. PMID:27064793
Tobin, Martin D; Sheehan, Nuala A; Scurrah, Katrina J; Burton, Paul R
2005-10-15
A population-based study of a quantitative trait may be seriously compromised when the trait is subject to the effects of a treatment. For example, in a typical study of quantitative blood pressure (BP) 15 per cent or more of middle-aged subjects may take antihypertensive treatment. Without appropriate correction, this can lead to substantial shrinkage in the estimated effect of aetiological determinants of scientific interest and a marked reduction in statistical power. Correction relies upon imputation, in treated subjects, of the underlying BP from the observed BP having invoked one or more assumptions about the bioclinical setting. There is a range of different assumptions that may be made, and a number of different analytical models that may be used. In this paper, we motivate an approach based on a censored normal regression model and compare it with a range of other methods that are currently used or advocated. We compare these methods in simulated data sets and assess the estimation bias and the loss of power that ensue when treatment effects are not appropriately addressed. We also apply the same methods to real data and demonstrate a pattern of behaviour that is consistent with that in the simulation studies. Although all approaches to analysis are necessarily approximations, we conclude that two of the adjustment methods appear to perform well across a range of realistic settings. These are: (1) the addition of a sensible constant to the observed BP in treated subjects; and (2) the censored normal regression model. A third, non-parametric, method based on averaging ordered residuals may also be advocated in some settings. On the other hand, three approaches that are used relatively commonly are fundamentally flawed and should not be used at all. These are: (i) ignoring the problem altogether and analysing observed BP in treated subjects as if it was underlying BP; (ii) fitting a conventional regression model with treatment as a binary covariate; and (iii) excluding treated subjects from the analysis. Given that the more effective methods are straightforward to implement, there is no argument for undertaking a flawed analysis that wastes power and results in excessive bias. Copyright (c) 2005 John Wiley & Sons, Ltd.
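A minimal sketch of the censored normal regression idea described above, assuming that a treated subject's observed BP is taken as a lower bound on the underlying untreated BP. The covariate, treatment rule and treatment effect below are simulated placeholders, not the paper's data or exact likelihood.

```python
# Hedged sketch of the censored normal regression adjustment: observed BP in
# treated subjects is handled as a lower bound (right-censored observation)
# on the underlying untreated BP. Data and covariate are simulated placeholders.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                        # e.g. a genotype or exposure score
underlying = 130 + 5 * x + rng.normal(0, 10, n)
treated = underlying > 145                    # crude treatment rule (assumption)
observed = np.where(treated, underlying - 12, underlying)   # treatment lowers BP

def neg_loglik(params):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (observed - (b0 + b1 * x)) / sigma
    ll_untreated = norm.logpdf(z[~treated]) - np.log(sigma)  # observed = underlying
    ll_treated = norm.logsf(z[treated])                      # P(underlying >= observed)
    return -(ll_untreated.sum() + ll_treated.sum())

fit = minimize(neg_loglik, x0=[120.0, 0.0, np.log(10.0)], method="Nelder-Mead")
print("estimated intercept, slope, sigma:", fit.x[0], fit.x[1], np.exp(fit.x[2]))
```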
Bürger, R; Gimelfarb, A
1999-01-01
Stabilizing selection for an intermediate optimum is generally considered to deplete genetic variation in quantitative traits. However, conflicting results from various types of models have been obtained. While classical analyses assuming a large number of independent additive loci with individually small effects indicated that no genetic variation is preserved under stabilizing selection, several analyses of two-locus models showed the contrary. We perform a complete analysis of a generalization of Wright's two-locus quadratic-optimum model and investigate numerically the ability of quadratic stabilizing selection to maintain genetic variation in additive quantitative traits controlled by up to five loci. A statistical approach is employed by choosing randomly 4000 parameter sets (allelic effects, recombination rates, and strength of selection) for a given number of loci. For each parameter set we iterate the recursion equations that describe the dynamics of gamete frequencies starting from 20 randomly chosen initial conditions until an equilibrium is reached, record the quantities of interest, and calculate their corresponding mean values. As the number of loci increases from two to five, the fraction of the genome expected to be polymorphic declines surprisingly rapidly, and the loci that are polymorphic increasingly are those with small effects on the trait. As a result, the genetic variance expected to be maintained under stabilizing selection decreases very rapidly with increased number of loci. The equilibrium structure expected under stabilizing selection on an additive trait differs markedly from that expected under selection with no constraints on genotypic fitness values. The expected genetic variance, the expected polymorphic fraction of the genome, as well as other quantities of interest, are only weakly dependent on the selection intensity and the level of recombination. PMID:10353920
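The following sketch illustrates, under assumed parameter values, the kind of gamete-frequency recursion iterated in such analyses: a diallelic two-locus additive trait under quadratic stabilizing selection toward an intermediate optimum, run from a random initial condition until it settles. It is not the authors' code, and the allelic effects, selection strength and recombination rate are arbitrary choices.

```python
# Hedged sketch: deterministic gamete-frequency recursions for a diallelic,
# two-locus additive trait under quadratic stabilizing selection toward an
# intermediate optimum, iterated from a random start until near equilibrium.
import numpy as np

rng = np.random.default_rng(2)
effects = np.array([0.5, 0.3])            # allelic effects of A and B (assumed)
s, r = 0.1, 0.2                           # selection strength, recombination rate

# Gametes: 0 = AB, 1 = Ab, 2 = aB, 3 = ab; haploid contribution to the trait
gamete_value = np.array([effects.sum(), effects[0], effects[1], 0.0])
optimum = effects.sum()                   # intermediate optimum (assumption)

def fitness(i, j):
    g = gamete_value[i] + gamete_value[j]          # additive genotypic value
    return max(0.0, 1.0 - s * (g - optimum) ** 2)

W = np.array([[fitness(i, j) for j in range(4)] for i in range(4)])
eta = np.array([-1.0, 1.0, 1.0, -1.0])    # sign pattern of the recombination term

x = rng.dirichlet(np.ones(4))             # random initial gamete frequencies
for _ in range(200000):
    w_marg = W @ x                        # marginal fitness of each gamete
    w_bar = x @ w_marg                    # population mean fitness
    D = x[0] * x[3] - x[1] * x[2]         # linkage disequilibrium
    x_new = (x * w_marg + eta * r * W[0, 3] * D) / w_bar
    if np.max(np.abs(x_new - x)) < 1e-12:
        x = x_new
        break
    x = x_new

print("equilibrium gamete frequencies:", np.round(x, 4))
print("equilibrium linkage disequilibrium:", round(float(x[0] * x[3] - x[1] * x[2]), 6))
```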
Wu, Qiong; Zhang, Guohui; Ci, Yusheng; Wu, Lina; Tarefder, Rafiqul A; Alcántara, Adélamar Dely
2016-05-18
Teenage drivers are more likely to be involved in severely incapacitating and fatal crashes compared to adult drivers. Moreover, because two thirds of urban vehicle miles traveled are on signal-controlled roadways, significant research efforts are needed to investigate intersection-related teenage driver injury severities and their contributing factors in terms of driver behavior, vehicle-infrastructure interactions, environmental characteristics, roadway geometric features, and traffic compositions. Therefore, this study aims to explore the characteristic differences between teenage and adult drivers in intersection-related crashes, identify the significant contributing attributes, and analyze their impacts on driver injury severities. Using crash data collected in New Mexico from 2010 to 2011, 2 multinomial logit regression models were developed to analyze injury severities for teenage and adult drivers, respectively. Elasticity analyses and transferability tests were conducted to better understand the quantitative impacts of these factors and the teenage driver injury severity model's generality. The results showed that although many of the same contributing factors were found to be significant in the both teenage and adult driver models, certain different attributes must be distinguished to specifically develop effective safety solutions for the 2 driver groups. The research findings are helpful to better understand teenage crash uniqueness and develop cost-effective solutions to reduce intersection-related teenage injury severities and facilitate driver injury mitigation research.
Ward, Keith W; Erhardt, Paul; Bachmann, Kenneth
2005-01-01
Previous publications from GlaxoSmithKline and University of Toledo laboratories convey our independent attempts to predict the half-lives of xenobiotics in humans using data obtained from rats. The present investigation was conducted to compare the performance of our published models against a common dataset obtained by merging the two sets of rat versus human half-life (hHL) data previously used by each laboratory. After combining data, mathematical analyses were undertaken by deploying both of our previous models, namely the use of an empirical algorithm based on a best-fit model and the use of rat-to-human liver blood flow ratios as a half-life correction factor. Both qualitative and quantitative analyses were performed, as well as evaluation of the impact of molecular properties on predictability. The merged dataset was remarkably diverse with respect to physicochemical and pharmacokinetic (PK) properties. Application of both models revealed similar predictability, depending upon the measure of stipulated accuracy. Certain molecular features, particularly rotatable bond count and pKa, appeared to influence the accuracy of prediction. This collaborative effort has resulted in an improved understanding and appreciation of the value of rats to serve as a surrogate for the prediction of xenobiotic half-lives in humans when clinical pharmacokinetic studies are not possible or practicable.
The use of copula functions for predictive analysis of correlations between extreme storm tides
NASA Astrophysics Data System (ADS)
Domino, Krzysztof; Błachowicz, Tomasz; Ciupak, Maurycy
2014-11-01
In this paper we present a method for the quantitative description of weakly predictable, extreme hydrological events at an inland sea. Correlations between variations at individual measuring points were investigated employing combined statistical methods. As the main tool for this analysis we used a two-dimensional copula function sensitive to correlated extreme effects. Additionally, a newly proposed methodology, based on Detrended Fluctuation Analysis (DFA) and Anomalous Diffusion (AD), was used for the prediction of negative and positive auto-correlations and the associated optimal choice of copula functions. As a practical example we analysed maximum storm tide data recorded at five spatially separated locations on the Baltic Sea. For the analysis we used Gumbel, Clayton, and Frank copula functions and introduced the reversed Clayton copula. The application of our research model is associated with modelling the risk of high storm tides and possible storm flooding.
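As a small illustration of how an Archimedean copula can be matched to paired storm-tide maxima, the sketch below estimates Clayton and Gumbel parameters from Kendall's tau. The two station series are synthetic placeholders, and the Frank parameter (which requires numerical inversion) is omitted.

```python
# Hedged sketch: choosing Archimedean copula parameters from Kendall's tau for
# paired maximum storm-tide series at two stations (synthetic data).
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(3)
station_a = rng.gumbel(loc=60, scale=15, size=200)           # synthetic maxima (cm)
station_b = 0.7 * station_a + rng.gumbel(loc=20, scale=10, size=200)

tau, _ = kendalltau(station_a, station_b)
theta_clayton = 2 * tau / (1 - tau)       # Clayton: tau = theta / (theta + 2)
theta_gumbel = 1 / (1 - tau)              # Gumbel:  tau = 1 - 1/theta

print(f"Kendall tau = {tau:.3f}")
print(f"Clayton theta = {theta_clayton:.3f}, Gumbel theta = {theta_gumbel:.3f}")
```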
Estimation of brain network ictogenicity predicts outcome from epilepsy surgery
NASA Astrophysics Data System (ADS)
Goodfellow, M.; Rummel, C.; Abela, E.; Richardson, M. P.; Schindler, K.; Terry, J. R.
2016-07-01
Surgery is a valuable option for pharmacologically intractable epilepsy. However, significant post-operative improvements are not always attained. This is due in part to our incomplete understanding of the seizure generating (ictogenic) capabilities of brain networks. Here we introduce an in silico, model-based framework to study the effects of surgery within ictogenic brain networks. We find that factors conventionally determining the region of tissue to resect, such as the location of focal brain lesions or the presence of epileptiform rhythms, do not necessarily predict the best resection strategy. We validate our framework by analysing electrocorticogram (ECoG) recordings from patients who have undergone epilepsy surgery. We find that when post-operative outcome is good, model predictions for optimal strategies align better with the actual surgery undertaken than when post-operative outcome is poor. Crucially, this allows the prediction of optimal surgical strategies and the provision of quantitative prognoses for patients undergoing epilepsy surgery.
Spacecraft self-contamination due to back-scattering of outgas products
NASA Technical Reports Server (NTRS)
Robertson, S. J.
1976-01-01
The back-scattering of outgas contamination near an orbiting spacecraft due to intermolecular collisions was analyzed. Analytical tools were developed for making reasonably accurate quantitative estimates of the outgas contamination return flux, given a knowledge of the pertinent spacecraft and orbit conditions. Two basic collision mechanisms were considered: (1) collisions involving only outgas molecules (self-scattering) and (2) collisions between outgas molecules and molecules in the ambient atmosphere (ambient-scattering). For simplicity, the geometry was idealized to a uniformly outgassing sphere and to a disk oriented normal to the freestream. The method of solution involved an integration of an approximation of the Boltzmann kinetic equation known as the BGK (or Krook) model equation. Results were obtained in the form of simple equations relating outgas return flux to spacecraft and orbit parameters. Results were compared with previous analyses based on more simplistic models of the collision processes.
NASA Astrophysics Data System (ADS)
Nielsen, Roger L.; Ustunisik, Gokce; Weinsteiger, Allison B.; Tepley, Frank J.; Johnston, A. Dana; Kent, Adam J. R.
2017-09-01
Quantitative models of petrologic processes require accurate partition coefficients. Our ability to obtain accurate partition coefficients is constrained by their dependence on pressure, temperature and composition, and on the experimental and analytical techniques we apply. The source and magnitude of error in experimental studies of trace element partitioning may go unrecognized if one examines only the processed published data. The most important sources of error are relict crystals, and analyses of more than one phase in the analytical volume. Because we have typically published averaged data, identification of compromised data is difficult if not impossible. We addressed this problem by examining unprocessed data from plagioclase/melt partitioning experiments, by comparing models based on that data with existing partitioning models, and by evaluating the degree to which the partitioning models are dependent on the calibration data. We found that partitioning models are dependent on the calibration data in ways that result in erroneous model values, and that the error will be systematic and dependent on the value of the partition coefficient. In effect, use of different calibration datasets will result in partitioning models whose results are systematically biased, and one can arrive at different and conflicting conclusions depending on how a model is calibrated, defeating the purpose of applying the models. Ultimately this is an experimental data problem, which can be solved if we publish individual analyses (not averages) or use a projection method wherein we use an independent compositional constraint to identify and estimate the uncontaminated composition of each phase.
Human salmonellosis: estimation of dose-illness from outbreak data.
Bollaerts, Kaatje; Aerts, Marc; Faes, Christel; Grijspeerdt, Koen; Dewulf, Jeroen; Mintiens, Koen
2008-04-01
The quantification of the relationship between the amount of microbial organisms ingested and a specific outcome such as infection, illness, or mortality is a key aspect of quantitative risk assessment. A main problem in determining such dose-response models is the availability of appropriate data. Human feeding trials have been criticized because only young healthy volunteers are selected to participate and low doses, as often occurring in real life, are typically not considered. Epidemiological outbreak data are considered to be more valuable, but are more subject to data uncertainty. In this article, we model the dose-illness relationship based on data of 20 Salmonella outbreaks, as discussed by the World Health Organization. In particular, we model the dose-illness relationship using generalized linear mixed models and fractional polynomials of dose. The fractional polynomial models are modified to satisfy the properties of different types of dose-illness models as proposed by Teunis et al. Within these models, differences in host susceptibility (susceptible versus normal population) are modeled as fixed effects whereas differences in serovar type and food matrix are modeled as random effects. In addition, two bootstrap procedures are presented. A first procedure accounts for stochastic variability whereas a second procedure accounts for both stochastic variability and data uncertainty. The analyses indicate that the susceptible population has a higher probability of illness at low dose levels when the combination pathogen-food matrix is extremely virulent and at high dose levels when the combination is less virulent. Furthermore, the analyses suggest that immunity exists in the normal population but not in the susceptible population.
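A minimal sketch of a first-degree fractional-polynomial dose-illness fit, assuming grouped outbreak counts and ignoring the random effects for serovar and food matrix used in the full model; the dose levels and counts below are invented placeholders.

```python
# Hedged sketch: first-degree fractional-polynomial logistic fit of a
# dose-illness curve to grouped (ill, exposed) outbreak counts.
import numpy as np
import statsmodels.api as sm

dose = np.array([1e2, 1e3, 1e4, 1e5, 1e6, 1e7, 1e8])       # CFU ingested (placeholder)
ill = np.array([1, 3, 8, 15, 30, 45, 55])                   # ill subjects (placeholder)
exposed = np.array([50, 60, 55, 40, 50, 60, 60])            # exposed subjects (placeholder)

powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]                    # conventional FP powers
best = None
for p in powers:
    x = np.log(dose) if p == 0 else dose ** p               # power 0 means log transform
    X = sm.add_constant(x)
    fit = sm.GLM(np.column_stack([ill, exposed - ill]), X,
                 family=sm.families.Binomial()).fit()
    if best is None or fit.deviance < best[0]:
        best = (fit.deviance, p, fit)

print(f"best power p = {best[1]}, deviance = {best[0]:.2f}")
print(best[2].params)
```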
Conflicts Management Model in School: A Mixed Design Study
ERIC Educational Resources Information Center
Dogan, Soner
2016-01-01
The object of this study is to evaluate the reasons for conflicts occurring in school according to perceptions and views of teachers and resolution strategies used for conflicts and to build a model based on the results obtained. In the research, explanatory design including quantitative and qualitative methods has been used. The quantitative part…
Physiologically based pharmacokinetic (PBPK) models bridge the gap between in vitro assays and in vivo effects by accounting for the absorption, distribution, metabolism, and excretion of xenobiotics, which is especially useful in the assessment of human toxicity. Quantitative st...
Towards collation and modelling of the global cost of armed violence on civilians.
Taback, Nathan; Coupland, Robin
2005-01-01
A method is described which translates qualitative reports about armed violence into meaningful quantitative data allowing an evidence-based approach to the causes and effects of the global health impact of armed violence on unarmed people. Analysis of 100 randomly selected news reports shows that the type of weapon used, the psychological aspect of the violence, the number of weapons in use and the victims' vulnerability independently influence the mortality of victims. Data collated by the same method could be analysed together with indicators of poverty, development and health so illuminating the relationship between such indicators and degradation of peoples' physical security through acts of armed violence. The method could also help uphold the laws of war and human rights.
What is in a contour map? A region-based logical formalization of contour semantics
Usery, E. Lynn; Hahmann, Torsten
2015-01-01
This paper analyses and formalizes contour semantics in a first-order logic ontology that forms the basis for enabling computational common sense reasoning about contour information. The elicited contour semantics comprises four key concepts – contour regions, contour lines, contour values, and contour sets – and their subclasses and associated relations, which are grounded in an existing qualitative spatial ontology. All concepts and relations are illustrated and motivated by physical-geographic features identifiable on topographic contour maps. The encoding of the semantics of contour concepts in first-order logic and a derived conceptual model as basis for an OWL ontology lay the foundation for fully automated, semantically-aware qualitative and quantitative reasoning about contours.
Evolutionary morphology of the Tenrecoidea (Mammalia) hindlimb skeleton.
Salton, Justine A; Sargis, Eric J
2009-03-01
The tenrecs of Central Africa and Madagascar provide an excellent model for exploring adaptive radiation and functional aspects of mammalian hindlimb form. The pelvic girdle, femur, and crus of 13 tenrecoid species, and four species from the families Solenodontidae, Macroscelididae, and Erinaceidae, were examined and measured. Results from qualitative and quantitative analyses demonstrate remarkable diversity in several aspects of knee and hip joint skeletal form that are supportive of function-based hypotheses, and consistent with studies on nontenrecoid eutherian postcranial adaptation. Locomotor specialists within Tenrecoidea exhibit suites of characteristics that are widespread among eutherians with similar locomotor behaviors. Furthermore, several characters that are constrained at the subfamily level were identified. Such characters are more indicative of postural behavior than locomotor behavior. Copyright 2008 Wiley-Liss, Inc.
Analysis on the restriction factors of the green building scale promotion based on DEMATEL
NASA Astrophysics Data System (ADS)
Wenxia, Hong; Zhenyao, Jiang; Zhao, Yang
2017-03-01
In order to promote the large-scale development of green building in China, the DEMATEL method was used to classify the factors influencing green building development into three parts: the green building market, green technology and the macro economy. Through the DEMATEL model, the interaction mechanism of each part was analysed. The degree of mutual influence of each barrier factor affecting green building promotion was quantitatively analysed, and the key factors for the development of green building in China were determined. In addition, implementation strategies for promoting the large-scale development of green building in China are put forward. This research provides important reference and practical value for policy-making on green building promotion.
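For orientation, the sketch below shows the core DEMATEL computation assumed here: a direct-influence matrix is normalised, the total-relation matrix is obtained, and prominence and relation indices identify cause and effect factors. The 4x4 matrix is an invented placeholder rather than survey data from the study.

```python
# Hedged sketch of the core DEMATEL computation for barrier factors.
import numpy as np

A = np.array([                  # direct influence of row factor on column factor (0-4 scale)
    [0, 3, 2, 1],
    [2, 0, 3, 2],
    [1, 2, 0, 3],
    [1, 1, 2, 0],
], dtype=float)

N = A / A.sum(axis=1).max()                     # normalise by the largest row sum
T = N @ np.linalg.inv(np.eye(len(A)) - N)       # total-relation matrix
D, C = T.sum(axis=1), T.sum(axis=0)             # dispatched and received influence

for i, (prom, rel) in enumerate(zip(D + C, D - C)):
    role = "cause" if rel > 0 else "effect"
    print(f"factor {i}: prominence={prom:.2f}, relation={rel:+.2f} ({role})")
```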
Domanski, Dominik; Murphy, Leigh C.; Borchers, Christoph H.
2010-01-01
We have developed a phosphatase-based phosphopeptide quantitation (PPQ) method for determining phosphorylation stoichiometry in complex biological samples. This PPQ method is based on enzymatic dephosphorylation, combined with specific and accurate peptide identification and quantification by multiple reaction monitoring (MRM) detection with stable-isotope-labeled standard peptides. In contrast with the classical MRM methods for the quantitation of phosphorylation stoichiometry, the PPQ-MRM method needs only one non-phosphorylated SIS (stable isotope-coded standard) and two analyses (one for the untreated and one for the phosphatase-treated sample), from which the expression and modification levels can accurately be determined. From these analyses, the % phosphorylation can be determined. In this manuscript, we compare the PPQ-MRM method with an MRM method without phosphatase, and demonstrate the application of these methods to the detection and quantitation of phosphorylation of the classic phosphorylated breast cancer biomarkers (ERα and HER2), and for phosphorylated RAF and ERK1, which also contain phosphorylation sites with important biological implications. Using synthetic peptides spiked into a complex protein digest, we were able to use our PPQ-MRM method to accurately determine the total phosphorylation stoichiometry on specific peptides, as well as the absolute amount of the peptide and phosphopeptide present. Analyses of samples containing ERα protein revealed that the PPQ-MRM is capable of determining phosphorylation stoichiometry in proteins from cell lines, and is in good agreement with determinations obtained using the direct MRM approach in terms of phosphorylation and total protein amount. PMID:20524616
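A minimal numerical sketch of the stoichiometry arithmetic implied by the PPQ approach, assuming the untreated run measures the non-phosphorylated peptide and the phosphatase-treated run measures the total peptide; the fmol values below are invented.

```python
# Hedged sketch of PPQ-style stoichiometry arithmetic (invented numbers):
# untreated analysis  -> non-phosphorylated peptide amount
# phosphatase-treated -> total peptide amount (phospho + non-phospho)
untreated_fmol = 32.0
treated_fmol = 80.0

phospho_fmol = treated_fmol - untreated_fmol
stoichiometry = 100.0 * phospho_fmol / treated_fmol
print(f"phosphopeptide: {phospho_fmol:.1f} fmol, "
      f"phosphorylation stoichiometry: {stoichiometry:.1f}%")
```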
Attomole quantitation of protein separations with accelerator mass spectrometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogel, J S; Grant, P G; Buccholz, B A
2000-12-15
Quantification of specific proteins depends on separation by chromatography or electrophoresis followed by chemical detection schemes such as staining and fluorophore adhesion. Chemical exchange of short-lived isotopes, particularly sulfur, is also prevalent despite the inconveniences of counting radioactivity. Physical methods based on isotopic and elemental analyses offer highly sensitive protein quantitation that has linear response over wide dynamic ranges and is independent of protein conformation. Accelerator mass spectrometry quantifies long-lived isotopes such as 14C to sub-attomole sensitivity. We quantified protein interactions with small molecules such as toxins, vitamins, and natural biochemicals at precisions of 1-5%. Micro proton-induced X-ray emission quantifies elemental abundances in separated metalloprotein samples to nanogram amounts and is capable of quantifying phosphorylated loci in gels. Accelerator-based quantitation is a possible tool for quantifying the translation of the genome into the proteome.
Tojo, Axel; Malm, Johan; Marko-Varga, György; Lilja, Hans; Laurell, Thomas
2014-01-01
Antibody microarrays have become widespread, but their use for quantitative analyses in clinical samples has not yet been established. We investigated an immunoassay based on nanoporous silicon antibody microarrays for quantification of total prostate-specific antigen (PSA) in 80 clinical plasma samples, and provide quantitative data from a duplex microarray assay that simultaneously quantifies free and total PSA in plasma. To further develop the assay, the porous silicon chips were placed into a standard 96-well microtiter plate for higher-throughput analysis. The samples analyzed by this quantitative microarray were 80 plasma samples obtained from men undergoing clinical PSA testing (dynamic range: 0.14-44 ng/ml, LOD: 0.14 ng/ml). The second dataset, measuring free PSA (dynamic range: 0.40-74.9 ng/ml, LOD: 0.47 ng/ml) and total PSA (dynamic range: 0.87-295 ng/ml, LOD: 0.76 ng/ml), was also obtained from the clinical routine. The reference for the quantification was a commercially available assay, the ProStatus PSA Free/Total DELFIA. In an analysis of 80 plasma samples the microarray platform performs well across the range of total PSA levels. This assay might have the potential to substitute for the large-scale microtiter plate format in diagnostic applications. The duplex assay paves the way for a future quantitative multiplex assay, which would analyse several prostate cancer biomarkers simultaneously. PMID:22921878
Human judgment vs. quantitative models for the management of ecological resources.
Holden, Matthew H; Ellner, Stephen P
2016-07-01
Despite major advances in quantitative approaches to natural resource management, there has been resistance to using these tools in the actual practice of managing ecological populations. Given a managed system and a set of assumptions, translated into a model, optimization methods can be used to solve for the most cost-effective management actions. However, when the underlying assumptions are not met, such methods can potentially lead to decisions that harm the environment and economy. Managers who develop decisions based on past experience and judgment, without the aid of mathematical models, can potentially learn about the system and develop flexible management strategies. However, these strategies are often based on subjective criteria and equally invalid and often unstated assumptions. Given the drawbacks of both methods, it is unclear whether simple quantitative models improve environmental decision making over expert opinion. In this study, we explore how well students, using their experience and judgment, manage simulated fishery populations in an online computer game and compare their management outcomes to the performance of model-based decisions. We consider harvest decisions generated using four different quantitative models: (1) the model used to produce the simulated population dynamics observed in the game, with the values of all parameters known (as a control), (2) the same model, but with unknown parameter values that must be estimated during the game from observed data, (3) models that are structurally different from those used to simulate the population dynamics, and (4) a model that ignores age structure. Humans on average performed much worse than the models in cases 1-3, but in a small minority of scenarios, models produced worse outcomes than those resulting from students making decisions based on experience and judgment. When the models ignored age structure, they generated poorly performing management decisions, but still outperformed students using experience and judgment 66% of the time. © 2016 by the Ecological Society of America.
Quantitative Analysis of Cellular Metabolic Dissipative, Self-Organized Structures
de la Fuente, Ildefonso Martínez
2010-01-01
One of the most important goals of the postgenomic era is understanding the metabolic dynamic processes and the functional structures generated by them. Extensive studies during the last three decades have shown that the dissipative self-organization of the functional enzymatic associations, the catalytic reactions produced during the metabolite channeling, the microcompartmentalization of these metabolic processes and the emergence of dissipative networks are the fundamental elements of the dynamical organization of cell metabolism. Here we present an overview of how mathematical models can be used to address the properties of dissipative metabolic structures at different organizational levels, both for individual enzymatic associations and for enzymatic networks. Recent analyses performed with dissipative metabolic networks have shown that unicellular organisms display a singular global enzymatic structure common to all living cellular organisms, which seems to be an intrinsic property of the functional metabolism as a whole. Mathematical models firmly based on experiments and their corresponding computational approaches are needed to fully grasp the molecular mechanisms of metabolic dynamical processes. They are necessary to enable the quantitative and qualitative analysis of the cellular catalytic reactions and also to help comprehend the conditions under which the structural dynamical phenomena and biological rhythms arise. Understanding the molecular mechanisms responsible for the metabolic dissipative structures is crucial for unraveling the dynamics of cellular life. PMID:20957111
Wei, Z G; Macwan, A P; Wieringa, P A
1998-06-01
In this paper we quantitatively model degree of automation (DofA) in supervisory control as a function of the number and nature of tasks to be performed by the operator and automation. This model uses a task weighting scheme in which weighting factors are obtained from task demand load, task mental load, and task effect on system performance. The computation of DofA is demonstrated using an experimental system. Based on controlled experiments using operators, analyses of the task effect on system performance, the prediction and assessment of task demand load, and the prediction of mental load were performed. Each experiment had a different DofA. The effect of a change in DofA on system performance and mental load was investigated. It was found that system performance became less sensitive to changes in DofA at higher levels of DofA. The experimental data showed that when the operator controlled a partly automated system, perceived mental load could be predicted from the task mental load for each task component, as calculated by analyzing a situation in which all tasks were manually controlled. Actual or potential applications of this research include a methodology to balance and optimize the automation of complex industrial systems.
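A minimal sketch of one way such a weighted DofA could be computed; the task list, the equal-weight combination of demand load, mental load and performance effect, and the automated/manual split are all illustrative assumptions rather than the paper's exact scheme.

```python
# Hedged sketch: DofA as the weighted share of tasks allocated to automation,
# with each task weight combining demand load, mental load and performance effect.
tasks = [
    # (name, demand_load, mental_load, performance_effect, automated?)
    ("monitor level",   0.4, 0.5, 0.6, True),
    ("adjust setpoint", 0.7, 0.8, 0.9, False),
    ("log events",      0.2, 0.3, 0.2, True),
    ("handle alarms",   0.9, 0.9, 1.0, False),
]

def weight(demand, mental, effect):
    return (demand + mental + effect) / 3.0       # simple average (assumption)

total = sum(weight(d, m, e) for _, d, m, e, _ in tasks)
automated = sum(weight(d, m, e) for _, d, m, e, auto in tasks if auto)
print(f"DofA = {automated / total:.2f}")
```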
León-Roque, Noemí; Abderrahim, Mohamed; Nuñez-Alejos, Luis; Arribas, Silvia M; Condezo-Hoyos, Luis
2016-12-01
Several procedures are currently used to assess the fermentation index (FI) of cocoa beans (Theobroma cacao L.) for quality control. However, all of them present several drawbacks. The aim of the present work was to develop and validate a simple image-based quantitative procedure using color measurement and artificial neural networks (ANNs). ANN models based on color measurements were tested to predict the fermentation index (FI) of fermented cocoa beans. The RGB values were measured from the surface and center regions of fermented beans in images obtained by camera and desktop scanner. The FI was defined as the ratio of total free amino acids in fermented versus non-fermented samples. The ANN model that included RGB color measurements of the fermented cocoa surface and the R/G ratio of alkaline extracts of cocoa beans was able to predict FI with no statistical difference compared with the experimental values. Performance of the ANN model was evaluated by the coefficient of determination, Bland-Altman plot and Passing-Bablok regression analyses. Moreover, in fermented beans, total sugar content and titratable acidity showed a similar pattern to the total free amino acids predicted through the color-based ANN model. The results of the present work demonstrate that the proposed ANN model can be adopted as a low-cost and in situ procedure to predict FI in fermented cocoa beans through apps developed for mobile devices. Copyright © 2016 Elsevier B.V. All rights reserved.
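The sketch below illustrates the general idea of an ANN mapping colour features to FI, using scikit-learn's MLPRegressor as a stand-in for the network in the study; the RGB values, R/G ratios and FI targets are synthetic placeholders.

```python
# Hedged sketch: a small neural network mapping colour features to FI,
# with synthetic data standing in for the measured samples.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 120
rgb_surface = rng.uniform(40, 200, size=(n, 3))              # R, G, B of bean surface
rg_ratio = rng.uniform(0.8, 1.6, size=(n, 1))                # R/G of alkaline extract
X = np.hstack([rgb_surface, rg_ratio])
fi = 0.004 * rgb_surface[:, 0] + 0.5 * rg_ratio[:, 0] + rng.normal(0, 0.05, n)

Xs = StandardScaler().fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                     random_state=0).fit(Xs, fi)
print("training R^2:", round(model.score(Xs, fi), 3))
```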
The application of time series models to cloud field morphology analysis
NASA Technical Reports Server (NTRS)
Chin, Roland T.; Jau, Jack Y. C.; Weinman, James A.
1987-01-01
A modeling method for the quantitative description of remotely sensed cloud field images is presented. A two-dimensional texture modeling scheme based on one-dimensional time series procedures is adopted for this purpose. The time series procedure used is the seasonal autoregressive moving average (ARMA) process of Box and Jenkins. Cloud field properties such as directionality, clustering and cloud coverage can be retrieved by this method. It has been demonstrated that a cloud field image can be quantitatively defined by a small set of parameters and that synthesized surrogates can be reconstructed from these model parameters. This method enables cloud climatology to be studied quantitatively.
Effects of temperature on flood forecasting: analysis of an operative case study in Alpine basins
NASA Astrophysics Data System (ADS)
Ceppi, A.; Ravazzani, G.; Salandin, A.; Rabuffetti, D.; Montani, A.; Borgonovo, E.; Mancini, M.
2013-04-01
In recent years the interest in the forecasting and prevention of natural hazards related to hydro-meteorological events has increased the challenge for numerical weather modelling, in particular for limited-area models, to improve quantitative precipitation forecasts (QPF) for hydrological purposes. After the encouraging results obtained in the MAP D-PHASE Project, we decided to devote further analyses to show recent improvements in the operational use of hydro-meteorological chains, and above all to better investigate the key role played by temperature during snowy precipitation. In this study we present a reanalysis simulation of one meteorological event, which occurred in November 2008 in the Piedmont Region. Attention is focused on the key role of air temperature, which is crucial in determining the partitioning of precipitation into solid and liquid phase and thus influences the quantitative discharge forecast (QDF) in the Alpine region. This is linked to the basin hypsographic curve and therefore to the total contributing area related to the snow line of the event. In order to assess how hydrological predictions are affected by the meteorological forcing, a sensitivity analysis of the model output was carried out over different simulation scenarios, considering how errors in the forecast temperature can radically modify the discharge forecast. Results show how, in real-time systems, hydrological forecasters also have to consider the temperature uncertainty in forecasts in order to better understand snow dynamics and their effect on runoff during a meteorological warning with a critical snow line over the basin. The hydrological ensemble forecasts are based on the 16 members of the meteorological ensemble system COSMO-LEPS (developed by ARPA-SIMC), based on the non-hydrostatic model COSMO, while the hydrological model used to generate the runoff simulations is the distributed rainfall-runoff FEST-WB model, developed at Politecnico di Milano.
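A minimal sketch of the temperature-based rain/snow partitioning that underlies this sensitivity, assuming a linear transition between two threshold temperatures; the thresholds are common illustrative values, not the FEST-WB configuration.

```python
# Hedged sketch: partitioning precipitation into solid and liquid phase as a
# linear function of air temperature between two assumed thresholds.
import numpy as np

def snow_fraction(t_air, t_snow=-1.0, t_rain=1.0):
    """Fraction of precipitation falling as snow, linear between t_snow and t_rain (deg C)."""
    return np.clip((t_rain - t_air) / (t_rain - t_snow), 0.0, 1.0)

precip_mm = np.array([10.0, 10.0, 10.0, 10.0])
t_air_c = np.array([-3.0, -0.5, 0.5, 2.0])

snow_mm = precip_mm * snow_fraction(t_air_c)
rain_mm = precip_mm - snow_mm
print("snow:", snow_mm, "rain:", rain_mm)
```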
41 CFR 60-2.10 - General purpose and contents of affirmative action programs.
Code of Federal Regulations, 2012 CFR
2012-07-01
... number of quantitative analyses designed to evaluate the composition of the workforce of the contractor... affirmative action program must include the following quantitative analyses: (i) Organizational profile—§ 60-2...
41 CFR 60-2.10 - General purpose and contents of affirmative action programs.
Code of Federal Regulations, 2011 CFR
2011-07-01
... number of quantitative analyses designed to evaluate the composition of the workforce of the contractor... affirmative action program must include the following quantitative analyses: (i) Organizational profile—§ 60-2...
41 CFR 60-2.10 - General purpose and contents of affirmative action programs.
Code of Federal Regulations, 2010 CFR
2010-07-01
... number of quantitative analyses designed to evaluate the composition of the workforce of the contractor... affirmative action program must include the following quantitative analyses: (i) Organizational profile—§ 60-2...