Complexity-aware simple modeling.
Gómez-Schiavon, Mariana; El-Samad, Hana
2018-02-26
Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models, as well as their assumption of modularity and insulation, makes them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach, complexity-aware simple modeling, that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.
Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared
ERIC Educational Resources Information Center
von Helversen, Bettina; Rieskamp, Jorg
2009-01-01
The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…
Quantitative Modeling of Earth Surface Processes
NASA Astrophysics Data System (ADS)
Pelletier, Jon D.
This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.
Zhai, Hong Lin; Zhai, Yue Yuan; Li, Pei Zhen; Tian, Yue Li
2013-01-21
A very simple approach to quantitative analysis is proposed based on the technology of digital image processing, using three-dimensional (3D) spectra obtained by high-performance liquid chromatography coupled with a diode array detector (HPLC-DAD). As region-based shape features of a grayscale image, Zernike moments with an inherent invariance property were employed to establish the linear quantitative models. This approach was applied to the quantitative analysis of three compounds in mixed samples using 3D HPLC-DAD spectra, and three linear models were obtained, respectively. The correlation coefficients (R²) for training and test sets were more than 0.999, and the statistical parameters and strict validation supported the reliability of the established models. The analytical results suggest that the Zernike moment selected by stepwise regression can be used in the quantitative analysis of target compounds. Our study provides a new idea for quantitative analysis using 3D spectra, which can be extended to the analysis of other 3D spectra obtained by different methods or instruments.
Experimental Control of Simple Pendulum Model
ERIC Educational Resources Information Center
Medina, C.
2004-01-01
This paper conveys information about a Physics laboratory experiment for students with some theoretical knowledge about oscillatory motion. Students construct a simple pendulum that behaves as an ideal one, and analyze model assumption incidence on its period. The following aspects are quantitatively analyzed: vanishing friction, small amplitude,…
Ku, Hyung-Keun; Lim, Hyuk-Min; Oh, Kyong-Hwa; Yang, Hyo-Jin; Jeong, Ji-Seon; Kim, Sook-Kyung
2013-03-01
The Bradford assay is a simple method for protein quantitation, but variation in the results between proteins is a matter of concern. In this study, we compared and normalized quantitative values from two models for protein quantitation, where the residues in the protein that bind to anionic Coomassie Brilliant Blue G-250 comprise either Arg and Lys (Method 1, M1) or Arg, Lys, and His (Method 2, M2). Use of the M2 model yielded much more consistent quantitation values compared with use of the M1 model, which exhibited marked overestimations against protein standards. Copyright © 2012 Elsevier Inc. All rights reserved.
Grass Grows, the Cow Eats: A Simple Grazing Systems Model with Emergent Properties
ERIC Educational Resources Information Center
Ungar, Eugene David; Seligman, Noam G.; Noy-Meir, Imanuel
2004-01-01
We describe a simple, yet intellectually challenging model of grazing systems that introduces basic concepts in ecology and systems analysis. The practical is suitable for high-school and university curricula with a quantitative orientation, and requires only basic skills in mathematics and spreadsheet use. The model is based on Noy-Meir's (1975)…
Statistical Mechanics of the US Supreme Court
NASA Astrophysics Data System (ADS)
Lee, Edward D.; Broedersz, Chase P.; Bialek, William
2015-07-01
We build simple models for the distribution of voting patterns in a group, using the Supreme Court of the United States as an example. The maximum entropy model consistent with the observed pairwise correlations among justices' votes, an Ising spin glass, agrees quantitatively with the data. While all correlations (perhaps surprisingly) are positive, the effective pairwise interactions in the spin glass model have both signs, recovering the intuition that ideologically opposite justices negatively influence one another. Despite the competing interactions, a strong tendency toward unanimity emerges from the model, organizing the voting patterns in a relatively simple "energy landscape." Besides unanimity, other energy minima in this landscape, or maxima in probability, correspond to prototypical voting states, such as the ideological split or a tightly correlated, conservative core. The model correctly predicts the correlation of justices with the majority and gives us a measure of their influence on the majority decision. These results suggest that simple models, grounded in statistical physics, can capture essential features of collective decision making quantitatively, even in a complex political context.
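The pairwise maximum entropy (Ising) construction used above can be sketched in a few lines. The fields and couplings below are made-up toy numbers for three "justices," not the fitted values from the paper; the point is only the form of the model: an energy with individual biases and signed pairwise interactions, exponentiated and normalized over all vote patterns.

```python
import itertools
import math

# Hypothetical parameters for three voters; votes are s_i in {-1, +1}.
h = [0.1, 0.0, -0.1]                          # individual voting biases (toy values)
J = {(0, 1): 0.5, (0, 2): -0.2, (1, 2): 0.4}  # pairwise couplings, both signs

def energy(s):
    """E(s) = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j."""
    e = -sum(hi * si for hi, si in zip(h, s))
    e -= sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

# Enumerate all 2^3 vote patterns and form the Boltzmann distribution.
states = list(itertools.product([-1, 1], repeat=3))
Z = sum(math.exp(-energy(s)) for s in states)       # partition function
probs = {s: math.exp(-energy(s)) / Z for s in states}

# With net agreement-favoring couplings, a unanimous pattern sits at the
# bottom of the "energy landscape" and is the most probable state.
most_likely = max(probs, key=probs.get)
```

For the real Court the fields and couplings are fit so the model reproduces the observed pairwise vote correlations; the toy version only illustrates why unanimity can dominate even when some couplings are negative.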
Electromagnetic braking: A simple quantitative model
NASA Astrophysics Data System (ADS)
Levin, Yan; da Silveira, Fernando L.; Rizzato, Felipe B.
2006-09-01
A calculation is presented that quantitatively accounts for the terminal velocity of a cylindrical magnet falling through a long copper or aluminum pipe. The experiment and the theory are a dramatic illustration of Faraday's and Lenz's laws.
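The force balance behind the terminal velocity can be sketched directly. The detailed expression for the eddy-current drag coefficient in terms of pipe conductivity, wall thickness, and magnet geometry is in the article; the numbers below are hypothetical, used only to show the balance between gravity and a drag force linear in velocity.

```python
# Minimal sketch of the model's force balance: magnetic braking force F = -k*v
# balances gravity at terminal velocity, so m*g = k*v_t.
g = 9.81   # m/s^2
m = 0.01   # kg, hypothetical magnet mass
k = 0.05   # kg/s, hypothetical eddy-current drag coefficient

v_terminal = m * g / k   # m/s
```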
EnviroLand: A Simple Computer Program for Quantitative Stream Assessment.
ERIC Educational Resources Information Center
Dunnivant, Frank; Danowski, Dan; Timmens-Haroldson, Alice; Newman, Meredith
2002-01-01
Introduces the EnviroLand computer program, which features lab simulations of theoretical calculations for quantitative analysis and environmental chemistry, and fate and transport models. Uses the program to demonstrate the nature of linear and nonlinear equations. (Author/YDS)
ERIC Educational Resources Information Center
Hannan, Michael T.
This document is part of a series of chapters described in SO 011 759. Stochastic models for the sociological analysis of change and the change process in quantitative variables are presented. The author lays groundwork for the statistical treatment of simple stochastic differential equations (SDEs) and discusses some of the continuities of…
The attentional drift-diffusion model extends to simple purchasing decisions.
Krajbich, Ian; Lu, Dingchao; Camerer, Colin; Rangel, Antonio
2012-01-01
How do we make simple purchasing decisions (e.g., whether or not to buy a product at a given price)? Previous work has shown that the attentional drift-diffusion model (aDDM) can provide accurate quantitative descriptions of the psychometric data for binary and trinary value-based choices, and of how the choice process is guided by visual attention. Here we extend the aDDM to the case of purchasing decisions, and test it using an eye-tracking experiment. We find that the model also provides a reasonably accurate quantitative description of the relationship between choice, reaction time, and visual fixations using parameters that are very similar to those that best fit the previous data. The only critical difference is that the choice biases induced by the fixations are about half as big in purchasing decisions as in binary choices. This suggests that a similar computational process is used to make binary choices, trinary choices, and simple purchasing decisions.
NASA Astrophysics Data System (ADS)
Bier, Martin; Brak, Bastiaan
2015-04-01
In the Netherlands there has been nationwide vaccination against the measles since 1976. However, in small clustered communities of orthodox Protestants there is widespread refusal of the vaccine. After 1976, three large outbreaks with about 3000 reported cases of the measles have occurred among these orthodox Protestants. The outbreaks appear to occur about every twelve years. We show how a simple Kermack-McKendrick-like model can quantitatively account for the periodic outbreaks. Approximate analytic formulae to connect the period, size, and outbreak duration are derived. With an enhanced model we take the latency period into account. We also expand the model to follow how different age groups are affected. Like other researchers using other methods, we conclude that large-scale underreporting of the disease must occur.
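A Kermack-McKendrick (SIR) model of the kind discussed above can be sketched with a short forward-Euler integration. The transmission and recovery rates here are illustrative round numbers, not the values fitted to the Dutch outbreak data.

```python
# Minimal SIR sketch: dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
# dR/dt = gamma*I, with S, I, R as population fractions.
def sir(beta, gamma, s0, i0, r0, dt=0.01, steps=10000):
    s, i, r = s0, i0, r0
    for _ in range(steps):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
    return s, i, r

# Illustrative rates giving R0 = beta/gamma = 3; one infected per thousand.
s, i, r = sir(beta=3.0, gamma=1.0, s0=0.999, i0=0.001, r0=0.0)
```

The epidemic runs its course and most of the initially susceptible fraction ends up recovered; in the paper's setting, replenishment of susceptibles by births in the unvaccinated community is what makes the outbreaks recur.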
SOME USES OF MODELS OF QUANTITATIVE GENETIC SELECTION IN SOCIAL SCIENCE.
Weight, Michael D; Harpending, Henry
2017-01-01
The theory of selection of quantitative traits is widely used in evolutionary biology, agriculture and other related fields. The fundamental model known as the breeder's equation is simple, robust over short time scales, and it is often possible to estimate plausible parameters. In this paper it is suggested that the results of this model provide useful yardsticks for the description of social traits and the evaluation of transmission models. The differences on a standard personality test between samples of Old Order Amish and Indiana rural young men from the same county and the decline of homicide in Medieval Europe are used as illustrative examples of the overall approach. It is shown that the decline of homicide is unremarkable under a threshold model while the differences between rural Amish and non-Amish young men are too large to be a plausible outcome of simple genetic selection in which assortative mating by affiliation is equivalent to truncation selection.
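The breeder's equation named in the abstract is simple enough to state directly: the per-generation response R equals the heritability h² times the selection differential S. The numbers below are illustrative, not taken from the paper.

```python
# The breeder's equation, R = h^2 * S: predicted per-generation change in the
# population mean of a quantitative trait.
def response_to_selection(h2, S):
    return h2 * S

# e.g. heritability 0.4 and selected parents 1.0 SD above the mean
# (hypothetical values): the offspring mean shifts by 0.4 SD.
R = response_to_selection(h2=0.4, S=1.0)
```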
Using Algorithms in Solving Synapse Transmission Problems.
ERIC Educational Resources Information Center
Stencel, John E.
1992-01-01
Explains how a simple three-step algorithm can aid college students in solving synapse transmission problems. Reports that not all of the students completely understood the algorithm, but that many learn a simple working model of synaptic transmission and understand quantitatively why an impulse will pass across a synapse. Students also see…
Statistical Mechanics of US Supreme Court
NASA Astrophysics Data System (ADS)
Lee, Edward; Broedersz, Chase; Bialek, William; Biophysics Theory Group Team
2014-03-01
We build simple models for the distribution of voting patterns in a group, using the Supreme Court of the United States as an example. The least structured, or maximum entropy, model that is consistent with the observed pairwise correlations among justices' votes is equivalent to an Ising spin glass. While all correlations (perhaps surprisingly) are positive, the effective pairwise interactions in the spin glass model have both signs, recovering some of our intuition that justices on opposite sides of the ideological spectrum should have a negative influence on one another. Despite the competing interactions, a strong tendency toward unanimity emerges from the model, and this agrees quantitatively with the data. The model shows that voting patterns are organized in a relatively simple ``energy landscape,'' correctly predicts the extent to which each justice is correlated with the majority, and gives us a measure of the influence that justices exert on one another. These results suggest that simple models, grounded in statistical physics, can capture essential features of collective decision making quantitatively, even in a complex political context. Funded by National Science Foundation Grants PHY-0957573 and CCF-0939370, WM Keck Foundation, Lewis-Sigler Fellowship, Burroughs Wellcome Fund, and Winston Foundation.
NASA Astrophysics Data System (ADS)
Reineker, P.; Kenkre, V. M.; Kühne, R.
1981-08-01
A quantitative comparison is given of a simple theoretical prediction for the drift mobility of photo-electrons in organic molecular crystals, calculated within the model of coupled band-like and hopping motion, with the experiments of Schein et al. and Karl et al. in naphthalene.
Strengthening Student Engagement with Quantitative Subjects in a Business Faculty
ERIC Educational Resources Information Center
Warwick, Jon; Howard, Anna
2014-01-01
This paper reflects on the results of research undertaken at a large UK university relating to the teaching of quantitative subjects within a Business Faculty. It builds on a simple model of student engagement and, through the description of three case studies, describes research undertaken and developments implemented to strengthen aspects of the…
A color prediction model for imagery analysis
NASA Technical Reports Server (NTRS)
Skaley, J. E.; Fisher, J. R.; Hardy, E. E.
1977-01-01
A simple model has been devised to selectively construct several points within a scene using multispectral imagery. The model correlates black-and-white density values to color components of diazo film so as to maximize the color contrast of two or three points per composite. The CIE (Commission Internationale de l'Eclairage) color coordinate system is used as a quantitative reference to locate these points in color space. Superimposed on this quantitative reference is a perceptional framework which functionally contrasts color values in a psychophysical sense. This methodology permits a more quantitative approach to the manual interpretation of multispectral imagery while resulting in improved accuracy and lower costs.
Ultrasound hepatic/renal ratio and hepatic attenuation rate for quantifying liver fat content.
Zhang, Bo; Ding, Fang; Chen, Tian; Xia, Liang-Hua; Qian, Juan; Lv, Guo-Yi
2014-12-21
To establish and validate a simple quantitative assessment method for nonalcoholic fatty liver disease (NAFLD) based on a combination of the ultrasound hepatic/renal ratio and hepatic attenuation rate. A total of 170 subjects were enrolled in this study. All subjects were examined by ultrasound and ¹H-magnetic resonance spectroscopy (¹H-MRS) on the same day. The ultrasound hepatic/renal echo-intensity ratio and ultrasound hepatic echo-intensity attenuation rate were obtained from ordinary ultrasound images using the MATLAB program. Correlation analysis revealed that the ultrasound hepatic/renal ratio and hepatic echo-intensity attenuation rate were significantly correlated with ¹H-MRS liver fat content (ultrasound hepatic/renal ratio: r = 0.952, P = 0.000; hepatic echo-intensity attenuation rate: r = 0.850, P = 0.000). The equation for predicting liver fat content by ultrasound (quantitative ultrasound model) is: liver fat content (%) = 61.519 × ultrasound hepatic/renal ratio + 167.701 × hepatic echo-intensity attenuation rate - 26.736. Spearman correlation analysis revealed that the liver fat content ratio of the quantitative ultrasound model was positively correlated with serum alanine aminotransferase, aspartate aminotransferase, and triglyceride, but negatively correlated with high density lipoprotein cholesterol. Receiver operating characteristic curve analysis revealed that the optimal cutoff for diagnosing fatty liver was 9.15% in the quantitative ultrasound model. Furthermore, in the quantitative ultrasound model, fatty liver diagnostic sensitivity and specificity were 94.7% and 100.0%, respectively, showing that the quantitative ultrasound model was better than conventional ultrasound methods or the combined ultrasound hepatic/renal ratio and hepatic echo-intensity attenuation rate. For ¹H-MRS liver fat content < 15%, the sensitivity and specificity of the quantitative ultrasound model were 81.4% and 100%, again better than the other methods. The quantitative ultrasound model is a simple, low-cost, and sensitive tool that can accurately assess hepatic fat content in clinical practice. It provides an easy and effective parameter for the early diagnosis of mild hepatic steatosis and evaluation of the efficacy of NAFLD treatment.
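The regression equation and diagnostic cutoff above are given explicitly in the abstract, so the model can be transcribed directly. The example measurements passed in at the end are hypothetical, for illustration only.

```python
# The paper's quantitative ultrasound model, transcribed from the abstract:
# liver fat (%) = 61.519 * (hepatic/renal ratio)
#               + 167.701 * (attenuation rate) - 26.736
def liver_fat_percent(hepatic_renal_ratio, attenuation_rate):
    return 61.519 * hepatic_renal_ratio + 167.701 * attenuation_rate - 26.736

FATTY_LIVER_CUTOFF = 9.15  # percent, the abstract's ROC-derived cutoff

# Hypothetical measurements (not from the study):
fat = liver_fat_percent(hepatic_renal_ratio=1.2, attenuation_rate=0.05)
is_fatty = fat > FATTY_LIVER_CUTOFF
```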
Analysis and Modeling of Ground Operations at Hub Airports
NASA Technical Reports Server (NTRS)
Atkins, Stephen (Technical Monitor); Andersson, Kari; Carr, Francis; Feron, Eric; Hall, William D.
2000-01-01
Building simple and accurate models of hub airports can considerably help one understand airport dynamics, and may provide quantitative estimates of operational airport improvements. In this paper, three models are proposed to capture the dynamics of busy hub airport operations. Two simple queuing models are introduced to capture the taxi-out and taxi-in processes. An integer programming model aimed at representing airline decision-making attempts to capture the dynamics of the aircraft turnaround process. These models can be applied for predictive purposes. They may also be used to evaluate control strategies for improving overall airport efficiency.
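The abstract says two simple queuing models capture the taxi-out and taxi-in processes but does not specify their form. As a sketch of the kind of first-cut calculation such models enable, here are the standard steady-state M/M/1 formulas, which are a common starting point for runway and taxiway queues; the rates used are made-up numbers.

```python
# Steady-state M/M/1 queue: utilization rho = lambda/mu, mean number in
# system L = rho/(1-rho), mean time in system W = 1/(mu - lambda).
def mm1_metrics(arrival_rate, service_rate):
    assert arrival_rate < service_rate, "queue is unstable if lambda >= mu"
    rho = arrival_rate / service_rate
    L = rho / (1 - rho)
    W = 1 / (service_rate - arrival_rate)
    return L, W

# Hypothetical: 20 departures/hr feeding a runway that serves 30/hr.
L, W = mm1_metrics(arrival_rate=20.0, service_rate=30.0)
```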
Building a Database for a Quantitative Model
NASA Technical Reports Server (NTRS)
Kahn, C. Joseph; Kleinhammer, Roger
2014-01-01
A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does nothing to link the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database, but a model does not require a large, enterprise-level database with dedicated developers and administrators; a database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data is used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
Effect of quantum nuclear motion on hydrogen bonding
NASA Astrophysics Data System (ADS)
McKenzie, Ross H.; Bekker, Christiaan; Athokpam, Bijyalaxmi; Ramesh, Sai G.
2014-05-01
This work considers how the properties of hydrogen bonded complexes, X-H⋯Y, are modified by the quantum motion of the shared proton. Using a simple two-diabatic state model Hamiltonian, the analysis of the symmetric case, where the donor (X) and acceptor (Y) have the same proton affinity, is carried out. For quantitative comparisons, a parametrization specific to the O-H⋯O complexes is used. The vibrational energy levels of the one-dimensional ground state adiabatic potential of the model are used to make quantitative comparisons with a vast body of condensed phase data, spanning a donor-acceptor separation (R) range of about 2.4 - 3.0 Å, i.e., from strong to weak hydrogen bonds. The position of the proton (which determines the X-H bond length) and its longitudinal vibrational frequency, along with the isotope effects in both are described quantitatively. An analysis of the secondary geometric isotope effect, using a simple extension of the two-state model, yields an improved agreement of the predicted variation with R of frequency isotope effects. The role of bending modes is also considered: their quantum effects compete with those of the stretching mode for weak to moderate H-bond strengths. In spite of the economy in the parametrization of the model used, it offers key insights into the defining features of H-bonds, and semi-quantitatively captures several trends.
NASA Astrophysics Data System (ADS)
Sherrington, David; Davison, Lexie; Buhot, Arnaud; Garrahan, Juan P.
2002-02-01
We report a study of a series of simple model systems with only non-interacting Hamiltonians, and hence simple equilibrium thermodynamics, but with constrained dynamics of a type initially suggested by foams and idealized covalent glasses. We demonstrate that macroscopic dynamical features characteristic of real and more complex model glasses, such as two-time decays in energy and auto-correlation functions, arise from the dynamics, and we explain them qualitatively and quantitatively in terms of annihilation-diffusion concepts and theory. The comparison is with strong glasses. We also consider fluctuation-dissipation relations and demonstrate subtleties of interpretation. We find no breakdown of the fluctuation-dissipation theorem (FDT) when the correct normalization is chosen.
Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.
2015-01-01
To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT (“face patches”) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. 
To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
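The winning linking hypothesis above, a learned weighted sum of distributed IT firing rates, is itself a one-line computation. The rates and weights below are toy values invented for illustration; in the study, the weights are learned per task over roughly 60,000 IT neurons' mean rates.

```python
# Sketch of the linear-readout linking hypothesis: a task decision is a
# learned weighted sum of mean firing rates, thresholded to pick a category.
def linear_readout(firing_rates, weights, threshold=0.0):
    score = sum(w * r for w, r in zip(weights, firing_rates))
    return "A" if score > threshold else "B"

rates = [12.0, 3.5, 8.0, 0.5]    # mean rates (spikes/s) of 4 toy "neurons"
weights = [0.2, -0.5, 0.1, 0.3]  # hypothetical learned per-neuron weights
choice = linear_readout(rates, weights)
```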
A comparison of quantitative methods for clinical imaging with hyperpolarized ¹³C-pyruvate.
Daniels, Charlie J; McLean, Mary A; Schulte, Rolf F; Robb, Fraser J; Gill, Andrew B; McGlashan, Nicholas; Graves, Martin J; Schwaiger, Markus; Lomas, David J; Brindle, Kevin M; Gallagher, Ferdia A
2016-04-01
Dissolution dynamic nuclear polarization (DNP) enables the metabolism of hyperpolarized ¹³C-labelled molecules, such as the conversion of [1-¹³C]pyruvate to [1-¹³C]lactate, to be dynamically and non-invasively imaged in tissue. Imaging of this exchange reaction in animal models has been shown to detect early treatment response and correlate with tumour grade. The first human DNP study has recently been completed, and, for widespread clinical translation, simple and reliable methods are necessary to accurately probe the reaction in patients. However, there is currently no consensus on the most appropriate method to quantify this exchange reaction. In this study, an in vitro system was used to compare several kinetic models, as well as simple model-free methods. Experiments were performed using a clinical hyperpolarizer, a human 3 T MR system, and spectroscopic imaging sequences. The quantitative methods were compared in vivo by using subcutaneous breast tumours in rats to examine the effect of pyruvate inflow. The two-way kinetic model was the most accurate method for characterizing the exchange reaction in vitro, and the incorporation of a Heaviside step inflow profile was best able to describe the in vivo data. The lactate time-to-peak and the lactate-to-pyruvate area under the curve ratio were simple model-free approaches that accurately represented the full reaction, with the time-to-peak method performing indistinguishably from the best kinetic model. Finally, extracting data from a single pixel was a robust and reliable surrogate of the whole region of interest. This work has identified appropriate quantitative methods for future work in the analysis of human hyperpolarized ¹³C data. © 2016 The Authors. NMR in Biomedicine published by John Wiley & Sons Ltd.
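The core of a two-way exchange model like the one favored above is a pair of coupled rate equations for pyruvate (P) and lactate (L). The sketch below omits the relaxation and inflow terms the full model includes, and the rate constants are illustrative, not fitted values.

```python
# Two-way exchange sketch: dP/dt = -kPL*P + kLP*L, dL/dt = kPL*P - kLP*L
# (relaxation and the Heaviside inflow profile are omitted here).
def two_way_exchange(p0, l0, kPL, kLP, dt=0.001, steps=5000):
    p, l = p0, l0
    for _ in range(steps):
        flux = kPL * p - kLP * l      # net pyruvate -> lactate conversion
        p, l = p - flux * dt, l + flux * dt
    return p, l

# Illustrative rate constants (per second), all signal starting as pyruvate.
p, l = two_way_exchange(p0=1.0, l0=0.0, kPL=0.5, kLP=0.2)
```

Without relaxation the total signal is conserved and the system relaxes toward the ratio set by kLP/kPL; the fitted model additionally decays both pools with their T1 relaxation rates.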
De Benedetti, Pier G; Fanelli, Francesca
2018-03-21
Simple comparative correlation analyses and quantitative structure-kinetics relationship (QSKR) models highlight the interplay of kinetic rates and binding affinity as an essential feature in drug design and discovery. The choice of the molecular series, and their structural variations, used in QSKR modeling is fundamental to understanding the mechanistic implications of ligand and/or drug-target binding and/or unbinding processes. Here, we discuss the implications of linear correlations between kinetic rates and binding affinity constants and the relevance of the computational approaches to QSKR modeling. Copyright © 2018 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Lugt, Karel Vander
1993-01-01
Develops a simple core-halo model of a galaxy that exhibits the main features of observed rotation curves and quantitatively illustrates the need to postulate halos of dark matter. Uses only elementary mechanics. (Author/MVL)
Xu, Y.; Xia, J.; Miller, R.D.
2006-01-01
Multichannel analysis of surface waves is a developing method widely used in shallow subsurface investigations. The field procedures and related parameters are very important for successful applications. Among these parameters, the source-receiver offset range is seldom discussed in theory and is normally determined by empirical or semi-quantitative methods in current practice. This paper discusses the problem from a theoretical perspective. A formula for quantitatively evaluating the offset range for a layered homogeneous elastic model was developed. The analytical results based on simple models and experimental data demonstrate that the formula is correct for surface wave surveys for near-surface applications. © 2005 Elsevier B.V. All rights reserved.
Isothermal Circumstellar Dust Shell Model for Teaching
ERIC Educational Resources Information Center
Robinson, G.; Towers, I. N.; Jovanoski, Z.
2009-01-01
We introduce a model of radiative transfer in circumstellar dust shells. By assuming that the shell is both isothermal and its thickness is small compared to its radius, the model is simple enough for students to grasp and yet still provides a quantitative description of the relevant physical features. The isothermal model can be used in a…
Das, Rudra Narayan; Roy, Kunal; Popelier, Paul L A
2015-11-01
The present study explores the chemical attributes of diverse ionic liquids responsible for their cytotoxicity in a rat leukemia cell line (IPC-81) by developing predictive classification as well as regression-based mathematical models. Simple and interpretable descriptors derived from a two-dimensional representation of the chemical structures along with quantum topological molecular similarity indices have been used for model development, employing unambiguous modeling strategies that strictly obey the guidelines of the Organization for Economic Co-operation and Development (OECD) for quantitative structure-activity relationship (QSAR) analysis. The structure-toxicity relationships that emerged from both classification and regression-based models were in accordance with the findings of some previous studies. The models suggested that the cytotoxicity of ionic liquids is dependent on the cationic surfactant action, long alkyl side chains, cationic lipophilicity as well as aromaticity, the presence of a dialkylamino substituent at the 4-position of the pyridinium nucleus and a bulky anionic moiety. The models have been transparently presented in the form of equations, thus allowing their easy transferability in accordance with the OECD guidelines. The models have also been subjected to rigorous validation tests proving their predictive potential and can hence be used for designing novel and "greener" ionic liquids. The major strength of the present study lies in the use of a diverse and large dataset, use of simple reproducible descriptors and compliance with the OECD norms. Copyright © 2015 Elsevier Ltd. All rights reserved.
A simple model for pollen-parent fecundity distributions in bee-pollinated forage legume polycrosses
USDA-ARS?s Scientific Manuscript database
Random mating or panmixis is a fundamental assumption in quantitative genetic theory. Random mating is sometimes thought to occur in actual fact although a large body of empirical work shows that this is often not the case in nature. Models have been developed to model many non-random mating phenome...
Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J
2015-09-30
To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ("face patches") did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. Significance statement: We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. 
To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. Copyright © 2015 the authors 0270-6474/15/3513402-17$15.00/0.
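The "learned weighted sum of firing rates" linking hypothesis can be illustrated with a toy decoder; the firing rates, trial counts and per-neuron tuning below are synthetic stand-ins, not the authors' recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for trial-averaged IT firing rates:
# n_trials x n_neurons, two object classes whose mean rates differ.
n_trials, n_neurons = 200, 50
labels = rng.integers(0, 2, n_trials)             # object identity (0 or 1)
tuning = rng.normal(0, 1, n_neurons)              # per-neuron class preference
rates = rng.normal(0, 1, (n_trials, n_neurons)) + np.outer(labels - 0.5, tuning)

# Learn a weighted sum of firing rates by least squares on a training half.
X = np.c_[rates, np.ones(n_trials)]               # append a bias term
train, test = slice(0, 100), slice(100, None)
w, *_ = np.linalg.lstsq(X[train], labels[train] - 0.5, rcond=None)

# The decision is simply the thresholded weighted sum of firing rates.
pred = (X[test] @ w > 0).astype(int)
accuracy = (pred == labels[test]).mean()
```

The point of the sketch is only that a fixed linear readout of distributed rates suffices for such a task, mirroring the paper's "simple weighted sum" hypothesis.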
The time-dependent response of 3- and 5-layer sandwich beams
NASA Technical Reports Server (NTRS)
Hyer, M. W.; Oleksuk, L. S. S.; Bowles, D. E.
1992-01-01
Simple sandwich beam models have been developed to study the effect of the time-dependent constitutive properties of fiber-reinforced polymer matrix composites, considered for use in orbiting precision segmented reflectors, on the overall deformations. The 3- and 5-layer beam models include layers representing the face sheets, the core, and the adhesive. The static elastic deformation response of the sandwich beam models to a midspan point load is studied using the principle of stationary potential energy. In addition to quantitative conclusions, several assumptions are discussed which simplify the analysis for the case of more complicated material models. It is shown that the simple three-layer model is sufficient in many situations.
From themes to hypotheses: following up with quantitative methods.
Morgan, David L
2015-06-01
One important category of mixed-methods research designs consists of quantitative studies that follow up on qualitative research. In this case, the themes that serve as the results from the qualitative methods generate hypotheses for testing through the quantitative methods. That process requires operationalization to translate the concepts from the qualitative themes into quantitative variables. This article illustrates these procedures with examples that range from simple operationalization to the evaluation of complex models. It concludes with an argument for not only following up qualitative work with quantitative studies but also the reverse, and doing so by going beyond integrating methods within single projects to include broader mutual attention from qualitative and quantitative researchers who work in the same field. © The Author(s) 2015.
Modeling of breath methane concentration profiles during exercise on an ergometer*
Szabó, Anna; Unterkofler, Karl; Mochalski, Pawel; Jandacka, Martin; Ruzsanyi, Vera; Szabó, Gábor; Mohácsi, Árpád; Teschl, Susanne; Teschl, Gerald; King, Julian
2016-01-01
We develop a simple three compartment model based on mass balance equations which quantitatively describes the dynamics of breath methane concentration profiles during exercise on an ergometer. With the help of this model it is possible to estimate the endogenous production rate of methane in the large intestine by measuring breath gas concentrations of methane. PMID:26828421
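A generic three-compartment mass-balance model of this kind can be sketched as coupled rate equations; the compartments, rate constants and production term below are illustrative placeholders, not the paper's parameterization.

```python
# Three compartments: gut (CH4 production), blood, alveolar air.
# Rate constants and the production term are illustrative placeholders.
k_gb, k_ba, k_out = 0.5, 1.0, 2.0   # gut->blood, blood->alveoli, exhalation
production = 1.0                     # endogenous CH4 production (arb. units)

def step(gut, blood, alv, dt):
    d_gut = production - k_gb * gut          # mass balance, gut
    d_blood = k_gb * gut - k_ba * blood      # mass balance, blood
    d_alv = k_ba * blood - k_out * alv       # mass balance, alveolar air
    return gut + dt * d_gut, blood + dt * d_blood, alv + dt * d_alv

# Forward-Euler integration to an approximate steady state.
gut = blood = alv = 0.0
for _ in range(20000):
    gut, blood, alv = step(gut, blood, alv, 1e-3)

# At steady state the exhaled flux balances production, which is why breath
# methane can be used to estimate the endogenous production rate.
breath_flux = k_out * alv
```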
Medland, Sarah E; Loesch, Danuta Z; Mdzewski, Bogdan; Zhu, Gu; Montgomery, Grant W; Martin, Nicholas G
2007-01-01
The finger ridge count (a measure of pattern size) is one of the most heritable complex traits studied in humans and has been considered a model human polygenic trait in quantitative genetic analysis. Here, we report the results of the first genome-wide linkage scan for finger ridge count in a sample of 2,114 offspring from 922 nuclear families. Both univariate linkage to the absolute ridge count (a sum of all the ridge counts on all ten fingers), and multivariate linkage analyses of the counts on individual fingers, were conducted. The multivariate analyses yielded significant linkage to 5q14.1 (Logarithm of odds [LOD] = 3.34, pointwise-empirical p-value = 0.00025) that was predominantly driven by linkage to the ring, index, and middle fingers. The strongest univariate linkage was to 1q42.2 (LOD = 2.04, point-wise p-value = 0.002, genome-wide p-value = 0.29). In summary, the combination of univariate and multivariate results was more informative than simple univariate analyses alone. Patterns of quantitative trait loci factor loadings consistent with developmental fields were observed, and the simple pleiotropic model underlying the absolute ridge count was not sufficient to characterize the interrelationships between the ridge counts of individual fingers. PMID:17907812
Mager, P P; Rothe, H
1990-10-01
Multicollinearity of physicochemical descriptors has serious consequences in quantitative structure-activity relationship (QSAR) analysis, such as incorrect estimators and test statistics for the regression coefficients of the ordinary least-squares (OLS) model usually applied to QSARs. Besides diagnosing simple collinearity, principal component regression analysis (PCRA) also allows the diagnosis of various types of multicollinearity. Only if the absolute values of the PCRA estimators are order statistics that decrease monotonically can the effects of multicollinearity be circumvented. Otherwise, obscure phenomena may be observed, such as good data recognition but low predictive power of a QSAR model.
Mingguang, Zhang; Juncheng, Jiang
2008-10-30
Overpressure is an important cause of domino effects in accidents involving chemical process equipment. Damage probability and the corresponding threshold value are two necessary parameters in quantitative risk analysis (QRA) of this phenomenon. Some simple models had been proposed, but they were based on scarce data or oversimplified assumptions. Hence, more data on damage to chemical process equipment were gathered and analyzed, a quantitative relationship between damage probability and degree of equipment damage was built, and reliable probit models were developed for specific categories of chemical process equipment. Finally, the improvements offered by the present models were demonstrated through comparison with models in the literature, taking into account the consistency between models and data and the depth of quantitativeness in QRA.
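Probit models of the kind described map a dose (here peak overpressure) onto a damage probability through the standard normal CDF; the coefficients in this sketch are hypothetical placeholders, not the fitted values reported in the study.

```python
from math import erf, log, sqrt

def damage_probability(overpressure_kpa, a=-10.0, b=2.5):
    """Probit model Y = a + b*ln(P); damage probability = Phi(Y - 5),
    using the conventional probit offset of 5. The coefficients a and b
    are hypothetical, not the values fitted in the study."""
    y = a + b * log(overpressure_kpa)
    return 0.5 * (1.0 + erf((y - 5.0) / sqrt(2.0)))

# Damage probability rises monotonically with peak overpressure.
probs = [damage_probability(p) for p in (50.0, 100.0, 200.0, 400.0)]
```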
Modelling the Active Hearing Process in Mosquitoes
NASA Astrophysics Data System (ADS)
Avitabile, Daniele; Homer, Martin; Jackson, Joe; Robert, Daniel; Champneys, Alan
2011-11-01
A simple microscopic mechanistic model is described of the active amplification within the Johnston's organ of the mosquito species Toxorhynchites brevipalpis. The model is based on the description of the antenna as a forced-damped oscillator coupled to a set of active threads (ensembles of scolopidia) that provide an impulsive force when they twitch. This twitching is in turn controlled by channels that are opened and closed if the antennal oscillation reaches a critical amplitude. The model matches recent experiments both qualitatively and quantitatively. New results are presented using mathematical homogenization techniques to derive a mesoscopic model as a simple oscillator with nonlinear force and damping characteristics. It is shown how the results from this new model closely resemble those from the microscopic model as the number of threads approaches physiologically correct values.
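The forced-damped oscillator with threshold-triggered twitches can be sketched numerically; all parameter values below are illustrative, not fitted to T. brevipalpis.

```python
import math

# Antenna as a forced-damped oscillator; active threads add an impulsive
# kick along the motion whenever the deflection exceeds a critical value.
# All parameter values are illustrative, not fitted to the mosquito.
omega0 = 2 * math.pi      # natural angular frequency
gamma = 0.5               # damping coefficient
f_drive = 0.2             # forcing amplitude (driven at resonance)
x_crit, kick = 0.02, 0.2  # twitch threshold and impulse strength
dt, t_max = 1e-3, 20.0

def peak_amplitude(active):
    x = v = 0.0
    peak = 0.0
    for i in range(int(t_max / dt)):
        t = i * dt
        force = f_drive * math.cos(omega0 * t) - 2 * gamma * v - omega0**2 * x
        if active and abs(x) > x_crit:
            force += kick * math.copysign(1.0, v)  # twitch pushes along motion
        v += dt * force                            # semi-implicit Euler
        x += dt * v
        if t > 10.0:                               # measure after transients
            peak = max(peak, abs(x))
    return peak

amp_passive = peak_amplitude(False)
amp_active = peak_amplitude(True)
```

With the twitches enabled, energy is injected along the direction of motion, so the steady oscillation amplitude exceeds the passive one, which is the essence of active amplification.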
The Pathway for Oxygen: Tutorial Modelling on Oxygen Transport from Air to Mitochondrion
Bassingthwaighte, James B.; Raymond, Gary M.; Dash, Ranjan K.; Beard, Daniel A.; Nolan, Margaret
2016-01-01
The ‘Pathway for Oxygen’ is captured in a set of models describing quantitative relationships between fluxes and driving forces for the flux of oxygen from the external air source to the mitochondrial sink at cytochrome oxidase. The intervening processes involve convection, membrane permeation, diffusion of free and heme-bound O2 and enzymatic reactions. While this system’s basic elements are simple: ventilation, alveolar gas exchange with blood, circulation of the blood, perfusion of an organ, uptake by tissue, and consumption by chemical reaction, integration of these pieces quickly becomes complex. This complexity led us to construct a tutorial on the ideas and principles; these first PathwayO2 models are simple but quantitative and cover: 1) a ‘one-alveolus lung’ with airway resistance, lung volume compliance, 2) bidirectional transport of solute gasses like O2 and CO2, 3) gas exchange between alveolar air and lung capillary blood, 4) gas solubility in blood, and circulation of blood through the capillary syncytium and back to the lung, and 5) blood-tissue gas exchange in capillaries. These open-source models are at Physiome.org and provide background for the many respiratory models there. PMID:26782201
Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C
2011-09-01
Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.
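The bias that motivates the correction can be reproduced in a small simulation: regressing the trait on genotype in a sample restricted to the two phenotypic tails inflates the effect estimate. The sample size, allele frequency and effect size below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a quantitative trait y with an additive genetic effect of g.
n, beta_true = 100_000, 0.5
g = rng.binomial(2, 0.3, n).astype(float)   # genotype coded 0/1/2
y = beta_true * g + rng.normal(0.0, 1.0, n)

def slope(x, t):
    return np.cov(x, t)[0, 1] / np.var(x, ddof=1)

beta_full = slope(g, y)                      # unbiased on the full sample

# Two-tail extreme selection: genotype only the bottom and top 10% of y,
# then (naively) run the same simple linear regression on that subsample.
lo, hi = np.quantile(y, [0.1, 0.9])
sel = (y < lo) | (y > hi)
beta_selected = slope(g[sel], y[sel])        # inflated effect estimate
```

Because selection acts on the dependent variable, the covariance between genotype and trait grows faster than the genotype variance, inflating the naive slope.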
Helping Students Assess the Relative Importance of Different Intermolecular Interactions
ERIC Educational Resources Information Center
Jasien, Paul G.
2008-01-01
A semi-quantitative model has been developed to estimate the relative effects of dispersion, dipole-dipole interactions, and H-bonding on the normal boiling points ("T[subscript b]") for a subset of simple organic systems. The model is based upon a statistical analysis using multiple linear regression on a series of straight-chain organic…
Model of Market Share Affected by Social Media Reputation
NASA Astrophysics Data System (ADS)
Ishii, Akira; Kawahata, Yasuko; Goto, Ujo
This paper presents a proposal for a market theory that takes the effect of social media into account. The standard market-share model in economics is employed as the market theory, and the effect of social media is treated quantitatively using a mathematical model for hit phenomena. Using this model, we can estimate the effect of social media on market share through a simple market model simulation based on our proposed method.
The Attentional Drift Diffusion Model of Simple Perceptual Decision-Making.
Tavares, Gabriela; Perona, Pietro; Rangel, Antonio
2017-01-01
Perceptual decisions requiring the comparison of spatially distributed stimuli that are fixated sequentially might be influenced by fluctuations in visual attention. We used two psychophysical tasks with human subjects to investigate the extent to which visual attention influences simple perceptual choices, and to test the extent to which the attentional Drift Diffusion Model (aDDM) provides a good computational description of how attention affects the underlying decision processes. We find evidence for sizable attentional choice biases and that the aDDM provides a reasonable quantitative description of the relationship between fluctuations in visual attention, choices and reaction times. We also find that exogenous manipulations of attention induce choice biases consistent with the predictions of the model.
The brainstem reticular formation is a small-world, not scale-free, network
Humphries, M.D; Gurney, K; Prescott, T.J
2005-01-01
Recently, it has been demonstrated that several complex systems may have simple graph-theoretic characterizations as so-called ‘small-world’ and ‘scale-free’ networks. These networks have also been applied to the gross neural connectivity between primate cortical areas and the nervous system of Caenorhabditis elegans. Here, we extend this work to a specific neural circuit of the vertebrate brain—the medial reticular formation (RF) of the brainstem—and, in doing so, we have made three key contributions. First, this work constitutes the first model (and quantitative review) of this important brain structure for over three decades. Second, we have developed the first graph-theoretic analysis of vertebrate brain connectivity at the neural network level. Third, we propose simple metrics to quantitatively assess the extent to which the networks studied are small-world or scale-free. We conclude that the medial RF is configured to create small-world (implying coherent rapid-processing capabilities), but not scale-free, type networks under assumptions which are amenable to quantitative measurement. PMID:16615219
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Damao; Wang, Zhien; Heymsfield, Andrew J.
Measurement of ice number concentration in clouds is important but still challenging. Stratiform mixed-phase clouds (SMCs) provide a simple scenario for retrieving ice number concentration from remote sensing measurements. The simple ice generation and growth pattern in SMCs offers opportunities to use cloud radar reflectivity (Ze) measurements and other cloud properties to infer ice number concentration quantitatively. To understand the strong temperature dependency of ice habit and growth rate quantitatively, we develop a 1-D ice growth model to calculate the ice diffusional growth along its falling trajectory in SMCs. The radar reflectivity and fall velocity profiles of ice crystals calculated from the 1-D ice growth model are evaluated with the Atmospheric Radiation Measurement (ARM) Climate Research Facility (ACRF) ground-based high vertical resolution radar measurements. Combining Ze measurements and 1-D ice growth model simulations, we develop a method to retrieve the ice number concentrations in SMCs at given cloud top temperature (CTT) and liquid water path (LWP). The retrieved ice concentrations in SMCs are evaluated with in situ measurements and with a three-dimensional cloud-resolving model simulation with a bin microphysical scheme. These comparisons show that the retrieved ice number concentrations are within an uncertainty of a factor of 2, statistically.
Quantitative metrics for evaluating the phased roll-out of clinical information systems.
Wong, David; Wu, Nicolas; Watkinson, Peter
2017-09-01
We introduce a novel quantitative approach for evaluating the order of roll-out during phased introduction of clinical information systems. Such roll-outs are associated with unavoidable risk due to patients transferring between clinical areas using both the old and new systems. We proposed a simple graphical model of patient flow through a hospital. Using a simple instance of the model, we showed how a roll-out order can be generated by minimising the flow of patients from the new system to the old system. The model was applied to admission and discharge data acquired from 37,080 patient journeys at the Churchill Hospital, Oxford between April 2013 and April 2014. The resulting order was evaluated empirically and found to be acceptable. The development of data-driven approaches to clinical information system roll-out provides insights that may not necessarily be ascertained through clinical judgment alone. Such methods could make a significant contribution to the smooth running of an organisation during the roll-out of a potentially disruptive technology. Unlike previous approaches, which are based on clinical opinion, the approach described here quantitatively assesses the appropriateness of competing roll-out strategies. The data-driven approach was shown to produce strategies that matched clinical intuition and provides a flexible framework that may be used to plan and monitor clinical information system roll-out. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
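The ordering idea can be sketched as a brute-force search over roll-out orders that minimizes transfers from already-migrated wards to not-yet-migrated ones; the ward names and flow counts below are hypothetical, not the Churchill Hospital data.

```python
from itertools import permutations

# Hypothetical patient-transfer counts: flow[(a, b)] is the number of
# patients moving from ward a to ward b over the planning period.
flow = {
    ("ED", "AcuteMed"): 120, ("ED", "Cardio"): 40,
    ("AcuteMed", "Cardio"): 30, ("AcuteMed", "Discharge"): 100,
    ("Cardio", "Discharge"): 60,
}
wards = ["ED", "AcuteMed", "Cardio", "Discharge"]

def new_to_old_cost(order):
    """Count transfers from an already-migrated ward to a not-yet-migrated
    one, i.e. patients moving from the new system back onto the old one."""
    pos = {w: i for i, w in enumerate(order)}
    return sum(n for (a, b), n in flow.items() if pos[a] < pos[b])

best = min(permutations(wards), key=new_to_old_cost)
```

For purely downstream flow like this toy example, the search recovers the intuitive strategy of migrating the most downstream wards first, so no patient ever moves from the new system back onto the old one.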
Cognitive niches: an ecological model of strategy selection.
Marewski, Julian N; Schooler, Lael J
2011-07-01
How do people select among different strategies to accomplish a given task? Across disciplines, the strategy selection problem represents a major challenge. We propose a quantitative model that predicts how selection emerges through the interplay among strategies, cognitive capacities, and the environment. This interplay carves out for each strategy a cognitive niche, that is, a limited number of situations in which the strategy can be applied, simplifying strategy selection. To illustrate our proposal, we consider selection in the context of 2 theories: the simple heuristics framework and the ACT-R (adaptive control of thought-rational) architecture of cognition. From the heuristics framework, we adopt the thesis that people make decisions by selecting from a repertoire of simple decision strategies that exploit regularities in the environment and draw on cognitive capacities, such as memory and time perception. ACT-R provides a quantitative theory of how these capacities adapt to the environment. In 14 simulations and 10 experiments, we consider the choice between strategies that operate on the accessibility of memories and those that depend on elaborate knowledge about the world. Based on Internet statistics, our model quantitatively predicts people's familiarity with and knowledge of real-world objects, the distributional characteristics of the associated speed of memory retrieval, and the cognitive niches of classic decision strategies, including those of the fluency, recognition, integration, lexicographic, and sequential-sampling heuristics. In doing so, the model specifies when people will be able to apply different strategies and how accurate, fast, and effortless people's decisions will be.
Kern, Susanne; Singer, Heinz; Hollender, Juliane; Schwarzenbach, René P; Fenner, Kathrin
2011-04-01
Transformation products (TPs) of chemicals released to soil, for example, pesticides, are regularly detected in surface and groundwater with some TPs even dominating observed pesticide levels. Given the large number of TPs potentially formed in the environment, straightforward prioritization methods based on available data and simple, evaluative models are required to identify TPs with a high aquatic exposure potential. While different such methods exist, none of them has so far been systematically evaluated against field data. Using a dynamic multimedia, multispecies model for TP prioritization, we compared the predicted relative surface water exposure potential of pesticides and their TPs with experimental data for 16 pesticides and 46 TPs measured in a small river draining a Swiss agricultural catchment. Twenty TPs were determined quantitatively using solid-phase extraction liquid chromatography mass spectrometry (SPE-LC-MS/MS), whereas the remaining 26 TPs could only be detected qualitatively because of the lack of analytical reference standards. Accordingly, the two sets of TPs were used for quantitative and qualitative model evaluation, respectively. Quantitative comparison of predicted with measured surface water exposure ratios for 20 pairs of TPs and parent pesticides indicated agreement within a factor of 10, except for chloridazon-desphenyl and chloridazon-methyl-desphenyl. The latter two TPs were found to be present in elevated concentrations during baseflow conditions and in groundwater samples across Switzerland, pointing toward high concentrations in exfiltrating groundwater. A simple leaching relationship was shown to qualitatively agree with the observed baseflow concentrations and to thus be useful in identifying TPs for which the simple prioritization model might underestimate actual surface water concentrations. 
Application of the model to the 26 qualitatively analyzed TPs showed that most of those TPs categorized as exhibiting a high aquatic exposure potential could be confirmed to be present in the majority of water samples investigated. On the basis of these results, we propose a generally applicable, model-based approach to identify those TPs of soil-applied organic contaminants that exhibit a high aquatic exposure potential to prioritize them for higher-tier, experimental investigations.
Propensity to spending of an average consumer over a brief period
NASA Astrophysics Data System (ADS)
De Luca, Roberto; Di Mauro, Marco; Falzarano, Angelo; Naddeo, Adele
2016-08-01
Understanding consumption dynamics and its impact on the whole economy and on welfare within the present economic crisis is not an easy task. Indeed, the level of consumer demand for different goods varies with prices, consumer incomes and demographic factors. Furthermore, a crisis may trigger different behaviors, which result in distortions and amplification effects. In the present work we propose a simple model to quantitatively describe the time evolution, over a brief period, of the amount of money an average consumer decides to spend, depending on his/her available budget. A simple hydrodynamical analog of the model is discussed. Finally, perspectives of this work are briefly outlined.
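A minimal sketch of the hydrodynamical analog treats the budget as a reservoir filled by income and drained by spending proportional to its level; the income and propensity values below are illustrative, not estimates from the paper.

```python
# Hydrodynamic analog: the available budget M is a reservoir filled by a
# constant income inflow and drained by spending proportional to its level:
# dM/dt = income - propensity * M.  Parameter values are illustrative.
income = 1500.0       # inflow per month
propensity = 0.6      # fraction of the current budget spent per month

def simulate(m0, months, dt=0.01):
    m = m0
    for _ in range(int(months / dt)):
        m += dt * (income - propensity * m)   # forward Euler step
    return m

m_final = simulate(m0=0.0, months=24)
m_star = income / propensity                   # steady-state budget level
```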
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shumilin, V. P.; Shumilin, A. V.; Shumilin, N. V., E-mail: vladimirshumilin@yahoo.com
2015-11-15
The paper is devoted to comparison of experimental data with theoretical predictions concerning the dependence of the current of accelerated ions on the operating voltage of a Hall thruster with an anode layer. The error made in the paper published by the authors in Plasma Phys. Rep. 40, 229 (2014) occurred because of a misprint in the Encyclopedia of Low-Temperature Plasma. In the present paper, this error is corrected. It is shown that the simple model proposed in the above-mentioned paper is in qualitative and quantitative agreement with experimental results.
Ancient Paradoxes Can Extend Mathematical Thinking
ERIC Educational Resources Information Center
Czocher, Jennifer A.; Moss, Diana L.
2017-01-01
This article presents the Snail problem, a relatively simple challenge about motion that offers engaging extensions involving the notion of infinity. It encourages students in grades 5-9 to connect mathematics learning to logic, history, and philosophy through analyzing the problem, making sense of quantitative relationships, and modeling with…
NASA Astrophysics Data System (ADS)
Wang, Pin; Bista, Rajan K.; Khalbuss, Walid E.; Qiu, Wei; Staton, Kevin D.; Zhang, Lin; Brentnall, Teresa A.; Brand, Randall E.; Liu, Yang
2011-03-01
Alterations in nuclear architecture are the hallmark diagnostic characteristic of cancer cells. In this work, we show that the nuclear architectural characteristics quantified by spatial-domain low-coherence quantitative phase microscopy (SL-QPM) are more sensitive for the identification of cancer cells than conventional cytopathology. We demonstrated the importance of nuclear architectural characteristics in both an animal model of intestinal carcinogenesis (the APC/Min mouse) and human cytology specimens with colorectal cancer by identifying cancer in cytologically noncancerous-appearing cells. The determination of nanoscale nuclear architecture using this simple and practical optical instrument is a significant advance toward cancer diagnosis.
Falter, Christian; Ellinger, Dorothea; von Hülsen, Behrend; Heim, René; Voigt, Christian A.
2015-01-01
The outwardly directed cell wall and associated plasma membrane of epidermal cells represent the first layers of plant defense against intruding pathogens. Cell wall modifications and the formation of defense structures at sites of attempted pathogen penetration are decisive for plant defense. A precise isolation of these stress-induced structures would allow a specific analysis of regulatory mechanism and cell wall adaption. However, methods for large-scale epidermal tissue preparation from the model plant Arabidopsis thaliana, which would allow proteome and cell wall analysis of complete, laser-microdissected epidermal defense structures, have not been provided. We developed the adhesive tape – liquid cover glass technique (ACT) for simple leaf epidermis preparation from A. thaliana, which is also applicable on grass leaves. This method is compatible with subsequent staining techniques to visualize stress-related cell wall structures, which were precisely isolated from the epidermal tissue layer by laser microdissection (LM) coupled to laser pressure catapulting. We successfully demonstrated that these specific epidermal tissue samples could be used for quantitative downstream proteome and cell wall analysis. The development of the ACT for simple leaf epidermis preparation and the compatibility to LM and downstream quantitative analysis opens new possibilities in the precise examination of stress- and pathogen-related cell wall structures in epidermal cells. Because the developed tissue processing is also applicable on A. thaliana, well-established, model pathosystems that include the interaction with powdery mildews can be studied to determine principal regulatory mechanisms in plant–microbe interaction with their potential outreach into crop breeding. PMID:25870605
Combinational Reasoning of Quantitative Fuzzy Topological Relations for Simple Fuzzy Regions
Liu, Bo; Li, Dajun; Xia, Yuanping; Ruan, Jian; Xu, Lili; Wu, Huanyi
2015-01-01
In recent years, formalization and reasoning of topological relations have become a hot topic as a means to generate knowledge about the relations between spatial objects at the conceptual and geometrical levels. These mechanisms have been widely used in spatial data query, spatial data mining, evaluation of equivalence and similarity in a spatial scene, as well as for consistency assessment of the topological relations of multi-resolution spatial databases. The concept of computational fuzzy topological space is applied to simple fuzzy regions to efficiently and more accurately solve fuzzy topological relations. Thus, extending the existing research and improving upon the previous work, this paper presents a new method to describe fuzzy topological relations between simple spatial regions in Geographic Information Sciences (GIS) and Artificial Intelligence (AI). First, we propose new definitions for simple fuzzy line segments and simple fuzzy regions based on computational fuzzy topology. Then, based on these definitions, we propose a new combinational reasoning method to compute the topological relations between simple fuzzy regions. Moreover, this study found that there are (1) 23 different topological relations between a simple crisp region and a simple fuzzy region and (2) 152 different topological relations between two simple fuzzy regions. Finally, we discuss several examples to demonstrate the validity of the new method; comparisons with existing fuzzy models show that the proposed method is more expressive, as it can compute relations that the existing models cannot. PMID:25775452
Quantiprot - a Python package for quantitative analysis of protein sequences.
Konopka, Bogumił M; Marciniak, Marta; Dyrka, Witold
2017-07-17
The field of protein sequence analysis is dominated by tools rooted in substitution matrices and alignments. A complementary approach is provided by methods of quantitative characterization. A major advantage of this approach is that quantitative properties define a multidimensional solution space in which sequences can be related to each other and differences can be meaningfully interpreted. Quantiprot is a software package in Python which provides a simple and consistent interface to multiple methods for quantitative characterization of protein sequences. The package can be used to calculate dozens of characteristics directly from sequences or using physico-chemical properties of amino acids. Besides basic measures, Quantiprot performs quantitative analysis of recurrence and determinism in the sequence, calculates the distribution of n-grams and computes the Zipf's law coefficient. We propose three main fields of application of the Quantiprot package. First, quantitative characteristics can be used in alignment-free similarity searches and in clustering of large and/or divergent sequence sets. Second, a feature space defined by quantitative properties can be used in comparative studies of protein families and organisms. Third, the feature space can be used for evaluating generative models, where a large number of sequences generated by a model can be compared to actually observed sequences.
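Two of the characteristics mentioned above, n-gram distributions and the Zipf's law coefficient, are simple enough to sketch in a few lines of standard-library Python. This is an illustrative reimplementation, not Quantiprot's actual API (the function names here are our own):

```python
from collections import Counter
import math

def ngram_counts(seq, n):
    """Count overlapping n-grams in a protein sequence string."""
    return Counter(seq[i:i + n] for i in range(len(seq) - n + 1))

def zipf_coefficient(counts):
    """Fit f(r) ~ C * r^(-s) by least squares on log-rank vs. log-frequency;
    needs at least two distinct ranks."""
    freqs = sorted(counts.values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope  # the Zipf exponent s

print(ngram_counts("ABABAB", 2))  # Counter({'AB': 3, 'BA': 2})
```

In Quantiprot itself these quantities come pre-packaged alongside the recurrence/determinism measures and amino-acid property scales.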
Biomat development in soil treatment units for on-site wastewater treatment.
Winstanley, H F; Fowler, A C
2013-10-01
We provide a simple mathematical model of the bioremediation of contaminated wastewater leaching into the subsoil below a septic tank percolation system. The model comprises a description of the percolation system's flows, together with equations describing the growth of biomass and the uptake of organic contaminant. By first rendering the model dimensionless, it can be partially solved, providing simple insights into the processes which control the efficacy of the system. In particular, we provide quantitative insight into the effect of a near-surface biomat on subsoil permeability; this can lead to trench ponding, and thus propagation of effluent further down the trench. Using the computed vadose zone flow field, the model can be simply extended to include reactive transport of other contaminants of interest.
DOT National Transportation Integrated Search
2017-06-01
The objective of this study was to develop an objective, quantitative method for evaluating damage to bridge girders by using artificial neural networks (ANNs). This evaluation method, which is a supplement to visual inspection, requires only the res...
The Ether Wind and the Global Positioning System.
ERIC Educational Resources Information Center
Muller, Rainer
2000-01-01
Explains how students can perform a refutation of the ether theory using information from the Global Positioning System (GPS). Discusses the functioning of the GPS, qualitatively describes how position determination would be affected by an ether wind, and illustrates the pertinent ideas with a simple quantitative model. (WRM)
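The scale of the argument is easy to reproduce. With back-of-envelope numbers of our own (not taken from the article), an ether wind of the magnitude of Earth's orbital speed would shift one-way GPS signal travel times by far more than the system's actual timing consistency:

```python
# One-way travel time L/(c - v) differs from L/c, to first order, by dt = L*v/c^2.
c = 2.998e8   # speed of light, m/s
L = 2.02e7    # m, rough GPS satellite orbital radius (assumed figure)
v = 3.0e4     # m/s, Earth's orbital speed, a candidate "ether wind"
dt = L * v / c**2
print(dt)     # on the order of microseconds, while GPS maintains
              # nanosecond-level consistency in all directions
```

A direction-dependent shift of microseconds would be glaringly obvious in GPS operation, which is the core of the refutation the article builds.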
Quantifying Confidence in Model Predictions for Hypersonic Aircraft Structures
2015-03-01
of isolating calibrations of models in the network, segmented and simultaneous calibration are compared using the Kullback-Leibler ...value of θ. While not all test statistics are as simple as measuring goodness or badness of fit, their directional interpretations tend to remain...data quite well, qualitatively. Quantitative goodness-of-fit tests are problematic because they assume a true empirical CDF is being tested or
NASA Astrophysics Data System (ADS)
Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed
2016-03-01
Different chemometric models were applied for the quantitative analysis of amoxicillin (AMX) and flucloxacillin (FLX) in their binary mixtures, namely partial least squares (PLS), spectral residual augmented classical least squares (SRACLS), concentration residual augmented classical least squares (CRACLS) and artificial neural networks (ANNs). All methods were applied with and without a variable selection procedure (genetic algorithm, GA). The methods were used for the quantitative analysis of the drugs in laboratory-prepared mixtures and a real market sample by handling the UV spectral data. More robust and simpler models were obtained by applying GA. The proposed methods were found to be rapid, simple and to require no preliminary separation steps.
ERIC Educational Resources Information Center
Valverde, Juan; This, Herve; Vignolle, Marc
2007-01-01
A simple method for the quantitative determination of photosynthetic pigments extracted from green beans using thin-layer chromatography is proposed. Various extraction methods are compared, and it is shown how a simple flatbed scanner and free software for image processing can give a quantitative determination of pigments. (Contains 5 figures.)
Homeopathic potentization based on nanoscale domains.
Czerlinski, George; Ypma, Tjalling
2011-12-01
The objectives of this study were to present a simple descriptive and quantitative model of how high potencies in homeopathy arise. The model begins with the mechanochemical production of hydrogen and hydroxyl radicals from water and the electronic stabilization of the resulting nanodomains of water molecules. The life of these domains is initially limited to a few days, but may extend to years when the electromagnetic characteristic of a homeopathic agent is copied onto the domains. This information is transferred between the original agent and the nanodomains, and also between previously imprinted nanodomains and new ones. The differential equations previously used to describe these processes are replaced here by exponential expressions, corresponding to simplified model mechanisms. Magnetic stabilization is also involved, since these long-lived domains apparently require the presence of the geomagnetic field. Our model incorporates this factor in the formation of the long-lived compound. Numerical simulation and graphs show that the potentization mechanism can be described quantitatively by a very simplified mechanism. The omitted factors affect only the fine structure of the kinetics. Measurements of pH changes upon absorption of different electromagnetic frequencies indicate that about 400 nanodomains polymerize to form one cooperating unit. Singlet excited states of some compounds lead to dramatic changes in their hydrogen ion dissociation constant, explaining this pH effect and suggesting that homeopathic information is imprinted as higher singlet excited states. A simple description is provided of the process of potentization in homeopathic dilutions. With the exception of minor details, this simple model replicates the results previously obtained from a more complex model. While excited states are short lived in isolated molecules, they become long lived in nanodomains that form coherent cooperative aggregates controlled by the geomagnetic field. 
These domains either slowly emit biophotons or perform specific biochemical work at their target.
System-level modeling of acetone-butanol-ethanol fermentation.
Liao, Chen; Seo, Seung-Oh; Lu, Ting
2016-05-01
Acetone-butanol-ethanol (ABE) fermentation is a metabolic process of clostridia that produces bio-based solvents including butanol. It is enabled by an underlying metabolic reaction network and modulated by cellular gene regulation and environmental cues. Mathematical modeling has served as a valuable strategy to facilitate the understanding, characterization and optimization of this process. In this review, we highlight recent advances in system-level, quantitative modeling of ABE fermentation. We begin with an overview of integrative processes underlying the fermentation. Next we survey modeling efforts including early simple models, models with a systematic metabolic description, and those incorporating metabolism through simple gene regulation. Particular focus is given to a recent system-level model that integrates the metabolic reactions, gene regulation and environmental cues. We conclude by discussing the remaining challenges and future directions towards predictive understanding of ABE fermentation. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Measurement of regional cerebral blood flow with copper-62-PTSM and a three-compartment model.
Okazawa, H; Yonekura, Y; Fujibayashi, Y; Mukai, T; Nishizawa, S; Magata, Y; Ishizu, K; Tamaki, N; Konishi, J
1996-07-01
We evaluated quantitatively 62Cu-labeled pyruvaldehyde bis(N4-methylthiosemicarbazone) copper II (62Cu-PTSM) as a brain perfusion tracer for positron emission tomography (PET). For quantitative measurement, the octanol extraction method is needed to correct for arterial radioactivity in estimating the lipophilic input function, but the procedure is not practical for clinical studies. To measure regional cerebral blood flow (rCBF) by 62Cu-PTSM with simple arterial blood sampling, a standard curve of the octanol extraction ratio and a three-compartment model were applied. We performed both 15O-labeled water PET and 62Cu-PTSM PET with dynamic data acquisition and arterial sampling in six subjects. Data obtained in 10 subjects studied previously were used for the standard octanol extraction curve. Arterial activity was measured and corrected to obtain the true input function using the standard curve. Graphical analysis (Gjedde-Patlak plot) with the data for each subject fitted by a straight regression line suggested that 62Cu-PTSM can be analyzed by the three-compartment model with negligible K4. Using this model, K1-K3 were estimated from curve fitting of the cerebral time-activity curve and the corrected input function. The fractional uptake of 62Cu-PTSM was corrected to rCBF with the individual extraction at steady state calculated from K1-K3. The influx rates (Ki) obtained from three-compartment model and graphical analyses were compared for validation of the model. A comparison of rCBF values obtained from 62Cu-PTSM and 15O-water studies demonstrated excellent correlation. The results suggest the potential feasibility of quantitation of cerebral perfusion with 62Cu-PTSM accompanied by dynamic PET and simple arterial sampling.
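The graphical step can be sketched numerically. With a synthetic input function and an irreversible (negligible K4) tissue curve, both assumed data rather than the study's, the Patlak slope recovers the influx rate Ki:

```python
# Gjedde-Patlak plot: for irreversible uptake, Ct(t)/Cp(t) is linear in the
# "normalized time" (integral of Cp)/Cp, with slope Ki.
import numpy as np

t = np.linspace(0.1, 10, 50)            # minutes
Cp = np.exp(-0.3 * t)                   # hypothetical plasma input function
int_Cp = np.cumsum(Cp) * (t[1] - t[0])  # running integral of Cp
Ki_true, V0 = 0.15, 0.4
Ct = Ki_true * int_Cp + V0 * Cp         # synthetic irreversible tissue curve

x = int_Cp / Cp                         # Patlak abscissa
y = Ct / Cp                             # Patlak ordinate
Ki_est, intercept = np.polyfit(x, y, 1) # slope of the linear phase
print(round(Ki_est, 3))                 # recovers 0.15
```

In the study this Ki is then converted to rCBF using the steady-state extraction computed from K1-K3.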
How human drivers control their vehicle
NASA Astrophysics Data System (ADS)
Wagner, P.
2006-08-01
The data presented here show that human drivers apply a discrete noisy control mechanism to drive their vehicle. A car-following model built on these observations, together with some physical limitations (crash-freeness, acceleration), lead to non-Gaussian probability distributions in the speed difference and distance which are in good agreement with empirical data. All model parameters have a clear physical meaning and can be measured. Despite its apparent complexity, this model is simple to understand and might serve as a starting point to develop even quantitatively correct models.
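A toy version of such a discrete noisy control scheme (our illustration, not Wagner's actual model) updates the follower's acceleration only at random "action points" and enforces crash-freeness:

```python
# Follower holds its acceleration constant between sparse, noisy decision points.
import random

def simulate(T=2000, dt=0.1, v_lead=20.0, seed=1):
    random.seed(seed)
    gap, v, a = 30.0, 20.0, 0.0
    gaps, dvs = [], []
    for _ in range(T):
        if random.random() < 0.02:  # occasional "action point"
            desired = 0.5 * (v_lead - v) + 0.05 * (gap - 25.0)
            a = max(-3.0, min(1.5, desired + random.gauss(0, 0.2)))
        v = max(0.0, v + a * dt)
        gap += (v_lead - v) * dt
        if gap < 2.0:               # crude crash-freeness constraint
            v, a = min(v, v_lead), 0.0
            gap = 2.0
        gaps.append(gap)
        dvs.append(v_lead - v)
    return gaps, dvs

gaps, dvs = simulate()
```

Histogramming `gaps` and `dvs` from such a simulation is how one would compare the resulting non-Gaussian distance and speed-difference distributions against empirical data.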
Operational models of infrastructure resilience.
Alderson, David L; Brown, Gerald G; Carlyle, W Matthew
2015-04-01
We propose a definition of infrastructure resilience that is tied to the operation (or function) of an infrastructure as a system of interacting components and that can be objectively evaluated using quantitative models. Specifically, for any particular system, we use quantitative models of system operation to represent the decisions of an infrastructure operator who guides the behavior of the system as a whole, even in the presence of disruptions. Modeling infrastructure operation in this way makes it possible to systematically evaluate the consequences associated with the loss of infrastructure components, and leads to a precise notion of "operational resilience" that facilitates model verification, validation, and reproducible results. Using a simple example of a notional infrastructure, we demonstrate how to use these models for (1) assessing the operational resilience of an infrastructure system, (2) identifying critical vulnerabilities that threaten its continued function, and (3) advising policymakers on investments to improve resilience. © 2014 Society for Risk Analysis.
Hossler, Fred E.; Douglas, John E.
2001-05-01
Vascular corrosion casting has been used for about 40 years to produce replicas of normal and abnormal vasculature and microvasculature of various tissues and organs that could be viewed at the ultrastructural level. In combination with scanning electron microscopy (SEM), the primary application of corrosion casting has been to describe the morphology and anatomical distribution of blood vessels in these tissues. However, such replicas should also contain quantitative information about that vasculature. This report summarizes some simple quantitative applications of vascular corrosion casting. Casts were prepared by infusing Mercox resin or diluted Mercox resin into the vasculature. Surrounding tissues were removed with KOH, hot water, and formic acid, and the resulting dried casts were observed with routine SEM. The orientation, size, and frequency of vascular endothelial cells were determined from endothelial nuclear imprints on various cast surfaces. Vascular volumes of heart, lung, and avian salt gland were calculated using tissue and resin densities, and weights. Changes in vascular volume and functional capillary density in an experimentally induced emphysema model were estimated from confocal images of casts. Clearly, corrosion casts lend themselves to quantitative analysis. However, because blood vessels differ in their compliances, in their responses to the toxicity of casting resins, and in their response to varying conditions of corrosion casting procedures, it is prudent to use care in interpreting this quantitative data. Some of the applications and limitations of quantitative methodology with corrosion casts are reviewed here.
A Simple Model for Immature Retrovirus Capsid Assembly
NASA Astrophysics Data System (ADS)
Paquay, Stefan; van der Schoot, Paul; Dragnea, Bogdan
In this talk I will present simulations of a simple model for capsomeres in immature virus capsids, consisting of only point particles with a tunable range of attraction constrained to a spherical surface. We find that, at sufficiently low density, a short interaction range is sufficient for the suppression of five-fold defects in the packing and causes instead larger tears and scars in the capsid. These findings agree both qualitatively and quantitatively with experiments on immature retrovirus capsids, implying that the structure of the retroviral protein lattice can, for a large part, be explained simply by the effective interaction between the capsomeres. We thank the HFSP for funding under Grant RGP0017/2012.
2011-01-01
refinement of the vehicle body structure through quantitative assessment of stiffness and modal parameter changes resulting from modifications to the beam...differential placed on the axle, adjustment of the torque output to the opposite wheel may be required to obtain the correct solution. Thus...represented by simple inertial components with appropriate model connectivity instead to determine the free modal response of powertrain type
Sensitivity analysis of bi-layered ceramic dental restorations.
Zhang, Zhongpu; Zhou, Shiwei; Li, Qing; Li, Wei; Swain, Michael V
2012-02-01
The reliability and longevity of ceramic prostheses have become a major concern. Existing studies have focused on critical issues from clinical perspectives, but more research is needed on fundamental science and fabrication issues to ensure the longevity and durability of ceramic prostheses. The aim of this paper was to quantify how sensitive the thermal and mechanical responses of bi-layered ceramic systems and crown models, in terms of changes in temperature and thermal residual stress, are to perturbations of the chosen design variables (e.g. layer thickness and heat transfer coefficient). In this study, three bi-layered ceramic models with different geometries are considered: (i) a simple bi-layered plate, (ii) a simple bi-layered triangle, and (iii) an axisymmetric bi-layered crown. The responses appear most sensitive to layer thickness and convective heat transfer coefficient (or cooling rate) for the porcelain-fused-on-zirconia substrate models. The resultant sensitivities indicate the critical importance of the heat transfer coefficient and the core-to-veneer thickness ratio for the temperature distributions and residual stresses in each model. The findings provide a quantitative basis for assessing the effects of fabrication uncertainties and optimizing the design of ceramic prostheses. Copyright © 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Gui, Jiang; Moore, Jason H.; Williams, Scott M.; Andrews, Peter; Hillege, Hans L.; van der Harst, Pim; Navis, Gerjan; Van Gilst, Wiek H.; Asselbergs, Folkert W.; Gilbert-Diamond, Diane
2013-01-01
We present an extension of the two-class multifactor dimensionality reduction (MDR) algorithm that enables detection and characterization of epistatic SNP-SNP interactions in the context of a quantitative trait. The proposed Quantitative MDR (QMDR) method handles continuous data by modifying MDR’s constructive induction algorithm to use a T-test. QMDR replaces the balanced accuracy metric with a T-test statistic as the score to determine the best interaction model. We used a simulation to identify the empirical distribution of QMDR’s testing score. We then applied QMDR to genetic data from the ongoing prospective Prevention of Renal and Vascular End-Stage Disease (PREVEND) study. PMID:23805232
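The scoring step can be sketched as follows (a simplified illustration with assumed data handling, not the authors' implementation): each two-SNP genotype cell is labeled high or low against the overall trait mean, and the pooled groups are contrasted with a t-test:

```python
# QMDR-style score for a candidate two-SNP interaction model.
import numpy as np
from scipy import stats
from collections import defaultdict

def qmdr_score(geno_a, geno_b, trait):
    """Label each genotype cell high/low vs. the overall trait mean,
    then return a Welch t-statistic contrasting the pooled groups."""
    overall = np.mean(trait)
    cells = defaultdict(list)
    for a, b, y in zip(geno_a, geno_b, trait):
        cells[(a, b)].append(y)
    high, low = [], []
    for ys in cells.values():
        (high if np.mean(ys) >= overall else low).extend(ys)
    t, _ = stats.ttest_ind(high, low, equal_var=False)
    return t

# Tiny worked example: trait is elevated only in the (1, 1) genotype cell.
score = qmdr_score([1, 1, 1, 0, 0, 0],
                   [1, 1, 1, 0, 1, 0],
                   [5.0, 5.1, 4.9, 0.1, 0.0, 0.2])
print(score > 0)  # a strong interaction gives a large positive score
```

QMDR then ranks all SNP pairs by this score, exactly as classic MDR ranks them by balanced accuracy.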
Regional evaluation of evapotranspiration in the Everglades
German, Edward R.
1996-01-01
Understanding the water budget of the Everglades system is crucial to the success of restoration and management strategies. Although the water budget is simple in concept, it is difficult to assess quantitatively. Models used to simulate changes in water levels and vegetation resulting from management strategies need to accurately simulate all components of the water budget.
Systems Engineering of Education V: Quantitative Concepts for Education Systems.
ERIC Educational Resources Information Center
Silvern, Leonard C.
The fifth (of 14) volume of the Education and Training Consultant's (ETC) series on systems engineering of education is designed for readers who have completed others in the series. It reviews arithmetic and algebraic procedures and applies these to simple education and training systems. Flowchart models of example problems are developed and…
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
The fluid trampoline: droplets bouncing on a soap film
NASA Astrophysics Data System (ADS)
Bush, John; Gilet, Tristan
2008-11-01
We present the results of a combined experimental and theoretical investigation of droplets falling onto a horizontal soap film. Both static and vertically vibrated soap films are considered. A quasi-static description of the soap film shape yields a force-displacement relation that provides excellent agreement with experiment, and allows us to model the film as a nonlinear spring. This approach yields an accurate criterion for the transition between droplet bouncing and crossing on the static film; moreover, it allows us to rationalize the observed constancy of the contact time and scaling for the coefficient of restitution in the bouncing states. On the vibrating film, a variety of bouncing behaviours were observed, including simple and complex periodic states, multiperiodicity and chaos. A simple theoretical model is developed that captures the essential physics of the bouncing process, reproducing all observed bouncing states. Quantitative agreement between model and experiment is deduced for simple periodic modes, and qualitative agreement for more complex periodic and chaotic bouncing states.
A simple optical model to estimate suspended particulate matter in Yellow River Estuary.
Qiu, Zhongfeng
2013-11-18
Distribution of the suspended particulate matter (SPM) concentration is a key issue for analyzing the deposition and erosion variety of the estuary and evaluating the material fluxes from river to sea. Satellite remote sensing is a useful tool to investigate the spatial variation of SPM concentration in estuarial zones. However, algorithm developments and validations of the SPM concentrations in Yellow River Estuary (YRE) have been seldom performed before and therefore our knowledge on the quality of retrieval of SPM concentration is poor. In this study, we developed a new simple optical model to estimate SPM concentration in YRE by specifying the optimal wavelength ratios (600-710 nm)/ (530-590 nm) based on observations of 5 cruises during 2004 and 2011. The simple optical model was attentively calibrated and the optimal band ratios were selected for application to multiple sensors, 678/551 for the Moderate Resolution Imaging Spectroradiometer (MODIS), 705/560 for the Medium Resolution Imaging Spectrometer (MERIS) and 680/555 for the Geostationary Ocean Color Imager (GOCI). With the simple optical model, the relative percentage difference and the mean absolute error were 35.4% and 15.6 gm(-3) respectively for MODIS, 42.2% and 16.3 gm(-3) for MERIS, and 34.2% and 14.7 gm(-3) for GOCI, based on an independent validation data set. Our results showed a good precision of estimation for SPM concentration using the new simple optical model, contrasting with the poor estimations derived from existing empirical models. Providing an available atmospheric correction scheme for satellite imagery, our simple model could be used for quantitative monitoring of SPM concentrations in YRE.
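A model of this family can be sketched with a power-law calibration of the band ratio (the coefficients and noise level below are hypothetical, not the paper's fitted values):

```python
# Calibrating SPM = a * ratio**b against in-situ samples, where "ratio" is a
# red/green reflectance band ratio such as MODIS Rrs(678)/Rrs(551).
import numpy as np

rng = np.random.default_rng(3)
a_true, b_true = 20.0, 2.5                    # assumed "true" calibration
ratio = rng.uniform(0.5, 2.0, 40)             # simulated band-ratio observations
spm = a_true * ratio**b_true * np.exp(0.05 * rng.standard_normal(40))  # g m^-3

# Fit in log-log space: log(SPM) = log(a) + b * log(ratio)
b_est, log_a = np.polyfit(np.log(ratio), np.log(spm), 1)
a_est = np.exp(log_a)
print(a_est, b_est)  # close to the assumed 20.0 and 2.5
```

Once calibrated against cruise data, the same two coefficients turn every satellite pixel's band ratio into an SPM estimate.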
Development and characterisation of a novel three-dimensional inter-kingdom wound biofilm model.
Townsend, Eleanor M; Sherry, Leighann; Rajendran, Ranjith; Hansom, Donald; Butcher, John; Mackay, William G; Williams, Craig; Ramage, Gordon
2016-11-01
Chronic diabetic foot ulcers are frequently colonised and infected by polymicrobial biofilms that ultimately prevent healing. This study aimed to create a novel in vitro inter-kingdom wound biofilm model on complex hydrogel-based cellulose substrata to test commonly used topical wound treatments. Inter-kingdom triadic biofilms composed of Candida albicans, Pseudomonas aeruginosa, and Staphylococcus aureus were shown to be quantitatively greater in this model compared to a simple substratum when assessed by conventional culture, metabolic dye and live dead qPCR. These biofilms were both structurally complex and compositionally dynamic in response to topical therapy, so when treated with either chlorhexidine or povidone iodine, principal component analysis revealed that the 3-D cellulose model was minimally impacted compared to the simple substratum model. This study highlights the importance of biofilm substratum and inclusion of relevant polymicrobial and inter-kingdom components, as these impact penetration and efficacy of topical antiseptics.
In silico method for modelling metabolism and gene product expression at genome scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lerman, Joshua A.; Hyduke, Daniel R.; Latif, Haythem
2012-07-03
Transcription and translation use raw materials and energy generated metabolically to create the macromolecular machinery responsible for all cellular functions, including metabolism. A biochemically accurate model of molecular biology and metabolism will facilitate comprehensive and quantitative computations of an organism's molecular constitution as a function of genetic and environmental parameters. Here we formulate a model of metabolism and macromolecular expression. Prototyping it using the simple microorganism Thermotoga maritima, we show our model accurately simulates variations in cellular composition and gene expression. Moreover, through in silico comparative transcriptomics, the model allows the discovery of new regulons and improvement of the genome and transcription unit annotations. Our method presents a framework for investigating molecular biology and cellular physiology in silico and may allow quantitative interpretation of multi-omics data sets in the context of an integrated biochemical description of an organism.
Hou, Chen; Amunugama, Kaushalya
2015-07-01
The relationship between energy expenditure and longevity has been a central theme in aging studies. Empirical studies have yielded controversial results, which cannot be reconciled by existing theories. In this paper, we present a simple theoretical model based on first principles of energy conservation and allometric scaling laws. The model takes into considerations the energy tradeoffs between life history traits and the efficiency of the energy utilization, and offers quantitative and qualitative explanations for a set of seemingly contradictory empirical results. We show that oxidative metabolism can affect cellular damage and longevity in different ways in animals with different life histories and under different experimental conditions. Qualitative data and the linearity between energy expenditure, cellular damage, and lifespan assumed in previous studies are not sufficient to understand the complexity of the relationships. Our model provides a theoretical framework for quantitative analyses and predictions. The model is supported by a variety of empirical studies, including studies on the cellular damage profile during ontogeny; the intra- and inter-specific correlations between body mass, metabolic rate, and lifespan; and the effects on lifespan of (1) diet restriction and genetic modification of growth hormone, (2) the cold and exercise stresses, and (3) manipulations of antioxidant. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
ERIC Educational Resources Information Center
Muthen, Bengt
This paper investigates methods that avoid using multiple groups to represent the missing data patterns in covariance structure modeling, attempting instead to do a single-group analysis where the only action the analyst has to take is to indicate that data is missing. A new covariance structure approach developed by B. Muthen and G. Arminger is…
Forest management and economics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buongiorno, J.; Gilless, J.K.
1987-01-01
This volume provides a survey of quantitative methods, guiding the reader through formulation and analysis of models that address forest management problems. The authors use simple mathematics, graphics, and short computer programs to explain each method. Emphasizing applications, they discuss linear, integer, dynamic, and goal programming; simulation; network modeling; and econometrics, as these relate to problems of determining economic harvest schedules in even-aged and uneven-aged forests, the evaluation of forest policies, multiple-objective decision making, and more.
Naik, P K; Singh, T; Singh, H
2009-07-01
Quantitative structure-activity relationship (QSAR) analyses were performed independently on data sets belonging to two groups of insecticides, namely the organophosphates and carbamates. Several types of descriptors including topological, spatial, thermodynamic, information content, lead likeness and E-state indices were used to derive quantitative relationships between insecticide activities and structural properties of chemicals. A systematic search approach based on missing value, zero value, simple correlation and multi-collinearity tests as well as the use of a genetic algorithm allowed the optimal selection of the descriptors used to generate the models. The QSAR models developed for both organophosphate and carbamate groups revealed good predictability with r(2) values of 0.949 and 0.838 as well as [image omitted] values of 0.890 and 0.765, respectively. In addition, a linear correlation was observed between the predicted and experimental LD(50) values for the test set data with r(2) of 0.871 and 0.788 for both the organophosphate and carbamate groups, indicating that the prediction accuracy of the QSAR models was acceptable. The models were also tested successfully from external validation criteria. QSAR models developed in this study should help further design of novel potent insecticides.
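The descriptor-screening stage described above can be sketched as follows (a toy pipeline with our own thresholds, not the authors' software):

```python
# Systematic descriptor screening before QSAR model fitting: drop constant
# descriptors, those barely correlated with activity, and one of each
# highly inter-correlated pair.
import numpy as np

def screen_descriptors(X, y, corr_min=0.1, collin_max=0.95):
    """Return indices of descriptor columns of X that pass the
    zero-variance, simple-correlation and multi-collinearity tests."""
    keep = []
    for j in range(X.shape[1]):
        col = X[:, j]
        if np.std(col) == 0:  # zero-value / constant-descriptor test
            continue
        if abs(np.corrcoef(col, y)[0, 1]) < corr_min:  # simple correlation test
            continue
        if any(abs(np.corrcoef(col, X[:, k])[0, 1]) > collin_max for k in keep):
            continue  # multi-collinearity test against already-kept descriptors
        keep.append(j)
    return keep
```

In the study a genetic algorithm then searches over the surviving descriptors; the screening above is only the deterministic pre-filtering step.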
Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models
Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby
2017-01-01
Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibrations methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
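The idea can be sketched with ordinary linear regressions standing in for PLS (synthetic data; the 40-60 wt.% blending window is our arbitrary choice, not ChemCam's):

```python
# Sub-model regression with blending: a full-range model routes each spectrum
# to sub-models trained on limited composition ranges, and predictions are
# blended smoothly across the overlap between ranges.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
X = rng.standard_normal((200, 10))              # stand-in for LIBS spectra
w = rng.standard_normal(10)
y = X @ w
y = (y - y.min()) / (y.max() - y.min()) * 100   # composition in wt.%

full = LinearRegression().fit(X, y)             # full-range routing model
lo = LinearRegression().fit(X[y < 60], y[y < 60])   # low-range sub-model
hi = LinearRegression().fit(X[y > 40], y[y > 40])   # high-range sub-model

def predict(Xnew):
    first = full.predict(Xnew)                  # route using full-range estimate
    t = np.clip((first - 40) / 20, 0, 1)        # blend weight across 40-60 overlap
    return (1 - t) * lo.predict(Xnew) + t * hi.predict(Xnew)
```

Blending across an overlap, rather than switching abruptly at a cutoff, avoids discontinuities when the routing model's estimate sits near a sub-model boundary.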
Das, Payel; Matysiak, Silvina; Clementi, Cecilia
2005-01-01
Coarse-grained models have been extremely valuable in promoting our understanding of protein folding. However, the quantitative accuracy of existing simplified models is strongly hindered either by the complete removal of frustration (as in the widely used Gō-like models) or by the compromise with the minimal frustration principle and/or realistic protein geometry (as in simple on-lattice models). We present a coarse-grained model that “naturally” incorporates sequence details and energetic frustration into an overall minimally frustrated folding landscape. The model is coupled with an optimization procedure to design the parameters of the protein Hamiltonian to fold into a desired native structure. The application to the src SH3 (Src homology 3) domain shows that this coarse-grained model contains the main physical-chemical ingredients that are responsible for shaping the folding landscape of this protein. The results illustrate the importance of nonnative interactions and energetic heterogeneity for a quantitative characterization of folding mechanisms. PMID:16006532
Early Understandings of Simple Food Chains: A Learning Progression for the Preschool Years
ERIC Educational Resources Information Center
Allen, Michael
2017-01-01
Aspects of preschoolers' ecological understandings were explored in a cross-age, quantitative study that utilised a sample of seventy-five 3- to 5-year-old children. Specifically, their concepts of feeding relationships were determined by presenting physical models of three-step food chains during structured interviews. A majority of children,…
USDA-ARS?s Scientific Manuscript database
In recent years, increased awareness of the potential interactions between rising atmospheric CO2 concentrations ([CO2]) and temperature has illustrated the importance of multi-factorial ecosystem manipulation experiments for validating Earth System models. To address the urgent need for increased u...
EIT Noise Resonance Power Broadening: a probe for coherence dynamics
NASA Astrophysics Data System (ADS)
Crescimanno, Michael; O'Leary, Shannon; Snider, Charles
2012-06-01
EIT noise correlation spectroscopy holds promise as a simple, robust method for performing high resolution spectroscopy used in devices as diverse as magnetometers and clocks. One useful feature of these noise correlation resonances is that they do not power broaden with the EIT window. We report on measurements of the eventual power broadening (at higher optical powers) of these resonances and a simple, quantitative theoretical model that relates the observed power broadening slope with processes such as two-photon detuning gradients and coherence diffusion. These processes reduce the ground state coherence relative to that of a homogeneous system, and thus the power broadening slope of the EIT noise correlation resonance may be a simple, useful probe for coherence dynamics.
Singularities in x-ray spectra of metals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahan, G.D.
1987-08-01
The x-ray spectroscopies discussed are absorption, emission, and photoemission. The singularities show up in each of them in a different manner. In absorption and emission they appear as power-law singularities at the threshold frequencies. This review emphasizes two themes. First, a simple model is proposed to describe this phenomenon, now called the MND model after Mahan, Nozières, and De Dominicis. Exact analytical solutions are available for this model for the three spectroscopies discussed above, and these analytical results can be evaluated numerically in a simple way. The second theme of this review is that great care must be used when comparing the theory to experiment. A number of factors influence the edge shapes in x-ray spectroscopy. The edge singularities play an important role and are observed in many metals, but quantitative fits of the theory to experiment require the consideration of other factors. 51 refs.
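For reference, the threshold power law takes the standard MND form (notation assumed here, not quoted from the review): near the threshold frequency $\omega_T$ the absorption obeys

\[
A(\omega) \;\propto\; \theta(\omega-\omega_T)\left(\frac{\xi_0}{\omega-\omega_T}\right)^{\alpha_l},
\qquad
\alpha_l \;=\; \frac{2\delta_l}{\pi} \;-\; 2\sum_{l'}(2l'+1)\left(\frac{\delta_{l'}}{\pi}\right)^{2},
\]

where $\delta_l$ are the conduction-electron phase shifts at the Fermi level and $\xi_0$ is a cutoff energy of order the bandwidth. The competition between the two terms in $\alpha_l$ (exciton-like enhancement versus Anderson orthogonality) determines whether the edge is peaked or rounded.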
Culture and Demography: From Reluctant Bedfellows to Committed Partners
Bachrach, Christine A.
2015-01-01
Demography and culture have had a long but ambivalent relationship. Cultural influences are widely recognized as important for demographic outcomes, but are often “backgrounded” in demographic research. I argue that progress towards a more successful integration is feasible and suggest a network model of culture as a potential tool. The network model bridges both traditional (holistic and institutional) and contemporary (tool kit) models of culture used in the social sciences and offers a simple vocabulary for a diverse set of cultural concepts, such as attitudes, beliefs, and norms, as well as quantitative measures of how culture is organized. The proposed model conceptualizes culture as a nested network of meanings which are represented by schemas that range in complexity from simple concepts to multifaceted cultural models. I illustrate the potential value of a model using accounts of the cultural changes underpinning the transformation of marriage in the U.S. and point to developments in the social, cognitive and computational sciences that could facilitate the application of the model in empirical demographic research. PMID:24338643
Culture and demography: from reluctant bedfellows to committed partners.
Bachrach, Christine A
2014-02-01
Demography and culture have had a long but ambivalent relationship. Cultural influences are widely recognized as important for demographic outcomes but are often "backgrounded" in demographic research. I argue that progress toward a more successful integration is feasible and suggest a network model of culture as a potential tool. The network model bridges both traditional (holistic and institutional) and contemporary (tool kit) models of culture used in the social sciences and offers a simple vocabulary for a diverse set of cultural concepts, such as attitudes, beliefs, and norms, as well as quantitative measures of how culture is organized. The proposed model conceptualizes culture as a nested network of meanings represented by schemas that range in complexity from simple concepts to multifaceted cultural models. I illustrate the potential value of a model using accounts of the cultural changes underpinning the transformation of marriage in the United States and point to developments in the social, cognitive, and computational sciences that could facilitate the application of the model in empirical demographic research.
Mechanochemical models of processive molecular motors
NASA Astrophysics Data System (ADS)
Lan, Ganhui; Sun, Sean X.
2012-05-01
Motor proteins are the molecular engines powering the living cell. These nanometre-sized molecules convert chemical energy, both enthalpic and entropic, into useful mechanical work. High resolution single molecule experiments can now observe motor protein movement with increasing precision. The emerging data must be combined with structural and kinetic measurements to develop a quantitative mechanism. This article describes a modelling framework where quantitative understanding of motor behaviour can be developed based on the protein structure. The framework is applied to myosin motors, with emphasis on how synchrony between motor domains gives rise to processive unidirectional movement. The modelling approach shows that the elasticity of protein domains is important in regulating motor function. Simple models of protein domain elasticity are presented. The framework can be generalized to other motor systems, or to an ensemble of motors such as in muscle contraction. Indeed, for hundreds of myosins, our framework reduces in the mean-field limit to the Huxley-Simmons description of muscle movement.
Mass and Environment as Drivers of Galaxy Evolution: Simplicity and its Consequences
NASA Astrophysics Data System (ADS)
Peng, Yingjie
2012-01-01
At first sight the galaxy population appears to be composed of endlessly varied types and properties; however, when large samples of galaxies are studied, the vast majority of galaxies follow simple scaling relations and similar evolutionary modes, while the outliers are a minority. The underlying simplicities of the interrelationships among stellar mass, star formation rate and environment are seen in SDSS and zCOSMOS. We demonstrate that the differential effects of mass and environment are completely separable to z ~ 1, indicating that two distinct physical processes are operating, namely "mass quenching" and "environment quenching". These two simple quenching processes, plus some additional quenching due to merging, naturally produce the Schechter form of the galaxy stellar mass functions and make quantitative predictions for the inter-relationships between the Schechter parameters of star-forming and passive galaxies in different environments. All of these detailed quantitative relationships are indeed seen, to very high precision, in SDSS, lending strong support to our simple empirically-based model. The model also offers qualitative explanations for the "anti-hierarchical" age-mass relation and the alpha-enrichment patterns of passive galaxies, and makes other testable predictions, such as the mass function of the transitory population in the process of being quenched, the galaxy major- and minor-merger rates, and the galaxy stellar mass assembly and star formation histories. Although still purely phenomenological, the model makes clear what the evolutionary characteristics of the relevant physical processes must in fact be.
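The "Schechter form" invoked above is standard; in conventional notation (assumed here, not quoted from the paper) the stellar mass function is

\[
\phi(M)\,\mathrm{d}M \;=\; \phi^{*}\left(\frac{M}{M^{*}}\right)^{\alpha} e^{-M/M^{*}}\,\frac{\mathrm{d}M}{M^{*}},
\]

with normalization $\phi^{*}$, characteristic ("knee") mass $M^{*}$, and low-mass slope $\alpha$. The model's quantitative predictions concern how the triplet $(\phi^{*}, M^{*}, \alpha)$ of the passive population relates to that of the star-forming population.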
ERIC Educational Resources Information Center
Anderson, James L.; And Others
1980-01-01
Presents an undergraduate quantitative analysis experiment, describing an atomic absorption quantitation scheme that is fast, sensitive and comparatively simple relative to other titration experiments. (CS)
NASA Astrophysics Data System (ADS)
Wang, Pin; Bista, Rajan K.; Khalbuss, Walid E.; Qiu, Wei; Uttam, Shikhar; Staton, Kevin; Zhang, Lin; Brentnall, Teresa A.; Brand, Randall E.; Liu, Yang
2010-11-01
Definitive diagnosis of malignancy is often challenging due to limited availability of human cell or tissue samples and morphological similarity with certain benign conditions. Our recently developed technology, spatial-domain low-coherence quantitative phase microscopy (SL-QPM), overcomes these technical difficulties and enables us to obtain quantitative information about cell nuclear architectural characteristics with nanoscale sensitivity. We explore its ability to improve the identification of malignancy, especially in cytopathologically non-cancerous-appearing cells. We perform proof-of-concept experiments with an animal model of colorectal carcinogenesis, the APCMin mouse, and with human cytology specimens of colorectal cancer. We show the ability of in situ nanoscale nuclear architectural characteristics to identify cancerous cells, especially those labeled as ``indeterminate or normal'' by expert cytopathologists. Our approach is based on the quantitative analysis of the cell nucleus on the original cytology slides without additional processing, which can be readily applied in a conventional clinical setting. Our simple and practical optical microscopy technique may lead to the development of novel methods for early detection of cancer.
Quantitative Evaluation of Performance during Robot-assisted Treatment.
Peri, E; Biffi, E; Maghini, C; Servodio Iammarrone, F; Gagliardi, C; Germiniasi, C; Pedrocchi, A; Turconi, A C; Reni, G
2016-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Methodologies, Models and Algorithms for Patients Rehabilitation". The great potential of robots in extracting quantitative and meaningful data is not always exploited in clinical practice. The aim of the present work is to describe a simple parameter to assess the performance of subjects during upper limb robotic training, exploiting data automatically recorded by the robot, with no additional effort for patients and clinicians. Fourteen children affected by cerebral palsy (CP) performed training with the Armeo®Spring. Each session was evaluated with P, a simple parameter that depends on the overall performance recorded, and median and interquartile values were computed to perform a group analysis. Median (interquartile) values of P significantly increased from 0.27 (0.21) at T0 to 0.55 (0.27) at T1. This improvement was functionally validated by a significant increase of the Melbourne Assessment of Unilateral Upper Limb Function. The parameter described here was able to show variations in performance over time and enabled a quantitative evaluation of motion abilities in a way that is reliable with respect to a well-known clinical scale.
Measuring water and sediment discharge from a road plot with a settling basin and tipping bucket
Thomas A. Black; Charles H. Luce
2013-01-01
A simple empirical method quantifies water and sediment production from a forest road surface, and is well suited for calibration and validation of road sediment models. To apply this quantitative method, the hydrologic technician installs bordered plots on existing typical road segments and measures coarse sediment production in a settling tank. When a tipping bucket...
Dikow, Nicola; Nygren, Anders Oh; Schouten, Jan P; Hartmann, Carolin; Krämer, Nikola; Janssen, Bart; Zschocke, Johannes
2007-06-01
Standard methods used for genomic methylation analysis allow the detection of complete absence of either methylated or non-methylated alleles but are usually unable to detect changes in the proportion of methylated and unmethylated alleles. We compare two methods for quantitative methylation analysis, using the chromosome 15q11-q13 imprinted region as model. Absence of the non-methylated paternal allele in this region leads to Prader-Willi syndrome (PWS) whilst absence of the methylated maternal allele results in Angelman syndrome (AS). A proportion of AS is caused by mosaic imprinting defects which may be missed with standard methods and require quantitative analysis for their detection. Sequence-based quantitative methylation analysis (SeQMA) involves quantitative comparison of peaks generated through sequencing reactions after bisulfite treatment. It is simple, cost-effective and can be easily established for a large number of genes. However, our results support previous suggestions that methods based on bisulfite treatment may be problematic for exact quantification of methylation status. Methylation-specific multiplex ligation-dependent probe amplification (MS-MLPA) avoids bisulfite treatment. It detects changes in both CpG methylation as well as copy number of up to 40 chromosomal sequences in one simple reaction. Once established in a laboratory setting, the method is more accurate, reliable and less time consuming.
An Introduction to Magnetospheric Physics by Means of Simple Models
NASA Technical Reports Server (NTRS)
Stern, D. P.
1981-01-01
The large scale structure and behavior of the Earth's magnetosphere is discussed. The model is suitable for inclusion in courses on space physics, plasmas, astrophysics or the Earth's environment, as well as for self-study. Nine quantitative problems, dealing with properties of linear superpositions of a dipole and a constant field, are presented. Topics covered include: open and closed models of the magnetosphere; field line motion; the role of magnetic merging (reconnection); magnetospheric convection; and the origin of the magnetopause, polar cusps, and high latitude lobes.
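The superposition used in the problems can be written compactly. In conventional notation (assumed here, not quoted from the original), a dipole of moment $\mathbf{m}$ immersed in a uniform field $\mathbf{B}_0$ gives

\[
\mathbf{B}(\mathbf{r}) \;=\; \frac{3(\mathbf{m}\cdot\hat{\mathbf{r}})\,\hat{\mathbf{r}} - \mathbf{m}}{r^{3}} \;+\; \mathbf{B}_{0}.
\]

Depending on the orientation of $\mathbf{B}_0$ relative to $\mathbf{m}$, this superposition produces either a closed field-line region bounded by neutral points (a toy closed magnetosphere) or an open configuration with field lines extending to infinity, which is what makes it a useful teaching model for magnetopause and cusp geometry.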
The effect of the behavior of an average consumer on the public debt dynamics
NASA Astrophysics Data System (ADS)
De Luca, Roberto; Di Mauro, Marco; Falzarano, Angelo; Naddeo, Adele
2017-09-01
An important issue within the present economic crisis is understanding the dynamics of the public debt of a given country, and how the behavior of average consumers and tax payers in that country affects it. Starting from a model of average consumer behavior introduced earlier by the authors, we propose a simple model to quantitatively address this issue. The model is then studied and analytically solved under some reasonable simplifying assumptions. In this way we obtain a condition under which the public debt steadily decreases.
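The paper's specific consumer-behavior model is not reproduced in the abstract; the textbook debt-accounting identity that any such model refines reads

\[
\dot{D}(t) \;=\; r\,D(t) \;-\; S(t),
\]

where $D$ is the public debt, $r$ the average interest rate on the debt, and $S(t)$ the primary surplus (tax revenue minus non-interest spending). The debt decreases steadily whenever $S(t) > r\,D(t)$, so a condition of the kind derived in the paper amounts to a constraint of this form with $S(t)$ determined by consumer behavior.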
Influence of mom and dad: quantitative genetic models for maternal effects and genomic imprinting.
Santure, Anna W; Spencer, Hamish G
2006-08-01
The expression of an imprinted gene is dependent on the sex of the parent it was inherited from, and as a result reciprocal heterozygotes may display different phenotypes. In contrast, maternal genetic terms arise when the phenotype of an offspring is influenced by the phenotype of its mother beyond the direct inheritance of alleles. Both maternal effects and imprinting may contribute to resemblance between offspring of the same mother. We demonstrate that two standard quantitative genetic models for deriving breeding values, population variances and covariances between relatives, are not equivalent when maternal genetic effects and imprinting are acting. Maternal and imprinting effects introduce both sex-dependent and generation-dependent effects that result in differences in the way additive and dominance effects are defined for the two approaches. We use a simple example to demonstrate that both imprinting and maternal genetic effects add extra terms to covariances between relatives and that model misspecification may over- or underestimate true covariances or lead to extremely variable parameter estimation. Thus, an understanding of various forms of parental effects is essential in correctly estimating quantitative genetic variance components.
Modeling RNA interference in mammalian cells
2011-01-01
Background RNA interference (RNAi) is a regulatory cellular process that controls post-transcriptional gene silencing. During RNAi double-stranded RNA (dsRNA) induces sequence-specific degradation of homologous mRNA via the generation of smaller dsRNA oligomers of length between 21-23nt (siRNAs). siRNAs are then loaded onto the RNA-Induced Silencing multiprotein Complex (RISC), which uses the siRNA antisense strand to specifically recognize mRNA species which exhibit a complementary sequence. Once the siRNA loaded-RISC binds the target mRNA, the mRNA is cleaved and degraded, and the siRNA loaded-RISC can degrade additional mRNA molecules. Despite the widespread use of siRNAs for gene silencing, and the importance of dosage for its efficiency and to avoid off target effects, none of the numerous mathematical models proposed in the literature has been validated to quantitatively capture the effects of RNAi on the target mRNA degradation for different concentrations of siRNAs. Here, we address this pressing open problem by performing in vitro experiments of RNAi in mammalian cells and by testing and comparing different mathematical models fitted to experimental and in-silico generated data. We performed in vitro experiments in human and hamster cell lines constitutively expressing respectively EGFP protein or tTA protein, measuring both mRNA levels, by quantitative Real-Time PCR, and protein levels, by FACS analysis, for a large range of concentrations of siRNA oligomers. Results We tested and validated four different mathematical models of RNA interference by quantitatively fitting models' parameters to best capture the in vitro experimental data. We show that a simple Hill kinetic model is the most efficient way to model RNA interference. Our experimental and modeling findings clearly show that the RNAi-mediated degradation of mRNA is subject to saturation effects.
Conclusions Our model has a simple mathematical form, amenable to analytical investigations and a small set of parameters with an intuitive physical meaning, that makes it a unique and reliable mathematical tool. The findings here presented will be a useful instrument for better understanding RNAi biology and as modelling tool in Systems and Synthetic Biology. PMID:21272352
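A minimal sketch of the Hill-kinetics saturation the study identifies, with hypothetical parameter values (the fitted constants are not given in the abstract):

```python
import numpy as np

def steady_state_mrna(siRNA, k_tx=1.0, k_deg=0.1, v_max=0.9, K=10.0, n=1.0):
    """Steady-state mRNA level with basal turnover k_deg plus an RNAi
    degradation term that saturates in siRNA concentration (Hill kinetics).
    All rate constants here are illustrative, not the study's fitted values."""
    rnai = v_max * siRNA**n / (K**n + siRNA**n)
    return k_tx / (k_deg + rnai)

doses = np.array([0.0, 1.0, 10.0, 100.0, 1000.0])
levels = steady_state_mrna(doses)
# silencing saturates: raising the dose from 100 to 1000 units changes
# the mRNA level far less than raising it from 1 to 10 units
print(levels)
```

The saturation behavior is the point: beyond a dose of order K, extra siRNA buys little additional silencing, consistent with the dosage effects the experiments revealed.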
Ueda, Kazuhiro; Tanaka, Toshiki; Li, Tao-Sheng; Tanaka, Nobuyuki; Hamano, Kimikazu
2009-03-01
The prediction of pulmonary functional reserve is mandatory in therapeutic decision-making for patients with resectable lung cancer, especially those with underlying lung disease. Volumetric analysis in combination with densitometric analysis of the affected lung lobe or segment with quantitative computed tomography (CT) helps to identify residual pulmonary function, although the utility of this modality needs investigation. The subjects of this prospective study were 30 patients with resectable lung cancer. A three-dimensional CT lung model was created with voxels representing normal lung attenuation (-600 to -910 Hounsfield units). Residual pulmonary function was predicted by drawing a boundary line between the lung to be preserved and that to be resected, directly on the lung model. The predicted values were correlated with the postoperative measured values. The predicted and measured values corresponded well (r=0.89, p<0.001). Although the predicted values corresponded with values predicted by simple calculation using a segment-counting method (r=0.98), there were two outliers whose pulmonary functional reserves were predicted more accurately by CT than by segment counting. The measured pulmonary functional reserves were significantly higher than the predicted values in patients with extensive emphysematous areas (<-910 Hounsfield units), but not in patients with chronic obstructive pulmonary disease. Quantitative CT yielded accurate prediction of functional reserve after lung cancer surgery and helped to identify patients whose functional reserves are likely to be underestimated. Hence, this modality should be utilized for patients with marginal pulmonary function.
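The "simple calculation using a segment-counting method" that the CT predictions were compared against is conventionally the fraction-of-segments rule; a sketch, assuming the usual count of 19 functional lung segments (a hedged illustration, not the study's exact procedure):

```python
TOTAL_SEGMENTS = 19  # conventional count of functional lung segments

def predicted_postop_fev1(preop_fev1, segments_resected):
    """Segment-counting estimate: postoperative function scales with the
    fraction of lung segments that remain after resection."""
    return preop_fev1 * (TOTAL_SEGMENTS - segments_resected) / TOTAL_SEGMENTS

# e.g. a right upper lobectomy removes 3 segments
print(round(predicted_postop_fev1(2.4, 3), 2))  # -> 2.02 (liters)
```

The CT-based approach replaces the uniform per-segment weighting with voxel counts of normally attenuating lung, which is why it can catch the outliers that segment counting mispredicts.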
Second-harmonic diffraction from holographic volume grating.
Nee, Tsu-Wei
2006-10-01
The full polarization property of holographic volume-grating enhanced second-harmonic diffraction (SHD) is investigated theoretically. The nonlinear coefficient is derived from a simple atomic model of the material. By using a simple volume-grating model, the SHD fields and Mueller matrices are first derived. The SHD phase-mismatching effect for a thick sample is analytically investigated. This theory is justified by fitting with published experimental SHD data of thin-film samples. The SHD of an existing polymethyl methacrylate (PMMA) holographic 2-mm-thick volume-grating sample is investigated. This sample has two strong coupling linear diffraction peaks and five SHD peaks. The splitting of SHD peaks is due to the phase-mismatching effect. The detector sensitivity and laser power needed to measure these peak signals are quantitatively estimated.
Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; ...
2016-12-15
We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the method is applicable to any multivariate regression method and may yield similar improvements.
Caballero-Lima, David; Kaneva, Iliyana N.; Watton, Simon P.
2013-01-01
In the hyphal tip of Candida albicans we have made detailed quantitative measurements of (i) exocyst components, (ii) Rho1, the regulatory subunit of (1,3)-β-glucan synthase, (iii) Rom2, the specialized guanine-nucleotide exchange factor (GEF) of Rho1, and (iv) actin cortical patches, the sites of endocytosis. We use the resulting data to construct and test a quantitative 3-dimensional model of fungal hyphal growth based on the proposition that vesicles fuse with the hyphal tip at a rate determined by the local density of exocyst components. Enzymes such as (1,3)-β-glucan synthase thus embedded in the plasma membrane continue to synthesize the cell wall until they are removed by endocytosis. The model successfully predicts the shape and dimensions of the hyphae, provided that endocytosis acts to remove cell wall-synthesizing enzymes at the subapical bands of actin patches. Moreover, a key prediction of the model is that the distribution of the synthase is substantially broader than the area occupied by the exocyst. This prediction is borne out by our quantitative measurements. Thus, although the model highlights detailed issues that require further investigation, in general terms the pattern of tip growth of fungal hyphae can be satisfactorily explained by a simple but quantitative model rooted within the known molecular processes of polarized growth. Moreover, the methodology can be readily adapted to model other forms of polarized growth, such as that which occurs in plant pollen tubes. PMID:23666623
Bioheat model evaluations of laser effects on tissues: role of water evaporation and diffusion
NASA Astrophysics Data System (ADS)
Nagulapally, Deepthi; Joshi, Ravi P.; Thomas, Robert J.
2011-03-01
A two-dimensional, time-dependent bioheat model is applied to evaluate changes in temperature and water content in tissues subjected to laser irradiation. Our approach takes account of liquid-to-vapor phase changes and a simple diffusive flow of water within the biotissue. An energy balance equation considers blood perfusion, metabolic heat generation, laser absorption, and water evaporation. The model also accounts for the water dependence of tissue properties (both thermal and optical), and variations in blood perfusion rates based on local tissue injury. Our calculations show that water diffusion would reduce the local temperature increases and hot spots in comparison to simple models that ignore the role of water in the overall thermal and mass transport. Also, the reduced suppression of perfusion rates due to tissue heating and damage with water diffusion affect the necrotic depth. Two-dimensional results for the dynamic temperature, water content, and damage distributions will be presented for skin simulations. It is argued that reduction in temperature gradients due to water diffusion would mitigate local refractive index variations, and hence influence the phenomenon of thermal lensing. Finally, simple quantitative evaluations of pressure increases within the tissue due to laser absorption are presented.
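The energy balance described is, at its core, a Pennes-type bioheat equation; in conventional notation (the paper's additional evaporation and water-diffusion terms are not shown here) it reads

\[
\rho c \,\frac{\partial T}{\partial t} \;=\; \nabla\!\cdot\!\left(k\,\nabla T\right) \;+\; \rho_b c_b \,\omega_b \left(T_a - T\right) \;+\; Q_{\mathrm{met}} \;+\; Q_{\mathrm{laser}},
\]

where $\rho, c, k$ are the tissue density, specific heat, and conductivity; $\rho_b, c_b, \omega_b$ the blood density, specific heat, and perfusion rate; $T_a$ the arterial temperature; and $Q_{\mathrm{met}}, Q_{\mathrm{laser}}$ the metabolic and laser-absorption heat sources. The paper's contribution is to make $k$, the optical absorption in $Q_{\mathrm{laser}}$, and $\omega_b$ depend on the local water content and injury state.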
Teaching optical phenomena with Tracker
NASA Astrophysics Data System (ADS)
Rodrigues, M.; Simeão Carvalho, P.
2014-11-01
Since the invention and dissemination of domestic laser pointers, observing optical phenomena is a relatively easy task. Any student can buy a laser and experience at home, in a qualitative way, the reflection, refraction and even diffraction phenomena of light. However, quantitative experiments need instruments of high precision that have a relatively complex setup. Fortunately, nowadays it is possible to analyse optical phenomena in a simple and quantitative way using the freeware video analysis software ‘Tracker’. In this paper, we show the advantages of video-based experimental activities for teaching concepts in optics. We intend to show: (a) how easy the study of such phenomena can be, even at home, because only simple materials are needed, and Tracker provides the necessary measuring instruments; and (b) how we can use Tracker to improve students’ understanding of some optical concepts. We give examples using video modelling to study the laws of reflection, Snell’s laws, focal distances in lenses and mirrors, and diffraction phenomena, which we hope will motivate teachers to implement it in their own classes and schools.
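A sketch of the kind of quantitative analysis Tracker enables: given incident/refracted angle pairs read off a video at an air-acrylic interface, a least-squares fit of Snell's law recovers the refractive index. The data below are invented for the example, consistent with n ≈ 1.5 acrylic.

```python
import math

# hypothetical (incident, refracted) angle pairs in degrees, as Tracker
# would yield from marking the rays frame by frame
pairs = [(10, 6.7), (20, 13.2), (30, 19.5), (40, 25.4), (50, 30.7)]

# Snell's law: n1 sin(t1) = n2 sin(t2); with n1 = 1 (air), the slope of
# sin(t1) versus sin(t2) through the origin estimates n2
num = sum(math.sin(math.radians(t1)) * math.sin(math.radians(t2)) for t1, t2 in pairs)
den = sum(math.sin(math.radians(t2)) ** 2 for t1, t2 in pairs)
n2 = num / den  # least-squares slope through the origin
print(round(n2, 2))
```

Plotting sin(t1) against sin(t2) and checking the linearity of the fit is itself a useful classroom exercise: deviations flag measurement error in the marked angles.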
Shi, Weimin; Zhang, Xiaoya; Shen, Qi
2010-01-01
Quantitative structure-activity relationship (QSAR) studies of the chemokine receptor 5 (CCR5) binding affinity of substituted 1-(3,3-diphenylpropyl)-piperidinyl amides and ureas and of the toxicity of aromatic compounds have been performed. Gene expression programming (GEP) was used to select variables and simultaneously produce nonlinear QSAR models from the selected variables. In our GEP implementation, a simple and convenient method was proposed to infer the K-expression from the number of arguments of the function in a gene, without building the expression tree. The results were compared to those obtained by an artificial neural network (ANN) and a support vector machine (SVM). It has been demonstrated that GEP is a useful tool for QSAR modeling. Copyright 2009 Elsevier Masson SAS. All rights reserved.
Thermodynamics and Mechanics of Membrane Curvature Generation and Sensing by Proteins and Lipids
Baumgart, Tobias; Capraro, Benjamin R.; Zhu, Chen; Das, Sovan L.
2014-01-01
Research investigating lipid membrane curvature generation and sensing is a rapidly developing frontier in membrane physical chemistry and biophysics. The fast recent progress is based on the discovery of a plethora of proteins involved in coupling membrane shape to cellular membrane function, the design of new quantitative experimental techniques to study aspects of membrane curvature, and the development of analytical theories and simulation techniques that allow a mechanistic interpretation of quantitative measurements. The present review first provides an overview of important classes of membrane proteins for which function is coupled to membrane curvature. We then survey several mechanisms that are assumed to underlie membrane curvature sensing and generation. Finally, we discuss relatively simple thermodynamic/mechanical models that allow quantitative interpretation of experimental observations. PMID:21219150
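A canonical example of the "relatively simple thermodynamic/mechanical models" surveyed is the Helfrich bending energy; in standard notation (assumed here, not quoted from the review),

\[
E \;=\; \int \left[\frac{\kappa}{2}\,\bigl(2H - c_0\bigr)^{2} \;+\; \bar{\kappa}\,K\right]\mathrm{d}A \;+\; \sigma \int \mathrm{d}A,
\]

where $H$ and $K$ are the mean and Gaussian curvatures of the membrane, $c_0$ the spontaneous curvature, $\kappa$ the bending rigidity, $\bar{\kappa}$ the Gaussian modulus, and $\sigma$ the membrane tension. Curvature-sensing proteins are typically modeled as shifting $c_0$ or $\kappa$ locally, which couples their binding free energy to the membrane shape.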
NASA Astrophysics Data System (ADS)
shunhe, Li; jianhua, Rao; lin, Gui; weimin, Zhang; degang, Liu
2017-11-01
The result of a remanufacturing evaluation is the basis for judging whether a heavy-duty machine tool can be remanufactured at the end-of-life (EOL) stage of its lifecycle. The objectivity and accuracy of the evaluation are the key to the evaluation method. In this paper, the catastrophe progression method is introduced into the quantitative evaluation of heavy-duty machine tools' remanufacturability, and the results are modified by a comprehensive adjustment method, which brings the evaluation results in line with conventional human judgment. The catastrophe progression method is used to establish a quantitative evaluation model for heavy-duty machine tools and to evaluate the remanufacturing of a retired TK6916 CNC floor milling-boring machine. The evaluation process is simple and highly quantitative, and the result is objective.
Silkworm cocoons inspire models for random fiber and particulate composites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Fujia; Porter, David; Vollrath, Fritz
The bioengineering design principles evolved in silkworm cocoons make them ideal natural prototypes and models for structural composites. Cocoons depend for their stiffness and strength on the connectivity of bonding between their constituent materials of silk fibers and sericin binder. Strain-activated mechanisms for loss of bonding connectivity in cocoons can be translated directly into a surprisingly simple yet universal set of physically realistic as well as predictive quantitative structure-property relations for a wide range of technologically important fiber and particulate composite materials.
Silkworm cocoons inspire models for random fiber and particulate composites
NASA Astrophysics Data System (ADS)
Chen, Fujia; Porter, David; Vollrath, Fritz
2010-10-01
The bioengineering design principles evolved in silkworm cocoons make them ideal natural prototypes and models for structural composites. Cocoons depend for their stiffness and strength on the connectivity of bonding between their constituent materials of silk fibers and sericin binder. Strain-activated mechanisms for loss of bonding connectivity in cocoons can be translated directly into a surprisingly simple yet universal set of physically realistic as well as predictive quantitative structure-property relations for a wide range of technologically important fiber and particulate composite materials.
2017-02-08
cost benefit of the technology. 7.1 COST MODEL A simple cost model for the technology is presented so that a remediation professional can understand...reporting costs. The benefit of the qPCR analyses is that they allow the user to determine if aerobic cometabolism is possible. Because the PHE and...of Chlorinated Ethylenes, February 2017. This document has been cleared for public release; Distribution Statement A.
Sensitivity Study for Long Term Reliability
NASA Technical Reports Server (NTRS)
White, Allan L.
2008-01-01
This paper illustrates using Markov models to establish system and maintenance requirements for small electronic controllers where the goal is a high probability of continuous service for a long period of time. The system and maintenance items considered are quality of components, various degrees of simple redundancy, redundancy with reconfiguration, diagnostic levels, periodic maintenance, and preventive maintenance. Markov models permit a quantitative investigation with comparison and contrast. An element of special interest is the use of conditional probability to study the combination of imperfect diagnostics and periodic maintenance.
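The kind of Markov-model comparison this abstract describes can be illustrated with a toy calculation. The sketch below contrasts a single controller with simple (parallel) redundancy over a long mission; the failure rate, mission length, and the no-repair assumption are ours for illustration, not the paper's actual system or maintenance parameters.

```python
import numpy as np

# Hypothetical per-hour failure rate; "simple redundancy" = two units,
# the system fails only when both have failed (no repair during the mission).
lam = 1e-4      # component failure rate (1/h), illustrative only
T = 10_000      # mission length in hours

def p_survive_simplex(lam, T):
    # Single unit: probability of continuous service is exp(-lam*T)
    return np.exp(-lam * T)

def p_survive_duplex(lam, T):
    # Markov states: 0 = both up, 1 = one up, 2 = system failed (absorbing).
    # Continuous-time generator for a parallel pair without repair.
    Q = np.array([[-2 * lam, 2 * lam, 0.0],
                  [0.0,      -lam,    lam],
                  [0.0,       0.0,    0.0]])
    # Approximate the matrix exponential exp(Q*T) by repeated squaring
    # of the one-step Euler matrix (I + Q*dt) with dt = T / 2**n.
    n = 20
    P = np.eye(3) + Q * (T / 2**n)
    for _ in range(n):
        P = P @ P
    p = np.array([1.0, 0.0, 0.0]) @ P
    return p[0] + p[1]   # probability the system never reached state 2

print(p_survive_simplex(lam, T))   # exp(-1), about 0.368
print(p_survive_duplex(lam, T))    # close to 2*exp(-1) - exp(-2), about 0.600
```

The duplex result matches the closed form 2e^(-λT) - e^(-2λT) for two independent units, which is the kind of quantitative comparison-and-contrast the abstract attributes to Markov models.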
Afantitis, Antreas; Melagraki, Georgia; Sarimveis, Haralambos; Koutentis, Panayiotis A; Markopoulos, John; Igglessi-Markopoulou, Olga
2006-08-01
A quantitative structure-activity relationship (QSAR) was obtained by applying multiple linear regression analysis to a series of 80 1-[2-hydroxyethoxy-methyl]-6-(phenylthio)thymine (HEPT) derivatives with significant anti-HIV activity. To select the best among 37 different descriptors, the Elimination Selection Stepwise Regression Method (ES-SWR) was utilized. The resulting QSAR model (R²(CV) = 0.8160; S(PRESS) = 0.5680) proved to be very accurate in both the training and predictive stages.
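The workflow behind statistics like R²(CV) and S(PRESS) can be sketched on synthetic data: fit a multiple linear regression, then score it by leave-one-out cross-validation. Everything below (descriptor count, coefficients, noise level) is fabricated for illustration; only the PRESS and cross-validated R² definitions follow standard QSAR practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 80 "compounds", 5 descriptors (the study used 37
# descriptors with stepwise selection; both are only loosely mimicked here).
n, k = 80, 5
X = rng.normal(size=(n, k))
true_beta = np.array([1.5, -0.8, 0.0, 0.4, 0.0])
y = X @ true_beta + 0.1 * rng.normal(size=n)

def mlr_fit(X, y):
    # Ordinary least squares with an intercept column
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def loo_press(X, y):
    # PRESS: sum of squared leave-one-out prediction errors
    press = 0.0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        b = mlr_fit(X[mask], y[mask])
        pred = b[0] + X[i] @ b[1:]
        press += (y[i] - pred) ** 2
    return press

press = loo_press(X, y)
r2_cv = 1 - press / np.sum((y - y.mean()) ** 2)
print(round(float(r2_cv), 3))   # close to 1 for this low-noise toy data
```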
A study of stiffness, residual strength and fatigue life relationships for composite laminates
NASA Technical Reports Server (NTRS)
Ryder, J. T.; Crossman, F. W.
1983-01-01
Qualitative and quantitative exploration of the relationship between stiffness, strength, fatigue life, residual strength, and damage of unnotched graphite/epoxy laminates subjected to tension loading. Clarifying the mechanics of tension loading is intended to explain previous contradictory observations and hypotheses, to develop a simple procedure for anticipating strength, fatigue life, and stiffness changes, and to provide a basis for studying the more complex cases of compression, notches, and spectrum fatigue loading. Mathematical models, based on laminate analysis, free-body modeling, and strain energy release rates, are developed from analysis of the observed damage states. Sufficient understanding of the tension-loaded case is developed to allow a proposed, simple procedure for calculating strain to failure, stiffness, strength, data scatter, and the shape of the stress-life curve for unnotched laminates subjected to tension load.
Director gliding in a nematic liquid crystal layer: Quantitative comparison with experiments
NASA Astrophysics Data System (ADS)
Mema, E.; Kondic, L.; Cummings, L. J.
2018-03-01
The interaction between nematic liquid crystals and polymer-coated substrates may lead to slow reorientation of the easy axis (so-called "director gliding") when a prolonged external field is applied. We consider the experimental evidence of zenithal gliding observed by Joly et al. [Phys. Rev. E 70, 050701 (2004), 10.1103/PhysRevE.70.050701] and Buluy et al. [J. Soc. Inf. Disp. 14, 603 (2006), 10.1889/1.2235686] as well as azimuthal gliding observed by S. Faetti and P. Marianelli [Liq. Cryst. 33, 327 (2006), 10.1080/02678290500512227], and we present a simple, physically motivated model that captures the slow dynamics of gliding, both in the presence of an electric field and after the electric field is turned off. We make a quantitative comparison of our model results and the experimental data and conclude that our model explains the gliding evolution very well.
CDP++.Italian: Modelling Sublexical and Supralexical Inconsistency in a Shallow Orthography
Perry, Conrad; Ziegler, Johannes C.; Zorzi, Marco
2014-01-01
Most models of reading aloud have been constructed to explain data in relatively complex orthographies like English and French. Here, we created an Italian version of the Connectionist Dual Process Model of Reading Aloud (CDP++) to examine the extent to which the model could predict data in a language which has relatively simple orthography-phonology relationships but is relatively complex at a suprasegmental (word stress) level. We show that the model exhibits good quantitative performance and accounts for key phenomena observed in naming studies, including some apparently contradictory findings. These effects include stress regularity and stress consistency, both of which have been especially important in studies of word recognition and reading aloud in Italian. Overall, the results of the model compare favourably to an alternative connectionist model that can learn non-linear spelling-to-sound mappings. This suggests that CDP++ is currently the leading computational model of reading aloud in Italian, and that its simple linear learning mechanism adequately captures the statistical regularities of the spelling-to-sound mapping both at the segmental and supra-segmental levels. PMID:24740261
Magnitude of the magnetic exchange interaction in the heavy-fermion antiferromagnet CeRhIn 5
Das, Pinaki; Lin, S. -Z.; Ghimire, N. J.; ...
2014-12-08
We have used high-resolution neutron spectroscopy experiments to determine the complete spin wave spectrum of the heavy-fermion antiferromagnet CeRhIn₅. The spin wave dispersion can be quantitatively reproduced with a simple frustrated J₁-J₂ model that also naturally explains the magnetic spin-spiral ground state of CeRhIn₅ and yields a dominant in-plane nearest-neighbor magnetic exchange constant J₀=0.74(3) meV. Our results lead the way to a quantitative understanding of the rich low-temperature phase diagram of the prominent CeTIn₅ (T = Co, Rh, Ir) class of heavy-fermion materials.
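How competing J₁ and J₂ couplings select a spiral ground state can be shown with the standard classical calculation, here for a generic frustrated chain. This is a textbook illustration under our own sign conventions, not the actual Hamiltonian or exchange constants fitted to CeRhIn₅.

```python
import numpy as np

def spiral_pitch(J1, J2):
    """Classical ground-state pitch q of a frustrated J1-J2 chain.

    For a planar spiral S_i = (cos(q*i), sin(q*i), 0), the energy per spin
    is E(q) = J1*cos(q) + J2*cos(2q) (antiferromagnetic J > 0 here; this is
    a generic illustration, not the CeRhIn5 Hamiltonian). Setting
    dE/dq = -sin(q) * (J1 + 4*J2*cos(q)) = 0 gives the spiral solution
    cos(q) = -J1 / (4*J2) whenever |J1| < 4*J2; otherwise the minimum is
    the collinear Neel state q = pi (for J1 > 0).
    """
    if J2 > 0 and abs(J1) < 4 * J2:
        return np.arccos(-J1 / (4 * J2))
    return np.pi

print(spiral_pitch(1.0, 0.5))   # incommensurate spiral: arccos(-0.5) = 2*pi/3
print(spiral_pitch(1.0, 0.1))   # weak frustration: collinear state, q = pi
```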
Making predictions of mangrove deforestation: a comparison of two methods in Kenya.
Rideout, Alasdair J R; Joshi, Neha P; Viergever, Karin M; Huxham, Mark; Briers, Robert A
2013-11-01
Deforestation of mangroves is of global concern given their importance for carbon storage, biogeochemical cycling and the provision of other ecosystem services, but the links between rates of loss and potential drivers or risk factors are rarely evaluated. Here, we identified key drivers of mangrove loss in Kenya and compared two different approaches to predicting risk. Risk factors tested included various possible predictors of anthropogenic deforestation, related to population, suitability for land use change and accessibility. Two approaches were taken to modelling risk; a quantitative statistical approach and a qualitative categorical ranking approach. A quantitative model linking rates of loss to risk factors was constructed based on generalized least squares regression and using mangrove loss data from 1992 to 2000. Population density, soil type and proximity to roads were the most important predictors. In order to validate this model it was used to generate a map of losses of Kenyan mangroves predicted to have occurred between 2000 and 2010. The qualitative categorical model was constructed using data from the same selection of variables, with the coincidence of different risk factors in particular mangrove areas used in an additive manner to create a relative risk index which was then mapped. Quantitative predictions of loss were significantly correlated with the actual loss of mangroves between 2000 and 2010 and the categorical risk index values were also highly correlated with the quantitative predictions. Hence, in this case the relatively simple categorical modelling approach was of similar predictive value to the more complex quantitative model of mangrove deforestation. The advantages and disadvantages of each approach are discussed, and the implications for mangroves are outlined. © 2013 Blackwell Publishing Ltd.
Analysis of bacterial migration. 2: Studies with multiple attractant gradients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strauss, I.; Frymier, P.D.; Hahn, C.M.
1995-02-01
Many motile bacteria exhibit chemotaxis, the ability to bias their random motion toward or away from increasing concentrations of chemical substances which benefit or inhibit their survival, respectively. Since bacteria encounter numerous chemical concentration gradients simultaneously in natural surroundings, it is necessary to know quantitatively how a bacterial population responds in the presence of more than one chemical stimulus in order to develop predictive mathematical models of bacterial migration in natural systems. This work evaluates three hypothetical models describing the integration of chemical signals from multiple stimuli: high sensitivity, maximum signal, and simple additivity. An expression for the tumbling probability for individual stimuli is modified according to the proposed models and incorporated into the cell balance equation for a 1-D attractant gradient. Random motility and chemotactic sensitivity coefficients, required input parameters for the model, are measured for single-stimulus responses. Theoretical predictions from the three signal integration models are compared to the net chemotactic response of Escherichia coli to co- and antidirectional gradients of D-fucose and [alpha]-methylaspartate in the stopped-flow diffusion chamber assay. The results eliminate the high-sensitivity model and favor simple additivity over the maximum signal. None of the simple models, however, accurately predicts the observed behavior, suggesting that a more complex model with more steps in the signal-processing mechanism is required to predict responses to multiple stimuli.
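The three integration hypotheses can be written down as combination rules. The sketch below applies them to fractional reductions in tumbling probability; the functional forms are illustrative stand-ins for the paper's expressions, and the numbers are invented.

```python
# Schematic comparison of the three signal-integration hypotheses.
# d1, d2 are dimensionless single-stimulus reductions in tumbling
# probability; these forms are our simplification, not the paper's
# exact cell-balance expressions.

def simple_additivity(d1, d2):
    # responses to the two attractants add
    return d1 + d2

def maximum_signal(d1, d2):
    # the stronger stimulus dominates
    return max(d1, d2)

def high_sensitivity(d1, d2):
    # saturating response: any appreciable stimulus gives a full response
    return 1.0 if (d1 > 0 or d2 > 0) else 0.0

d1, d2 = 0.3, 0.2   # codirectional gradients of two attractants
for rule in (simple_additivity, maximum_signal, high_sensitivity):
    print(rule.__name__, rule(d1, d2))
```

For codirectional gradients the three rules diverge (0.5, 0.3, and 1.0 here), which is why such experiments can discriminate between them.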
ERIC Educational Resources Information Center
Slisko, Josip; Cruz, Adrian Corona
2013-01-01
There is a general agreement that critical thinking is an important element of 21st century skills. Although critical thinking is a very complex and controversial conception, many would accept that recognition and evaluation of assumptions is a basic critical-thinking process. When students use simple mathematical model to reason quantitatively…
ERIC Educational Resources Information Center
Goldhaber, Dan
2010-01-01
The formula is simple: Highly effective teachers equal student academic success. Yet, the physics of American education is anything but. Thus, the question facing education reformers is how can teacher effectiveness be accurately measured in order to improve the teacher workforce? Given the demand for objective, quantitative measures of teacher…
Zhang, Liu-Xia; Cao, Yi-Ren; Xiao, Hua; Liu, Xiao-Ping; Liu, Shao-Rong; Meng, Qing-Hua; Fan, Liu-Yin; Cao, Cheng-Xi
2016-03-15
In the present work we present a simple, rapid, and quantitative analytical method for detecting different proteins present in biological samples. For this, we proposed the titration of double protein (TDP) model and its associated leverage theory, which rely on the retardation signal of chip moving reaction boundary electrophoresis (MRBE). The leverage principle states that the product of the first protein's content and its absolute retardation signal is equal to that of the second protein's content and its absolute signal. To validate the model, we first demonstrated the leverage principle theoretically. Relevant experiments were then conducted on the TDP-MRBE chip. The results revealed that (i) the leverage principle of retardation signals held within the TDP of two pure proteins, and (ii) a lever also existed within two complex protein samples, demonstrating the validity of the TDP model and leverage theory in the MRBE chip. The proposed technique thus provides a rapid and simple quantitative analysis of two protein samples in a mixture. Finally, we successfully applied the developed technique to quantify soymilk in adulterated infant formula. TDP-MRBE opens up a new window for detecting the adulteration ratio of a low-quality food (milk) blended into a high-quality one. Copyright © 2015 Elsevier B.V. All rights reserved.
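The lever rule as stated in the abstract is a one-line computation. A minimal sketch, with variable names and example numbers that are ours, not the paper's:

```python
# TDP "lever" relation from the abstract: c1*|s1| = c2*|s2|, where c_i are
# protein contents and s_i the retardation signals measured on the MRBE chip.

def unknown_content(c1, s1, s2):
    """Content of protein 2 given the content of protein 1 and both signals."""
    return c1 * abs(s1) / abs(s2)

# Illustrative numbers only: if 2.0 mg/mL of protein 1 gives a retardation
# signal of 0.6 and protein 2 gives 1.2, the lever rule implies 1.0 mg/mL.
print(unknown_content(2.0, 0.6, 1.2))
```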
NASA Astrophysics Data System (ADS)
Royston, Thomas J.; Zhang, Xiangling; Mansy, Hussein A.; Sandler, Richard H.
2002-05-01
Experimental studies have shown that a pneumothorax (collapsed lung) substantially alters the propagation of sound introduced at the mouth of an intubated subject and measured at the chest surface. Thus, it is hypothesized that an inexpensive diagnostic procedure could be developed for detection of a pneumothorax based on a simple acoustic test. In the present study, theoretical models of sound transmission through the pulmonary system and chest region are reviewed in the context of their ability to predict acoustic changes caused by a pneumothorax, as well as other pathologic conditions. Such models could aid in parametric design studies to develop acoustic means of diagnosing pneumothorax and other lung pathologies. Extensions of previously developed simple models of the authors are presented that are in more quantitative agreement with experimental results and that simulate both transmission from the bronchial airways to the chest wall, as well as reflection in the bronchial airways. [Research supported by NIH NCRR Grant No. 14250 and NIH NHLBI Grant No. 61108.]
Correlation Imaging Reveals Specific Crowding Dynamics of Kinesin Motor Proteins
NASA Astrophysics Data System (ADS)
Miedema, Daniël M.; Kushwaha, Vandana S.; Denisov, Dmitry V.; Acar, Seyda; Nienhuis, Bernard; Peterman, Erwin J. G.; Schall, Peter
2017-10-01
Molecular motor proteins fulfill the critical function of transporting organelles and other building blocks along the biopolymer network of the cell's cytoskeleton, but crowding effects are believed to crucially affect this motor-driven transport due to motor interactions. Physical transport models, like the paradigmatic, totally asymmetric simple exclusion process (TASEP), have been used to predict these crowding effects based on simple exclusion interactions, but verifying them in experiments remains challenging. Here, we introduce a correlation imaging technique to precisely measure the motor density, velocity, and run length along filaments under crowding conditions, enabling us to elucidate the physical nature of crowding and test TASEP model predictions. Using the kinesin motor proteins kinesin-1 and OSM-3, we identify crowding effects in qualitative agreement with TASEP predictions, and we achieve excellent quantitative agreement by extending the model with motor-specific interaction ranges and crowding-dependent detachment probabilities. These results confirm the applicability of basic nonequilibrium models to the intracellular transport and highlight motor-specific strategies to deal with crowding.
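The TASEP predictions referred to here rest on its current-density relation. Below is a minimal sketch of the mean-field relation for point particles, together with a commonly used generalization to particles covering `ell` lattice sites; the extended-particle form and all parameter values are our assumptions for illustration, and the paper's motor-specific interaction ranges and detachment probabilities are not modeled.

```python
import numpy as np

def current(rho, p=1.0, ell=1):
    """Mean-field TASEP current at site density rho.

    For point particles (ell = 1) this is the classic J = p*rho*(1-rho),
    maximal at rho = 1/2. For particles of footprint ell sites we use the
    generalization J = p*rho*(1 - ell*rho)/(1 - (ell-1)*rho), an assumption
    on our part (it is the standard extended-object mean-field form).
    """
    rho = np.asarray(rho, dtype=float)
    return p * rho * (1 - ell * rho) / (1 - (ell - 1) * rho)

rho = np.linspace(0.0, 0.999, 1000)
j1 = current(rho)                # point particles: peak current p/4
print(float(j1.max()))           # about 0.25, reached near rho = 0.5
```

The peak of J(rho) marks the maximal-current (crowded) regime, which is the qualitative behavior the imaging experiments probe.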
Convection driven zonal flows and vortices in the major planets.
Busse, F. H.
1994-06-01
The dynamical properties of convection in rotating cylindrical annuli and spherical shells are reviewed. Simple theoretical models and experimental simulations of planetary convection, using the centrifugal force in the laboratory, are emphasized. The model of columnar convection in a cylindrical annulus not only serves as a guide to the dynamical properties of convection in rotating spheres; it is also of interest as a basic physical system that exhibits several dynamical properties in their simplest form. The generation of zonal mean flows is discussed in some detail, and examples of recent numerical computations are presented. The exploration of the parameter space for the annulus model is not yet complete, and the theoretical exploration of convection in rotating spheres is still in its beginning phase. Quantitative comparisons with observations of the dynamics of planetary atmospheres will have to await models that include the effects of magnetic fields and deviations from the Boussinesq approximation.
Interpretation of Ground Temperature Anomalies in Hydrothermal Discharge Areas
NASA Astrophysics Data System (ADS)
Price, A. N.; Lindsey, C.; Fairley, J. P., Jr.
2017-12-01
Researchers have long noted the potential for shallow hydrothermal fluids to perturb near-surface temperatures. Several investigators have made qualitative or semi-quantitative use of elevated surface temperatures; for example, in snowfall calorimetry, or for tracing subsurface flow paths. However, little effort has been expended to develop a quantitative framework connecting surface temperature observations with conditions in the subsurface. Here, we examine an area of shallow subsurface flow at Burgdorf Hot Springs, in the Payette National Forest, north of McCall, Idaho USA. We present a simple analytical model that uses easily-measured surface data to infer the temperatures of laterally-migrating shallow hydrothermal fluids. The model is calibrated using shallow ground temperature measurements and overburden thickness estimates from seismic refraction studies. The model predicts conditions in the shallow subsurface, and suggests that the Biot number may place a more important control on the expression of near-surface thermal perturbations than previously thought. In addition, our model may have application in inferring difficult-to-measure parameters, such as shallow subsurface discharge from hydrothermal springs.
Model of Pressure Distribution in Vortex Flow Controls
NASA Astrophysics Data System (ADS)
Mielczarek, Szymon; Sawicki, Jerzy M.
2015-06-01
Vortex valves belong to the category of hydrodynamic flow controls. They are important and theoretically interesting devices, so complex from a hydraulic point of view that, probably for this reason, no rational concept of their operation has been proposed so far. In consequence, the functioning of vortex valves is described by CFD methods (computer-aided simulation of technical objects) or by means of simple empirical relations (using a discharge coefficient or a hydraulic loss coefficient). Such a rational model of the considered device is proposed in this paper. It has a simple algebraic form but is well grounded physically. The basic quantitative relationship describing the valve's operation, i.e., the dependence between the flow discharge and the circumferential pressure head caused by the rotation, has been verified empirically. The agreement between calculated and measured parameters of the device supports acceptance of the proposed concept.
Põder, Endel
2011-02-16
Dot lattices are very simple multi-stable images where the dots can be perceived as being grouped in different ways. The probabilities of grouping along different orientations as dependent on inter-dot distances along these orientations can be predicted by a simple quantitative model. L. Bleumers, P. De Graef, K. Verfaillie, and J. Wagemans (2008) found that for peripheral presentation, this model should be combined with random guesses on a proportion of trials. The present study shows that the probability of random responses decreases with decreasing ambiguity of lattices and is different for bi-stable and tri-stable lattices. With central presentation, similar effects can be produced by adding positional noise to the dots. The results suggest that different levels of internal positional noise might explain the differences between peripheral and central proximity grouping.
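The two ingredients in this account, a proximity-based grouping model and a proportion of random guesses, can be combined in a short mixture model. The exponential-in-distance form below is one common parameterization of proximity grouping (an assumption on our part; the study's exact model may differ), and the guessing rate `lam` plays the role of the random-response proportion for peripheral viewing.

```python
import math

def grouping_probs(distances, s=2.0, lam=0.0):
    """Proximity grouping with a uniform-guessing mixture.

    P(orientation i) is proportional to exp(-s * d_i / min(d)), a form of
    'pure distance law'; the exact parameterization is our assumption.
    With probability lam the observer guesses uniformly over the
    n candidate orientations, as proposed for peripheral presentation.
    """
    d0 = min(distances)
    w = [math.exp(-s * d / d0) for d in distances]
    z = sum(w)
    n = len(distances)
    return [(1 - lam) * wi / z + lam / n for wi in w]

# Square lattice (two equal inter-dot distances): grouping is fully ambiguous.
print(grouping_probs([1.0, 1.0]))              # [0.5, 0.5]
# Rectangular lattice: the shorter spacing wins; guessing flattens the bias.
print(grouping_probs([1.0, 1.5], lam=0.4))
```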
Ninomiya, Shinji; Tokumine, Asako; Yasuda, Toru; Tomizawa, Yasuko
2007-01-01
A training system with quantitative evaluation of performance for training perfusionists is valuable for preparation for rare but critical situations. A simulator system, ECCSIM-Lite, for extracorporeal circulation (ECC) training of perfusionists was developed. This system consists of a computer system containing a simulation program of the hemodynamic conditions and the training scenario with instructions, a flow sensor unit, a reservoir with a built-in water level sensor, and an ECC circuit with a soft bag representing the human body. This system is relatively simple, easy to handle, compact, and reasonably inexpensive. Quantitative information is recorded, including the changes in arterial flow by the manipulation of a knob, the changes in venous drainage by handling a clamp, and the change in reservoir level; the time courses of the above parameters are presented graphically. To increase the realism of the training, a numerical-hydraulic circulatory model was applied. Following the instruction and explanation of the scenario in the form of audio and video captions, it is possible for a trainee to undertake self-study without an instructor or a computer operator. To validate the system, a training session was given to three beginners using a simple training scenario; it was possible to record the performance of the perfusion sessions quantitatively. In conclusion, the ECCSIM-Lite system is expected to be useful for perfusion training, since quantitative information about the trainee's performance is recorded and it is possible to use the data for assessment and comparison.
NASA Astrophysics Data System (ADS)
Klein, D. Harley; Leal, L. Gary; García-Cervera, Carlos J.; Ceniceros, Hector D.
2007-02-01
We consider the behavior of the Doi-Marrucci-Greco (DMG) model for nematic liquid crystalline polymers in planar shear flow. We found the DMG model to exhibit dynamics in both qualitative and quantitative agreement with experimental observations reported by Larson and Mead [Liq. Cryst. 15, 151 (1993)] for the Ericksen number and Deborah number cascades. For increasing shear rates within the Ericksen number cascade, the DMG model displays three distinct regimes: stable simple shear, stable roll cells, and irregular structure accompanied by disclination formation. In accordance with experimental observations, the model predicts both ±1 and ±1/2 disclinations. Although ±1 defects form via the ridge-splitting mechanism first identified by Feng, Tao, and Leal [J. Fluid Mech. 449, 179 (2001)], a new mechanism is identified for the formation of ±1/2 defects. Within the Deborah number cascade, with increasing Deborah number, the DMG model exhibits a streamwise banded texture, in the absence of disclinations and roll cells, followed by a monodomain wherein the mean orientation lies within the shear plane throughout the domain.
Protein detection by Simple Western™ analysis.
Harris, Valerie M
2015-01-01
Protein Simple© has taken a well-known protein detection method, the western blot, and revolutionized it. The Simple Western™ system uses capillary electrophoresis to identify and quantitate a protein of interest. Protein Simple© provides multiple detection apparatuses (Wes, Sally Sue, or Peggy Sue) that are suggested to save scientists valuable time by allowing the researcher to prepare the protein sample, load it along with necessary antibodies and substrates, and walk away. Within 3-5 h the protein will be separated by size, or charge, immuno-detection of target protein will be accurately quantitated, and results will be immediately made available. Using the Peggy Sue instrument, one study recently examined changes in MAPK signaling proteins in the sex-determining stage of gonadal development. Here the methodology is described.
Causal Loop Analysis of coastal geomorphological systems
NASA Astrophysics Data System (ADS)
Payo, Andres; Hall, Jim W.; French, Jon; Sutherland, James; van Maanen, Barend; Nicholls, Robert J.; Reeve, Dominic E.
2016-03-01
As geomorphologists embrace ever more sophisticated theoretical frameworks that shift from simple notions of evolution towards single steady equilibria to recognise the possibility of multiple response pathways and outcomes, morphodynamic modellers are facing the problem of how to keep track of an ever-greater number of system feedbacks. Within coastal geomorphology, capturing these feedbacks is critically important, especially as the focus of activity shifts from reductionist models founded on sediment transport fundamentals to more synthesist ones intended to resolve emergent behaviours at decadal to centennial scales. This paper addresses the challenge of mapping the feedback structure of processes controlling geomorphic system behaviour with reference to illustrative applications of Causal Loop Analysis at two study cases: (1) the erosion-accretion behaviour of graded (mixed) sediment beds, and (2) the local alongshore sediment fluxes of sand-rich shorelines. These case study examples are chosen on account of their central role in the quantitative modelling of geomorphological futures and as they illustrate different types of causation. Causal loop diagrams, a form of directed graph, are used to distil the feedback structure to reveal, in advance of more quantitative modelling, multi-response pathways and multiple outcomes. In the case of graded sediment bed, up to three different outcomes (no response, and two disequilibrium states) can be derived from a simple qualitative stability analysis. For the sand-rich local shoreline behaviour case, two fundamentally different responses of the shoreline (diffusive and anti-diffusive), triggered by small changes of the shoreline cross-shore position, can be inferred purely through analysis of the causal pathways. Explicit depiction of feedback-structure diagrams is beneficial when developing numerical models to explore coastal morphological futures. 
By explicitly mapping the feedbacks included and neglected within a model, the modeller can readily assess if critical feedback loops are included.
Morris, Melody K.; Saez-Rodriguez, Julio; Clarke, David C.; Sorger, Peter K.; Lauffenburger, Douglas A.
2011-01-01
Predictive understanding of cell signaling network operation based on general prior knowledge but consistent with empirical data in a specific environmental context is a current challenge in computational biology. Recent work has demonstrated that Boolean logic can be used to create context-specific network models by training proteomic pathway maps to dedicated biochemical data; however, the Boolean formalism is restricted to characterizing protein species as either fully active or inactive. To advance beyond this limitation, we propose a novel form of fuzzy logic sufficiently flexible to model quantitative data but also sufficiently simple to efficiently construct models by training pathway maps on dedicated experimental measurements. Our new approach, termed constrained fuzzy logic (cFL), converts a prior knowledge network (obtained from literature or interactome databases) into a computable model that describes graded values of protein activation across multiple pathways. We train a cFL-converted network to experimental data describing hepatocytic protein activation by inflammatory cytokines and demonstrate the application of the resultant trained models for three important purposes: (a) generating experimentally testable biological hypotheses concerning pathway crosstalk, (b) establishing capability for quantitative prediction of protein activity, and (c) prediction and understanding of the cytokine release phenotypic response. Our methodology systematically and quantitatively trains a protein pathway map summarizing curated literature to context-specific biochemical data. This process generates a computable model yielding successful prediction of new test data and offering biological insight into complex datasets that are difficult to fully analyze by intuition alone. PMID:21408212
Cervical Vertebral Body's Volume as a New Parameter for Predicting the Skeletal Maturation Stages.
Choi, Youn-Kyung; Kim, Jinmi; Yamaguchi, Tetsutaro; Maki, Koutaro; Ko, Ching-Chang; Kim, Yong-Il
2016-01-01
This study aimed to determine the correlation between the volumetric parameters derived from the images of the second, third, and fourth cervical vertebrae by using cone beam computed tomography with skeletal maturation stages and to propose a new formula for predicting skeletal maturation by using regression analysis. We obtained the estimation of skeletal maturation levels from hand-wrist radiographs and volume parameters derived from the second, third, and fourth cervical vertebrae bodies from 102 Japanese patients (54 women and 48 men, 5-18 years of age). We performed Pearson's correlation coefficient analysis and simple regression analysis. All volume parameters derived from the second, third, and fourth cervical vertebrae exhibited statistically significant correlations (P < 0.05). The simple regression model with the greatest R-square indicated the fourth-cervical-vertebra volume as an independent variable with a variance inflation factor less than ten. The explanation power was 81.76%. Volumetric parameters of cervical vertebrae using cone beam computed tomography are useful in regression models. The derived regression model has the potential for clinical application as it enables a simple and quantitative analysis to evaluate skeletal maturation level.
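The core statistical step, a simple regression of maturation level on fourth-cervical-vertebra volume with its explained variance, can be sketched in a few lines. The data points below are fabricated for illustration; only the procedure (least-squares fit plus R²) mirrors the study's approach, and the study's actual explanation power was 81.76%.

```python
import numpy as np

# Hypothetical data: C4 vertebral body volume vs. a skeletal maturation score.
volume = np.array([450., 520., 610., 700., 820., 900.])   # mm^3, invented
maturation = np.array([2.0, 3.1, 4.2, 5.0, 6.2, 6.8])     # stage score, invented

# Simple (one-predictor) linear regression by least squares
slope, intercept = np.polyfit(volume, maturation, 1)
pred = slope * volume + intercept

# R^2: proportion of variance in maturation explained by volume
ss_res = np.sum((maturation - pred) ** 2)
ss_tot = np.sum((maturation - maturation.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(round(float(r_squared), 3))   # explanation power of the toy fit
```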
Cervical Vertebral Body's Volume as a New Parameter for Predicting the Skeletal Maturation Stages
Choi, Youn-Kyung; Kim, Jinmi; Maki, Koutaro; Ko, Ching-Chang
2016-01-01
This study aimed to determine the correlation between the volumetric parameters derived from the images of the second, third, and fourth cervical vertebrae by using cone beam computed tomography with skeletal maturation stages and to propose a new formula for predicting skeletal maturation by using regression analysis. We obtained the estimation of skeletal maturation levels from hand-wrist radiographs and volume parameters derived from the second, third, and fourth cervical vertebrae bodies from 102 Japanese patients (54 women and 48 men, 5–18 years of age). We performed Pearson's correlation coefficient analysis and simple regression analysis. All volume parameters derived from the second, third, and fourth cervical vertebrae exhibited statistically significant correlations (P < 0.05). The simple regression model with the greatest R-square indicated the fourth-cervical-vertebra volume as an independent variable with a variance inflation factor less than ten. The explanation power was 81.76%. Volumetric parameters of cervical vertebrae using cone beam computed tomography are useful in regression models. The derived regression model has the potential for clinical application as it enables a simple and quantitative analysis to evaluate skeletal maturation level. PMID:27340668
Easy way to determine quantitative spatial resolution distribution for a general inverse problem
NASA Astrophysics Data System (ADS)
An, M.; Feng, M.
2013-12-01
Computing the spatial resolution of a solution is nontrivial and can be more difficult than solving the inverse problem itself. Most geophysical studies, except for tomographic ones, neglect the calculation of a practical spatial resolution almost entirely. In seismic tomography, a qualitative resolution length can be indicated by visual inspection of how well a synthetic structure is restored (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices provide resolution-length information, and computing a resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be determined directly via a simple one-parameter nonlinear inversion performed on a limited number of pairs of random synthetic models and their inverse solutions. The procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the inversion technique used to obtain the solution. Spatial resolution lengths are given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid at least for general linear inverse problems.
Dynamics of Individual cilia to external loading- A simple one dimensional picture
NASA Astrophysics Data System (ADS)
Swaminathan, Vinay; Hill, David; Superfine, R.
2008-10-01
From being called cellular janitors to swinging debauchers, cilia have captured the fascination of researchers for over 200 years. In cystic fibrosis and chronic obstructive pulmonary disease, where the cilia lose their function, the protective mucus layer in the lung thickens and mucociliary clearance breaks down, leading to inflammation along the airways and an increased rate of infection. A mechanistic understanding of mucus clearance depends on a quantitative assessment of axoneme dynamics and of the maximum force the cilia are capable of generating and imparting to the mucus layer. As with molecular motors, detailed quantitative measurements of dynamics under applied load are expected to be essential for developing predictive models. Based on our measurements of the dynamics of individual ciliary motion in human bronchial epithelial cells under an applied load, we present a simple one-dimensional model for axoneme dynamics; quantify the axoneme stiffness, the internal force generated by the axoneme, and the stall force; and show how the dynamics shed insight on the time dependence of internal force generation. The internal force generated by the axoneme is related to the ability of cilia to propel fluids and to their potential role in force sensing.
NASA Technical Reports Server (NTRS)
Stutzman, Warren L.
1989-01-01
This paper reviews the effects of precipitation on earth-space communication links operating in the 10 to 35 GHz frequency range. Emphasis is on the quantitative prediction of rain attenuation and depolarization. Discussions center on the models developed at Virginia Tech; comments on other models are included, as well as literature references to key works. Also included is system-level modeling for dual-polarized communication systems, with techniques for calculating antenna and propagation-medium effects. Simple models for the calculation of average annual attenuation and cross-polarization discrimination (XPD) are presented, as are calculations of worst-month statistics.
On the improbability of intelligent extraterrestrials
NASA Astrophysics Data System (ADS)
Bond, A.
1982-05-01
Discussions of the prevalence of extraterrestrial life generally remain ambiguous owing to the lack of a suitable model for the development of biology. In this paper a simple model based on neutral evolution theory is proposed which leads to quantitative values for the genome growth rate within a biosphere. It is hypothesised that genome size is a measure of organism complexity and hence an indicator of the likelihood of intelligence. The calculations suggest that organisms with the complexity of human beings may be rare, arising less often than once per galaxy.
The Interrelationship between Promoter Strength, Gene Expression, and Growth Rate
Klesmith, Justin R.; Detwiler, Emily E.; Tomek, Kyle J.; Whitehead, Timothy A.
2014-01-01
In exponentially growing bacteria, expression of heterologous protein impedes cellular growth rates. A quantitative understanding of the relationship between expression and growth rate will advance our ability to forward-engineer bacteria, which is important for metabolic engineering and synthetic biology applications. A recent study described a scaling model based on optimal allocation of ribosomes for protein translation. This model quantitatively predicts a linear relationship between microbial growth rate and heterologous protein expression with no free parameters. With the aim of validating this model, we rigorously quantified the fitness cost of gene expression by using a library of synthetic constitutive promoters to drive expression of two separate proteins (eGFP and amiE) in E. coli in different strains and growth media. In all cases, we demonstrate that the fitness cost is consistent with the previous findings. We expand upon the previous theory by introducing a simple promoter activity model to quantitatively predict how basal promoter strength relates to growth rate and protein expression. We then estimate the amount of protein expression needed to support high flux through a heterologous metabolic pathway and predict the sizable fitness cost associated with enzyme production. This work has broad implications across the applied biological sciences because it allows prediction of the interplay between promoter strength, protein expression, and the resulting cost to microbial growth rates. PMID:25286161
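The scaling model's central claim, a linear growth-expression trade-off, can be sketched in a few lines. The proteome-fraction symbols and all numerical values below are illustrative assumptions, not the paper's fitted quantities.

```python
# Hedged sketch of a linear ribosome-allocation trade-off: growth rate
# falls linearly as heterologous protein occupies a fraction phi of the
# proteome. All parameter values are illustrative.

def growth_rate(mu_max, phi, phi_max=0.48):
    """Linear trade-off: mu = mu_max * (1 - phi/phi_max).
    phi is the heterologous proteome fraction; phi_max is an assumed
    maximum allocatable fraction."""
    if not 0.0 <= phi <= phi_max:
        raise ValueError("phi outside the feasible range")
    return mu_max * (1.0 - phi / phi_max)

mu0 = 1.0  # maximal growth rate, 1/h (illustrative)
# A promoter driving twice the expression costs twice the growth-rate drop:
cost_weak = mu0 - growth_rate(mu0, 0.05)
cost_strong = mu0 - growth_rate(mu0, 0.10)
```

The linearity is the point: doubling the expressed proteome fraction doubles the fitness cost, with no free parameters beyond the two endpoints.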
Research of MPPT for photovoltaic generation based on two-dimensional cloud model
NASA Astrophysics Data System (ADS)
Liu, Shuping; Fan, Wei
2013-03-01
The cloud model is a mathematical representation of fuzziness and randomness in linguistic concepts. It represents a qualitative concept with an expected value Ex, entropy En, and hyper-entropy He, integrating the fuzziness and randomness of a linguistic concept in a unified way, and it offers a new method for transforming between qualitative and quantitative knowledge. This paper introduces an MPPT (maximum power point tracking) controller based on a two-dimensional cloud model, developed from an analysis of auto-optimizing MPPT control of photovoltaic power systems combined with cloud-model theory. Simulation results show that the cloud controller is simple, intuitive, strongly robust, and delivers better control performance.
Kinetics and efficiency of the hydrated electron-induced dehalogenation by the sulfite/UV process.
Li, Xuchun; Fang, Jingyun; Liu, Guifang; Zhang, Shujuan; Pan, Bingcai; Ma, Jun
2014-10-01
Hydrated electron (e(aq)(-)), one of the most reactive reducing species, has great potential for the removal and detoxification of recalcitrant contaminants. Here we provide quantitative insight into the availability and conversion of e(aq)(-) in a newly developed sulfite/UV process. Using monochloroacetic acid as a simple e(aq)(-) probe, the e(aq)(-)-induced dehalogenation kinetics in synthetic and surface water were well predicted by the developed models. The models interpreted the complex roles of pH and S(IV), and also quantitatively revealed the positive effects of UV intensity and temperature. The impacts of humic acid, ferrous ion, carbonate/bicarbonate, and the surface-water matrix were also examined. Despite the retardation of dehalogenation by electron scavengers, the process was effective even in surface water. The efficiency of the process is discussed and optimization approaches are proposed. This study provides a quantitative understanding of e(aq)(-)-induced dehalogenation by the sulfite/UV process, which is important for its potential application in water treatment. Copyright © 2014 Elsevier Ltd. All rights reserved.
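A minimal kinetic sketch of the probe decay described above, assuming pseudo-first-order conditions (a steady-state hydrated-electron concentration). The rate constant and concentrations are invented for illustration, not taken from the study.

```python
import math

# Illustrative pseudo-first-order model for e_aq(-)-induced
# dechlorination of monochloroacetic acid (MCAA): with a steady-state
# e_aq(-) level, [MCAA] decays exponentially. All values are assumed.

def mcaa_remaining(c0, k_e, e_aq_ss, t):
    """[MCAA] at time t (s), given the bimolecular rate constant k_e
    (M^-1 s^-1) and a steady-state [e_aq-] (M)."""
    k_obs = k_e * e_aq_ss  # pseudo-first-order constant, s^-1
    return c0 * math.exp(-k_obs * t)

c0 = 1.0e-4    # initial MCAA, M (assumed)
k_e = 1.0e9    # bimolecular rate constant, M^-1 s^-1 (assumed)
e_aq = 1.0e-12 # steady-state hydrated-electron level, M (assumed)
half_life = math.log(2) / (k_e * e_aq)  # seconds
```

Scavengers such as carbonate lower the steady-state e(aq)(-) level, which in this picture shows up directly as a smaller k_obs and a longer half-life.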
Simple mathematical law benchmarks human confrontations.
Johnson, Neil F; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto
2013-12-10
Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.
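The kind of benchmark law reported above, a "progress curve" in which the interval before the nth event scales as a power of n, can be illustrated with a small fitting sketch. The data, tau_1, and the exponent b are synthetic, not values from the paper.

```python
import math

# Sketch of a progress-curve benchmark: the interval before the nth
# event scales as tau_n = tau_1 * n**(-b). We recover b from synthetic
# intervals with a log-log least-squares fit.

def fit_progress_curve(intervals):
    """Fit tau_n = tau_1 * n**(-b); returns (tau_1, b)."""
    xs = [math.log(n) for n in range(1, len(intervals) + 1)]
    ys = [math.log(t) for t in intervals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return math.exp(intercept), -slope

true_tau1, true_b = 100.0, 0.7  # assumed ground truth
intervals = [true_tau1 * n ** (-true_b) for n in range(1, 21)]
tau1_hat, b_hat = fit_progress_curve(intervals)
```

On noise-free synthetic data the fit is exact; on real event series the estimated b is the quantity such analyses compare across domains.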
Li, Yuanpeng; Li, Fucui; Yang, Xinhao; Guo, Liu; Huang, Furong; Chen, Zhenqiang; Chen, Xingdan; Zheng, Shifu
2018-08-05
A rapid quantitative analysis model for determining glycated albumin (GA) content, based on attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR) combined with linear SiPLS and nonlinear SVM, has been developed. First, the true GA content in human serum was determined by the GA enzymatic method, and ATR-FTIR spectra were obtained for serum samples from a health-examination population. The spectral data of the whole mid-infrared region (4000-600 cm⁻¹) and of GA's characteristic region (1800-800 cm⁻¹) were used as the objects of quantitative analysis. Second, several preprocessing steps, including first derivative, second derivative, variable standardization, and spectral normalization, were performed. Finally, quantitative regression models were established using SiPLS and SVM, respectively. The SiPLS modeling results were: root mean square error of cross-validation (RMSECV) = 0.523 g/L, calibration coefficient (Rc) = 0.937, root mean square error of prediction (RMSEP) = 0.787 g/L, and prediction coefficient (Rp) = 0.938. The SVM modeling results were: RMSECV = 0.0048 g/L, Rc = 0.998, RMSEP = 0.442 g/L, and Rp = 0.916. The results indicated that model performance improved significantly after preprocessing and optimization of the characteristic regions, and that the modeling performance of nonlinear SVM was considerably better than that of linear SiPLS. Hence, the quantitative analysis model for GA in human serum based on ATR-FTIR combined with SiPLS and SVM is effective. It requires no sample preprocessing, is operationally simple and time-efficient, and provides a rapid and accurate method for GA content determination. Copyright © 2018 Elsevier B.V. All rights reserved.
Estimating tuberculosis incidence from primary survey data: a mathematical modeling approach.
Pandey, S; Chadha, V K; Laxminarayan, R; Arinaminpathy, N
2017-04-01
There is an urgent need for improved estimates of the burden of tuberculosis (TB). Our objective was to develop a new quantitative method based on mathematical modelling and to demonstrate its application to TB in India. We developed a simple model of TB transmission dynamics to estimate the annual incidence of TB disease from the annual risk of tuberculous infection and the prevalence of smear-positive TB. We first compared model estimates of annual infections per smear-positive TB case against previous empirical estimates from China, Korea and the Philippines. We then applied the model to estimate TB incidence in India, stratified by urban and rural settings. The model estimates agree with the previous empirical estimates. Applied to India, the model suggests an annual incidence of smear-positive TB of 89.8 per 100 000 population (95%CI 56.8-156.3). Results show differences between urban and rural TB: while an urban TB case infects more individuals per year, a rural TB case remains infectious for appreciably longer, suggesting the need for interventions tailored to these different settings. Simple models of TB transmission, in conjunction with the necessary data, can offer approaches to burden estimation that complement those currently in use.
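One back-of-envelope quantity behind such models, the annual number of infections generated per prevalent smear-positive case, follows directly from the annual risk of tuberculous infection (ARTI) and the smear-positive prevalence. The figures below are illustrative, not the study's estimates.

```python
# Annual new infections divided by prevalent smear-positive cases:
# (ARTI * N) / (prevalence_fraction * N) = ARTI / prevalence_fraction.
# Input numbers are illustrative only.

def infections_per_case(arti, prevalence_per_100k):
    """Annual infections generated per prevalent smear-positive case."""
    prevalence_fraction = prevalence_per_100k / 100_000
    return arti / prevalence_fraction

# e.g. ARTI of 1.5%/year and smear-positive prevalence of 150 per 100,000
rate = infections_per_case(0.015, 150.0)
```

A higher ARTI at fixed prevalence implies each case infects more people per year, which is the urban-versus-rural contrast the abstract highlights.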
NASA Astrophysics Data System (ADS)
Ramadhan, Rifqi; Prabowo, Rian Gilang; Aprilliyani, Ria; Basari
2018-02-01
The number of victims of acute cancer and tumors grows each year, and cancer has become one of the leading causes of death worldwide. Cancer or tumor cells grow abnormally, taking over and damaging the surrounding tissue. Cancers and tumors often show no definite symptoms in their early stages and can attack tissues deep inside the body, where they are not identifiable by visual human observation. Therefore, an early detection system that is cheap, quick, simple, and portable is essential for anticipating the further development of a cancer or tumor. Among the available modalities, microwave imaging is considered a cheaper, simpler, and more portable method. There are at least two simple image-reconstruction algorithms, Filtered Back Projection (FBP) and the Algebraic Reconstruction Technique (ART), which have been adopted in several common modalities. In this paper, both algorithms are compared by reconstructing the image of an artificial tissue model (phantom) with two different dielectric distributions. We address two performance comparisons: qualitative and quantitative analysis. Qualitative analysis covers the smoothness of the image and the success in distinguishing dielectric differences by visual inspection. Quantitative analysis includes the histogram, Structural Similarity Index (SSIM), Mean Squared Error (MSE), and Peak Signal-to-Noise Ratio (PSNR). As a result, the quantitative parameters of FBP show better values than those of ART. However, ART is more capable of distinguishing two different dielectric values than FBP, owing to its higher contrast and wider grayscale distribution.
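Two of the quantitative metrics named above, MSE and PSNR, are simple to compute. The sketch below uses tiny invented images and assumes an 8-bit peak value.

```python
import math

# MSE and PSNR for plain nested-list images; the images and the 8-bit
# peak value are illustrative assumptions.

def mse(img_a, img_b):
    """Mean squared error between two equal-sized 2-D images."""
    n = 0
    total = 0.0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            total += (a - b) ** 2
            n += 1
    return total / n

def psnr(img_a, img_b, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    err = mse(img_a, img_b)
    if err == 0:
        return float("inf")
    return 10.0 * math.log10(peak ** 2 / err)

reference = [[0, 64], [128, 255]]
degraded = [[0, 60], [130, 250]]
```

SSIM is a more involved windowed statistic; MSE and PSNR alone already rank reconstructions by pixelwise fidelity, which is why FBP can score better even when ART separates the dielectrics more visibly.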
Kato, Junki; Masaki, Ayako; Fujii, Keiichiro; Takino, Hisashi; Murase, Takayuki; Yonekura, Kentaro; Utsunomiya, Atae; Ishida, Takashi; Iida, Shinsuke; Inagaki, Hiroshi
2016-11-01
Detection of HTLV-1 provirus using paraffin tumor sections may assist the diagnosis of adult T-cell leukemia/lymphoma (ATLL). For the detection, non-quantitative PCR assay has been reported, but its usefulness and limitations remain unclear. To our knowledge, quantitative PCR assay using paraffin tumor sections has not been reported. Using paraffin sections from ATLLs and non-ATLL T-cell lymphomas, we first performed non-quantitative PCR for HTLV-1 provirus. Next, we determined tumor ratios and carried out quantitative PCR to obtain provirus copy numbers. The results were analyzed with a simple regression model and a novel criterion, cut-off using 95 % rejection limits. Our quantitative PCR assay showed an excellent association between tumor ratios and the copy numbers (r = 0.89, P < 0.0001). The 95 % rejection limits provided a statistical basis for the range for the determination of HTLV-1 involvement. Its application suggested that results of non-quantitative PCR assay should be interpreted very carefully and that our quantitative PCR assay is useful to estimate the status of HTLV-1 involvement in the tumor cases. In conclusion, our quantitative PCR assay using paraffin tumor sections may be useful for the screening of ATLL cases, especially in HTLV-1 non-endemic areas where easy access to serological testing for HTLV-1 infection is limited. © 2016 Japanese Society of Pathology and John Wiley & Sons Australia, Ltd.
A proposed mathematical model for sleep patterning.
Lawder, R E
1984-01-01
The simple model of a ramp, intersecting a triangular waveform, yields results which conform with seven generalized observations of sleep patterning; including the progressive lengthening of 'rapid-eye-movement' (REM) sleep periods within near-constant REM/nonREM cycle periods. Predicted values of REM sleep time, and of Stage 3/4 nonREM sleep time, can be computed using the observed values of other parameters. The distributions of the actual REM and Stage 3/4 times relative to the predicted values were closer to normal than the distributions relative to simple 'best line' fits. It was found that sleep onset tends to occur at a particular moment in the individual subject's '90-min cycle' (the use of a solar time-scale masks this effect), which could account for a subject with a naturally short sleep/wake cycle synchronizing to a 24-h rhythm. A combined 'sleep control system' model offers quantitative simulation of the sleep patterning of endogenous depressives and, with a different perturbation, qualitative simulation of the symptoms of narcolepsy.
Quantitative, steady-state properties of Catania's computational model of the operant reserve.
Berg, John P; McDowell, J J
2011-05-01
Catania (2005) found that a computational model of the operant reserve (Skinner, 1938) produced realistic behavior in initial, exploratory analyses. Although Catania's operant reserve computational model demonstrated potential to simulate varied behavioral phenomena, the model was not systematically tested. The current project replicated and extended the Catania model, clarified its capabilities through systematic testing, and determined the extent to which it produces behavior corresponding to matching theory. Significant departures from both classic and modern matching theory were found in behavior generated by the model across all conditions. The results suggest that a simple, dynamic operant model of the reflex reserve does not simulate realistic steady state behavior. Copyright © 2011 Elsevier B.V. All rights reserved.
Simple spatial scaling rules behind complex cities.
Li, Ruiqi; Dong, Lei; Zhang, Jiang; Wang, Xinran; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene
2017-11-28
Although most wealth and innovation are the result of human interaction and cooperation, we are not yet able to quantitatively predict the spatial distributions of the three main elements of cities: population, roads, and socioeconomic interactions. With a simple model based mainly on spatial attraction and matching growth mechanisms, we reveal that the spatial scaling rules of these three elements fit within a consistent framework, which allows us to use any single observation to infer the others. All numerical and theoretical results are consistent with empirical data from ten representative cities. In addition, our model provides a general explanation of the origins of the universal super- and sub-linear aggregate scaling laws and accurately predicts kilometre-level socioeconomic activity. Our work opens a new avenue for uncovering the evolution of cities in terms of the interplay among urban elements, and it has a broad range of applications.
Control of DNA strand displacement kinetics using toehold exchange.
Zhang, David Yu; Winfree, Erik
2009-12-02
DNA is increasingly being used as the engineering material of choice for the construction of nanoscale circuits, structures, and motors. Many of these enzyme-free constructions function by DNA strand displacement reactions. The kinetics of strand displacement can be modulated by toeholds, short single-stranded segments of DNA that colocalize reactant DNA molecules. Recently, the toehold exchange process was introduced as a method for designing fast and reversible strand displacement reactions. Here, we characterize the kinetics of DNA toehold exchange and model it as a three-step process. This model is simple and quantitatively predicts the kinetics of 85 different strand displacement reactions from the DNA sequences. Furthermore, we use toehold exchange to construct a simple catalytic reaction. This work improves the understanding of the kinetics of nucleic acid reactions and will be useful in the rational design of dynamic DNA and RNA circuits and nanodevices.
[Quality assurance of the renal applications software].
del Real Núñez, R; Contreras Puertas, P I; Moreno Ortega, E; Mena Bares, L M; Maza Muret, F R; Latre Romero, J M
2007-01-01
The need for quality assurance of all technical aspects of nuclear medicine studies is widely recognised. However, little attention has been paid to quality assurance of the applications software. The work reported here aims at verifying the analysis software used for processing renal nuclear medicine studies (renograms). Software tools were used to build a synthetic dynamic model of the renal system. The model consists of two phases: perfusion and function. The organs of interest (kidneys, bladder and aortic artery) were simple geometric forms, and the uptake of the renal structures was described by mathematical functions. Curves corresponding to normal or pathological conditions were simulated for the kidneys, bladder and aortic artery by appropriate selection of parameters. There was no difference between the parameters of the mathematical curves and the quantitative data produced by the renal analysis program. Our test procedure is simple to apply, reliable, reproducible and rapid for verifying renal applications software.
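The QA strategy above, synthetic curves with known ground truth fed to the analysis, can be sketched as follows. The gamma-variate-like curve shape and its parameters are assumptions for illustration, not the paper's functions.

```python
import math

# QA sketch: generate a synthetic time-activity curve from a known
# mathematical function, run a toy "analysis" on it, and check the
# quantitative output against the known ground truth.

def renal_curve(t, amplitude=100.0, tau=4.0):
    """Gamma-variate-like uptake/washout curve; peaks at t = tau."""
    return amplitude * (t / tau) * math.exp(1.0 - t / tau)

def time_to_peak(curve, times):
    """The kind of quantitative parameter a renogram program reports."""
    samples = [curve(t) for t in times]
    return times[samples.index(max(samples))]

times = [0.5 * i for i in range(0, 61)]   # 0 to 30 min, 30 s sampling
t_max = time_to_peak(renal_curve, times)  # ground truth is tau = 4.0
```

Any discrepancy between the recovered parameter and the generating function's known value would flag a defect in the analysis software under test.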
Dynamics of liquid spreading on solid surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalliadasis, S.; Chang, H.C.
1996-09-01
Using simple scaling arguments and a precursor film model, the authors show that the appropriate macroscopic contact angle θ during the slow spreading of a completely or partially wetting liquid, under conditions of viscous flow and small slopes, should be described by tan θ = [tan³θₑ − 9 Ca log η]^(1/3), where θₑ is the static contact angle, Ca is the capillary number, and η is a scaled Hamaker constant. Using this simple relation as a boundary condition, the authors are able to quantitatively model, without any empirical parameter, the spreading dynamics of several classical spreading phenomena (capillary rise, sessile, and pendant drop spreading) by simply equating the slope of the leading-order static bulk region to the dynamic contact-angle boundary condition, without performing a matched asymptotic analysis for each case independently as is usually done in the literature.
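The scaling relation can be written as a small function. Note that the grouping of the logarithmic term (read here as 9·Ca·log η) and all parameter values below are our reading of the garbled source notation; treat this as an illustrative sketch, not the paper's exact expression.

```python
import math

# Sketch of the dynamic contact-angle relation, with the log-term
# grouping and all parameter values assumed for illustration.

def dynamic_contact_angle(theta_e, ca, eta):
    """Macroscopic contact angle (radians) during slow spreading.

    theta_e : static contact angle (radians)
    ca      : capillary number (Ca = mu * U / sigma)
    eta     : scaled Hamaker constant, 0 < eta << 1
    """
    cube = math.tan(theta_e) ** 3 - 9.0 * ca * math.log(eta)
    return math.atan(cube ** (1.0 / 3.0))

theta_e = math.radians(10.0)  # assumed static angle
slow = dynamic_contact_angle(theta_e, 1e-5, 1e-4)
fast = dynamic_contact_angle(theta_e, 1e-3, 1e-4)
```

With η < 1 the log term is negative, so the dynamic angle grows with spreading speed and reduces to the static angle as Ca goes to zero, which is the qualitative behavior the relation encodes.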
Determining absolute protein numbers by quantitative fluorescence microscopy.
Verdaasdonk, Jolien Suzanne; Lawrimore, Josh; Bloom, Kerry
2014-01-01
Biological questions are increasingly being addressed using a wide range of quantitative analytical tools to examine protein complex composition. Knowledge of the absolute number of proteins present provides insights into organization, function, and maintenance and is used in mathematical modeling of complex cellular dynamics. In this chapter, we outline and describe three microscopy-based methods for determining absolute protein numbers--fluorescence correlation spectroscopy, stepwise photobleaching, and ratiometric comparison of fluorescence intensity to known standards. In addition, we discuss the various fluorescently labeled proteins that have been used as standards for both stepwise photobleaching and ratiometric comparison analysis. A detailed procedure for determining absolute protein number by ratiometric comparison is outlined in the second half of this chapter. Counting proteins by quantitative microscopy is a relatively simple yet very powerful analytical tool that will increase our understanding of protein complex composition. © 2014 Elsevier Inc. All rights reserved.
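The ratiometric-comparison method described above reduces to a single proportion. The intensities and the standard's copy number below are hypothetical values for illustration.

```python
# Ratiometric counting: compare a spot's fluorescence intensity to a
# standard with a known copy number imaged under identical conditions.
# All numbers are hypothetical.

def protein_count(spot_intensity, standard_intensity, standard_copies):
    """Absolute copy number by ratiometric comparison to a standard."""
    return standard_copies * spot_intensity / standard_intensity

# e.g. against a standard with 1000 known copies (hypothetical)
copies = protein_count(spot_intensity=340.0,
                       standard_intensity=850.0,
                       standard_copies=1000)
```

The method's accuracy rests entirely on the standard: acquisition settings, labeling stoichiometry, and photobleaching must match between sample and standard for the ratio to be meaningful.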
Human judgment vs. quantitative models for the management of ecological resources.
Holden, Matthew H; Ellner, Stephen P
2016-07-01
Despite major advances in quantitative approaches to natural resource management, there has been resistance to using these tools in the actual practice of managing ecological populations. Given a managed system and a set of assumptions, translated into a model, optimization methods can be used to solve for the most cost-effective management actions. However, when the underlying assumptions are not met, such methods can potentially lead to decisions that harm the environment and economy. Managers who develop decisions based on past experience and judgment, without the aid of mathematical models, can potentially learn about the system and develop flexible management strategies. However, these strategies are often based on subjective criteria and equally invalid and often unstated assumptions. Given the drawbacks of both methods, it is unclear whether simple quantitative models improve environmental decision making over expert opinion. In this study, we explore how well students, using their experience and judgment, manage simulated fishery populations in an online computer game and compare their management outcomes to the performance of model-based decisions. We consider harvest decisions generated using four different quantitative models: (1) the model used to produce the simulated population dynamics observed in the game, with the values of all parameters known (as a control), (2) the same model, but with unknown parameter values that must be estimated during the game from observed data, (3) models that are structurally different from those used to simulate the population dynamics, and (4) a model that ignores age structure. Humans on average performed much worse than the models in cases 1-3, but in a small minority of scenarios, models produced worse outcomes than those resulting from students making decisions based on experience and judgment. 
When the models ignored age structure, they generated poorly performing management decisions, but still outperformed students using experience and judgment 66% of the time. © 2016 by the Ecological Society of America.
Astrobiological complexity with probabilistic cellular automata.
Vukotić, Branislav; Ćirković, Milan M
2012-08-01
The search for extraterrestrial life and intelligence constitutes one of the major endeavors in science, yet it has as yet been quantitatively modeled only rarely, and then in a cursory and superficial fashion. We argue that probabilistic cellular automata (PCA) represent the best quantitative framework for modeling the astrobiological history of the Milky Way and its Galactic Habitable Zone. The relevant astrobiological parameters are modeled as the elements of the input probability matrix for the PCA kernel. With the underlying simplicity of cellular automata constructs, this approach enables a quick analysis of the large and ambiguous space of input parameters. We perform a simple clustering analysis of typical astrobiological histories under a "Copernican" choice of input parameters and discuss the relevant boundary conditions of practical importance for planning and guiding empirical astrobiological and SETI projects. In addition to showing how the present framework is adaptable to more complex situations and to updated observational databases from current and near-future space missions, we demonstrate how numerical results can offer a cautious rationale for the continuation of practical SETI searches.
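A toy probabilistic cellular automaton in the spirit described above: each site holds a discrete astrobiological state and advances stochastically, with a transition probability that grows with the number of living neighbours. The state set and probabilities are invented for illustration.

```python
import random

# Minimal PCA sketch: sites on a ring are in state 0 (dead),
# 1 (simple life), or 2 (complex life); transition probabilities
# are invented for illustration.

def step(states, p_base=0.02, p_neighbor=0.1, rng=None):
    """One synchronous update of a ring of sites."""
    rng = rng or random
    n = len(states)
    new = list(states)
    for i, s in enumerate(states):
        if s == 2:
            continue  # already at the top state
        living = sum(1 for j in (i - 1, (i + 1) % n) if states[j] > 0)
        if rng.random() < p_base + p_neighbor * living:
            new[i] = s + 1
    return new

rng = random.Random(42)  # seeded for reproducibility
states = [0] * 50
states[25] = 1           # one inhabited seed site
for _ in range(200):
    states = step(states, rng=rng)
```

Sweeping p_base and p_neighbor and clustering the resulting histories is the kind of parameter-space exploration the abstract advocates.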
Sriyudthsak, Kansuporn; Iwata, Michio; Hirai, Masami Yokota; Shiraishi, Fumihide
2014-06-01
The availability of large-scale datasets has led to increasing effort to understand the characteristics of metabolic reaction networks. However, because large-scale data are semi-quantitative and may contain biological variation and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-DImensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for fitting. The method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and to have the potential to handle metabolome data from a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in the plant Arabidopsis thaliana. The results confirm that the constructed mathematical model agrees satisfactorily with the time-series datasets of seven metabolite concentrations.
Microtubules soften due to cross-sectional flattening
Memet, Edvin; Hilitsk, Feodor; Morris, Margaret A.; ...
2018-06-01
We use optical trapping to continuously bend an isolated microtubule while simultaneously measuring the applied force and the resulting filament strain, thus allowing us to determine its elastic properties over a wide range of applied strains. We find that, while in the low-strain regime, microtubules may be quantitatively described in terms of the classical Euler-Bernoulli elastic filament, above a critical strain they deviate from this simple elastic model, showing a softening response with increasing deformations. A three-dimensional thin-shell model, in which the increased mechanical compliance is caused by flattening and eventual buckling of the filament cross-section, captures this softening effect in the high strain regime and yields quantitative values of the effective mechanical properties of microtubules. Our results demonstrate that properties of microtubules are highly dependent on the magnitude of the applied strain and offer a new interpretation for the large variety in microtubule mechanical data measured by different methods.
Hydrogen Donor-Acceptor Fluctuations from Kinetic Isotope Effects: A Phenomenological Model
Roston, Daniel; Cheatum, Christopher M.; Kohen, Amnon
2012-01-01
Kinetic isotope effects (KIEs) and their temperature dependence can probe the structural and dynamic nature of enzyme-catalyzed proton or hydride transfers. The molecular interpretation of their temperature dependence requires expensive and specialized quantum mechanics/molecular mechanics (QM/MM) calculations to provide a quantitative molecular understanding. Some currently available phenomenological models use a non-adiabatic assumption that is not appropriate for most hydride and proton-transfer reactions, while others require more parameters than the experimental data justify. Here we propose a phenomenological interpretation of KIEs based on a simple method to quantitatively link the size and temperature dependence of KIEs to a conformational distribution of the catalyzed reaction. The present model assumes adiabatic hydrogen tunneling, and by fitting experimental KIE data, the model yields a population distribution for fluctuations of the distance between donor and acceptor atoms. Fits to data from a variety of proton and hydride transfers catalyzed by enzymes and their mutants, as well as non-enzymatic reactions, reveal that steeply temperature-dependent KIEs indicate the presence of at least two distinct conformational populations, each with different kinetic behaviors. We present the results of these calculations for several published cases and discuss how the predictions of the calculations might be experimentally tested. The current analysis does not replace QM/MM investigations, but it provides a fast and accessible way to quantitatively interpret KIEs in the context of a Marcus-like model. PMID:22857146
Hass, Joachim; Hertäg, Loreen; Durstewitz, Daniel
2016-01-01
The prefrontal cortex is centrally involved in a wide range of cognitive functions and their impairment in psychiatric disorders. Yet, the computational principles that govern the dynamics of prefrontal neural networks, and link their physiological, biochemical and anatomical properties to cognitive functions, are not well understood. Computational models can help to bridge the gap between these different levels of description, provided they are sufficiently constrained by experimental data and capable of predicting key properties of the intact cortex. Here, we present a detailed network model of the prefrontal cortex, based on a simple computationally efficient single neuron model (simpAdEx), with all parameters derived from in vitro electrophysiological and anatomical data. Without additional tuning, this model could be shown to quantitatively reproduce a wide range of measures from in vivo electrophysiological recordings, to a degree where simulated and experimentally observed activities were statistically indistinguishable. These measures include spike train statistics, membrane potential fluctuations, local field potentials, and the transmission of transient stimulus information across layers. We further demonstrate that model predictions are robust against moderate changes in key parameters, and that synaptic heterogeneity is a crucial ingredient to the quantitative reproduction of in vivo-like electrophysiological behavior. Thus, we have produced a physiologically highly valid, in a quantitative sense, yet computationally efficient PFC network model, which helped to identify key properties underlying spike time dynamics as observed in vivo, and can be harvested for in-depth investigation of the links between physiology and cognition. PMID:27203563
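The single-neuron building block can be sketched with a standard adaptive exponential integrate-and-fire (AdEx) neuron, of which simpAdEx is a simplified variant. The parameters below are generic textbook values (assumed here for illustration, not the in vitro-derived values of the paper), and the integration is plain forward Euler:

```python
import math

# Illustrative sketch only: a standard AdEx neuron (generic textbook
# parameters, not the paper's fitted simpAdEx values), forward-Euler
# integrated, spiking under a constant suprathreshold current.

C, gL, EL = 281.0, 30.0, -70.6             # pF, nS, mV
VT, DT = -50.4, 2.0                        # mV
tau_w, a, b, Vr = 144.0, 4.0, 80.5, -70.6  # ms, nS, pA, mV

def run(I=1000.0, dt=0.05, t_max=500.0):
    """Simulate t_max ms at constant input current I (pA); count spikes."""
    V, w, spikes = EL, 0.0, 0
    for _ in range(int(t_max / dt)):
        # exponential term capped to avoid overflow near a spike
        exp_term = gL * DT * math.exp(min((V - VT) / DT, 20.0))
        dV = dt * (-gL * (V - EL) + exp_term + I - w) / C
        dw = dt * (a * (V - EL) - w) / tau_w  # uses pre-update V
        V += dV
        w += dw
        if V > 0.0:                            # spike: reset and adapt
            V, w, spikes = Vr, w + b, spikes + 1
    return spikes

print(run())  # a suprathreshold current produces repetitive, adapting spiking
```

The paper's simpAdEx further simplifies this model class for computational efficiency; the sketch above only conveys the kind of single-neuron dynamics the network is built from.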
Accounting for nitrogen fixation in simple models of lake nitrogen loading/export.
Ruan, Xiaodan; Schellenger, Frank; Hellweger, Ferdi L
2014-05-20
Coastal eutrophication, an important global environmental problem, is primarily caused by excess nitrogen and management efforts consequently focus on lowering watershed N export (e.g., by reducing fertilizer use). Simple quantitative models are needed to evaluate alternative scenarios at the watershed scale. Existing models generally assume that, for a specific lake/reservoir, a constant fraction of N loading is exported downstream. However, N fixation by cyanobacteria may increase when the N loading is reduced, which may change the (effective) fraction of N exported. Here we present a model that incorporates this process. The model (Fixation and Export of Nitrogen from Lakes, FENL) is based on a steady-state mass balance with loading, output, loss/retention, and N fixation, where the amount fixed is a function of the N/P ratio of the loading (i.e., when N/P is less than a threshold value, N is fixed). Three approaches are used to parametrize and evaluate the model, including microcosm lab experiments, lake field observations/budgets and lake ecosystem model applications. Our results suggest that N export will not be reduced proportionally with N loading, which needs to be considered when evaluating management scenarios.
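The threshold behavior described above can be sketched with a toy steady-state budget. All numbers and the functional form are illustrative assumptions, not the FENL calibration:

```python
# Schematic steady-state lake N budget in the spirit of FENL (all numbers
# and the threshold form are illustrative assumptions):
#   export = (L_N + fixation) * (1 - retention)
# where cyanobacteria fix N only when the loading N/P ratio falls below a
# threshold, topping N up toward that ratio.

def n_export(L_N, L_P, np_threshold=16.0, retention=0.4):
    fixation = max(0.0, np_threshold * L_P - L_N)  # fix N up to threshold N/P
    return (L_N + fixation) * (1.0 - retention)

base = n_export(L_N=1600.0, L_P=100.0)     # N/P = 16: no fixation
reduced = n_export(L_N=800.0, L_P=100.0)   # N loading halved, N/P = 8
print(base, reduced)  # export does not fall in proportion to the loading cut
```

In this deliberately extreme toy case fixation fully offsets the halved N loading, so export is unchanged; the paper's general point is the milder version, that export falls less than proportionally when fixation responds.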
Exploring electrical resistance: a novel kinesthetic model helps to resolve some misconceptions
NASA Astrophysics Data System (ADS)
Cottle, Dan; Marshall, Rick
2016-09-01
A simple ‘hands on’ physical model is described which displays analogous behaviour to some aspects of the free electron theory of metals. Using it, students can get a real feel for what is going on inside a metallic conductor. Ohm's law, the temperature dependence of resistivity, the dependence of resistance on geometry, how the conduction electrons respond to a potential difference, and the concepts of mean free path and drift speed of the conduction electrons can all be explored. Some quantitative results obtained by using the model are compared with the predictions of Drude's free electron theory of electrical conduction.
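For the quantitative comparison mentioned at the end, Drude's expression for the conductivity, sigma = n e^2 tau / m, gives resistivities of the right order of magnitude for real metals. The relaxation time below is an assumed value; the carrier density is copper's:

```python
# Drude-model estimate of metallic resistivity: sigma = n e^2 tau / m.
# The carrier density is copper's; the relaxation time is an assumed
# room-temperature value.

e = 1.602e-19   # elementary charge, C
m = 9.109e-31   # electron mass, kg
n = 8.5e28      # conduction-electron density of copper, m^-3
tau = 2.5e-14   # assumed mean time between collisions, s

sigma = n * e**2 * tau / m   # conductivity, S/m
rho = 1.0 / sigma            # resistivity, Ohm*m
print(f"{rho:.2e}")  # close to copper's measured ~1.7e-8 Ohm*m
```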
Wilczynski, Bartek; Furlong, Eileen E M
2010-04-15
Development is regulated by dynamic patterns of gene expression, which are orchestrated through the action of complex gene regulatory networks (GRNs). Substantial progress has been made in modeling transcriptional regulation in recent years, ranging from qualitative "coarse-grain" models operating at the gene level to very "fine-grain" quantitative models operating at the biophysical transcription factor-DNA level. Recent advances in genome-wide studies have revealed an enormous increase in the size and complexity of GRNs. Even relatively simple developmental processes can involve hundreds of regulatory molecules, with extensive interconnectivity and cooperative regulation. This leads to an explosion in the number of regulatory functions, effectively impeding Boolean-based qualitative modeling approaches. At the same time, the lack of information on the biophysical properties of the majority of transcription factors within a global network restricts quantitative approaches. In this review, we explore the current challenges in moving from modeling medium-scale, well-characterized networks to more poorly characterized global networks. We suggest integrating coarse- and fine-grain approaches to model gene regulatory networks in cis. We focus on two very well-studied examples from Drosophila, which likely represent typical developmental regulatory modules across metazoans. Copyright (c) 2009 Elsevier Inc. All rights reserved.
Growth of wormlike micelles in nonionic surfactant solutions: Quantitative theory vs. experiment.
Danov, Krassimir D; Kralchevsky, Peter A; Stoyanov, Simeon D; Cook, Joanne L; Stott, Ian P; Pelan, Eddie G
2018-06-01
Despite the considerable advances of molecular-thermodynamic theory of micelle growth, agreement between theory and experiment has been achieved only in isolated cases. A general theory that can provide a self-consistent quantitative description of the growth of wormlike micelles in mixed surfactant solutions, including the experimentally observed high peaks in viscosity and aggregation number, is still missing. As a step toward the creation of such a theory, here we consider the simplest system: nonionic wormlike surfactant micelles of polyoxyethylene alkyl ethers, CiEj. Our goal is to construct a molecular-thermodynamic model that is in agreement with the available experimental data. For this goal, we systematized data for the micelle mean mass aggregation number, from which the micelle growth parameter was determined at various temperatures. None of the available models can give a quantitative description of these data. We constructed a new model, which is based on theoretical expressions for the interfacial-tension, headgroup-steric and chain-conformation components of the micelle free energy, along with appropriate expressions for the parameters of the model, including their temperature and curvature dependencies. Special attention was paid to the surfactant chain-conformation free energy, for which a new, more general formula was derived. As a result, relatively simple theoretical expressions are obtained. All parameters that enter these expressions are known, which facilitates the theoretical modeling of micelle growth for various nonionic surfactants in excellent agreement with experiment. The constructed model can serve as a basis that can be further upgraded to obtain a quantitative description of micelle growth in more complicated systems, including binary and ternary mixtures of nonionic, ionic and zwitterionic surfactants, which determine the viscosity and stability of various formulations in personal-care and household detergency.
Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Novel method for quantitative ANA measurement using near-infrared imaging.
Peterson, Lisa K; Wells, Daniel; Shaw, Laura; Velez, Maria-Gabriela; Harbeck, Ronald; Dragone, Leonard L
2009-09-30
Antinuclear antibodies (ANA) have been detected in patients with systemic rheumatic diseases and are used in the screening and/or diagnosis of autoimmunity in patients as well as in mouse models of systemic autoimmunity. Indirect immunofluorescence (IIF) on HEp-2 cells is the gold standard for ANA screening. However, its usefulness in diagnosis, prognosis and monitoring of disease activity is limited by the lack of standardization in performing the technique, subjectivity in interpreting the results, and the fact that it is only semi-quantitative. Various immunological techniques have been developed in an attempt to improve upon the method to quantify ANA, including enzyme-linked immunosorbent assays (ELISAs), line immunoassays (LIAs), multiplexed bead immunoassays and IIF on substrates other than HEp-2 cells. Yet IIF on HEp-2 cells remains the most common screening method for ANA. In this study, we describe a simple quantitative method to detect ANA which combines IIF on HEp-2-coated slides with analysis using a near-infrared imaging (NII) system. Using NII to determine ANA titer, 86.5% (32 of 37) of the titers for human patient samples were within 2 dilutions of those determined by IIF, which is the acceptable range for proficiency testing. Combining an initial screening for nuclear staining by microscopy with titration by NII resulted in 97.3% (36 of 37) of the titers being within two dilutions of those determined by IIF. The NII method for quantitative ANA measurement using serum from both patients and mice with autoimmunity provides a fast, relatively simple, objective, sensitive and reproducible assay, which could easily be standardized for comparison between laboratories.
Validating and improving a zero-dimensional stack voltage model of the Vanadium Redox Flow Battery
NASA Astrophysics Data System (ADS)
König, S.; Suriyah, M. R.; Leibfried, T.
2018-02-01
Simple, computationally efficient battery models can contribute significantly to the development of flow batteries. However, validation studies for these models on an industrial-scale stack level are rarely published. We first extensively present a simple stack voltage model for the Vanadium Redox Flow Battery. For modeling the concentration overpotential, we derive mass transfer coefficients from experimental results presented in the 1990s. The calculated mass transfer coefficient of the positive half-cell is 63% larger than of the negative half-cell, which is not considered in models published to date. Further, we advance the concentration overpotential model by introducing an apparent electrochemically active electrode surface which differs from the geometric electrode area. We use the apparent surface as fitting parameter for adapting the model to experimental results of a flow battery manufacturer. For adapting the model, we propose a method for determining the agreement between model and reality quantitatively. To protect the manufacturer's intellectual property, we introduce a normalization method for presenting the results. For the studied stack, the apparent electrochemically active surface of the electrode is 41% larger than its geometrical area. Hence, the current density in the diffusion layer is 29% smaller than previously reported for a zero-dimensional model.
Leakey, Tatiana I; Zielinski, Jerzy; Siegfried, Rachel N; Siegel, Eric R; Fan, Chun-Yang; Cooney, Craig A
2008-06-01
DNA methylation at cytosines is a widely studied epigenetic modification. Methylation is commonly detected using bisulfite modification of DNA followed by PCR and additional techniques such as restriction digestion or sequencing. These additional techniques are either laborious, require specialized equipment, or are not quantitative. Here we describe a simple algorithm that yields quantitative results from analysis of conventional four-dye-trace sequencing. We call this method Mquant and we compare it with the established laboratory method of combined bisulfite restriction assay (COBRA). This analysis of sequencing electropherograms provides a simple, easily applied method to quantify DNA methylation at specific CpG sites.
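The principle behind peak-height quantification can be sketched as follows. This is a generic illustration of the idea, not the published Mquant algorithm: after bisulfite conversion, methylated cytosines read as C and unmethylated ones as T, so the methylation level at a CpG follows from the two peak heights:

```python
# Generic sketch of peak-height quantification (not the published Mquant
# algorithm): after bisulfite conversion, unmethylated cytosines read as T
# and methylated ones remain C, so methylation at a CpG can be estimated
# from the C and T electropherogram peak heights at that position.

def methylation_fraction(peak_c, peak_t):
    """Fraction methylated at one CpG from electropherogram peak heights."""
    total = peak_c + peak_t
    return peak_c / total if total > 0 else 0.0

# Hypothetical peak heights at three CpG sites:
sites = [(820, 180), (450, 550), (60, 940)]
levels = [round(methylation_fraction(c, t), 2) for c, t in sites]
print(levels)  # [0.82, 0.45, 0.06]
```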
Model-assisted development of a laminography inspection system
NASA Astrophysics Data System (ADS)
Grandin, R.; Gray, J.
2012-05-01
Traditional computed tomography (CT) is an effective method of determining the internal structure of an object through non-destructive means; however, inspection of certain objects, such as those with planar geometries or with limited access, requires an alternate approach. One alternative is laminography, which has been the focus of a number of researchers in the past decade for both medical and industrial inspections. Many research efforts rely on geometrically simple analytical models, such as the Shepp-Logan phantom, for the development of their algorithms. Recent work at the Center for Non-Destructive Evaluation makes extensive use of a forward model, XRSIM, to study artifacts arising from the reconstruction method, the effects of complex geometries, and known issues such as high-density features on the laminography reconstruction process. The use of a model provides full knowledge of all aspects of the geometry and provides a means to quantitatively evaluate the impact of methods designed to reduce artifacts that are generated by the reconstruction methods or that result from the part geometry. We will illustrate the use of forward simulations to quantitatively assess reconstruction algorithm development and artifact reduction.
Mechanical behavior in living cells consistent with the tensegrity model
NASA Technical Reports Server (NTRS)
Wang, N.; Naruse, K.; Stamenovic, D.; Fredberg, J. J.; Mijailovich, S. M.; Tolic-Norrelykke, I. M.; Polte, T.; Mannix, R.; Ingber, D. E.
2001-01-01
Alternative models of cell mechanics depict the living cell as a simple mechanical continuum, porous filament gel, tensed cortical membrane, or tensegrity network that maintains a stabilizing prestress through incorporation of discrete structural elements that bear compression. Real-time microscopic analysis of cells containing GFP-labeled microtubules and associated mitochondria revealed that living cells behave like discrete structures composed of an interconnected network of actin microfilaments and microtubules when mechanical stresses are applied to cell surface integrin receptors. Quantitation of cell tractional forces and cellular prestress by using traction force microscopy confirmed that microtubules bear compression and are responsible for a significant portion of the cytoskeletal prestress that determines cell shape stability under conditions in which myosin light chain phosphorylation and intracellular calcium remained unchanged. Quantitative measurements of both static and dynamic mechanical behaviors in cells also were consistent with specific a priori predictions of the tensegrity model. These findings suggest that tensegrity represents a unified model of cell mechanics that may help to explain how mechanical behaviors emerge through collective interactions among different cytoskeletal filaments and extracellular adhesions in living cells.
More memory under evolutionary learning may lead to chaos
NASA Astrophysics Data System (ADS)
Diks, Cees; Hommes, Cars; Zeppini, Paolo
2013-02-01
We show that an increase of memory of past strategy performance in a simple agent-based innovation model, with agents switching between costly innovation and cheap imitation, can be quantitatively stabilising while at the same time qualitatively destabilising. As memory in the fitness measure increases, the amplitude of price fluctuations decreases, but at the same time a bifurcation route to chaos may arise. The core mechanism leading to the chaotic behaviour in this model with strategy switching is that the map obtained for the system with memory is a convex combination of an increasing linear function and a decreasing non-linear function.
A Fan-tastic Quantitative Exploration of Ohm's Law
NASA Astrophysics Data System (ADS)
Mitchell, Brandon; Ekey, Robert; McCullough, Roy; Reitz, William
2018-02-01
Teaching simple circuits and Ohm's law to students in the introductory classroom has been extensively investigated through the common practice of using incandescent light bulbs to help students develop a conceptual foundation before moving on to quantitative analysis. However, the bulb filaments' resistance has a large temperature dependence, which makes them less suitable as a tool for quantitative analysis. Some instructors show, either outright or through inquiry-based laboratory experiments, that light bulbs do not obey Ohm's law. Others avoid the subject altogether by using bulbs strictly for qualitative purposes and then later switching to resistors for a numerical analysis, or by changing the operating conditions of the bulb so that it is "barely" glowing. It seems incongruous to develop a conceptual basis for the behavior of simple circuits using bulbs only to later reveal that they do not follow Ohm's law. Recently, small computer fans were proposed as a suitable replacement for bulbs in qualitative analysis of simple circuits, where the current is related to the rotational speed of the fans. In this contribution, we demonstrate that fans can also be used for quantitative measurements and provide suggestions for successful classroom implementation.
A mathematical function for the description of nutrient-response curve
Ahmadi, Hamed
2017-01-01
Several mathematical equations have been proposed to model nutrient-response curves for animals and humans, justified by goodness of fit and/or biological mechanism. In this paper, a functional form of a generalized quantitative model based on the Rayleigh distribution principle for the description of nutrient-response phenomena is derived. The three parameters governing the curve a) have biological interpretations, b) may be used to calculate reliable estimates of nutrient-response relationships, and c) provide the basis for deriving relationships between nutrients and physiological responses. The new function was successfully applied to fit nutritional data obtained from 6 experiments covering a wide range of nutrients and responses. An evaluation and comparison were also done, based on simulated data sets, to check the suitability of the new model and the four-parameter logistic model for describing nutrient responses. This study indicates the usefulness and wide applicability of the newly introduced, simple, and flexible model when applied as a quantitative approach to characterizing nutrient-response curves. This new mathematical way to describe nutrient-response data, with some useful biological interpretations, has the potential to be used as an alternative approach in modeling nutritional response curves to estimate nutrient efficiency and requirements. PMID:29161271
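One plausible three-parameter form built on the Rayleigh cumulative distribution is sketched below. The exact function derived in the paper may differ; the parameters y0 (basal response), ymax (asymptote), and b (scale) are illustrative assumptions:

```python
import math

# One plausible three-parameter response built on the Rayleigh cumulative
# distribution (an assumption for illustration; the paper's exact functional
# form may differ): the response rises from a basal level y0 toward an
# asymptote ymax, with the scale b controlling nutrient efficiency.

def response(x, y0=0.2, ymax=1.0, b=2.0):
    return y0 + (ymax - y0) * (1.0 - math.exp(-x * x / (2.0 * b * b)))

doses = [0.0, 1.0, 2.0, 4.0, 8.0]
curve = [round(response(x), 3) for x in doses]
print(curve)  # monotone, saturating toward ymax
```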
Experimental verification of Pyragas-Schöll-Fiedler control.
von Loewenich, Clemens; Benner, Hartmut; Just, Wolfram
2010-09-01
We present an experimental realization of time-delayed feedback control proposed by Schöll and Fiedler. The scheme enables us to stabilize torsion-free periodic orbits in autonomous systems, and to overcome the so-called odd number limitation. The experimental control performance is in quantitative agreement with the bifurcation analysis of simple model systems. The results uncover some general features of the control scheme which are deemed to be relevant for a large class of setups.
Maximum current density and beam brightness achievable by laser-driven electron sources
NASA Astrophysics Data System (ADS)
Filippetto, D.; Musumeci, P.; Zolotorev, M.; Stupakov, G.
2014-02-01
This paper discusses the extension of the Child-Langmuir law for the maximum achievable current density in electron guns to different electron beam aspect ratios. Using a simple model, we derive quantitative formulas in good agreement with simulation codes. The new scaling laws for the peak current density of temporally long and transversely narrow initial beam distributions can be used to estimate the maximum beam brightness and suggest new paths for injector optimization.
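The baseline being generalized is the classical one-dimensional, nonrelativistic Child-Langmuir law for a planar diode, J = (4 eps0 / 9) sqrt(2e/m) V^(3/2) / d^2:

```python
import math

# The classical (1-D, nonrelativistic) Child-Langmuir law for the
# space-charge-limited current density of a planar diode:
#   J = (4*eps0/9) * sqrt(2*e/m) * V**1.5 / d**2

eps0 = 8.854e-12  # F/m
e = 1.602e-19     # C
m = 9.109e-31     # kg

def child_langmuir(V, d):
    """Current density (A/m^2) for gap voltage V (volts) and spacing d (m)."""
    return (4.0 * eps0 / 9.0) * math.sqrt(2.0 * e / m) * V**1.5 / d**2

J = child_langmuir(V=100e3, d=0.01)   # a 100 kV, 1 cm planar gap
print(f"{J:.3g}")  # about 7.4e5 A/m^2, i.e. roughly 74 A/cm^2
```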
Rapid Coarsening of Ion Beam Ripple Patterns by Defect Annihilation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Henri; Messlinger, Sebastian; Stoian, Georgiana
Ripple patterns formed on Pt(111) through grazing incidence ion beam erosion coarsen rapidly. At and below 450 K coarsening of the patterns is athermal and kinetic, unrelated to diffusion and surface free energy. Similar to the situation for sand dunes, coarsening takes place through annihilation reactions of mobile defects in the pattern. The defect velocity derived on the basis of a simple model agrees quantitatively with the velocity of monatomic steps illuminated by the ion beam.
Simple models for the simulation of submarine melt for a Greenland glacial system model
NASA Astrophysics Data System (ADS)
Beckmann, Johanna; Perrette, Mahé; Ganopolski, Andrey
2018-01-01
Two hundred marine-terminating Greenland outlet glaciers deliver more than half of the annually accumulated ice into the ocean and have played an important role in the Greenland ice sheet mass loss observed since the mid-1990s. Submarine melt may play a crucial role in the mass balance and position of the grounding line of these outlet glaciers. As the ocean warms, it is expected that submarine melt will increase, potentially driving outlet glacier retreat and contributing to sea level rise. Projections of the future contribution of outlet glaciers to sea level rise are hampered by the necessity to use models with extremely high resolution, of the order of a few hundred meters. That requirement holds not only when modeling outlet glaciers as stand-alone models but also when coupling them with high-resolution 3-D ocean models. In addition, fjord bathymetry data are mostly missing or inaccurate (errors of several hundreds of meters), which calls into question the benefit of using computationally expensive 3-D models for future predictions. Here we propose an alternative approach built on the use of a computationally efficient simple model of submarine melt based on turbulent plume theory. We show that such a simple model is in reasonable agreement with several available modeling studies. We performed a suite of experiments to analyze the sensitivity of these simple models to model parameters and climate characteristics. We found that the computationally cheap plume model demonstrates qualitatively similar behavior to 3-D general circulation models. To match the results of the 3-D models in a quantitative manner, a scaling factor of the order of 1 is needed for the plume models. We applied this approach to model submarine melt for six representative Greenland glaciers and found that the application of a line plume can produce submarine melt compatible with observational data.
Our results show that the line plume model is more appropriate than the cone plume model for simulating the average submarine melting of real glaciers in Greenland.
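The qualitative behavior of such plume-based melt parameterizations can be sketched with a simple scaling law. The exponents and prefactor below are illustrative assumptions (melt growing roughly with the cube root of subglacial discharge and linearly with ocean thermal forcing), not the calibrated values of the paper:

```python
# Schematic melt-rate scaling of the kind used in plume-based
# parameterizations (exponents and constant are illustrative assumptions):
# submarine melt grows roughly with the cube root of subglacial discharge q
# and with the ocean thermal forcing TF.

def submarine_melt(q, TF, k=0.1, alpha=1/3, beta=1.0):
    """Melt rate (arbitrary units) for discharge q and thermal forcing TF."""
    return k * q**alpha * TF**beta

cold = submarine_melt(q=100.0, TF=2.0)
warm = submarine_melt(q=100.0, TF=4.0)   # warmer fjord water: 2x melt
flood = submarine_melt(q=800.0, TF=2.0)  # 8x discharge: 2x melt (cube root)
print(round(cold, 3), round(warm, 3), round(flood, 3))
```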
Challenges in Developing Models Describing Complex Soil Systems
NASA Astrophysics Data System (ADS)
Simunek, J.; Jacques, D.
2014-12-01
Quantitative mechanistic models that consider basic physical, mechanical, chemical, and biological processes have the potential to be powerful tools to integrate our understanding of complex soil systems, and the soil science community has often called for models that would include a large number of these diverse processes. However, once attempts have been made to develop such models, the response from the community has not always been overwhelming, especially once it is discovered that these models are consequently highly complex. They require a large number of parameters, not all of which can be easily (or at all) measured and/or identified, and which are often associated with large uncertainties, and they also require from their users deep knowledge of most or all of the implemented physical, mechanical, chemical and biological processes. Real, or perceived, complexity of these models then discourages users from using them even for relatively simple applications, for which they would be perfectly adequate. Due to the nonlinear nature and chemical/biological complexity of soil systems, it is also virtually impossible to verify these types of models analytically, raising doubts about their applicability. Code inter-comparison, then likely the most suitable method to assess code capabilities and model performance, requires the existence of multiple models with similar/overlapping capabilities, which may not always exist. It is thus a challenge not only to develop models describing complex soil systems, but also to persuade the soil science community to use them. As a result, complex quantitative mechanistic models remain an underutilized tool in soil science research. We will demonstrate some of the challenges discussed above using our own efforts in developing quantitative mechanistic models (such as HP1/2) for complex soil systems.
ERIC Educational Resources Information Center
Sawicki, Charles A.
1996-01-01
Describes a simple, inexpensive system that allows students to have hands-on contact with simple experiments involving forces generated by induced currents. Discusses the use of a dynamic force sensor in making quantitative measurements of the forces generated. (JRH)
Proposal for a quantitative index of flood disasters.
Feng, Lihua; Luo, Gaoyuan
2010-07-01
Drawing on the calculation of wind scale and earthquake magnitude, this paper develops a new quantitative method for measuring flood magnitude and disaster intensity. Flood magnitude is the quantitative index that describes the scale of a flood; flood disaster intensity is the quantitative index describing the losses caused. Both indices rest on definable concepts and are simple to apply, which lends them numerous theoretical advantages and key practical significance.
Kim, Hee Seok; Lee, Dong Soo
2017-11-01
SimpleBox is an important multimedia model used to estimate the predicted environmental concentration for screening-level exposure assessment. The main objectives were (i) to quantitatively assess how the magnitude and nature of the prediction bias of SimpleBox vary with the selection of the observed concentration data set used for optimization and (ii) to present the prediction performance of the optimized SimpleBox. The optimization was conducted using a total of 9604 observed multimedia data for 42 chemicals in four groups (i.e., polychlorinated dibenzo-p-dioxins/furans (PCDDs/Fs), polybrominated diphenyl ethers (PBDEs), phthalates, and polycyclic aromatic hydrocarbons (PAHs)). The model performance was assessed based on the magnitude and skewness of the prediction bias. Monitoring data selection, in terms of the number of data points and kinds of chemicals, plays a significant role in optimization of the model. The coverage of the physicochemical properties was found to be very important for reducing the prediction bias. This suggests that observed data should be selected such that the range of physicochemical properties (such as vapor pressure, octanol-water partition coefficient, octanol-air partition coefficient, and Henry's law constant) of the selected chemical groups is as wide as possible. With optimization, about 55%, 90%, and 98% of the total number of observed concentration ratios were predicted within factors of three, 10, and 30, respectively, with negligible skewness. Copyright © 2017 Elsevier Ltd. All rights reserved.
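The reported performance metric, the fraction of predictions falling within a factor n of the observations, is easy to state in code. The data below are made up for illustration:

```python
# The paper's performance metric in generic form: the fraction of predicted
# concentrations that fall within a factor n of the observations.
# The numbers here are made up for illustration.

def fraction_within_factor(predicted, observed, n):
    ok = sum(1 for p, o in zip(predicted, observed) if o / n <= p <= o * n)
    return ok / len(observed)

obs = [1.0, 10.0, 100.0, 1000.0]
pred = [2.0, 5.0, 400.0, 20000.0]   # last prediction is off by a factor 20
print(fraction_within_factor(pred, obs, 3))   # 0.5
print(fraction_within_factor(pred, obs, 10))  # 0.75
print(fraction_within_factor(pred, obs, 30))  # 1.0
```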
Huang, Ruili; Southall, Noel; Xia, Menghang; Cho, Ming-Hsuang; Jadhav, Ajit; Nguyen, Dac-Trung; Inglese, James; Tice, Raymond R.; Austin, Christopher P.
2009-01-01
In support of the U.S. Tox21 program, we have developed a simple and chemically intuitive model we call weighted feature significance (WFS) to predict the toxicological activity of compounds, based on the statistical enrichment of structural features in toxic compounds. We trained and tested the model on the following: (1) data from quantitative high–throughput screening cytotoxicity and caspase activation assays conducted at the National Institutes of Health Chemical Genomics Center, (2) data from Salmonella typhimurium reverse mutagenicity assays conducted by the U.S. National Toxicology Program, and (3) hepatotoxicity data published in the Registry of Toxic Effects of Chemical Substances. Enrichments of structural features in toxic compounds are evaluated for their statistical significance and compiled into a simple additive model of toxicity and then used to score new compounds for potential toxicity. The predictive power of the model for cytotoxicity was validated using an independent set of compounds from the U.S. Environmental Protection Agency tested also at the National Institutes of Health Chemical Genomics Center. We compared the performance of our WFS approach with classical classification methods such as Naive Bayesian clustering and support vector machines. In most test cases, WFS showed similar or slightly better predictive power, especially in the prediction of hepatotoxic compounds, where WFS appeared to have the best performance among the three methods. The new algorithm has the important advantages of simplicity, power, interpretability, and ease of implementation. PMID:19805409
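The enrichment-plus-additive-scoring idea can be sketched in a few lines. The scoring details below (pseudocounts, log-ratio weights) are simplified assumptions, not the published WFS statistics:

```python
import math

# Stripped-down sketch of the weighted-feature-significance idea (the
# scoring details are simplified assumptions, not the published WFS
# statistics): features enriched among toxic training compounds get
# log-ratio weights, and a compound's score is the sum of its feature weights.

toxic = [{"nitro", "epoxide"}, {"nitro", "amine"}, {"epoxide"}]
nontoxic = [{"amine"}, {"ether", "amine"}, {"ether"}]

def feature_weights(toxic, nontoxic, pseudo=0.5):
    feats = set().union(*toxic, *nontoxic)
    w = {}
    for f in feats:
        t = sum(f in c for c in toxic) + pseudo      # toxic occurrences
        n = sum(f in c for c in nontoxic) + pseudo   # nontoxic occurrences
        w[f] = math.log((t / len(toxic)) / (n / len(nontoxic)))
    return w

w = feature_weights(toxic, nontoxic)
score = lambda compound: sum(w.get(f, 0.0) for f in compound)
print(score({"nitro", "epoxide"}) > score({"ether", "amine"}))  # True
```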
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)
2000-01-01
Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that the root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. The RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on the local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
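The rain-rate dependence of sampling error described above is often summarized as a power law. The sketch below fits σ = a·R^b by least squares in log-log space; the power-law form and the synthetic numbers are illustrative assumptions for demonstration, not the paper's actual error model:

```python
import math


def fit_power_law(rates, rms_errors):
    """Least-squares fit of sigma = a * R**b in log-log space.

    Returns (a, b). The power-law form is an illustrative assumption,
    not the exact error model of the study.
    """
    xs = [math.log(r) for r in rates]
    ys = [math.log(s) for s in rms_errors]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b


# Synthetic grid-box data generated from sigma = 0.5 * R**0.7 exactly.
R = [0.5, 1.0, 2.0, 4.0, 8.0]
sigma = [0.5 * r ** 0.7 for r in R]
a, b = fit_power_law(R, sigma)
print(round(a, 3), round(b, 3))  # recovers a ~ 0.5, b ~ 0.7
```

With real rainfall data, the scatter around such a fit is itself informative: the paper's finding that satellite-inferred errors depend less on rain rate than predicted corresponds to a fitted exponent b smaller than the surface-data value.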
Masurel, R J; Gelineau, P; Lequeux, F; Cantournet, S; Montes, H
2017-12-27
In this paper we focus on the role of dynamical heterogeneities in the non-linear response of polymers in the glass transition domain. We start from a simple coarse-grained model that assumes a random distribution of the initial local relaxation times and that quantitatively describes the linear viscoelasticity of a polymer in the glass transition regime. We extend this model to non-linear mechanics assuming a local Eyring stress dependence of the relaxation times. Implementing the model in a finite element mechanics code, we derive the mechanical properties and the local mechanical fields at the beginning of the non-linear regime. The model predicts a narrowing of the distribution of relaxation times and the storage of a part of the mechanical energy (internal stress) transferred to the material during stretching in this temperature range. We show that the stress field is not spatially correlated under and after loading and follows a Gaussian distribution. In addition, the strain field exhibits shear bands, but the strain distribution is narrow. Hence, most of the mechanical quantities can be calculated analytically, in a very good approximation, with the simple assumption that the strain rate is constant.
Estimating tuberculosis incidence from primary survey data: a mathematical modeling approach
Chadha, V. K.; Laxminarayan, R.; Arinaminpathy, N.
2017-01-01
BACKGROUND: There is an urgent need for improved estimations of the burden of tuberculosis (TB). OBJECTIVE: To develop a new quantitative method based on mathematical modelling, and to demonstrate its application to TB in India. DESIGN: We developed a simple model of TB transmission dynamics to estimate the annual incidence of TB disease from the annual risk of tuberculous infection and the prevalence of smear-positive TB. We first compared model estimates of annual infections per smear-positive TB case against previous empirical estimates from China, Korea and the Philippines. We then applied the model to estimate TB incidence in India, stratified by urban and rural settings. RESULTS: Model estimates show agreement with previous empirical estimates. Applied to India, the model suggests an annual incidence of smear-positive TB of 89.8 per 100 000 population (95%CI 56.8–156.3). Results show differences in urban and rural TB: while an urban TB case infects more individuals per year, a rural TB case remains infectious for appreciably longer, suggesting the need for interventions tailored to these different settings. CONCLUSIONS: Simple models of TB transmission, in conjunction with the necessary data, can offer approaches to burden estimation that complement those currently being used. PMID:28284250
A Review of Hemolysis Prediction Models for Computational Fluid Dynamics.
Yu, Hai; Engel, Sebastian; Janiga, Gábor; Thévenin, Dominique
2017-07-01
Flow-induced hemolysis is a crucial issue for many biomedical applications; in particular, it is an essential issue for the development of blood-transporting devices such as left ventricular assist devices and other types of blood pumps. In order to estimate red blood cell (RBC) damage in blood flows, many models have been proposed in the past. Most models have been validated by their respective authors. However, the accuracy and the validity range of these models remain unclear. In this work, the most established hemolysis models compatible with computational fluid dynamics of full-scale devices are described and assessed by comparison against two selected reference experiments: a simple rheometric flow and a more complex hemodialytic flow through a needle. The quantitative comparisons show very large deviations in hemolysis predictions, depending on the model and model parameters. In light of the current results, two simple power-law models deliver the best compromise between computational efficiency and obtained accuracy. Finally, hemolysis has been computed in an axial blood pump. The reconstructed geometry of a HeartMate II shows that hemolysis occurs mainly at the tip and leading edge of the rotor blades, as well as at the leading edge of the diffusor vanes. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Polygonal current models for polycyclic aromatic hydrocarbons and graphene sheets of various shapes.
Pelloni, Stefano; Lazzeretti, Paolo
2018-01-05
Assuming that graphene is an "infinite alternant" polycyclic aromatic hydrocarbon resulting from tessellation of a surface by only six-membered carbon rings, planar fragments of various size and shape (hexagon, triangle, rectangle, and rhombus) have been considered to investigate their response to a magnetic field applied perpendicularly. Allowing for simple polygonal current models, the diatropicity of a series of polycyclic textures has been reliably determined by comparing quantitative indicators: the π-electron contribution to I^B, the magnetic field-induced current susceptibility of the peripheral circuit, to ξ∥ and to σ∥(CM) = -NICS∥(CM), respectively the out-of-plane components of the magnetizability tensor and of the magnetic shielding tensor at the center of mass. Extended numerical tests and the analysis based on the polygonal model demonstrate that (i) ξ∥ and σ∥(CM) yield inadequate and sometimes erroneous measures of diatropicity, as they are heavily flawed by spurious geometrical factors, (ii) I^B values computed by simple polygonal models are valid quantitative indicators of aromaticity on the magnetic criterion, preferable to others presently available whenever the current susceptibility cannot be calculated ab initio as a flux integral, (iii) the hexagonal shape is the most effective at maximizing the strength of π-electron currents over the molecular perimeter, (iv) the edge current strength of triangular and rhombic graphene fragments is usually much smaller than that of hexagonal ones, (v) doping by boron and nitrogen nuclei can regulate and even inhibit peripheral ring currents, and (vi) only for very large rectangular fragments can substantial current strengths be expected. © 2017 Wiley Periodicals, Inc.
CP violation in multibody B decays from QCD factorization
NASA Astrophysics Data System (ADS)
Klein, Rebecca; Mannel, Thomas; Virto, Javier; Vos, K. Keri
2017-10-01
We test a data-driven approach based on QCD factorization for charmless three-body B decays by confronting it with measurements of CP violation in B⁻ → π⁻π⁺π⁻. While some of the needed non-perturbative objects can be directly extracted from data, others can, so far, only be modelled. Although this approach is currently model dependent, we comment on the prospects for reducing this model dependence. While our model naturally accommodates the gross features of the Dalitz distribution, it cannot quantitatively explain the details seen in the current experimental data on local CP asymmetries. We comment on possible refinements of our simple model and conclude by briefly discussing a possible extension of the model to large invariant masses, where large local CP asymmetries have been measured.
Lin, Abraham; Jimenez, Jose; Derr, Julien; Vera, Pedro; Manapat, Michael L; Esvelt, Kevin M; Villanueva, Laura; Liu, David R; Chen, Irene A
2011-01-01
Conjugation is the main mode of horizontal gene transfer that spreads antibiotic resistance among bacteria. Strategies for inhibiting conjugation may be useful for preserving the effectiveness of antibiotics and preventing the emergence of bacterial strains with multiple resistances. Filamentous bacteriophages were first observed to inhibit conjugation several decades ago. Here we investigate the mechanism of inhibition and find that the primary effect on conjugation is occlusion of the conjugative pilus by phage particles. This interaction is mediated primarily by phage coat protein g3p, and exogenous addition of the soluble fragment of g3p inhibited conjugation at low nanomolar concentrations. Our data are quantitatively consistent with a simple model in which association between the pili and phage particles or g3p prevents transmission of an F plasmid encoding tetracycline resistance. We also observe a decrease in the donor ability of infected cells, which is quantitatively consistent with a reduction in pili elaboration. Since many antibiotic-resistance factors confer susceptibility to phage infection through expression of conjugative pili (the receptor for filamentous phage), these results suggest that phage may be a source of soluble proteins that slow the spread of antibiotic resistance genes.
Quantitative confirmation of diffusion-limited oxidation theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gillen, K.T.; Clough, R.L.
1990-01-01
Diffusion-limited (heterogeneous) oxidation effects are often important for studies of polymer degradation. Such effects are common in polymers subjected to ionizing radiation at relatively high dose rates. To better understand the underlying oxidation processes and to aid in the planning of accelerated aging studies, it would be desirable to be able to monitor and quantitatively understand these effects. In this paper, we briefly review a theoretical diffusion approach which derives model profiles for oxygen-surrounded sheets of material by combining oxygen permeation rates with kinetically based oxygen consumption expressions. The theory leads to a simple governing expression involving the oxygen consumption and permeation rates together with two model parameters α and β. To test the theory, gamma-initiated oxidation of a sheet of commercially formulated EPDM rubber was performed under conditions which led to diffusion-limited oxidation. Profile shapes from the theoretical treatments are shown to accurately fit experimentally derived oxidation profiles. In addition, direct measurements on the same EPDM material of the oxygen consumption and permeation rates, together with values of α and β derived from the fitting procedure, allow us to quantitatively confirm for the first time the governing theoretical relationship. 17 refs., 3 figs.
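The reaction-diffusion picture described above can be sketched numerically. The following is a minimal finite-difference relaxation for a steady-state profile D·C″ = α·C/(1 + β·C) across a sheet whose two surfaces are held at the ambient oxygen concentration; the saturating rate law and all parameter values here are illustrative assumptions in the spirit of the model, not the paper's fitted expressions:

```python
def oxidation_profile(D=1.0, alpha=50.0, beta=2.0, c_surface=1.0,
                      n=101, iterations=10000):
    """Relax D*C'' = alpha*C/(1 + beta*C) on a sheet of unit thickness.

    Both surfaces are fixed at c_surface; the interior is solved by
    Gauss-Seidel iteration. All values are illustrative assumptions,
    not the paper's fitted parameters.
    """
    c = [c_surface] * n          # initial guess: fully oxygenated
    h = 1.0 / (n - 1)            # grid spacing (sheet thickness = 1)
    for _ in range(iterations):
        for i in range(1, n - 1):
            reaction = alpha * c[i] / (1.0 + beta * c[i])
            c[i] = 0.5 * (c[i - 1] + c[i + 1] - h * h * reaction / D)
            if c[i] < 0.0:       # concentrations cannot go negative
                c[i] = 0.0
    return c


profile = oxidation_profile()
# Oxygen (and hence oxidation) is depleted toward the sheet's midplane,
# which is the heterogeneous-oxidation signature the paper measures.
print(profile[0], round(profile[len(profile) // 2], 4))
```

When the consumption term dominates the permeation term, the resulting U-shaped profile is exactly the kind of edge-oxidized, center-protected shape that the fitted α and β describe.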
The impact of temporal sampling resolution on parameter inference for biological transport models.
Harrison, Jonathan U; Baker, Ruth E
2018-06-25
Imaging data has become an essential tool to explore key biological questions at various scales, for example the motile behaviour of bacteria or the transport of mRNA, and it has the potential to transform our understanding of important transport mechanisms. Often these imaging studies require us to compare biological species or mutants, and to do this we need to quantitatively characterise their behaviour. Mathematical models offer a quantitative description of a system that enables us to perform this comparison, but to relate mechanistic mathematical models to imaging data, we need to estimate their parameters. In this work we study how collecting data at different temporal resolutions impacts our ability to infer parameters of biological transport models, performing exact inference for simple velocity jump process models in a Bayesian framework. The question of how best to choose the frequency with which data is collected is prominent in a host of studies because the majority of imaging technologies place constraints on the frequency with which images can be taken, and the discrete nature of observations can introduce errors into parameter estimates. In this work, we mitigate such errors by formulating the velocity jump process model within a hidden states framework. This allows us to obtain estimates of the reorientation rate and noise amplitude for noisy observations of a simple velocity jump process. We demonstrate the sensitivity of these estimates to temporal variations in the sampling resolution and extent of measurement noise. We use our methodology to provide experimental guidelines for researchers aiming to characterise motile behaviour that can be described by a velocity jump process. In particular, we consider how experimental constraints resulting in a trade-off between temporal sampling resolution and observation noise may affect parameter estimates.
Finally, we demonstrate the robustness of our methodology to model misspecification, and then apply our inference framework to a dataset that was generated with the aim of understanding the localization of RNA-protein complexes.
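A velocity jump process of the kind studied above is straightforward to simulate, and doing so illustrates why temporal sampling resolution matters: a naive reversal-counting estimate of the reorientation rate degrades as the sampling interval grows. This is an illustrative sketch with assumed parameter values; the paper itself performs exact Bayesian inference in a hidden-states framework rather than using this naive estimator:

```python
import random


def simulate_vjp(rate, speed, t_end, dt_obs, seed=0):
    """Simulate a 1D velocity jump process and record positions.

    The particle moves at +/- speed and reverses direction at Poisson
    rate `rate`; positions are observed every dt_obs time units.
    Parameter values here are assumptions chosen for illustration.
    """
    rng = random.Random(seed)
    t, x, v = 0.0, 0.0, speed
    positions = []
    next_flip = rng.expovariate(rate)
    for i in range(int(t_end / dt_obs) + 1):
        t_obs = i * dt_obs
        while next_flip < t_obs:          # advance through reversals
            x += v * (next_flip - t)
            t = next_flip
            v = -v
            next_flip = t + rng.expovariate(rate)
        x += v * (t_obs - t)
        t = t_obs
        positions.append(x)
    return positions


def naive_rate_estimate(positions, dt_obs):
    """Count apparent direction reversals between successive frames."""
    steps = [b - a for a, b in zip(positions, positions[1:])]
    flips = sum(1 for s1, s2 in zip(steps, steps[1:]) if s1 * s2 < 0)
    return flips / (len(steps) * dt_obs)


fine = naive_rate_estimate(simulate_vjp(1.0, 1.0, 500.0, 0.01), 0.01)
coarse = naive_rate_estimate(simulate_vjp(1.0, 1.0, 500.0, 1.0), 1.0)
print(fine, coarse)  # coarse sampling misses reversals, biasing the rate down
```

At a sampling interval much shorter than the mean run time the naive count is close to the true rate, while at an interval comparable to the run time many reversals are hidden within a single frame, which is precisely the discretization error the hidden-states formulation is designed to absorb.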
NASA Astrophysics Data System (ADS)
Bergasa-Caceres, Fernando; Rabitz, Herschel A.
2013-06-01
A model of protein folding kinetics is applied to study the effects of macromolecular crowding on protein folding rate and stability. Macromolecular crowding is found to promote a decrease of the entropic cost of folding of proteins that produces an increase of both the stability and the folding rate. The acceleration of the folding rate due to macromolecular crowding is shown to be a topology-dependent effect. The model is applied to the folding dynamics of the murine prion protein (121-231). The differential effect of macromolecular crowding as a function of protein topology suffices to make non-native configurations relatively more accessible.
The big challenges in modeling human and environmental well-being.
Tuljapurkar, Shripad
2016-01-01
This article is a selective review of quantitative research, historical and prospective, that is needed to inform sustainable development policy. I start with a simple framework to highlight how demography and productivity shape human well-being. I use that to discuss three sets of issues and corresponding challenges to modeling: first, population prehistory and early human development and their implications for the future; second, the multiple distinct dimensions of human and environmental well-being and the meaning of sustainability; and, third, inequality as a phenomenon triggered by development and models to examine changing inequality and its consequences. I conclude with a few words about other important factors: political, institutional, and cultural.
Simple mathematical law benchmarks human confrontations
Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto
2013-01-01
Many high-profile societal problems involve an individual or group repeatedly attacking another – from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a ‘lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds. PMID:24322528
Canard, Gabriel; Koeller, Sylvain; Bernardinelli, Gérald; Piguet, Claude
2008-01-23
The beneficial entropic effect, which may be expected from the connection of three tridentate binding units to a strain-free covalent tripod for complexing nine-coordinate cations (Mz+ = Ca2+, La3+, Eu3+, Lu3+), is quantitatively analyzed by using a simple thermodynamic additive model. The switch from pure intermolecular binding processes, characterizing the formation of the triple-helical complexes [M(L2)3]z+, to a combination of inter- and intramolecular complexation events in [M(L8)]z+ shows that the ideal structural fit observed in [M(L8)]z+ indeed masks large energetic constraints. This limitation is evidenced by the faint effective concentrations, ceff, which control the intramolecular ring-closing reactions operating in [M(L8)]z+. This predominance of the thermodynamic approach over the usual structural analysis agrees with the hierarchical relationships linking energetics and structures. Its simple estimation by using a single microscopic parameter, ceff, opens novel perspectives for the molecular tuning of specific receptors for the recognition of large cations, a crucial point for the programming of heterometallic f-f complexes under thermodynamic control.
NASA Astrophysics Data System (ADS)
Wang, Quanzeng; Cheng, Wei-Chung; Suresh, Nitin; Hua, Hong
2016-05-01
With improved diagnostic capabilities and complex optical designs, endoscopic technologies are advancing. As one of several important optical performance characteristics, geometric distortion can negatively affect size estimation and feature identification during diagnosis. Therefore, a quantitative and simple distortion evaluation method is needed by both the endoscopic industry and medical device regulatory agencies; however, no such method is available yet. While image correction techniques are rather mature, they depend heavily on computational power to process multidimensional image data based on complex mathematical models, which makes them difficult to interpret. Some commonly used distortion evaluation methods, such as picture height distortion (D_PH) or radial distortion (D_RAD), are either too simple to describe the distortion accurately or subject to the error of deriving a reference image. We developed the basic local magnification (M_L) method to evaluate endoscope distortion. Based on this method, we also developed ways to calculate D_PH and D_RAD. The method overcomes the aforementioned limitations, has clear physical meaning over the whole field of view, and can facilitate lesion size estimation during diagnosis. Most importantly, the method can help bring endoscopic technology to market and could potentially be adopted in an international endoscope standard.
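The local magnification idea can be illustrated with a toy calculation: measure the pointwise magnification across the field from a grid target, then express distortion as the percentage deviation from the central magnification. The measurement values below and the choice of the first entry as the image center are hypothetical; this is only a sketch of the general concept, not the authors' published procedure in detail:

```python
def local_magnification(image_heights, object_heights):
    """Pointwise magnification M_L = image size / object size."""
    return [i / o for i, o in zip(image_heights, object_heights)]


def relative_distortion_percent(ml_values):
    """Distortion of each field point relative to the image center.

    The center is assumed (hypothetically) to be the first entry.
    Negative values indicate magnification falling off toward the
    edge, i.e. barrel-type distortion.
    """
    m0 = ml_values[0]
    return [100.0 * (m / m0 - 1.0) for m in ml_values]


# Hypothetical grid-target measurements from center outward (mm).
obj = [1.0, 1.0, 1.0, 1.0]
img = [2.0, 1.9, 1.7, 1.4]   # magnification falls off toward the edge
ml = local_magnification(img, obj)
print([round(d, 1) for d in relative_distortion_percent(ml)])
```

Because the magnification is evaluated pointwise rather than from a single global number, a map like this retains physical meaning across the whole field of view, which is the property the abstract emphasizes.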
Comment on 'Entropy lowering in ion-atom collisions'
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ostrovsky, V. N.
2006-01-15
The recent experimental result by Nguyen et al. [Phys. Rev. A 71, 062714 (2005)] on the ratio of cross sections for the charge exchange processes Rb⁺ + Rb(5s) → Rb(5p) + Rb⁺ and Rb⁺ + Rb(5p) → Rb(5s) + Rb⁺ is quantitatively derived from simple considerations within the general framework of quasimolecular theory. Contrary to expectations, the applicability of the Demkov model for charge exchange with a small energy defect is not shattered.
Exchange magnon induced resistance asymmetry in permalloy spin-Hall oscillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langenfeld, S.; Walter Schottky Institut and Physik-Department, Technische Universität München, 85748 Garching; Tshitoyan, V.
2016-05-09
We investigate magnetization dynamics in a spin-Hall oscillator using a direct current measurement as well as conventional microwave spectrum analysis. When the current applies an anti-damping spin-transfer torque, we observe a change in resistance which we ascribe mainly to the excitation of incoherent exchange magnons. A simple model is developed based on the reduction of the effective saturation magnetization, quantitatively explaining the data. The observed phenomena highlight the importance of exchange magnons on the operation of spin-Hall oscillators.
Bird song: in vivo, in vitro, in silico
NASA Astrophysics Data System (ADS)
Mukherjee, Aryesh; Mandre, Shreyas; Mahadevan, Lakshminarayan
2010-11-01
Bird song, long an inspiration for artists, writers, and poets, also poses challenges for scientists interested in dissecting the mechanisms underlying the neural, motor, learning, and behavioral systems behind the beak and brain, as a way to recreate and synthesize it. We use a combination of quantitative visualization experiments with physical models and computational theories to understand the simplest aspects of these complex musical boxes, focusing on using controllable elastohydrodynamic interactions to mimic aural gestures and simple songs.
Modeling the electrophoretic separation of short biological molecules in nanofluidic devices
NASA Astrophysics Data System (ADS)
Fayad, Ghassan; Hadjiconstantinou, Nicolas
2010-11-01
Via comparisons with Brownian Dynamics simulations of the worm-like-chain and rigid-rod models, and the experimental results of Fu et al. [Phys. Rev. Lett., 97, 018103 (2006)], we demonstrate that, for the purposes of low-to-medium field electrophoretic separation in periodic nanofilter arrays, sufficiently short biomolecules can be modeled as point particles, with their orientational degrees of freedom accounted for using partition coefficients. This observation is used in the present work to build a particularly simple and efficient Brownian Dynamics simulation method. Particular attention is paid to the model's ability to quantitatively capture experimental results using realistic values of all physical parameters. A variance-reduction method is developed for efficiently simulating arbitrarily small forcing electric fields.
The heliocentric evolution of cometary infrared spectra - Results from an organic grain model
NASA Technical Reports Server (NTRS)
Chyba, Christopher F.; Sagan, Carl; Mumma, Michael J.
1989-01-01
An emission feature peaking near 3.4 microns that is typical of C-H stretching in hydrocarbons and which fits a simple, two-component thermal emission model for dust in the cometary coma, has been noted in observations of Comets Halley and Wilson. A noteworthy consequence of this modeling is that, at about 1 AU, emission features at wavelengths longer than 3.4 microns come to be 'diluted' by continuum emission. A quantitative development of the model shows it to agree with observational data for Comet Halley for certain, plausible values of the optical constants; the observed heliocentric evolution of the 3.4-micron feature thereby furnishes information on the composition of the comet's organic grains.
Wang, Yi-Shan; Potts, Jonathan R
2017-03-07
Recent advances in animal tracking have allowed us to uncover the drivers of movement in unprecedented detail. This has enabled modellers to construct ever more realistic models of animal movement, which aid in uncovering detailed patterns of space use in animal populations. Partial differential equations (PDEs) provide a popular tool for mathematically analysing such models. However, their construction often relies on simplifying assumptions which may greatly affect the model outcomes. Here, we analyse the effect of various PDE approximations on the analysis of some simple movement models, including a biased random walk, central-place foraging processes and movement in heterogeneous landscapes. Perhaps the most commonly-used PDE method dates back to a seminal paper of Patlak from 1953. However, our results show that this can be a very poor approximation in even quite simple models. On the other hand, more recent methods, based on transport equation formalisms, can provide more accurate results, as long as the kernel describing the animal's movement is sufficiently smooth. When the movement kernel is not smooth, we show that both the older and newer methods can lead to quantitatively misleading results. Our detailed analysis will aid future researchers in the appropriate choice of PDE approximation for analysing models of animal movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Guha, Anirban
2017-11-01
Theoretical studies on linear shear instabilities, as well as different kinds of wave interactions, often use simple velocity and/or density profiles (e.g., constant or piecewise linear) to obtain good qualitative and quantitative predictions of the initial disturbances. Moreover, such simple profiles provide a minimal model for obtaining a mechanistic understanding of shear instabilities. Here we have extended this minimal paradigm into the nonlinear domain using a vortex method. Making use of the unsteady Bernoulli equation in the presence of linear shear, and extending the Birkhoff-Rott equation to multiple interfaces, we have numerically simulated the interaction between multiple fully nonlinear waves. This methodology is quite general, and has allowed us to simulate diverse problems that can be essentially reduced to the minimal system of interacting waves, e.g., spilling and plunging breakers, stratified shear instabilities (Holmboe, Taylor-Caulfield, stratified Rayleigh), jet flows, and even wave-topography interaction problems such as Bragg resonance. We found that the minimal models capture key nonlinear features (e.g., wave-breaking features such as cusp formation and roll-ups) which are observed in experiments and/or extensive simulations with smooth, realistic profiles.
Quantitative proteomic analysis reveals a simple strategy of global resource allocation in bacteria
Hui, Sheng; Silverman, Josh M; Chen, Stephen S; Erickson, David W; Basan, Markus; Wang, Jilong; Hwa, Terence; Williamson, James R
2015-01-01
A central aim of cell biology is to understand the strategy of gene expression in response to the environment. Here, we study the gene expression response to metabolic challenges in exponentially growing Escherichia coli using mass spectrometry. Despite enormous complexity in the details of the underlying regulatory network, we find that the proteome partitions into several coarse-grained sectors, with each sector's total mass abundance exhibiting positive or negative linear relations with the growth rate. The growth rate-dependent components of the proteome fractions comprise about half of the proteome by mass, and their mutual dependencies can be characterized by a simple flux model involving only two effective parameters. The success and apparent generality of this model arise from tight coordination between proteome partition and metabolism, suggesting a principle for resource allocation in the proteome economy of the cell. This strategy of global gene regulation should serve as a basis for future studies on gene expression and for constructing synthetic biological circuits. Coarse graining may be an effective approach to derive predictive phenomenological models for other ‘omics’ studies. PMID:25678603
Supernova shock breakout through a wind
NASA Astrophysics Data System (ADS)
Balberg, Shmuel; Loeb, Abraham
2011-06-01
The breakout of a supernova shock wave through the progenitor star's outer envelope is expected to appear as an X-ray flash. However, if the supernova explodes inside an optically thick wind, the breakout flash is delayed. We present a simple model for estimating the conditions at shock breakout in a wind based on the general observable quantities in the X-ray flash light curve: the total energy E_X and the diffusion time after the peak, t_diff. We base the derivation on the self-similar solution for the forward-reverse shock structure expected for ejecta plowing through a pre-existing wind at large distances from the progenitor's surface. We find simple quantitative relations for the shock radius and velocity at breakout. By relating the ejecta density profile to the pre-explosion structure of the progenitor, the model can also be extended to constrain the combination of explosion energy and ejecta mass. For the observed case of XRO 080109/SN 2008D, our model provides reasonable constraints on the breakout radius, explosion energy, and ejecta mass, and predicts a high shock velocity which naturally accounts for the observed non-thermal spectrum.
Filtration Isolation of Nucleic Acids: A Simple and Rapid DNA Extraction Method.
McFall, Sally M; Neto, Mário F; Reed, Jennifer L; Wagner, Robin L
2016-08-06
FINA, filtration isolation of nucleic acids, is a novel extraction method which utilizes vertical filtration via a separation membrane and absorbent pad to extract cellular DNA from whole blood in less than 2 min. The blood specimen is treated with detergent, mixed briefly and applied by pipet to the separation membrane. The lysate wicks into the blotting pad due to capillary action, capturing the genomic DNA on the surface of the separation membrane. The extracted DNA is retained on the membrane during a simple wash step wherein PCR inhibitors are wicked into the absorbent blotting pad. The membrane containing the entrapped DNA is then added to the PCR reaction without further purification. This simple method does not require laboratory equipment and can be easily implemented with inexpensive laboratory supplies. Here we describe a protocol for highly sensitive detection and quantitation of HIV-1 proviral DNA from 100 µl whole blood as a model for early infant diagnosis of HIV that could readily be adapted to other genetic targets.
Robust LOD scores for variance component-based linkage analysis.
Blangero, J; Williams, J T; Almasy, L
2000-01-01
The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.
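The shape of the robust correction described above can be sketched in code. The paper's exact analytical correction constant is not reproduced in the abstract, so the shrinkage factor below (and the tuning constant `k`) is a hypothetical placeholder: it merely illustrates a correction that leaves a normally distributed trait's lod score unchanged and shrinks it as excess kurtosis and heritability grow.

```python
def excess_kurtosis(xs):
    # Sample excess kurtosis: fourth central moment over squared variance,
    # minus 3 so that a normal distribution scores 0.
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / var ** 2 - 3.0

def robust_lod(naive_lod, kurt, heritability, k=0.5):
    # Hypothetical placeholder correction, NOT the paper's derived formula:
    # shrink the naive lod score as (positive) excess kurtosis grows,
    # leaving a normal trait (kurt = 0) untouched.
    c = 1.0 + k * max(kurt, 0.0) * heritability
    return naive_lod / c
```

A leptokurtic trait (kurt > 0) thus yields a smaller, more conservative lod score, which is the qualitative effect needed to tame the inflated Type I error.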
Multiwavelength UV/visible spectroscopy for the quantitative investigation of platelet quality
NASA Astrophysics Data System (ADS)
Mattley, Yvette D.; Leparc, German F.; Potter, Robert L.; Garcia-Rubio, Luis H.
1998-04-01
The quality of platelets transfused is vital to the effectiveness of the transfusion. Freshly prepared, discoid platelets are the most effective treatment for preventing spontaneous hemorrhage or for stopping an abnormal bleeding event. Current methodology for the routine testing of platelet quality involves random pH testing of platelet rich plasma and visual inspection of platelet rich plasma for a swirling pattern indicative of the discoid shape of the cells. The drawback to these methods is that they do not provide a quantitative and objective assay for platelet functionality that can be used on each platelet unit prior to transfusion. As part of a larger project aimed at characterizing whole blood and blood components with multiwavelength UV/vis spectroscopy, isolated platelets and platelets in platelet rich plasma have been investigated. Models based on Mie theory have been developed which allow for the extraction of quantitative information on platelet size, number and quality from multiwavelength UV/vis spectra. These models have been used to quantify changes in platelet rich plasma during storage. The overall goal of this work is to develop a simple, rapid quantitative assay for platelet quality that can be used prior to platelet transfusion to ensure the effectiveness of the treatment. As a result of this work, the optical properties for isolated platelets, platelet rich plasma and leukodepleted platelet rich plasma have been determined.
A general model for the scaling of offspring size and adult size.
Falster, Daniel S; Moles, Angela T; Westoby, Mark
2008-09-01
Understanding evolutionary coordination among different life-history traits is a key challenge for ecology and evolution. Here we develop a general quantitative model predicting how offspring size should scale with adult size by combining a simple model for life-history evolution with a frequency-dependent survivorship model. The key innovation is that larger offspring are afforded three different advantages during ontogeny: higher survivorship per time, a shortened juvenile phase, and advantage during size-competitive growth. In this model, it turns out that size-asymmetric advantage during competition is the factor driving evolution toward larger offspring sizes. For simplified and limiting cases, the model is shown to produce the same predictions as the previously existing theory on which it is founded. The explicit treatment of different survival advantages has biologically important new effects, mainly through an interaction between total maternal investment in reproduction and the duration of competitive growth. This goes on to explain alternative allometries between log offspring size and log adult size, as observed in mammals (slope = 0.95) and plants (slope = 0.54). Further, it suggests how these differences relate quantitatively to specific biological processes during recruitment. In these ways, the model generalizes across previous theory and provides explanations for some differences between major taxa.
A Bayesian approach to reliability and confidence
NASA Technical Reports Server (NTRS)
Barnes, Ron
1989-01-01
The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted along with the development of the Extended Orbiter Duration--Weakest Link study which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor, Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component and hence by a simple extension for a system of components in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property since the reliability usually decreases markedly as the parts degrade over time. While they have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case needs to be attacked by the computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.
Quantitative Species Measurements In Microgravity Combustion Flames
NASA Technical Reports Server (NTRS)
Chen, Shin-Juh; Pilgrim, Jeffrey S.; Silver, Joel A.; Piltch, Nancy D.
2003-01-01
The capability of models and theories to accurately predict and describe the behavior of low gravity flames can only be verified by quantitative measurements. Although video imaging, simple temperature measurements, and velocimetry methods have provided useful information in many cases, there is still a need for quantitative species measurements. Over the past decade, we have been developing high sensitivity optical absorption techniques to permit in situ, non-intrusive, absolute concentration measurements for both major and minor flame species using diode lasers. This work has helped to establish wavelength modulation spectroscopy (WMS) as an important method for species detection within the restrictions of microgravity-based measurements. More recently, in collaboration with Prof. Dahm at the University of Michigan, a new methodology combining computed flame libraries with a single experimental measurement has allowed us to determine the concentration profiles for all species in a flame. This method, termed ITAC (Iterative Temperature with Assumed Chemistry), was demonstrated for a simple laminar nonpremixed methane-air flame at both 1-g and 0-g in a vortex ring flame. In this paper, we report additional normal and microgravity experiments which further confirm the usefulness of this approach. We also present the development of a new type of laser. This is an external cavity diode laser (ECDL) which has the unique capability of high frequency modulation as well as a very wide tuning range. This will permit the detection of multiple species with one laser while using WMS detection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zabler, S.; Rack, T.; Nelson, K.
2010-10-15
Quantitative investigation of micrometer and submicrometer gaps between joining metal surfaces is applied to conical plug-socket connections in dental titanium implants. Microgaps of widths well beyond the resolving power of industrial x-ray systems are imaged by synchrotron phase contrast radiography. Furthermore, by using an analytical model for the relatively simple sample geometry and applying it to numerical forward simulations of the optical Fresnel propagation, we show that quantitative measurements of the microgap width down to 0.1 µm are possible. Image data recorded at the BAMline (BESSY-II light source, Germany) are presented, with the resolving power of the imaging system being 4 µm in absorption mode and ≈14 µm in phase contrast mode (z_2 = 0.74 m). Thus, phase contrast radiography, combined with numerical forward simulations, is capable of measuring the widths of gaps that are two orders of magnitude thinner than the conventional detection limit.
Govyadinov, Alexander A; Amenabar, Iban; Huth, Florian; Carney, P Scott; Hillenbrand, Rainer
2013-05-02
Scattering-type scanning near-field optical microscopy (s-SNOM) and Fourier transform infrared nanospectroscopy (nano-FTIR) are emerging tools for nanoscale chemical material identification. Here, we push s-SNOM and nano-FTIR one important step further by enabling them to quantitatively measure local dielectric constants and infrared absorption. Our technique is based on an analytical model, which allows for a simple inversion of the near-field scattering problem. It yields the dielectric permittivity and absorption of samples with 2 orders of magnitude improved spatial resolution compared to far-field measurements and is applicable to a large class of samples including polymers and biological matter. We verify the capabilities by determining the local dielectric permittivity of a PMMA film from nano-FTIR measurements, which is in excellent agreement with far-field ellipsometric data. We further obtain local infrared absorption spectra with unprecedented accuracy in peak position and shape, which is the key to quantitative chemometrics on the nanometer scale.
Crustal Gravitational Potential Energy Change and Subduction Earthquakes
NASA Astrophysics Data System (ADS)
Zhu, P. P.
2017-05-01
Crustal gravitational potential energy (GPE) change induced by earthquakes is an important subject in geophysics and seismology. For the past forty years, research on this subject remained at the stage of qualitative estimates. In recent years, the 3D dynamic faulting theory provided a quantitative solution. The theory deduced a quantitative formula for the crustal GPE change using tensor analysis under the principal stress system. This formula contains only the vertical principal stress, rupture area, slip, dip, and rake; it does not include the horizontal principal stresses. It involves only simple mathematical operations, with no complicated surface or volume integrals. Moreover, the vertical (up or down) displacement of the hanging wall has a very simple expression containing only slip, dip, and rake. These results are significant for investigating crustal GPE change. Since the vertical principal stress is commonly related to the gravitational field, substituting the relationship between the vertical principal stress and gravitational force into the above formula yields an alternative formula for the crustal GPE change. The alternative formula indicates that even without in situ borehole stress measurements, the crustal GPE change can still be calculated quantitatively. The 3D dynamic faulting theory can be used for research on continental fault earthquakes; it can also be applied to subduction earthquakes between oceanic and continental plates. Subduction earthquakes fall into three types: (a) crust only on the vertical up side of the rupture area; (b) crust and seawater both on the vertical up side of the rupture area; (c) crust only on the vertical up side of part of the rupture area, with crust and seawater both on the vertical up side of the remainder. For each type we provide a quantitative formula for the crustal GPE change.
We also establish a simplified model (called the CRW Model): for Type B and Type C subduction earthquakes, if the average seawater depth on the vertical up side of the rupture area is less than a tenth of the hypocenter depth, the seawater above the continental plate is approximated as replaced by the upper crustal material of the continental plate. The formula for quantitatively calculating the crustal GPE change is also provided for this model. Finally, for the 16 September 2015 Mw 8.3 Illapel, Chile earthquake, we apply the CRW Model and obtain the following results: the crustal GPE change is 1.8 × 10^19 J, and the hanging wall moves up 1.9 m vertically with respect to the footwall. We believe this paper might be the first report of a quantitative solution of the crustal GPE change for this subduction earthquake; our results and related method will be helpful in research into earthquakes in the Peru-Chile subduction zone and the Andean orogeny. In short, this study expounds a new method for quantitatively determining the crustal GPE change caused by subduction earthquakes, different from other existing methods.
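The geometry behind the hanging-wall displacement can be sketched: the vertical component of a slip vector is the standard relation h = s·sin(dip)·sin(rake), consistent with the abstract's statement that the height depends only on slip, dip, and rake. The GPE formula below is a schematic product of vertical stress, area, and uplift, not the paper's exact tensor-derived expression, so treat it as illustrative only.

```python
import math

def hanging_wall_uplift(slip, dip_deg, rake_deg):
    # Vertical component of the slip vector: h = s * sin(dip) * sin(rake).
    # Pure strike-slip (rake = 0) gives zero vertical displacement.
    return slip * math.sin(math.radians(dip_deg)) * math.sin(math.radians(rake_deg))

def gpe_change(sigma_v, area, slip, dip_deg, rake_deg):
    # Schematic form Delta_E ~ sigma_v * A * h (Pa * m^2 * m = J).
    # The paper's derived formula is not reproduced in the abstract;
    # this product form only illustrates which quantities enter it.
    return sigma_v * area * hanging_wall_uplift(slip, dip_deg, rake_deg)
```

For a pure dip-slip rupture on a vertical fault (dip = rake = 90°), the uplift equals the slip, as expected.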
Field Assessment of Energy Audit Tools for Retrofit Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, J.; Bohac, D.; Nelson, C.
2013-07-01
This project focused on the use of home energy ratings as a tool to promote energy retrofits in existing homes. A home energy rating provides a quantitative appraisal of a home’s energy performance, usually compared to a benchmark such as the average energy use of similar homes in the same region. Rating systems based on energy performance models, the focus of this report, can establish a home’s achievable energy efficiency potential and provide a quantitative assessment of energy savings after retrofits are completed, although their accuracy needs to be verified by actual measurement or billing data. Ratings can also show homeowners where they stand compared to their neighbors, thus creating social pressure to conform to or surpass others. This project field-tested three different building performance models of varying complexity, in order to assess their value as rating systems in the context of a residential retrofit program: Home Energy Score, SIMPLE, and REM/Rate.
NASA Technical Reports Server (NTRS)
Fu, Lee-Lueng; Vazquez, Jorge; Perigaud, Claire
1991-01-01
Free, equatorially trapped sinusoidal wave solutions to a linear model on an equatorial beta plane are used to fit the Geosat altimetric sea level observations in the tropical Pacific Ocean. The Kalman filter technique is used to estimate the wave amplitude and phase from the data. The estimation is performed at each time step by combining the model forecast with the observation in an optimal fashion utilizing the respective error covariances. The model error covariance is determined such that the performance of the model forecast is optimized. It is found that the dominant observed features can be described qualitatively by basin-scale Kelvin waves and the first meridional-mode Rossby waves. Quantitatively, however, only 23 percent of the signal variance can be accounted for by this simple model.
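The optimal blending of model forecast and observation described above is the standard Kalman update. A minimal scalar version (not the study's full multivariate wave-amplitude filter, whose state vector and covariances are not given in the abstract) looks like:

```python
def kalman_update(x_pred, P_pred, z, R, H=1.0):
    # Standard scalar Kalman update: combine the model forecast x_pred
    # (error variance P_pred) with observation z (error variance R),
    # weighting each by its uncertainty.
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)   # updated state estimate
    P_new = (1.0 - K * H) * P_pred          # updated error variance
    return x_new, P_new
```

With equal forecast and observation variances the update lands halfway between the two, and the posterior variance is halved; tuning the model error covariance, as the abstract notes, is what controls this balance.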
Mapping axonal density and average diameter using non-monotonic time-dependent gradient-echo MRI
NASA Astrophysics Data System (ADS)
Nunes, Daniel; Cruz, Tomás L.; Jespersen, Sune N.; Shemesh, Noam
2017-04-01
White Matter (WM) microstructures, such as axonal density and average diameter, are crucial to the normal function of the Central Nervous System (CNS) as they are closely related with axonal conduction velocities. Conversely, disruptions of these microstructural features may result in severe neurological deficits, suggesting that their noninvasive mapping could be an important step towards diagnosing and following pathophysiology. Whereas diffusion based MRI methods have been proposed to map these features, they typically entail the application of powerful gradients, which are rarely available in the clinic, or extremely long acquisition schemes to extract information from parameter-intensive models. In this study, we suggest that simple and time-efficient multi-gradient-echo (MGE) MRI can be used to extract the axon density from susceptibility-driven non-monotonic decay in the time-dependent signal. We show, both theoretically and with simulations, that a non-monotonic signal decay will occur for multi-compartmental microstructures - such as axons and extra-axonal spaces, which were here used as a simple model for the microstructure - and that, for axons parallel to the main magnetic field, the axonal density can be extracted. We then experimentally demonstrate in ex-vivo rat spinal cords that their different tracts - characterized by different microstructures - can be clearly contrasted using the MGE-derived maps. When the quantitative results are compared against ground-truth histology, they reflect the axonal fraction (though with a bias, as evident from Bland-Altman analysis). As well, the extra-axonal fraction can be estimated. The results suggest that our model is oversimplified, yet they also evidence the potential and usefulness of the approach for mapping underlying microstructures using a simple and time-efficient MRI sequence.
We further show that a simple general-linear-model can predict the average axonal diameters from the four model parameters, and map these average axonal diameters in the spinal cords. While clearly further modelling and theoretical developments are necessary, we conclude that salient WM microstructural features can be extracted from simple, SNR-efficient multi-gradient echo MRI, and that this paves the way towards easier estimation of WM microstructure in vivo.
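The susceptibility-driven non-monotonic decay can be illustrated with a minimal two-compartment signal model. All parameter values below are illustrative assumptions, not the paper's fitted values: each pool decays with its own T2*, and the axonal pool carries a frequency offset, so the summed magnitude beats as the pools dephase and rephase.

```python
import cmath

def mge_signal(t, f_axon=0.5, f_extra=0.5, t2_axon=0.05, t2_extra=0.06,
               dfreq=200.0):
    # Two-compartment multi-gradient-echo signal magnitude at echo time t (s).
    # Each compartment decays with its own T2* (s); the axonal pool also
    # accrues a susceptibility-driven frequency offset dfreq (rad/s), which
    # makes |S(t)| non-monotonic. Illustrative parameters, not fitted values.
    s = (f_axon * cmath.exp(-t / t2_axon + 1j * dfreq * t)
         + f_extra * cmath.exp(-t / t2_extra))
    return abs(s)
```

Near t ≈ π/dfreq the pools are in antiphase and the magnitude dips, then partly recovers near t ≈ 2π/dfreq; the depth of that dip is what carries the compartment-fraction information in the approach described above.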
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pincus, David; Ryan, Christopher J.; Smith, Richard D.
2013-03-12
Cell signaling systems transmit information by post-translationally modifying signaling proteins, often via phosphorylation. While thousands of sites of phosphorylation have been identified in proteomic studies, the vast majority of sites have no known function. Assigning functional roles to the catalog of uncharacterized phosphorylation sites is a key research challenge. Here we present a general approach to address this challenge and apply it to a prototypical signaling pathway, the pheromone response pathway in Saccharomyces cerevisiae. The pheromone pathway includes a mitogen activated protein kinase (MAPK) cascade activated by a G-protein coupled receptor (GPCR). We used mass spectrometry-based proteomics to identify sites whose phosphorylation changed when the system was active, and evolutionary conservation to assign priority to a list of candidate MAPK regulatory sites. We made targeted alterations in those sites, and measured the effects of the mutations on pheromone pathway output in single cells. Our work identified six new sites that quantitatively tuned system output. We developed simple computational models to find system architectures that recapitulated the quantitative phenotypes of the mutants. Our results identify a number of regulated phosphorylation events that contribute to adjust the input-output relationship of this model eukaryotic signaling system. We believe this combined approach constitutes a general means not only to reveal modification sites required to turn a pathway on and off, but also those required for more subtle quantitative effects that tune pathway output. Our results further suggest that relatively small quantitative influences from individual regulatory phosphorylation events endow signaling systems with plasticity that evolution may exploit to quantitatively tailor signaling outcomes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hongyi; Sivapalan, Murugesu; Tian, Fuqiang
Inspired by the Dunne diagram, the climatic and landscape controls on the partitioning of annual runoff into its various components (Hortonian and Dunne overland flow and subsurface stormflow) are assessed quantitatively, from a purely theoretical perspective. A simple distributed hydrologic model has been built sufficient to simulate the effects of different combinations of climate, soil, and topography on the runoff generation processes. The model is driven by a sequence of simple hypothetical precipitation events, for a large combination of climate and landscape properties, and hydrologic responses at the catchment scale are obtained through aggregation of grid-scale responses. It is found, first, that the water balance responses, including relative contributions of different runoff generation mechanisms, could be related to a small set of dimensionless similarity parameters. These capture the competition between the wetting, drying, storage, and drainage functions underlying the catchment responses, and in this way, provide a quantitative approximation of the conceptual Dunne diagram. Second, only a subset of all hypothetical catchment/climate combinations is found to be "behavioral," in terms of falling sufficiently close to the Budyko curve, describing mean annual runoff as a function of climate aridity. Furthermore, these behavioral combinations are mostly consistent with the qualitative picture presented in the Dunne diagram, indicating clearly the commonality between the Budyko curve and the Dunne diagram. These analyses also suggest clear interrelationships amongst the "behavioral" climate, soil, and topography parameter combinations, implying these catchment properties may be constrained to be codependent in order to satisfy the Budyko curve.
Physical basis of tap test as a quantitative imaging tool for composite structures on aircraft
NASA Astrophysics Data System (ADS)
Hsu, David K.; Barnard, Daniel J.; Peters, John J.; Dayal, Vinay
2000-05-01
The tap test is a simple but effective way of finding flaws in composite and honeycomb sandwich structures; it has been practiced in aircraft inspection for decades. The mechanics of the tap test were extensively researched by P. Cawley et al., and several versions of instrumented tap test have emerged in recent years. This paper describes a quantitative study of the impact duration as a function of the mass, radius, velocity, and material property of the impactor. The impact response is compared to the predictions of Hertzian-type contact theory and a simple spring model. The electronically measured impact duration, τ, is used for generating images of the tapped region. Using the spring model, the images are converted into images of a spring constant, k, which is a measure of the local contact stiffness. The images of k, largely independent of tapper mass and impact velocity, reveal the size, shape and severity (i.e., percent stiffness reduction) of defects and damages, as well as the presence of substructures and the associated stiffness increase. The studies are carried out on a variety of real aircraft components and the results serve to guide the development of a fieldable tap test imaging system for aircraft inspection.—This material is based upon work supported by the Federal Aviation Administration under Contract #DTFA03-98-D-00008, Delivery Order No. IA016 and performed at Iowa State University's Center for NDE as part of the Center for Aviation Systems Reliability program.
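In the simple spring model referred to above, the tapper of mass m stays in contact with a spring of local stiffness k for half an oscillation period, τ = π√(m/k); inverting this converts each measured contact duration into a stiffness pixel. A minimal sketch of that inversion (the specific mass and duration values below are illustrative):

```python
import math

def contact_stiffness(mass, tau):
    # Simple spring model: contact duration tau = pi * sqrt(m / k),
    # so the local contact stiffness is k = m * (pi / tau) ** 2.
    # A flaw lowers k, which shows up as a longer tap duration.
    return mass * (math.pi / tau) ** 2

def stiffness_image(tau_map, mass):
    # Convert a 2D map of measured tap durations (s) into a stiffness map.
    return [[contact_stiffness(mass, tau) for tau in row] for row in tau_map]
```

Because τ depends on m and k but only weakly on impact velocity, the resulting k-image is largely independent of how hard the inspector taps, which is what makes the method quantitative.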
Mitsos, Alexander; Melas, Ioannis N.; Morris, Melody K.; Saez-Rodriguez, Julio; Lauffenburger, Douglas A.; Alexopoulos, Leonidas G.
2012-01-01
Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next and have led to the construction of models that simulate the cells response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell specific data to result in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state of the art optimization algorithms. PMID:23226239
Quantitative model of the growth of floodplains by vertical accretion
Moody, J.A.; Troutman, B.M.
2000-01-01
A simple one-dimensional model is developed to quantitatively predict the change in elevation, over a period of decades, for vertically accreting floodplains. This unsteady model approximates the monotonic growth of a floodplain as an incremental but constant increase of net sediment deposition per flood for those floods of a partial duration series that exceed a threshold discharge corresponding to the elevation of the floodplain. Sediment deposition from each flood increases the elevation of the floodplain and consequently the magnitude of the threshold discharge, resulting in a decrease in the number of floods and in the growth rate of the floodplain. Floodplain growth curves predicted by this model are compared to empirical growth curves based on dendrochronology and to direct field measurements at five floodplain sites. The model was used to predict the value of net sediment deposition per flood which best fits (in a least squares sense) the empirical and field measurements; these values fall within the range of independent estimates of the net sediment deposition per flood based on empirical equations. These empirical equations permit the application of the model to estimates of floodplain growth for other floodplains throughout the world that lack detailed data on sediment deposition during individual floods. Copyright (C) 2000 John Wiley and Sons, Ltd.
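The feedback described above, where each overtopping flood raises the floodplain and thereby its own threshold, can be sketched as a simple simulation. The exponential distribution of annual peak stage and the parameter values are illustrative assumptions, not the paper's calibrated quantities; only the constant-deposit-per-overtopping-flood mechanism comes from the abstract.

```python
import random

def simulate_floodplain(years, deposit=0.02, z0=0.0, seed=1):
    # Hedged sketch of the model's mechanism: a flood whose stage exceeds the
    # current floodplain elevation z adds a constant net deposit, raising z
    # and hence the threshold for later floods, so growth decelerates.
    # Annual peak stage is drawn from an illustrative exponential distribution.
    rng = random.Random(seed)
    z = z0
    history = [z]
    for _ in range(years):
        stage = rng.expovariate(1.0)  # annual peak stage, arbitrary units
        if stage > z:
            z += deposit              # constant net deposition per flood
        history.append(z)
    return history
```

Plotting the returned history reproduces the characteristic concave growth curve: frequent accretion early on, then rarer and rarer overtopping events as the floodplain rises.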
A quantitative method for defining high-arched palate using the Tcof1(+/-) mutant mouse as a model.
Conley, Zachary R; Hague, Molly; Kurosaka, Hiroshi; Dixon, Jill; Dixon, Michael J; Trainor, Paul A
2016-07-15
The palate functions as the roof of the mouth in mammals, separating the oral and nasal cavities. Its complex embryonic development and assembly pose unique susceptibilities to intrinsic and extrinsic disruptions. Such disruptions may cause failure of the developing palatal shelves to fuse along the midline resulting in a cleft. In other cases the palate may fuse at an arch, resulting in a vaulted oral cavity, termed high-arched palate. There are many models available for studying the pathogenesis of cleft palate but a relative paucity for high-arched palate. One condition exhibiting either cleft palate or high-arched palate is Treacher Collins syndrome, a congenital disorder characterized by numerous craniofacial anomalies. We quantitatively analyzed palatal perturbations in the Tcof1(+/-) mouse model of Treacher Collins syndrome, which phenocopies the condition in humans. We discovered that 46% of Tcof1(+/-) mutant embryos and newborn pups exhibit either soft clefts or full clefts. In addition, 17% of Tcof1(+/-) mutants were found to exhibit high-arched palate, defined as two sigma above the corresponding wild-type population mean for height and angular based arch measurements. Furthermore, palatal shelf length and shelf width were decreased in all Tcof1(+/-) mutant embryos and pups compared to controls. Interestingly, these phenotypes were subsequently ameliorated through genetic inhibition of p53. The results of our study therefore provide a simple, reproducible and quantitative method for investigating models of high-arched palate. Copyright © 2015 Elsevier Inc. All rights reserved.
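The two-sigma criterion used above is easy to make concrete. The helper below is a sketch under stated assumptions (sample standard deviation, one measurement at a time); the function name and the example values are illustrative, not the paper's data.

```python
import math

def is_high_arched(value, wt_values, n_sigma=2.0):
    # The abstract's criterion: classify a palate as high-arched when an
    # arch measurement lies more than two sigma above the wild-type
    # population mean. Uses the sample (n-1) standard deviation.
    n = len(wt_values)
    mean = sum(wt_values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in wt_values) / (n - 1))
    return value > mean + n_sigma * sd
```

The same threshold can be applied independently to the height-based and angle-based arch measurements, flagging a specimen when either exceeds its wild-type cutoff.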
Models of globular proteins in aqueous solutions
NASA Astrophysics Data System (ADS)
Wentzel, Nathaniel James
Protein crystallization is a continuing area of research. Currently, there is no universal theory for the conditions required to crystallize proteins. A better understanding of protein crystallization will be helpful in determining protein structure and in preventing and treating certain diseases. In this thesis, we extend the understanding of globular proteins in aqueous solutions by analyzing various models for protein interactions. Experiments have shown that the liquid-liquid phase separation curves for lysozyme in solution with salt depend on salt type and salt concentration. We analyze a simple square well model for this system, whose well depth depends on salt type and salt concentration, to determine the phase coexistence surfaces from experimental data. The surfaces, calculated from a single Monte Carlo simulation and a simple scaling argument, are shown as a function of temperature, salt concentration and protein concentration for two typical salts. Urate oxidase from Aspergillus flavus is a protein used for studying the effects of polymers on the crystallization of large proteins. Experiments have determined some aspects of the phase diagram. We use Monte Carlo techniques and perturbation theory to predict the phase diagram for a model of urate oxidase in solution with PEG. The model used includes an electrostatic interaction, a van der Waals attraction, and a polymer-induced depletion interaction. The results agree quantitatively with experiments. Anisotropy plays a role in globular protein interactions, including the formation of hemoglobin fibers in sickle cell disease. The solvent conditions have also been shown to play a strong role in the phase behavior of some aqueous protein solutions. Each has previously been treated separately in theoretical studies. Here we propose and analyze a simple, combined model that treats both anisotropy and solvent effects. We find that this model qualitatively explains some phase behavior, including the existence of a lower critical point under certain conditions.
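The square-well model referred to above assigns each protein pair a hard core, a short-ranged attractive well, and zero energy beyond it; salt type and concentration enter only through the well depth. A minimal sketch of the pair potential (parameter values illustrative, not fitted to the lysozyme data):

```python
def square_well_energy(r, sigma=1.0, lam=1.25, epsilon=1.0):
    """Square-well pair energy: hard-core repulsion for r < sigma, an
    attractive well of depth -epsilon out to lam*sigma, zero beyond."""
    if r < sigma:
        return float("inf")
    if r < lam * sigma:
        return -epsilon
    return 0.0

# Salt effects enter only through epsilon: a deeper well (stronger effective
# attraction) shifts the liquid-liquid coexistence curve to higher temperature.
for salt_epsilon in (1.0, 1.3):
    print(square_well_energy(1.1, epsilon=salt_epsilon))
```

In the thesis this potential feeds a Monte Carlo simulation; the scaling argument then maps one simulated coexistence curve onto different salts via epsilon.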
Contact resonances of U-shaped atomic force microscope probes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rezaei, E.; Turner, J. A., E-mail: jaturner@unl.edu
Recent approaches used to characterize the elastic or viscoelastic properties of materials with nanoscale resolution have focused on the contact resonances of atomic force microscope (CR-AFM) probes. The experiments for these CR-AFM methods involve measurement of several contact resonances from which the resonant frequency and peak width are found. The contact resonance values are then compared with the noncontact values in order for the sample properties to be evaluated. The data analysis requires vibration models associated with the probe during contact in order for the beam response to be deconvolved from the measured spectra. To date, the majority of CR-AFM research has used rectangular probes that have a relatively simple vibration response. Recently, U-shaped AFM probes have created much interest because they allow local sample heating. However, the vibration response of these probes is much more complex such that CR-AFM is still in its infancy. In this article, a simplified analytical model of U-shaped probes is evaluated for contact resonance applications relative to a more complex finite element (FE) computational model. The tip-sample contact is modeled using three orthogonal Kelvin-Voigt elements such that the resonant frequency and peak width of each mode are functions of the contact conditions. For the purely elastic case, the frequency results of the simple model are within 8% of the FE model for the lowest six modes over a wide range of contact stiffness values. Results for the viscoelastic contact problem, for which the quality factor of the lowest six modes is compared, show agreement to within 13%. These results suggest that this simple model can be used effectively to evaluate CR-AFM experimental results during AFM scanning such that quantitative mapping of viscoelastic properties may be possible using U-shaped probes.
Large-scale coastal and fluvial models constrain the late Holocene evolution of the Ebro Delta
NASA Astrophysics Data System (ADS)
Nienhuis, Jaap H.; Ashton, Andrew D.; Kettner, Albert J.; Giosan, Liviu
2017-09-01
The distinctive plan-view shape of the Ebro Delta coast reveals a rich morphologic history. The degree to which the form and depositional history of the Ebro and other deltas represent autogenic (internal) dynamics or allogenic (external) forcing remains a prominent challenge for paleo-environmental reconstructions. Here we use simple coastal and fluvial morphodynamic models to quantify paleo-environmental changes affecting the Ebro Delta over the late Holocene. Our findings show that these models are able to broadly reproduce the Ebro Delta morphology, with simple fluvial and wave climate histories. Based on numerical model experiments and the preserved and modern shape of the Ebro Delta plain, we estimate that a phase of rapid shoreline progradation began approximately 2100 years BP, requiring approximately a doubling in coarse-grained fluvial sediment supply to the delta. River profile simulations suggest that an instantaneous and sustained increase in coarse-grained sediment supply to the delta requires a combined increase in both flood discharge and sediment supply from the drainage basin. The persistence of rapid delta progradation throughout the last 2100 years suggests an anthropogenic control on sediment supply and flood intensity. Using proxy records of the North Atlantic Oscillation, we do not find evidence that changes in wave climate aided this delta expansion. Our findings highlight how scenario-based investigations of deltaic systems using simple models can assist first-order quantitative paleo-environmental reconstructions, elucidating the effects of past human influence and climate change, and allowing a better understanding of the future of deltaic landforms.
Nishimoto, Shinji; Gallant, Jack L.
2012-01-01
Area MT has been an important target for studies of motion processing. However, previous neurophysiological studies of MT have used simple stimuli that do not contain many of the motion signals that occur during natural vision. In this study we sought to determine whether views of area MT neurons developed using simple stimuli can account for MT responses under more naturalistic conditions. We recorded responses from macaque area MT neurons during stimulation with naturalistic movies. We then used a quantitative modeling framework to discover which specific mechanisms best predict neuronal responses under these challenging conditions. We find that the simplest model that accurately predicts responses of MT neurons consists of a bank of V1-like filters, each followed by a compressive nonlinearity, a divisive nonlinearity and linear pooling. Inspection of the fit models shows that the excitatory receptive fields of MT neurons tend to lie on a single plane within the three-dimensional spatiotemporal frequency domain, and suppressive receptive fields lie off this plane. However, most excitatory receptive fields form a partial ring in the plane and avoid low temporal frequencies. This receptive field organization ensures that most MT neurons are tuned for velocity but do not tend to respond to ambiguous static textures that are aligned with the direction of motion. In sum, MT responses to naturalistic movies are largely consistent with predictions based on simple stimuli. However, models fit using naturalistic stimuli reveal several novel properties of MT receptive fields that had not been shown in prior experiments. PMID:21994372
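The winning model class above (a bank of V1-like filters, each followed by a compressive nonlinearity, then a divisive nonlinearity and linear pooling) can be sketched as a short pipeline. The specific functional forms below (half-rectification, log compression, a fixed semisaturation constant, uniform pooling weights) are illustrative assumptions, not the fitted model:

```python
import numpy as np

def mt_response(stimulus, filters, c=0.1):
    """Sketch of the MT model cascade: V1-like linear filtering, a
    compressive nonlinearity, divisive normalization across the filter
    bank, and linear pooling into a single scalar response."""
    linear = filters @ stimulus                       # V1-like filtering
    rect = np.maximum(linear, 0.0)                    # half-rectification
    compressed = np.log1p(rect)                       # compressive nonlinearity
    normalized = compressed / (c + compressed.sum())  # divisive nonlinearity
    weights = np.ones(len(filters)) / len(filters)    # linear pooling
    return float(weights @ normalized)

rng = np.random.default_rng(0)
filters = rng.standard_normal((8, 16))    # 8 hypothetical filters, 16-d stimulus
stimulus = rng.standard_normal(16)
print(mt_response(stimulus, filters))
```

In the actual study each stage's parameters are fit to the movie-driven spike data; the planar vs off-plane organization of receptive fields is read off the fitted filters.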
Development of quantitative risk acceptance criteria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griesmeyer, J. M.; Okrent, D.
Some of the major considerations for effective management of risk are discussed, with particular emphasis on risks due to nuclear power plant operations. Although there are impacts associated with the rest of the fuel cycle, they are not addressed here. Several previously published proposals for quantitative risk criteria are reviewed. They range from a simple acceptance criterion on individual risk of death to a quantitative risk management framework. The final section discusses some of the problems in the establishment of a framework for the quantitative management of risk.
A modeling study of a centrifugal compressor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popovic, P.; Shapiro, H.N.
1998-12-31
A centrifugal compressor, which is part of a chlorofluorocarbon R-114 chiller installation, was investigated, operating with a new refrigerant, hydrofluorocarbon R-236ea, a proposed alternative to R-114. A large set of R-236ea operating data, as well as a limited amount of R-114 data, was available for this study. A relatively simple analytical compressor model was developed to describe compressor performance. The model was built upon a thorough literature search, experimental data, and some compressor design parameters. Two original empirical relations were developed, providing a new approach to compressor modeling. The model was developed in a format that would permit it to be easily incorporated into a complete chiller simulation. The model was found to improve somewhat on the quantitative and physical aspects of a compressor model of the same format found in the literature. It was found that the compressor model is specific to the particular refrigerant.
Diffusion models of the flanker task: Discrete versus gradual attentional selection
White, Corey N.; Ratcliff, Roger; Starns, Jeffrey S.
2011-01-01
The present study tested diffusion models of processing in the flanker task, in which participants identify a target that is flanked by items that indicate the same (congruent) or the opposite (incongruent) response. Single- and dual-process flanker models were implemented in a diffusion-model framework and tested against data from experiments that manipulated response bias, speed/accuracy tradeoffs, attentional focus, and stimulus configuration. There was strong mimicry among the models, and each captured the main trends in the data for the standard conditions. However, when more complex conditions were used, a single-process spotlight model captured qualitative and quantitative patterns that the dual-process models could not. Since the single-process model provided the best balance of fit quality and parsimony, the results indicate that processing in the simple versions of the flanker task is better described by gradual rather than discrete narrowing of attention. PMID:21964663
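The single-process spotlight account can be sketched as a diffusion process whose drift rate changes as attention narrows onto the target. The exponential narrowing and all parameter values below are illustrative assumptions, not the fitted model from the paper:

```python
import math
import random

def spotlight_drift(t, congruent, v=1.0, decay=5.0):
    """Time-varying drift rate under a shrinking-spotlight account: the
    flankers' contribution (positive when congruent, negative when not)
    fades as attention gradually narrows onto the target."""
    w_flank = math.exp(-decay * t)        # attention remaining on flankers
    flank = v if congruent else -v
    return (1.0 - w_flank) * v + w_flank * flank

def simulate_rt(congruent, a=1.0, dt=0.001, noise=1.0, seed=1):
    """One diffusion trial: integrate drift plus Gaussian noise until the
    accumulated evidence crosses a boundary at +a (correct) or -a (error)."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < a:
        x += spotlight_drift(t, congruent) * dt
        x += rng.gauss(0.0, noise) * math.sqrt(dt)
        t += dt
    return t, x > 0

rt, correct = simulate_rt(congruent=False)
print(f"incongruent trial: rt = {rt:.3f} s, correct = {correct}")
```

Because the early drift points the wrong way on incongruent trials, this gradual-narrowing mechanism produces the slower, more error-prone incongruent responses seen in the data.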
NASA Astrophysics Data System (ADS)
Sarout, Joël.
2012-04-01
For the first time, a comprehensive and quantitative analysis of the domains of validity of popular wave propagation theories for porous/cracked media is provided. The case of a simple, yet versatile rock microstructure is detailed. The microstructural parameters controlling the applicability of the scattering theories, the effective medium theories, and quasi-static (Gassmann limit) and dynamic (inertial) poroelasticity are analysed in terms of pore/crack characteristic size, geometry and connectivity. To this end, a new permeability model is devised combining the hydraulic radius and percolation concepts. The predictions of this model are compared to published micromechanical models of permeability for the limiting cases of capillary tubes and penny-shaped cracks. It is also compared to published experimental data on natural rocks in these limiting cases. It explicitly accounts for pore space topology around the percolation threshold and far above it. Thanks to this permeability model, the scattering, squirt-flow and Biot cut-off frequencies are quantitatively compared. This comparison leads to an explicit mapping of the domains of validity of these wave propagation theories as a function of the rock's actual microstructure. How this mapping impacts the interpretation of seismic, geophysical and ultrasonic wave velocity data is discussed. The methodology demonstrated here and the outcomes of this analysis are meant to constitute a quantitative guide for the selection of the most suitable modelling strategy for prediction and/or interpretation of rocks' elastic properties in laboratory- or field-scale applications when information regarding the rock's microstructure is available.
Palacio-Torralba, Javier; Hammer, Steven; Good, Daniel W; Alan McNeill, S; Stewart, Grant D; Reuben, Robert L; Chen, Yuhang
2015-01-01
Although palpation has been successfully employed for centuries to assess soft tissue quality, it is a subjective test, and is therefore qualitative and dependent on the experience of the practitioner. Reproducing what the medical practitioner feels requires more than a simple quasi-static stiffness measurement. This paper assesses the capacity of dynamic mechanical palpation to measure the changes in viscoelastic properties that soft tissue can exhibit under certain pathological conditions. A diagnostic framework is proposed to measure elastic and viscous behaviors simultaneously using a reduced set of viscoelastic parameters, giving a reliable index for quantitative assessment of tissue quality. The approach is illustrated on prostate models reconstructed from prostate MRI scans. The examples show that the change in viscoelastic time constant between healthy and cancerous tissue is a key index for quantitative diagnostics using point probing. The method is not limited to any particular tissue or material and is therefore useful for tissue where defining a unique time constant is not trivial. The proposed framework of quantitative assessment could become a useful tool in clinical diagnostics for soft tissue. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Mapping Quantitative Traits in Unselected Families: Algorithms and Examples
Dupuis, Josée; Shi, Jianxin; Manning, Alisa K.; Benjamin, Emelia J.; Meigs, James B.; Cupples, L. Adrienne; Siegmund, David
2009-01-01
Linkage analysis has been widely used to identify from family data genetic variants influencing quantitative traits. Common approaches have both strengths and limitations. Likelihood ratio tests typically computed in variance component analysis can accommodate large families but are highly sensitive to departure from normality assumptions. Regression-based approaches are more robust but their use has primarily been restricted to nuclear families. In this paper, we develop methods for mapping quantitative traits in moderately large pedigrees. Our methods are based on the score statistic which in contrast to the likelihood ratio statistic, can use nonparametric estimators of variability to achieve robustness of the false positive rate against departures from the hypothesized phenotypic model. Because the score statistic is easier to calculate than the likelihood ratio statistic, our basic mapping methods utilize relatively simple computer code that performs statistical analysis on output from any program that computes estimates of identity-by-descent. This simplicity also permits development and evaluation of methods to deal with multivariate and ordinal phenotypes, and with gene-gene and gene-environment interaction. We demonstrate our methods on simulated data and on fasting insulin, a quantitative trait measured in the Framingham Heart Study. PMID:19278016
Zhang, Fen-Fen; Jiang, Meng-Hong; Sun, Lin-Lin; Zheng, Feng; Dong, Lei; Shah, Vishva; Shen, Wen-Bin; Ding, Ya
2015-01-07
To expand the application scope of nuclear magnetic resonance (NMR) technology in the quantitative analysis of pharmaceutical ingredients, ¹⁹F nuclear magnetic resonance (¹⁹F-NMR) spectroscopy has been employed as a simple, rapid, and reproducible approach for the detection of a fluorine-containing model drug, sitagliptin phosphate monohydrate (STG). Ciprofloxacin (Cipro) has been used as the internal standard (IS). Influential factors, including the relaxation delay time (d1) and pulse angle, impacting the accuracy and precision of spectral data are systematically optimized. Method validation has been carried out in terms of precision and intermediate precision, linearity, limit of detection (LOD) and limit of quantification (LOQ), robustness, and stability. To validate the reliability and feasibility of the ¹⁹F-NMR technology in the quantitative analysis of pharmaceutical analytes, the assay result has been compared with that of ¹H-NMR. The statistical F-test and Student's t-test at the 95% confidence level indicate that there is no significant difference between these two methods. Due to the advantages of ¹⁹F-NMR, such as higher resolution and suitability for biological samples, it can be used as a universal technology for the quantitative analysis of other fluorine-containing pharmaceuticals and analytes.
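Internal-standard qNMR quantitation of the kind described rests on one relation between integrated signal areas and amounts of substance. A sketch of the standard equation; all numbers below are illustrative, none are values from the study:

```python
def qnmr_purity(I_a, I_std, N_a, N_std, M_a, M_std, m_a, m_std, P_std):
    """Standard internal-standard qNMR relation: analyte purity from the
    ratio of integrated signal areas (I), the numbers of equivalent nuclei
    giving rise to each signal (N), molar masses (M), weighed masses (m),
    and the known purity of the standard (P_std)."""
    return (I_a / I_std) * (N_std / N_a) * (M_a / M_std) * (m_std / m_a) * P_std

# Illustrative case: equal integrals, equal nucleus counts, equal molar
# masses and weighed masses, with a 99%-pure internal standard.
print(qnmr_purity(1.0, 1.0, 1, 1, 1.0, 1.0, 10.0, 10.0, 0.99))
```

The optimization of d1 and pulse angle mentioned above matters because the relation assumes fully relaxed, accurately integrated signals.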
Kafle, Amol; Klaene, Joshua; Hall, Adam B; Glick, James; Coy, Stephen L; Vouros, Paul
2013-07-15
There is continued interest in exploring new analytical technologies for the detection and quantitation of DNA adducts, biomarkers which provide direct evidence of exposure and genetic damage in cells. With the goal of reducing clean-up steps and improving sample throughput, a Differential Mobility Spectrometry/Mass Spectrometry (DMS/MS) platform has been introduced for adduct analysis. A DMS/MS platform has been utilized for the analysis of dG-ABP, the deoxyguanosine adduct of the bladder carcinogen 4-aminobiphenyl (4-ABP). After optimization of the DMS parameters, each sample was analyzed in just 30 s following a simple protein precipitation step of the digested DNA. A detection limit of one modification in 10^6 nucleosides has been achieved using only 2 µg of DNA. A brief comparison (quantitative and qualitative) with liquid chromatography/mass spectrometry is also presented highlighting the advantages of using the DMS/MS method as a high-throughput platform. The data presented demonstrate the successful application of a DMS/MS/MS platform for the rapid quantitation of DNA adducts using, as a model analyte, the deoxyguanosine adduct of the bladder carcinogen 4-aminobiphenyl. Copyright © 2013 John Wiley & Sons, Ltd.
Quantitative Evaluation of Musical Scale Tunings
ERIC Educational Resources Information Center
Hall, Donald E.
1974-01-01
The acoustical and mathematical basis of the problem of tuning the twelve-tone chromatic scale is reviewed. A quantitative measurement showing how well any tuning succeeds in providing just intonation for any specific piece of music is explained and applied to musical examples using a simple computer program. (DT)
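The kind of measurement described, how far a tuning's intervals sit from just intonation, is conventionally expressed in cents. A short sketch for 12-tone equal temperament; the two deviations printed are standard consequences of the definitions, not figures from the article:

```python
import math

def cents(ratio):
    """Interval size in cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

def et_deviation(semitones, just_ratio):
    """Deviation, in cents, of a 12-tone equal-tempered interval (100 cents
    per semitone) from the corresponding just-intonation ratio."""
    return 100 * semitones - cents(just_ratio)

# The classic results: the tempered fifth is ~2 cents narrow and the
# tempered major third ~14 cents wide, relative to just intonation.
print(f"fifth: {et_deviation(7, 3/2):+.2f} cents")
print(f"major third: {et_deviation(4, 5/4):+.2f} cents")
```

A tuning-evaluation program like the one described would weight such deviations by how often each interval occurs in the piece being scored.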
Kovarich, Simona; Papa, Ester; Gramatica, Paola
2011-06-15
The identification of potential endocrine disrupting (ED) chemicals is an important task for the scientific community due to their diffusion in the environment; the production and use of such compounds will be strictly regulated through the authorization process of the REACH regulation. To overcome the problem of insufficient experimental data, the quantitative structure-activity relationship (QSAR) approach is applied to predict the ED activity of new chemicals. In the present study QSAR classification models are developed, according to the OECD principles, to predict the ED potency for a class of emerging ubiquitary pollutants, viz. brominated flame retardants (BFRs). Different endpoints related to ED activity (i.e. aryl hydrocarbon receptor agonism and antagonism, estrogen receptor agonism and antagonism, androgen and progesterone receptor antagonism, T4-TTR competition, E2SULT inhibition) are modeled using the k-NN classification method. The best models are selected by maximizing the sensitivity and external predictive ability. We propose simple QSARs (based on few descriptors) characterized by internal stability, good predictive power and with a verified applicability domain. These models are simple tools that are applicable to screen BFRs in relation to their ED activity, and also to design safer alternatives, in agreement with the requirements of REACH regulation at the authorization step. Copyright © 2011 Elsevier B.V. All rights reserved.
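The k-NN classification at the core of these QSAR models is straightforward to sketch: compounds are numeric descriptor vectors, and a query takes the majority label of its k nearest training compounds. The descriptors, labels, and distance choice below are illustrative assumptions, not the published models:

```python
def knn_classify(query, training, k=3):
    """Minimal k-nearest-neighbours classifier: Euclidean distance over
    descriptor vectors, majority vote among the k closest compounds."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    neighbours = sorted(training, key=lambda item: dist(query, item[0]))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

# Hypothetical 2-descriptor training set: 1 = ED-active, 0 = inactive.
training = [([0.10, 0.20], 0), ([0.20, 0.10], 0), ([0.15, 0.25], 0),
            ([0.90, 0.80], 1), ([0.80, 0.90], 1), ([0.85, 0.95], 1)]
print(knn_classify([0.88, 0.85], training))   # → 1
```

The applicability-domain check mentioned above amounts to rejecting queries whose nearest-neighbour distances exceed a threshold set from the training set.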
NASA Astrophysics Data System (ADS)
Paillet, Frederick
2012-08-01
A simple mass-balance code allows effective modeling of conventional fluid column resistivity logs in dilution tests involving column replacement with either distilled water or dilute brine. Modeling a series of column profiles where the inflowing formation water introduces water quality interfaces propagating along the borehole gives effective estimates of the rate of borehole flow. Application of the dilution model yields estimates of borehole flow rates that agree with measurements made with the heat-pulse flowmeter under ambient and pumping conditions. Model dilution experiments are used to demonstrate how dilution logging can extend the range of borehole flow measurement at least an order of magnitude beyond that achieved with flowmeters. However, dilution logging has the same dynamic range limitation encountered with flowmeters because it is difficult to detect and characterize flow zones that contribute a small fraction of total flow when that contribution is superimposed on a larger flow. When the smaller contribution is located below the primary zone, ambient downflow may disguise the zone if pumping is not strong enough to reverse the outflow. This situation can be addressed by increased pumping. But this is likely to make the moveout of water quality interfaces too fast to measure in the upper part of the borehole, so that a combination of flowmeter and dilution method may be more appropriate. Numerical experiments show that the expected weak horizontal flow across the borehole at conductive zones would be almost impossible to recognize if any ambient vertical flow is present. In situations where natural water quality differences occur such as flowing boreholes or injection experiments, the simple mass-balance code can be used to quantitatively model the evolution of fluid column logs. Otherwise, dilution experiments can be combined with high-resolution flowmeter profiles to obtain results not attainable using either method alone.
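The mass-balance bookkeeping behind such a dilution model can be sketched as plug-flow displacement of the fluid column: inflowing formation water pushes the replaced column fluid upward, and the moveout of the water-quality interface gives the flow rate. This is a sketch of the idea on a uniform cell discretization, not the author's code:

```python
def advect_column(profile, inflow_conc, cells_per_step):
    """One time step of a minimal mass-balance dilution model: formation
    water of concentration inflow_conc enters at the bottom of the column
    and displaces the fluid upward by cells_per_step cells (plug flow)."""
    n = cells_per_step
    return [inflow_conc] * n + profile[:-n]

# Column initially replaced with distilled water (conc 0); saline formation
# water (conc 1) enters at the bottom at a rate of 2 cells per step.
profile = [0.0] * 10
for _ in range(3):
    profile = advect_column(profile, 1.0, 2)
print(profile)   # interface has moved 6 cells up the column
```

Fitting the observed interface position against elapsed time (cells per step times cell volume) yields the borehole flow rate that the paper compares with heat-pulse flowmeter measurements.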
NASA Astrophysics Data System (ADS)
Kalchenko, Vyacheslav; Molodij, Guillaume; Kuznetsov, Yuri; Smolyakov, Yuri; Israeli, David; Meglinski, Igor; Harmelin, Alon
2016-03-01
The use of fluorescence imaging of vascular permeability has become a gold standard for assessing the inflammation process during experimental immune responses in vivo. Optical fluorescence imaging provides a very useful and simple tool for this purpose. The motivation comes from the need for robust and simple quantification and data presentation of inflammation based on vascular permeability. The change in fluorescence intensity as a function of time is a widely accepted method of assessing vascular permeability during inflammation related to the immune response. In the present study we propose to bring a new dimension by applying a more sophisticated approach to the analysis of the vascular reaction, using quantitative analysis based on methods derived from astronomical observations, in particular space-time Fourier filtering followed by a polynomial orthogonal mode decomposition. We demonstrate that the temporal evolution of the fluorescence intensity observed at certain pixels correlates quantitatively with blood flow circulation under normal conditions. The approach allows determination of the regions of permeability and monitoring of both the fast kinetics related to the distribution of the contrast material in the circulatory system and the slow kinetics associated with its extravasation. Thus, we introduce a simple and convenient method for fast quantitative visualization of the leakage related to the inflammatory (immune) reaction in vivo.
Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L
2018-01-01
Aims A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify the constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional errors in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of the performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
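The Bland-Altman computation the paper applies reduces to the mean of the paired differences (the bias, capturing constant error) and its 95% limits of agreement. A sketch with hypothetical paired assay values; the numbers are illustrative, not NGS validation data:

```python
from statistics import mean, stdev

def bland_altman(x, y):
    """Bland-Altman statistics for two paired quantitative assays: the mean
    difference (bias) and the 95% limits of agreement, bias ± 1.96 times
    the standard deviation of the differences."""
    diffs = [b - a for a, b in zip(x, y)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical variant-allele fractions: validated assay (x) vs an NGS
# assay in validation (y) that runs slightly high (constant error).
x = [0.10, 0.20, 0.30, 0.40, 0.50]
y = [0.11, 0.22, 0.30, 0.42, 0.51]
bias, (lo, hi) = bland_altman(x, y)
print(f"bias = {bias:.3f}, limits of agreement = ({lo:.3f}, {hi:.3f})")
```

A proportional error would instead show as differences that grow with the measurement mean, which is where the regression-based analysis complements this plot.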
Karahashi, Minako; Fukuhara, Hiroto; Hoshina, Miki; Sakamoto, Takeshi; Yamazaki, Tohru; Mitsumoto, Atsushi; Kawashima, Yoichi; Kudo, Naomi
2014-01-01
Fibrates are used in biochemical and pharmacological studies as bioactive tools. Nevertheless, most studies have lacked information concerning the concentrations of fibric acids working inside tissues, because a simple and sensitive method has not been available for their quantitation. This study aimed to develop a simple and sensitive bioanalytical method for the quantitation of clofibric, bezafibric and fenofibric acids in samples of very small portions of tissues. Fibric acids were extracted into n-hexane-ethyl acetate from tissue homogenates (10 mg of liver, kidney or muscle) or serum (100 µL) and were derivatized with 4-bromomethyl-6,7-dimethoxycoumarin, followed by HPLC with fluorescence detection. These compounds were separated isocratically on a reversed phase with acetonitrile-water. Standard analytical curves were linear over the concentration range of 0.2-20 nmol/10 mg of liver. Precision and accuracy were within acceptable limits. Recovery from liver homogenates ranged from 93.03 to 112.29%. This method enabled the quantitation of fibric acids in 10 mg of liver from rats treated with clofibric acid, bezafibric acid or fenofibrate. From these analytical data, it became clear that there was no large difference in the ratio of acyl-CoA oxidase 1 (Acox1) mRNA level to fibric acid content in the liver among the three fibric acids, suggesting that these three fibric acids have similar potency to increase expression of the Acox1 gene, which is a target of peroxisome proliferator-activated receptor α. Thus, the proposed method is a simple, sensitive and reliable tool for the quantitation of fibric acids working in vivo inside livers.
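The standard-curve step described above is ordinary least squares on detector response versus concentration, inverted to read unknown samples off the fitted line. A minimal sketch with hypothetical calibration points (values illustrative, not from the study):

```python
def fit_line(x, y):
    """Ordinary least-squares fit y = a*x + b, as used for a calibration
    (standard) curve relating peak area to analyte concentration."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    b = my - a * mx
    return a, b

# Hypothetical calibration points: concentration (nmol/10 mg liver) vs
# fluorescence peak area (arbitrary units).
conc = [0.2, 1.0, 5.0, 10.0, 20.0]
area = [0.41, 2.05, 10.1, 20.3, 40.2]
a, b = fit_line(conc, area)
unknown_area = 8.3
print(f"estimated concentration: {(unknown_area - b) / a:.2f}")
```

Recovery and precision are then assessed by running spiked homogenates through the same curve.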
Hydrophobic ionic liquids for quantitative bacterial cell lysis with subsequent DNA quantification.
Fuchs-Telka, Sabine; Fister, Susanne; Mester, Patrick-Julian; Wagner, Martin; Rossmanith, Peter
2017-02-01
DNA is one of the most frequently analyzed molecules in the life sciences. In this article we describe a simple and fast protocol for quantitative DNA isolation from bacteria based on hydrophobic ionic liquid supported cell lysis at elevated temperatures (120-150 °C) for subsequent PCR-based analysis. From a set of five hydrophobic ionic liquids, 1-butyl-1-methylpyrrolidinium bis(trifluoromethylsulfonyl)imide was identified as the most suitable for quantitative cell lysis and DNA extraction because of limited quantitative PCR inhibition by the aqueous eluate as well as no detectable DNA uptake. The newly developed method was able to efficiently lyse Gram-negative bacterial cells, whereas Gram-positive cells were protected by their thick cell wall. The performance of the final protocol resulted in quantitative DNA extraction efficiencies for Gram-negative bacteria similar to those obtained with a commercial kit, whereas the number of handling steps, and especially the time required, was dramatically reduced. Graphical Abstract: After careful evaluation of five hydrophobic ionic liquids, 1-butyl-1-methylpyrrolidinium bis(trifluoromethylsulfonyl)imide ([BMPyr⁺][Ntf₂⁻]) was identified as the most suitable ionic liquid for quantitative cell lysis and DNA extraction. When used for Gram-negative bacteria, the protocol presented is simple and very fast and achieves DNA extraction efficiencies similar to those obtained with a commercial kit. ddH₂O, double-distilled water; qPCR, quantitative PCR.
Helicopter Pilot Performance for Discrete-maneuver Flight Tasks
NASA Technical Reports Server (NTRS)
Heffley, R. K.; Bourne, S. M.; Hindson, W. S.
1984-01-01
This paper describes a current study of several basic helicopter flight maneuvers. The data base consists of in-flight measurements from instrumented helicopters flown by experienced pilots. The analysis technique is simple enough to apply without automatic data processing, and the results can be used to build quantitative math models of the flight task and some aspects of the pilot's control strategy. In addition to describing the performance measurement technique, some results are presented which define the aggressiveness and amplitude of maneuvering for several lateral maneuvers, including turns and sidesteps.
Equilibrium and disequilibrium chemistry of adiabatic, solar-composition planetary atmospheres
NASA Technical Reports Server (NTRS)
Lewis, J. S.
1976-01-01
The impact of atmospheric and cloud-structure models on the nonequilibrium chemical behavior of the atmospheres of the Jovian planets is discussed. Quantitative constraints on photochemical, lightning, and charged-particle production of organic matter and chromophores are emphasized whenever available. These considerations imply that inorganic chromophore production is far more important than organic chromophore production, and that lightning is probably negligible relative to photochemistry on Jupiter. Production of complex molecules by gas-phase disequilibrium processes on Saturn, Uranus, and Neptune is severely limited by condensation of even simple intermediates.
Public–private interaction in pharmaceutical research
Cockburn, Iain; Henderson, Rebecca
1996-01-01
We empirically examine interaction between the public and private sectors in pharmaceutical research using qualitative data on the drug discovery process and quantitative data on the incidence of coauthorship between public and private institutions. We find evidence of significant reciprocal interaction, and reject a simple “linear” dichotomous model in which the public sector performs basic research and the private sector exploits it. Linkages to the public sector differ across firms, reflecting variation in internal incentives and policy choices, and the nature of these linkages correlates with their research performance. PMID:8917485
NASA Astrophysics Data System (ADS)
Zhang, Yi-Qing; Cui, Jing; Zhang, Shu-Min; Zhang, Qi; Li, Xiang
2016-02-01
Modelling temporal networks of human face-to-face contacts is vital both for understanding the spread of airborne pathogens and the word-of-mouth spreading of information. Although many efforts have been devoted to modelling these temporal networks, two important social features, public activity and individual reachability, have been ignored in these models. Here we present a simple model that captures these two features and other typical properties of empirical face-to-face contact networks. The model describes agents which are characterized by an attractiveness that slows down the motion of nearby people, have an event-triggered activity probability, and perform an activity-dependent biased random walk in a square box with periodic boundaries. The model quantitatively reproduces two empirical temporal networks of human face-to-face contacts, as verified by their network properties and by the dynamics of epidemic spread on them.
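The core dynamics described in the abstract can be sketched as a toy agent-based loop. This is an illustrative simplification under assumed parameter values (agent count, box size, interaction radius, speeds, activation probability), not the authors' released model:

```python
import numpy as np

# Toy sketch: active agents random-walk in a periodic box; a nearby agent's
# attractiveness slows an agent down; a "contact" is logged when two active
# agents come within an interaction radius. All parameters are assumed.
rng = np.random.default_rng(3)
N, L, radius, steps = 50, 1.0, 0.05, 200
pos = rng.uniform(0, L, (N, 2))
attract = rng.uniform(0, 1, N)      # attractiveness slows neighbours down
active = rng.random(N) < 0.6        # simplified stand-in for event-triggered activity

contacts = []
for t in range(steps):
    # pairwise displacements with periodic (minimum-image) boundary
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    dist = np.hypot(d[..., 0], d[..., 1])
    near = (dist < radius) & active[:, None] & active[None, :]
    np.fill_diagonal(near, False)
    i, j = np.nonzero(np.triu(near))
    contacts.extend((t, a, b) for a, b in zip(i, j))

    # a neighbour's attractiveness reduces an agent's step length
    max_attr = np.where(near.any(axis=1),
                        (near * attract[None, :]).max(axis=1), 0.0)
    speed = 0.02 * (1.0 - max_attr)
    angle = rng.uniform(0, 2 * np.pi, N)
    step = np.column_stack([np.cos(angle), np.sin(angle)]) * speed[:, None]
    pos = np.mod(pos + step * active[:, None], L)
```

The `contacts` list, read as (time, agent, agent) events, is the raw material of a temporal contact network.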
Water-dependent photonic bandgap in silica artificial opals.
Gallego-Gómez, Francisco; Blanco, Alvaro; Canalejas-Tejero, Victor; López, Cefe
2011-07-04
Some characteristics of silica-based structures, such as the photonic properties of artificial opals formed by silica spheres, can be greatly affected by the presence of adsorbed water. The reversible modification of the water content of an opal is investigated here by moderate heating (below 300 °C) while measuring in situ the changes in the photonic bandgap. Due to reversible removal of interstitial water, large blueshifts of 30 nm and a bandgap narrowing of 7% are observed. The latter is particularly surprising, because water desorption increases the refractive index contrast, which should instead lead to bandgap broadening. A quantitative explanation of this experiment is provided using a simple model for water distribution in the opal that assumes a non-close-packed fcc structure. This model further predicts that, at room temperature, about 50% of the interstitial water forms necks between nearest-neighbor spheres, which are separated by 5% of their diameter. Upon heating, dehydration predominantly occurs at the sphere surfaces (in the opal voids), so that above 65 °C the remaining water resides exclusively in the necks. A near-close-packed fcc arrangement is only achieved above 200 °C. The high sensitivity to water changes exhibited by silica opals, even under gentle heating of a few degrees, must be taken into account for practical applications. Remarkably, accurate control of the distance between spheres, from 16 to 1 nm, is obtained with temperature. In this study, novel use of the optical properties of the opal is made to infer quantitative information about water distribution within silica beads and dehydration phenomena from simple reflection spectra. Taking advantage of the well-defined opal morphology, this approach offers a simple tool for the straightforward investigation of generic adsorption-desorption phenomena, which might be extrapolated to many other fields involving capillary condensation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Predicting acute pain after cesarean delivery using three simple questions.
Pan, Peter H; Tonidandel, Ashley M; Aschenbrenner, Carol A; Houle, Timothy T; Harris, Lynne C; Eisenach, James C
2013-05-01
Interindividual variability in postoperative pain presents a clinical challenge. Preoperative quantitative sensory testing is useful but time consuming in predicting postoperative pain intensity. The current study was conducted to develop and validate a predictive model of acute postcesarean pain using a simple three-item preoperative questionnaire. A total of 200 women scheduled for elective cesarean delivery under subarachnoid anesthesia were enrolled (192 subjects analyzed). Patients were asked to rate the loudness of audio tones, their level of anxiety, their anticipated pain, and their anticipated analgesic need from surgery. Postoperatively, patients reported the intensity of evoked pain. Regression analysis was performed to generate a predictive model for pain from these measures. A validation cohort of 151 women was enrolled to test the reliability of the model (131 subjects analyzed). Responses from each of the three preoperative questions correlated moderately with 24-h evoked pain intensity (r = 0.24-0.33, P < 0.001). Audio tone rating added uniquely, but minimally, and was not included in the predictive model. The multiple regression analysis yielded a statistically significant model (R = 0.20, P < 0.001), and the validation cohort reliably reproduced a very similar regression line (R = 0.18). In predicting the upper 20th percentile of evoked pain scores, the optimal cut point was 46.9 (z = 0.24), at which a sensitivity of 0.68 and a specificity of 0.67 were as balanced as possible. This simple three-item questionnaire is useful for predicting postcesarean evoked pain intensity and could be applied in further research and clinical practice to tailor analgesic therapy to those who need it most.
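The modeling workflow in this abstract (multiple regression from a few preoperative ratings, then a cut point balancing sensitivity and specificity for the upper 20th percentile of pain) can be sketched as follows. All data here are synthetic and the coefficients are assumptions, not the study's fitted model:

```python
import numpy as np

# Synthetic stand-ins for the three preoperative questionnaire ratings
rng = np.random.default_rng(0)
n = 192
anxiety = rng.uniform(0, 100, n)
anticipated_pain = rng.uniform(0, 100, n)
analgesic_need = rng.uniform(0, 100, n)
pain_24h = (10 + 0.2 * anxiety + 0.3 * anticipated_pain
            + 0.1 * analgesic_need + rng.normal(0, 20, n))

# Ordinary least squares fit of the predictive model
X = np.column_stack([np.ones(n), anxiety, anticipated_pain, analgesic_need])
beta, *_ = np.linalg.lstsq(X, pain_24h, rcond=None)
predicted = X @ beta

# Cut point on predicted scores targeting the upper 20th percentile of pain
high_pain = pain_24h >= np.percentile(pain_24h, 80)
flagged = predicted >= np.percentile(predicted, 80)
sensitivity = (flagged & high_pain).sum() / high_pain.sum()
specificity = (~flagged & ~high_pain).sum() / (~high_pain).sum()
```

Sweeping the cut point instead of fixing it at the 80th percentile would trace out the sensitivity/specificity trade-off the authors describe.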
Numerous extraction methods have been developed and used in the quantitation of both photopigments and mycosporine-like amino acids (MAAs) found in Symbiodinium sp. and zooxanthellate metazoans. We have developed a simple, mild extraction procedure using methanol, which when coupl...
Four-point bending as a method for quantitatively evaluating spinal arthrodesis in a rat model.
Robinson, Samuel T; Svet, Mark T; Kanim, Linda A; Metzger, Melodie F
2015-02-01
The most common method of evaluating the success (or failure) of rat spinal fusion procedures is manual palpation testing. Whereas manual palpation provides only a subjective binary answer (fused or not fused) regarding the success of a fusion surgery, mechanical testing can provide more quantitative data by assessing variations in strength among treatment groups. We here describe a mechanical testing method to quantitatively assess single-level spinal fusion in a rat model, to improve on the binary and subjective nature of manual palpation as an end point for fusion-related studies. We tested explanted lumbar segments from Sprague-Dawley rat spines after single-level posterolateral fusion procedures at L4-L5. Segments were classified as 'not fused,' 'restricted motion,' or 'fused' by using manual palpation testing. After thorough dissection and potting of the spine, 4-point bending in flexion then was applied to the L4-L5 motion segment, and stiffness was measured as the slope of the moment-displacement curve. Results demonstrated statistically significant differences in stiffness among all groups, which were consistent with preliminary grading according to manual palpation. In addition, the 4-point bending results provided quantitative information regarding the quality of the bony union formed and therefore enabled the comparison of fused specimens. Our results demonstrate that 4-point bending is a simple, reliable, and effective way to describe and compare results among rat spines after fusion surgery.
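The stiffness measure described above, the slope of the moment-displacement curve, reduces to a linear fit over the elastic region. A minimal sketch with synthetic readings (not the study's data):

```python
import numpy as np

# Synthetic four-point-bending readings over the linear region of the
# moment-displacement curve for one explanted L4-L5 segment.
displacement_mm = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
moment_nmm = np.array([0.0, 4.1, 8.0, 12.2, 15.9, 20.1])

# Stiffness = slope of the least-squares line through the curve
slope, intercept = np.polyfit(displacement_mm, moment_nmm, 1)
stiffness = slope  # N*mm of moment per mm of displacement
```

Comparing `stiffness` across treatment groups gives the quantitative, continuous end point the authors contrast with binary manual palpation.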
A Fan-Tastic Quantitative Exploration of Ohm's Law
ERIC Educational Resources Information Center
Mitchell, Brandon; Ekey, Robert; McCullough, Roy; Reitz, William
2018-01-01
Teaching simple circuits and Ohm's law to students in the introductory classroom has been extensively investigated through the common practice of using incandescent light bulbs to help students develop a conceptual foundation before moving on to quantitative analysis. However, the bulb filaments' resistance has a large temperature dependence,…
A simple method for the quantitative determination of elemental sulfur on oxidized sulfide minerals is described. Extraction of elemental sulfur in perchloroethylene and subsequent analysis with high-performance liquid chromatography were used to ascertain the total elemental ...
Opportunistic fungal pathogens are a concern because of the increasing number of immunocompromised patients. The goal of this research was to test a simple extraction method and rapid quantitative PCR (QPCR) measurement of the occurrence of potential pathogens, Aspergillus fumiga...
Goel, Purva; Bapat, Sanket; Vyas, Renu; Tambe, Amruta; Tambe, Sanjeev S
2015-11-13
The development of quantitative structure-retention relationships (QSRR) aims at constructing an appropriate linear/nonlinear model for the prediction of the retention behavior (such as Kovats retention index) of a solute on a chromatographic column. Commonly, multi-linear regression and artificial neural networks are used in the QSRR development in the gas chromatography (GC). In this study, an artificial intelligence based data-driven modeling formalism, namely genetic programming (GP), has been introduced for the development of quantitative structure based models predicting Kovats retention indices (KRI). The novelty of the GP formalism is that given an example dataset, it searches and optimizes both the form (structure) and the parameters of an appropriate linear/nonlinear data-fitting model. Thus, it is not necessary to pre-specify the form of the data-fitting model in the GP-based modeling. These models are also less complex, simple to understand, and easy to deploy. The effectiveness of GP in constructing QSRRs has been demonstrated by developing models predicting KRIs of light hydrocarbons (case study-I) and adamantane derivatives (case study-II). In each case study, two-, three- and four-descriptor models have been developed using the KRI data available in the literature. The results of these studies clearly indicate that the GP-based models possess an excellent KRI prediction accuracy and generalization capability. Specifically, the best performing four-descriptor models in both the case studies have yielded high (>0.9) values of the coefficient of determination (R(2)) and low values of root mean squared error (RMSE) and mean absolute percent error (MAPE) for training, test and validation set data. The characteristic feature of this study is that it introduces a practical and an effective GP-based method for developing QSRRs in gas chromatography that can be gainfully utilized for developing other types of data-driven models in chromatography science. 
Copyright © 2015 Elsevier B.V. All rights reserved.
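The model-quality metrics cited in the QSRR study above (coefficient of determination R², RMSE, and MAPE) are simple to compute; the values below are synthetic placeholders, not the paper's Kovats retention index data:

```python
import numpy as np

# Synthetic "true" vs "predicted" retention indices for illustration
y_true = np.array([500.0, 600.0, 700.0, 800.0, 900.0])
y_pred = np.array([510.0, 590.0, 705.0, 795.0, 910.0])

# Coefficient of determination
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# Root mean squared error and mean absolute percent error
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
mape = 100 * np.mean(np.abs((y_true - y_pred) / y_true))
```

Reporting all three on training, test, and validation splits, as the study does, guards against judging a model by a single metric.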
The evolution of labile traits in sex- and age-structured populations.
Childs, Dylan Z; Sheldon, Ben C; Rees, Mark
2016-03-01
Many quantitative traits are labile (e.g. somatic growth rate, reproductive timing and investment), varying over the life cycle as a result of behavioural adaptation, developmental processes and plastic responses to the environment. At the population level, selection can alter the distribution of such traits across age classes and among generations. Despite a growing body of theoretical research exploring the evolutionary dynamics of labile traits, a data-driven framework for incorporating such traits into demographic models has not yet been developed. Integral projection models (IPMs) are increasingly being used to understand the interplay between changes in labile characters, life histories and population dynamics. One limitation of the IPM approach is that it relies on phenotypic associations between parents and offspring traits to capture inheritance. However, it is well-established that many different processes may drive these associations, and currently, no clear consensus has emerged on how to model micro-evolutionary dynamics in an IPM framework. We show how to embed quantitative genetic models of inheritance of labile traits into age-structured, two-sex models that resemble standard IPMs. Commonly used statistical tools such as GLMs and their mixed model counterparts can then be used for model parameterization. We illustrate the methodology through development of a simple model of egg-laying date evolution, parameterized using data from a population of Great tits (Parus major). We demonstrate how our framework can be used to project the joint dynamics of species' traits and population density. We then develop a simple extension of the age-structured Price equation (ASPE) for two-sex populations, and apply this to examine the age-specific contributions of different processes to change in the mean phenotype and breeding value. 
The data-driven framework we outline here has the potential to facilitate greater insight into the nature of selection and its consequences in settings where focal traits vary over the lifetime through ontogeny, behavioural adaptation and phenotypic plasticity, as well as providing a potential bridge between theoretical and empirical studies of labile trait variation. © 2016 The Authors Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
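The integral projection model (IPM) machinery underlying the framework above can be sketched generically: discretize the trait axis, build a kernel from survival, trait transition, and fecundity-weighted inheritance, and read the asymptotic growth rate off the dominant eigenvalue. The vital-rate functions below are invented for illustration, not the parameterization fit to the great tit data:

```python
import numpy as np

# Discretize the trait axis (e.g. standardized egg-laying date)
n_mesh, z_lo, z_hi = 100, -3.0, 3.0
z = np.linspace(z_lo, z_hi, n_mesh)
h = z[1] - z[0]

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Assumed vital rates: logistic survival, log-linear fecundity
survival = 1 / (1 + np.exp(-0.5 * z))
fecundity = np.exp(0.1 * z)

# Gaussian trait transition (rows: next z) and parent-offspring inheritance
G = gauss(z[:, None], 0.9 * z[None, :], 0.4)
D = gauss(z[:, None], 0.5 * z[None, :], 0.6)

# Discretized projection kernel: survival/growth plus reproduction/inheritance
K = h * (G * survival[None, :] + D * fecundity[None, :])

# Dominant eigenvalue approximates the asymptotic population growth rate
lam = np.max(np.abs(np.linalg.eigvals(K)))
```

Iterating `K` on a trait-density vector projects the joint dynamics of the trait distribution and population size, which is the operation the authors extend to two sexes and explicit breeding values.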
NASA Astrophysics Data System (ADS)
Edmiston, John Kearney
This work explores the field of continuum plasticity from two fronts. On the theory side, we establish a complete specification of a phenomenological theory of plasticity for single crystals. The model serves as an alternative to the popular crystal plasticity formulation. Such a model has been previously proposed in the literature; the new contribution made here is the constitutive framework and resulting simulations. We calibrate the model to available data and use a simple numerical method to explore resulting predictions in plane strain boundary value problems. Results show promise for further investigation of the plasticity model. Conveniently, this theory comes with a corresponding experimental tool in X-ray diffraction. Recent advances in hardware technology at synchrotron sources have led to an increased use of the technique for studies of plasticity in the bulk of materials. The method has been successful in qualitative observations of material behavior, but its use in quantitative studies seeking to extract material properties is open for investigation. Therefore in the second component of the thesis several contributions are made to synchrotron X-ray diffraction experiments, in terms of method development as well as the quantitative reporting of constitutive parameters. In the area of method development, analytical tools are developed to determine the available precision of this type of experiment—a crucial aspect to determine if the method is to be used for quantitative studies. We also extract kinematic information relating to intragranular inhomogeneity which is not accessible with traditional methods of data analysis. In the area of constitutive parameter identification, we use the method to extract parameters corresponding to the proposed formulation of plasticity for a titanium alloy (HCP) which is continuously sampled by X-ray diffraction during uniaxial extension. 
These results and the lessons learned from these efforts constitute an early report on the quantitative value of undertaking such a line of experimentation for the study of plastic deformation processes.
Thresholds and the rising pion inclusive cross section
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, S.T.
In the context of the hypothesis of the Pomeron-f identity, it is shown that the rising pion inclusive cross section can be explained over a wide range of energies as a series of threshold effects. Low-mass thresholds are seen to be important. In order to understand the contributions of high-mass thresholds (flavoring), a simple two-channel multiperipheral model is examined. The analysis sheds light on the relation between thresholds and Mueller-Regge couplings. In particular, it is seen that inclusive- and total-cross-section threshold mechanisms may differ. A quantitative model based on this idea and utilizing previous total-cross-section fits is seen to agree well with experiment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lappi, T.; Schenke, B.; Schlichting, S.
Here we examine the origins of azimuthal correlations observed in high energy proton-nucleus collisions by considering the simple example of the scattering of uncorrelated partons off color fields in a large nucleus. We demonstrate how the physics of fluctuating color fields in the color glass condensate (CGC) effective theory generates these azimuthal multiparticle correlations and compute the corresponding Fourier coefficients v_n within different CGC approximation schemes. We discuss in detail the qualitative and quantitative differences between the different schemes. Lastly, we show how a recently introduced color field domain model that captures key features of the observed azimuthal correlations can be understood in the CGC effective theory as a model of non-Gaussian correlations in the target nucleus.
The modeling of the dynamic behavior of an unsymmetrical rotor
NASA Astrophysics Data System (ADS)
Pǎrǎuşanu, Ioan; Gheorghiu, Horia; Petre, Cristian; Jiga, Gabriel; Crişan, Nicoleta
2018-02-01
The purpose of this article is to present the modeling of the dynamic behavior of unsymmetrical rotors in relatively simple quantitative terms. Numerical simulations show that shaft orthotropy produces a peak of resonant vibration at about half the regular critical speed and, for small damping, a range of possible unstable behavior between the two critical speeds. Rotors having the shaft and/or the disks with unequal diametral moments of inertia (e.g., two-bladed small airplane propellers, wind turbines, and fans) are dynamically unstable above a certain speed, and some of these may return to a stable condition at a sufficiently high speed, depending on the particular magnitudes of the gyroscopic coupling and the inertia inequality.
NASA Astrophysics Data System (ADS)
Shortell, Matthew P.; Althomali, Marwan A. M.; Wille, Marie-Luise; Langton, Christian M.
2017-11-01
We demonstrate a simple technique for quantitative ultrasound imaging of the cortical shell of long bone replicas. Traditional ultrasound computed tomography instruments use the transmitted or reflected waves for separate reconstructions but suffer from strong refraction artefacts in highly heterogeneous samples, such as bones in soft tissue. The technique described here simplifies the long bone to a two-component composite and uses both the transmitted and reflected waves for reconstructions, allowing the speed of sound and thickness of the cortical shell to be calculated accurately. The technique is simple to implement, computationally inexpensive, and sample positioning errors are minimal.
Complex Geometric Models of Diffusion and Relaxation in Healthy and Damaged White Matter
Farrell, Jonathan A.D.; Smith, Seth A.; Reich, Daniel S.; Calabresi, Peter A.; van Zijl, Peter C.M.
2010-01-01
Which aspects of tissue microstructure affect diffusion weighted MRI signals? Prior models, many of which use Monte-Carlo simulations, have focused on relatively simple models of the cellular microenvironment and have not considered important anatomic details. With the advent of higher-order analysis models for diffusion imaging, such as high-angular-resolution diffusion imaging (HARDI), more realistic models are necessary. This paper presents and evaluates the reproducibility of simulations of diffusion in complex geometries. Our framework is quantitative, does not require specialized hardware, is easily implemented with little programming experience, and is freely available as open-source software. Models may include compartments with different diffusivities, permeabilities, and T2 time constants using both parametric (e.g., spheres and cylinders) and arbitrary (e.g., mesh-based) geometries. Three-dimensional diffusion displacement-probability functions are mapped with high reproducibility, and thus can be readily used to assess reproducibility of diffusion-derived contrasts. PMID:19739233
Generic distortion model for metrology under optical microscopes
NASA Astrophysics Data System (ADS)
Liu, Xingjian; Li, Zhongwei; Zhong, Kai; Chao, YuhJin; Miraldo, Pedro; Shi, Yusheng
2018-04-01
For metrology under optical microscopes, lens distortion is the dominant source of error. Previous distortion models and correction methods mostly rely on parametric distortion models, which require a priori knowledge of the microscope's lens system. However, because of the numerous optical elements in a microscope, distortion can hardly be represented by a simple parametric model. In this paper, a generic distortion model considering both symmetric and asymmetric distortions is developed. The model is obtained by using radial basis functions (RBFs) to interpolate the radius and distortion values of symmetric distortions (and the image coordinates and distortion rays for asymmetric distortions). An accurate and easy-to-implement distortion correction method is presented. With the proposed approach, quantitative measurement with better accuracy can be achieved, such as in Digital Image Correlation for deformation measurement when used with an optical microscope. The proposed technique is verified by both synthetic and real data experiments.
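The symmetric part of the approach, interpolating measured (radius, distortion) pairs with RBFs rather than assuming a parametric lens model, can be sketched with a Gaussian basis. The sample values and the shape parameter are assumptions for illustration:

```python
import numpy as np

# Synthetic calibration data: distortion measured at a few radii
r_obs = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
d_obs = np.array([0.0, 0.001, 0.004, 0.010, 0.020, 0.034])

eps = 2.0  # Gaussian RBF shape parameter (assumed)
def phi(r):
    return np.exp(-(eps * r) ** 2)

# Solve the interpolation system Phi w = d for the RBF weights
Phi = phi(np.abs(r_obs[:, None] - r_obs[None, :]))
w = np.linalg.solve(Phi, d_obs)

def distortion(r):
    """Interpolated symmetric distortion at arbitrary radius r."""
    return phi(np.abs(np.atleast_1d(r)[:, None] - r_obs[None, :])) @ w
```

Because the interpolant passes exactly through the calibration points, no parametric form (polynomial radial model, etc.) needs to be chosen in advance, which is the key flexibility the paper exploits.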
Petterson, S R; Stenström, T A
2015-09-01
To support the implementation of quantitative microbial risk assessment (QMRA) for managing infectious risks associated with drinking water systems, a simple modeling approach for quantifying Log10 reduction across a free chlorine disinfection contactor was developed. The study was undertaken in three stages: firstly, review of the laboratory studies published in the literature; secondly, development of a conceptual approach to apply the laboratory studies to full-scale conditions; and finally implementation of the calculations for a hypothetical case study system. The developed model explicitly accounted for variability in residence time and pathogen specific chlorine sensitivity. Survival functions were constructed for a range of pathogens relying on the upper bound of the reported data transformed to a common metric. The application of the model within a hypothetical case study demonstrated the importance of accounting for variable residence time in QMRA. While the overall Log10 reduction may appear high, small parcels of water with short residence time can compromise the overall performance of the barrier. While theoretically simple, the approach presented is of great value for undertaking an initial assessment of a full-scale disinfection contactor based on limited site-specific information.
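The central point of the abstract, that short-residence-time parcels dominate the effective performance of a chlorine contactor, follows directly from averaging survival (not log reduction) over the residence-time distribution. A sketch with Chick-Watson kinetics and assumed rate and flow parameters:

```python
import numpy as np

# Chick-Watson: per-parcel log10 reduction proportional to C*t.
# Rate constant, residual, and residence-time distribution are assumed.
rng = np.random.default_rng(1)
k = 1.5   # L/(mg*min), assumed pathogen-specific chlorine sensitivity
C = 1.0   # mg/L free chlorine residual
t = rng.gamma(shape=4.0, scale=2.5, size=100_000)  # residence times, mean 10 min

lr_parcel = k * C * t                    # log10 reduction in each parcel
survival = 10.0 ** (-lr_parcel)
lr_overall = -np.log10(survival.mean())  # effective barrier performance
lr_at_mean_t = k * C * t.mean()          # naive plug-flow estimate
```

By Jensen's inequality `lr_overall` is always below `lr_at_mean_t`: the few parcels that short-circuit the contactor carry most of the surviving organisms, which is why the authors stress variable residence time in QMRA.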
Measurements of PANs during the New England Air Quality Study 2002
NASA Astrophysics Data System (ADS)
Roberts, J. M.; Marchewka, M.; Bertman, S. B.; Sommariva, R.; Warneke, C.; de Gouw, J.; Kuster, W.; Goldan, P.; Williams, E.; Lerner, B. M.; Murphy, P.; Fehsenfeld, F. C.
2007-10-01
Measurements of peroxycarboxylic nitric anhydrides (PANs) were made during the New England Air Quality Study 2002 cruise of the NOAA RV Ronald H Brown. The four compounds observed, PAN, peroxypropionic nitric anhydride (PPN), peroxymethacrylic nitric anhydride (MPAN), and peroxyisobutyric nitric anhydride (PiBN), were compared with results from other continental and Gulf of Maine sites. Systematic changes in the PPN/PAN ratio, due to differential thermal decomposition rates, were related quantitatively to air mass aging. At least one early-morning period was observed during which O3 appeared to have been lost, probably through NO3 and N2O5 chemistry. The highest O3 episode was observed in the combined plume of isoprene sources and of anthropogenic volatile organic compound (VOC) and NOx sources from the greater Boston area. A simple linear combination model showed that the organic precursors leading to elevated O3 came roughly half from the biogenic and half from the anthropogenic VOC regimes. An explicit chemical box model confirmed that the chemistry in the Boston plume is well represented by the simple linear combination model. This degree of biogenic hydrocarbon involvement in the production of photochemical ozone has significant implications for air quality control strategies in this region.
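A "simple linear combination model" of the kind described can be sketched as a least-squares apportionment of observed O3 enhancement between a biogenic and an anthropogenic precursor tracer. All numbers below are synthetic placeholders, not cruise data:

```python
import numpy as np

# Synthetic normalized tracers (e.g. isoprene-derived vs aromatic-derived)
rng = np.random.default_rng(2)
n = 50
biogenic = rng.uniform(0, 1, n)
anthro = rng.uniform(0, 1, n)
o3 = 20 * biogenic + 22 * anthro + rng.normal(0, 2, n)  # synthetic O3 enhancement

# Fit O3 = a*biogenic + b*anthro by least squares
A = np.column_stack([biogenic, anthro])
coef, *_ = np.linalg.lstsq(A, o3, rcond=None)

# Share of the modeled O3 attributed to biogenic precursors
frac_biogenic = coef[0] / coef.sum()
```

With the synthetic coefficients chosen nearly equal, the fit attributes roughly half of the enhancement to each regime, mirroring the qualitative conclusion in the abstract.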
Architectural-level power estimation and experimentation
NASA Astrophysics Data System (ADS)
Ye, Wu
With the emergence of a plethora of embedded and portable applications and ever-increasing integration levels, power dissipation of integrated circuits has moved to the forefront as a design constraint. Recent years have also seen a significant trend towards designs starting at the architectural (or RT) level. These trends demand accurate yet fast RT-level power estimation methodologies and tools. This thesis addresses issues and experiments associated with architectural-level power estimation. An execution-driven, cycle-accurate RT-level power simulator, SimplePower, was developed using transition-sensitive energy models. It is based on the architecture of a five-stage pipelined RISC datapath in both 0.35 μm and 0.8 μm technology and can execute the integer subset of the SimpleScalar instruction set. SimplePower measures the energy consumed in the datapath, memory, and on-chip buses. During the development of SimplePower, a partitioning power modeling technique was proposed to model the energy consumed in complex functional units. The accuracy of this technique was validated against HSPICE simulation results for a register file and a shifter. A novel, selectively gated pipeline register optimization technique was proposed to reduce datapath energy consumption. It uses the decoded control signals to selectively gate the data fields of the pipeline registers. Simulation results show that this technique can reduce datapath energy consumption by 18-36% for a set of benchmarks. A low-level back-end compiler optimization, register relabeling, was applied to reduce on-chip instruction cache data bus switching activity. Its impact was evaluated with SimplePower. Results show that it can reduce the energy consumed in the instruction data buses by 3.55-16.90%. A quantitative evaluation was conducted of the impact of six state-of-the-art high-level compilation techniques on both datapath and memory energy consumption.
The experimental results provide a valuable insight for designers to develop future power-aware compilation frameworks for embedded systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saripalli, Prasad; Brown, Christopher F.; Lindberg, Michael J.
We report on a new Cellular Absorptive Tracers (CATs) method for simple, non-destructive characterization of bacterial mass in flow systems. Results show that absorption of a CAT molecule into the cellular mass results in its retardation during flow, which provides a good, quantitative measure of biomass quantity and distribution. No such methods are currently available for a quantitative characterization of cell mass.
Dynamic self-assembly of charged colloidal strings and walls in simple fluid flows.
Abe, Yu; Zhang, Bo; Gordillo, Leonardo; Karim, Alireza Mohammad; Francis, Lorraine F; Cheng, Xiang
2017-02-22
Colloidal particles can self-assemble into various ordered structures in fluid flows that have potential applications in biomedicine, materials synthesis and encryption. These dynamic processes are also of fundamental interest for probing the general principles of self-assembly under non-equilibrium conditions. Here, we report a simple microfluidic experiment, where charged colloidal particles self-assemble into flow-aligned 1D strings with regular particle spacing near a solid boundary. Using high-speed confocal microscopy, we systematically investigate the influence of flow rates, electrostatics and particle polydispersity on the observed string structures. By studying the detailed dynamics of stable flow-driven particle pairs, we quantitatively characterize interparticle interactions. Based on the results, we construct a simple model that explains the intriguing non-equilibrium self-assembly process. Our study shows that the colloidal strings arise from a delicate balance between attractive hydrodynamic coupling and repulsive electrostatic interaction between particles. Finally, we demonstrate that, with the assistance of transverse electric fields, a similar mechanism also leads to the formation of 2D colloidal walls.
Chen, Zhijian; Craiu, Radu V; Bull, Shelley B
2014-11-01
In focused studies designed to follow up associations detected in a genome-wide association study (GWAS), investigators can proceed to fine-map a genomic region by targeted sequencing or dense genotyping of all variants in the region, aiming to identify a functional sequence variant. For the analysis of a quantitative trait, we consider a Bayesian approach to fine-mapping study design that incorporates stratification according to a promising GWAS tag SNP in the same region. Improved cost-efficiency can be achieved when the fine-mapping phase incorporates a two-stage design, with identification of a smaller set of more promising variants in a subsample taken in stage 1, followed by their evaluation in an independent stage 2 subsample. To avoid the potential negative impact of genetic model misspecification on inference we incorporate genetic model selection based on posterior probabilities for each competing model. Our simulation study shows that, compared to simple random sampling that ignores genetic information from GWAS, tag-SNP-based stratified sample allocation methods reduce the number of variants continuing to stage 2 and are more likely to promote the functional sequence variant into confirmation studies. © 2014 WILEY PERIODICALS, INC.
The Role of Wakes in Modelling Tidal Current Turbines
NASA Astrophysics Data System (ADS)
Conley, Daniel; Roc, Thomas; Greaves, Deborah
2010-05-01
The eventual proper development of arrays of Tidal Current Turbines (TCT) will require a balance which maximizes power extraction while minimizing environmental impacts. Idealized analytical analogues and simple 2-D models are useful tools for investigating questions of a general nature but do not represent a practical tool for application to realistic cases. Some form of 3-D numerical simulation will be required for such applications, and the current project is designed to develop a numerical decision-making tool for use in planning large-scale TCT projects. The project is predicated on the use of an existing regional ocean modelling framework (the Regional Ocean Modelling System - ROMS), which is modified to enable the user to account for the effects of TCTs. In such a framework, where mixing processes are highly parametrized, the fidelity of the quantitative results depends critically on the parameter values utilized. In light of the early stage of TCT development and the lack of field-scale measurements, the calibration of such a model is problematic. In the absence of explicit calibration data sets, the device wake structure has been identified as an efficient feature for model calibration. This presentation discusses efforts to design an appropriate calibration scheme focused on wake decay; the motivation for this approach, the techniques applied, validation results from simple test cases, and limitations will be presented.
Kellie, John F; Higgs, Richard E; Ryder, John W; Major, Anthony; Beach, Thomas G; Adler, Charles H; Merchant, Kalpana; Knierman, Michael D
2014-07-23
A robust top-down proteomics method is presented for profiling alpha-synuclein species from autopsied human frontal cortex brain tissue from Parkinson's cases and controls. The method was used to test the hypothesis that pathology-associated brain tissue will have a different profile of post-translationally modified alpha-synuclein than the control samples. Validation of the sample processing steps, mass spectrometry-based measurements, and data processing steps was performed. The intact protein quantitation method features extraction and integration of m/z data from each charge state of a detected alpha-synuclein species and fitting of the data to a simple linear model which accounts for concentration and charge state variability. The quantitation method was validated with serial dilutions of intact protein standards. Using the method on the human brain samples, several previously unreported modifications of alpha-synuclein were identified. Low levels of phosphorylated alpha-synuclein were detected in brain tissue fractions enriched for Lewy body pathology and were marginally significant between PD cases and controls (p = 0.03).
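The dilution-series validation described in this abstract can be sketched numerically. In this illustration (all numbers are invented for the sketch, not taken from the paper), per-charge-state integrated intensities from a serial dilution are summed and fit with a one-parameter linear model, total = a * concentration:

```python
# Hypothetical illustration (all numbers are invented, not the paper's):
# per-charge-state integrated intensities from a serial dilution are summed,
# then fit with a one-parameter linear model, total = a * concentration.
conc = [1.0, 0.5, 0.25, 0.125]            # known relative dilutions
intensities = [                           # rows: dilutions; cols: charge states
    [200.0, 510.0, 295.0],
    [101.0, 252.0, 151.0],
    [49.0, 128.0, 74.0],
    [25.5, 63.0, 37.5],
]
totals = [sum(row) for row in intensities]

# least-squares slope through the origin: a = sum(x*y) / sum(x*x)
a = sum(x * y for x, y in zip(conc, totals)) / sum(x * x for x in conc)

# coefficient of determination as a linearity check
mean_t = sum(totals) / len(totals)
ss_res = sum((y - a * x) ** 2 for x, y in zip(conc, totals))
ss_tot = sum((y - mean_t) ** 2 for y in totals)
r2 = 1 - ss_res / ss_tot
```

A high r² across the dilution series is what justifies reading concentration off the fitted line for unknown samples.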
The Vermicelli Handling Test: A Simple Quantitative Measure of Dexterous Forepaw Function in Rats
Allred, Rachel P.; Adkins, DeAnna L.; Woodlee, Martin T.; Husbands, Lincoln C.; Maldonado, Mónica A.; Kane, Jacqueline R.; Schallert, Timothy; Jones, Theresa A.
2008-01-01
Loss of function in the hands occurs with many brain disorders, but there are few measures of skillful forepaw use in rats available to model these impairments that are both sensitive and simple to administer. Whishaw and Coles (1996) previously described the dexterous manner in which rats manipulate food items with their paws, including thin pieces of pasta. We set out to develop a measure of this food handling behavior that would be quantitative, easy to administer, sensitive to the effects of damage to sensory and motor systems of the CNS and useful for identifying the side of lateralized impairments. When rats handle 7 cm lengths of vermicelli, they manipulate the pasta by repeatedly adjusting the forepaw hold on the pasta piece. As operationally defined, these adjustments can be easily identified and counted by an experimenter without specialized equipment. After unilateral sensorimotor cortex (SMC) lesions, transient middle cerebral artery occlusion (MCAO) and striatal dopamine depleting (6-hydroxydopamine, 6-OHDA) lesions in adult rats, there were enduring reductions in adjustments made with the contralateral forepaw. Additional pasta handling characteristics distinguished between the lesion types. MCAO and 6-OHDA lesions increased the frequency of several identified atypical handling patterns. Severe dopamine depletion increased eating time and adjustments made with the ipsilateral forepaw. However, contralateral forepaw adjustment number most sensitively detected enduring impairments across lesion types. Because of its ease of administration and sensitivity to lateralized impairments in skilled forepaw use, this measure may be useful in rat models of upper extremity impairment. PMID:18325597
Asay window: A new spall diagnostic
NASA Astrophysics Data System (ADS)
McCluskey, Craig W.; Wilke, Mark D.; Anderson, William W.; Byers, Mark E.; Holtkamp, David B.; Rigg, Paulo A.; Furnish, Michael D.; Romero, Vincent T.
2006-11-01
By changing from the metallic foil of the Asay foil diagnostic, which can detect ejecta from a shocked surface, to a lithium fluoride (LiF) or polymethyl methacrylate (PMMA) window, it is possible to detect multiple spall layers and interlayer rubble. Past experiments to demonstrate this diagnostic have used high explosives (HEs) to shock metals to produce multiple spall layers. Because the exact characteristics of HE-induced spall layers cannot be predetermined, two issues exist in the quantitative interpretation of the data. First, to what level of fidelity is the Asay window method capable of providing quantitative information about spall layers, possibly separated by rubble, and second, contingent on the first, can an analytic technique be developed to convert the data to a meaningful description of spall from a given experiment? In this article, we address the first issue. A layered projectile fired from a gas gun was used to test the new diagnostic's accuracy and repeatability. We impacted a LiF or PMMA window viewed by a velocity interferometer system for any reflector (VISAR) probe with a projectile consisting of four thin stainless steel disks spaced 200 μm apart and separated by either vacuum or polyethylene. The window/surface interface velocity measured with the VISAR probe was compared with calculations. The good agreement observed between the adjusted calculation and the measured data indicates that, in principle and given enough prior information, it is possible to use Asay window data to model a density distribution from spalled material with simple hydrodynamic models and only simple adjustments to nominal predictions.
NASA Astrophysics Data System (ADS)
Li, Hong-Yi; Sivapalan, Murugesu; Tian, Fuqiang; Harman, Ciaran
2014-12-01
Inspired by the Dunne diagram, the climatic and landscape controls on the partitioning of annual runoff into its various components (Hortonian and Dunne overland flow and subsurface stormflow) are assessed quantitatively, from a purely theoretical perspective. A simple distributed hydrologic model has been built that is sufficient to simulate the effects of different combinations of climate, soil, and topography on the runoff generation processes. The model is driven by a sequence of simple hypothetical precipitation events, for a large combination of climate and landscape properties, and hydrologic responses at the catchment scale are obtained through aggregation of grid-scale responses. It is found, first, that the water balance responses, including relative contributions of different runoff generation mechanisms, can be related to a small set of dimensionless similarity parameters. These capture the competition between the wetting, drying, storage, and drainage functions underlying the catchment responses, and in this way provide a quantitative approximation of the conceptual Dunne diagram. Second, only a subset of all hypothetical catchment/climate combinations is found to be "behavioral," in terms of falling sufficiently close to the Budyko curve, describing mean annual runoff as a function of climate aridity. Furthermore, these behavioral combinations are mostly consistent with the qualitative picture presented in the Dunne diagram, indicating clearly the commonality between the Budyko curve and the Dunne diagram. These analyses also suggest clear interrelationships amongst the "behavioral" climate, soil, and topography parameter combinations, implying these catchment properties may be constrained to be codependent in order to satisfy the Budyko curve.
An Inexpensive and Simple Method to Demonstrate Soil Water and Nutrient Flow
ERIC Educational Resources Information Center
Nichols, K. A.; Samson-Liebig, S.
2011-01-01
Soil quality, soil health, and soil sustainability are concepts that are being widely used but are difficult to define and illustrate, especially to a non-technical audience. The objectives of this manuscript were to develop simple and inexpensive methodologies to both qualitatively and quantitatively estimate water infiltration rates (IR),…
Rotational Stability--An Amusing Physical Paradox
ERIC Educational Resources Information Center
Sendra, Carlos M.; Picca, Fabricio Della; Gil, Salvador
2007-01-01
Here we present a simple and amusing device that demonstrates some surprising results of the dynamics of the rotation of a symmetrical rigid body. This system allows for a qualitative demonstration or a quantitative study of the rotation stability of a symmetric top. A simple and inexpensive technique is proposed to carry out quantitative…
Quantitation & Case-Study-Driven Inquiry to Enhance Yeast Fermentation Studies
ERIC Educational Resources Information Center
Grammer, Robert T.
2012-01-01
We propose a procedure for the assay of fermentation in yeast in microcentrifuge tubes that is simple and rapid, permitting assay replicates, descriptive statistics, and the preparation of line graphs that indicate reproducibility. Using regression and simple derivatives to determine initial velocities, we suggest methods to compare the effects of…
Genetic models of homosexuality: generating testable predictions
Gavrilets, Sergey; Rice, William R
2006-01-01
Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism. PMID:17015344
An engineering approach to modelling, decision support and control for sustainable systems.
Day, W; Audsley, E; Frost, A R
2008-02-12
Engineering research and development contributes to the advance of sustainable agriculture both through innovative methods to manage and control processes, and through quantitative understanding of the operation of practical agricultural systems using decision models. This paper describes how an engineering approach, drawing on mathematical models of systems and processes, contributes new methods that support decision making at all levels from strategy and planning to tactics and real-time control. The ability to describe the system or process by a simple and robust mathematical model is critical, and the outputs range from guidance to policy makers on strategic decisions relating to land use, through intelligent decision support to farmers and on to real-time engineering control of specific processes. Precision in decision making leads to decreased use of inputs, less environmental emissions and enhanced profitability, all essential to sustainable systems.
González-Ramírez, Laura R.; Ahmed, Omar J.; Cash, Sydney S.; Wayne, C. Eugene; Kramer, Mark A.
2015-01-01
Epilepsy—the condition of recurrent, unprovoked seizures—manifests in brain voltage activity with characteristic spatiotemporal patterns. These patterns include stereotyped semi-rhythmic activity produced by aggregate neuronal populations, and organized spatiotemporal phenomena, including waves. To assess these spatiotemporal patterns, we develop a mathematical model consistent with the observed neuronal population activity and determine analytically the parameter configurations that support traveling wave solutions. We then utilize high-density local field potential data recorded in vivo from human cortex preceding seizure termination from three patients to constrain the model parameters, and propose basic mechanisms that contribute to the observed traveling waves. We conclude that a relatively simple and abstract mathematical model consisting of localized interactions between excitatory cells with slow adaptation captures the quantitative features of wave propagation observed in the human local field potential preceding seizure termination. PMID:25689136
Liu, De Li; An, Min; Johnson, I.R.; Lovett, J.V.
2005-01-01
One of the main challenges in allelopathy research is the technical separation of allelopathic effects from competition and the quantitative assessment of each component's contribution to overall interference. A simple mathematical model is proposed to calculate the contribution of allelopathy and competition to interference. As an example of applying the quantitative model to interference by barley (Hordeum vulgare cv. Triumph), the approach used was an addition of allelopathic effect, by an equivalent amount, to the environment of the test plant (white mustard, Sinapis alba), rather than elimination of competition. Experiments were conducted in a glasshouse to determine the magnitude of the contributions of allelopathy and competition to interference by barley. The leachates of living barley roots significantly reduced the total dry weight of white mustard. The model involved the calculation of adjusted densities to an equivalent basis for modelling the contribution of allelopathy and competition to total interference. The results showed that allelopathy contributed 40%, 37% and 43% to interference by barley at densities of 6, 12 and 18 white mustard plants pot−1. The consistency in magnitude of the calculated contribution of allelopathic effect by barley across various densities of the receiver plant suggests that the adjusted equivalent density is effective and that the model is able to assess the contribution of each component of interference regardless of the density of the receiver plant. PMID:19330162
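The partitioning bookkeeping behind contribution percentages like these can be sketched in a few lines. The dry-weight numbers below are hypothetical, chosen only so that allelopathy accounts for 40% of total interference, as in one of the reported treatments:

```python
# Hypothetical numbers illustrating the bookkeeping described in the abstract:
# total interference = allelopathic reduction + competitive reduction, and each
# component's contribution is its share of the total dry-weight reduction.
control_dw = 10.0        # test-plant dry weight with no barley (g per pot)
with_barley_dw = 6.0     # dry weight under full interference
leachate_dw = 8.4        # dry weight with root leachate only (allelopathy)

total_reduction = control_dw - with_barley_dw                     # 4.0
allelopathic_reduction = control_dw - leachate_dw                 # 1.6
competitive_reduction = total_reduction - allelopathic_reduction  # 2.4

allelopathy_share = allelopathic_reduction / total_reduction      # 0.40
print(f"allelopathy: {allelopathy_share:.0%}")  # → allelopathy: 40%
```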
Solares, Santiago D.
2015-11-26
This study introduces a quasi-3-dimensional (Q3D) viscoelastic model and software tool for use in atomic force microscopy (AFM) simulations. The model is based on a 2-dimensional array of standard linear solid (SLS) model elements. The well-known 1-dimensional SLS model is a textbook example in viscoelastic theory but is relatively new in AFM simulation. It is the simplest model that offers a qualitatively correct description of the most fundamental viscoelastic behaviors, namely stress relaxation and creep. However, this simple model does not reflect the correct curvature in the repulsive portion of the force curve, so its application in the quantitative interpretation of AFM experiments is relatively limited. In the proposed Q3D model the use of an array of SLS elements leads to force curves that have the typical upward curvature in the repulsive region, while still offering a very low computational cost. Furthermore, the use of a multidimensional model allows for the study of AFM tips having non-ideal geometries, which can be extremely useful in practice. Examples of typical force curves are provided for single- and multifrequency tapping-mode imaging, for both of which the force curves exhibit the expected features. Lastly, a software tool to simulate amplitude and phase spectroscopy curves is provided, which can be easily modified to implement other control schemes in order to aid in the interpretation of AFM experiments.
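For readers unfamiliar with the standard linear solid mentioned here, a minimal 1-D sketch of its stress-relaxation response (our own illustration with made-up moduli, not the paper's Q3D code) is:

```python
import math

# Minimal 1-D standard-linear-solid (Zener) sketch, not the paper's Q3D code:
# relaxation modulus E(t) = E_inf + (E0 - E_inf) * exp(-t / tau), which
# captures the stress-relaxation behavior the abstract refers to.
def sls_stress(t, strain=0.01, e0=3.0e9, e_inf=1.0e9, tau=0.5):
    """Stress (Pa) in response to a step strain applied at t = 0."""
    return strain * (e_inf + (e0 - e_inf) * math.exp(-t / tau))

sigma_0 = sls_stress(0.0)      # instantaneous (glassy) response: strain * e0
sigma_late = sls_stress(50.0)  # long-time (rubbery) plateau: strain * e_inf
```

The Q3D model of the paper tiles many such elements into a 2-D array so that the summed element forces reproduce the upward curvature a single element lacks.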
NASA Astrophysics Data System (ADS)
Slezak, Thomas Joseph; Radebaugh, Jani; Christiansen, Eric
2017-10-01
The shapes of craterforms on planetary surfaces provide rich information about their origins and evolution. While morphology offers rich visual clues to geologic processes and properties, this information is less easily communicated quantitatively. This study examines the morphology of craterforms using the quantitative outline-based shape methods of geometric morphometrics, commonly used in biology and paleontology. We examine and compare landforms on planetary surfaces using shape, a property of morphology that is invariant to translation, rotation, and size. We quantify the shapes of paterae on Io, martian calderas, terrestrial basaltic shield calderas, terrestrial ash-flow calderas, and lunar impact craters using elliptic Fourier analysis (EFA) and the Zahn and Roskies (Z-R) shape function, or tangent angle approach, to produce multivariate shape descriptors. These shape descriptors are subjected to multivariate statistical analysis including canonical variate analysis (CVA), a multiple-comparison variant of discriminant analysis, to investigate the link between craterform shape and classification. Paterae on Io are most similar in shape to terrestrial ash-flow calderas, and the shapes of terrestrial basaltic shield volcanoes are most similar to martian calderas. The shapes of lunar impact craters, including simple, transitional, and complex morphologies, are classified with a 100% rate of success in all models. Multiple CVA models effectively predict and classify different craterforms using shape-based identification and demonstrate significant potential for use in the analysis of planetary surfaces.
Assis, Sofia; Costa, Pedro; Rosas, Maria Jose; Vaz, Rui; Silva Cunha, Joao Paulo
2016-08-01
Intraoperative evaluation of the efficacy of Deep Brain Stimulation (DBS) includes evaluation of the effect on rigidity. A subjective semi-quantitative scale is used, dependent on the examiner's perception and experience. A system was proposed previously, aiming to tackle this subjectivity, using quantitative data and providing real-time feedback of the computed rigidity reduction, hence supporting the physician's decision. This system comprised a gyroscope-based motion sensor in a textile band, placed on the patient's hand, which communicated its measurements to a laptop. The latter computed a signal descriptor from the angular velocity of the hand during wrist flexion in DBS surgery. The first approach relied on using a general rigidity-reduction model, regardless of the initial severity of the symptom. Thus, to enhance the performance of the previously presented system, we aimed to develop models for high and low baseline rigidity, according to the examiner's assessment before any stimulation. This would allow a more patient-oriented approach. Additionally, usability was improved by having in situ processing in a smartphone, instead of a computer. Such a system has been shown to be reliable, presenting an accuracy of 82.0% and a mean error of 3.4%. Relative to previous results, the performance was similar, further supporting the importance of considering the cogwheel rigidity to better infer the reduction in rigidity. Overall, we present a simple, wearable, mobile system, suitable for intraoperative conditions during DBS, supporting the physician in decision-making when setting stimulation parameters.
Silva, Fabyano Fonseca; Tunin, Karen P.; Rosa, Guilherme J.M.; da Silva, Marcos V.B.; Azevedo, Ana Luisa Souza; da Silva Verneque, Rui; Machado, Marco Antonio; Packer, Irineu Umberto
2011-01-01
Nowadays, an important and interesting alternative in the control of tick infestation in cattle is to select resistant animals and identify the respective quantitative trait loci (QTLs) and DNA markers for posterior use in breeding programs. The number of ticks per animal is characterized as a discrete counting trait, which could potentially follow a Poisson distribution. However, in the case of an excess of zeros, due to the occurrence of several noninfected animals, the zero-inflated Poisson (ZIP) and generalized zero-inflated Poisson (GZIP) distributions may provide a better description of the data. Thus, the objective here was to compare, through simulation, Poisson and ZIP models (simple and generalized) with classical approaches for QTL mapping with counting phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr × Holstein) population. It was concluded that, when working with zero-inflated data, it is advisable to use the generalized and simple ZIP models for analysis. On the other hand, when working with data with zeros, but not zero-inflated, the Poisson model or a data transformation approach, such as square-root or Box-Cox transformation, is applicable. PMID:22215960
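The zero-inflation idea the abstract relies on is compact enough to state directly. A minimal sketch of the simple ZIP probability mass function (our illustration, with arbitrary parameter values): with probability pi an animal is a "structural zero" (e.g. never exposed), otherwise counts follow Poisson(lam), so the zero class is inflated relative to a plain Poisson.

```python
import math

# Zero-inflated Poisson pmf: a sketch of the simple ZIP model the abstract
# compares against the plain Poisson (pi = 0 recovers plain Poisson).
def zip_pmf(k, pi, lam):
    pois = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi * (k == 0) + (1 - pi) * pois

p0_zip = zip_pmf(0, pi=0.3, lam=2.0)   # inflated zero probability
p0_pois = zip_pmf(0, pi=0.0, lam=2.0)  # plain Poisson zero probability
```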
NASA Technical Reports Server (NTRS)
Cain, Bruce L.
1990-01-01
The problems of weld quality control and weld process dependability continue to be relevant issues in modern metal welding technology. These become especially important for NASA missions which may require the assembly or repair of larger orbiting platforms using automatic welding techniques. To extend present welding technologies for such applications, NASA/MSFC's Materials and Processes Lab is developing physical models of the arc welding process with the goal of providing both a basis for improved design of weld control systems, and a better understanding of how arc welding variables influence final weld properties. The physics of the plasma arc discharge is reasonably well established in terms of transport processes occurring in the arc column itself, although recourse to sophisticated numerical treatments is normally required to obtain quantitative results. Unfortunately the rigor of these numerical computations often obscures the physics of the underlying model due to its inherent complexity. In contrast, this work has focused on a relatively simple physical model of the arc discharge to describe the gross features observed in welding arcs. Emphasis was placed on deriving analytic expressions for the voltage along the arc axis as a function of known or measurable arc parameters. The model retains the essential physics for a straight polarity, diffusion dominated free burning arc in argon, with major simplifications of collisionless sheaths and simple energy balances at the electrodes.
The Effect of Pickling on Blue Borscht Gelatin and Other Interesting Diffusive Phenomena.
ERIC Educational Resources Information Center
Davis, Lawrence C.; Chou, Nancy C.
1998-01-01
Presents some simple demonstrations that students can construct for themselves in class to learn the difference between diffusion and convection rates. Uses cabbage leaves and gelatin and focuses on diffusion in ungelified media, a quantitative diffusion estimate with hydroxyl ions, and a quantitative diffusion estimate with photons. (DDR)
A Quantitative Analysis of Cognitive Strategy Usage in the Marking of Two GCSE Examinations
ERIC Educational Resources Information Center
Suto, W. M. Irenka; Greatorex, Jackie
2008-01-01
Diverse strategies for marking GCSE examinations have been identified, ranging from simple automatic judgements to complex cognitive operations requiring considerable expertise. However, little is known about patterns of strategy usage or how such information could be utilised by examiners. We conducted a quantitative analysis of previous verbal…
A rapid colorimetric assay for mold spore germination using XTT tetrazolium salt
Carol A. Clausen; Vina W. Yang
2011-01-01
Current laboratory test methods to measure efficacy of new mold inhibitors are time-consuming; some require specialized test equipment, and ratings are subjective. Rapid, simple quantitative assays to measure the efficacy of mold inhibitors are needed. A quantitative, colorimetric microassay was developed using XTT tetrazolium salt to metabolically assess mold spore...
Modern Projection of the Old Electroscope for Nuclear Radiation Quantitative Work and Demonstrations
ERIC Educational Resources Information Center
Bastos, Rodrigo Oliveira; Boch, Layara Baltokoski
2017-01-01
Although quantitative measurements in radioactivity teaching and research are only believed to be possible with high technology, early work in this area was fully accomplished with very simple apparatus such as zinc sulphide screens and electroscopes. This article presents an experimental practice using the electroscope, which is a very simple…
ERIC Educational Resources Information Center
Sigmann, Samuella B.; Wheeler, Dale E.
2004-01-01
These investigations focus on the development of a simple spectrophotometric method to quantitatively determine the amount of FD&C color additives present in powdered drink mixes. Samples containing single dyes or binary mixtures of dyes can be analyzed using this method.
An odor flux model for cattle feedlots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ormerod, R.J.
1994-12-31
Odor nuisance associated with cattle feedlots has been an issue of major interest and concern to regulators, rural communities and the beef industry in Australia over the past decade. Methods of assessing the likely impacts of new feedlots on community odor exposure are still being developed, but in the past few years much has been learnt about the processes of odor generation, flux and dispersion as well as the acceptability of feedlot odor to exposed communities. This paper outlines a model which simulates the complex physical and chemical processes leading to odor emissions in a simple and practical framework. The model, named BULSMEL, has been developed as a response to regulatory requirements for quantitative assessments of odor impact. It will continue to be refined as more data are gathered.
First-passage time of Brownian motion with dry friction.
Chen, Yaming; Just, Wolfram
2014-02-01
We provide an analytic solution to the first-passage time (FPT) problem of a piecewise-smooth stochastic model, namely Brownian motion with dry friction, using two different but closely related approaches, based on eigenfunction decompositions on the one hand and on the backward Kolmogorov equation on the other. For the simple case containing only dry friction, a phase-transition phenomenon in the spectrum is found which relates to the position of the exit point, and which affects the tail of the FPT distribution. For the model that also contains a driving force and viscous friction, the impact of the corresponding stick-slip transition and of the transition to ballistic exit is evaluated quantitatively. The proposed model is one of the very few cases where FPT properties are accessible by analytical means.
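Although the paper's results are analytic, the underlying model is simple to simulate. A Monte-Carlo sketch (ours, with arbitrary parameter values) of the pure dry-friction case, dx = -mu*sign(x) dt + dW, and its first-passage time to an exit point:

```python
import random

# Monte-Carlo sketch (not the paper's analytic solution): Brownian motion
# with dry friction, dx = -mu * sign(x) dt + dW, started at x = 0, and the
# mean first-passage time to the exit point x = L (Euler-Maruyama scheme).
def mean_fpt(mu=1.0, L=0.5, dt=1e-3, n_paths=100, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x, t = 0.0, 0.0
        while x < L:
            sign = (x > 0) - (x < 0)           # dry friction pulls toward 0
            x += -mu * sign * dt + rng.gauss(0.0, dt ** 0.5)
            t += dt
        total += t
    return total / n_paths

tau = mean_fpt()
```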
Sequential Inverse Problems: Bayesian Principles and the Logistic Map Example
NASA Astrophysics Data System (ADS)
Duan, Lian; Farmer, Chris L.; Moroz, Irene M.
2010-09-01
Bayesian statistics provides a general framework for solving inverse problems, but is not without interpretation and implementation problems. This paper discusses difficulties arising from the fact that forward models are always in error to some extent. Using a simple example based on the one-dimensional logistic map, we argue that, when implementation problems are minimal, the Bayesian framework is quite adequate. In this paper the Bayesian Filter is shown to be able to recover excellent state estimates in the perfect model scenario (PMS) and to distinguish the PMS from the imperfect model scenario (IMS). Through a quantitative comparison of the way in which the observations are assimilated in both the PMS and the IMS scenarios, we suggest that one can, sometimes, measure the degree of imperfection.
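A toy version of the filtering setup the abstract describes can be sketched with a bootstrap particle filter on the logistic map. Everything below (parameter values, jitter, implementation) is our illustration of the perfect model scenario, not the authors' code:

```python
import math
import random

# Toy bootstrap particle filter for the logistic map with Gaussian
# observation noise (perfect model scenario): forecast, weight, resample.
R_MAP, OBS_STD, N = 3.7, 0.05, 500

def step(x, rng):
    # forecast through the logistic map, with small jitter to avoid
    # particle degeneracy after resampling
    return min(max(R_MAP * x * (1 - x) + rng.gauss(0.0, 0.01), 0.0), 1.0)

def particle_filter(obs, seed=2):
    rng = random.Random(seed)
    particles = [rng.random() for _ in range(N)]
    estimates = []
    for y in obs:
        particles = [step(x, rng) for x in particles]                      # forecast
        weights = [math.exp(-0.5 * ((y - x) / OBS_STD) ** 2) for x in particles]
        particles = rng.choices(particles, weights=weights, k=N)           # resample
        estimates.append(sum(particles) / N)                               # analysis mean
    return estimates

# synthetic truth and noisy observations
rng = random.Random(3)
x, obs = 0.3, []
for _ in range(20):
    x = R_MAP * x * (1 - x)
    obs.append(min(max(x + rng.gauss(0.0, OBS_STD), 0.0), 1.0))
est = particle_filter(obs)
```

In the imperfect model scenario one would instead run the filter with a wrong map parameter and compare how the weights behave, which is the diagnostic the paper proposes.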
Le Châtelier reciprocal relations and the mechanical analog
NASA Astrophysics Data System (ADS)
Gilmore, Robert
1983-08-01
Le Châtelier's principle is discussed carefully in terms of two sets of simple thermodynamic examples. The principle is then formulated quantitatively for general thermodynamic systems. The formulation is in terms of a perturbation-response matrix, the Le Châtelier matrix [L]. Le Châtelier's principle is contained in the diagonal elements of this matrix, all of which exceed one. These matrix elements describe the response of a system to a perturbation of either its extensive or intensive variables. These response ratios are inverses of each other. The Le Châtelier matrix is symmetric, so that a new set of thermodynamic reciprocal relations is derived. This quantitative formulation is illustrated by a single simple example which includes the original examples and shows the reciprocities among them. The assumptions underlying this new quantitative formulation of Le Châtelier's principle are general and applicable to a wide variety of nonthermodynamic systems. Le Châtelier's principle is formulated quantitatively for mechanical systems in static equilibrium, and mechanical examples of this formulation are given.
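A concrete instance of these response-ratio statements, using a monatomic ideal gas rather than any example from the article: the heat-capacity ratio and the compressibility ratio both exceed one, and they are equal, which is exactly the kind of reciprocity the quantitative formulation asserts.

```python
# Numeric illustration (our example): for a monatomic ideal gas the ratio of
# responses with a variable free vs. clamped exceeds one, and two such ratios
# coincide, C_p / C_v = kappa_T / kappa_S = gamma (a reciprocal relation).
R = 8.314                  # gas constant, J/(mol K)
cv = 1.5 * R               # heat capacity at constant volume
cp = cv + R                # at constant pressure (larger: the gas may expand)

p = 1.0e5                  # pressure, Pa
gamma = cp / cv
kappa_T = 1.0 / p          # isothermal compressibility of an ideal gas
kappa_S = 1.0 / (gamma * p)  # adiabatic compressibility

ratio_heat = cp / cv           # exceeds 1 ...
ratio_compress = kappa_T / kappa_S  # ... and equals ratio_heat (reciprocity)
```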
A simulation assessment of the thermodynamics of dense ion-dipole mixtures with polarization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bastea, Sorin, E-mail: sbastea@llnl.gov
Molecular dynamics (MD) simulations are employed to ascertain the relative importance of various electrostatic interaction contributions, including induction interactions, to the thermodynamics of dense, hot ion-dipole mixtures. In the absence of polarization, we find that an MD-constrained free energy term accounting for the ion-dipole interactions, combined with well tested ionic and dipolar contributions, yields a simple, fairly accurate free energy form that may be a better option for describing the thermodynamics of such mixtures than the mean spherical approximation (MSA). Polarization contributions induced by the presence of permanent dipoles and ions are found to be additive to a good approximation,more » simplifying the thermodynamic modeling. We suggest simple free energy corrections that account for these two effects, based in part on standard perturbative treatments and partly on comparisons with MD simulation. Even though the proposed approximations likely need further study, they provide a first quantitative assessment of polarization contributions at high densities and temperatures and may serve as a guide for future modeling efforts.« less
Computer modeling of pulsed CO2 lasers for lidar applications
NASA Technical Reports Server (NTRS)
Spiers, Gary D.; Smithers, Martin E.; Murty, Rom
1991-01-01
The experimental results will enable comparison of the numerical code output with experimental data, ensuring verification of the validity of the code. The measurements were made on a modified commercial CO2 laser. The results are as follows. (1) The dependence of pulse shape and energy on gas pressure was measured. (2) The intrapulse frequency chirp due to plasma and laser-induced medium perturbation effects was determined; a simple numerical model showed quantitative agreement with these measurements. The pulse-to-pulse frequency stability was also determined. (3) The dependence of the laser transverse mode stability on cavity length was measured, and a simple analysis of this dependence in terms of changes to the equivalent Fresnel number and the cavity magnification was performed. (4) An analysis was made of the discharge pulse shape, which enabled the low efficiency of the laser to be explained in terms of poor coupling of the electrical energy into the vibrational levels. (5) The existing laser resonator code was modified to run on the Cray XMP under the new operating system.
Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L
2018-02-01
A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), with R² as the primary metric of assay agreement. However, R² alone does not adequately quantify the constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of the performance characteristics of quantitative molecular assays prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
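The two complementary analyses can be sketched in a few lines of NumPy. The paired values below are hypothetical variant allele fractions constructed with a deliberate constant error, and Deming regression (used in the paper for assay comparisons) is replaced by ordinary least squares for brevity:

```python
import numpy as np

def bland_altman(x, y):
    """Mean bias and 95% limits of agreement between paired results
    of a reference assay (x) and the assay under validation (y)."""
    diffs = np.asarray(y, float) - np.asarray(x, float)
    bias = diffs.mean()                      # constant error estimate
    sd = diffs.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def ols_comparison(x, y):
    """Ordinary least-squares fit y = slope*x + intercept; a slope
    away from 1 suggests proportional error, an intercept away from
    0 suggests constant error."""
    slope, intercept = np.polyfit(np.asarray(x, float), np.asarray(y, float), 1)
    return slope, intercept

# Hypothetical paired variant allele fractions from two NGS assays,
# constructed with a constant +0.02 shift and no proportional error.
ref = [0.10, 0.20, 0.30, 0.40, 0.50]
new = [0.12, 0.22, 0.32, 0.42, 0.52]
bias, (lo, hi) = bland_altman(ref, new)
slope, intercept = ols_comparison(ref, new)
```

On this constructed data the slope stays at 1 while the bias and intercept expose the constant error, which R² alone would miss entirely (the correlation is perfect).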
NASA Astrophysics Data System (ADS)
Kowligy, Abijith S.; Lind, Alex; Hickstein, Daniel D.; Carlson, David R.; Timmers, Henry; Nader, Nima; Cruz, Flavio C.; Ycas, Gabriel; Papp, Scott B.; Diddams, Scott A.
2018-04-01
We experimentally demonstrate a simple configuration for mid-infrared (MIR) frequency comb generation in quasi-phase-matched lithium niobate waveguides using the cascaded-χ(2) nonlinearity. With nanojoule-scale pulses from an Er:fiber laser, we observe octave-spanning supercontinuum in the near-infrared with dispersive-wave generation in the 2.5-3 μm region and intra-pulse difference-frequency generation in the 4-5 μm region. By engineering the quasi-phase-matched grating profiles, both tunable narrow-band MIR and broadband MIR spectra are observed in this geometry. Finally, we perform numerical modeling using a nonlinear envelope equation, which shows good quantitative agreement with the experiment and can be used to inform waveguide designs to tailor the MIR frequency combs. Our results identify a path to a simple single-branch approach to mid-infrared frequency comb generation in a compact platform using commercial Er:fiber technology.
Critical power for self-focusing of optical beam in absorbing media
NASA Astrophysics Data System (ADS)
Qi, Pengfei; Zhang, Lin; Lin, Lie; Zhang, Nan; Wang, Yan; Liu, Weiwei
2018-04-01
Self-focusing effects are of central importance for most nonlinear optical effects. The critical power for self-focusing is commonly investigated theoretically without considering a material's absorption. Although this is practicable for various materials, investigating the critical power for self-focusing in media with non-negligible absorption is also necessary, because this is the situation usually met in practice. In this paper, simple analytical expressions describing the relationships among incident power, absorption coefficient, and focal position are derived from a simple physical model based on Fermat's principle. Expressions for the absorption-dependent critical power are also derived; these can play important roles in experimental and applied research on self-focusing-related nonlinear optical phenomena in absorbing media. Numerical results based on the nonlinear wave equation, which reproduce experimental results well, are also presented and agree quantitatively with the analytical results proposed in this paper.
Paulke, Alexander; Proschak, Ewgenij; Sommer, Kai; Achenbach, Janosch; Wunder, Cora; Toennes, Stefan W
2016-03-14
The number of new synthetic psychoactive compounds increases steadily. Among this group of psychoactive compounds, the synthetic cannabinoids (SCBs) are the most popular and serve as substitutes for herbal cannabis. More than 600 of these substances already exist. For some SCBs the in vitro cannabinoid receptor 1 (CB1) affinity is known, but for the majority it is unknown. A quantitative structure-activity relationship (QSAR) model was developed that allows determination of an SCB's affinity to CB1 (expressed as the binding constant, Ki) without reference substances. The chemically advanced template search descriptor was used for vector representation of the compound structures. The similarity between two molecules was calculated using the feature-pair distribution similarity. The Ki values were calculated using the inverse distance weighting method. The prediction model was validated using a cross-validation procedure. The predicted Ki values of some new SCBs ranged from 20 (considerably higher affinity to CB1 than THC) to 468 (considerably lower affinity to CB1 than THC). The present QSAR model can serve as a simple, fast, and cheap tool to obtain a first hint of the biological activity of new synthetic cannabinoids or of other new psychoactive compounds. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
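The prediction step can be sketched as follows, assuming only that a similarity score in [0, 1] is available for each training compound. The similarity values, Ki values, and the distance definition (1 - similarity) are illustrative stand-ins, not the paper's descriptor-based pipeline:

```python
import numpy as np

def idw_ki(similarities, train_ki, power=2.0, eps=1e-12):
    """Inverse-distance-weighted Ki estimate for a query compound,
    taking the distance to each training compound as (1 - similarity);
    near-identical compounds dominate the weighted average."""
    d = 1.0 - np.asarray(similarities, float)
    w = 1.0 / (d + eps) ** power
    return float(np.sum(w * np.asarray(train_ki, float)) / np.sum(w))

# Hypothetical similarities of a query SCB to three training SCBs
# with known CB1 binding constants (Ki).
sims = [0.9, 0.5, 0.1]
kis = [20.0, 100.0, 468.0]
ki_pred = idw_ki(sims, kis)
```

Because the weights fall off quadratically with distance, the predicted Ki here lands close to the most similar training compound rather than near the arithmetic mean.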
Estimation of hydrolysis rate constants for carbamates ...
Cheminformatics-based tools, such as the Chemical Transformation Simulator under development in EPA's Office of Research and Development, are being increasingly used to evaluate chemicals for their potential to degrade in the environment or be transformed through metabolism. Hydrolysis represents a major environmental degradation pathway; unfortunately, only a small fraction of hydrolysis rates for the about 85,000 chemicals on the Toxic Substances Control Act (TSCA) inventory are in the public domain, making it critical to develop in silico approaches to estimate hydrolysis rate constants. In this presentation, we compare three complementary approaches to estimate hydrolysis rates for carbamates, an important chemical class widely used in agriculture as pesticides, herbicides, and fungicides. Fragment-based quantitative structure-activity relationships (QSARs) using Hammett-Taft sigma constants are widely published and implemented for relatively simple functional groups such as carboxylic acid esters, phthalate esters, and organophosphate esters, and we extend these to carbamates. We also develop a pKa-based model and a quantitative structure-property relationship (QSPR) model, and evaluate them against measured rate constants using R² and root-mean-square (RMS) error. Our work shows that for our relatively small sample size of carbamates, a Hammett-Taft-based fragment model performs best, followed by the pKa and QSPR models.
Estimating directional epistasis
Le Rouzic, Arnaud
2014-01-01
Epistasis, i.e., the fact that gene effects depend on the genetic background, is a direct consequence of the complexity of genetic architectures. Despite this, most of the models used in evolutionary and quantitative genetics pay scant attention to genetic interactions. For instance, the traditional decomposition of genetic effects models epistasis as noise around the evolutionarily-relevant additive effects. Such an approach is only valid if it is assumed that there is no general pattern among interactions—a highly speculative scenario. Systematic interactions generate directional epistasis, which has major evolutionary consequences. In spite of its importance, directional epistasis is rarely measured or reported by quantitative geneticists, not only because its relevance is generally ignored, but also due to the lack of simple, operational, and accessible methods for its estimation. This paper describes conceptual and statistical tools that can be used to estimate directional epistasis from various kinds of data, including QTL mapping results, phenotype measurements in mutants, and artificial selection responses. As an illustration, I measured directional epistasis from a real-life example. I then discuss the interpretation of the estimates, showing how they can be used to draw meaningful biological inferences. PMID:25071828
Lake Number, a quantitative indicator of mixing used to estimate changes in dissolved oxygen
Robertson, Dale M.; Imberger, Jorg
1994-01-01
Lake Number, LN, values are shown to be quantitative indicators of deep mixing in lakes and reservoirs that can be used to estimate changes in deep water dissolved oxygen (DO) concentrations. LN is a dimensionless parameter defined as the ratio of the moments about the center of volume of the water body, of the stabilizing force of gravity associated with density stratification to the destabilizing forces supplied by wind, cooling, inflow, outflow, and other artificial mixing devices. To demonstrate the universality of this parameter, LN values are used to describe the extent of deep mixing and are compared with changes in DO concentrations in three reservoirs in Australia and four lakes in the U.S.A., which vary in productivity and mixing regimes. A simple model is developed which relates changes in LN values, i.e., the extent of mixing, to changes in near bottom DO concentrations. After calibrating the model for a specific system, it is possible to use real-time LN values, calculated using water temperature profiles and surface wind velocities, to estimate changes in DO concentrations (assuming unchanged trophic conditions).
Dinç, Erdal; Ozdemir, Abdil
2005-01-01
A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the technique is based on linear regression equations constructed from the relationship between concentration and peak area at a five-wavelength set. The algorithm of this calibration model, which has a simple mathematical form, is briefly described. The approach is a powerful mathematical tool for optimal chromatographic multivariate calibration and for eliminating fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration reduces multivariate linear regression functions to a univariate data set. The model was validated by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The results were compared with those obtained by a classical HPLC method, and the proposed multivariate chromatographic calibration was observed to give better results.
Kohn, K W; Ewig, R A
1979-03-28
DNA-protein crosslinks produced in mouse leukemia L1210 cells by trans-Pt(II)diamminedichloride were quantitated using the technique of DNA alkaline elution. DNA single-strand segments that were or were not linked to protein were separable into distinct components by alkaline elution after exposure of the cells to 2-15 kR of X-ray. Protein-linked DNA strands were separated on the basis of their retention on filters at pH 12, while free DNA strands of the size generated by 2-15 kR of X-ray passed rapidly through the filters. The retention of protein-linked DNA strands was attributable to adsorption of protein to the filter under the conditions of alkaline elution. The results obeyed a simple quantitative model according to which the frequency of DNA-protein crosslinks could be calculated.
Blank, Hartmut
2005-02-01
Traditionally, the causes of interference phenomena were sought in "real" or "hard" memory processes such as unlearning, response competition, or inhibition, which serve to reduce the accessibility of target items. I propose an alternative approach which does not deny the influence of such processes but highlights a second, equally important, source of interference: the conversion (Tulving, 1983) of accessible memory information into memory performance. Conversion is conceived as a problem-solving-like activity in which the rememberer tries to find solutions to a memory task. Conversion-based interference effects are traced to different conversion processes in the experimental and control conditions of interference designs. I present a simple theoretical model that quantitatively predicts the resulting amount of interference. In two paired-associate learning experiments using two different types of memory tests, these predictions were corroborated. Relations of the present approach to traditional accounts of interference phenomena and implications for eyewitness testimony are discussed.
Pflock, Tobias J; Oellerich, Silke; Southall, June; Cogdell, Richard J; Ullmann, G Matthias; Köhler, Jürgen
2011-07-21
We have employed time-resolved spectroscopy on the picosecond time scale in combination with dynamic Monte Carlo simulations to investigate the photophysical properties of light-harvesting 2 (LH2) complexes from the purple photosynthetic bacterium Rhodopseudomonas acidophila. The variations of the fluorescence transients were studied as a function of the excitation fluence, the repetition rate of the excitation and the sample preparation conditions. Here we present the results obtained on detergent solubilized LH2 complexes, i.e., avoiding intercomplex interactions, and show that a simple four-state model is sufficient to grasp the experimental observations quantitatively without the need for any free parameters. This approach allows us to obtain a quantitative measure for the singlet-triplet annihilation rate in isolated, noninteracting LH2 complexes.
Aging in complex interdependency networks.
Vural, Dervis C; Morrison, Greg; Mahadevan, L
2014-02-01
Although species longevity is subject to a diverse range of evolutionary forces, the mortality curves of a wide variety of organisms are rather similar. Here we argue that qualitative and quantitative features of aging can be reproduced by a simple model based on the interdependence of fault-prone agents on one another. In addition to fitting our theory to the empiric mortality curves of six very different organisms, we establish the dependence of lifetime and aging rate on initial conditions, damage and repair rate, and system size. We compare the size distributions of disease and death and see that they have qualitatively different properties. We show that aging patterns are independent of the details of the interdependence network structure, which suggests that aging is a many-body effect, and that the qualitative and quantitative features of aging are not sensitively dependent on the details of the dependency structure or its formation.
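A minimal sketch of such an interdependency model is given below. The network size, dependency count, damage rate, majority-failure rule, and the "half the nodes down" death criterion are all assumptions chosen for illustration and are not taken from the paper:

```python
import numpy as np

def death_ages(n=200, deps=3, damage=0.02, trials=300, seed=1):
    """Toy interdependency network: each node depends on `deps`
    random nodes; per step a node fails spontaneously with
    probability `damage`, or when a majority of its dependencies
    have failed. 'Death' is the step at which half the nodes are
    down; returns one death age per simulated individual."""
    rng = np.random.default_rng(seed)
    ages = []
    for _ in range(trials):
        dep = rng.integers(0, n, size=(n, deps))   # random dependency wiring
        alive = np.ones(n, bool)
        t = 0
        while alive.sum() > n // 2:
            t += 1
            alive &= rng.random(n) > damage        # spontaneous damage
            cascade = alive[dep].sum(axis=1) < deps / 2
            alive &= ~cascade                      # interdependent failure
        ages.append(t)
    return np.array(ages)

ages = death_ages()
```

The cascading failures concentrate the death ages well below the spread expected from independent random damage alone, the signature of an increasing (aging-like) hazard.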
Naz, Saba; Sherazi, Sayed Tufail Hussain; Talpur, Farah N; Mahesar, Sarfaraz A; Kara, Huseyin
2012-01-01
A simple, rapid, economical, and environmentally friendly analytical method was developed for the quantitative assessment of free fatty acids (FFAs) present in deodorizer distillates and crude oils by single-bounce attenuated total reflectance FTIR spectroscopy. Partial least squares was applied for the calibration model based on the peak region of the carbonyl group (C=O) from 1726 to 1664 cm(-1) associated with the FFAs. The proposed method totally avoids the use of organic solvents or costly standards and could be applied easily in the oil processing industry. The accuracy of the method was checked by comparison to the conventional standard American Oil Chemists' Society (AOCS) titrimetric procedure, which provided good correlation (R = 0.99980), with an SD of ±0.05%. Therefore, the proposed method could be used as an alternative to the AOCS titrimetric method for the quantitative determination of FFAs, especially in deodorizer distillates.
A data-driven model for influenza transmission incorporating media effects.
Mitchell, Lewis; Ross, Joshua V
2016-10-01
Numerous studies have attempted to model the effect of mass media on the transmission of diseases such as influenza; however, quantitative data on media engagement has until recently been difficult to obtain. With the recent explosion of 'big data' coming from online social media and the like, large volumes of data on a population's engagement with mass media during an epidemic are becoming available to researchers. In this study, we combine an online dataset comprising millions of shared messages relating to influenza with traditional surveillance data on flu activity to suggest a functional form for the relationship between the two. Using this data, we present a simple deterministic model for influenza dynamics incorporating media effects, and show that such a model helps explain the dynamics of historical influenza outbreaks. Furthermore, through model selection we show that the proposed media function fits historical data better than other media functions proposed in earlier studies.
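A deterministic sketch of this idea follows, assuming a hypothetical exponential media function f(I) = exp(-m*I) that damps transmission as infection prevalence (and hence media coverage) rises; the paper infers its own functional form from the data, so this form and all parameter values are illustrative:

```python
import numpy as np

def sir_with_media(beta=0.5, gamma=0.25, m=0.3, days=120, i0=1e-4):
    """Discrete-time SIR in which the transmission rate is damped by
    a media function f(I) = exp(-m*I). Returns the infectious
    fraction over time for one epidemic."""
    s, i, r = 1.0 - i0, i0, 0.0
    traj = [i]
    for _ in range(days):
        f = np.exp(-m * i)                  # media-driven damping
        new_inf = beta * f * s * i          # damped incidence
        rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - rec, r + rec
        traj.append(i)
    return np.array(traj)

traj = sir_with_media()
```

Raising `m` lowers and delays the epidemic peak, which is the qualitative effect media-aware models aim to capture.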
Quantitative prediction of phase transformations in silicon during nanoindentation
NASA Astrophysics Data System (ADS)
Zhang, Liangchi; Basak, Animesh
2013-08-01
This paper establishes the first quantitative relationship between the phases transformed in silicon and the shape characteristics of nanoindentation curves. Based on an integrated analysis using TEM and the unit cell properties of the phases, the volumes of the phases that emerge in a nanoindentation are formulated as a function of the pop-out size and the depth of the nanoindentation impression. This simple formula enables a fast, accurate, and quantitative prediction of the phases in a nanoindentation cycle, which has been impossible before.
Self-consistent approach for neutral community models with speciation
NASA Astrophysics Data System (ADS)
Haegeman, Bart; Etienne, Rampal S.
2010-03-01
Hubbell’s neutral model provides a rich theoretical framework to study ecological communities. By incorporating both ecological and evolutionary time scales, it allows us to investigate how communities are shaped by speciation processes. The speciation model in the basic neutral model is particularly simple, describing speciation as a point-mutation event in the birth of a single individual. The stationary species abundance distribution of the basic model, which can be solved exactly, fits empirical data of distributions of species’ abundances surprisingly well. More realistic speciation models have been proposed, such as the random-fission model in which new species appear by splitting up existing species. However, no analytical solution is available for these models, impeding quantitative comparison with data. Here, we present a self-consistent approximation method for neutral community models with various speciation modes, including random fission. We derive explicit formulas for the stationary species abundance distribution, which agree very well with simulations. We expect that our approximation method will be useful to study other speciation processes in neutral community models as well.
What Can We Learn from a Simple Physics-Based Earthquake Simulator?
NASA Astrophysics Data System (ADS)
Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele
2018-03-01
Physics-based earthquake simulators are becoming a popular tool to investigate the earthquake occurrence process. So far, the development of earthquake simulators is commonly led by the approach "the more physics, the better". However, this approach may hamper the comprehension of the outcomes of the simulator; in fact, within complex models, it may be difficult to understand which physical parameters are the most relevant to the features of the seismic catalog in which we are interested. For this reason, here, we take the opposite approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple simulator may be more informative than a complex one for some specific scientific objectives, because it is more understandable. Our earthquake simulator has three main components: the first is a realistic tectonic setting, i.e., a fault data set of California; the second is the application of quantitative laws for earthquake generation on each single fault; and the last is the modeling of fault interaction through the Coulomb failure function. The analysis of this simple simulator shows that: (1) short-term clustering can be reproduced by a set of faults with an almost periodic behavior, which interact according to a Coulomb failure function model; (2) a long-term behavior showing supercycles of the seismic activity exists only in a markedly deterministic framework, and quickly disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault; and (3) faults that are strongly coupled in terms of the Coulomb failure function model are synchronized in time only in a markedly deterministic framework and, as before, such synchronization disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault.
Overall, the results show that even in a simple and perfectly known earthquake occurrence world, introducing a small degree of stochasticity may blur most of the deterministic time features, such as long-term trend and synchronization among nearby coupled faults.
Zhao, Xinyan; Dong, Tao
2012-10-16
This study reports a quantitative nucleic acid sequence-based amplification (Q-NASBA) microfluidic platform composed of a membrane-based sampling module, a sample preparation cassette, and a 24-channel Q-NASBA chip for environmental investigations of aquatic microorganisms. The low-cost and highly efficient sampling module, which connects seamlessly with the subsequent sample preparation and quantitative detection steps, is designed for the collection of microbial communities from aquatic environments. Eight kinds of commercial membrane filters are evaluated using Saccharomyces cerevisiae, Escherichia coli, and Staphylococcus aureus as model microorganisms. After the microorganisms are concentrated on the membrane filters, the retentate can be easily conserved in a transport medium (TM) buffer and sent to a remote laboratory. A Q-NASBA-oriented sample preparation cassette is originally designed to extract DNA/RNA molecules directly from the captured cells on the membranes. Subsequently, the extract is analyzed within Q-NASBA chips that are compatible with common microplate readers in laboratories. In particular, a novel analytical algorithmic method is developed for simple but robust on-chip Q-NASBA assays. The reported multifunctional microfluidic system can detect a few microorganisms quantitatively and simultaneously. Further research should be conducted to simplify and standardize ecological investigations of aquatic environments.
Image-Based Quantification of Plant Immunity and Disease.
Laflamme, Bradley; Middleton, Maggie; Lo, Timothy; Desveaux, Darrell; Guttman, David S
2016-12-01
Measuring the extent and severity of disease is a critical component of plant pathology research and crop breeding. Unfortunately, existing visual scoring systems are qualitative, subjective, and the results are difficult to transfer between research groups, while existing quantitative methods can be quite laborious. Here, we present plant immunity and disease image-based quantification (PIDIQ), a quantitative, semi-automated system to rapidly and objectively measure disease symptoms in a biologically relevant context. PIDIQ applies an ImageJ-based macro to plant photos in order to distinguish healthy tissue from tissue that has yellowed due to disease. It can process a directory of images in an automated manner and report the relative ratios of healthy to diseased leaf area, thereby providing a quantitative measure of plant health that can be statistically compared with appropriate controls. We used the Arabidopsis thaliana-Pseudomonas syringae model system to show that PIDIQ is able to identify both enhanced plant health associated with effector-triggered immunity as well as elevated disease symptoms associated with effector-triggered susceptibility. Finally, we show that the quantitative results provided by PIDIQ correspond to those obtained via traditional in planta pathogen growth assays. PIDIQ provides a simple and effective means to nondestructively quantify disease from whole plants and we believe it will be equally effective for monitoring disease on excised leaves and stems.
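The core healthy-to-diseased ratio can be sketched with a simple RGB threshold. The thresholds and the toy image below are illustrative assumptions, not PIDIQ's calibrated ImageJ macro:

```python
import numpy as np

def health_ratio(rgb):
    """Fraction of plant pixels that are healthy (green-dominant)
    rather than yellowed; dark background pixels are ignored. The
    thresholds are illustrative, not PIDIQ's calibrated values."""
    rgb = np.asarray(rgb, float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    plant = (g > 60) & (b < g)            # crude leaf mask
    healthy = plant & (g > 1.3 * r)       # green clearly above red
    yellowed = plant & (g <= 1.3 * r)     # red has caught up to green
    n = healthy.sum() + yellowed.sum()
    return healthy.sum() / n if n else float("nan")

# Toy 2x2 "image": two healthy pixels, one yellowed, one background.
img = [[[30, 120, 30], [40, 130, 20]],
       [[120, 110, 20], [5, 5, 5]]]
ratio = health_ratio(img)
```

Comparing this ratio between treatment groups, rather than inspecting it per image, mirrors how PIDIQ's output is compared statistically against controls.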
DNA DAMAGE QUANTITATION BY ALKALINE GEL ELECTROPHORESIS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
SUTHERLAND,B.M.; BENNETT,P.V.; SUTHERLAND, J.C.
2004-03-24
Physical and chemical agents in the environment, those used in clinical applications, or encountered during recreational exposures to sunlight, induce damages in DNA. Understanding the biological impact of these agents requires quantitation of the levels of such damages in laboratory test systems as well as in field or clinical samples. Alkaline gel electrophoresis provides a sensitive (down to ~a few lesions/5 Mb), rapid method of direct quantitation of a wide variety of DNA damages in nanogram quantities of non-radioactive DNAs from laboratory, field, or clinical specimens, including higher plants and animals. This method stems from velocity sedimentation studies of DNA populations, and from the simple methods of agarose gel electrophoresis. Our laboratories have developed quantitative agarose gel methods, analytical descriptions of DNA migration during electrophoresis on agarose gels (1-6), and electronic imaging for accurate determinations of DNA mass (7-9). Although all these components improve sensitivity and throughput of large numbers of samples (7,8,10), a simple version using only standard molecular biology equipment allows routine analysis of DNA damages at moderate frequencies. We present here a description of the methods, as well as a brief description of the underlying principles, required for a simplified approach to quantitation of DNA damages by alkaline gel electrophoresis.
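As a sketch of the underlying arithmetic, assuming lesions (converted to strand breaks) are placed at random (Poisson) positions, the lesion frequency follows from the change in number-average single-strand length; the lengths below are hypothetical:

```python
def lesions_per_kb(treated_kb, control_kb):
    """Lesion frequency from number-average single-strand lengths
    (in kb) of treated and control DNA, assuming random (Poisson)
    placement of breaks: frequency = 1/L_treated - 1/L_control."""
    return 1.0 / treated_kb - 1.0 / control_kb

# Hypothetical lengths: damage shortens the number-average length
# from 500 kb (control) to 100 kb (treated).
freq = lesions_per_kb(100.0, 500.0)   # lesions per kb
per_5_mb = freq * 5000.0              # lesions per 5 Mb
```

Subtracting the control term corrects for background breaks already present before treatment.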
Towards quantitative assessment of calciphylaxis
NASA Astrophysics Data System (ADS)
Deserno, Thomas M.; Sárándi, István.; Jose, Abin; Haak, Daniel; Jonas, Stephan; Specht, Paula; Brandenburg, Vincent
2014-03-01
Calciphylaxis is a rare disease associated with devastating conditions and high morbidity and mortality. Calciphylaxis is characterized by systemic medial calcification of the arteries yielding necrotic skin ulcerations. In this paper, we aim at supporting the installation of multi-center registries for calciphylaxis, which include a photographic documentation of skin necrosis. However, photographs acquired in different centers under different conditions using different equipment and photographers cannot be compared quantitatively. For normalization, we use a simple color pad that is placed into the field of view and segmented from the image, and its color fields are analyzed. In total, 24 colors are printed on that scale. A least-squares approach is used to determine the affine color transform. Furthermore, the card allows scale normalization. We provide a case study for qualitative assessment. In addition, the method is evaluated quantitatively using 10 images of two sets of different captures of the same necrosis. The variability of quantitative measurements based on freehand photography is assessed regarding geometric and color distortions before and after our simple calibration procedure. Using automated image processing, the standard deviation of measurements is significantly reduced. The coefficients of variation yield 5-20% and 2-10% for geometry and color, respectively. Hence, quantitative assessment of calciphylaxis becomes practicable and will impact a better understanding of this rare but fatal disease.
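The least-squares affine color transform can be sketched with NumPy. The six pad colors and the simulated gain/offset distortion below are illustrative stand-ins for the 24-field pad used in the paper:

```python
import numpy as np

def fit_affine_color(measured, reference):
    """Least-squares affine color correction: find a 3x3 matrix M and
    offset t so that measured @ M.T + t approximates reference."""
    X = np.hstack([np.asarray(measured, float),
                   np.ones((len(measured), 1))])    # column of 1s -> offset
    coef, *_ = np.linalg.lstsq(X, np.asarray(reference, float), rcond=None)
    return coef[:3].T, coef[3]

# Hypothetical pad: six reference colors and a capture distorted by a
# uniform gain and offset (a special case of an affine distortion).
ref = np.array([[0, 0, 0], [255, 0, 0], [0, 255, 0],
                [0, 0, 255], [255, 255, 255], [128, 128, 128]], float)
meas = ref * 0.8 + 10.0
M, t = fit_affine_color(meas, ref)
corrected = meas @ M.T + t
```

With 24 fields the system is heavily overdetermined, so the fit also averages out per-field measurement noise rather than merely inverting the distortion.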
NASA Astrophysics Data System (ADS)
Min, Junwei; Yao, Baoli; Ketelhut, Steffi; Kemper, Björn
2017-02-01
The modular combination of optical microscopes with digital holographic microscopy (DHM) has been proven to be a powerful tool for quantitative live cell imaging. The introduction of a condenser and different microscope objectives (MO) simplifies the usage of the technique and makes it easier to measure different kinds of specimens at different magnifications. However, the high flexibility of illumination and imaging also causes variable phase aberrations that need to be eliminated for high resolution quantitative phase imaging. Existing phase aberration compensation methods either require additional elements in the reference arm or need specimen-free reference areas or separate reference holograms to build up suitable digital phase masks. These inherent requirements make them impractical for usage with highly variable illumination and imaging systems and prevent on-line monitoring of living cells. In this paper, we present a simple numerical method for phase aberration compensation based on the analysis of holograms in the spatial frequency domain with capabilities for on-line quantitative phase imaging. From a single-shot off-axis hologram, the whole phase aberration can be eliminated automatically without numerical fitting or pre-knowledge of the setup. The capabilities and robustness for quantitative phase imaging of living cancer cells are demonstrated.
Numerical simulation of steady three-dimensional flows in axial turbomachinery bladerows
NASA Astrophysics Data System (ADS)
Basson, Anton Herman
The formulation for and application of a numerical model for low Mach number steady three-dimensional flows in axial turbomachinery blade rows is presented. The formulation considered here includes an efficient grid generation scheme (particularly suited to computational grids for the analysis of turbulent turbomachinery flows) and a semi-implicit, pressure-based computational fluid dynamics scheme that directly includes artificial dissipation, applicable to viscous and inviscid flows. The grid generation technique uses a combination of algebraic and elliptic methods, in conjunction with the Minimal Residual Method, to economically generate smooth structured grids. For typical H-grids in turbomachinery bladerows, when compared to a purely elliptic grid generation scheme, the presented grid generation scheme produces grids with much improved smoothness near the leading and trailing edges, allows the use of small near wall grid spacing required by low Reynolds number turbulence models, and maintains orthogonality of the grid near the solid boundaries even for high flow angle cascades. A specialized embedded H-grid for application particularly to tip clearance flows is presented. This topology smoothly discretizes the domain without modifying the tip shape, while requiring only minor modifications to H-grid flow solvers. Better quantitative modeling of the tip clearance vortex structure than that obtained with a pinched tip approximation is demonstrated. The formulation of artificial dissipation terms for a semi-implicit, pressure-based (SIMPLE type) flow solver, is presented. It is applied to both the Euler and the Navier-Stokes equations, expressed in generalized coordinates using a non-staggered grid. This formulation is compared to some SIMPLE and time marching formulations, revealing the artificial dissipation inherent in some commonly used semi-implicit formulations. 
The effect of the amount of dissipation on the accuracy of the solution and the convergence rate is quantitatively demonstrated for a number of flow cases. The ability of the formulation to model complex steady turbomachinery flows is demonstrated, e.g. for pressure driven secondary flows, turbine nozzle wakes, turbulent boundary layers. The formulation's modeling of blade surface heat transfer is assessed. The numerical model is used to investigate the structure of phenomena associated with tip clearance flows in a turbine nozzle.
Wenk, H.-R.; Takeshita, T.; Bechler, E.; Erskine, B.G.; Matthies, S.
1987-01-01
The pattern of lattice preferred orientation (texture) in deformed rocks is an expression of the strain path and the acting deformation mechanisms. A first indication about the strain path is given by the symmetry of pole figures: coaxial deformation produces orthorhombic pole figures, while non-coaxial deformation yields monoclinic or triclinic pole figures. More quantitative information about the strain history can be obtained by comparing natural textures with experimental ones and with theoretical models. For this comparison, a representation in the sensitive three-dimensional orientation distribution space is extremely important and efforts are made to explain this concept. We have been investigating differences between pure shear and simple shear deformation in carbonate rocks and have found considerable agreement between textures produced in plane strain experiments and predictions based on the Taylor model. We were able to simulate the observed changes with strain history (coaxial vs non-coaxial) and the profound texture transition which occurs with increasing temperature. Two natural calcite textures were then selected which we interpreted by comparing them with the experimental and theoretical results. A marble from the Santa Rosa mylonite zone in southern California displays orthorhombic pole figures with patterns consistent with low temperature deformation in pure shear. A limestone from the Tanque Verde detachment fault in Arizona has a monoclinic fabric from which we can interpret that 60% of the deformation occurred by simple shear. © 1987.
Triggering up states in all-to-all coupled neurons
NASA Astrophysics Data System (ADS)
Ngo, H.-V. V.; Köhler, J.; Mayer, J.; Claussen, J. C.; Schuster, H. G.
2010-03-01
Slow-wave sleep in mammals is characterized by a change of large-scale cortical activity currently described as cortical Up/Down states. A recent experiment demonstrated a bistable collective behaviour in ferret slices, with the remarkable property that the Up states can be switched on and off with pulses, or excitations, of the same polarity, whereby the effect of the second pulse depends significantly on the time interval between the pulses. Here we present a simple time-discrete model of a neural network that exhibits this type of behaviour, as well as quantitatively reproduces the time dependence found in the experiments.
Self-diffusion in compressively strained Ge
NASA Astrophysics Data System (ADS)
Kawamura, Yoko; Uematsu, Masashi; Hoshi, Yusuke; Sawano, Kentarou; Myronov, Maksym; Shiraki, Yasuhiro; Haller, Eugene E.; Itoh, Kohei M.
2011-08-01
Under a compressive biaxial strain of ˜ 0.71%, Ge self-diffusion has been measured using an isotopically controlled Ge single-crystal layer grown on a relaxed Si0.2Ge0.8 virtual substrate. The self-diffusivity is enhanced by the compressive strain and its behavior is fully consistent with a theoretical prediction of a generalized activation volume model of a simple vacancy mediated diffusion, reported by Aziz et al. [Phys. Rev. B 73, 054101 (2006)]. The activation volume of (-0.65±0.21) times the Ge atomic volume quantitatively describes the observed enhancement due to the compressive biaxial strain very well.
Circular motion and Polish Doughnuts in NUT spacetime
NASA Astrophysics Data System (ADS)
Jefremov, Paul I.
The astrophysical relevance of the NUT spacetime(s) is a matter of debate due to pathological properties exhibited by this solution. However, if it is realised in nature, then we should look for its characteristic imprints on possible observations. One of the major sources of data on black hole astrophysics is the accretion process. Using a simple but fully analytical ``Polish Doughnuts'' model of an accretion disk, one obtains both qualitative and quantitative differences from the Kerr spacetime produced by the presence of the gravitomagnetic charge. The present paper is based on our work Jefremov & Perlick (2016).
Fuel Property Determination of Biodiesel-Diesel Blends By Terahertz Spectrum
NASA Astrophysics Data System (ADS)
Zhao, Hui; Zhao, Kun; Bao, Rima
2012-05-01
The frequency-dependent absorption characteristics of biodiesel and its blends with conventional diesel fuel were investigated over the spectral range of 0.2-1.5 THz by terahertz time-domain spectroscopy (THz-TDS). The absorption coefficient increased regularly with biodiesel content. A nonlinear multivariate model correlating the cetane number and solidifying point of biodiesel blends with the absorption coefficient was established, making quantitative analysis of fuel properties simple. The results show that the cetane number and solidifying point can be predicted by THz-TDS technology, indicating promise for practical application.
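The calibration idea (a nonlinear model mapping the THz absorption coefficient to a fuel property) can be sketched with a simple quadratic fit. The abstract does not give the model form or any data, so both the quadratic choice and the numbers below are illustrative assumptions:

```python
import numpy as np

# Hypothetical calibration data: absorption coefficient (arbitrary units) at a
# fixed THz frequency vs. cetane number for a series of biodiesel-diesel
# blends. These are synthetic values, not the paper's measurements.
alpha = np.array([1.0, 1.4, 1.8, 2.2, 2.6, 3.0])
cetane = np.array([53.4, 54.424, 55.256, 55.896, 56.344, 56.6])

# Fit a quadratic, one simple nonlinear form, relating cetane number to alpha.
coeffs = np.polyfit(alpha, cetane, deg=2)
predict = np.poly1d(coeffs)
```

A real calibration would be validated on held-out blends; the point here is only that once such a curve is fitted, predicting a fuel property from a THz absorption measurement reduces to evaluating a polynomial.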
Mechanism of hologram formation in fixation-free rehalogenating bleaching processes.
Neipp, Cristian; Pascual, Inmaculada; Beléndez, Augusto
2002-07-10
The mechanism of hologram formation in fixation-free rehalogenating bleaching processes has been treated by different authors. The experiments carried out on Agfa 8E75 HD plates led to the conclusion that material transfer from the exposed to the unexposed zones is the main mechanism underlying the process. We present a simple model that explains the mechanism of hologram formation inside the emulsion. Quantitative data obtained using both Agfa 8E75 HD and Slavich PFG-01 fine-grained red-sensitive emulsions are also given, and good agreement between theory and experiment is found.
QUANTITATION OF MENSTRUAL BLOOD LOSS: A RADIOACTIVE METHOD UTILIZING A COUNTING DOME
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tauxe, W.N.
A description has been given of a simple, accurate technique for the quantitation of menstrual blood loss, involving the determination of a three-dimensional isosensitivity curve and the fashioning of a Lucite dome with cover to fit these specifications. Ten normal subjects lost no more than 50 ml each per menstrual period. (auth)
McMullin, Brian T; Leung, Ming-Ying; Shanbhag, Arun S; McNulty, Donald; Mabrey, Jay D; Agrawal, C Mauli
2006-02-01
A total of 750 images of individual ultra-high molecular weight polyethylene (UHMWPE) particles isolated from periprosthetic failed hip, knee, and shoulder arthroplasties were extracted from archival scanning electron micrographs. Particle size and morphology were subsequently analyzed using computerized image analysis software utilizing five descriptors found in ASTM F1877-98, a standard for quantitative description of wear debris. An online survey application was developed to display particle images, and allowed ten respondents to classify particle morphologies according to commonly used terminology as fibers, flakes, or granules. Particles were categorized based on a simple majority of responses. All descriptors were evaluated using a one-way ANOVA and Tukey-Kramer test for all-pairs comparison among each class of particles. A logistic regression model using half of the particles included in the survey was then used to develop a mathematical scheme to predict whether a given particle should be classified as a fiber, flake, or granule based on its quantitative measurements. The validity of the model was then assessed using the other half of the survey particles and compared with human responses. Comparison of the quantitative measurements of isolated particles showed that the morphologies of each particle type classified by respondents were statistically different from one another (p<0.05). The average agreement between mathematical prediction and human respondents was 83.5% (standard error 0.16%). These data suggest that computerized descriptors can be feasibly correlated with subjective terminology, thus providing a basis for a common vocabulary for particle description which can be translated into quantitative dimensions.
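A three-class (fiber/flake/granule) classifier of the kind described above can be sketched as multinomial logistic regression. This is a generic stand-in, not the authors' model: the gradient-descent trainer, the two synthetic shape descriptors, and all parameter values below are assumptions for illustration.

```python
import numpy as np

def train_softmax(X, y, n_classes, lr=0.5, steps=2000):
    """Multinomial logistic regression by full-batch gradient descent.

    X: (n, d) descriptor matrix; y: integer class labels in [0, n_classes).
    Returns weights W (d, n_classes) and biases b (n_classes,).
    """
    n, d = X.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                     # one-hot targets
    for _ in range(steps):
        logits = X @ W + b
        logits -= logits.max(axis=1, keepdims=True)
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)        # softmax probabilities
        G = (P - Y) / n                          # cross-entropy gradient
        W -= lr * X.T @ G
        b -= lr * G.sum(axis=0)
    return W, b

def classify(W, b, X):
    """Predict the most probable class for each row of X."""
    return np.argmax(X @ W + b, axis=1)
```

As in the paper's protocol, one would fit on half the labeled particles and report agreement with human raters on the other half.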
Tracing the origin of azimuthal gluon correlations in the color glass condensate
NASA Astrophysics Data System (ADS)
Lappi, T.; Schenke, B.; Schlichting, S.; Venugopalan, R.
2016-01-01
We examine the origins of azimuthal correlations observed in high energy proton-nucleus collisions by considering the simple example of the scattering of uncorrelated partons off color fields in a large nucleus. We demonstrate how the physics of fluctuating color fields in the color glass condensate (CGC) effective theory generates these azimuthal multiparticle correlations and compute the corresponding Fourier coefficients v_n within different CGC approximation schemes. We discuss in detail the qualitative and quantitative differences between the different schemes. We show how a recently introduced color field domain model that captures key features of the observed azimuthal correlations can be understood in the CGC effective theory as a model of non-Gaussian correlations in the target nucleus.
Miller, Joshua D
2012-12-01
In this article, the development of Five-Factor Model (FFM) personality disorder (PD) prototypes for the assessment of DSM-IV PDs is reviewed, as well as subsequent procedures for scoring individuals' FFM data with regard to these PD prototypes, including similarity scores and simple additive counts that are based on a quantitative prototype matching methodology. Both techniques, which result in very strongly correlated scores, demonstrate convergent and discriminant validity, and provide clinically useful information with regard to various forms of functioning. The techniques described here for use with FFM data are quite different from the prototype matching methods used elsewhere. © 2012 The Author. Journal of Personality © 2012, Wiley Periodicals, Inc.
Spinal decompression sickness: mechanical studies and a model.
Hills, B A; James, P B
1982-09-01
Six experimental investigations of various mechanical aspects of the spinal cord are described relevant to its injury by gas deposited from solution by decompression. These show appreciable resistances to gas pockets dissipating by tracking along tissue boundaries or distending tissue, the back pressure often exceeding the probable blood perfusion pressure--particularly in the watershed zones. This leads to a simple mechanical model of spinal decompression sickness based on the vascular "waterfall" that is consistent with the pathology, the major quantitative aspects, and the symptomatology--especially the reversibility with recompression that is so difficult to explain by an embolic mechanism. The hypothesis is that autochthonous gas separating from solution in the spinal cord can reach sufficient local pressure to exceed the perfusion pressure and thus occlude blood flow.
An experimental approach to the fundamental principles of hemodynamics.
Pontiga, Francisco; Gaytán, Susana P
2005-09-01
An experimental model has been developed to give students hands-on experience with the fundamental laws of hemodynamics. The proposed experimental setup is of simple construction but permits precise measurement of the physical variables involved in the experiments. The model consists of a series of experiments in which different basic phenomena are quantitatively investigated, such as the pressure drop in a long straight vessel and in an obstructed vessel, the transition from laminar to turbulent flow, the association of vessels in vascular networks, or the generation of a critical stenosis. Through these experiments, students acquire a direct appreciation of the importance of the parameters involved in the relationship between pressure and flow rate, thus facilitating the comprehension of more complex problems in hemodynamics.
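The pressure-drop and laminar-to-turbulent-transition experiments mentioned above rest on the Hagen-Poiseuille law and the Reynolds number. A minimal sketch (the function names and example values are illustrative, not from the article):

```python
import math

def poiseuille_dp(Q, L, r, mu):
    """Pressure drop (Pa) for laminar flow at volumetric rate Q (m^3/s) in a
    straight vessel of length L (m), radius r (m), viscosity mu (Pa*s):
    dP = 8*mu*L*Q / (pi*r^4)."""
    return 8.0 * mu * L * Q / (math.pi * r ** 4)

def reynolds(Q, r, rho, mu):
    """Reynolds number for the same vessel; transition to turbulence is
    conventionally taken near Re ~ 2300."""
    v = Q / (math.pi * r ** 2)          # mean velocity
    return rho * v * (2.0 * r) / mu
```

The r^4 dependence is the key teaching point: halving the vessel radius at fixed flow multiplies the pressure drop sixteen-fold, which is also the basis of the critical-stenosis experiment.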
The Biotin/Avidin complex adhesion force
NASA Astrophysics Data System (ADS)
Balsera, Manel A.; Izrailev, Sergei; Stepaniants, Sergey; Oono, Yoshitsugu; Schulten, Klaus
1997-03-01
The vitamin biotin and the protein avidin form one of the strongest non-covalent bonds between biological molecules. We have performed molecular and stochastic dynamics modeling of the unbinding of this complex (Izrailev et al., Biophysical Journal, in press). These simulations provide insight into the effect of particular residues and water on the tight binding of the system. With the aid of simple phenomenological models we have related our results qualitatively to atomic force microscopy adhesion force measurements (E.-L. Florin, V. T. Moy, and H. E. Gaub, Science 264:415-417) and kinetic dissociation experiments (A. Chilcotti and P. S. Stayton, J. Am. Chem. Soc. 117:10622-10628). We will discuss the difficulties preventing a more quantitative understanding of the unbinding force and kinetics.
Spectroscopy as a tool for geochemical modeling
NASA Astrophysics Data System (ADS)
Kopacková, Veronika; Chevrel, Stephane; Bourguignon, Anna
2011-11-01
This study focused on testing the feasibility of up-scaling ground-spectra-derived parameters to HyMap spectral and spatial resolution and whether they could be further used for a quantitative determination of the following geochemical parameters: As, pH and lignite content (Clignite). The study was carried out at the Sokolov lignite mine as it represents a site with extreme material heterogeneity and high heavy-metal gradients. A new segmentation method based on the unique spectral properties of acid materials was developed and applied to the multi-line HyMap image data corrected for BRDF and atmospheric effects. The quantitative parameters were calculated for multiple absorption features identified within the VIS/VNIR/SWIR regions: simple band ratios, absorption band depth, and quantitative spectral feature parameters calculated dynamically for each spectral measurement (centre of the absorption band (λ), depth of the absorption band (D), width of the absorption band (Width), and asymmetry of the absorption band (S)). The degree of spectral similarity between the ground and image spectra was assessed. The linear models for pH, As and the Clignite content of the whole and segmented images were cross-validated on the selected homogenous areas defined in the HS images using ground truth. For the segmented images, reliable results were achieved as follows: As: R2 = 0.84, Clignite: R2 = 0.88 and pH: R2 = 0.57.
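The per-feature spectral parameters listed above (band centre λ, depth D, width, asymmetry S) are conventionally computed after continuum removal. The following sketch uses common textbook definitions (straight-line continuum between the band shoulders, width and asymmetry at half depth); the paper's exact parameterization may differ, and the function name is an assumption:

```python
import numpy as np

def absorption_feature(wl, refl):
    """Return (center, depth, width, asymmetry) for a single absorption band.

    wl, refl: 1-D arrays of wavelength and reflectance spanning one feature,
    shoulder to shoulder.
    """
    # Straight-line continuum between the first and last samples (shoulders).
    continuum = np.interp(wl, [wl[0], wl[-1]], [refl[0], refl[-1]])
    cr = refl / continuum                      # continuum-removed spectrum
    i = np.argmin(cr)
    center = wl[i]                             # λ: band centre
    depth = 1.0 - cr[i]                        # D: band depth
    # Width: wavelength span where the feature exceeds half its maximum depth.
    half = cr <= 1.0 - depth / 2.0
    width = wl[half][-1] - wl[half][0]
    # Asymmetry: left-wing vs. right-wing extent at half depth.
    asym = (center - wl[half][0]) / max(wl[half][-1] - center, 1e-12)
    return center, depth, width, asym
```

For a symmetric band the asymmetry is ~1; values away from 1 flag skewed features, which is what makes S useful as a per-pixel diagnostic.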
Mechanical model for a collagen fibril pair in extracellular matrix.
Chan, Yue; Cox, Grant M; Haverkamp, Richard G; Hill, James M
2009-04-01
In this paper, we model the mechanics of a collagen pair in the connective tissue extracellular matrix that exists in abundance throughout animals, including the human body. This connective tissue comprises repeated units of two main structures, namely collagens as well as axial, parallel and regular anionic glycosaminoglycan between collagens. The collagen fibril can be modeled by Hooke's law, whereas anionic glycosaminoglycan behaves more like a rubber-band rod and as such can be better modeled by the worm-like chain model. While both computer simulations and continuum mechanics models have been investigated for the behavior of this connective tissue, typically authors either assume a simple form of the molecular potential energy or entirely ignore the microscopic structure of the connective tissue. Here, we apply basic physical methodologies and simple applied mathematical modeling techniques to describe the collagen pair quantitatively. We found that the growth of fibrils was intimately related to the maximum length of the anionic glycosaminoglycan and the relative displacement of two adjacent fibrils, which in turn was closely related to the effectiveness of anionic glycosaminoglycan in transmitting forces between fibrils. These results reveal the importance of the anionic glycosaminoglycan in maintaining the structural shape of the connective tissue extracellular matrix and eventually the shape modulus of human tissues. We also found that some macroscopic properties, like the maximum molecular energy and the breaking fraction of the collagen, were also related to the microscopic characteristics of the anionic glycosaminoglycan.
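The two constitutive ingredients named in the abstract, a Hookean fibril and a worm-like-chain (WLC) glycosaminoglycan, can be written down directly. For the WLC, the sketch below uses the standard Marko-Siggia interpolation formula, which is an assumption: the abstract only says "worm-like chain model" and does not specify the force law or any parameter values.

```python
import math

KT = 4.114e-21  # thermal energy k_B*T at ~298 K, in joules

def hooke_force(k, x):
    """Linear (Hookean) fibril model: F = k * x."""
    return k * x

def wlc_force(x, L, P, kT=KT):
    """Marko-Siggia worm-like-chain force (N) at extension x for a chain of
    contour length L and persistence length P (all lengths in meters):
    F = (kT/P) * (1/4 * (1 - x/L)^-2 - 1/4 + x/L)."""
    s = x / L
    return (kT / P) * (0.25 / (1.0 - s) ** 2 - 0.25 + s)
```

The qualitative contrast is the point: the Hookean force grows without bound linearly, while the WLC force diverges as the extension approaches the contour length, which is what limits how far two adjacent fibrils can be displaced before the glycosaminoglycan stops transmitting force elastically.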
Time-resolved quantitative-phase microscopy of laser-material interactions using a wavefront sensor.
Gallais, Laurent; Monneret, Serge
2016-07-15
We report on a simple and efficient technique based on a wavefront sensor to obtain time-resolved amplitude and phase images of laser-material interactions. The main interest of the technique is to obtain quantitative self-calibrated phase measurements in one shot at the femtosecond time-scale, with high spatial resolution. The technique is used for direct observation and quantitative measurement of the Kerr effect in a fused silica substrate and free electron generation by photo-ionization processes in an optical coating.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kruzic, Jamie J.; Evans, T. Matthew; Greaney, P. Alex
The report describes the development of a discrete element method (DEM) based modeling approach to quantitatively predict deformation and failure of typical nickel based superalloys. A series of experimental data, including microstructure and mechanical property characterization at 600°C, was collected for a relatively simple, model solid solution Ni-20Cr alloy (Nimonic 75) to determine inputs for the model and provide data for model validation. Nimonic 75 was considered ideal for this study because it is a certified tensile and creep reference material. A series of new DEM modeling approaches were developed to capture the complexity of metal deformation, including cubic elastic anisotropy and plastic deformation both with and without strain hardening. Our model approaches were implemented into a commercially available DEM code, PFC3D, that is commonly used by engineers. It is envisioned that once further developed, this new DEM modeling approach can be adapted to a wide range of engineering applications.
Comparing models of the periodic variations in spin-down and beamwidth for PSR B1828-11
NASA Astrophysics Data System (ADS)
Ashton, G.; Jones, D. I.; Prix, R.
2016-05-01
We build a framework using tools from Bayesian data analysis to evaluate models explaining the periodic variations in spin-down and beamwidth of PSR B1828-11. The available data consist of the time-averaged spin-down rate, which displays a distinctive double-peaked modulation, and measurements of the beamwidth. Two concepts exist in the literature that are capable of explaining these variations; we formulate predictive models from these and quantitatively compare them. The first concept is phenomenological and stipulates that the magnetosphere undergoes periodic switching between two metastable states as first suggested by Lyne et al. The second concept, precession, was first considered as a candidate for the modulation of B1828-11 by Stairs et al. We quantitatively compare models built from these concepts using a Bayesian odds ratio. Because the phenomenological switching model itself was informed by these data in the first place, it is difficult to specify appropriate parameter-space priors that can be trusted for an unbiased model comparison. Therefore, we first perform a parameter estimation using the spin-down data, and then use the resulting posterior distributions as priors for model comparison on the beamwidth data. We find that a precession model with a simple circular Gaussian beam geometry fails to appropriately describe the data, while allowing for a more general beam geometry provides a good fit to the data. The resulting odds between the precession model (with a general beam geometry) and the switching model are estimated as 10^(2.7±0.5) in favour of the precession model.
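The model comparison above reduces to a Bayes factor: the ratio of the marginal likelihoods (evidences) of the two models, usually quoted as a base-10 log odds. A minimal sketch of that bookkeeping, with illustrative numbers chosen only to match the order of magnitude quoted in the abstract (they are not the paper's computed evidences):

```python
import math

def log10_odds(log_ev_a, log_ev_b, log10_prior_odds=0.0):
    """Base-10 log odds of model A over model B from natural-log evidences.
    With equal prior odds this is log10 of the Bayes factor."""
    return (log_ev_a - log_ev_b) / math.log(10.0) + log10_prior_odds

# Illustrative: an evidence difference ln Z_A - ln Z_B of ~6.2 nats
# corresponds to odds of ~10^2.7, the order reported for precession
# (general beam) over switching.
odds_exponent = log10_odds(2.7 * math.log(10.0), 0.0)
```

Working in log space avoids underflow, since the evidences themselves are typically astronomically small numbers.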
Gamal El-Dien, Omnia; Ratcliffe, Blaise; Klápště, Jaroslav; Porth, Ilga; Chen, Charles; El-Kassaby, Yousry A.
2016-01-01
The open-pollinated (OP) family testing combines the simplest known progeny evaluation and quantitative genetics analyses as candidates' offspring are assumed to represent independent half-sib families. The accuracy of genetic parameter estimates is often questioned as the assumption of "half-sibling" in OP families may often be violated. We compared the pedigree- vs. marker-based genetic models by analysing 22-yr height and 30-yr wood density for 214 white spruce [Picea glauca (Moench) Voss] OP families represented by 1694 individuals growing on one site in Quebec, Canada. Assuming half-sib relationships, the pedigree-based model was limited to estimating the additive genetic variances which, in turn, were grossly overestimated as they were confounded by very minor dominance and major additive-by-additive epistatic genetic variances. In contrast, the implemented genomic pairwise realized relationship models allowed the disentanglement of additive from all nonadditive factors through genetic variance decomposition. The marker-based models produced more realistic narrow-sense heritability estimates and, for the first time, allowed estimating the dominance and epistatic genetic variances from OP testing. In addition, the genomic models showed better prediction accuracies compared to pedigree models and were able to predict individual breeding values for new individuals from untested families, which was not possible using the pedigree-based model. Clearly, the use of marker-based relationship approach is effective in estimating the quantitative genetic parameters of complex traits even under simple and shallow pedigree structure. PMID:26801647
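The "genomic pairwise realized relationship" matrix at the heart of the marker-based models is commonly computed with VanRaden's first method. The abstract does not say which estimator the authors used, so the sketch below is a standard assumption, not their specific implementation:

```python
import numpy as np

def grm_vanraden(geno):
    """Realized (genomic) relationship matrix from a genotype matrix.

    geno: (n_individuals, n_markers) array coded 0/1/2 (allele counts).
    VanRaden method 1: G = Z Z' / (2 * sum_j p_j (1 - p_j)),
    where Z centers each marker by twice its allele frequency.
    """
    p = geno.mean(axis=0) / 2.0            # allele frequency per marker
    Z = geno - 2.0 * p                     # centered genotypes
    denom = 2.0 * np.sum(p * (1.0 - p))
    return Z @ Z.T / denom
```

Off-diagonal entries of G replace the fixed pedigree coefficients (0.25 for assumed half-sibs), which is exactly what lets the marker-based model detect that OP "half-sib" families contain a mixture of true relationships.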
NASA Astrophysics Data System (ADS)
Emiliannur, E.; Hamidah, I.; Zainul, A.; Wulan, A. R.
2017-09-01
A Performance Assessment Model (PAM) has been developed to represent physics concepts, divided into five experiments: 1) acceleration due to gravity; 2) Hooke's law; 3) simple harmonic motion; 4) work-energy concepts; and 5) the law of momentum conservation. The aim of this study was to determine the contribution of PAM in the physics laboratory to increasing students' Critical Thinking Disposition (CTD) at senior high school. The subjects of the study were 32 11th-grade students of a senior high school in Lubuk Sikaping, West Sumatera. The research used a one-group pretest-posttest design. Data were collected through an essay test and a questionnaire about CTD, and analyzed quantitatively using the N-gain value. The study concluded that the performance assessment model effectively increases the N-gain, at the medium category. This means students' critical thinking disposition increased significantly after implementation of the performance assessment model in the physics laboratory.
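The "N-gain at the medium category" refers to Hake's normalized gain and its conventional bands. A minimal sketch (the category cutoffs 0.3 and 0.7 are the widely used convention; the abstract does not restate them):

```python
def n_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: <g> = (post - pre) / (max - pre)."""
    return (post - pre) / (max_score - pre)

def gain_category(g):
    """Conventional banding: low < 0.3 <= medium < 0.7 <= high."""
    if g >= 0.7:
        return "high"
    return "medium" if g >= 0.3 else "low"
```

For example, a class moving from a pretest mean of 40 to a posttest mean of 70 out of 100 has g = 0.5, a medium gain.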
Computational modeling of in vitro biological responses on polymethacrylate surfaces
Ghosh, Jayeeta; Lewitus, Dan Y; Chandra, Prafulla; Joy, Abraham; Bushman, Jared; Knight, Doyle; Kohn, Joachim
2011-01-01
The objective of this research was to examine the capabilities of QSPR (Quantitative Structure Property Relationship) modeling to predict specific biological responses (fibrinogen adsorption, cell attachment and cell proliferation index) on thin films of different polymethacrylates. Using 33 commercially available monomers it is theoretically possible to construct a library of over 40,000 distinct polymer compositions. A subset of these polymers were synthesized and solvent cast surfaces were prepared in 96 well plates for the measurement of fibrinogen adsorption. NIH 3T3 cell attachment and proliferation index were measured on spin coated thin films of these polymers. Based on the experimental results of these polymers, separate models were built for homo-, co-, and terpolymers in the library with good correlation between experiment and predicted values. The ability to predict biological responses by simple QSPR models for large numbers of polymers has important implications in designing biomaterials for specific biological or medical applications. PMID:21779132
Possible Ceres bow shock surfaces based on fluid models
NASA Astrophysics Data System (ADS)
Jia, Y.-D.; Villarreal, M. N.; Russell, C. T.
2017-05-01
The hot electron beams that Dawn detected at Ceres can be explained by fast-Fermi acceleration at a temporary bow shock. A shock forms when the solar wind encounters a temporary atmosphere, similar to a cometary coma. We use a magnetohydrodynamic model to quantitatively reproduce the 3-D shock surface at Ceres and deduce the atmosphere characteristics that are required to create such a shock. Our simplest model requires a water vapor production rate of about 1.8 kg/s, or 6 × 10^25 molecules/s, to form such a shock. Such an estimate relies on characteristics of the solar wind-Ceres interaction. We present several case studies to show how these conditions affect our estimate. In addition, we contrast these cases with the smaller and narrower shock caused by a subsurface induction. Our multifluid model reveals the asymmetry introduced by the large gyroradius of the heavy pickup ions and further constrains the IMF direction during the events.
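The two quoted production rates are the same number in different units; the conversion is a one-line consistency check using the molar mass of water and Avogadro's number:

```python
# Convert the quoted mass production rate to a molecular rate.
M_H2O = 18.015e-3      # molar mass of water, kg/mol
N_A = 6.022e23         # Avogadro's number, 1/mol

rate_kg = 1.8                              # kg/s, from the abstract
rate_molecules = rate_kg / M_H2O * N_A     # molecules per second
```

The result is ~6 x 10^25 molecules/s, matching the abstract's second figure.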
Modeling Human Dynamics of Face-to-Face Interaction Networks
NASA Astrophysics Data System (ADS)
Starnini, Michele; Baronchelli, Andrea; Pastor-Satorras, Romualdo
2013-04-01
Face-to-face interaction networks describe social interactions in human gatherings, and are the substrate for processes such as epidemic spreading and gossip propagation. The bursty nature of human behavior characterizes many aspects of empirical data, such as the distribution of conversation lengths, of conversations per person, or of interconversation times. Despite several recent attempts, a general theoretical understanding of the global picture emerging from data is still lacking. Here we present a simple model that reproduces quantitatively most of the relevant features of empirical face-to-face interaction networks. The model describes agents that perform a random walk in a two-dimensional space and are characterized by an attractiveness whose effect is to slow down the motion of people around them. The proposed framework sheds light on the dynamics of human interactions and can improve the modeling of dynamical processes taking place on the ensuing dynamical social networks.
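The model described above (random-walking agents slowed by the attractiveness of nearby agents) can be sketched in a few lines. This is a simplified reading of the mechanism, after Starnini et al.: the update rule, box size, radius, and step length below are illustrative assumptions, not the published parameterization.

```python
import math
import random

def step(positions, attractiveness, box=1.0, radius=0.05, v=0.02, rng=random):
    """One synchronous update: each agent takes a random-walk step of length v,
    but with probability equal to the largest attractiveness among its
    neighbours within `radius` it stays put (attractive people slow others).
    Positions live on a periodic box of side `box`."""
    new = []
    for i, (x, y) in enumerate(positions):
        near = [attractiveness[j] for j, (u, w) in enumerate(positions)
                if j != i and (u - x) ** 2 + (w - y) ** 2 < radius ** 2]
        p_move = 1.0 - max(near, default=0.0)
        if rng.random() < p_move:
            ang = rng.random() * 2.0 * math.pi
            x = (x + v * math.cos(ang)) % box
            y = (y + v * math.sin(ang)) % box
        new.append((x, y))
    return new
```

Iterating `step` and logging which pairs fall within the interaction radius yields a temporal contact network whose conversation-length and inter-contact-time distributions can then be compared against the empirical data the abstract refers to.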
Li, Jing; Wang, Min-Yan; Zhang, Jian; He, Wan-Qing; Nie, Lei; Shao, Xia
2013-12-01
VOCs emission from petrochemical storage tanks is one of the important emission sources in the petrochemical industry. To determine the VOCs emission amounts from petrochemical storage tanks, the Tanks 4.0.9d model was used to calculate the VOCs emissions from different kinds of storage tanks. As an example, VOCs emissions from a horizontal tank, a vertical fixed roof tank, an internal floating roof tank and an external floating roof tank were calculated. The handling of site meteorological information, sealing information, tank content information and unit conversion when using the Tanks 4.0.9d model in China is also discussed. The Tanks 4.0.9d model can be used as a simple and highly accurate method to estimate VOCs emissions from petrochemical storage tanks in China.
NASA Astrophysics Data System (ADS)
Durst, Phillip J.; Gray, Wendell; Trentini, Michael
2013-05-01
A simple, quantitative measure for encapsulating the autonomous capabilities of unmanned systems (UMS) has yet to be established. Current models for measuring a UMS's autonomy level require extensive, operational level testing, and provide a means for assessing the autonomy level for a specific mission/task and operational environment. A more elegant technique for quantifying autonomy using component level testing of the robot platform alone, outside of mission and environment contexts, is desirable. Using a high level framework for UMS architectures, such a model for determining a level of autonomy has been developed. The model uses a combination of developmental and component level testing for each aspect of the UMS architecture to define a non-contextual autonomous potential (NCAP). The NCAP provides an autonomy level, ranging from fully non-autonomous to fully autonomous, in the form of a single numeric parameter describing the UMS's performance capabilities when operating at that level of autonomy.
TRACING THE EVOLUTION OF HIGH-REDSHIFT GALAXIES USING STELLAR ABUNDANCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crosby, Brian D.; O’Shea, Brian W.; Beers, Timothy C.
2016-03-20
This paper presents the first results from a model for chemical evolution that can be applied to N-body cosmological simulations and quantitatively compared to measured stellar abundances from large astronomical surveys. This model convolves the chemical yield sets from a range of stellar nucleosynthesis calculations (including asymptotic giant branch stars, Type Ia and II supernovae, and stellar wind models) with a user-specified stellar initial mass function (IMF) and metallicity to calculate the time-dependent chemical evolution model for a “simple stellar population” (SSP) of uniform metallicity and formation time. These SSP models are combined with a semianalytic model for galaxy formation and evolution that uses merger trees from N-body cosmological simulations to track several α- and iron-peak elements for the stellar and multiphase interstellar medium components of several thousand galaxies in the early (z ≥ 6) universe. The simulated galaxy population is then quantitatively compared to two complementary data sets of abundances in the Milky Way stellar halo and is capable of reproducing many of the observed abundance trends. The observed abundance ratio distributions are best reproduced with a Chabrier IMF, a chemically enriched star formation efficiency of 0.2, and a redshift of reionization of 7. Many abundances are qualitatively well matched by our model, but our model consistently overpredicts the carbon-enhanced fraction of stars at low metallicities, likely owing to incomplete coverage of Population III stellar yields and supernova models and the lack of dust as a component of our model.
Land management in the American southwest: a state-and-transition approach to ecosystem complexity.
Bestelmeyer, Brandon T; Herrick, Jeffrey E; Brown, Joel R; Trujillo, David A; Havstad, Kris M
2004-07-01
State-and-transition models are increasingly being used to guide rangeland management. These models provide a relatively simple, management-oriented way to classify land condition (state) and to describe the factors that might cause a shift to another state (a transition). There are many formulations of state-and-transition models in the literature. The version we endorse does not adhere to any particular generalities about ecosystem dynamics, but it includes consideration of several kinds of dynamics and management response to them. In contrast to previous uses of state-and-transition models, we propose that models can, at present, be most effectively used to specify and qualitatively compare the relative benefits and potential risks of different management actions (e.g., fire and grazing) and other factors (e.g., invasive species and climate change) on specified areas of land. High spatial and temporal variability and complex interactions preclude the meaningful use of general quantitative models. Forecasts can be made on a case-by-case basis by interpreting qualitative and quantitative indicators, historical data, and spatially structured monitoring data based on conceptual models. We illustrate how science-based conceptual models are created using several rangeland examples that vary in complexity. In doing so, we illustrate the implications of designating plant communities and states in models, accounting for varying scales of pattern in vegetation and soils, interpreting the presence of plant communities on different soils, and dealing with our uncertainty about how those communities were assembled and how they will change in the future. We conclude with observations about how models have helped to improve management decision-making.
Renormalization of spin excitations in hexagonal HoMnO3 by magnon-phonon coupling
NASA Astrophysics Data System (ADS)
Kim, Taehun; Leiner, Jonathan C.; Park, Kisoo; Oh, Joosung; Sim, Hasung; Iida, Kazuki; Kamazawa, Kazuya; Park, Je-Geun
2018-05-01
Hexagonal HoMnO3, a two-dimensional Heisenberg antiferromagnet, has been studied via inelastic neutron scattering. A simple Heisenberg model with a single-ion anisotropy describes most features of the spin-wave dispersion curves. However, there is shown to be a renormalization of the magnon energies located at around 11 meV. Since both the magnon-magnon interaction and magnon-phonon coupling can affect the renormalization in a noncollinear magnet, we have accounted for both of these couplings by using a Heisenberg XXZ model with 1/S expansions [1] and the Einstein site phonon model [13], respectively. This quantitative analysis leads to the conclusion that the renormalization effect primarily originates from the magnon-phonon coupling, while the spontaneous magnon decay due to the magnon-magnon interaction is suppressed by strong two-ion anisotropy.
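For orientation, a Heisenberg XXZ Hamiltonian with single-ion anisotropy of the generic form invoked here can be written as follows; the symbols J, Δ and D are generic placeholders, not the fitted values for HoMnO3, and the sum runs over nearest-neighbour bonds:

```latex
\mathcal{H} = J \sum_{\langle i,j \rangle}
    \left( S_i^x S_j^x + S_i^y S_j^y + \Delta\, S_i^z S_j^z \right)
  + D \sum_i \left( S_i^z \right)^2
```

The XXZ exchange anisotropy Δ and the single-ion term D together set the magnon gaps on which the 1/S and magnon-phonon corrections act.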
On agent-based modeling and computational social science.
Conte, Rosaria; Paolucci, Mario
2014-01-01
In the first part of the paper, the field of agent-based modeling (ABM) is discussed focusing on the role of generative theories, aiming at explaining phenomena by growing them. After a brief analysis of the major strengths of the field some crucial weaknesses are analyzed. In particular, the generative power of ABM is found to have been underexploited, as the pressure for simple recipes has prevailed and shadowed the application of rich cognitive models. In the second part of the paper, the renewal of interest for Computational Social Science (CSS) is focused upon, and several of its variants, such as deductive, generative, and complex CSS, are identified and described. In the concluding remarks, an interdisciplinary variant, which takes after ABM, reconciling it with the quantitative one, is proposed as a fundamental requirement for a new program of the CSS.
Scaling and efficiency determine the irreversible evolution of a market
Baldovin, F.; Stella, A. L.
2007-01-01
In setting up a stochastic description of the time evolution of a financial index, the challenge consists in devising a model compatible with all stylized facts emerging from the analysis of financial time series and providing a reliable basis for simulating such series. Based on constraints imposed by market efficiency and on an inhomogeneous-time generalization of standard simple scaling, we propose an analytical model which accounts simultaneously for empirical results like the linear decorrelation of successive returns, the power law dependence on time of the volatility autocorrelation function, and the multiscaling associated to this dependence. In addition, our approach gives a justification and a quantitative assessment of the irreversible character of the index dynamics. This irreversibility enters as a key ingredient in a novel simulation strategy of index evolution which demonstrates the predictive potential of the model.
A two-dimensional model of water: Theory and computer simulations
NASA Astrophysics Data System (ADS)
Urbič, T.; Vlachy, V.; Kalyuzhnyi, Yu. V.; Southall, N. T.; Dill, K. A.
2000-02-01
We develop an analytical theory for a simple model of liquid water. We apply Wertheim's thermodynamic perturbation theory (TPT) and integral equation theory (IET) for associative liquids to the MB model, which is among the simplest models of water. Water molecules are modeled as 2-dimensional Lennard-Jones disks with three hydrogen bonding arms arranged symmetrically, resembling the Mercedes-Benz (MB) logo. The MB model qualitatively predicts both the anomalous properties of pure water and the anomalous solvation thermodynamics of nonpolar molecules. IET is based on the orientationally averaged version of the Ornstein-Zernike equation. This is one of the main approximations in the present work. IET correctly predicts the pair correlation function of the model water at high temperatures. Both TPT and IET are in semi-quantitative agreement with the Monte Carlo values of the molar volume, isothermal compressibility, thermal expansion coefficient, and heat capacity. A major advantage of these theories is that they require orders of magnitude less computer time than the Monte Carlo simulations.
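The MB pair potential has a simple generic form: a Lennard-Jones term plus a Gaussian-weighted hydrogen-bond term summed over the 3 × 3 pairs of arms. The sketch below uses that generic form with invented parameters (not the paper's values), and assumes the line joining the two disk centres lies along the x-axis, so arm alignment reduces to cosines of the arm angles.

```python
import math

# Illustrative parameters -- not the fitted values from the paper
EPS_LJ, SIG_LJ = 0.1, 0.7   # Lennard-Jones well depth and diameter
EPS_HB, R_HB = 1.0, 1.0     # hydrogen-bond strength and optimal separation
SIGMA = 0.085               # Gaussian width for distance/alignment deviations

def gauss(x):
    return math.exp(-x * x / (2 * SIGMA ** 2))

def pair_energy(r, phi_i, phi_j):
    """Sketch of a Mercedes-Benz pair potential.

    r is the centre-centre distance; phi_i, phi_j orient the two disks.
    Each disk has three arms 120 degrees apart; a bond forms when an arm
    of disk i points at disk j (cos ~ +1) while an arm of disk j points
    back (cos ~ -1), at separation near R_HB.
    """
    lj = 4 * EPS_LJ * ((SIG_LJ / r) ** 12 - (SIG_LJ / r) ** 6)
    hb = 0.0
    for ki in range(3):
        for kj in range(3):
            ai = phi_i + ki * 2 * math.pi / 3
            aj = phi_j + kj * 2 * math.pi / 3
            hb += -EPS_HB * gauss(r - R_HB) \
                  * gauss(math.cos(ai) - 1) * gauss(math.cos(aj) + 1)
    return lj + hb

# perfectly aligned hydrogen bond vs. a misaligned pair
print(round(pair_energy(1.0, 0.0, math.pi), 3))  # -1.042 with these toy parameters
```

Monte Carlo sampling of many such disks reproduces the model's water-like anomalies; the TPT and IET theories in the paper approximate the same pair energy analytically.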
NASA Astrophysics Data System (ADS)
Liu, Yang; Uttam, Shikhar; Pham, Hoa V.; Hartman, Douglas J.
2017-02-01
Pathology remains the gold standard for cancer diagnosis and in some cases prognosis, in which trained pathologists examine abnormality in tissue architecture and cell morphology characteristic of cancer cells with a bright-field microscope. The limited resolution of a conventional microscope can result in intra-observer variation, missed early-stage cancers, and indeterminate cases that often result in unnecessary invasive procedures in the absence of cancer. Assessment of nanoscale structural characteristics via quantitative phase imaging represents a promising strategy for identifying pre-cancerous or cancerous cells, due to its nanoscale sensitivity to optical path length, simple sample preparation (i.e., label-free) and low cost. I will present the development of quantitative phase microscopy systems in transmission and reflection configurations to detect structural changes in nuclear architecture that are not easily identifiable by conventional pathology. Specifically, we will present the use of transmission-mode quantitative phase imaging to improve the diagnostic accuracy of urine cytology, showing that nuclear dry mass correlates progressively with negative, atypical, suspicious and positive cytological diagnoses. In a second application, we will present the use of reflection-mode quantitative phase microscopy for depth-resolved nanoscale nuclear architecture mapping (nanoNAM) of clinically prepared formalin-fixed, paraffin-embedded tissue sections. We demonstrate that the quantitative phase microscopy system detects a gradual increase in the density alteration of nuclear architecture during malignant transformation in animal models of colon carcinogenesis and in human patients with ulcerative colitis, even in tissue that appears histologically normal according to pathologists. We evaluate the ability of nanoNAM to predict future cancer progression in patients with ulcerative colitis.
BioModels Database: a repository of mathematical models of biological processes.
Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas
2013-01-01
BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to correspond to the original publication and manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are accepted in SBML and CellML formats, stored in SBML, and available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from the selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are available in BioModels Database at regular releases, about every 4 months.
USDA-ARS?s Scientific Manuscript database
Grey Leaf Spot (GLS) is a detrimental disease of perennial ryegrass caused by a host-specialized form of Magnaporthe oryzae (Mot). In order to improve turf management, a quantitative loop-mediated isothermal amplification (LAMP) assay coupled with a simple spore trap is being developed to monitor GL...
Predicting perturbation patterns from the topology of biological networks.
Santolini, Marc; Barabási, Albert-László
2018-06-20
High-throughput technologies, offering an unprecedented wealth of quantitative data underlying the makeup of living systems, are changing biology. Notably, the systematic mapping of the relationships between biochemical entities has fueled the rapid development of network biology, offering a suitable framework to describe disease phenotypes and predict potential drug targets. However, our ability to develop accurate dynamical models remains limited, due in part to the limited knowledge of the kinetic parameters underlying these interactions. Here, we explore the degree to which we can make reasonably accurate predictions in the absence of the kinetic parameters. We find that simple dynamically agnostic models are sufficient to recover the strength and sign of the biochemical perturbation patterns observed in 87 biological models for which the underlying kinetics are known. Surprisingly, a simple distance-based model achieves 65% accuracy. We show that this predictive power is robust to topological and kinetic parameter perturbations, and we identify key network properties that can increase the recovery rate of the true perturbation patterns to up to 80%. We validate our approach using experimental data on the chemotactic pathway in bacteria, finding that a network model of perturbation spreading predicts with ∼80% accuracy the directionality of gene expression and phenotype changes in knock-out and overproduction experiments. These findings show that the steady advances in mapping out the topology of biochemical interaction networks open avenues for accurate perturbation spread modeling, with direct implications for medicine and drug development.
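The "simple distance-based model" mentioned above lends itself to a compact sketch: predict that a perturbation's influence decays with network distance from the perturbed node. The code below implements only that distance-decay idea, with an assumed exponential decay factor; the paper's actual scoring and validation details differ.

```python
from collections import deque

def shortest_paths(adj, source):
    """BFS shortest-path distances from `source` in an unweighted graph."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def perturbation_profile(adj, source, decay=0.5):
    """Predicted perturbation magnitude: decay ** (distance from source).

    Unreached nodes get 0. This is a hypothetical scoring rule used
    only to illustrate distance-based perturbation spreading.
    """
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    dist = shortest_paths(adj, source)
    return {n: decay ** dist[n] if n in dist else 0.0 for n in nodes}

# toy chain network A - B - C - D, perturbation applied at A
net = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(sorted(perturbation_profile(net, "A").items()))
# [('A', 1.0), ('B', 0.5), ('C', 0.25), ('D', 0.125)]
```

The appeal of such a model is exactly what the abstract emphasizes: it needs only the network topology, no kinetic parameters.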
A Proposed Benchmark Problem for Scatter Calculations in Radiographic Modelling
NASA Astrophysics Data System (ADS)
Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.
2009-03-01
Code validation is a permanent concern in computer modelling, and has been addressed repeatedly in eddy current and ultrasonic modeling. A good benchmark problem is sufficiently simple to be taken into account by various codes without strong requirements on geometry representation capabilities, focuses on few or even a single aspect of the problem at hand to facilitate interpretation and to avoid that compound errors compensate each other, yields a quantitative result, and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radiographic modeling, the prediction of scattered radiation. Many NDT applications cannot neglect scattered radiation, and the scatter calculation is thus important to faithfully simulate the inspection situation. Our benchmark problem covers the wall thickness range of 10 to 50 mm for single wall inspections, with energies ranging from 100 to 500 keV in the first stage, and up to 1 MeV with wall thicknesses up to 70 mm in the extended stage. A simple plate geometry is sufficient for this purpose, and the scatter data are compared on a photon level, without a film model, which allows for comparisons with reference codes like MCNP. We compare results of three Monte Carlo codes (McRay, Sindbad and Moderato) as well as an analytical first-order scattering code (VXI), and confront them with results obtained with MCNP. The comparison with an analytical scatter model provides insights into the application domain where this kind of approach can successfully replace Monte Carlo calculations.
López-Ferrer, Daniel; Hixson, Kim K.; Smallwood, Heather; Squier, Thomas C.; Petritis, Konstantinos; Smith, Richard D.
2009-01-01
A new method that uses immobilized trypsin concomitant with ultrasonic irradiation results in ultra-rapid digestion and thorough 18O labeling for quantitative protein comparisons. The reproducible and highly efficient method provided effective digestions in <1 min with a minimized amount of enzyme required compared to traditional methods. This method was demonstrated for digestion of both simple and complex protein mixtures, including bovine serum albumin, a global proteome extract from the bacteria Shewanella oneidensis, and mouse plasma, as well as 18O labeling of such complex protein mixtures, which validated the application of this method for differential proteomic measurements. This approach is simple, reproducible, cost effective, rapid, and thus well-suited for automation. PMID:19555078
Xiao, Linda; Alder, Rhiannon; Mehta, Megha; Krayem, Nadine; Cavasinni, Bianca; Laracy, Sean; Cameron, Shane; Fu, Shanlin
2018-04-01
Cocaine trafficking in the form of textile impregnation is routinely encountered as a concealment method. Raman spectroscopy has been a popular and successful testing method used for in situ screening of cocaine in textiles and other matrices. Quantitative analysis of cocaine in these matrices using Raman spectroscopy has not been reported to date. This study aimed to develop a simple Raman method for quantifying cocaine using atropine as the model analogue in various types of textiles. Textiles were impregnated with solutions of atropine in methanol. The impregnated atropine was extracted using less hazardous acidified water with the addition of potassium thiocyanate (KSCN) as an internal standard for Raman analysis. Despite the presence of background matrix signals arising from the textiles, the cocaine analogue could easily be identified by its characteristic Raman bands. The successful use of KSCN normalised the analyte signal response across different textile matrix background interferences and thus removed the need for a matrix-matched calibration. The method was linear over a concentration range of 6.25-37.5 mg/cm² with a coefficient of determination (R²) of 0.975 and acceptable precision and accuracy. A simple and accurate Raman spectroscopy method for the analysis and quantification of a cocaine analogue impregnated in textiles has been developed and validated for the first time. This proof-of-concept study has demonstrated that atropine can act as an ideal model compound to study the problem of cocaine impregnation in textiles. The method has the potential to be further developed and implemented in real-world forensic cases. Copyright © 2017 John Wiley & Sons, Ltd.
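The calibration workflow in this abstract — normalise the analyte band by the KSCN internal-standard band, then fit a line over the 6.25-37.5 mg/cm² range — can be sketched as follows. All intensity numbers below are invented for illustration; only the normalisation-then-regression structure comes from the abstract.

```python
def fit_line(x, y):
    """Ordinary least squares for y = a*x + b; also returns R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

# concentration (mg/cm^2), analyte band intensity, KSCN band intensity
conc = [6.25, 12.5, 18.75, 25.0, 31.25, 37.5]
analyte = [210.0, 400.0, 630.0, 850.0, 1040.0, 1260.0]
kscn = [100.0, 98.0, 102.0, 101.0, 99.0, 100.0]

# internal-standard normalisation cancels matrix-dependent scaling
norm = [a / s for a, s in zip(analyte, kscn)]
slope, intercept, r2 = fit_line(conc, norm)

# back-calculate an unknown sample from its normalised response
unknown = (9.0 - intercept) / slope
print(round(slope, 3), round(r2, 4))  # 0.338 0.9998
```

The point of dividing by the KSCN band is visible in the code: any multiplicative matrix effect hits both bands equally and cancels, so one calibration line serves all textiles.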
A simple microstructure return model explaining microstructure noise and Epps effects
NASA Astrophysics Data System (ADS)
Saichev, A.; Sornette, D.
2014-01-01
We present a novel simple microstructure model of financial returns that combines (i) the well-known ARFIMA process applied to tick-by-tick returns, (ii) the bid-ask bounce effect, (iii) the fat tail structure of the distribution of returns and (iv) the non-Poissonian statistics of inter-trade intervals. This model allows us to explain both qualitatively and quantitatively important stylized facts observed in the statistics of both microstructure and macrostructure returns, including the short-ranged correlation of returns, the long-ranged correlations of absolute returns, the microstructure noise and Epps effects. According to the microstructure noise effect, volatility is a decreasing function of the time-scale used to estimate it. The Epps effect states that cross correlations between asset returns are increasing functions of the time-scale at which the returns are estimated. The microstructure noise is explained as the result of the negative return correlations inherent in the definition of the bid-ask bounce component (ii). In the presence of a genuine correlation between the returns of two assets, the Epps effect is due to an average statistical overlap of the momentum of the returns of the two assets defined over a finite time-scale in the presence of the long memory process (i).
Kar, Supratik; Gajewicz, Agnieszka; Puzyn, Tomasz; Roy, Kunal; Leszczynski, Jerzy
2014-09-01
Nanotechnology has evolved as a frontrunner in the development of modern science. Current studies have established toxicity of some nanoparticles to humans and the environment. Lack of sufficient data and low adequacy of experimental protocols hinder comprehensive risk assessment of nanoparticles (NPs). In the present work, metal electronegativity (χ), the charge of the metal cation corresponding to a given oxide (χox), atomic number and valence electron number of the metal have been used as simple molecular descriptors to build quantitative structure-toxicity relationship (QSTR) models for prediction of cytotoxicity of metal oxide NPs to the bacteria Escherichia coli. These descriptors can be obtained readily from the molecular formula and the periodic table. It has been shown that a simple molecular descriptor, χox, can efficiently encode the cytotoxicity of metal oxides, leading to models with high statistical quality as well as interpretability. Based on this model and previously published experimental results, we have hypothesized the most probable mechanism of the cytotoxicity of metal oxide nanoparticles to E. coli. Moreover, the required information for descriptor calculation is independent of the size range of NPs, nullifying a significant problem that various physical properties of NPs change for different size ranges. Copyright © 2014 Elsevier Inc. All rights reserved.
Direct identification of predator-prey dynamics in gyrokinetic simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kobayashi, Sumire, E-mail: sumire.kobayashi@lpp.polytechnique.fr; Gürcan, Özgür D; Diamond, Patrick H.
2015-09-15
The interaction between spontaneously formed zonal flows and small-scale turbulence in nonlinear gyrokinetic simulations is explored in a shearless closed field line geometry. It is found that when clear limit cycle oscillations prevail, the observed turbulent dynamics can be quantitatively captured by a simple Lotka-Volterra type predator-prey model. Fitting the time traces of full gyrokinetic simulations by such a reduced model allows extraction of the model coefficients. Scanning physical plasma parameters, such as collisionality and density gradient, it was observed that the effective growth rates of turbulence (i.e., the prey) remain roughly constant, in spite of the higher and varying level of primary mode linear growth rates. The effective growth rate that was extracted corresponds roughly to the zonal-flow-modified primary mode growth rate. It was also observed that the effective damping of zonal flows (i.e., the predator) in the parameter range where clear predator-prey dynamics is observed (i.e., near marginal stability) agrees with the collisional damping expected in these simulations. This implies that the Kelvin-Helmholtz-like instability may be negligible in this range. The results imply that when the tertiary instability plays a role, the dynamics becomes more complex than a simple Lotka-Volterra predator-prey model.
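A minimal version of the Lotka-Volterra predator-prey system invoked above, with turbulence intensity as the prey and zonal-flow energy as the predator. The coefficients here are arbitrary placeholders chosen only to show the limit-cycle-like oscillation; in the paper they are extracted by fitting gyrokinetic time traces.

```python
def lotka_volterra(n0, f0, growth, capture, conversion, damping,
                   dt=0.001, steps=20000):
    """Forward-Euler integration of a Lotka-Volterra predator-prey pair.

    n: turbulence intensity (prey); f: zonal-flow energy (predator).
      dn/dt = (growth - capture * f) * n
      df/dt = (conversion * n - damping) * f
    """
    n, f = n0, f0
    traj = []
    for _ in range(steps):
        dn = (growth - capture * f) * n
        df = (conversion * n - damping) * f
        n += dn * dt
        f += df * dt
        traj.append((n, f))
    return traj

traj = lotka_volterra(n0=1.0, f0=0.5, growth=1.0, capture=1.0,
                      conversion=1.0, damping=1.0)

# count local maxima of the prey: repeated peaks indicate oscillation
peaks = [t for t in range(1, len(traj) - 1)
         if traj[t][0] > traj[t - 1][0] and traj[t][0] > traj[t + 1][0]]
print(len(peaks) >= 2)  # prints True: the prey oscillates
```

Fitting such a system to simulation time traces, as the paper does, amounts to choosing (growth, capture, conversion, damping) so that the integrated trajectories match the observed n(t) and f(t).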
Test Driven Development of a Parameterized Ice Sheet Component
NASA Astrophysics Data System (ADS)
Clune, T.
2011-12-01
Test driven development (TDD) is a software development methodology that offers many advantages over traditional approaches including reduced development and maintenance costs, improved reliability, and superior design quality. Although TDD is widely accepted in many software communities, the suitability to scientific software is largely undemonstrated and warrants a degree of skepticism. Indeed, numerical algorithms pose several challenges to unit testing in general, and TDD in particular. Among these challenges are the need to have simple, non-redundant closed-form expressions to compare against the results obtained from the implementation as well as realistic error estimates. The necessity for serial and parallel performance raises additional concerns for many scientific applications. In previous work I demonstrated that TDD performed well for the development of a relatively simple numerical model that simulates the growth of snowflakes, but the results were anecdotal and of limited relevance to far more complex software components typical of climate models. This investigation has now been extended by successfully applying TDD to the implementation of a substantial portion of a new parameterized ice sheet component within a full climate model. After a brief introduction to TDD, I will present techniques that address some of the obstacles encountered with numerical algorithms. I will conclude with some quantitative and qualitative comparisons against climate components developed in a more traditional manner.
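The TDD pattern described here — testing a numerical kernel against a simple closed-form expression with a realistic error estimate — might look like the following sketch. The decay kernel is an invented stand-in for illustration, not the ice sheet code.

```python
import math

def euler_decay(y0, k, t, n_steps):
    """Forward-Euler integration of dy/dt = -k * y.

    A stand-in numerical kernel: simple enough that the closed form
    y0 * exp(-k * t) exists, which is exactly what TDD needs to
    compare against, together with a realistic error bound for a
    first-order scheme.
    """
    dt = t / n_steps
    y = y0
    for _ in range(n_steps):
        y += -k * y * dt
    return y

# Tests written first, TDD-style, against the closed-form expression.
def test_matches_closed_form():
    exact = 2.0 * math.exp(-0.5)  # y0=2, k=0.5, t=1
    # forward Euler is first order, so with dt = 1e-4 an absolute
    # tolerance of 1e-3 is a realistic (not wishful) error estimate
    assert abs(euler_decay(2.0, 0.5, 1.0, 10000) - exact) < 1e-3

def test_refinement_reduces_error():
    # convergence check: error should shrink as the step size does
    exact = 2.0 * math.exp(-0.5)
    coarse = abs(euler_decay(2.0, 0.5, 1.0, 100) - exact)
    fine = abs(euler_decay(2.0, 0.5, 1.0, 1000) - exact)
    assert fine < coarse

test_matches_closed_form()
test_refinement_reduces_error()
print("ok")
```

The two tests encode the challenges the abstract names: a closed-form reference and a defensible error tolerance, so a refactored kernel can be swapped in with confidence.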
Lande, R
2014-05-01
Quantitative genetic models of evolution of phenotypic plasticity are used to derive environmental tolerance curves for a population in a changing environment, providing a theoretical foundation for integrating physiological and community ecology with evolutionary genetics of plasticity and norms of reaction. Plasticity is modelled for a labile quantitative character undergoing continuous reversible development and selection in a fluctuating environment. If there is no cost of plasticity, a labile character evolves expected plasticity equalling the slope of the optimal phenotype as a function of the environment. This contrasts with previous theory for plasticity influenced by the environment at a critical stage of early development determining a constant adult phenotype on which selection acts, for which the expected plasticity is reduced by the environmental predictability over the discrete time lag between development and selection. With a cost of plasticity in a labile character, the expected plasticity depends on the cost and on the environmental variance and predictability averaged over the continuous developmental time lag. Environmental tolerance curves derived from this model confirm traditional assumptions in physiological ecology and provide new insights. Tolerance curve width increases with larger environmental variance, but can only evolve within a limited range. The strength of the trade-off between tolerance curve height and width depends on the cost of plasticity. Asymmetric tolerance curves caused by male sterility at high temperature are illustrated. A simple condition is given for a large transient increase in plasticity and tolerance curve width following a sudden change in average environment. © 2014 The Author. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.
The neural optimal control hierarchy for motor control
NASA Astrophysics Data System (ADS)
DeWolf, T.; Eliasmith, C.
2011-10-01
Our empirical, neuroscientific understanding of biological motor systems has been rapidly growing in recent years. However, this understanding has not been systematically mapped to a quantitative characterization of motor control based in control theory. Here, we attempt to bridge this gap by describing the neural optimal control hierarchy (NOCH), which can serve as a foundation for biologically plausible models of neural motor control. The NOCH has been constructed by taking recent control theoretic models of motor control, analyzing the required processes, generating neurally plausible equivalent calculations and mapping them on to the neural structures that have been empirically identified to form the anatomical basis of motor control. We demonstrate the utility of the NOCH by constructing a simple model based on the identified principles and testing it in two ways. First, we perturb specific anatomical elements of the model and compare the resulting motor behavior with clinical data in which the corresponding area of the brain has been damaged. We show that damaging the assigned functions of the basal ganglia and cerebellum can cause the movement deficiencies seen in patients with Huntington's disease and cerebellar lesions. Second, we demonstrate that single spiking neuron data from our model's motor cortical areas explain major features of single-cell responses recorded from the same primate areas. We suggest that together these results show how NOCH-based models can be used to unify a broad range of data relevant to biological motor control in a quantitative, control theoretic framework.
Genome Scale Modeling in Systems Biology: Algorithms and Resources
Najafi, Ali; Bidkhori, Gholamreza; Bozorgmehr, Joseph H.; Koch, Ina; Masoudi-Nejad, Ali
2014-01-01
In recent years, in silico studies and trial simulations have complemented experimental procedures. A model is a description of a system, and a system is any collection of interrelated objects; an object, moreover, is some elemental unit upon which observations can be made but whose internal structure either does not exist or is ignored. Therefore, any network analysis approach is critical for successful quantitative modeling of biological systems. This review highlights some of the most popular and important modeling algorithms, tools, and emerging standards for representing, simulating and analyzing cellular networks, in five sections. We also illustrate these concepts by means of simple examples and appropriate images and graphs. Overall, systems biology aims for a holistic description and understanding of biological processes by an integration of analytical experimental approaches along with synthetic computational models. In fact, biological networks have been developed as a platform for integrating information from high- to low-throughput experiments for the analysis of biological systems. We provide an overview of all processes used in modeling and simulating biological networks in such a way that they can become easily understandable for researchers with both biological and mathematical backgrounds. Consequently, given the complexity of generated experimental data and cellular networks, it is no surprise that researchers have turned to computer simulation and the development of more theory-based approaches to augment and assist in the development of a fully quantitative understanding of cellular dynamics. PMID:24822031
Quantitative estimation of pesticide-likeness for agrochemical discovery.
Avram, Sorin; Funar-Timofei, Simona; Borota, Ana; Chennamaneni, Sridhar Rao; Manchala, Anil Kumar; Muresan, Sorel
2014-12-01
The design of chemical libraries, an early step in agrochemical discovery programs, is frequently addressed by means of qualitative physicochemical and/or topological rule-based methods. The aim of this study is to develop quantitative estimates of herbicide- (QEH), insecticide- (QEI), fungicide- (QEF), and, finally, pesticide-likeness (QEP). In building these definitions, we relied on the concept of desirability functions. We found a simple function, shared by the three classes of pesticides but parameterized individually, for six easy-to-compute, independent and interpretable molecular properties: molecular weight, logP, number of hydrogen bond acceptors, number of hydrogen bond donors, number of rotatable bonds and number of aromatic rings. Subsequently, we describe the scoring of each pesticide class by the corresponding quantitative estimate. In a comparative study, we assessed the performance of the scoring functions using extensive datasets of patented pesticides. The quantitative assessment established here can rank compounds whether or not they fail well-established pesticide-likeness rules, and offers an efficient way to prioritize (class-specific) pesticides. These findings are valuable for the efficient estimation of pesticide-likeness of vast chemical libraries in the field of agrochemical discovery. Graphical Abstract: Quantitative models for pesticide-likeness were derived using the concept of desirability functions parameterized for six easy-to-compute, independent and interpretable molecular properties: molecular weight, logP, number of hydrogen bond acceptors, number of hydrogen bond donors, number of rotatable bonds and number of aromatic rings.
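As an illustration of the desirability-function approach described above, the sketch below scores a compound as the geometric mean of per-property desirabilities over the six named properties. Everything numeric here (the Gaussian functional form and the optima/tolerances in `PARAMS`) is a hypothetical placeholder, not the parameterization derived in the study:

```python
import math

# Hypothetical Gaussian-shaped desirability for one property:
# d(x) = exp(-(x - opt)^2 / (2 * tol^2)), in (0, 1].
def desirability(x, opt, tol):
    return math.exp(-((x - opt) ** 2) / (2.0 * tol ** 2))

# Illustrative (not published) optima/tolerances for the six properties
# named in the abstract: MW, logP, HBA, HBD, rotatable bonds, aromatic rings.
PARAMS = {
    "mw":   (300.0, 120.0),
    "logp": (3.0, 1.5),
    "hba":  (3.0, 2.0),
    "hbd":  (1.0, 1.0),
    "rotb": (4.0, 3.0),
    "arom": (2.0, 1.0),
}

def qep(props):
    """Quantitative estimate of pesticide-likeness: geometric mean of
    the six per-property desirabilities."""
    ds = [desirability(props[k], *PARAMS[k]) for k in PARAMS]
    return math.exp(sum(math.log(d) for d in ds) / len(ds))
```

A compound sitting at every optimum scores 1.0, and the score decays smoothly toward 0 as properties drift away, which is what allows such an estimate to rank compounds even when they fail hard rule-based cutoffs.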
Al Feteisi, Hajar; Achour, Brahim; Rostami-Hodjegan, Amin; Barber, Jill
2015-01-01
Drug-metabolizing enzymes and transporters play an important role in drug absorption, distribution, metabolism and excretion and, consequently, they influence drug efficacy and toxicity. Quantification of drug-metabolizing enzymes and transporters in various tissues is therefore essential for comprehensive elucidation of drug absorption, distribution, metabolism and excretion. Recent advances in liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) have improved the quantification of pharmacologically relevant proteins. This report presents an overview of mass spectrometry-based methods currently used for the quantification of drug-metabolizing enzymes and drug transporters, mainly focusing on applications and cost associated with various quantitative strategies based on stable isotope-labeled standards (absolute quantification peptide standards, quantification concatemers, protein standards for absolute quantification) and label-free analysis. In mass spectrometry, there is no simple relationship between signal intensity and analyte concentration. Proteomic strategies are therefore complex and several factors need to be considered when selecting the most appropriate method for an intended application, including the number of proteins and samples. Quantitative strategies require appropriate mass spectrometry platforms, yet choice is often limited by the availability of appropriate instrumentation. Quantitative proteomics research requires specialist practical skills and there is a pressing need to dedicate more effort and investment to training personnel in this area. Large-scale multicenter collaborations are also needed to standardize quantitative strategies in order to improve physiologically based pharmacokinetic models.
Quantitative assessment of Pb sources in isotopic mixtures using a Bayesian mixing model.
Longman, Jack; Veres, Daniel; Ersek, Vasile; Phillips, Donald L; Chauvel, Catherine; Tamas, Calin G
2018-04-18
Lead (Pb) isotopes provide valuable insights into the origin of Pb within a sample, typically allowing for reliable fingerprinting of its source. This is useful for a variety of applications, from tracing sources of pollution-related Pb to the origins of Pb in archaeological artefacts. However, current approaches investigate source proportions via graphical means or simple mixing models. An approach that quantitatively assesses source proportions and fingerprints the signature of analysed Pb, especially for larger numbers of sources, would therefore be valuable. Here we use an advanced Bayesian isotope mixing model for three such applications: tracing dust sources in pre-anthropogenic environmental samples, tracking changing ore exploitation during the Roman period, and identifying the source of Pb in a Roman-age mining artefact. These examples indicate that this approach can resolve changing Pb sources deposited both during pre-anthropogenic times, when natural cycling of Pb dominated, and during the Roman period, one marked by significant anthropogenic pollution. Our archaeometric investigation indicates clear input of Pb from Romanian ores previously speculated, but not proven, to have been the Pb source. Our approach can be applied across a range of disciplines, providing a new method for robustly tracing the sources of Pb observed in a variety of environments.
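The study itself uses an advanced Bayesian mixing model; as a much simpler deterministic stand-in, the sketch below grid-searches the source proportions on the simplex that best reproduce a sample's Pb isotope ratios as a linear mixture. The three source signatures are invented for illustration:

```python
# Simplified, non-Bayesian stand-in for isotope source apportionment:
# find mixing proportions (p1, p2, p3 summing to 1) that best reproduce
# a sample's (206Pb/204Pb, 208Pb/204Pb) ratios as a linear mixture.
# The three source signatures below are invented for illustration.
SOURCES = {
    "dust":      (18.9, 39.1),
    "roman_ore": (18.3, 38.2),
    "local":     (18.6, 38.7),
}

def mix(p, sigs):
    return tuple(sum(pi * s[j] for pi, s in zip(p, sigs)) for j in range(2))

def best_proportions(sample, step=0.01):
    names = list(SOURCES)
    sigs = [SOURCES[n] for n in names]
    best, best_err = (1.0, 0.0, 0.0), float("inf")
    i = 0.0
    while i <= 1.0 + 1e-9:          # grid over the 2-simplex
        j = 0.0
        while i + j <= 1.0 + 1e-9:
            p = (i, j, 1.0 - i - j)
            err = sum((a - b) ** 2 for a, b in zip(mix(p, sigs), sample))
            if err < best_err:
                best, best_err = p, err
            j += step
        i += step
    return dict(zip(names, best))
```

A proper Bayesian treatment, as used in the paper, would additionally place priors on the proportions and report full posterior distributions rather than this single best-fit point.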
NASA Astrophysics Data System (ADS)
Cao, Qian; Thawait, Gaurav; Gang, Grace J.; Zbijewski, Wojciech; Reigel, Thomas; Brown, Tyler; Corner, Brian; Demehri, Shadpour; Siewerdsen, Jeffrey H.
2015-02-01
Joint space morphology can be indicative of the risk, presence, progression, and/or treatment response of disease or trauma. We describe a novel methodology of characterizing joint space morphology in high-resolution 3D images (e.g. cone-beam CT (CBCT)) using a model based on elementary electrostatics that overcomes a variety of basic limitations of existing 2D and 3D methods. The method models each surface of a joint as a conductor at fixed electrostatic potential and characterizes the intra-articular space in terms of the electric field lines resulting from the solution of Gauss’ Law and the Laplace equation. As a test case, the method was applied to discrimination of healthy and osteoarthritic subjects (N = 39) in 3D images of the knee acquired on an extremity CBCT system. The method demonstrated improved diagnostic performance (area under the receiver operating characteristic curve, AUC > 0.98) compared to simpler methods of quantitative measurement and qualitative image-based assessment by three expert musculoskeletal radiologists (AUC = 0.87, p-value = 0.007). The method is applicable to simple (e.g. the knee or elbow) or multi-axial joints (e.g. the wrist or ankle) and may provide a useful means of quantitatively assessing a variety of joint pathologies.
NASA Astrophysics Data System (ADS)
Galanzha, Ekaterina I.; Tuchin, Valery V.; Chowdhury, Parimal; Zharov, Vladimir P.
2004-08-01
Digital transmission microscopy is an informative, noninvasive, simple and widely available method for studying and measuring lymph microvessel function in vivo, and the rat mesentery can be used as a promising in vivo animal model of lymph microvessels. Such an imaging system allowed visualizing the entire lymphangion (with input and output valves), its wall, lymphatic valves, lymph flow as well as single cells in flow; it also provided new basic information on lymph microcirculation and quantitative data on lymphatic function, including indexes of phasic contractions and valve function and quantitative parameters of lymph-flow velocity. The rat mesentery is a good model for creating different types of lymphedemas in acute and chronic experiments. The obtained data revealed that significant edema started immediately after lymph node dissection in one-half of cases and was accompanied by lymphatic disturbances. The greatest degree of edema was found after 1 week. After 4 weeks, the degree of edema sometimes decreased, but functional lymphatic disturbances progressed. Nicotine had a significant, direct, dose-dependent effect on microlymphatic function upon acute local application, but the same dose of the drug had no effect on microcirculation under chronic intoxication. Despite yielding interesting data, transmittance microscopy had some limitations when applied to microcirculation studies; these problems could be solved by applying an integrated measuring technique.
Kusunose, Jiro; Zhang, Hua; Gagnon, M. Karen J.; Pan, Tingrui; Simon, Scott I.; Ferrara, Katherine W.
2012-01-01
The identification of novel, synthetic targeting ligands to endothelial receptors has led to the rapid development of targeted nanoparticles for drug, gene and imaging probe delivery. Central to development and optimization are effective models for assessing particle binding in vitro. Here, we developed a simple and cost-effective method to quantitatively assess nanoparticle accumulation under physiologically relevant laminar flow. We designed reversibly vacuum-sealed PDMS microfluidic chambers compatible with 35 mm petri dishes, which deliver uniform or gradient shear stress. These chambers have sufficient surface area for facile cell collection, enabling quantitation of particle accumulation by FACS. We tested this model by synthesizing and flowing liposomes coated with APN- (KD ~ 300 µM) and VCAM-1-targeting (KD ~ 30 µM) peptides over HUVEC. Particle binding significantly increased with ligand concentration (up to 6 mol%) and decreased with excess PEG. While the accumulation of particles with the lower-affinity ligand decreased with shear, accumulation of those with the higher-affinity ligand was highest in a low-shear environment (2.4 dyne/cm²), as compared with greater shear or the absence of shear. We describe here a robust flow chamber model that is applied to optimize the properties of 100 nm liposomes targeted to inflamed endothelium. PMID:22855121
Rapid Quantitative Detection of Lactobacillus sakei in Meat and Fermented Sausages by Real-Time PCR
Martín, Belén; Jofré, Anna; Garriga, Margarita; Pla, Maria; Aymerich, Teresa
2006-01-01
A quick and simple method for quantitative detection of Lactobacillus sakei in fermented sausages was successfully developed. It is based on Chelex-100-based DNA purification and real-time PCR enumeration using a TaqMan fluorescence probe. Primers and probes were designed in the L. sakei 16S-23S rRNA intergenic transcribed spacer region, and the assay was evaluated using L. sakei genomic DNA and an artificially inoculated sausage model. The detection limit of this technique was approximately 3 cells per reaction mixture using both purified DNA and the inoculated sausage model. The quantification limit was established at 30 cells per reaction mixture in both models. The assay was then applied to enumerate L. sakei in real samples, and the results were compared to the MRS agar count method followed by confirmation of the percentage of L. sakei colonies. The results obtained by real-time PCR were not statistically significantly different than those obtained by plate count on MRS agar (P > 0.05), showing a satisfactory agreement between both methods. Therefore, the real-time PCR assay developed can be considered a promising rapid alternative method for the quantification of L. sakei and evaluation of the implantation of starter strains of L. sakei in fermented sausages. PMID:16957227
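Real-time PCR quantification of this kind rests on a standard curve relating the threshold cycle (Ct) to log10 of cell number. The sketch below fits such a curve by least squares and inverts it for an unknown sample; the calibration points are invented, not the paper's data:

```python
import math

# Hypothetical calibration points: (cells per reaction, measured Ct).
# A TaqMan assay is typically linear in Ct vs log10(cell number).
CALIBRATION = [(3e1, 35.1), (3e2, 31.8), (3e3, 28.4), (3e4, 25.0), (3e5, 21.7)]

def fit_standard_curve(points):
    xs = [math.log10(n) for n, _ in points]
    ys = [ct for _, ct in points]
    m = len(points)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx      # Ct = slope*log10(cells) + intercept

def cells_from_ct(ct, slope, intercept):
    return 10.0 ** ((ct - intercept) / slope)

def efficiency(slope):
    # Perfect doubling per cycle gives slope ~ -3.32 and efficiency 1.0.
    return 10.0 ** (-1.0 / slope) - 1.0
```

Inverting the curve converts a sample's Ct into cells per reaction, and the slope doubles as a routine check of amplification efficiency.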
Quantitative DIC microscopy using an off-axis self-interference approach.
Fu, Dan; Oh, Seungeun; Choi, Wonshik; Yamauchi, Toyohiko; Dorn, August; Yaqoob, Zahid; Dasari, Ramachandra R; Feld, Michael S
2010-07-15
Traditional Nomarski differential interference contrast (DIC) microscopy is a very powerful method for imaging nonstained biological samples. However, one of its major limitations is the nonquantitative nature of the imaging. To overcome this problem, we developed a quantitative DIC microscopy method based on off-axis sample self-interference. A digital holography algorithm is applied to obtain quantitative phase gradients in orthogonal directions, which leads to a quantitative phase image through a spiral integration of the phase gradients. This method is simple to implement on any standard microscope without stringent requirements on polarization optics. Optical sectioning can be obtained through an enlarged illumination NA.
Deleterious Mutations, Apparent Stabilizing Selection and the Maintenance of Quantitative Variation
Kondrashov, A. S.; Turelli, M.
1992-01-01
Apparent stabilizing selection on a quantitative trait that is not causally connected to fitness can result from the pleiotropic effects of unconditionally deleterious mutations, because as N. Barton noted, "... individuals with extreme values of the trait will tend to carry more deleterious alleles ...." We use a simple model to investigate the dependence of this apparent selection on the genomic deleterious mutation rate, U; the equilibrium distribution of K, the number of deleterious mutations per genome; and the parameters describing directional selection against deleterious mutations. Unlike previous analyses, we allow for epistatic selection against deleterious alleles. For various selection functions and realistic parameter values, the distribution of K, the distribution of breeding values for a pleiotropically affected trait, and the apparent stabilizing selection function are all nearly Gaussian. The additive genetic variance for the quantitative trait is kQa², where k is the average number of deleterious mutations per genome, Q is the proportion of deleterious mutations that affect the trait, and a² is the variance of pleiotropic effects for individual mutations that do affect the trait. In contrast, when the trait is measured in units of its additive standard deviation, the apparent fitness function is essentially independent of Q and a²; and β, the intensity of selection, measured as the ratio of additive genetic variance to the "variance" of the fitness curve, is very close to s = U/k, the selection coefficient against individual deleterious mutations at equilibrium. Therefore, this model predicts appreciable apparent stabilizing selection if s exceeds about 0.03, which is consistent with various data. However, the model also predicts that β must equal V_m/V_G, the ratio of new additive variance for the trait introduced each generation by mutation to the standing additive variance. Most, although not all, estimates of this ratio imply apparent stabilizing selection weaker than generally observed. A qualitative argument suggests that even when direct selection is responsible for most of the selection observed on a character, it may be essentially irrelevant to the maintenance of variation for the character by mutation-selection balance. Simple experiments can indicate the fraction of observed stabilizing selection attributable to the pleiotropic effects of deleterious mutations. PMID:1427047
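The model's headline quantities are simple to compute once parameters are fixed. The numbers below are illustrative placeholders, not estimates from the paper:

```python
# Illustrative parameter values (not from the paper):
U = 1.0      # genomic deleterious mutation rate
k = 25.0     # mean number of deleterious mutations per genome
Q = 0.02     # proportion of deleterious mutations affecting the trait
a2 = 0.1     # variance a^2 of pleiotropic effects per trait-affecting mutation

V_A = k * Q * a2   # additive genetic variance of the trait: k*Q*a^2
s = U / k          # equilibrium selection coefficient per mutation
beta = s           # predicted intensity of apparent stabilizing selection
```

With these numbers s = 0.04 exceeds the 0.03 threshold quoted above, so the model would predict appreciable apparent stabilizing selection.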
ERIC Educational Resources Information Center
Salinas, Dino G.; Reyes, Juan G.
2015-01-01
Qualitative questions are proposed to assess the understanding of solubility and some of its applications. To improve those results, a simple quantitative problem on the precipitation of proteins is proposed.
Fagan, William F; Lutscher, Frithjof
2006-04-01
Spatially explicit models for populations are often difficult to tackle mathematically and, in addition, require detailed data on individual movement behavior that are not easily obtained. An approximation known as the "average dispersal success" provides a tool for converting complex models, which may include stage structure and a mechanistic description of dispersal, into a simple matrix model. This simpler matrix model has two key advantages. First, it is easier to parameterize from the types of empirical data typically available to conservation biologists, such as survivorship, fecundity, and the fraction of juveniles produced in a study area that also recruit within the study area. Second, it is more amenable to theoretical investigation. Here, we use the average dispersal success approximation to develop estimates of the critical reserve size for systems comprising single patches or simple metapopulations. The quantitative approach can be used for both plants and animals; however, to provide a concrete example of the technique's utility, we focus on a special case pertinent to animals. Specifically, for territorial animals, we can characterize such an estimate of minimum viable habitat area in terms of the number of home ranges that the reserve contains. Consequently, the average dispersal success approximation provides a framework through which home range size, natal dispersal distances, and metapopulation dynamics can be linked to reserve design. We briefly illustrate the approach using empirical data for the swift fox (Vulpes velox).
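A minimal version of the average-dispersal-success reduction can be written down directly. The sketch below uses a two-stage juvenile/adult matrix in which juvenile recruitment is discounted by the average dispersal success S(L), with an exponential dispersal kernel; the kernel choice and all demographic rates are illustrative, not taken from the paper. The critical reserve size is found by bisecting for the length L at which the dominant eigenvalue crosses 1:

```python
import math

# Two-stage (juvenile/adult) sketch: juvenile recruitment is discounted by
# the average dispersal success S(L) for a 1-D reserve of length L, using an
# exponential dispersal kernel with mean distance d. All demographic rates
# and the kernel choice are illustrative, not taken from the paper.

def dispersal_success(L, d=1.0):
    # Mean over natal sites x in [0, L] of the chance of settling in [0, L]:
    # S(L) = 1 - (d/L) * (1 - exp(-L/d)) for an exponential kernel.
    return 1.0 - (d / L) * (1.0 - math.exp(-L / d))

def lambda_dom(L, fecundity=2.0, juv_surv=0.5, adult_surv=0.6, d=1.0):
    # Dominant eigenvalue of the 2x2 matrix [[0, f], [s_j * S(L), s_a]].
    tr = adult_surv
    det = -fecundity * juv_surv * dispersal_success(L, d)
    return (tr + math.sqrt(tr * tr - 4.0 * det)) / 2.0

def critical_reserve_size(lo=0.01, hi=100.0, tol=1e-6, **kw):
    # lambda(L) increases with L; bisect for the root of lambda(L) = 1.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lambda_dom(mid, **kw) < 1.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

The population persists only when the dominant eigenvalue exceeds 1, so the root of lambda(L) = 1 is the critical reserve size under these assumed rates.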
Fischer, Thomas; Fischer, Susanne; Himmel, Wolfgang; Kochen, Michael M; Hummers-Pradier, Eva
2008-01-01
The influence of patient characteristics on family practitioners' (FPs') diagnostic decision making has mainly been investigated using indirect methods such as vignettes or questionnaires. Direct observation-borrowed from social and cultural anthropology-may be an alternative method for describing FPs' real-life behavior and may help in gaining insight into how FPs diagnose respiratory tract infections, which are frequent in primary care. To clarify FPs' diagnostic processes when treating patients suffering from symptoms of respiratory tract infection. This direct observation study was performed in 30 family practices using a checklist for patient complaints, history taking, physical examination, and diagnoses. The influence of patients' symptoms and complaints on the FPs' physical examination and diagnosis was calculated by logistic regression analyses. Dummy variables based on combinations of symptoms and complaints were constructed and tested against saturated (full) and backward regression models. In total, 273 patients (median age 37 years, 51% women) were included. The median number of symptoms described was 4 per patient, and most information was provided at the patients' own initiative. Multiple logistic regression analysis showed a strong association between patients' complaints and the physical examination. Frequent diagnoses were upper respiratory tract infection (URTI)/common cold (43%), bronchitis (26%), sinusitis (12%), and tonsillitis (11%). There were no significant statistical differences between "simple heuristic" models and saturated regression models in the diagnoses of bronchitis, sinusitis, and tonsillitis, indicating that simple heuristics are probably used by the FPs, whereas "URTI/common cold" was better explained by the full model. FPs tended to make their diagnosis based on a few patient symptoms and a limited physical examination. Simple heuristic models were almost as powerful in explaining most diagnoses as saturated models. Direct observation allowed for the study of decision making under real conditions, yielding both quantitative data and "qualitative" information about the FPs' performance. It is important for investigators to be aware of the specific disadvantages of the method (e.g., a possible observer effect).
Applicability of PM3 to transphosphorylation reaction path: Toward designing a minimal ribozyme
NASA Technical Reports Server (NTRS)
Manchester, John I.; Shibata, Masayuki; Setlik, Robert F.; Ornstein, Rick L.; Rein, Robert
1993-01-01
A growing body of evidence shows that RNA can catalyze many of the reactions necessary both for replication of genetic material and for the possible transition into the modern protein-based world. However, contemporary ribozymes are too large to have self-assembled from a prebiotic oligonucleotide pool. Still, it is likely that the major features of the earliest ribozymes have been preserved as molecular fossils in the catalytic RNA of today. Therefore, the search for a minimal ribozyme has been aimed at finding the necessary structural features of a modern ribozyme (Beaudry and Joyce, 1990). Both a three-dimensional model and quantum chemical calculations are required to determine quantitatively the effects of structural features of the ribozyme on the reaction it catalyzes. Previous studies of the reaction path have been conducted at the ab initio level, but these methods are limited to small models due to enormous computational requirements. Semiempirical methods have been applied to large systems in the past, although their accuracy must first be established for the chemistry of interest. Here we apply the semiempirical PM3 method to a simple model of the ribozyme-catalyzed transphosphorylation reaction, the hydrolysis of phosphoric acid, and find that the results are qualitatively similar to ab initio results using large basis sets. Therefore, PM3 is suitable for studying the reaction path of the ribozyme-catalyzed reaction.
NASA Astrophysics Data System (ADS)
Mehta, Dalip Singh; Sharma, Anuradha; Dubey, Vishesh; Singh, Veena; Ahmad, Azeem
2016-03-01
We present single-shot white light interference microscopy for the quantitative phase imaging (QPI) of biological cells and tissues. A common-path white light interference microscope is developed, and a color white light interferogram is recorded by a three-chip color CCD camera. The recorded white light interferogram is decomposed into its red, green and blue wavelength component interferograms, which are processed to find the refractive index (RI) for the different color wavelengths. The decomposed interferograms are analyzed using a local model fitting (LMF) algorithm developed for reconstructing the phase map from a single interferogram. LMF is a slightly off-axis interferometric QPI method that employs only a single image, so it is fast and accurate. The present method is very useful for dynamic processes where the path length changes at the millisecond level. From a single interferogram, wavelength-dependent quantitative phase images of human red blood cells (RBCs) are reconstructed and the refractive index is determined. The LMF algorithm is simple to implement and computationally efficient. The results are compared with conventional phase-shifting interferometry and Hilbert transform techniques.
Kang, Jonghoon; Park, Seyeon; Venkat, Aarya; Gopinath, Adarsh
2015-12-01
New interdisciplinary biological sciences like bioinformatics, biophysics, and systems biology have become increasingly relevant in modern science. Many papers have suggested the importance of adding these subjects, particularly bioinformatics, to an undergraduate curriculum; however, most of their assertions have relied on qualitative arguments. In this paper, we will show our metadata analysis of a scientific literature database (PubMed) that quantitatively describes the importance of the subjects of bioinformatics, systems biology, and biophysics as compared with a well-established interdisciplinary subject, biochemistry. Specifically, we found that the development of each subject assessed by its publication volume was well described by a set of simple nonlinear equations, allowing us to characterize them quantitatively. Bioinformatics, which had the highest ratio of publications produced, was predicted to grow between 77% and 93% by 2025 according to the model. Due to the large number of publications produced in bioinformatics, which nearly matches the number published in biochemistry, it can be inferred that bioinformatics is almost equal in significance to biochemistry. Based on our analysis, we suggest that bioinformatics be added to the standard biology undergraduate curriculum. Adding this course to an undergraduate curriculum will better prepare students for future research in biology.
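Projections like "predicted to grow between 77% and 93% by 2025" follow mechanically once a saturating growth curve has been fitted to annual publication counts. The sketch below evaluates a logistic model of cumulative publication volume; the parameters (Kcap, r, t0) are invented placeholders, not the paper's fitted values:

```python
import math

# Logistic model of cumulative publication volume in a subject,
# N(t) = Kcap / (1 + exp(-r * (t - t0))). Parameter values are
# hypothetical placeholders, not the fitted values from the paper.
def cumulative_pubs(year, Kcap=200_000.0, r=0.15, t0=2018.0):
    return Kcap / (1.0 + math.exp(-r * (year - t0)))

def projected_growth(y_from, y_to, **kw):
    a, b = cumulative_pubs(y_from, **kw), cumulative_pubs(y_to, **kw)
    return 100.0 * (b - a) / a   # percent growth over the interval
```

With these placeholder parameters the 2015-to-2025 growth lands near 90%, in the same ballpark as the range quoted in the abstract, but that agreement is by construction, not a reproduction of the paper's fit.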
A test of the double-shearing model of flow for granular materials
Savage, J.C.; Lockner, D.A.
1997-01-01
The double-shearing model of flow attributes plastic deformation in granular materials to cooperative slip on conjugate Coulomb shears (surfaces upon which the Coulomb yield condition is satisfied). The strict formulation of the double-shearing model then requires that the slip lines in the material coincide with the Coulomb shears. Three different experiments that approximate simple shear deformation in granular media appear to be inconsistent with this strict formulation. For example, the orientation of the principal stress axes in a layer of sand driven in steady, simple shear was measured subject to the assumption that the Coulomb failure criterion was satisfied on some surfaces (orientation unspecified) within the sand layer. The orientation of the inferred principal compressive axis was then compared with the orientations predicted by the double-shearing model. The strict formulation of the model [Spencer, 1982] predicts that the principal stress axes should rotate in a sense opposite to that inferred from the experiments. A less restrictive formulation of the double-shearing model by de Josselin de Jong [1971] does not completely specify the solution but does prescribe limits on the possible orientations of the principal stress axes. The orientations of the principal compression axis inferred from the experiments are probably within those limits. An elastoplastic formulation of the double-shearing model [de Josselin de Jong, 1988] is reasonably consistent with the experiments, although quantitative agreement was not attained. Thus we conclude that the double-shearing model may be a viable law to describe deformation of granular materials, but the macroscopic slip surfaces will not in general coincide with the Coulomb shears.
NASA Astrophysics Data System (ADS)
Wardani, Devy P.; Arifin, Muhammad; Suharyadi, Edi; Abraha, Kamsul
2015-05-01
Gelatin is a biopolymer derived from collagen that is widely used in food and pharmaceutical products. Due to religious restrictions and health issues regarding the consumption of gelatin extracted from certain species, it is necessary to establish a robust, reliable, sensitive and simple quantitative method to detect gelatin from different parent collagen species. To the best of our knowledge, there has not been an optical-sensor-based gelatin differentiation method that can quantitatively detect gelatin from different species. Surface plasmon resonance (SPR) based biosensing is known to be a sensitive, simple and label-free optical method for detecting biomaterials that is capable of quantitative detection. Therefore, we have utilized an SPR-based biosensor to discriminate between bovine and porcine gelatin at various concentrations, from 0% to 10% (w/w). Here, we report the ability of the SPR-based biosensor to detect the difference between the two gelatins, its sensitivity toward changes in gelatin concentration, its reliability, and the limit of detection (LOD) and limit of quantification (LOQ) of the sensor. The sensor's LOD and LOQ for bovine gelatin concentration are 0.38% and 1.26% (w/w), while for porcine gelatin concentration they are 0.66% and 2.20% (w/w), respectively. The results show that SPR-based biosensing is a promising tool for quantitatively detecting gelatin from different raw materials.
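LOD and LOQ figures like those above are commonly derived from a linear sensor calibration via LOD = 3.3σ/S and LOQ = 10σ/S, with S the slope and σ the residual standard deviation of the fit; whether the authors used exactly this estimator is not stated in the abstract. A sketch with made-up SPR calibration data:

```python
# ICH-style LOD/LOQ from a linear calibration: LOD = 3.3*sigma/S,
# LOQ = 10*sigma/S, with S the slope and sigma the residual standard
# deviation of the fit. The SPR calibration data below are made up.

def linfit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def lod_loq(xs, ys):
    slope, intercept = linfit(xs, ys)
    resid = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    sigma = (sum(r * r for r in resid) / (len(xs) - 2)) ** 0.5
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Hypothetical SPR response (resonance-angle shift, deg) vs gelatin % (w/w)
conc = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
shift = [0.00, 0.21, 0.39, 0.61, 0.80, 1.01]
```

By construction LOQ/LOD is exactly 10/3.3 ≈ 3.0 under this estimator, which is roughly the ratio seen in the reported bovine (1.26/0.38) and porcine (2.20/0.66) figures.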
A semantic web framework to integrate cancer omics data with biological knowledge.
Holford, Matthew E; McCusker, James P; Cheung, Kei-Hoi; Krauthammer, Michael
2012-01-25
The RDF triple provides a simple linguistic means of describing limitless types of information. Triples can be flexibly combined into a unified data source we call a semantic model. Semantic models open new possibilities for the integration of variegated biological data. We use Semantic Web technology to explicate high throughput clinical data in the context of fundamental biological knowledge. We have extended Corvus, a data warehouse which provides a uniform interface to various forms of Omics data, by providing a SPARQL endpoint. With the querying and reasoning tools made possible by the Semantic Web, we were able to explore quantitative semantic models retrieved from Corvus in the light of systematic biological knowledge. For this paper, we merged semantic models containing genomic, transcriptomic and epigenomic data from melanoma samples with two semantic models of functional data - one containing Gene Ontology (GO) data, the other, regulatory networks constructed from transcription factor binding information. These two semantic models were created in an ad hoc manner but support a common interface for integration with the quantitative semantic models. Such combined semantic models allow us to pose significant translational medicine questions. Here, we study the interplay between a cell's molecular state and its response to anti-cancer therapy by exploring the resistance of cancer cells to Decitabine, a demethylating agent. We were able to generate a testable hypothesis to explain how Decitabine fights cancer - namely, that it targets apoptosis-related gene promoters predominantly in Decitabine-sensitive cell lines, thus conveying its cytotoxic effect by activating the apoptosis pathway. Our research provides a framework whereby similar hypotheses can be developed easily.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quashie, Edwin E.; Saha, Bidhan C.; Correa, Alfredo A.
Here, we present an ab initio study of the electronic stopping power of protons in copper over a wide range of proton velocities v = 0.02–10 a.u., where we take into account nonlinear effects. Time-dependent density functional theory coupled with molecular dynamics is used to study electronic excitations produced by energetic protons. A plane-wave pseudopotential scheme is employed to solve the time-dependent Kohn-Sham equations for a moving ion in a periodic crystal. The electronic excitations and the band structure determine the stopping power of the material and alter the interatomic forces for both channeling and off-channeling trajectories. Our off-channeling results are in quantitative agreement with experiments, and at low velocity they unveil a crossover region of superlinear velocity dependence (with a power of ~1.5) in the velocity range v = 0.07–0.3 a.u., which we associate with the copper crystalline electronic band structure. The results are rationalized by simple band models connecting two separate regimes. We find that the limit of electronic stopping v → 0 is not as simple as phenomenological models suggest and is plagued by band-structure effects.
Dama, Elisa; Tillhon, Micol; Bertalot, Giovanni; de Santis, Francesca; Troglio, Flavia; Pessina, Simona; Passaro, Antonio; Pece, Salvatore; de Marinis, Filippo; Dell'Orto, Patrizia; Viale, Giuseppe; Spaggiari, Lorenzo; Di Fiore, Pier Paolo; Bianchi, Fabrizio; Barberis, Massimo; Vecchi, Manuela
2016-06-14
Accurate detection of altered anaplastic lymphoma kinase (ALK) expression is critical for the selection of lung cancer patients eligible for ALK-targeted therapies. To overcome intrinsic limitations and discrepancies of currently available companion diagnostics for ALK, we developed a simple, affordable and objective PCR-based predictive model for the quantitative measurement of any ALK fusion as well as wild-type ALK upregulation. This method, optimized for low-quantity/-quality RNA from FFPE samples, combines cDNA pre-amplification with ad hoc generated calibration curves. All the models we derived yielded concordant predictions when applied to a cohort of 51 lung tumors, and correctly identified all 17 ALK FISH-positive and 33 of the 34 ALK FISH-negative samples. The one discrepant case was confirmed as positive by IHC, thus raising the accuracy of our test to 100%. Importantly, our method was accurate when using low amounts of input RNA (10 ng), also in FFPE samples with limited tumor cellularity (5-10%) and in FFPE cytology specimens. Thus, our test is an easily implementable diagnostic tool for the rapid, efficacious and cost-effective screening of ALK status in patients with lung cancer.
Solitary Wave in One-dimensional Buckyball System at Nanoscale
Xu, Jun; Zheng, Bowen; Liu, Yilun
2016-01-01
We have studied the stress wave propagation in one-dimensional (1-D) nanoscopic buckyball (C60) system by molecular dynamics (MD) simulation and quantitative modeling. Simulation results have shown that solitary waves are generated and propagating in the buckyball system through impacting one buckyball at one end of the buckyball chain. We have found the solitary wave behaviors are closely dependent on the initial temperature and impacting speed of the buckyball chain. There are almost no dispersion and dissipation of the solitary waves (stationary solitary wave) for relatively low temperature and high impacting speed. While for relatively high temperature and low impacting speed the profile of the solitary waves is highly distorted and dissipated after propagating several tens of buckyballs. A phase diagram is proposed to describe the effect of the temperature and impacting speed on the solitary wave behaviors in buckyball system. In order to quantitatively describe the wave behavior in buckyball system, a simple nonlinear-spring model is established, which can describe the MD simulation results at low temperature very well. The results presented in this work may lay a solid step towards the further understanding and manipulation of stress wave propagation and impact energy mitigation at nanoscale. PMID:26891624
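The nonlinear-spring picture used in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' fitted model: a 1-D chain of identical masses coupled by a cubic-nonlinear spring, kicked at one end and integrated with velocity Verlet; all parameter values are illustrative, not C60 constants.

```python
import numpy as np

# Illustrative 1-D nonlinear-spring chain (assumed force law F = k1*d + k3*d**3);
# parameters are placeholders, not the paper's buckyball values.
N, k1, k3, m, dt, steps = 64, 1.0, 10.0, 1.0, 0.01, 2000

x = np.zeros(N)          # displacements from lattice sites
v = np.zeros(N)
v[0] = 1.0               # "impact" the first ball in the chain

def forces(x):
    d = np.diff(x)                   # spring elongations
    f = k1 * d + k3 * d**3           # nonlinear restoring force
    F = np.zeros_like(x)
    F[:-1] += f                      # reaction on the left-hand ball
    F[1:] -= f                       # force on the right-hand ball
    return F

# velocity-Verlet integration
for _ in range(steps):
    v += 0.5 * dt * forces(x) / m
    x += dt * v
    v += 0.5 * dt * forces(x) / m

peak = int(np.argmax(v))             # the pulse has travelled down the chain
```

Tracking where the velocity maximum sits after many steps gives a crude measure of how compactly the pulse propagates, analogous to the solitary-wave profiles examined in the paper.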
Increased Accuracy of Ligand Sensing by Receptor Internalization and Lateral Receptor Diffusion
NASA Astrophysics Data System (ADS)
Aquino, Gerardo; Endres, Robert
2010-03-01
Many types of cells can sense external ligand concentrations with cell-surface receptors at extremely high accuracy. Interestingly, ligand-bound receptors are often internalized, a process also known as receptor-mediated endocytosis. While internalization is involved in a vast number of important functions for the life of a cell, it was recently also suggested to increase the accuracy of sensing ligand, as overcounting of the same ligand molecules is reduced. A similar role may be played by receptor diffusion on the cell membrane. Fast, lateral receptor diffusion is known to be relevant in neurotransmission initiated by release of the neurotransmitter glutamate in the synaptic cleft between neurons. By binding ligand and being removed by diffusion from the region of neurotransmitter release, diffusing receptors can reasonably be expected to reduce the local overcounting of the same ligand molecules in the region of signaling. By extending simple ligand-receptor models to out-of-equilibrium thermodynamics, we show that both receptor internalization and lateral diffusion increase the accuracy with which cells can measure ligand concentrations in the external environment. We give quantitative predictions for experimental parameter values, which compare favorably to data from real receptors.
Modeling continuum of epithelial mesenchymal transition plasticity.
Mandal, Mousumi; Ghosh, Biswajoy; Anura, Anji; Mitra, Pabitra; Pathak, Tanmaya; Chatterjee, Jyotirmoy
2016-02-01
Living systems respond to ambient pathophysiological changes by altering their phenotype, a phenomenon called 'phenotypic plasticity'. This program contains information about adaptive biological dynamism. Epithelial-mesenchymal transition (EMT) is one such process found to be crucial in development, wound healing, and cancer wherein the epithelial cells with restricted migratory potential develop motile functions by acquiring mesenchymal characteristics. In the present study, phase contrast microscopy images of EMT induced HaCaT cells were acquired at 24 h intervals for 96 h. The expression study of relevant pivotal molecules viz. F-actin, vimentin, fibronectin and N-cadherin was carried out to confirm the EMT process. Cells were intuitively categorized into five distinct morphological phenotypes. A population of 500 cells for each temporal point was selected to quantify their frequency of occurrence. The plastic interplay of cell phenotypes from the observations was described as a Markovian process. A model was formulated empirically using simple linear algebra, to depict the possible mechanisms of cellular transformation among the five phenotypes. This work employed qualitative, semi-quantitative and quantitative tools towards illustration and establishment of the EMT continuum. Thus, it provides a newer perspective to understand the embedded plasticity across the EMT spectrum.
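The Markovian description of phenotype interconversion sketched above can be made concrete with simple linear algebra. In this hedged illustration the five-state transition matrix is invented for demonstration; the paper estimates its entries empirically from 500-cell frequency counts at 24 h intervals.

```python
import numpy as np

# Hypothetical row-stochastic transition matrix over five morphological
# phenotypes (epithelial ... mesenchymal); values are illustrative only.
T = np.array([
    [0.80, 0.15, 0.05, 0.00, 0.00],
    [0.05, 0.75, 0.15, 0.05, 0.00],
    [0.00, 0.05, 0.75, 0.15, 0.05],
    [0.00, 0.00, 0.05, 0.80, 0.15],
    [0.00, 0.00, 0.00, 0.05, 0.95],
])
assert np.allclose(T.sum(axis=1), 1.0)    # each row is a distribution

p = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # all cells epithelial at t = 0
for _ in range(4):                         # four 24 h steps -> 96 h
    p = p @ T                              # p(t+1) = p(t) T
```

After each multiplication `p` remains a probability vector, and probability mass gradually shifts toward the mesenchymal end of the spectrum, mirroring the EMT continuum the study describes.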
Mapping of epistatic quantitative trait loci in four-way crosses.
He, Xiao-Hong; Qin, Hongde; Hu, Zhongli; Zhang, Tianzhen; Zhang, Yuan-Ming
2011-01-01
Four-way crosses (4WC) involving four different inbred lines often appear in plant and animal commercial breeding programs. Direct mapping of quantitative trait loci (QTL) in these commercial populations is both economical and practical. However, the existing statistical methods for mapping QTL in a 4WC population are built on the single-QTL genetic model. This simple genetic model fails to take into account QTL interactions, which play an important role in the genetic architecture of complex traits. In this paper, therefore, we attempted to develop a statistical method to detect epistatic QTL in a 4WC population. Conditional probabilities of QTL genotypes, computed by the multi-point single-locus method, were used to sample the genotypes of all putative QTL in the entire genome. The sampled genotypes were used to construct the design matrix for QTL effects. All QTL effects, including main and epistatic effects, were simultaneously estimated by the penalized maximum likelihood method. The proposed method was confirmed by a series of Monte Carlo simulation studies and by real-data analysis of cotton. The new method will provide novel tools for the genetic dissection of complex traits, construction of QTL networks, and analysis of heterosis.
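The core idea of estimating main and epistatic effects jointly under a penalty can be sketched as follows. This is not the authors' penalized-maximum-likelihood algorithm: it substitutes a plain ridge penalty on a design matrix that stacks main-effect columns and pairwise (epistatic) interaction columns, with simulated ±1-coded genotypes.

```python
import numpy as np

# Simulated genotypes for a toy mapping problem (ridge stand-in for
# penalized ML; all sizes and effects are illustrative).
rng = np.random.default_rng(0)
n, m = 200, 6                               # individuals, putative QTL
G = rng.choice([-1.0, 1.0], size=(n, m))    # coded QTL genotypes

pairs = [(i, j) for i in range(m) for j in range(i + 1, m)]
E = np.column_stack([G[:, i] * G[:, j] for i, j in pairs])  # epistatic terms
X = np.hstack([G, E])                       # main effects | interactions

beta_true = np.zeros(X.shape[1])
beta_true[0], beta_true[m] = 1.5, 1.0       # one main, one epistatic QTL
y = X @ beta_true + rng.normal(0.0, 0.5, n)

lam = 1.0                                   # penalty strength (illustrative)
beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
```

The shrinkage penalty is what keeps the estimation well behaved when the number of candidate main plus epistatic effects approaches or exceeds the sample size, which is the regime the paper addresses.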
A Local Vision on Soil Hydrology (John Dalton Medal Lecture)
NASA Astrophysics Data System (ADS)
Roth, K.
2012-04-01
After briefly looking back at some research trails of the past decades, and touching on the role of soils in our environmental machinery, a vision of the future of soil hydrology is offered. It is local in the sense of being based on limited experience, as well as in the sense of focussing on local spatial scales, from 1 m to 1 km. Cornerstones of this vision are (i) rapid developments in quantitative observation technology, illustrated with the example of ground-penetrating radar (GPR), and (ii) the availability of ever more powerful computing facilities, which allow increasingly complicated model representations to be simulated in unprecedented detail. Together, they open a powerful and flexible approach to the quantitative understanding of soil hydrology in which two lines are fitted together: (i) potentially diverse measurements of the system of interest and their analysis, and (ii) a comprehensive model representation, including architecture, material properties, forcings, and potentially unknown aspects, together with the same analysis as for (i). This approach pushes traditional inversion to operate on analyses, not on the underlying state variables, and to become flexible with respect to architecture and unknown aspects. The approach will be demonstrated for simple situations at test sites.
Quantitative modeling and optimization of magnetic tweezers.
Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H
2009-06-17
Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply > or = 40 pN stretching forces on approximately 1-microm tethered beads.
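The "semianalytic" Biot-Savart step mentioned for simple, symmetric geometries can be illustrated with the most elementary case: the on-axis field of a circular current loop, computed by discretizing the loop and summing Biot-Savart contributions, then checked against the closed-form result. This is a sketch of the method class only, not the paper's permanent-magnet model.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def loop_field_on_axis(I, R, z, n=2000):
    """Axial B_z of a circular current loop via discretized Biot-Savart."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dphi = 2.0 * np.pi / n
    # segment positions and tangential length elements dl
    pts = np.column_stack([R * np.cos(phi), R * np.sin(phi), np.zeros(n)])
    dl = np.column_stack([-R * np.sin(phi), R * np.cos(phi), np.zeros(n)]) * dphi
    r = np.array([0.0, 0.0, z]) - pts                 # segment -> field point
    rmag = np.linalg.norm(r, axis=1, keepdims=True)
    dB = MU0 * I / (4.0 * np.pi) * np.cross(dl, r) / rmag**3
    return dB.sum(axis=0)[2]                          # only B_z survives by symmetry

# closed-form on-axis result: B = mu0 I R^2 / (2 (R^2 + z^2)^{3/2})
I, R, z = 1.0, 0.01, 0.005
analytic = MU0 * I * R**2 / (2.0 * (R**2 + z**2) ** 1.5)
```

For realistic magnet pairs with an iron yoke, no such closed form exists, which is why the paper switches to a finite-element magnetostatics solver.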
Peroxisystem: harnessing systems cell biology to study peroxisomes.
Schuldiner, Maya; Zalckvar, Einat
2015-04-01
In recent years, high-throughput experimentation with quantitative analysis and modelling of cells, recently dubbed systems cell biology, has been harnessed to study the organisation and dynamics of simple biological systems. Here, we suggest that the peroxisome, a fascinating dynamic organelle, can be used as a good candidate for studying a complete biological system. We discuss several aspects of peroxisomes that can be studied using high-throughput systematic approaches and be integrated into a predictive model. Such approaches can be used in the future to study and understand how a more complex biological system, like a cell and maybe even ultimately a whole organism, works. © 2015 Société Française des Microscopies and Société de Biologie Cellulaire de France. Published by John Wiley & Sons Ltd.
Divya, O; Mishra, Ashok K
2007-05-29
Quantitative determination of the kerosene fraction present in diesel has been carried out based on excitation emission matrix fluorescence (EEMF) along with parallel factor analysis (PARAFAC) and N-way partial least squares regression (N-PLS). EEMF is a simple, sensitive and nondestructive method suitable for the analysis of multifluorophoric mixtures. Calibration models consisting of varying compositions of diesel and kerosene were constructed and validated using the leave-one-out cross-validation method. The accuracy of each model was evaluated through the root mean square error of prediction (RMSEP) for the PARAFAC, N-PLS and unfold-PLS methods. N-PLS proved better than PARAFAC and unfold-PLS, owing to its lower RMSEP values.
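The validation step described above is easy to sketch in isolation. The snippet below computes leave-one-out RMSEP for a one-variable linear calibration on synthetic data; the paper's actual models (PARAFAC, N-PLS) operate on full excitation-emission matrices and would need a multiway-analysis library, so this is only the cross-validation logic.

```python
import numpy as np

def loo_rmsep(x, y):
    """Leave-one-out root mean square error of prediction for a line fit."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i            # hold out sample i
        slope, intercept = np.polyfit(x[mask], y[mask], 1)
        errs.append(y[i] - (slope * x[i] + intercept))
    return float(np.sqrt(np.mean(np.square(errs))))

# synthetic calibration set: fluorescence signal assumed linear in kerosene
kerosene_frac = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
signal = 2.0 * kerosene_frac + 0.05              # idealized noiseless response
```

With noiseless, perfectly linear data the RMSEP collapses to numerical zero; with real EEMF data it becomes the figure of merit used to rank PARAFAC, N-PLS and unfold-PLS.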
Tracing the origin of azimuthal gluon correlations in the color glass condensate
Lappi, T.; Schenke, B.; Schlichting, S.; ...
2016-01-11
Here we examine the origins of azimuthal correlations observed in high energy proton-nucleus collisions by considering the simple example of the scattering of uncorrelated partons off color fields in a large nucleus. We demonstrate how the physics of fluctuating color fields in the color glass condensate (CGC) effective theory generates these azimuthal multiparticle correlations and compute the corresponding Fourier coefficients v_n within different CGC approximation schemes. We discuss in detail the qualitative and quantitative differences between the different schemes. Lastly, we show how a recently introduced color field domain model that captures key features of the observed azimuthal correlations can be understood in the CGC effective theory as a model of non-Gaussian correlations in the target nucleus.
Quantitation of dissolved gas content in emulsions and in blood using mass spectrometric detection
Grimley, Everett; Turner, Nicole; Newell, Clayton; Simpkins, Cuthbert; Rodriguez, Juan
2011-01-01
Quantitation of dissolved gases in blood or in other biological media is essential for understanding the dynamics of metabolic processes. Current detection techniques, while enabling rapid and convenient assessment of dissolved gases, provide only direct information on the partial pressure of gases dissolved in the aqueous fraction of the fluid. The more relevant quantity known as gas content, which refers to the total amount of the gas in all fractions of the sample, can be inferred from those partial pressures, but only indirectly through mathematical modeling. Here we describe a simple mass spectrometric technique for rapid and direct quantitation of gas content for a wide range of gases. The technique is based on a mass spectrometer detector that continuously monitors gases that are rapidly extracted from samples injected into a purge vessel. The accuracy and sample processing speed of the system is demonstrated with experiments that reproduce within minutes literature values for the solubility of various gases in water. The capability of the technique is further demonstrated through accurate determination of O2 content in a lipid emulsion and in whole blood, using as little as 20 μL of sample. The approach to gas content quantitation described here should greatly expand the range of animals and conditions that may be used in studies of metabolic gas exchange, and facilitate the development of artificial oxygen carriers and resuscitation fluids. PMID:21497566
NASA Astrophysics Data System (ADS)
Morgenthaler, George W.; Nuñez, German R.; Botello, Aaron M.; Soto, Jose; Shrairman, Ruth; Landau, Alexander
1998-01-01
Many reaction time experiments have been conducted over the years to observe human responses. However, most of the experiments that were performed did not have quantitatively accurate instruments for measuring change in reaction time under stress. There is a great need for quantitative instruments to measure neuromuscular reaction responses under stressful conditions such as distraction, disorientation, disease, alcohol, drugs, etc. The two instruments used in the experiments reported in this paper are such devices. Their accuracy, portability, ease of use, and biometric character are what makes them very special. PACE™ is a software model used to measure reaction time. VeriFax's Impairoscope measures the deterioration of neuromuscular responses. During the 1997 Summer Semester, various reaction time experiments were conducted on University of Colorado faculty, staff, and students using the PACE™ system. The tests included both two-eye and one-eye unstressed trials and trials with various stresses such as fatigue, distractions in which subjects were asked to perform simple arithmetic during the PACE™ tests, and stress due to rotating-chair dizziness. Various VeriFax Impairoscope tests, both stressed and unstressed, were conducted to determine the Impairoscope's ability to quantitatively measure this impairment. In the 1997 Fall Semester, a Phase II effort was undertaken to increase test sample sizes in order to provide statistical precision and stability. More sophisticated statistical methods remain to be applied to better interpret the data.
Universal Trade-Off between Power, Efficiency, and Constancy in Steady-State Heat Engines
NASA Astrophysics Data System (ADS)
Pietzonka, Patrick; Seifert, Udo
2018-05-01
Heat engines should ideally have large power output, operate close to Carnot efficiency and show constancy, i.e., exhibit only small fluctuations in this output. For steady-state heat engines, driven by a constant temperature difference between the two heat baths, we prove that out of these three requirements only two are compatible. Constancy enters quantitatively the conventional trade-off between power and efficiency. Thus, we rationalize and unify recent suggestions for overcoming this simple trade-off. Our universal bound is illustrated for a paradigmatic model of a quantum dot solar cell and for a Brownian gyrator delivering mechanical work against an external force.
NASA Astrophysics Data System (ADS)
Hamazaki, Junichi; Furusawa, Kentaro; Sekine, Norihiko; Kasamatsu, Akifumi; Hosako, Iwao
2016-11-01
The effects of the chirp of the pump pulse in broadband terahertz (THz) pulse generation by optical rectification (OR) in GaP were systematically investigated. It was found that the pre-compensation for the dispersion of GaP is important for obtaining smooth and single-peaked THz spectra as well as high power-conversion efficiency. It was also found that an excessive amount of chirp leads to distortions in THz spectra, which can be quantitatively analyzed by using a simple model. Our results highlight the importance of accurate control over the chirp of the pump pulse for generating broadband THz pulses by OR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanov, Alexander S.; Bryantsev, Vyacheslav S.
An accurate description of solvation effects for trivalent lanthanide ions is a main stumbling block to the qualitative prediction of selectivity trends along the lanthanide series. In this work, we propose a simple model to describe the differential effect of solvation in the competitive binding of a ligand by lanthanide ions by including weakly co-ordinated counterions in the complexes of more than a +1 charge. The success of the approach to quantitatively reproduce selectivities obtained from aqueous phase complexation studies demonstrates its potential for the design and screening of new ligands for efficient size-based separation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, Mingsen; Guizhou Provincial Key Laboratory of Computational Nano-Material Science, Institute of Applied Physics, Guizhou Normal College, Guiyang, 550018; Ye, Gui
The probing of flexible molecular conformation is crucial for electronic applications of molecular systems. We have developed a theoretical procedure to analyze the couplings of molecular local vibrations with the electron transport process, which enables us to evaluate the structural fingerprints of some vibrational modes in the inelastic electron tunneling spectroscopy (IETS). Based on a model molecule of Bis-(4-mercaptophenyl)-ether with a flexible center angle, we have revealed and validated a simple mathematical relationship between IETS signals and molecular angles. Our results might open a route to quantitatively measuring key geometrical parameters of molecular junctions, which helps to achieve precise control of molecular devices.
Modelling Of Flotation Processes By Classical Mathematical Methods - A Review
NASA Astrophysics Data System (ADS)
Jovanović, Ivana; Miljanović, Igor
2015-12-01
Flotation process modelling is not a simple task, mostly because of the process complexity, i.e. the presence of a large number of variables that (to a lesser or greater extent) affect the final outcome of the separation of mineral particles based on differences in their surface properties. Attempts toward developing a quantitative predictive model that would fully describe the operation of an industrial flotation plant started in the middle of the past century and continue to this day. This paper gives a review of published research directed toward the development of flotation models based on classical mathematical rules. The description and systematization of classical flotation models were performed according to the available references, with emphasis given exclusively to flotation process modelling, regardless of a model's application in a particular control system. In accordance with contemporary considerations, models were classified as empirical, probabilistic, kinetic and population-balance types. Each model type is presented through the aspects of flotation modelling at the macro and micro process levels.
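One concrete member of the kinetic model class surveyed here is classical first-order flotation kinetics, R(t) = R_inf (1 - e^{-kt}). The sketch below evaluates that recovery curve with illustrative values for the ultimate recovery R_inf and rate constant k; it is an example of the model family, not a model fitted in any of the reviewed works.

```python
import numpy as np

def recovery(t, r_inf=0.92, k=0.35):
    """First-order flotation kinetics: cumulative recovery at time t (min).
    r_inf (ultimate recovery) and k (rate constant, 1/min) are illustrative."""
    return r_inf * (1.0 - np.exp(-k * t))

t = np.linspace(0.0, 20.0, 41)   # flotation time grid, minutes
R = recovery(t)                  # monotonically approaches r_inf
```

In practice r_inf and k are estimated from batch flotation tests, and more elaborate kinetic models replace the single rate constant with a distribution over particle classes.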
Next generation simulation tools: the Systems Biology Workbench and BioSPICE integration.
Sauro, Herbert M; Hucka, Michael; Finney, Andrew; Wellock, Cameron; Bolouri, Hamid; Doyle, John; Kitano, Hiroaki
2003-01-01
Researchers in quantitative systems biology make use of a large number of different software packages for modelling, analysis, visualization, and general data manipulation. In this paper, we describe the Systems Biology Workbench (SBW), a software framework that allows heterogeneous application components--written in diverse programming languages and running on different platforms--to communicate and use each other's capabilities via a fast binary encoded-message system. Our goal was to create a simple, high-performance, open-source software infrastructure which is easy to implement and understand. SBW enables applications (potentially running on separate, distributed computers) to communicate via a simple network protocol. The interfaces to the system are encapsulated in client-side libraries that we provide for different programming languages. We describe in this paper the SBW architecture, a selection of current modules, including Jarnac, JDesigner, and SBWMeta-tool, and the close integration of SBW into BioSPICE, which enables both frameworks to share tools and complement and strengthen each other's capabilities.
Increasing the realism of a laparoscopic box trainer: a simple, inexpensive method.
Hull, Louise; Kassab, Eva; Arora, Sonal; Kneebone, Roger
2010-01-01
Simulation-based training in medical education is increasing. Realism is an integral element of creating an engaging, effective training environment. Although physical trainers offer a low-cost alternative to expensive virtual reality (VR) simulators, many lack in realism. The aim of this research was to enhance the realism of a laparoscopic box trainer by using a simple, inexpensive method. Digital images of the abdominal cavity were captured from a VR simulator. The images were printed onto a laminated card that lined the bottom and sides of the box-trainer cavity. The standard black neoprene material that encloses the abdominal cavity was replaced with a skin-colored silicon model. The realism of the modified box trainer was assessed by surgeons, using quantitative and qualitative methodologies. Results suggest that the modified box trainer was more realistic than a standard box trainer alone. Incorporating this technique in the training of laparoscopic skills is an inexpensive means of emulating surgical reality that may enhance the engagement of the learner in simulation.
Quantitative Imaging of Microwave Electric Fields through Near-Field Scanning Microwave Microscopy
NASA Astrophysics Data System (ADS)
Dutta, S. K.; Vlahacos, C. P.; Steinhauer, D. E.; Thanawalla, A.; Feenstra, B. J.; Wellstood, F. C.; Anlage, Steven M.; Newman, H. S.
1998-03-01
The ability to non-destructively image electric field patterns generated by operating microwave devices (e.g. filters, antennas, circulators, etc.) would greatly aid in the design and testing of these structures. Such detailed information can be used to reconcile discrepancies between simulated behavior and experimental data (such as scattering parameters). The near-field scanning microwave microscope we present uses a coaxial probe to provide a simple, broadband method of imaging electric fields [S. M. Anlage et al., IEEE Trans. Appl. Supercond. 7, 3686 (1997); see http://www.csr.umd.edu/research/hifreq/micr_microscopy.html]. The signal that is measured is related to the incident electric flux normal to the face of the center conductor of the probe, allowing different components of the field to be measured by orienting the probe appropriately. By using a simple model of the system, we can also convert raw data to absolute electric field. Detailed images of standing waves on copper microstrip will be shown and compared to theory.
Ota, Shusuke; Kanazawa, Satoshi; Kobayashi, Masaaki; Otsuka, Takanobu; Okamoto, Takashi
2005-04-01
Antibodies to type II collagen (col II) have been detected in patients with rheumatoid arthritis and in animal models of collagen induced arthritis. Here, we describe a novel method to detect anti-col II antibodies using an immunospot assay with an infrared fluorescence imaging system. This method showed very high sensitivity and specificity, and was simple, with low background levels. It also showed higher reproducibility and linearity, with a dynamic range of approximately 500-fold, than the conventional immunospot assay with enhanced chemiluminescence detection. Using this method we were able to demonstrate the antibody affinity maturation process in mice immunized with col II. In these immunized mice, although cross-reactive antibodies reacting with other collagen species were detected in earlier stages of immunization, the titers of cross-reactive antibodies rapidly diminished after the antigen boost, concomitantly with the elevation of the anti-col II antibody. The method and its possible applications are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egestad, B.; Curstedt, T.; Sjoevall, J.
1982-01-01
Procedures for enrichment of non-volatile chlorinated aromatic pollutants from fat, water and milk are described. ¹⁴C-DDT was used as a model compound in recovery experiments. A several-thousand-fold enrichment of DDT added to butter was achieved by two consecutive straight-phase chromatographies on Lipidex 5000. Trace amounts of DDT in liter volumes of water could be quantitatively extracted by rapid filtration through 2 ml beds of Lipidex 1000. A batch extraction procedure permitted enrichment of DDT from milk after addition of n-pentylamine, methanol and water. DDT could then be eluted from the gel with retention of more than 90% of the lipids. A reversed-phase system with Lipidex 5000 could be used for separation of TCDD from DDT and PCBs. The liquid-gel chromatographic procedures are simple and suitable for clean-up of samples prior to application of high-resolution methods.
A Computational Model of Linguistic Humor in Puns.
Kao, Justine T; Levy, Roger; Goodman, Noah D
2016-07-01
Humor plays an essential role in human interactions. Precisely what makes something funny, however, remains elusive. While research on natural language understanding has made significant advancements in recent years, there has been little direct integration of humor research with computational models of language understanding. In this paper, we propose two information-theoretic measures-ambiguity and distinctiveness-derived from a simple model of sentence processing. We test these measures on a set of puns and regular sentences and show that they correlate significantly with human judgments of funniness. Moreover, within a set of puns, the distinctiveness measure distinguishes exceptionally funny puns from mediocre ones. Our work is the first, to our knowledge, to integrate a computational model of general language understanding and humor theory to quantitatively predict humor at a fine-grained level. We present it as an example of a framework for applying models of language processing to understand higher level linguistic and cognitive phenomena. © 2015 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
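The paper's "ambiguity" measure is information-theoretic: roughly, the entropy of the comprehender's posterior over sentence meanings. The sketch below computes that entropy for two invented meaning distributions; the probabilities are purely illustrative, not estimates from the paper's pun corpus.

```python
import math

def ambiguity(meaning_probs):
    """Shannon entropy (bits) of a distribution over candidate meanings."""
    return -sum(p * math.log2(p) for p in meaning_probs if p > 0.0)

# hypothetical posteriors over two candidate meanings of a sentence
pun = [0.55, 0.45]      # both meanings stay plausible -> high entropy
plain = [0.98, 0.02]    # one meaning dominates      -> low entropy
```

A pun keeps multiple meanings live (entropy near its maximum of 1 bit for two meanings), whereas an ordinary sentence resolves quickly; the paper pairs this with a "distinctiveness" measure to separate exceptional puns from mediocre ones.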
Stochastic Spatial Models in Ecology: A Statistical Physics Approach
NASA Astrophysics Data System (ADS)
Pigolotti, Simone; Cencini, Massimo; Molina, Daniel; Muñoz, Miguel A.
2018-07-01
Ecosystems display a complex spatial organization. Ecologists have long tried to characterize them by looking at how different measures of biodiversity change across spatial scales. Ecological neutral theory has provided simple predictions accounting for general empirical patterns in communities of competing species. However, while neutral theory in well-mixed ecosystems is mathematically well understood, spatial models still present several open problems, limiting the quantitative understanding of spatial biodiversity. In this review, we discuss the state of the art in spatial neutral theory. We emphasize the connection between spatial ecological models and the physics of non-equilibrium phase transitions and how concepts developed in statistical physics translate in population dynamics, and vice versa. We focus on non-trivial scaling laws arising at the critical dimension D = 2 of spatial neutral models, and their relevance for biological populations inhabiting two-dimensional environments. We conclude by discussing models incorporating non-neutral effects in the form of spatial and temporal disorder, and analyze how their predictions deviate from those of purely neutral theories.
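The "mathematically well understood" well-mixed baseline mentioned above can be sketched with a neutral Moran process: at each step one random individual dies and is replaced by the offspring of another random individual, with no species-dependent fitness. This is a toy illustration of neutrality, not one of the spatial models reviewed; community size and species count are arbitrary.

```python
import numpy as np

# Well-mixed neutral (Moran) dynamics: death and replacement are blind to
# species identity, so diversity changes only by drift.
rng = np.random.default_rng(1)
J, S = 200, 10                          # community size, initial species pool
community = rng.integers(0, S, size=J)  # species label of each individual

for _ in range(5000):
    dead, parent = rng.integers(0, J, size=2)
    community[dead] = community[parent]  # replacement by a random lineage

richness = len(np.unique(community))     # drift erodes species richness
```

Spatial neutral models replace the "pick any parent" step with a parent drawn from a local neighbourhood, which is where the non-trivial scaling laws at dimension D = 2 discussed in the review arise.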
VLF wave growth and discrete emission triggering in the magnetosphere - A feedback model
NASA Technical Reports Server (NTRS)
Helliwell, R. A.; Inan, U. S.
1982-01-01
A simple nonlinear feedback model is presented to explain VLF wave growth and emission triggering observed in VLF transmission experiments. The model is formulated in terms of the interaction of electrons with a slowly varying wave in an inhomogeneous medium as in an unstable feedback amplifier with a delay line; constant frequency oscillations are generated on the magnetic equator, while risers and fallers are generated on the downstream and upstream sides of the equator, respectively. Quantitative expressions are obtained for the stimulated radiation produced by energy exchanged between energetic electrons and waves by Doppler-shifted cyclotron resonance, and feedback between the stimulated radiation and the phase-bunched currents is incorporated in terms of a two-port discrete time model. The resulting model is capable of explaining the observed temporal growth and saturation effects, phase advance, retardation or frequency shift during growth in the context of a single parameter depending on the energetic particle distribution function, as well as pretermination triggering.
Mullins effect in a filled elastomer under uniaxial tension
Maiti, A.; Small, W.; Gee, R. H.; ...
2014-01-16
Modulus softening and permanent set in filled polymeric materials due to cyclic loading and unloading, commonly known as the Mullins effect, can have a significant impact on their use as support cushions. The quantitative analysis of such behavior is essential to ensure the effectiveness of such materials in long-term deployment. In this work we combine existing ideas of filler-induced modulus enhancement, strain amplification, and irreversible deformation within a simple non-Gaussian constitutive model to quantitatively interpret recent measurements on a relevant PDMS-based elastomeric cushion. Also, we find that the experimental stress-strain data is consistent with the picture that during stretching (loading) two effects take place simultaneously: (1) the physical constraints (entanglements) initially present in the polymer network get disentangled, thus leading to a gradual decrease in the effective cross-link density, and (2) the effective filler volume fraction gradually decreases with increasing strain due to the irreversible pulling out of an initially occluded volume of the soft polymer domain.
Recovery of permittivity and depth from near-field data as a step toward infrared nanotomography.
Govyadinov, Alexander A; Mastel, Stefan; Golmar, Federico; Chuvilin, Andrey; Carney, P Scott; Hillenbrand, Rainer
2014-07-22
The increasing complexity of composite materials structured on the nanometer scale requires highly sensitive analytical tools for nanoscale chemical identification, ideally in three dimensions. While infrared near-field microscopy provides high chemical sensitivity and nanoscopic spatial resolution in two dimensions, the quantitative extraction of material properties of three-dimensionally structured samples has not been achieved yet. Here we introduce a method to perform rapid recovery of the thickness and permittivity of simple 3D structures (such as thin films and nanostructures) from near-field measurements, and provide its first experimental demonstration. This is accomplished via a novel nonlinear invertible model of the imaging process, taking advantage of the near-field data recorded at multiple harmonics of the oscillation frequency of the near-field probe. Our work enables quantitative nanoscale-resolved optical studies of thin films, coatings, and functionalization layers, as well as the structural analysis of multiphase materials, among others. It represents a major step toward the further goal of near-field nanotomography.
General description and understanding of the nonlinear dynamics of mode-locked fiber lasers.
Wei, Huai; Li, Bin; Shi, Wei; Zhu, Xiushan; Norwood, Robert A; Peyghambarian, Nasser; Jian, Shuisheng
2017-05-02
As a type of nonlinear system with complexity, mode-locked fiber lasers are known for their complex behaviour. It is a challenging task to understand the fundamental physics behind such complex behaviour, and a unified description for the nonlinear behaviour and the systematic and quantitative analysis of the underlying mechanisms of these lasers have not been developed. Here, we present a complexity science-based theoretical framework for understanding the behaviour of mode-locked fiber lasers by going beyond reductionism. This hierarchically structured framework provides a model with variable dimensionality, resulting in a simple view that can be used to systematically describe complex states. Moreover, research into the attractors' basins reveals the origin of stochasticity, hysteresis and multistability in these systems and presents a new method for quantitative analysis of these nonlinear phenomena. These findings pave the way for dynamics analysis and system designs of mode-locked fiber lasers. We expect that this paradigm will also enable potential applications in diverse research fields related to complex nonlinear phenomena.
Spatially coordinated dynamic gene transcription in living pituitary tissue
Featherstone, Karen; Hey, Kirsty; Momiji, Hiroshi; McNamara, Anne V; Patist, Amanda L; Woodburn, Joanna; Spiller, David G; Christian, Helen C; McNeilly, Alan S; Mullins, John J; Finkenstädt, Bärbel F; Rand, David A; White, Michael RH; Davis, Julian RE
2016-01-01
Transcription at individual genes in single cells is often pulsatile and stochastic. A key question emerges regarding how this behaviour contributes to tissue phenotype, but it has been a challenge to quantitatively analyse this in living cells over time, as opposed to studying snap-shots of gene expression state. We have used imaging of reporter gene expression to track transcription in living pituitary tissue. We integrated live-cell imaging data with statistical modelling for quantitative real-time estimation of the timing of switching between transcriptional states across a whole tissue. Multiple levels of transcription rate were identified, indicating that gene expression is not a simple binary ‘on-off’ process. Immature tissue displayed shorter durations of high-expressing states than the adult. In adult pituitary tissue, direct cell contacts involving gap junctions allowed local spatial coordination of prolactin gene expression. Our findings identify how heterogeneous transcriptional dynamics of single cells may contribute to overall tissue behaviour. DOI: http://dx.doi.org/10.7554/eLife.08494.001 PMID:26828110
Direct injection analysis of fatty and resin acids in papermaking process waters by HPLC/MS.
Valto, Piia; Knuutinen, Juha; Alén, Raimo
2011-04-01
A novel HPLC-atmospheric pressure chemical ionization/MS (HPLC-APCI/MS) method was developed for the rapid analysis of selected fatty and resin acids typically present in papermaking process waters. A mixture of palmitic, stearic, oleic, linolenic, and dehydroabietic acids was separated by a commercial HPLC column (a modified stationary C(18) phase) using gradient elution with methanol/0.15% formic acid (pH 2.5) as a mobile phase. The internal standard (myristic acid) method was used to calculate the correlation coefficients and in the quantitation of the results. In the thorough quality parameters measurement, a mixture of these model acids in aqueous media as well as in six different paper machine process waters was quantitatively determined. The measured quality parameters, such as selectivity, linearity, precision, and accuracy, clearly indicated that, compared with traditional gas chromatographic techniques, the simple method developed provided a faster chromatographic analysis with almost real-time monitoring of these acids. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
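The internal-standard quantitation mentioned above follows the standard response-factor calculation; the sketch below is a generic illustration with invented peak areas and concentrations, not data from the paper.

```python
def response_factor(area_analyte, conc_analyte, area_is, conc_is):
    """Relative response factor from a calibration run containing both the
    analyte and the internal standard (IS) at known concentrations."""
    return (area_analyte / conc_analyte) * (conc_is / area_is)

def quantify(area_analyte, area_is, conc_is, rf):
    """Analyte concentration in a sample spiked with the internal standard."""
    return (area_analyte / area_is) * conc_is / rf

# Calibration: 2.0 mg/L analyte (e.g. dehydroabietic acid) with 1.0 mg/L
# myristic acid as internal standard (all peak areas are hypothetical).
rf = response_factor(area_analyte=4000.0, conc_analyte=2.0,
                     area_is=2500.0, conc_is=1.0)
# Sample run spiked with 1.0 mg/L internal standard.
conc = quantify(area_analyte=3000.0, area_is=2400.0, conc_is=1.0, rf=rf)
```

Because only the area ratio of analyte to internal standard enters the result, run-to-run drift in injection volume or ionization efficiency largely cancels, which is the point of the internal-standard method.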
Assembly and positioning of actomyosin rings by contractility and planar cell polarity
Sehring, Ivonne M; Recho, Pierre; Denker, Elsa; Kourakis, Matthew; Mathiesen, Birthe; Hannezo, Edouard; Dong, Bo; Jiang, Di
2015-01-01
The actomyosin cytoskeleton is a primary force-generating mechanism in morphogenesis, thus a robust spatial control of cytoskeletal positioning is essential. In this report, we demonstrate that actomyosin contractility and planar cell polarity (PCP) interact in post-mitotic Ciona notochord cells to self-assemble and reposition actomyosin rings, which play an essential role in cell elongation. Intriguingly, rings always form at the cells' anterior edge before migrating towards the center as contractility increases, reflecting a novel dynamical property of the cortex. Our drug and genetic manipulations uncover a tug-of-war between contractility, which localizes cortical flows toward the equator, and PCP, which tries to reposition them. We develop a simple model of the physical forces underlying this tug-of-war, which quantitatively reproduces our results. We thus propose a quantitative framework for dissecting the relative contribution of contractility and PCP to the self-assembly and repositioning of cytoskeletal structures, which should be applicable to other morphogenetic events. DOI: http://dx.doi.org/10.7554/eLife.09206.001 PMID:26486861
Context influences on TALE–DNA binding revealed by quantitative profiling
Rogers, Julia M.; Barrera, Luis A.; Reyon, Deepak; Sander, Jeffry D.; Kellis, Manolis; Joung, J Keith; Bulyk, Martha L.
2015-01-01
Transcription activator-like effector (TALE) proteins recognize DNA using a seemingly simple DNA-binding code, which makes them attractive for use in genome engineering technologies that require precise targeting. Although this code is used successfully to design TALEs to target specific sequences, off-target binding has been observed and is difficult to predict. Here we explore TALE–DNA interactions comprehensively by quantitatively assaying the DNA-binding specificities of 21 representative TALEs to ∼5,000–20,000 unique DNA sequences per protein using custom-designed protein-binding microarrays (PBMs). We find that protein context features exert significant influences on binding. Thus, the canonical recognition code does not fully capture the complexity of TALE–DNA binding. We used the PBM data to develop a computational model, Specificity Inference For TAL-Effector Design (SIFTED), to predict the DNA-binding specificity of any TALE. We provide SIFTED as a publicly available web tool that predicts potential genomic off-target sites for improved TALE design. PMID:26067805
Solvation effects on like-charge attraction.
Ghanbarian, Shahzad; Rottler, Jörg
2013-02-28
We present results of molecular dynamics simulations of the electrostatic interaction between two parallel charged rods in the presence of divalent counterions. Such polyelectrolytes have been considered as a simple model for understanding electrostatic interactions in highly charged biomolecules such as DNA. Since there are correlations between the free charge carriers, the phenomenon of like-charge attraction appears for specific parameters. We explore the role of solvation effects and the resulting deviations from Coulomb's law on the nanoscale on this peculiar phenomenon. The behavior of the force between the charged rods in a simulation with atomistic representation of water molecules is completely different from a model in which water is modeled as a continuum dielectric. By calculating counterion-rodion pair correlation functions, we find that the presence of water molecules changes the structure of the counterion cloud and results in both qualitative and quantitative changes of the force between highly charged polyelectrolytes.
Persistence in the WFC3 IR Detector: An Area Dependent Model
NASA Astrophysics Data System (ADS)
Long, Knox S.; Baggett, Sylvia M.
2018-05-01
When the IR detector on WFC3 is exposed to a bright source or sources, the sources not only appear in the original exposure, but can appear as afterimages in later exposures, a phenomenon known as persistence. The magnitude and duration of persistence for a fixed stimulus varies somewhat across the face of the detector. Our previous attempts to characterize this variation were limited to a correction that captures only the variation in the magnitude. Here we describe a simple model which allows for variations both in the magnitude and the duration of the persistence, and then evaluate quantitatively how much improvement this model provides. We conclude that while this was a useful experiment, it does not result in a marked improvement in our ability to predict persistence in the WFC3/IR array. We discuss why this was the case, and possible paths forward.
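Persistence in IR arrays of this kind is commonly modeled as a power-law decay in time since the stimulus; the sketch below shows how an area-dependent model might carry per-pixel maps of both the amplitude and the decay index, rather than a single global pair. All names and values here are illustrative assumptions, not the authors' actual parameterization.

```python
import numpy as np

def persistence(t, amp, gamma, t0=1000.0):
    """Afterimage signal (e-/s) at time t seconds after a saturating
    stimulus, modeled as a power-law decay amp * (t / t0) ** (-gamma)."""
    return amp * (t / t0) ** (-gamma)

# Hypothetical 2x2 "detector": both the amplitude and the decay index
# vary across the face of the array, as the abstract describes.
amp_map = np.array([[0.30, 0.35], [0.28, 0.32]])
gamma_map = np.array([[1.00, 1.10], [0.95, 1.05]])
afterimage = persistence(2000.0, amp_map, gamma_map)  # per-pixel prediction
```

A magnitude-only correction would vary `amp_map` but hold `gamma_map` constant; allowing both to vary is the extra freedom evaluated in the paper.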
Emergent dynamics of laboratory insect swarms
NASA Astrophysics Data System (ADS)
Kelley, Douglas H.; Ouellette, Nicholas T.
2013-01-01
Collective animal behaviour occurs at nearly every biological size scale, from single-celled organisms to the largest animals on earth. It has long been known that models with simple interaction rules can reproduce qualitative features of this complex behaviour. But determining whether these models accurately capture the biology requires data from real animals, which has historically been difficult to obtain. Here, we report three-dimensional, time-resolved measurements of the positions, velocities, and accelerations of individual insects in laboratory swarms of the midge Chironomus riparius. Even though the swarms do not show an overall polarisation, we find statistical evidence for local clusters of correlated motion. We also show that the swarms display an effective large-scale potential that keeps individuals bound together, and we characterize the shape of this potential. Our results provide quantitative data against which the emergent characteristics of animal aggregation models can be benchmarked.
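One simple statistic for detecting "local clusters of correlated motion" in such trajectory data is the mean velocity alignment of nearby pairs; the sketch below is our illustration of that idea, not the authors' analysis.

```python
import numpy as np

def local_velocity_correlation(positions, velocities, radius):
    """Mean cosine alignment of velocities over all pairs of insects closer
    than `radius` -- a simple measure of local correlated motion."""
    cos = []
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < radius:
                vi, vj = velocities[i], velocities[j]
                cos.append(vi @ vj / (np.linalg.norm(vi) * np.linalg.norm(vj)))
    return float(np.mean(cos)) if cos else 0.0

# Toy swarm: two nearby midges moving in parallel, one distant outlier.
pos = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [10.0, 0.0, 0.0]])
vel = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
corr = local_velocity_correlation(pos, vel, radius=1.0)  # 1.0
```

Values near 1 for small radii, decaying with distance, would be consistent with local clusters of correlated motion in an unpolarised swarm.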
NASA Technical Reports Server (NTRS)
Farral, Joseph F.; Seshan, P. K.; Rohatgi, Naresh K.
1991-01-01
This paper describes the Generic Modular Flow Schematic (GMFS) architecture capable of encompassing all functional elements of a physical/chemical life support system (LSS). The GMFS can be implemented to synthesize, model, analyze, and quantitatively compare many configurations of LSSs, from a simple, completely open-loop to a very complex closed-loop. The GMFS model is coded in ASPEN, a state-of-the-art chemical process simulation program, to accurately compute the material, heat, and power flow quantities for every stream in each of the subsystem functional elements (SFEs) in the chosen configuration of a life support system. The GMFS approach integrates the various SFEs and subsystems in a hierarchical and modular fashion facilitating rapid substitutions and reconfiguration of a life support system. The comprehensive ASPEN material and energy balance output is transferred to a systems and technology assessment spreadsheet for rigorous system analysis and trade studies.
Experimental and analytical study of frictional anisotropy of nanotubes
NASA Astrophysics Data System (ADS)
Riedo, Elisa; Gao, Yang; Li, Tai-De; Chiu, Hsiang-Chih; Kim, Suenne; Klinke, Christian; Tosatti, Erio
The frictional properties of carbon and boron nitride nanotubes (NTs) are very important in a variety of applications, including composite materials, carbon fibers, and micro/nano-electromechanical systems. Atomic force microscopy (AFM) is a powerful tool to investigate with nanoscale resolution the frictional properties of individual NTs. Here, we report on an experimental study of the frictional properties of different types of supported nanotubes by AFM. We also propose a quantitative model to describe and then predict the frictional properties of nanotubes sliding on a substrate along (longitudinal friction) or perpendicular to (transverse friction) their axes. This model provides a simple but general analytical relationship that describes the acquired experimental data well. As an example of potential applications, this experimental method combined with the proposed model can guide the design of better NT-ceramic composites, or the self-assembly of nanotubes on a surface in a given direction. M. Lucas et al., Nature Materials 8, 876-881 (2009).
Zweiwegintegration durch zweisprachige Bildung? Ergebnisse aus der Staatlichen Europa-Schule Berlin
NASA Astrophysics Data System (ADS)
Meier, Gabriela
2012-06-01
While there is no simple recipe for how to respond to the multitude of languages present in many European schools, this article presents a promising alternative to monolingual education. The focus is on Staatliche Europa-Schule Berlin (SESB), a two-way immersion (TWI) model that unites children whose mother tongue is German with children whose mother tongue is another locally spoken language in one class and teaches them together in two languages. Thus in this model, offered by 17 primary schools and 13 secondary schools in Berlin, pupils learn in two languages from and with each other. Based on a largely quantitative, quasi-experimental study with 603 students, evidence is provided that there are a number of peace-linguistic benefits that can promote two-way social integration, besides fostering personal and societal multilingualism. This suggests that TWI education as practised in Berlin could serve as an educational model for other multilingual parts of Europe.
Systemic Analysis Approaches for Air Transportation
NASA Technical Reports Server (NTRS)
Conway, Sheila
2005-01-01
Air transportation system designers have had only limited success using traditional operations research and parametric modeling approaches in their analyses of innovations. They need a systemic methodology for modeling of safety-critical infrastructure that is comprehensive, objective, and sufficiently concrete, yet simple enough to be used with reasonable investment. The methodology must also be amenable to quantitative analysis so issues of system safety and stability can be rigorously addressed. However, air transportation has proven itself an extensive, complex system whose behavior is difficult to describe, much less predict. There is a wide range of system analysis techniques available, but some are more appropriate for certain applications than others. Specifically in the area of complex system analysis, the literature suggests that both agent-based models and network analysis techniques may be useful. This paper discusses the theoretical basis for each approach in these applications, and explores their historic and potential further use for air transportation analysis.
Insights from mathematical modeling of renal tubular function.
Weinstein, A M
1998-01-01
Mathematical models of proximal tubule have been developed which represent the important solute species within the constraints of known cytosolic concentrations, transport fluxes, and overall epithelial permeabilities. In general, model simulations have been used to assess the quantitative feasibility of what appear to be qualitatively plausible mechanisms, or alternatively, to identify incomplete rationalization of experimental observations. The examples considered include: (1) proximal water reabsorption, for which the lateral interspace is a locus for solute-solvent coupling; (2) ammonia secretion, for which the issue is prioritizing driving forces - transport on the Na+/H+ exchanger, on the Na,K-ATPase, or ammoniagenesis; (3) formate-stimulated NaCl reabsorption, for which simple addition of a luminal membrane chloride/formate exchanger fails to represent experimental observation, and (4) balancing luminal entry and peritubular exit, in which ATP-dependent peritubular K+ channels have been implicated, but appear unable to account for the bulk of proximal tubule cell volume homeostasis.
Prediction of crosslink density of solid propellant binders. [curing of elastomers
NASA Technical Reports Server (NTRS)
Marsh, H. E., Jr.
1976-01-01
A quantitative theory is outlined which allows calculation of crosslink density of solid propellant binders from a small number of predetermined parameters such as the binder composition, the functionality distributions of the ingredients, and the extent of the curing reaction. The parameter which is partly dependent on process conditions is the extent of reaction. The proposed theoretical model is verified by independent measurement of effective chain concentration and sol and gel fractions in simple compositions prepared from model compounds. The model is shown to correlate tensile data with composition in the case of urethane-cured polyether and certain solid propellants. A formula for the branching coefficient is provided according to which, if one knows the functionality distributions of the ingredients and the corresponding equivalent weights and can measure or predict the extent of reaction, one can calculate the branching coefficient of such a system for any desired composition.
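The abstract does not reproduce the branching-coefficient formula itself; as a hedged sketch, the block below uses the classical Flory-Stockmayer expression for difunctional chains cured with f-functional branch units, the standard form on which such crosslink-density calculations are built. The parameter names and the example formulation are ours, not the paper's.

```python
def branching_coefficient(p, r, rho):
    """Flory-Stockmayer branching coefficient (alpha) for a mixture of
    difunctional chains and multifunctional branch units.

    p   -- extent of reaction of A groups (process-dependent)
    r   -- stoichiometric ratio of A to B groups
    rho -- fraction of A groups residing on branch units
    """
    return (r * p**2 * rho) / (1.0 - r * p**2 * (1.0 - rho))

def critical_alpha(f):
    """Gel point: a network forms once alpha exceeds 1 / (f - 1),
    where f is the functionality of the branch units."""
    return 1.0 / (f - 1)

# Hypothetical binder formulation: 85% cure, balanced stoichiometry,
# 40% of A groups on trifunctional crosslinkers.
alpha = branching_coefficient(p=0.85, r=1.0, rho=0.4)
gelled = alpha > critical_alpha(3)  # past the gel point
```

This illustrates the dependence the abstract describes: composition and functionality distributions fix `r` and `rho`, while the measured or predicted extent of reaction supplies `p`.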
NASA Astrophysics Data System (ADS)
Toner, John; Tu, Yu-Hai
2002-05-01
We have developed a new continuum dynamical model for the collective motion of large "flocks" of biological organisms (e.g., flocks of birds, schools of fish, herds of wildebeest, hordes of bacteria, slime molds, etc.). This model does for flocks what the Navier-Stokes equation does for fluids. The model predicts that, unlike simple fluids, flocks show huge fluctuation effects in spatial dimensions d < 4 that radically change their behavior. In d=2, it is only these effects that make it possible for the flock to move coherently at all. This explains why a million wildebeest can march together across the Serengeti plain, despite the fact that a million physicists gathered on the same plane could NOT all POINT in the same direction. Detailed quantitative predictions of this theory agree beautifully with computer simulations of flock motion.
Critical asset and portfolio risk analysis: an all-hazards framework.
Ayyub, Bilal M; McGill, William L; Kaminskiy, Mark
2007-08-01
This article develops a quantitative all-hazards framework for critical asset and portfolio risk analysis (CAPRA) that considers both natural and human-caused hazards. Following a discussion on the nature of security threats, the need for actionable risk assessments, and the distinction between asset and portfolio-level analysis, a general formula for all-hazards risk analysis is obtained that resembles the traditional model based on the notional product of consequence, vulnerability, and threat, though with clear meanings assigned to each parameter. Furthermore, a simple portfolio consequence model is presented that yields first-order estimates of interdependency effects following a successful attack on an asset. Moreover, depending on the needs of the decisions being made and available analytical resources, values for the parameters in this model can be obtained at a high level or through detailed systems analysis. Several illustrative examples of the CAPRA methodology are provided.
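The notional product underlying the CAPRA formula can be sketched in a few lines; the function and scenario values below are hypothetical, meant only to illustrate the consequence-vulnerability-threat structure the article describes.

```python
def capra_risk(consequence, vulnerability, threat):
    """First-order risk for one asset-hazard pair: the notional product of
    consequence (loss given a successful attack), vulnerability
    (P(success | attempt)), and threat (P(attempt) over the horizon)."""
    return consequence * vulnerability * threat

# Hypothetical two-asset portfolio (all numbers invented for illustration).
scenarios = [
    {"consequence": 5e6, "vulnerability": 0.3, "threat": 0.01},
    {"consequence": 1e7, "vulnerability": 0.1, "threat": 0.05},
]
portfolio_risk = sum(capra_risk(**s) for s in scenarios)  # ~65000
```

A portfolio-level model would additionally adjust each asset's consequence term for interdependency effects, as the article's first-order consequence model does.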
Understanding land surface evapotranspiration with satellite multispectral measurements
NASA Technical Reports Server (NTRS)
Menenti, M.
1993-01-01
Quantitative use of remote multispectral measurements to study and map land surface evapotranspiration has been a challenging issue for the past 20 years. Past work is reviewed against process physics. A simple two-layer combination-type model is used which is applicable to both vegetation and bare soil. A theoretical analysis shows which land surface properties are implicitly defined by such evaporation models and assesses whether they are measurable in principle. Conceptual implications of the spatial correlation of land surface properties, as observed by means of remote multispectral measurements, are illustrated with results of work done in arid zones. A normalization of spatial variability of land surface evaporation is proposed by defining a location-dependent potential evaporation and surface temperature range. Examples of the application of remotely sensed estimates of evaporation to hydrological modeling studies in Egypt and Argentina are presented.
Estimating explosion properties of normal hydrogen-rich core-collapse supernovae
NASA Astrophysics Data System (ADS)
Pejcha, Ondrej
2017-08-01
Recent parameterized 1D explosion models of hundreds of core-collapse supernova progenitors suggest that success and failure are intertwined in a complex pattern that is not a simple function of the progenitor initial mass. This rugged landscape is present also in other explosion properties, allowing for quantitative tests of the neutrino mechanism from observations of hundreds of supernovae discovered every year. We present a new self-consistent and versatile method that derives photospheric radius and temperature variations of normal hydrogen-rich core-collapse supernovae based on their photometric measurements and expansion velocities. We construct SED and bolometric light curves, determine explosion energies, ejecta and nickel masses while taking into account all uncertainties and covariances of the model. We describe the efforts to compare the inferences to the predictions of the neutrino mechanism. The model can be adapted to include more physical assumptions to utilize primarily photometric data coming from surveys such as LSST.
Two-electron bond-orbital model, 1
NASA Technical Reports Server (NTRS)
Huang, C.; Moriarty, J. A.; Sher, A.; Breckenridge, R. A.
1975-01-01
Harrison's one-electron bond-orbital model of tetrahedrally coordinated solids was generalized to a two-electron model, using an extension of the method of Falicov and Harris for treating the hydrogen molecule. The six eigenvalues and eigenstates of the two-electron anion-cation Hamiltonian entering this theory can be found exactly in general. The two-electron formalism is shown to provide a useful basis for calculating both non-magnetic and magnetic properties of semiconductors in perturbation theory. As an example of the former, expressions for the electric susceptibility and the dielectric constant were calculated. As an example of the latter, new expressions for the nuclear exchange and pseudo-dipolar coefficients were calculated. A simple theoretical relationship between the dielectric constant and the exchange coefficient was also found in the limit of no correlation. These expressions were quantitatively evaluated in the limit of no correlation for twenty semiconductors.
Models of SOL transport and their relation to scaling of the divertor heat flux width in DIII-D
Makowski, M. A.; Lasnier, C. J.; Leonard, A. W.; ...
2014-10-06
Strong support for the critical pressure gradient model for the heat flux width has been obtained, in that the measured separatrix pressure gradient lies below and scales similarly to the pressure gradient limit obtained from the ideal, infinite-n stability codes, BALOO and 2DX, in all cases that have been examined. Predictions of a heuristic drift model for the heat flux width are also in qualitative agreement with the measurements. We obtained these results by using an improved high rep-rate and higher edge spatial resolution Thomson scattering system on DIII-D to measure the upstream electron temperature and density profiles. In order to compare theory and experiment, profiles of density, temperature, and pressure for both electrons and ions are needed, as well as values of these quantities at the separatrix. We also developed a simple method to identify a proxy for the separatrix.
Hierarchical lattice models of hydrogen-bond networks in water
NASA Astrophysics Data System (ADS)
Dandekar, Rahul; Hassanali, Ali A.
2018-06-01
We develop a graph-based model of the hydrogen-bond network in water, with a view toward quantitatively modeling the molecular-level correlational structure of the network. The networks formed are studied by constructing the model on two infinite-dimensional lattices. Our models are built bottom up, based on microscopic information coming from atomistic simulations, and we show that the predictions of the model are consistent with known results from ab initio simulations of liquid water. We show that simple entropic models can predict the correlations and clustering of local-coordination defects around tetrahedral waters observed in the atomistic simulations. We also find that orientational correlations between bonds are longer ranged than density correlations, determine the directional correlations within closed loops, and show that the patterns of water wires within these structures are also consistent with previous atomistic simulations. Our models show the existence of density and compressibility anomalies, as seen in the real liquid, and the phase diagram of these models is consistent with the singularity-free scenario previously proposed by Sastry and coworkers [Phys. Rev. E 53, 6144 (1996), 10.1103/PhysRevE.53.6144].
Pasotti, Lorenzo; Bellato, Massimo; Casanova, Michela; Zucca, Susanna; Cusella De Angelis, Maria Gabriella; Magni, Paolo
2017-01-01
The study of simplified, ad-hoc constructed model systems can help to elucidate if quantitatively characterized biological parts can be effectively re-used in composite circuits to yield predictable functions. Synthetic systems designed from the bottom-up can enable the building of complex interconnected devices via a rational approach, supported by mathematical modelling. However, such a process is affected by several, usually unmodelled, sources of unpredictability, such as cell burden. Here, we analyzed a set of synthetic transcriptional cascades in Escherichia coli. We aimed to test the predictive power of a simple Hill function activation/repression model (no-burden model, NBM) and of a recently proposed model, including Hill functions and the modulation of protein expression by cell load (burden model, BM). To test the bottom-up approach, the circuit collection was divided into training and test sets, used to learn individual component functions and test the predicted output of interconnected circuits, respectively. Among the constructed configurations, two test set circuits showed unexpected logic behaviour. Both NBM and BM were able to predict the quantitative output of interconnected devices with expected behaviour, but only the BM was also able to predict the output of one circuit with unexpected behaviour. Moreover, considering training and test set data together, the BM captures circuit outputs with higher accuracy than the NBM, which is unable to capture the experimental output exhibited by some of the circuits even qualitatively. Finally, resource usage parameters, estimated via the BM, guided the successful construction of new corrected variants of the two circuits showing unexpected behaviour. Superior descriptive and predictive capabilities were achieved considering resource limitation modelling, but further efforts are needed to improve the accuracy of models for biological engineering.
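As a rough sketch of the two model classes being compared, the block below pairs a standard Hill activation function (the NBM building block) with a hypothetical first-order resource-competition correction standing in for the burden term; it is not the paper's exact BM, and all parameter values are invented.

```python
def hill_activation(x, k, n, vmax):
    """No-burden model (NBM) building block: standard Hill activation."""
    return vmax * x**n / (k**n + x**n)

def with_burden(rate, load, capacity=1.0):
    """Burden model (BM) sketch: output scaled down by competition for
    shared cellular resources. This first-order form is our illustration,
    not the paper's exact burden term."""
    return rate * capacity / (capacity + load)

# Two-stage cascade, inducer -> TF -> reporter (all parameters invented).
tf = hill_activation(x=10.0, k=5.0, n=2.0, vmax=1.0)        # 0.8
reporter_nbm = hill_activation(tf, k=0.5, n=2.0, vmax=1.0)  # NBM prediction
reporter_bm = with_burden(reporter_nbm, load=tf)            # BM prediction
```

The BM prediction is always below the NBM one here, illustrating how unmodelled load on shared resources can make an interconnected circuit underperform its bottom-up prediction.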
Simple & Safe Genomic DNA Isolation.
ERIC Educational Resources Information Center
Moss, Robert; Solomon, Sondra
1991-01-01
A procedure for purifying DNA using either bacteria or rat liver is presented. Directions for doing a qualitative DNA assay using diphenylamine and a quantitative DNA assay using spectroscopy are included. (KR)
Identifying habitats at risk: simple models can reveal complex ecosystem dynamics.
Maxwell, Paul S; Pitt, Kylie A; Olds, Andrew D; Rissik, David; Connolly, Rod M
2015-03-01
The relationship between ecological impact and ecosystem structure is often strongly nonlinear, so that small increases in impact levels can cause a disproportionately large response in ecosystem structure. Nonlinear ecosystem responses can be difficult to predict because locally relevant data sets can be difficult or impossible to obtain. Bayesian networks (BN) are an emerging tool that can help managers to define ecosystem relationships using a range of data types, from comprehensive quantitative data sets to expert opinion. We show how a simple BN can reveal nonlinear dynamics in seagrass ecosystems using ecological relationships sourced from the literature. We first developed a conceptual diagram by cataloguing the ecological responses of seagrasses to a range of drivers and impacts. We used the conceptual diagram to develop a BN populated with values sourced from published studies. We then applied the BN to show that the amount of initial seagrass biomass has a mitigating effect on the level of impact a meadow can withstand without loss, and that meadow recovery can often require disproportionately large improvements in impact levels. This mitigating effect resulted in the middle ranges of impact levels having a wide likelihood of seagrass presence, a situation known as bistability. Finally, we applied the model in a case study to identify the risk of loss and the likelihood of recovery for the conservation and management of seagrass meadows in Moreton Bay, Queensland, Australia. We used the model to predict the likelihood of bistability in 23 locations in the Bay. The model predicted bistability in seven locations, most of which have experienced seagrass loss at some stage in the past 25 years, providing essential information for potential future restoration efforts. Our results demonstrate the capacity of simple, flexible modeling tools to facilitate collation and synthesis of disparate information.
This approach can be adopted in the initial stages of conservation programs as a low-cost and relatively straightforward way to provide preliminary assessments of nonlinear dynamics in ecosystems.
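The mitigating effect of initial biomass can be conveyed with a minimal conditional-probability fragment of the kind a Bayesian network encodes; the probabilities below are invented for illustration only, not values from the study.

```python
# P(seagrass present | impact level, initial biomass): toy CPT.
# The wide spread of outcomes at the middle impact level is what
# appears as bistability in the full model.
cpt = {
    ("low",  "high"): 0.95, ("low",  "low"): 0.80,
    ("mid",  "high"): 0.70, ("mid",  "low"): 0.30,
    ("high", "high"): 0.20, ("high", "low"): 0.05,
}

def p_present(impact, biomass):
    """Look up the probability of seagrass presence for one scenario."""
    return cpt[(impact, biomass)]

# Higher initial biomass mitigates the same impact level
mitigation = p_present("mid", "high") - p_present("mid", "low")
```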
Jack, Barbara A; O'Brien, Mary R; Kirton, Jennifer A; Marley, Kate; Whelan, Alison; Baldry, Catherine R; Groves, Karen E
2013-12-01
Good communication skills in healthcare professionals are acknowledged as a core competency. The consequences of poor communication are well recognised, with far-reaching costs including reduced treatment compliance, higher psychological morbidity, incorrect or delayed diagnoses, and increased complaints. The Simple Skills Secrets is a visual, easily memorised model of communication for healthcare staff to respond to the distress or unanswerable questions of patients, families and colleagues. To explore the impact of the Simple Skills Secrets model of communication training on the general healthcare workforce. An evaluation methodology encompassing quantitative pre- and post-course testing of confidence and willingness to have conversations with distressed patients, carers and colleagues, and qualitative semi-structured telephone interviews with participants 6-8 weeks post course. During the evaluation, 153 staff undertook the training, of which 149 completed the pre- and post-training questionnaire. A purposive sampling approach was adopted for the follow-up qualitative interviews and 14 agreed to participate. There is a statistically significant improvement in both willingness and confidence for all categories (overall confidence score, t(148) = -15.607, p < 0.05; overall willingness score, t(148) = -10.878, p < 0.05), with the greatest improvement in confidence in communicating with carers (pre-course mean 6.171 to post-course mean 8.171). There is no statistically significant difference between the registered and support staff. Several themes were obtained from the qualitative data, including: a method of communicating differently, a structured approach, thinking differently and additional skills. The value of the model in clinical practice was reported. This model can be suggested as increasing the confidence of staff in dealing with a myriad of situations which, if handled appropriately, can lead to increased patient and carer satisfaction.
Empowering staff appears to have increased their willingness to undertake these conversations, which could lead to earlier intervention and minimise distress. Copyright © 2013 Elsevier Ltd. All rights reserved.
Yankson, Kweku K.; Steck, Todd R.
2009-01-01
We present a simple strategy for isolating and accurately enumerating target DNA from high-clay-content soils: desorption with buffers, an optional magnetic capture hybridization step, and quantitation via real-time PCR. With the developed technique, μg quantities of DNA were extracted from mg samples of pure kaolinite and a field clay soil. PMID:19633108
Modern projection of the old electroscope for nuclear radiation quantitative work and demonstrations
NASA Astrophysics Data System (ADS)
Oliveira Bastos, Rodrigo; Baltokoski Boch, Layara
2017-11-01
Although quantitative measurements in radioactivity teaching and research are often believed to be possible only with high technology, early work in this area was fully accomplished with very simple apparatus such as zinc sulphide screens and electroscopes. This article presents an experimental practice using the electroscope, which is a very simple apparatus that has been widely used for educational purposes, although generally for qualitative work. The main objective is to show the possibility of measuring radioactivity not only in qualitative demonstrations, but also in quantitative experimental practices. The experimental set-up is a low-cost ion chamber connected to an electroscope in a configuration very similar to that used by Marie and Pierre Curie, Rutherford, Geiger, Pacini, Hess and other great researchers from the era of the great discoveries in nuclear and high-energy particle physics. An electroscope leaf is filmed and projected, permitting the collection of quantitative data for measurement of the half-life of 220Rn emanated from lantern mantles. The article presents the experimental procedures and the expected results, indicating that the experiment may provide support for nuclear physics classes. These practices could spread widely to either university or school didactic laboratories, and the apparatus has the potential to allow the development of new teaching activities for nuclear physics.
Hruska, Carrie B; Geske, Jennifer R; Swanson, Tiffinee N; Mammel, Alyssa N; Lake, David S; Manduca, Armando; Conners, Amy Lynn; Whaley, Dana H; Scott, Christopher G; Carter, Rickey E; Rhodes, Deborah J; O'Connor, Michael K; Vachon, Celine M
2018-06-05
Background parenchymal uptake (BPU), which refers to the level of Tc-99m sestamibi uptake within normal fibroglandular tissue on molecular breast imaging (MBI), has been identified as a breast cancer risk factor, independent of mammographic density. Prior analyses have used subjective categories to describe BPU. We evaluate a new quantitative method for assessing BPU by testing its reproducibility, comparing quantitative results with previously established subjective BPU categories, and determining the association of quantitative BPU with breast cancer risk. Two nonradiologist operators independently performed region-of-interest analysis on MBI images viewed in conjunction with corresponding digital mammograms. Quantitative BPU was defined as a unitless ratio of the average pixel intensity (counts/pixel) within the fibroglandular tissue versus the average pixel intensity in fat. Operator agreement and the correlation of quantitative BPU measures with subjective BPU categories assessed by expert radiologists were determined. Percent density on mammograms was estimated using Cumulus. The association of quantitative BPU with breast cancer (per one unit BPU) was examined within an established case-control study of 62 incident breast cancer cases and 177 matched controls. Quantitative BPU ranged from 0.4 to 3.2 across all subjects and was on average higher in cases compared to controls (1.4 versus 1.2, p < 0.007 for both operators). Quantitative BPU was strongly correlated with subjective BPU categories (Spearman's r = 0.59 to 0.69, p < 0.0001, for each paired combination of two operators and two radiologists). Interoperator and intraoperator agreement in the quantitative BPU measure, assessed by intraclass correlation, was 0.92 and 0.98, respectively. Quantitative BPU measures showed either no correlation or weak negative correlation with mammographic percent density. 
In a model adjusted for body mass index and percent density, higher quantitative BPU was associated with increased risk of breast cancer for both operators (OR = 4.0, 95% confidence interval (CI) 1.6-10.1, and 2.4, 95% CI 1.2-4.7). Quantitative measurement of BPU, defined as the ratio of average counts in fibroglandular tissue relative to that in fat, can be reliably performed by nonradiologist operators with a simple region-of-interest analysis tool. Similar to results obtained with subjective BPU categories, quantitative BPU is a functional imaging biomarker of breast cancer risk, independent of mammographic density and hormonal factors.
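The quantitative BPU defined above reduces to a ratio of mean pixel intensities over two operator-drawn regions, which can be sketched as follows (the masks and image values are toy stand-ins for the MBI data):

```python
import numpy as np

def quantitative_bpu(image, fibroglandular_mask, fat_mask):
    """Unitless BPU: mean counts/pixel in fibroglandular tissue
    divided by mean counts/pixel in fat."""
    return image[fibroglandular_mask].mean() / image[fat_mask].mean()

# Toy 2x2 "image": top row fibroglandular ROI, bottom row fat ROI
img = np.array([[30.0, 30.0], [10.0, 10.0]])
fg = np.array([[True, True], [False, False]])
fat = np.array([[False, False], [True, True]])
bpu = quantitative_bpu(img, fg, fat)   # 30 / 10 = 3.0
```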
Quantifying the influence of sediment source area sampling on detrital thermochronometer data
NASA Astrophysics Data System (ADS)
Whipp, D. M., Jr.; Ehlers, T. A.; Coutand, I.; Bookhagen, B.
2014-12-01
Detrital thermochronology offers a unique advantage over traditional bedrock thermochronology because of its sensitivity to sediment production and transportation to sample sites. In mountainous regions, modern fluvial sediment is often collected and dated to determine the past (10^5 to >10^7 year) exhumation history of the upstream drainage area. Though potentially powerful, the interpretation of detrital thermochronometer data derived from modern fluvial sediment is challenging because of spatial and temporal variations in sediment production and transport, and target mineral concentrations. Thermochronometer age prediction models provide a quantitative basis for data interpretation, but it can be difficult to separate variations in catchment bedrock ages from the effects of variable basin denudation and sediment transport. We present two examples of quantitative data interpretation using detrital thermochronometer data from the Himalaya, focusing on the influence of spatial and temporal variations in basin denudation on predicted age distributions. We combine age predictions from the 3D thermokinematic numerical model Pecube with simple models for sediment sampling in the upstream drainage basin area to assess the influence of variations in sediment production by different geomorphic processes or scaled by topographic metrics. We first consider a small catchment from the central Himalaya where bedrock landsliding appears to have affected the observed muscovite 40Ar/39Ar age distributions. Using a simple model of random landsliding with a power-law landslide frequency-area relationship we find that the sediment residence time in the catchment has a major influence on predicted age distributions. In the second case, we compare observed detrital apatite fission-track age distributions from 16 catchments in the Bhutan Himalaya to ages predicted using Pecube and scaled by various topographic metrics.
Preliminary results suggest that predicted age distributions scaled by the rock uplift rate in Pecube are statistically equivalent to the observed age distributions for ~75% of the catchments, and that agreement may improve when ages are scaled by local relief or by specific stream power weighted with satellite-derived precipitation. Ongoing work is exploring the effects of scaling by other topographic metrics.
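A random landsliding model with a power-law frequency-area relationship can be sketched via inverse-transform sampling from a Pareto-type distribution; the exponent and minimum area below are assumed for illustration, not the study's calibrated values.

```python
import random
from statistics import median

def sample_landslide_area(a_min, beta, rng):
    """Draw one landslide area from p(A) ∝ A**-beta for A >= a_min
    (inverse-transform sampling; Pareto shape alpha = beta - 1)."""
    u = rng.random()
    return a_min * (1.0 - u) ** (-1.0 / (beta - 1.0))

rng = random.Random(0)
areas = [sample_landslide_area(1e3, 2.4, rng) for _ in range(10_000)]
med = median(areas)   # theoretical median = a_min * 2**(1/1.4), about 1641
```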
Wang, Haidong; Yang, Guangsheng; Zhou, Jinyu; Pei, Jiang; Zhang, Qiangfeng; Song, Xingfa; Sun, Zengxian
2016-08-01
In this study, a simple and sensitive ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method was developed and validated for quantitation of droxidopa in human plasma for the first time. A simple plasma protein precipitation method using methanol containing 3% formic acid was selected, and the separation was achieved on an Acquity UPLC™ BEH Amide column (2.1 mm × 50 mm, 1.7 μm) with gradient elution using acetonitrile, ammonium formate buffer and formic acid as the mobile phase. The detection of droxidopa and benserazide (internal standard, IS) was performed using positive-ion electrospray tandem mass spectrometry via multiple reaction monitoring (MRM). The precursor-to-product ion transitions m/z 214.2→m/z 152.0 for droxidopa and m/z 258.1→m/z 139.1 for the IS were used for quantification. A lower limit of quantification of 5.00 ng/mL was achieved and the calibration curve was linear over the range 5.00-4000 ng/mL using a weighted (1/x^2) linear regression model. Intra-assay and inter-assay precision was less than 10.2%, and the accuracy ranged from 0.1% to 2.1%. Stability, recovery and matrix effects were within the acceptance criteria recommended by the regulatory bioanalytical guidelines. The method was successfully applied to a pharmacokinetic study of droxidopa in healthy Chinese volunteers. Copyright © 2016. Published by Elsevier B.V.
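The weighted (1/x^2) linear regression used for such calibration curves can be sketched as ordinary weighted least squares; the concentration levels below span the validated range, but the detector responses are synthetic.

```python
import numpy as np

def weighted_linear_fit(x, y):
    """Weighted least squares with 1/x**2 weights, as commonly used
    for bioanalytical calibration curves. Returns (slope, intercept)."""
    w = 1.0 / x**2
    x_bar = np.sum(w * x) / np.sum(w)
    y_bar = np.sum(w * y) / np.sum(w)
    slope = np.sum(w * (x - x_bar) * (y - y_bar)) / np.sum(w * (x - x_bar) ** 2)
    return slope, y_bar - slope * x_bar

# Calibration standards (ng/mL) with synthetic, perfectly linear responses
conc = np.array([5.0, 50.0, 500.0, 4000.0])
resp = 0.002 * conc + 0.01
slope, intercept = weighted_linear_fit(conc, resp)
```

The 1/x^2 weighting keeps the low-concentration standards from being swamped by the high end of the range, which is why it is the conventional choice when back-calculated accuracy must hold near the lower limit of quantification.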
Collective and single cell behavior in epithelial contact inhibition.
Puliafito, Alberto; Hufnagel, Lars; Neveu, Pierre; Streichan, Sebastian; Sigal, Alex; Fygenson, D Kuchnir; Shraiman, Boris I
2012-01-17
Control of cell proliferation is a fundamental aspect of tissue physiology central to morphogenesis, wound healing, and cancer. Although many of the molecular genetic factors are now known, the system level regulation of growth is still poorly understood. A simple form of inhibition of cell proliferation is encountered in vitro in normally differentiating epithelial cell cultures and is known as "contact inhibition." The study presented here provides a quantitative characterization of contact inhibition dynamics on tissue-wide and single cell levels. Using long-term tracking of cultured Madin-Darby canine kidney cells we demonstrate that inhibition of cell division in a confluent monolayer follows inhibition of cell motility and sets in when mechanical constraint on local expansion causes divisions to reduce cell area. We quantify cell motility and cell cycle statistics in the low density confluent regime and their change across the transition to epithelial morphology which occurs with increasing cell density. We then study the dynamics of cell area distribution arising through reductive division, determine the average mitotic rate as a function of cell size, and demonstrate that complete arrest of mitosis occurs when cell area falls below a critical value. We also present a simple computational model of growth mechanics which captures all aspects of the observed behavior. Our measurements and analysis show that contact inhibition is a consequence of mechanical interaction and constraint rather than interfacial contact alone, and define quantitative phenotypes that can guide future studies of molecular mechanisms underlying contact inhibition.
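The size-dependent arrest of mitosis can be caricatured with a rate that vanishes below a critical cell area, with each reductive division halving the area; the functional form and numbers are illustrative assumptions, not the study's fitted model.

```python
def mitotic_rate(area, a_crit, r_max):
    """Toy size-dependent mitotic rate: positive above the critical
    area a_crit, zero below it (complete arrest)."""
    return r_max * max(0.0, (area - a_crit) / a_crit)

# Reductive division: each round halves the cell area, so a lineage
# starting above a_crit arrests after a few divisions.
area, a_crit, divisions = 800.0, 150.0, 0
while mitotic_rate(area, a_crit, r_max=1.0) > 0.0:
    area /= 2.0
    divisions += 1
```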
Yamaguchi, Hideto; Hirakura, Yutaka; Shirai, Hiroki; Mimura, Hisashi; Toyo'oka, Toshimasa
2011-06-01
The need for a simple and high-throughput method for identifying the tertiary structure of protein pharmaceuticals has increased. In this study, a simple method for mapping the protein fold is proposed for use as a complementary quality test. This method is based on cross-linking a protein using bis(sulfosuccinimidyl)suberate (BS(3)), followed by peptide mapping by LC-MS. Consensus interferon (CIFN) was used as the model protein. The tryptic map obtained via liquid chromatography-tandem mass spectrometry (LC-MS/MS) and the mass mapping obtained via matrix-assisted laser desorption/ionization time-of-flight mass spectrometry were used to identify cross-linked peptides. While LC-MS/MS analyses found that BS(3) formed cross-links in the loop region of the protein, which was regarded as the biologically active site, sodium dodecyl sulfate-polyacrylamide gel electrophoresis demonstrated that cross-linking occurred within a protein molecule, but not between protein molecules. The occurrence of cross-links at the active site depends greatly on the conformation of the protein, which is determined by the denaturing conditions. Quantitative evaluation of the tertiary structure of CIFN was thus possible by monitoring the amounts of cross-linked peptides generated. Assuming that background information is available at the development stage, this method may be applicable to process development as a complementary test for quality control. Copyright © 2011 Elsevier B.V. All rights reserved.
The halo model in a massive neutrino cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Massara, Elena; Villaescusa-Navarro, Francisco; Viel, Matteo, E-mail: emassara@sissa.it, E-mail: villaescusa@oats.inaf.it, E-mail: viel@oats.inaf.it
2014-12-01
We provide a quantitative analysis of the halo model in the context of massive neutrino cosmologies. We discuss all the ingredients necessary to model the non-linear matter and cold dark matter power spectra and compare with the results of N-body simulations that incorporate massive neutrinos. Our neutrino halo model is able to capture the non-linear behavior of matter clustering with a ∼20% accuracy up to very non-linear scales of k = 10 h/Mpc (which would be affected by baryon physics). The largest discrepancies arise in the range k = 0.5 – 1 h/Mpc, where the 1-halo and 2-halo terms are comparable, and are present also in a massless neutrino cosmology. However, at scales k < 0.2 h/Mpc our neutrino halo model agrees with the results of N-body simulations at the level of 8% for total neutrino masses of < 0.3 eV. We also model the neutrino non-linear density field as a sum of a linear and clustered component and predict the neutrino power spectrum and the cold dark matter-neutrino cross-power spectrum up to k = 1 h/Mpc with ∼30% accuracy. For masses below 0.15 eV the neutrino halo model captures the neutrino-induced suppression, cast in terms of matter power ratios between massive and massless scenarios, with a 2% agreement with the results of N-body/neutrino simulations. Finally, we provide a simple application of the halo model: the computation of the clustering of galaxies, in massless and massive neutrino cosmologies, using a simple Halo Occupation Distribution scheme and our halo model extension.
Designing for time-dependent material response in spacecraft structures
NASA Technical Reports Server (NTRS)
Hyer, M. W.; Oleksuk, Lynda L. S.; Bowles, D. E.
1992-01-01
To study the influence on overall deformations of the time-dependent constitutive properties of fiber-reinforced polymeric matrix composite materials being considered for use in orbiting precision segmented reflectors, simple sandwich beam models are developed. The beam models include layers representing the face sheets, the core, and the adhesive bonding of the face sheets to the core. A three-layer model lumps the adhesive layers with the face sheets or core, while a five-layer model considers the adhesive layers explicitly. The deformation response of the three-layer and five-layer sandwich beam models to a midspan point load is studied. This elementary loading leads to a simple analysis, and it is easy to create this loading in the laboratory. Using the correspondence principle of viscoelasticity, the models representing the elastic behavior of the two beams are transformed into time-dependent models. Representative cases of time-dependent material behavior for the face sheet material, the core material, and the adhesive are used to evaluate the influence of these time-dependent constituents on the deformations of the beam. As an example of the results presented, if it is assumed that, as a worst case, the polymer-dominated shear properties of the core behave as a Maxwell fluid such that under constant shear stress the shear strain increases by a factor of 10 in 20 years, then it is shown that the beam deflection increases by a factor of 1.4 during that time. In addition to quantitative conclusions, several assumptions are discussed which simplify the analyses for use with more complicated material models. Finally, it is shown that the simpler three-layer model suffices in many situations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorooshian, S.; Bales, R.C.; Gupta, V.K.
1992-02-01
In order to better understand the implications of acid deposition in watershed systems in the Sierra Nevada, the California Air Resources Board (CARB) initiated an intensive integrated watershed study at Emerald Lake in Sequoia National Park. The comprehensive nature of the data obtained from these studies provided an opportunity to develop a quantitative description of how watershed characteristics and inputs to the watershed influence within-watershed fluxes, chemical composition of streams and lakes, and, therefore, biotic processes. Two different but closely related modeling approaches were followed. In the first, the emphasis was placed on the development of systems-theoretic models. In the second approach, development of a compartmental model was undertaken. The systems-theoretic effort results in simple time-series models that allow consideration of the stochastic properties of model errors. The compartmental model (the University of Arizona Alpine Hydrochemical Model (AHM)) is a comprehensive and detailed description of the various interacting physical and chemical processes occurring on the watershed.
Inductive reasoning about causally transmitted properties.
Shafto, Patrick; Kemp, Charles; Bonawitz, Elizabeth Baraff; Coley, John D; Tenenbaum, Joshua B
2008-11-01
Different intuitive theories constrain and guide inferences in different contexts. Formalizing simple intuitive theories as probabilistic processes operating over structured representations, we present a new computational model of category-based induction about causally transmitted properties. A first experiment demonstrates undergraduates' context-sensitive use of taxonomic and food web knowledge to guide reasoning about causal transmission and shows good qualitative agreement between model predictions and human inferences. A second experiment demonstrates strong quantitative and qualitative fits to inferences about a more complex artificial food web. A third experiment investigates human reasoning about complex novel food webs where species have known taxonomic relations. Results demonstrate a double-dissociation between the predictions of our causal model and a related taxonomic model [Kemp, C., & Tenenbaum, J. B. (2003). Learning domain structures. In Proceedings of the 25th annual conference of the cognitive science society]: the causal model predicts human inferences about diseases but not genes, while the taxonomic model predicts human inferences about genes but not diseases. We contrast our framework with previous models of category-based induction and previous formal instantiations of intuitive theories, and outline challenges in developing a complete model of context-sensitive reasoning.
Kim, Yong-Il; Im, Hyung-Jun; Paeng, Jin Chul; Lee, Jae Sung; Eo, Jae Seon; Kim, Dong Hyun; Kim, Euishin E; Kang, Keon Wook; Chung, June-Key; Lee, Dong Soo
2012-12-01
(18)F-FP-CIT positron emission tomography (PET) is an effective imaging method for dopamine transporters. In usual clinical practice, (18)F-FP-CIT PET is analyzed visually or quantified using manual delineation of a volume of interest (VOI) for the striatum. In this study, we suggested and validated two simple quantitative methods based on automatic VOI delineation using statistical probabilistic anatomical mapping (SPAM) and isocontour margin setting. Seventy-five (18)F-FP-CIT PET images acquired in routine clinical practice were used for this study. A study-specific image template was made and the subject images were normalized to the template. Afterwards, uptakes in the striatal regions and cerebellum were quantified using probabilistic VOI based on SPAM. A quantitative parameter, QSPAM, was calculated to simulate binding potential. Additionally, the functional volume of each striatal region and its uptake were measured in automatically delineated VOI using isocontour margin setting. Uptake-volume product (QUVP) was calculated for each striatal region. QSPAM and QUVP were compared with visual grading and the influence of cerebral atrophy on the measurements was tested. Image analyses were successful in all the cases. Both QSPAM and QUVP were significantly different according to visual grading (P < 0.001). The agreements of QSPAM and QUVP with visual grading were slight to fair for the caudate nucleus (κ = 0.421 and 0.291, respectively) and good to perfect for the putamen (κ = 0.663 and 0.607, respectively). Also, QSPAM and QUVP had a significant correlation with each other (P < 0.001). Cerebral atrophy made a significant difference in QSPAM and QUVP of the caudate nuclei regions with decreased (18)F-FP-CIT uptake. Simple quantitative measurements of QSPAM and QUVP showed acceptable agreement with visual grading.
Although QSPAM in some groups may be influenced by cerebral atrophy, these simple methods are expected to be effective in the quantitative analysis of (18)F-FP-CIT PET in usual clinical practice.
Monosodium glutamate for simple photometric iron analysis
NASA Astrophysics Data System (ADS)
Prasetyo, E.
2018-01-01
A simple photometric method for iron analysis using monosodium glutamate (MSG) is proposed. The method could be used as an alternative that is technically simple, economical, quantitative, readily available, scientifically sound and environmentally friendly. The rapid reaction of iron(III) with glutamate in sodium chloride-hydrochloric acid buffer (pH 2) to form a red-brown complex served as the basis of the photometric determination, which held over an iron(III) concentration range of 1.6 - 80 µg/ml. The method could be applied to determine iron concentration in soil with satisfactory results (accuracy and precision) compared with other photometric and atomic absorption spectrometry results.
Sun, Lirui; Jia, Longfei; Xie, Xing; Xie, Kaizhou; Wang, Jianfeng; Liu, Jianyu; Cui, Lulu; Zhang, Genxi; Dai, Guojun; Wang, Jinyu
2016-02-01
In the present study, we developed a simple, rapid and specific method for the quantitative analysis of the contents of amoxicillin (AMO), AMO metabolites and ampicillin (AMP) in eggs. This method uses a simple liquid-liquid extraction with acetonitrile followed by liquid chromatography-tandem mass spectrometry (LC-MS/MS). The optimized method has been validated according to requirements defined by the European Union and the Food and Drug Administration. Extraction recoveries of the target compounds from the egg at 5, 10 and 25 μg/kg were all higher than 80%, with relative standard deviations not exceeding 10.00%. The limits of quantification in eggs were below the maximum residue limits (MRLs). The decision limits (CCα) ranged between 11.1 and 11.5 μg/kg, while the detection capabilities (CCβ) ranged from 12.1 to 13.0 μg/kg. These values were very close to the corresponding MRLs. Finally, the new approach was successfully verified for the quantitative determination of these analytes in 40 commercial eggs from local supermarkets. Copyright © 2015 Elsevier Ltd. All rights reserved.
Self-powered integrated microfluidic point-of-care low-cost enabling (SIMPLE) chip
Yeh, Erh-Chia; Fu, Chi-Cheng; Hu, Lucy; Thakur, Rohan; Feng, Jeffrey; Lee, Luke P.
2017-01-01
Portable, low-cost, and quantitative nucleic acid detection is desirable for point-of-care diagnostics; however, current polymerase chain reaction testing often requires time-consuming multiple steps and costly equipment. We report an integrated microfluidic diagnostic device capable of on-site quantitative nucleic acid detection directly from the blood without separate sample preparation steps. First, we prepatterned the amplification initiator [magnesium acetate (MgOAc)] on the chip to enable digital nucleic acid amplification. Second, a simplified sample preparation step is demonstrated, where the plasma is separated autonomously into 224 microwells (100 nl per well) without any hemolysis. Furthermore, self-powered microfluidic pumping without any external pumps, controllers, or power sources is accomplished by an integrated vacuum battery on the chip. This simple chip allows rapid quantitative digital nucleic acid detection directly from human blood samples (10 to 10^5 copies of methicillin-resistant Staphylococcus aureus DNA per microliter, ~30 min, via isothermal recombinase polymerase amplification). These autonomous, portable, lab-on-chip technologies provide promising foundations for future low-cost molecular diagnostic assays. PMID:28345028
Reference condition approach to restoration planning
Nestler, J.M.; Theiling, C.H.; Lubinski, S.J.; Smith, D.L.
2010-01-01
Ecosystem restoration planning requires quantitative rigor to evaluate alternatives, define end states, report progress and perform environmental benefits analysis (EBA). Unfortunately, existing planning frameworks are, at best, semi-quantitative. In this paper, we: (1) describe a quantitative restoration planning approach based on a comprehensive but simple mathematical framework that can be used to effectively apply knowledge and evaluate alternatives, (2) use the approach to derive a simple but precisely defined lexicon based on the reference condition concept and allied terms and (3) illustrate the approach with an example from the Upper Mississippi River System (UMRS) using hydrologic indicators. The approach supports the development of a scaleable restoration strategy that, in theory, can be expanded to ecosystem characteristics such as hydraulics, geomorphology, habitat and biodiversity. We identify three reference condition types, best achievable condition (A_BAC), measured magnitude (A_MM,i, which can be determined at one or many times and places) and desired future condition (A_DFC), that, when used with the mathematical framework, provide a complete system of accounts useful for goal-oriented system-level management and restoration. Published in 2010 by John Wiley & Sons, Ltd.
A Simple Configuration for Quantitative Phase Contrast Microscopy of Transmissible Samples
NASA Astrophysics Data System (ADS)
Sengupta, Chandan; Dasgupta, Koustav; Bhattacharya, K.
Phase microscopy attempts to visualize and quantify the phase distribution of samples which are otherwise invisible under the microscope without the use of stains. The two principal approaches to phase microscopy are essentially those of Fourier plane modulation and interferometric techniques. Although the former, first proposed by Zernike, had been the harbinger of phase microscopy, it was the latter that allowed for quantitative evaluation of phase samples. However, interferometric techniques are fraught with associated problems such as a complicated setup involving mirrors and beam-splitters, the need for a matched objective in the reference arm, and the need for vibration isolation. The present work proposes a single-element cube beam-splitter (CBS) interferometer combined with a microscope objective (MO) for interference microscopy. Because of the monolithic nature of the interferometer, the system is almost insensitive to vibrations and relatively simple to align. It is shown that phase-shifting properties may also be introduced by suitable and proper use of polarizing devices. Initial results showing the quantitative three-dimensional phase profiles of simulated and actual biological specimens are presented.
Lee, Kathy Wai Yu; Porter, Christopher J H; Boyd, Ben J
2013-09-01
There is increasing attention in the literature towards understanding the behaviour of lipid-based drug formulations under digestion conditions using in vitro and in vivo methods. This necessitates a convenient method for quantitation of lipids and lipid digestion products. In this study, a simple and accessible method for the separation and quantitative determination of typical formulation and digested lipids using high performance liquid chromatography coupled to refractive index detection (HPLC-RI) is described. Long and medium chain lipids were separated and quantified in a biological matrix (gastrointestinal content) without derivatisation using HPLC-RI on C18 and C8 columns, respectively. The intra- and inter-assay accuracy was between 92% and 106%, and the assays were precise to within a coefficient of variation of less than 10% over the range of 0.1-2 mg/mL for both long and medium chain lipids. This method is also shown to be suitable for quantifying the lipolysis products collected from the gastrointestinal tract in the course of in vivo lipid digestion studies.
NASA Astrophysics Data System (ADS)
Tao, Wanghai; Wang, Quanjiu; Lin, Henry
2018-03-01
Soil and water loss from farmland causes land degradation and water pollution, thus continued efforts are needed to establish mathematical models for quantitative analysis of the relevant processes and mechanisms. In this study, an approximate analytical solution has been developed for an overland flow model and a sediment transport model, offering a simple and effective means to predict overland flow and erosion under natural rainfall conditions. In the overland flow model, the flow regime was considered to be transitional, with the value of the parameter β in the kinematic wave model taken to be approximately two. The change rate of unit discharge with distance was assumed to be constant and equal to the runoff rate at the outlet of the plane. The excess rainfall was considered to be constant under uniform rainfall conditions. The overland flow model developed can be further applied to natural rainfall conditions by treating excess rainfall intensity as constant over a small time interval. For the sediment model, recommended values of the runoff erosion calibration constant (cr) and the splash erosion calibration constant (cf) are given in this study to make the model easier to use; these recommended values are 0.15 and 0.12, respectively. Comparisons with observed results were carried out to validate the proposed analytical solution. The results showed that the approximate analytical solution developed in this paper closely matches the observed data, thus providing an alternative method of predicting runoff generation and sediment yield, and offering a more convenient method of analyzing the quantitative relationships between variables. Furthermore, the model developed in this study can be used as a theoretical basis for developing runoff and erosion control methods.
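The steady-state skeleton of the overland flow model above can be sketched in a few lines: with a kinematic-wave rating q = αh^β (β ≈ 2 for the transitional regime) and a constant excess rainfall rate, unit discharge grows linearly downslope and flow depth follows from the rating. All parameter values below are illustrative stand-ins, not the paper's calibrated numbers.

```python
# Hedged sketch of a steady-state kinematic-wave overland flow profile.
# alpha, beta, excess rainfall rate, and plane length are illustrative
# values, not parameters from the paper.

def overland_flow_profile(alpha=3.0, beta=2.0, excess_rain=1e-5, length=10.0, n=5):
    """Return (x, q, h) triples: q(x) = r*x and depth h = (q/alpha)**(1/beta)."""
    xs = [length * i / (n - 1) for i in range(n)]
    profile = []
    for x in xs:
        q = excess_rain * x              # unit discharge grows linearly downslope
        h = (q / alpha) ** (1.0 / beta)  # kinematic-wave rating q = alpha * h**beta
        profile.append((x, q, h))
    return profile
```

Under natural rainfall, the same relations would be applied piecewise, holding the excess rainfall intensity constant over each small time interval as the abstract describes.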
Quantification of myocardial perfusion based on signal intensity of flow sensitized MRI
NASA Astrophysics Data System (ADS)
Abeykoon, Sumeda B.
The quantitative assessment of perfusion is important for early recognition of a variety of heart diseases, determination of disease severity, and evaluation of treatment. In the conventional approach of measuring cardiac perfusion by arterial spin labeling, the relative difference in the apparent T1 relaxation times in response to selective and non-selective inversion of blood entering the region of interest is related to perfusion via a two-compartment tissue model. However, accurate determination of T1 in small animal hearts is difficult and prone to errors due to long scan times. The purpose of this study is to develop a fast, robust and simple method to quantitatively assess myocardial perfusion using arterial spin labeling. The proposed method is based on the signal intensities (SI) of inversion recovery slice-select, non-select and steady-state images. In particular, in this method data are acquired at a single inversion time and at short repetition times. This study began by investigating the accuracy of perfusion assessment using a two-compartment system. First, determination of perfusion by T1 and SI was applied to a simple, two-compartment phantom model. The mathematical model developed for full spin exchange (in-vivo experiments) by solving a modified Bloch equation was adapted to derive mathematical models (T1 and SI) for a phantom (zero spin exchange). The phantom results at different flow rates demonstrate the accuracy of the two-compartment model and of the SI and T1 methods; the SI method has less propagation error and less scan time. Next, twelve healthy C57BL/6 mice were scanned for quantitative perfusion assessment and three of them were repeatedly scanned at three different time points for a reproducibility test. The myocardial perfusion of healthy mice obtained by the SI method, 5.7 +/- 1.6 ml/g/min, was similar (p=0.38) to that obtained by the conventional T1 method, 5.6 +/- 2.3 ml/g/min.
The reproducibility of the SI method shows acceptable results: the maximum percentage deviation is about 5%. The SI method was then used, in comparison to a delayed enhancement method, to qualitatively and quantitatively assess perfusion deficits in an ischemia-reperfusion (IR) mouse model. The infarcted region of the perfusion map is comparable to the hyperintense region of the delayed enhancement image of the IR mouse. The SI method was also used to record a chronological comparison of perfusion in delta-sarcoglycan null (DSG) mice. Perfusion of DSG and wild-type (WT) mice at ages of 12 weeks and 32 weeks was compared and the percentage change of perfusion was estimated. The results show that perfusion changes considerably in DSG mice. Finally, the SI method was implemented on a 3 Tesla Philips scanner by modifying the data acquisition method. The perfusion obtained in this setting is consistent with literature values, but further adjustment of the pulse sequence and modification of the numerical solution are needed. The most important benefit of the SI method is that it reduces scan time by 30%-40% and lessens motion artifacts in images compared to the T1 method. This study demonstrates that the signal intensity-based ASL method is a robust alternative to the conventional T1 method.
Known-component 3D-2D registration for quality assurance of spine surgery pedicle screw placement
NASA Astrophysics Data System (ADS)
Uneri, A.; De Silva, T.; Stayman, J. W.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gokaslan, Z. L.; Wolinsky, J.-P.; Siewerdsen, J. H.
2015-10-01
A 3D-2D image registration method is presented that exploits knowledge of interventional devices (e.g. K-wires or spine screws—referred to as ‘known components’) to extend the functionality of intraoperative radiography/fluoroscopy by providing quantitative measurement and quality assurance (QA) of the surgical product. The known-component registration (KC-Reg) algorithm uses robust 3D-2D registration combined with 3D component models of surgical devices known to be present in intraoperative 2D radiographs. Component models were investigated that vary in fidelity from simple parametric models (e.g. approximation of a screw as a simple cylinder, referred to as ‘parametrically-known’ component [pKC] registration) to precise models based on device-specific CAD drawings (referred to as ‘exactly-known’ component [eKC] registration). 3D-2D registration from three intraoperative radiographs was solved using the covariance matrix adaptation evolution strategy (CMA-ES) to maximize image-gradient similarity, relating device placement relative to 3D preoperative CT of the patient. Spine phantom and cadaver studies were conducted to evaluate registration accuracy and demonstrate QA of the surgical product by verification of the type of devices delivered and conformance within the ‘acceptance window’ of the spinal pedicle. Pedicle screws were successfully registered to radiographs acquired from a mobile C-arm, providing TRE 1-4 mm and <5° using simple parametric (pKC) models, further improved to <1 mm and <1° using eKC registration. Using advanced pKC models, screws that did not match the device models specified in the surgical plan were detected with an accuracy of >99%. Visualization of registered devices relative to surgical planning and the pedicle acceptance window provided potentially valuable QA of the surgical product and reliable detection of pedicle screw breach. 
3D-2D registration combined with 3D models of known surgical devices offers a novel method for intraoperative QA. The method provides a near-real-time independent check against pedicle breach, facilitating revision within the same procedure if necessary and providing more rigorous verification of the surgical product.
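The core of the KC-Reg pipeline is a derivative-free search over device pose that maximizes an image similarity metric. The paper uses CMA-ES over a 6-DOF pose to maximize image-gradient similarity; the toy sketch below substitutes a much simpler (1+1) evolution strategy and a stand-in quadratic similarity function around a hypothetical "true" pose, purely to illustrate the optimize-pose-against-similarity loop. Nothing here reproduces the paper's actual objective or optimizer settings.

```python
import random

# Toy (1+1) evolution strategy illustrating the pose-search loop in
# known-component registration. The similarity function is a stand-in
# (a quadratic bowl around a hypothetical "true" pose), not the paper's
# image-gradient similarity, and the optimizer is a simplification of CMA-ES.

def similarity(pose, true_pose):
    return -sum((p - t) ** 2 for p, t in zip(pose, true_pose))

def register(true_pose, iters=2000, sigma=0.5, seed=0):
    rng = random.Random(seed)
    pose = [0.0] * len(true_pose)          # initial pose guess
    best = similarity(pose, true_pose)
    for _ in range(iters):
        cand = [p + rng.gauss(0, sigma) for p in pose]  # perturb current pose
        s = similarity(cand, true_pose)
        if s > best:                        # keep only improvements
            pose, best = cand, s
        else:
            sigma *= 0.999                  # shrink step size on failure
    return pose
```

In the real method, `similarity` would compare digitally reconstructed radiographs of the posed component model against the intraoperative radiographs.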
NASA Astrophysics Data System (ADS)
Habte, Frezghi; Natarajan, Arutselvan; Paik, David S.; Gambhir, Sanjiv S.
2014-03-01
Cerenkov luminescence imaging (CLI) is an emerging cost-effective modality that uses conventional small animal optical imaging systems and clinically available radionuclide probes for light emission. CLI has shown good correlation with PET for organs of high uptake such as kidney, spleen, thymus and subcutaneous tumors in mouse models. However, CLI has limitations for deep tissue quantitative imaging, since the blue-weighted spectral characteristics of Cerenkov radiation are highly attenuated by mammalian tissue. Large organs such as the liver have also shown higher signal due to the contribution of light emitted from a greater thickness of tissue. In this study, we developed a simple model that estimates the effective tissue attenuation coefficient in order to correct the CLI signal intensity with a priori estimated depth and thickness of specific organs. We used several thin slices of ham to build a phantom with realistic attenuation. We placed radionuclide sources inside the phantom at different tissue depths and imaged it using an IVIS Spectrum (Perkin-Elmer, Waltham, MA, USA) and Inveon microPET (Preclinical Solutions Siemens, Knoxville, TN). We also performed CLI and PET of mouse models and applied the proposed attenuation model to correct CLI measurements. Using calibration factors obtained from the phantom study that convert the corrected CLI measurements to %ID/g, we obtained an average difference of less than 10% for spleen and less than 35% for liver compared to conventional PET measurements. Hence, the proposed model is capable of correcting the CLI signal to provide measurements comparable with PET data.
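The depth correction implied by this abstract can be written compactly: if Cerenkov light from depth d is attenuated roughly as exp(-μ_eff · d), the measured surface signal can be corrected by inverting that factor using an effective attenuation coefficient estimated from phantom data. The μ_eff and depth values in the sketch below are illustrative, not the paper's fitted numbers.

```python
import math

# Hedged sketch of an exponential depth correction for Cerenkov
# luminescence imaging. mu_eff would be estimated from a layered phantom,
# as in the study; the numbers used here are illustrative only.

def correct_cli_signal(measured, depth_cm, mu_eff_per_cm):
    """Undo exponential tissue attenuation for a source at known depth."""
    return measured * math.exp(mu_eff_per_cm * depth_cm)
```

A calibration factor (from the phantom study) would then convert the corrected signal to %ID/g for comparison with PET.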
Stable isotope dimethyl labelling for quantitative proteomics and beyond
Hsu, Jue-Liang; Chen, Shu-Hui
2016-01-01
Stable-isotope reductive dimethylation, a cost-effective, simple, robust, reliable and easy-to-multiplex labelling method, is widely applied to quantitative proteomics using liquid chromatography-mass spectrometry. This review focuses on biological applications of stable-isotope dimethyl labelling for a large-scale comparative analysis of protein expression and post-translational modifications based on its unique properties of the labelling chemistry. Some other applications of the labelling method for sample preparation and mass spectrometry-based protein identification and characterization are also summarized. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644970
Sarkar, Aurijit; Anderson, Kelcey C; Kellogg, Glen E
2012-06-01
AcrA-AcrB-TolC efflux pumps extrude drugs of multiple classes from bacterial cells and are a leading cause of antimicrobial resistance. Thus, they are of paramount interest to those engaged in antibiotic discovery. Accurate prediction of antibiotic efflux has been elusive, despite several studies aimed at this purpose. Minimum inhibitory concentration (MIC) ratios of 32 β-lactam antibiotics were collected from the literature. Three-dimensional quantitative structure-activity relationship (3D-QSAR) analysis of the β-lactam antibiotic structures revealed seemingly predictive models (q² = 0.53), but the lack of a general superposition rule does not allow their use on antibiotics that lack the β-lactam moiety. Since MIC ratios must depend on interactions of antibiotics with lipid membranes and transport proteins during influx, capture and extrusion of antibiotics from the bacterial cell, descriptors representing these factors were calculated and used in building mathematical models that quantitatively classify antibiotics as having high/low efflux (>93% accuracy). Our models provide preliminary evidence that it is possible to predict the effects of antibiotic efflux if the passage of antibiotics into, and out of, bacterial cells is taken into account, something descriptor- and field-based QSAR models cannot do. While the paucity of data in the public domain remains the limiting factor in such studies, these models show significant improvements in predictions over simple LogP-based regression models and should pave the way toward further work in this field. This method should also be extensible to other pharmacologically and biologically relevant transport proteins. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
Accumulator and random-walk models of psychophysical discrimination: a counter-evaluation.
Vickers, D; Smith, P
1985-01-01
In a recent assessment of models of psychophysical discrimination, Heath criticises the accumulator model for its reliance on computer simulation and qualitative evidence, and contrasts it unfavourably with a modified random-walk model, which yields exact predictions, is susceptible to critical test, and is provided with simple parameter-estimation techniques. A counter-evaluation is presented, in which the approximations employed in the modified random-walk analysis are demonstrated to be seriously inaccurate, the resulting parameter estimates to be artefactually determined, and the proposed test not critical. It is pointed out that Heath's specific application of the model is not legitimate, his data treatment inappropriate, and his hypothesis concerning confidence inconsistent with experimental results. Evidence from adaptive performance changes is presented which shows that the necessary assumptions for quantitative analysis in terms of the modified random-walk model are not satisfied, and that the model can be reconciled with data at the qualitative level only by making it virtually indistinguishable from an accumulator process. A procedure for deriving exact predictions for an accumulator process is outlined.
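The accumulator process under discussion is easy to state operationally: evidence samples are tallied into one of two counters until either counter reaches its criterion, at which point the corresponding response is made and the step count gives the decision time. The sketch below is a generic textbook-style simulation of such a process, with illustrative parameter values; it is not the specific parameterization debated in the exchange.

```python
import random

# Minimal simulation of a two-counter accumulator process: each evidence
# sample increments one counter, and a response occurs when either counter
# reaches criterion. Criterion and sample probability are illustrative.

def accumulator_trial(p_positive=0.6, criterion=5, rng=None):
    """Return (response, decision_time_in_steps) for one simulated trial."""
    rng = rng or random.Random()
    a = b = steps = 0
    while a < criterion and b < criterion:
        steps += 1
        if rng.random() < p_positive:
            a += 1   # evidence for response A
        else:
            b += 1   # evidence for response B
    return ("A" if a >= criterion else "B"), steps
```

Running many such trials and tabulating the response proportions, decision times, and counter differences (as a confidence index) is the simulation route to predictions that the abstract contrasts with exact analytical derivations.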
Kann, Z R; Skinner, J L
2014-09-14
Non-polarizable models for ions and water quantitatively and qualitatively misrepresent the salt concentration dependence of water diffusion in electrolyte solutions. In particular, experiment shows that the water diffusion coefficient increases in the presence of salts of low charge density (e.g., CsI), whereas the results of simulations with non-polarizable models show a decrease of the water diffusion coefficient in all alkali halide solutions. We present a simple charge-scaling method based on the ratio of the solvent dielectric constants from simulation and experiment. Using an ion model that was developed independently of a solvent, i.e., in the crystalline solid, this method improves the water diffusion trends across a range of water models. When used with a good-quality water model, e.g., TIP4P/2005 or E3B, this method recovers the qualitative behaviour of the water diffusion trends. The model and method used were also shown to give good results for other structural and dynamic properties including solution density, radial distribution functions, and ion diffusion coefficients.
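One natural reading of the charge-scaling prescription in this abstract: ion charges are scaled so that the ion-ion Coulomb interaction screened by the model solvent matches that screened by real water, which gives a scaling factor of sqrt(ε_sim/ε_exp). This is our interpretation of "based on the ratio of the solvent dielectric constants from simulation and experiment"; consult the paper for the exact form. The dielectric constants used below are approximate literature values.

```python
import math

# Sketch of dielectric-ratio charge scaling (our reading of the abstract).
# eps_sim is the dielectric constant of the water model from simulation,
# eps_exp the experimental value for water (~78.4 at ambient conditions).

def scaled_charge(q, eps_sim, eps_exp=78.4):
    """Scale an ion charge so solvent-screened Coulomb interactions match experiment."""
    return q * math.sqrt(eps_sim / eps_exp)
```

For a model like TIP4P/2005 (ε ≈ 58), this yields a factor of roughly 0.86, in line with the commonly used ~0.85 scaled-charge ion models.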
Exponential quantum spreading in a class of kicked rotor systems near high-order resonances
NASA Astrophysics Data System (ADS)
Wang, Hailong; Wang, Jiao; Guarneri, Italo; Casati, Giulio; Gong, Jiangbin
2013-11-01
Long-lasting exponential quantum spreading was recently found in a simple but very rich dynamical model, namely, an on-resonance double-kicked rotor model [J. Wang, I. Guarneri, G. Casati, and J. B. Gong, Phys. Rev. Lett. 107, 234104 (2011)]. The underlying mechanism, unrelated to the chaotic motion in the classical limit but resting on quasi-integrable motion in a pseudoclassical limit, is identified for one special case. By presenting a detailed study of the same model, this work offers a framework to explain long-lasting exponential quantum spreading under much more general conditions. In particular, we adopt the so-called “spinor” representation to treat the kicked-rotor dynamics under high-order resonance conditions and then exploit the Born-Oppenheimer approximation to understand the dynamical evolution. It is found that the existence of a flat band (or an effectively flat band) is one important feature behind why and how the exponential dynamics emerges. It is also found that a quantitative prediction of the exponential spreading rate based on an interesting and simple pseudoclassical map may be inaccurate. In addition to general interests regarding the question of how exponential behavior in quantum systems may persist for a long time scale, our results should motivate further studies toward a better understanding of high-order resonance behavior in δ-kicked quantum systems.
Stability of procalcitonin at room temperature.
Milcent, Karen; Poulalhon, Claire; Fellous, Christelle Vauloup; Petit, François; Bouyer, Jean; Gajdos, Vincent
2014-01-01
The aim was to assess procalcitonin (PCT) stability after two days of storage at room temperature. Samples were collected from febrile children aged 7 to 92 days and were rapidly frozen after sampling. PCT levels were measured twice after thawing: immediately (named y) and 48 hours later after storage at room temperature (named x). PCT values were described with medians and interquartile ranges or by categorizing them into classes with thresholds of 0.25, 0.5, and 2 ng/mL. The relationship between x and y PCT levels was analyzed using fractional polynomials in order to predict the PCT value immediately after thawing (named y') from x. A significant decrease in PCT values was observed after 48 hours of storage at room temperature, both as medians (a 30% decrease, p < 0.001) and as a categorical variable (p < 0.001). The relationship between x and y can be accurately modeled with a simple linear model: y = 1.37x (R² = 0.99). The median of the predicted PCT values y' was quantitatively very close to the median of y, and the distributions of y and y' across categories were very similar and not statistically different. PCT levels noticeably decrease after 48 hours of storage at room temperature. It is possible to accurately predict effective PCT values from the values measured after 48 hours of storage at room temperature with a simple statistical model.
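The abstract's correction model is simple enough to write directly in code: the fresh-thaw value is predicted from the stored value with the reported fit y = 1.37x, and the result can then be binned with the abstract's clinical thresholds. The bin labels below are our own illustrative naming of the threshold classes.

```python
# Prediction of the fresh-thaw PCT value (y') from the value measured after
# 48 h at room temperature (x), using the fit reported in the abstract:
# y = 1.37 * x (R^2 = 0.99).

def predict_fresh_pct(stored_value):
    return 1.37 * stored_value

def classify_pct(value):
    """Bin a PCT value (ng/mL) using the thresholds in the abstract (labels ours)."""
    if value < 0.25:
        return "<0.25"
    if value < 0.5:
        return "0.25-0.5"
    if value < 2:
        return "0.5-2"
    return ">=2"
```

Note how the correction can move a sample across a clinical threshold: a stored reading of 0.2 ng/mL predicts a fresh value of about 0.27 ng/mL, one category higher.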
NASA Astrophysics Data System (ADS)
Urban, Nathan M.; Keller, Klaus
2010-10-01
How has the Atlantic Meridional Overturning Circulation (AMOC) varied over the past centuries and what is the risk of an anthropogenic AMOC collapse? We report probabilistic projections of the future climate which improve on previous AMOC projection studies by (i) greatly expanding the considered observational constraints and (ii) carefully sampling the tail areas of the parameter probability distribution function (pdf). We use a Bayesian inversion to constrain a simple model of the coupled climate, carbon cycle and AMOC systems using observations to derive multicentury hindcasts and projections. Our hindcasts show considerable skill in representing the observational constraints. We show that robust AMOC risk estimates can require carefully sampling the parameter pdfs. We find a low probability of experiencing an AMOC collapse within the 21st century for a business-as-usual emissions scenario. The probability of experiencing an AMOC collapse within two centuries is 1/10. The probability of crossing a forcing threshold and triggering a future AMOC collapse (by 2300) is approximately 1/30 in the 21st century and over 1/3 in the 22nd. Given the simplicity of the model structure and uncertainty in the forcing assumptions, our analysis should be considered a proof of concept and the quantitative conclusions subject to severe caveats.
Sun, Zhelin; Wang, Deli; Xiang, Jie
2014-11-25
Spontaneous attractions between free-standing nanostructures have often caused adhesion or stiction that affects a wide range of nanoscale devices, particularly nano/microelectromechanical systems. Previous understandings of the attraction mechanisms have included capillary force, van der Waals/Casimir forces, and surface polar charges. However, none of these mechanisms universally applies to simple semiconductor structures such as silicon nanowire arrays that often exhibit bunching or adhesions. Here we propose a simple capacitive force model to quantitatively study the universal spontaneous attraction that often causes stiction among semiconductor or metallic nanostructures such as vertical nanowire arrays with inevitably nonuniform size variations due to fabrication. When nanostructures are uniform in size, they share the same substrate potential. The presence of slight size differences will break the symmetry in the capacitive network formed between the nanowires, substrate, and their environment, giving rise to electrostatic attraction forces due to the relative potential difference between neighboring wires. Our model is experimentally verified using arrays of vertical silicon nanowire pairs with varied spacing, diameter, and size differences. Threshold nanowire spacing, diameter, or size difference between the nearest neighbors has been identified beyond which the nanowires start to exhibit spontaneous attraction that leads to bridging when electrostatic forces overcome elastic restoration forces. This work illustrates a universal understanding of spontaneous attraction that will impact the design, fabrication, and reliable operation of nanoscale devices and systems.
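The bridging criterion described above, electrostatic attraction exceeding elastic restoration, can be illustrated with a deliberately crude order-of-magnitude check: a slight potential difference dV between neighboring wires produces a capacitive attraction (here a parallel-plate estimate, not the paper's full capacitive-network model), and bridging occurs when it beats the wire's elastic restoring force. All geometry, dV, and stiffness values are illustrative, not from the paper.

```python
# Toy parallel-plate estimate of the capacitive attraction between two
# neighboring nanowires, and the resulting bridging criterion. This is a
# simplification of the paper's capacitive-network model; all numbers fed
# to these functions in practice would be illustrative.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitive_force(area_m2, gap_m, dV):
    """Parallel-plate attraction F = eps0 * A * dV^2 / (2 * gap^2)."""
    return EPS0 * area_m2 * dV**2 / (2 * gap_m**2)

def will_bridge(area_m2, gap_m, dV, stiffness_N_per_m, deflection_m):
    """Bridging occurs when electrostatic pull exceeds the elastic restoring force."""
    return capacitive_force(area_m2, gap_m, dV) > stiffness_N_per_m * deflection_m
```

The quadratic dependence on dV is the key qualitative point: even small fabrication-induced size differences, by breaking the symmetry of the capacitive network, generate a potential difference and hence a finite attraction.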
Simultaneous 19F-1H medium resolution NMR spectroscopy for online reaction monitoring
NASA Astrophysics Data System (ADS)
Zientek, Nicolai; Laurain, Clément; Meyer, Klas; Kraume, Matthias; Guthausen, Gisela; Maiwald, Michael
2014-12-01
Medium resolution nuclear magnetic resonance (MR-NMR) spectroscopy is currently a fast developing field, which has an enormous potential to become an important analytical tool for reaction monitoring, in hyphenated techniques, and for systematic investigations of complex mixtures. The recent developments of innovative MR-NMR spectrometers are therefore remarkable due to their possible applications in quality control, education, and process monitoring. MR-NMR spectroscopy can beneficially be applied for fast, non-invasive, and volume integrating analyses under rough environmental conditions. Within this study, a simple 1/16″ fluorinated ethylene propylene (FEP) tube with an ID of 0.04″ (1.02 mm) was used as a flow cell in combination with a 5 mm glass Dewar tube inserted into a benchtop MR-NMR spectrometer with a 1H Larmor frequency of 43.32 MHz and 40.68 MHz for 19F. For the first time, quasi-simultaneous proton and fluorine NMR spectra were recorded with a series of alternating 19F and 1H single scan spectra along the reaction time coordinate of a homogeneously catalysed esterification model reaction containing fluorinated compounds. The results were compared to quantitative NMR spectra from a hyphenated 500 MHz online NMR instrument for validation. Automation of handling, pre-processing, and analysis of NMR data becomes increasingly important for process monitoring applications of online NMR spectroscopy and for its technical and practical acceptance. Thus, NMR spectra were automatically baseline corrected and phased using the minimum entropy method. Data analysis schemes were designed such that they are based on simple direct integration or first principle line fitting, with the aim that the analysis directly revealed molar concentrations from the spectra. 
Finally, the performance of the 1/16″ FEP tube set-up with an ID of 1.02 mm was characterised with regard to the limit of quantification (LOQ (1H) = 0.335 mol L-1 and LOQ (19F) = 0.130 mol L-1 for trifluoroethanol in D2O, single scan) and maximum quantitative flow rates of up to 0.3 mL min-1. A series of single-scan 19F and 1H NMR spectra acquired with this simple set-up thus already provides a valuable basis for quantitative reaction monitoring.
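The minimum-entropy phasing mentioned above can be sketched in a simplified zero-order-only form: try candidate phase angles and keep the one whose real spectrum has minimum Shannon entropy. Full implementations also fit a first-order (frequency-dependent) phase term and typically compute the entropy of the derivative spectrum; this stripped-down version only illustrates the principle.

```python
import cmath, math

# Simplified zero-order-only sketch of minimum-entropy automatic phase
# correction for an NMR spectrum (full methods also fit a first-order term
# and often use the derivative of the real part).

def entropy(values):
    """Shannon entropy of the normalized magnitudes of a real spectrum."""
    mags = [abs(v) for v in values]
    total = sum(mags) or 1.0
    probs = [m / total for m in mags if m > 0]
    return -sum(p * math.log(p) for p in probs)

def phase_correct(spectrum, n_angles=360):
    """Apply the zero-order phase that minimizes the entropy of the real part."""
    best_phi, best_h = 0.0, float("inf")
    for k in range(n_angles):
        phi = 2 * math.pi * k / n_angles
        real = [(v * cmath.exp(1j * phi)).real for v in spectrum]
        h = entropy(real)
        if h < best_h:
            best_phi, best_h = phi, h
    return [v * cmath.exp(1j * best_phi) for v in spectrum]
```

The intuition: a correctly phased (absorptive) line concentrates intensity into a narrow peak, which has lower entropy than the broad two-lobed dispersive shape produced by a phase error.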
Turning around Newton's Second Law
ERIC Educational Resources Information Center
Goff, John Eric
2004-01-01
Conceptual and quantitative difficulties surrounding Newton's second law often arise among introductory physics students. Simply turning around how one expresses Newton's second law may assist students in their understanding of a deceptively simple-looking equation.
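Our reading of the "turning around" the abstract alludes to: making acceleration the subject of the equation, so that acceleration is seen as the system's response to the net force rather than force appearing to be caused by acceleration.

```latex
% Conventional statement vs. the "turned around" form:
\sum \vec{F} = m\vec{a}
\qquad\Longleftrightarrow\qquad
\vec{a} = \frac{\sum \vec{F}}{m}
```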
Automated quantitative cytological analysis using portable microfluidic microscopy.
Jagannadh, Veerendra Kalyan; Murthy, Rashmi Sreeramachandra; Srinivasan, Rajesh; Gorthi, Sai Siva
2016-06-01
In this article, a portable microfluidic microscopy based approach for automated cytological investigations is presented. Inexpensive optical and electronic components have been used to construct a simple microfluidic microscopy system. In contrast to the conventional slide-based methods, the presented method employs microfluidics to enable automated sample handling and image acquisition. The approach involves the use of simple in-suspension staining and automated image acquisition to enable quantitative cytological analysis of samples. The applicability of the presented approach to research in cellular biology is shown by performing an automated cell viability assessment on a given population of yeast cells. Further, the relevance of the presented approach to clinical diagnosis and prognosis has been demonstrated by performing detection and differential assessment of malaria infection in a given sample. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Quantitative characterization of edge enhancement in phase contrast x-ray imaging.
Monnin, P; Bulling, S; Hoszowska, J; Valley, J F; Meuli, R; Verdun, F R
2004-06-01
The aim of this study was to model the edge enhancement effect in in-line holography phase contrast imaging. A simple analytical approach was used to quantify refraction and interference contrasts in terms of beam energy and imaging geometry. The model was applied to predict the peak intensity and frequency of the edge enhancement for images of cylindrical fibers. The calculations were compared with measurements, and the relationship between the spatial resolution of the detector and the amplitude of the phase contrast signal was investigated. Calculations using the analytical model were in good agreement with experimental results for nylon, aluminum and copper wires of 50 to 240 μm diameter, and with numerical simulations based on Fresnel-Kirchhoff theory. A relationship between the defocusing distance and the pixel size of the image detector was established. This analytical model is a useful tool for optimizing imaging parameters in phase contrast in-line holography, including defocusing distance, detector resolution and beam energy.
System cost/performance analysis (study 2.3). Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Kazangey, T.
1973-01-01
The relationships between performance, safety, cost, and schedule parameters were identified and quantified in support of an overall effort to generate program models and methodology that provide insight into a total space vehicle program. A specific space vehicle system, the attitude control system (ACS), was used, and a modeling methodology was selected that develops a consistent set of quantitative relationships among performance, safety, cost, and schedule, based on the characteristics of the components utilized in candidate mechanisms. These descriptive equations were developed for a three-axis, earth-pointing, mass expulsion ACS. A data base describing typical candidate ACS components was implemented, along with a computer program to perform sample calculations. This approach, implemented on a computer, is capable of determining the effect of a change in functional requirements to the ACS mechanization and the resulting cost and schedule. By a simple extension of this modeling methodology to the other systems in a space vehicle, a complete space vehicle model can be developed. Study results and recommendations are presented.
NASA Astrophysics Data System (ADS)
Oses, Corey; Isayev, Olexandr; Toher, Cormac; Curtarolo, Stefano; Tropsha, Alexander
Historically, materials discovery has been driven by a laborious trial-and-error process. The growth of materials databases and emerging informatics approaches finally offer the opportunity to transform this practice into data- and knowledge-driven rational design, accelerating the discovery of novel materials exhibiting desired properties. By using data from the AFLOW repository for high-throughput ab-initio calculations, we have generated Quantitative Materials Structure-Property Relationship (QMSPR) models to predict critical materials properties, including the metal/insulator classification, band gap energy, and bulk modulus. The prediction accuracy obtained with these QMSPR models approaches that of the training data for virtually any stoichiometric inorganic crystalline material. We attribute the success and universality of these models to the construction of new materials descriptors, referred to as universal Property-Labeled Material Fragments (PLMF). This representation affords straightforward model interpretation in terms of simple heuristic design rules that could guide rational materials design. This proof-of-concept study demonstrates the power of materials informatics to dramatically accelerate the search for new materials.
Quantitative Modeling of Cerenkov Light Production Efficiency from Medical Radionuclides
Beattie, Bradley J.; Thorek, Daniel L. J.; Schmidtlein, Charles R.; Pentlow, Keith S.; Humm, John L.; Hielscher, Andreas H.
2012-01-01
There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use. PMID:22363636
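The Frank-Tamm building block underlying such production-efficiency models has a compact textbook form: the number of Cerenkov photons per unit path length between two wavelengths, for a particle of speed β (as a fraction of c) in a medium of refractive index n, taking n as dispersion-free. Real models, like the paper's, additionally integrate over the β-particle energy spectrum and the wavelength dependence of n; the sketch below is only the constant-n single-speed form.

```python
import math

ALPHA = 1 / 137.036  # fine-structure constant

# Textbook constant-index Frank-Tamm photon yield: photons emitted per
# metre of path between wavelengths lam1 < lam2 (in metres), for particle
# speed beta in a medium of refractive index n.

def cerenkov_photons_per_m(beta, n, lam1, lam2):
    if beta * n <= 1:
        return 0.0  # below the Cerenkov threshold: no emission
    return 2 * math.pi * ALPHA * (1 / lam1 - 1 / lam2) * (1 - 1 / (beta * n) ** 2)
```

For a fast electron (β ≈ 1) in water (n ≈ 1.33) over the 400-700 nm band this gives on the order of 2 × 10⁴ photons per metre, i.e. a few hundred per centimetre, which is why Cerenkov signals are dim and why production-efficiency modeling matters for feasibility estimates.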
Noise from Supersonic Coaxial Jets. Part 1; Mean Flow Predictions
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Morris, Philip J.
1997-01-01
Recent theories for supersonic jet noise have used an instability wave noise generation model to predict radiated noise. This model requires a known mean flow that has typically been described by simple analytic functions for single jet mean flows. The mean flow of supersonic coaxial jets is not described easily in terms of analytic functions. To provide these profiles at all axial locations, a numerical scheme is developed to calculate the mean flow properties of a coaxial jet. The Reynolds-averaged, compressible, parabolic boundary layer equations are solved using a mixing length turbulence model. Empirical correlations are developed to account for the effects of velocity and temperature ratios and Mach number on the shear layer spreading. Both normal velocity profile and inverted velocity profile coaxial jets are considered. The mixing length model is modified in each case to obtain reasonable results when the two stream jet merges into a single fully developed jet. The mean flow calculations show both good qualitative and quantitative agreement with measurements in single and coaxial jet flows.
Minimal model for a hydrodynamic fingering instability in microroller suspensions
NASA Astrophysics Data System (ADS)
Delmotte, Blaise; Donev, Aleksandar; Driscoll, Michelle; Chaikin, Paul
2017-11-01
We derive a minimal continuum model to investigate the hydrodynamic mechanism behind the fingering instability recently discovered in a suspension of microrollers near a floor [M. Driscoll et al., Nat. Phys. 13, 375 (2017), 10.1038/nphys3970]. Our model, consisting of two continuous lines of rotlets, exhibits a linear instability driven only by hydrodynamic interactions and reproduces the length-scale selection observed in large-scale particle simulations and in experiments. By adjusting only one parameter, the distance between the two lines, our dispersion relation exhibits quantitative agreement with the simulations and qualitative agreement with experimental measurements. Our linear stability analysis indicates that this instability is caused by the combination of the advective and transverse flows generated by the microrollers near a no-slip surface. Our simple model offers an interesting formalism to characterize other hydrodynamic instabilities that have not been well understood, such as size-scale selection in suspensions of particles sedimenting adjacent to a wall, or the recently observed formation of traveling phonons in systems of confined driven particles.
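The building block of such a rotlet-line model is the flow field of a single rotlet. The sketch below evaluates the free-space rotlet field u(r) = (γ × r)/(8πμ|r|³); note this is an illustrative simplification that omits the no-slip-wall image system central to the actual analysis.

```python
import numpy as np

def rotlet_velocity(r, torque, mu=1.0):
    """Free-space rotlet flow: u(r) = (torque x r) / (8 pi mu |r|^3).

    r: 3-vector from the rotlet to the field point; torque: rotlet
    strength (torque vector). Omits wall images (illustration only).
    """
    r = np.asarray(r, dtype=float)
    dist = np.linalg.norm(r)
    return np.cross(torque, r) / (8.0 * np.pi * mu * dist ** 3)

gamma = np.array([0.0, 1.0, 0.0])        # rotation axis along y
u1 = rotlet_velocity([1.0, 0.0, 0.0], gamma)
u2 = rotlet_velocity([2.0, 0.0, 0.0], gamma)

# The rotlet field decays as 1/r^2: doubling the distance quarters |u|
print(np.linalg.norm(u1) / np.linalg.norm(u2))  # ~4.0
```

Summing this kernel (plus its wall images) over two continuous lines of rollers is what produces the advective and transverse flows whose interplay drives the instability.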
Modelling the Size Effects on the Mechanical Properties of Micro/Nano Structures.
Abazari, Amir Musa; Safavi, Seyed Mohsen; Rezazadeh, Ghader; Villanueva, Luis Guillermo
2015-11-11
Experiments on micro- and nano-mechanical systems (M/NEMS) have shown that their behavior under bending loads departs in many cases from the classical predictions using Euler-Bernoulli theory and Hooke's law. This anomalous response has usually been seen as a dependence of the material properties on the size of the structure, in particular its thickness. A theoretical model that allows for quantitative understanding and prediction of this size effect is important for the design of M/NEMS. In this paper, we summarize and analyze the five theories that can be found in the literature: Grain Boundary Theory (GBT), Surface Stress Theory (SST), Residual Stress Theory (RST), Couple Stress Theory (CST) and Surface Elasticity Theory (SET). By comparing these theories with experimental data, we propose a simplified model combining CST and SET that properly fits all considered cases, therefore delivering a simple two-parameter model that can be used to predict the mechanical properties at the nanoscale. PMID:26569256
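A two-parameter size-effect model of this kind can be illustrated with a short fitting sketch. The functional form below, E_eff(h) = E_bulk (1 + a/h² + b/h), is an assumed combination of a couple-stress-like 1/h² term and a surface-elasticity-like 1/h term; the coefficients, the synthetic data, and the form itself are illustrative, not the authors' fitted model.

```python
import numpy as np

def effective_modulus(h, E_bulk, a, b):
    """Assumed two-parameter size-effect model:

        E_eff(h) = E_bulk * (1 + a/h^2 + b/h)

    a ~ couple-stress (CST) contribution, b ~ surface-elasticity (SET).
    """
    return E_bulk * (1.0 + a / h**2 + b / h)

# Synthetic "measurements": thickness in nm, modulus in GPa
E_bulk, a_true, b_true = 160.0, 400.0, 15.0   # illustrative values
h = np.linspace(20.0, 200.0, 40)              # nm
E_meas = effective_modulus(h, E_bulk, a_true, b_true)

# The model is linear in (a, b): (E_meas/E_bulk - 1) = a/h^2 + b/h,
# so the two parameters follow from ordinary least squares.
A = np.column_stack([1.0 / h**2, 1.0 / h])
coef, *_ = np.linalg.lstsq(A, E_meas / E_bulk - 1.0, rcond=None)
print(coef)  # recovers [a_true, b_true] on noise-free data
```

Because both size-effect terms are linear in the unknown coefficients, fitting such a model to bending data reduces to a two-column least-squares problem, which is what makes a two-parameter description practical for design work.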