Simple Spreadsheet Models For Interpretation Of Fractured Media Tracer Tests
This paper discusses an analysis of a gas-phase partitioning tracer test conducted through fractured media. The analysis matched eight simple mathematical models to the experimental data to determine transport parameters. All of the models tested: two porous...
Conifer ovulate cones accumulate pollen principally by simple impaction.
Cresswell, James E; Henning, Kevin; Pennel, Christophe; Lahoubi, Mohamed; Patrick, Michael A; Young, Phillipe G; Tabor, Gavin R
2007-11-13
In many pine species (Family Pinaceae), ovulate cones structurally resemble a turbine, which has been widely interpreted as an adaptation for improving pollination by producing complex aerodynamic effects. We tested the turbine interpretation by quantifying patterns of pollen accumulation on ovulate cones in a wind tunnel and by using simulation models based on computational fluid dynamics. We used computer-aided design and computed tomography to create computational fluid dynamics model cones. We studied three species: Pinus radiata, Pinus sylvestris, and Cedrus libani. Irrespective of the approach or species studied, we found no evidence that turbine-like aerodynamics made a significant contribution to pollen accumulation, which instead occurred primarily by simple impaction. Consequently, we suggest alternative adaptive interpretations for the structure of ovulate cones.
Beyond harmonic sounds in a simple model for birdsong production.
Amador, Ana; Mindlin, Gabriel B
2008-12-01
In this work we present an analysis of the dynamics displayed by a simple bidimensional model of labial oscillations during birdsong production. We show that the same model capable of generating tonal sounds can present, for a wide range of parameters, solutions which are spectrally rich. The role of physiologically sensible parameters is discussed in each oscillatory regime, allowing us to interpret previously reported data.
Interpretation of commonly used statistical regression models.
Kasza, Jessica; Wolfe, Rory
2014-01-01
A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
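The coefficient interpretation this abstract focuses on can be sketched in a few lines. This is an illustrative example only, not material from the paper: the dataset, variable names, and values below are invented, and the closed-form slope/intercept is the standard ordinary-least-squares result for simple linear regression.

```python
def simple_linear_regression(x, y):
    """Return (intercept, slope) from ordinary least squares."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Invented example: lung function (FEV1, litres) against age (years).
age = [20, 30, 40, 50, 60]
fev1 = [4.0, 3.8, 3.5, 3.3, 3.0]
b0, b1 = simple_linear_regression(age, fev1)
# Interpretation: each additional year of age is associated with a change
# of b1 litres in expected FEV1; there are no other covariates to "hold
# fixed" in a simple regression, which is what makes it easy to read.
print(round(b1, 3))  # → -0.025
```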
NASA Astrophysics Data System (ADS)
Thurmond, John B.; Drzewiecki, Peter A.; Xu, Xueming
2005-08-01
Geological data collected from outcrop are inherently three-dimensional (3D) and span a variety of scales, from the megascopic to the microscopic. This presents challenges in both interpreting and communicating observations. The Virtual Reality Modeling Language provides an easy way for geoscientists to construct complex visualizations that can be viewed with free software. Field data in tabular form can be used to generate hierarchical multi-scale visualizations of outcrops, which can convey the complex relationships between a variety of data types simultaneously. An example from carbonate mud-mounds in southeastern New Mexico illustrates the embedding of three orders of magnitude of observation into a single visualization, for the purpose of interpreting depositional facies relationships in three dimensions. This type of raw data visualization can be built without software tools, yet is incredibly useful for interpreting and communicating data. Even simple visualizations can aid in the interpretation of complex 3D relationships that are frequently encountered in the geosciences.
Interpretable Deep Models for ICU Outcome Prediction
Che, Zhengping; Purushotham, Sanjay; Khemani, Robinder; Liu, Yan
2016-01-01
Exponential surge in health care data, such as longitudinal data from electronic health records (EHR), sensor data from intensive care units (ICU), etc., is providing new opportunities to discover meaningful data-driven characteristics and patterns of diseases. Recently, deep learning models have been employed for many computational phenotyping and healthcare prediction tasks to achieve state-of-the-art performance. However, deep models lack interpretability, which is crucial for wide adoption in medical research and clinical decision-making. In this paper, we introduce a simple yet powerful knowledge-distillation approach called interpretable mimic learning, which uses gradient boosting trees to learn interpretable models while achieving prediction performance as strong as that of deep learning models. Experiment results on a pediatric ICU dataset for acute lung injury (ALI) show that our proposed method not only outperforms state-of-the-art approaches for mortality and ventilator-free days prediction tasks but can also provide interpretable models to clinicians. PMID:28269832
Slantwise convection on fluid planets: Interpreting convective adjustment from Juno observations
NASA Astrophysics Data System (ADS)
O'Neill, M. E.; Kaspi, Y.; Galanti, E.
2016-12-01
NASA's Juno mission provides unprecedented microwave measurements that pierce Jupiter's weather layer and image the transition to an adiabatic fluid below. This region is expected to be highly turbulent and complex, but to date most models use the moist-to-dry transition as a simple boundary. We present simple theoretical arguments and GCM results to argue that columnar convection is important even in the relatively thin boundary layer, particularly in the equatorial region. We first demonstrate how surface cooling can lead to very horizontal parcel paths, using a simple parcel model. Next we show the impact of this horizontal motion on angular momentum flux in a high-resolution Jovian model. The GCM is a state-of-the-art modification of the MITgcm, with deep geometry, compressibility and interactive two-stream radiation. We show that slantwise convection primarily mixes fluid along columnar surfaces of angular momentum, and discuss the impacts this should have on lapse rate interpretation of both the Galileo probe sounding and the Juno microwave observations.
Microarray-based cancer prediction using soft computing approach.
Wang, Xiaosheng; Gotoh, Osamu
2009-05-26
One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models based on single genes or gene pairs, using a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Meanwhile, our models are interpretable, for they are based on decision rules. Our results demonstrate that very simple models may perform well on molecular prediction of cancer, and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.
Inattentive Drivers: Making the Solution Method the Model
ERIC Educational Resources Information Center
McCartney, Mark
2003-01-01
A simple car following model based on the solution of coupled ordinary differential equations is considered. The model is solved using Euler's method and this method of solution is itself interpreted as a mathematical model for car following. Examples of possible classroom use are given. (Contains 6 figures.)
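The idea in this abstract, that one Euler step *is* the driver's update rule, can be sketched as follows. This is a hedged illustration, not McCartney's exact formulation: the follow-the-leader ODE dv/dt = k(v_lead − v) and all parameter values are our own assumptions.

```python
def euler_following(v0, v_lead, k, dt, steps):
    """Advance a follower's speed toward the leader's speed with Euler's method."""
    v = v0
    history = [v]
    for _ in range(steps):
        # One Euler step doubles as a model of a driver who re-checks the
        # leader's speed only every dt seconds and adjusts proportionally.
        v = v + dt * k * (v_lead - v)
        history.append(v)
    return history

speeds = euler_following(v0=10.0, v_lead=20.0, k=0.5, dt=0.1, steps=100)
# The follower's speed relaxes monotonically toward the leader's 20 m/s;
# a large dt mimics a driver with a slow reaction cycle.
```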
A gentle introduction to Rasch measurement models for metrologists
NASA Astrophysics Data System (ADS)
Mari, Luca; Wilson, Mark
2013-09-01
The talk introduces the basics of Rasch models by systematically interpreting them in the conceptual and lexical framework of the International Vocabulary of Metrology, third edition (VIM3). An admittedly simple example of physical measurement highlights the analogies between physical transducers and tests, both of which can be understood as measuring instruments from the perspective of Rasch models and psychometrics in general. From the talk, natural scientists and engineers might learn something of Rasch models, as a specifically relevant case of social measurement, and social scientists might re-interpret something of their knowledge of measurement in the light of current physical measurement models.
ERIC Educational Resources Information Center
McCartney, Mark; Walsh, Ian
2006-01-01
A simple model for how traffic moves around a closed loop of road is introduced. The consequent analysis of the model can be used as an application of techniques taught at first year undergraduate level, and as a motivator to encourage students to think critically about model formulation and interpretation.
Estimation of a Nonlinear Intervention Phase Trajectory for Multiple-Baseline Design Data
ERIC Educational Resources Information Center
Hembry, Ian; Bunuan, Rommel; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim
2015-01-01
A multilevel logistic model for estimating a nonlinear trajectory in a multiple-baseline design is introduced. The model is applied to data from a real multiple-baseline design study to demonstrate interpretation of relevant parameters. A simple change-in-levels (Δ"Levels") model and a model involving a quadratic function…
Models for Models: An Introduction to Polymer Models Employing Simple Analogies
NASA Astrophysics Data System (ADS)
Tarazona, M. Pilar; Saiz, Enrique
1998-11-01
An introduction to the most common models used in the calculations of conformational properties of polymers, ranging from the freely jointed chain approximation to Monte Carlo or molecular dynamics methods, is presented. Mathematical formalism is avoided and simple analogies, such as human chains, gases, opinion polls, or marketing strategies, are used to explain the different models presented. A second goal of the paper is to teach students how models required for the interpretation of a system can be elaborated, starting with the simplest model and introducing successive improvements until the refinements become so sophisticated that it is much better to use an alternative approach.
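The simplest model in the progression this abstract describes, the freely jointed chain, is easy to demonstrate numerically. The sketch below is our own illustration (chain length, bond length, and trial count are arbitrary choices, not from the paper): N bonds of length b pointing in independent random 3-D directions give a mean-square end-to-end distance of ⟨R²⟩ = N·b².

```python
import math
import random

def end_to_end_sq(n_bonds, b, rng):
    """Squared end-to-end distance of one freely jointed chain realization."""
    x = y = z = 0.0
    for _ in range(n_bonds):
        # Uniform random direction on the unit sphere.
        phi = rng.uniform(0.0, 2.0 * math.pi)
        cos_t = rng.uniform(-1.0, 1.0)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        x += b * sin_t * math.cos(phi)
        y += b * sin_t * math.sin(phi)
        z += b * cos_t
    return x * x + y * y + z * z

rng = random.Random(42)
n, b, trials = 100, 1.0, 2000
mean_r2 = sum(end_to_end_sq(n, b, rng) for _ in range(trials)) / trials
# Theory: <R^2> = n * b**2 = 100; the Monte Carlo mean should be close.
```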
NASA Astrophysics Data System (ADS)
Sherrington, David; Davison, Lexie; Buhot, Arnaud; Garrahan, Juan P.
2002-02-01
We report a study of a series of simple model systems with only non-interacting Hamiltonians, and hence simple equilibrium thermodynamics, but with constrained dynamics of a type initially suggested by foams and idealized covalent glasses. We demonstrate that macroscopic dynamical features characteristic of real and more complex model glasses, such as two-time decays in energy and auto-correlation functions, arise from the dynamics, and we explain them qualitatively and quantitatively in terms of annihilation-diffusion concepts and theory. The comparison is with strong glasses. We also consider fluctuation-dissipation relations and demonstrate subtleties of interpretation. We find no breakdown of the fluctuation-dissipation theorem (FDT) when the correct normalization is chosen.
A simple inertial model for Neptune's zonal circulation
NASA Technical Reports Server (NTRS)
Allison, Michael; Lumetta, James T.
1990-01-01
Voyager imaging observations of zonal cloud-tracked winds on Neptune revealed a strongly subrotational equatorial jet with a speed approaching 500 m/s and generally decreasing retrograde motion toward the poles. The wind data are interpreted with a speculative but revealingly simple model based on steady gradient flow balance and an assumed global homogenization of potential vorticity for shallow layer motion. The prescribed model flow profile relates the equatorial velocity to the mid-latitude shear, in reasonable agreement with the available data, and implies a global horizontal deformation scale L(D) of about 3000 km.
Constructing a simple parametric model of shoulder from medical images
NASA Astrophysics Data System (ADS)
Atmani, H.; Fofi, D.; Merienne, F.; Trouilloud, P.
2006-02-01
The modelling of the shoulder joint is an important step to set a Computer-Aided Surgery System for shoulder prosthesis placement. Our approach mainly concerns the bones structures of the scapulo-humeral joint. Our goal is to develop a tool that allows the surgeon to extract morphological data from medical images in order to interpret the biomechanical behaviour of a prosthesised shoulder for preoperative and peroperative virtual surgery. To provide a light and easy-handling representation of the shoulder, a geometrical model composed of quadrics, planes and other simple forms is proposed.
Two simple models of classical heat pumps.
Marathe, Rahul; Jayannavar, A M; Dhar, Abhishek
2007-03-01
Motivated by recent studies of models of particle and heat quantum pumps, we study similar simple classical models and examine the possibility of heat pumping. Unlike many of the usual ratchet models of molecular engines, the models we study do not have particle transport. We consider a two-spin system and a coupled oscillator system which exchange heat with multiple heat reservoirs and which are acted upon by periodic forces. The simplicity of our models allows accurate numerical and exact solutions and unambiguous interpretation of results. We demonstrate that while both our models seem to be built on similar principles, one is able to function as a heat pump (or engine) while the other is not.
Essential core of the Hawking–Ellis types
NASA Astrophysics Data System (ADS)
Martín-Moruno, Prado; Visser, Matt
2018-06-01
The Hawking–Ellis (Segre–Plebański) classification of possible stress–energy tensors is an essential tool in analyzing the implications of the Einstein field equations in a more-or-less model-independent manner. In the current article the basic idea is to simplify the Hawking–Ellis type I, II, III, and IV classification by isolating the ‘essential core’ of the type II, type III, and type IV stress–energy tensors; this being done by subtracting (special cases of) type I to simplify the (Lorentz invariant) eigenvalue structure as much as possible without disturbing the eigenvector structure. We will denote these ‘simplified cores’ type II0, type III0, and type IV0. These ‘simplified cores’ have very nice and simple algebraic properties. Furthermore, types I and II0 have very simple classical interpretations, while type IV0 is known to arise semi-classically (in renormalized expectation values of standard stress–energy tensors). In contrast type III0 stands out in that it has neither a simple classical interpretation, nor even a simple semi-classical interpretation. We will also consider the robustness of this classification considering the stability of the different Hawking–Ellis types under perturbations. We argue that types II and III are definitively unstable, whereas types I and IV are stable.
Das, Rudra Narayan; Roy, Kunal; Popelier, Paul L A
2015-11-01
The present study explores the chemical attributes of diverse ionic liquids responsible for their cytotoxicity in a rat leukemia cell line (IPC-81) by developing predictive classification as well as regression-based mathematical models. Simple and interpretable descriptors derived from a two-dimensional representation of the chemical structures along with quantum topological molecular similarity indices have been used for model development, employing unambiguous modeling strategies that strictly obey the guidelines of the Organization for Economic Co-operation and Development (OECD) for quantitative structure-activity relationship (QSAR) analysis. The structure-toxicity relationships that emerged from both classification and regression-based models were in accordance with the findings of some previous studies. The models suggested that the cytotoxicity of ionic liquids is dependent on the cationic surfactant action, long alkyl side chains, cationic lipophilicity as well as aromaticity, the presence of a dialkylamino substituent at the 4-position of the pyridinium nucleus and a bulky anionic moiety. The models have been transparently presented in the form of equations, thus allowing their easy transferability in accordance with the OECD guidelines. The models have also been subjected to rigorous validation tests proving their predictive potential and can hence be used for designing novel and "greener" ionic liquids. The major strength of the present study lies in the use of a diverse and large dataset, use of simple reproducible descriptors and compliance with the OECD norms. Copyright © 2015 Elsevier Ltd. All rights reserved.
Equivalent circuit models for interpreting impedance perturbation spectroscopy data
NASA Astrophysics Data System (ADS)
Smith, R. Lowell
2004-07-01
As in-situ structural integrity monitoring disciplines mature, there is a growing need to process sensor/actuator data efficiently in real time. Although smaller, faster embedded processors will contribute to this, it is also important to develop straightforward, robust methods to reduce the overall computational burden for practical applications of interest. This paper addresses the use of equivalent circuit modeling techniques for inferring structure attributes monitored using impedance perturbation spectroscopy. In pioneering work about ten years ago significant progress was associated with the development of simple impedance models derived from the piezoelectric equations. Using mathematical modeling tools currently available from research in ultrasonics and impedance spectroscopy is expected to provide additional synergistic benefits. For purposes of structural health monitoring the objective is to use impedance spectroscopy data to infer the physical condition of structures to which small piezoelectric actuators are bonded. Features of interest include stiffness changes, mass loading, and damping or mechanical losses. Equivalent circuit models are typically simple enough to facilitate the development of practical analytical models of the actuator-structure interaction. This type of parametric structure model allows raw impedance/admittance data to be interpreted optimally using standard multiple, nonlinear regression analysis. One potential long-term outcome is the possibility of cataloging measured viscoelastic properties of the mechanical subsystems of interest as simple lists of attributes and their statistical uncertainties, whose evolution can be followed in time. Equivalent circuit models are well suited for addressing calibration and self-consistency issues such as temperature corrections, Poisson mode coupling, and distributed relaxation processes.
Keep Your Distance! Using Second-Order Ordinary Differential Equations to Model Traffic Flow
ERIC Educational Resources Information Center
McCartney, Mark
2004-01-01
A simple mathematical model for how vehicles follow each other along a stretch of road is presented. The resulting linear second-order differential equation with constant coefficients is solved and interpreted. The model can be used as an application of solution techniques taught at first-year undergraduate level and as a motivator to encourage…
Controls on the distribution and isotopic composition of helium in deep ground-water flows
Zhao, X.; Fritzel, T.L.B.; Quinodoz, H.A.M.; Bethke, C.M.; Torgersen, T.
1998-01-01
The distribution and isotopic composition of helium in sedimentary basins can be used to interpret the ages of very old ground waters. The piston-flow model commonly used in such interpretation, however, does not account for several important factors and as such works well only in very simple flow regimes. In this study of helium transport in a hypothetical sedimentary basin, we develop a numerical model that accounts for the magnitude and distribution of the basal helium flux, hydrodynamic dispersion, and complexities in flow regimes such as subregional flow cells. The modeling shows that these factors exert strong controls on the helium distribution and isotopic composition. The simulations may provide a basis for more accurate interpretations of observed helium concentrations and isotopic ratios in sedimentary basins.
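The piston-flow interpretation this abstract critiques amounts to a one-line calculation: if helium accumulates at a constant rate along the flow path, apparent age is just accumulated excess helium divided by that rate. The numbers below are invented placeholders, not values from the study; the point is how little this estimate knows about dispersion or subregional flow.

```python
def piston_flow_age(excess_he, accumulation_rate):
    """Apparent ground-water age (yr) under the piston-flow assumption.

    excess_he         -- measured 4He above solubility equilibrium (cm3 STP / g)
    accumulation_rate -- assumed constant accumulation rate (cm3 STP / g / yr)
    """
    return excess_he / accumulation_rate

age = piston_flow_age(excess_he=5.0e-7, accumulation_rate=1.0e-11)
# A 50,000-year apparent age; dispersion, basal flux variation, and
# subregional flow cells -- the factors the paper models -- can make
# this simple estimate badly wrong.
```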
The Argumentative Introduction in Oral Interpretation.
ERIC Educational Resources Information Center
Mills, Daniel; Gaer, David C.
A study examined introductions used in competitive oral interpretation events. A total of 97 introductions (from four oral interpretation events at a nationally recognized Midwestern intercollegiate forensic tournament) were analyzed using four categories: Descriptive, Simple Theme, Descriptive and Simple Theme, and Argumentative Theme. Results…
On heart rate variability and autonomic activity in homeostasis and in systemic inflammation.
Scheff, Jeremy D; Griffel, Benjamin; Corbett, Siobhan A; Calvano, Steve E; Androulakis, Ioannis P
2014-06-01
Analysis of heart rate variability (HRV) is a promising diagnostic technique due to the noninvasive nature of the measurements involved and established correlations with disease severity, particularly in inflammation-linked disorders. However, the complexities underlying the interpretation of HRV complicate understanding the mechanisms that cause variability. Despite this, such interpretations are often found in the literature. In this paper we explored mathematical modeling of the relationship between the autonomic nervous system and the heart, incorporating basic mechanisms such as perturbing mean values of oscillating autonomic activities and saturating signal transduction pathways to explore their impacts on HRV. We focused our analysis on human endotoxemia, a well-established, controlled experimental model of systemic inflammation that provokes changes in HRV representative of acute stress. By contrasting modeling results with published experimental data and analyses, we found that even a simple model linking the autonomic nervous system and the heart confounds the interpretation of HRV changes in human endotoxemia. Multiple plausible alternative hypotheses, encoded in a model-based framework, equally reconciled experimental results. In total, our work illustrates how conventional assumptions about the relationships between autonomic activity and frequency-domain HRV metrics break down, even in a simple model. This underscores the need for further experimental work towards unraveling the underlying mechanisms of autonomic dysfunction and HRV changes in systemic inflammation. Understanding the extent of information encoded in HRV signals is critical in appropriately analyzing prior and future studies. Copyright © 2014 Elsevier Inc. All rights reserved.
The application of sensitivity analysis to models of large scale physiological systems
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1974-01-01
A survey of the literature of sensitivity analysis as it applies to biological systems is reported as well as a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, and in identifying relative parameter influence in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first order calculations of system behavior is presented.
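The parameter-sensitivity ranking this survey describes can be sketched on a simple population model. The logistic model and the finite-difference estimate of the normalized sensitivity d(ln f)/d(ln p) are standard techniques; the specific parameter values and step size below are our own illustrative choices, not from the report.

```python
import math

def logistic(t, r, K, p0):
    """Closed-form logistic population at time t."""
    return K / (1.0 + (K / p0 - 1.0) * math.exp(-r * t))

def sensitivity(f, params, name, t, h=1e-6):
    """Normalized sensitivity d(ln f)/d(ln p) by central differences."""
    lo = dict(params); lo[name] *= (1.0 - h)
    hi = dict(params); hi[name] *= (1.0 + h)
    f0 = f(t, **params)
    return (f(t, **hi) - f(t, **lo)) / (2.0 * h * f0)

p = {"r": 0.5, "K": 1000.0, "p0": 10.0}
s_r = sensitivity(logistic, p, "r", t=5.0)
s_K = sensitivity(logistic, p, "K", t=5.0)
# Early in growth the output is far more sensitive to the growth rate r
# than to the carrying capacity K; ranking parameters this way is what
# tells an experimentalist where to spend data-collection effort.
```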
Assistive Technologies for Second-Year Statistics Students Who Are Blind
ERIC Educational Resources Information Center
Erhardt, Robert J.; Shuman, Michael P.
2015-01-01
At Wake Forest University, a student who is blind enrolled in a second course in statistics. The course covered simple and multiple regression, model diagnostics, model selection, data visualization, and elementary logistic regression. These topics required that the student both interpret and produce three sets of materials: mathematical writing,…
An Emphasis on Perception: Teaching Image Formation Using a Mechanistic Model of Vision.
ERIC Educational Resources Information Center
Allen, Sue; And Others
An effective way to teach the concept of image is to give students a model of human vision which incorporates a simple mechanism of depth perception. In this study two almost identical versions of a curriculum in geometrical optics were created. One used a mechanistic, interpretive eye model, and in the other the eye was modeled as a passive,…
The Barrett-Crane model: asymptotic measure factor
NASA Astrophysics Data System (ADS)
Kamiński, Wojciech; Steinhaus, Sebastian
2014-04-01
The original spin foam model construction for 4D gravity by Barrett and Crane suffers from a few troubling issues. In the simple examples of the vertex amplitude they can be summarized as the existence of contributions to the asymptotics from non-geometric configurations. Even restricted to geometric contributions the amplitude is not completely worked out. While the phase is known to be the Regge action, the so-called measure factor has remained mysterious for a decade. In the toy model case of the 6j symbol this measure factor has a nice geometric interpretation of V^{-1/2}, leading to speculations that a similar interpretation should be possible also in the 4D case. In this paper we provide the first geometric interpretation of the geometric part of the asymptotics for the spin foam consisting of two glued 4-simplices (decomposition of the 4-sphere) in the Barrett-Crane model in the large internal spin regime.
Weighing Evidence "Steampunk" Style via the Meta-Analyser.
Bowden, Jack; Jackson, Chris
2016-10-01
The funnel plot is a graphical visualization of summary data estimates from a meta-analysis, and is a useful tool for detecting departures from the standard modeling assumptions. Although perhaps not widely appreciated, a simple extension of the funnel plot can help to facilitate an intuitive interpretation of the mathematics underlying a meta-analysis at a more fundamental level, by equating it to determining the center of mass of a physical system. We used this analogy to explain the concepts of weighing evidence and of biased evidence to a young audience at the Cambridge Science Festival, without recourse to precise definitions or statistical formulas and with a little help from Sherlock Holmes! Following on from the science fair, we have developed an interactive web-application (named the Meta-Analyser) to bring these ideas to a wider audience. We envisage that our application will be a useful tool for researchers when interpreting their data. First, to facilitate a simple understanding of fixed and random effects modeling approaches; second, to assess the importance of outliers; and third, to show the impact of adjusting for small study bias. This final aim is realized by introducing a novel graphical interpretation of the well-known method of Egger regression.
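The center-of-mass analogy at the heart of this abstract is just the inverse-variance-weighted mean: hang each study on a number line at its effect estimate, weight it by 1/variance, and the fixed-effect pooled estimate is the balance point. The study values below are invented for illustration; only the weighting formula is the standard one.

```python
def fixed_effect(estimates, variances):
    """Fixed-effect meta-analysis estimate = inverse-variance-weighted mean."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    return pooled

est = [0.10, 0.30, 0.50]   # per-study effect estimates (invented)
var = [0.01, 0.04, 0.04]   # their variances (larger = less precise)
pooled = fixed_effect(est, var)
# The precise first study carries weight 100 vs 25 for the others, so it
# pulls the balance point toward 0.10 -- the "center of mass" intuition.
```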
Huang, Ruili; Southall, Noel; Xia, Menghang; Cho, Ming-Hsuang; Jadhav, Ajit; Nguyen, Dac-Trung; Inglese, James; Tice, Raymond R.; Austin, Christopher P.
2009-01-01
In support of the U.S. Tox21 program, we have developed a simple and chemically intuitive model we call weighted feature significance (WFS) to predict the toxicological activity of compounds, based on the statistical enrichment of structural features in toxic compounds. We trained and tested the model on the following: (1) data from quantitative high-throughput screening cytotoxicity and caspase activation assays conducted at the National Institutes of Health Chemical Genomics Center, (2) data from Salmonella typhimurium reverse mutagenicity assays conducted by the U.S. National Toxicology Program, and (3) hepatotoxicity data published in the Registry of Toxic Effects of Chemical Substances. Enrichments of structural features in toxic compounds are evaluated for their statistical significance and compiled into a simple additive model of toxicity and then used to score new compounds for potential toxicity. The predictive power of the model for cytotoxicity was validated using an independent set of compounds from the U.S. Environmental Protection Agency, also tested at the National Institutes of Health Chemical Genomics Center. We compared the performance of our WFS approach with classical classification methods such as Naive Bayesian clustering and support vector machines. In most test cases, WFS showed similar or slightly better predictive power, especially in the prediction of hepatotoxic compounds, where WFS appeared to have the best performance among the three methods. The new algorithm has the important advantages of simplicity, power, interpretability, and ease of implementation. PMID:19805409
Interpretable Decision Sets: A Joint Framework for Description and Prediction
Lakkaraju, Himabindu; Bach, Stephen H.; Jure, Leskovec
2016-01-01
One of the most important obstacles to deploying predictive models is the fact that humans do not understand and trust them. Knowing which variables are important in a model’s prediction and how they are combined can be very powerful in helping people understand and trust automatic decision making systems. Here we propose interpretable decision sets, a framework for building predictive models that are highly accurate, yet also highly interpretable. Decision sets are sets of independent if-then rules. Because each rule can be applied independently, decision sets are simple, concise, and easily interpretable. We formalize decision set learning through an objective function that simultaneously optimizes accuracy and interpretability of the rules. In particular, our approach learns short, accurate, and non-overlapping rules that cover the whole feature space and pay attention to small but important classes. Moreover, we prove that our objective is a non-monotone submodular function, which we efficiently optimize to find a near-optimal set of rules. Experiments show that interpretable decision sets are as accurate at classification as state-of-the-art machine learning techniques. They are also three times smaller on average than rule-based models learned by other methods. Finally, results of a user study show that people are able to answer multiple-choice questions about the decision boundaries of interpretable decision sets and write descriptions of classes based on them faster and more accurately than with other rule-based models that were designed for interpretability. Overall, our framework provides a new approach to interpretable machine learning that balances accuracy, interpretability, and computational efficiency. PMID:27853627
NASA Astrophysics Data System (ADS)
Ogée, J.; Barbour, M. M.; Wingate, L.; Bert, D.; Bosc, A.; Stievenard, M.; Lambrot, C.; Pierre, M.; Bariac, T.; Dewar, R. C.
2009-04-01
High-resolution intra-annual measurements of the carbon and oxygen stable isotope composition of cellulose in annual tree rings (δ13Ccellulose and δ18Ocellulose, respectively) reveal well-defined seasonal patterns that could contain valuable records of past climate and tree function. Interpreting these signals is nonetheless complex because they not only record the signature of current assimilates, but also depend on carbon allocation dynamics within the trees. Here, we present a simple, single-substrate model for wood growth containing only 12 main parameters. The model is used to interpret an isotopic intra-annual chronology collected in an even-aged maritime pine plantation growing in the South-West of France, where climate, soil and flux variables were also monitored. The empirical δ13Ccellulose and δ18Ocellulose exhibit dynamic seasonal patterns, with clear differences between years and individuals, that are mostly captured by the model. In particular, the amplitude of both signals is reproduced satisfactorily as well as the sharp 18O enrichment at the beginning of 1997 and the less pronounced 13C and 18O depletion observed at the end of the latewood. Our results suggest that the single-substrate hypothesis is a good approximation for tree ring studies on Pinus pinaster, at least for the environmental conditions covered by this study. A sensitivity analysis revealed that, in the early wood, the model was particularly sensitive to the date when cell wall thickening begins (twt). We therefore propose to use the model to reconstruct time series of twt and explore how climate influences this key parameter of xylogenesis.
Functional programming interpreter. M. S. thesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robison, A.D.
1987-03-01
Functional Programming (FP) [BAC87] is an alternative to conventional imperative programming languages. This thesis describes an FP interpreter implementation. Superficially, FP appears to be a simple, but very inefficient language. Its simplicity, however, allows it to be interpreted quickly. Much of the inefficiency can be removed by simple interpreter techniques. This thesis describes the Illinois Functional Programming (IFP) interpreter, an interactive functional programming implementation which runs under both MS-DOS and UNIX. The IFP interpreter allows functions to be created, executed, and debugged in an environment very similar to UNIX. IFP's speed is competitive with other interpreted languages such as BASIC.
Integrating individual movement behaviour into dispersal functions.
Heinz, Simone K; Wissel, Christian; Conradt, Larissa; Frank, Karin
2007-04-21
Dispersal functions are an important tool for integrating dispersal into complex models of population and metapopulation dynamics. Most approaches in the literature are very simple, with the dispersal functions containing only one or two parameters which summarise all the effects of movement behaviour, such as different movement patterns or different perceptual abilities. The summarising nature of these parameters makes assessing the effect of one particular behavioural aspect difficult. We present a way of integrating movement behavioural parameters into a particular dispersal function in a simple way. Using a spatial individual-based simulation model for simulating different movement behaviours, we derive fitting functions for the functional relationship between the parameters of the dispersal function and several details of movement behaviour. This is done for three different movement patterns (loops, Archimedean spirals, random walk). Additionally, we provide measures which characterise the shape of the dispersal function and are interpretable in terms of landscape connectivity. This allows an ecological interpretation of the relationships found.
Simple linear and multivariate regression models.
Rodríguez del Águila, M M; Benítez-Parejo, N
2011-01-01
In biomedical research it is common to find problems in which we wish to relate a response variable to one or more variables capable of describing the behaviour of the former variable by means of mathematical models. Regression techniques are used to this effect, in which an equation is determined relating the two variables. While such equations can have different forms, linear equations are the most widely used form and are easy to interpret. The present article describes simple and multiple linear regression models, how they are calculated, and how their applicability assumptions are checked. Illustrative examples are provided, based on the use of the freely accessible R program. Copyright © 2011 SEICAP. Published by Elsevier Espana. All rights reserved.
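The article's worked examples use the freely accessible R program; the closed-form simple linear regression it describes can be sketched equivalently in Python (illustrative only, with made-up data in the test):

```python
def simple_linreg(x, y):
    """Ordinary least squares fit of y = a + b*x, closed form.

    Returns intercept a, slope b, and R^2 (the fraction of the
    variance in y explained by the linear model).
    """
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx          # slope
    a = my - b * mx        # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot
    return a, b, r2
```

Multiple regression extends the same idea to several predictors, at the cost of matrix rather than scalar arithmetic.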
Active earth pressure model tests versus finite element analysis
NASA Astrophysics Data System (ADS)
Pietrzak, Magdalena
2017-06-01
The purpose of the paper is to compare failure mechanisms observed in small-scale model tests on a granular sample in the active state with those simulated by the finite element method (FEM) using Plaxis 2D software. Small-scale model tests were performed on a rectangular granular sample retained by a rigid wall. Deformation of the sample resulted from simple wall translation in the direction "from the soil" (the active earth pressure state). A simple Coulomb-Mohr model for soil can be helpful in interpreting experimental findings in the case of granular materials. It was found that the general alignment of the strain localization pattern (failure mechanism) may belong to macro-scale features and be dominated by the test boundary conditions rather than the nature of the granular sample.
Perspective: Sloppiness and emergent theories in physics, biology, and beyond.
Transtrum, Mark K; Machta, Benjamin B; Brown, Kevin S; Daniels, Bryan C; Myers, Christopher R; Sethna, James P
2015-07-07
Large scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are "sloppy," i.e., exhibit behavior controlled by a relatively small number of parameter combinations. We review an information theoretic framework for analyzing sloppy models. This formalism is based on the Fisher information matrix, which is interpreted as a Riemannian metric on a parameterized space of models. Distance in this space is a measure of how distinguishable two models are based on their predictions. Sloppy model manifolds are bounded with a hierarchy of widths and extrinsic curvatures. The manifold boundary approximation can extract the simple, hidden theory from complicated sloppy models. We attribute the success of simple effective models in physics as likewise emerging from complicated processes exhibiting a low effective dimensionality. We discuss the ramifications and consequences of sloppy models for biochemistry and science more generally. We suggest that the reason our complex world is understandable is due to the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
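The sloppiness diagnosis described above, Fisher information eigenvalues spread over orders of magnitude, can be illustrated with a toy two-exponential model (a standard example in this literature; the rate constants and sampling times below are arbitrary):

```python
from math import exp, sqrt

def fim_eigenvalues(theta, times, h=1e-6):
    """Eigenvalues of the Fisher information matrix J^T J for the model
    y(t) = exp(-k1*t) + exp(-k2*t), with sensitivities J computed by
    central finite differences. Sloppiness shows up as a large ratio
    between the largest and smallest eigenvalue."""
    def y(th, t):
        return exp(-th[0] * t) + exp(-th[1] * t)
    J = []
    for t in times:
        row = []
        for j in range(2):
            tp = list(theta); tp[j] += h
            tm = list(theta); tm[j] -= h
            row.append((y(tp, t) - y(tm, t)) / (2 * h))
        J.append(row)
    # J^T J is 2x2, so its eigenvalues have a closed form
    a = sum(r[0] * r[0] for r in J)
    b = sum(r[0] * r[1] for r in J)
    d = sum(r[1] * r[1] for r in J)
    disc = sqrt((a - d) ** 2 + 4 * b * b)
    return (a + d + disc) / 2, (a + d - disc) / 2

# Two similar decay rates make the parameters nearly indistinguishable
lam_max, lam_min = fim_eigenvalues([1.0, 1.2], [t / 4 for t in range(1, 21)])
```

The sum of the two rates is well constrained by the data; their difference is the "sloppy" direction, which is exactly the kind of combination the manifold boundary approximation would eliminate.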
Assessment of cardiovascular risk based on a data-driven knowledge discovery approach.
Mendes, D; Paredes, S; Rocha, T; Carvalho, P; Henriques, J; Cabiddu, R; Morais, J
2015-01-01
The cardioRisk project addresses the development of personalized risk assessment tools for patients who have been admitted to the hospital with acute myocardial infarction. Although there are models available that assess the short-term risk of death/new events for such patients, these models were established in circumstances that do not take into account the present clinical interventions and, in some cases, the risk factors used by such models are not easily available in clinical practice. The integration of the existing risk tools (applied in the clinician's daily practice) with data-driven knowledge discovery mechanisms based on data routinely collected during hospitalizations, will be a breakthrough in overcoming some of these difficulties. In this context, the development of simple and interpretable models (based on recent datasets), unquestionably will facilitate and will introduce confidence in this integration process. In this work, a simple and interpretable model based on a real dataset is proposed. It consists of a decision tree model structure that uses a reduced set of six binary risk factors. The validation is performed using a recent dataset provided by the Portuguese Society of Cardiology (11113 patients), which originally comprised 77 risk factors. A sensitivity, specificity and accuracy of, respectively, 80.42%, 77.25% and 78.80% were achieved showing the effectiveness of the approach.
Weighing Evidence “Steampunk” Style via the Meta-Analyser
Bowden, Jack; Jackson, Chris
2016-01-01
The funnel plot is a graphical visualization of summary data estimates from a meta-analysis, and is a useful tool for detecting departures from the standard modeling assumptions. Although perhaps not widely appreciated, a simple extension of the funnel plot can help to facilitate an intuitive interpretation of the mathematics underlying a meta-analysis at a more fundamental level, by equating it to determining the center of mass of a physical system. We used this analogy to explain the concepts of weighing evidence and of biased evidence to a young audience at the Cambridge Science Festival, without recourse to precise definitions or statistical formulas and with a little help from Sherlock Holmes! Following on from the science fair, we have developed an interactive web-application (named the Meta-Analyser) to bring these ideas to a wider audience. We envisage that our application will be a useful tool for researchers when interpreting their data. First, to facilitate a simple understanding of fixed and random effects modeling approaches; second, to assess the importance of outliers; and third, to show the impact of adjusting for small study bias. This final aim is realized by introducing a novel graphical interpretation of the well-known method of Egger regression. PMID:28003684
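The center-of-mass analogy for fixed-effect meta-analysis can be made concrete with a short sketch (standard inverse-variance weighting, not code from the Meta-Analyser itself):

```python
def fixed_effect_pool(estimates, std_errors):
    """Inverse-variance weighted (fixed-effect) pooled estimate.

    Physically: each study is a mass w_i = 1/se_i^2 hung at position
    y_i on a beam, and the pooled estimate is the center of mass,
    sum(w_i * y_i) / sum(w_i). Precise studies weigh more.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se
```

An outlier in this picture is a heavy mass far from the others: removing it visibly shifts the balance point, which is the intuition the application exploits.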
Nonlinear Constitutive Modeling of Piezoelectric Ceramics
NASA Astrophysics Data System (ADS)
Xu, Jia; Li, Chao; Wang, Haibo; Zhu, Zhiwen
2017-12-01
Nonlinear constitutive modeling of piezoelectric ceramics is discussed in this paper. Van der Pol item is introduced to explain the simple hysteretic curve. Improved nonlinear difference items are used to interpret the hysteresis phenomena of piezoelectric ceramics. The fitting effect of the model on experimental data is proved by the partial least-square regression method. The results show that this method can describe the real curve well. The results of this paper are helpful to piezoelectric ceramics constitutive modeling.
Action Centered Contextual Bandits.
Greenewald, Kristjan; Tewari, Ambuj; Klasnja, Predrag; Murphy, Susan
2017-12-01
Contextual bandits have become popular as they offer a middle ground between very simple approaches based on multi-armed bandits and very complex approaches using the full power of reinforcement learning. They have demonstrated success in web applications and have a rich body of associated theoretical guarantees. Linear models are well understood theoretically and preferred by practitioners because they are not only easily interpretable but also simple to implement and debug. Furthermore, if the linear model is true, we get very strong performance guarantees. Unfortunately, in emerging applications in mobile health, the time-invariant linear model assumption is untenable. We provide an extension of the linear model for contextual bandits that has two parts: baseline reward and treatment effect. We allow the former to be complex but keep the latter simple. We argue that this model is plausible for mobile health applications. At the same time, it leads to algorithms with strong performance guarantees as in the linear model setting, while still allowing for complex nonlinear baseline modeling. Our theory is supported by experiments on data gathered in a recently concluded mobile health study.
Semantic wireless localization of WiFi terminals in smart buildings
NASA Astrophysics Data System (ADS)
Ahmadi, H.; Polo, A.; Moriyama, T.; Salucci, M.; Viani, F.
2016-06-01
The wireless localization of mobile terminals in indoor scenarios by means of a semantic interpretation of the environment is addressed in this work. A training-less approach based on the real-time calibration of a simple path loss model is proposed which combines (i) the received signal strength information measured by the wireless terminal and (ii) the topological features of the localization domain. A customized evolutionary optimization technique has been designed to estimate the optimal target position that fits the complex wireless indoor propagation and the semantic target-environment relation, as well. The proposed approach is experimentally validated in a real building area where the available WiFi network is opportunistically exploited for data collection. The presented results point out a reduction of the localization error obtained with the introduction of a very simple semantic interpretation of the considered scenario.
Obermaier, Michael; Bandarenka, Aliaksandr S; Lohri-Tymozhynsky, Cyrill
2018-03-21
Electrochemical impedance spectroscopy (EIS) is an indispensable tool for non-destructive operando characterization of Polymer Electrolyte Fuel Cells (PEFCs). However, in order to interpret the PEFC's impedance response and understand the phenomena revealed by EIS, numerous semi-empirical or purely empirical models are used. In this work, a relatively simple model for PEFC cathode catalyst layers in the absence of oxygen has been developed, in which all the equivalent circuit parameters have a direct physical meaning. It is based on: (i) experimental quantification of the catalyst layer pore radii, (ii) application of De Levie's analytical formula to calculate the response of a single pore, (iii) approximating the ionomer distribution within every pore, (iv) accounting for the specific adsorption of sulfonate groups and (v) accounting for a small H2 crossover through ~15 μm ionomer membranes. The derived model has effectively only 6 independent fitting parameters, each with clear physical meaning. It was used to investigate the cathode catalyst layer and the double layer capacitance at the interface between the ionomer/membrane and Pt-electrocatalyst. The model has demonstrated excellent results in fitting and interpretation of the impedance data under different relative humidities. A simple script enabling fitting of impedance data is provided as supporting information.
Theoretical model for optical properties of porphyrin
NASA Astrophysics Data System (ADS)
Phan, Anh D.; Nga, Do T.; Phan, The-Long; Thanh, Le T. M.; Anh, Chu T.; Bernad, Sophie; Viet, N. A.
2014-12-01
We propose a simple model to interpret the optical absorption spectra of porphyrin in different solvents. Our model successfully explains the decrease in the intensity of optical absorption at maxima of increased wavelengths. We also prove the dependence of the intensity and peak positions in the absorption spectra on the environment. The nature of the Soret band is supposed to derive from π plasmon. Our theoretical calculations are consistent with previous experimental studies.
Quantifying Confidence in Model Predictions for Hypersonic Aircraft Structures
2015-03-01
… of isolating calibrations of models in the network, segmented and simultaneous calibration are compared using the Kullback-Leibler … value of θ. While not all test statistics are as simple as measuring goodness or badness of fit, their directional interpretations tend to remain … data quite well, qualitatively. Quantitative goodness-of-fit tests are problematic because they assume a true empirical CDF is being tested or …
ERIC Educational Resources Information Center
GLOVER, J.H.
The chief objective of this study of speed-skill acquisition was to find a mathematical model capable of simple graphic interpretation for industrial training and production scheduling at the shop floor level. Studies of middle skill development in machine and vehicle assembly, aircraft production, spoolmaking and the machining of parts confirmed…
Herding, minority game, market clearing and efficient markets in a simple spin model framework
NASA Astrophysics Data System (ADS)
Kristoufek, Ladislav; Vosvrda, Miloslav
2018-01-01
We present a novel approach towards the financial Ising model. Most studies utilize the model to find settings which generate returns closely mimicking the financial stylized facts such as fat tails, volatility clustering and persistence, and others. We tackle the model utility from the other side and look for the combination of parameters which yields return dynamics of the efficient market in the view of the efficient market hypothesis. Working with the Ising model, we are able to present nicely interpretable results as the model is based on only two parameters. Apart from showing the results of our simulation study, we offer a new interpretation of the Ising model parameters via inverse temperature and entropy. We show that in fact market frictions (to a certain level) and herding behavior of the market participants do not go against market efficiency but what is more, they are needed for the markets to be efficient.
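The spin dynamics behind such financial Ising studies are typically simulated with Metropolis updates; the following is a minimal textbook sketch of those dynamics (not the authors' specific market setup), in which beta is the inverse temperature that the paper reinterprets in market terms:

```python
import random
from math import exp

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of a 2D Ising lattice, periodic boundaries.
    Each of the n*n attempted flips is accepted if it lowers the energy,
    or otherwise with probability exp(-beta * dE)."""
    n = len(spins)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
              + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
        dE = 2 * spins[i][j] * nb  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < exp(-beta * dE):
            spins[i][j] *= -1

rng = random.Random(0)
n = 16
spins = [[1] * n for _ in range(n)]          # start from the ordered state
for _ in range(200):
    metropolis_sweep(spins, beta=0.6, rng=rng)  # beta > beta_c ~ 0.44
magnetization = abs(sum(map(sum, spins))) / n ** 2
```

Below the critical temperature (beta above roughly 0.44 on the square lattice) the lattice stays strongly magnetized, which in the herding interpretation corresponds to agents aligning their positions.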
Testing the Structure of Hydrological Models using Genetic Programming
NASA Astrophysics Data System (ADS)
Selle, B.; Muttil, N.
2009-04-01
Genetic programming is able to systematically explore many alternative model structures of different complexity from available input and response data. We hypothesised that genetic programming can be used to test the structure of hydrological models and to identify dominant processes in hydrological systems. To test this, genetic programming was used to analyse a data set from a lysimeter experiment in southeastern Australia. The lysimeter experiment was conducted to quantify the deep percolation response under surface irrigated pasture to different soil types, water table depths and water ponding times during surface irrigation. Using genetic programming, a simple model of deep percolation was consistently evolved in multiple model runs. This simple and interpretable model confirmed the dominant process contributing to deep percolation represented in a conceptual model that was published earlier. Thus, this study shows that genetic programming can be used to evaluate the structure of hydrological models and to gain insight about the dominant processes in hydrological systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinches, A.; Pallent, L.J.
1986-10-01
Rate and yield information relating to biomass and product formation and to nitrogen, glucose and oxygen consumption are described for xanthan gum batch fermentations in which both chemically defined (glutamate nitrogen) and complex (peptone nitrogen) media are employed. Simple growth and product models are used for data interpretation. For both nitrogen sources, rate and yield parameter estimates are shown to be independent of initial nitrogen concentrations. For stationary phases, specific rates of gum production are shown to be independent of nitrogen source but dependent on initial nitrogen concentration. The latter is modeled empirically and suggests caution in applying simple product models to xanthan gum fermentations. 13 references.
Matrix population models from 20 studies of perennial plant populations
Ellis, Martha M.; Williams, Jennifer L.; Lesica, Peter; Bell, Timothy J.; Bierzychudek, Paulette; Bowles, Marlin; Crone, Elizabeth E.; Doak, Daniel F.; Ehrlen, Johan; Ellis-Adam, Albertine; McEachern, Kathryn; Ganesan, Rengaian; Latham, Penelope; Luijten, Sheila; Kaye, Thomas N.; Knight, Tiffany M.; Menges, Eric S.; Morris, William F.; den Nijs, Hans; Oostermeijer, Gerard; Quintana-Ascencio, Pedro F.; Shelly, J. Stephen; Stanley, Amanda; Thorpe, Andrea; Tamara, Ticktin; Valverde, Teresa; Weekley, Carl W.
2012-01-01
Demographic transition matrices are one of the most commonly applied population models for both basic and applied ecological research. The relatively simple framework of these models and simple, easily interpretable summary statistics they produce have prompted the wide use of these models across an exceptionally broad range of taxa. Here, we provide annual transition matrices and observed stage structures/population sizes for 20 perennial plant species which have been the focal species for long-term demographic monitoring. These data were assembled as part of the "Testing Matrix Models" working group through the National Center for Ecological Analysis and Synthesis (NCEAS). In sum, these data represent 82 populations with >460 total population-years of data. It is our hope that making these data available will help promote and improve our ability to monitor and understand plant population dynamics.
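A typical use of such transition matrices is to extract the asymptotic growth rate (the dominant eigenvalue, lambda) and the stable stage distribution. A minimal sketch via power iteration, using a hypothetical three-stage perennial matrix rather than any of the 20 published datasets:

```python
def dominant_eigen(A, iters=200):
    """Asymptotic growth rate lambda and stable stage structure of a
    projection matrix, via power iteration on n_{t+1} = A n_t."""
    n = [1.0] * len(A)
    lam = 1.0
    for _ in range(iters):
        n_next = [sum(a * x for a, x in zip(row, n)) for row in A]
        lam = sum(n_next) / sum(n)     # converges to dominant eigenvalue
        s = sum(n_next)
        n = [x / s for x in n_next]    # renormalize to a distribution
    return lam, n

# Hypothetical stages: seedling, juvenile, adult (columns = current stage)
A = [[0.0, 0.5, 2.0],   # fecundities
     [0.3, 0.4, 0.0],   # seedling survival, juvenile stasis
     [0.0, 0.3, 0.9]]   # juvenile-to-adult transition, adult survival
lam, stable = dominant_eigen(A)
```

lambda > 1 indicates a growing population and lambda < 1 a declining one, which is why this single summary statistic is so widely reported for matrix models.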
Magneto-hydrodynamic modeling of gas discharge switches
NASA Astrophysics Data System (ADS)
Doiphode, P.; Sakthivel, N.; Sarkar, P.; Chaturvedi, S.
2002-12-01
We have performed one-dimensional, time-dependent magneto-hydrodynamic modeling of fast gas-discharge switches. The model has been applied to both high- and low-pressure switches, involving a cylindrical argon-filled cavity. It is assumed that the discharge is initiated in a small channel near the axis of the cylinder. Joule heating in this channel rapidly raises its temperature and pressure. This drives a radial shock wave that heats and ionizes the surrounding low-temperature region, resulting in progressive expansion of the current channel. Our model is able to reproduce this expansion. However, significant difference of detail is observed, as compared with a simple model reported in the literature. In this paper, we present details of our simulations, a comparison with results from the simple model, and a physical interpretation for these differences. This is a first step towards development of a detailed 2-D model for such switches.
On the simple random-walk models of ion-channel gate dynamics reflecting long-term memory.
Wawrzkiewicz, Agata; Pawelek, Krzysztof; Borys, Przemyslaw; Dworakowska, Beata; Grzywna, Zbigniew J
2012-06-01
Several approaches to ion-channel gating modelling have been proposed. Although many models describe the dwell-time distributions correctly, they are incapable of predicting and explaining the long-term correlations between the lengths of adjacent openings and closings of a channel. In this paper we propose two simple random-walk models of the gating dynamics of voltage and Ca(2+)-activated potassium channels which qualitatively reproduce the dwell-time distributions, and describe the experimentally observed long-term memory quite well. Biological interpretation of both models is presented. In particular, the origin of the correlations is associated with fluctuations of channel mass density. The long-term memory effect, as measured by Hurst R/S analysis of experimental single-channel patch-clamp recordings, is close to the behaviour predicted by our models. The flexibility of the models enables their use as templates for other types of ion channel.
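The Hurst R/S analysis mentioned above can be sketched as follows; the window sizes and series length are arbitrary choices, and for memoryless (iid) data the estimate should sit near H = 0.5, with persistent long-term memory pushing it toward 1:

```python
import random
from math import log, sqrt

def rescaled_range(x):
    """R/S statistic: range of the cumulative mean-adjusted sum,
    rescaled by the standard deviation of the series."""
    n = len(x)
    m = sum(x) / n
    dev = [xi - m for xi in x]
    cum, z = 0.0, []
    for d in dev:
        cum += d
        z.append(cum)
    r = max(z) - min(z)
    s = sqrt(sum(d * d for d in dev) / n)
    return r / s

def hurst(x, n1=64, n2=1024):
    """Crude Hurst exponent from R/S at two window sizes, using
    E[R/S] ~ c * n^H  =>  H = log(RS2/RS1) / log(n2/n1),
    averaging R/S over non-overlapping windows of each size."""
    def avg_rs(n):
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        return sum(rescaled_range(c) for c in chunks) / len(chunks)
    return log(avg_rs(n2) / avg_rs(n1)) / log(n2 / n1)

rng = random.Random(1)
iid = [rng.gauss(0, 1) for _ in range(4096)]
H = hurst(iid)
```

In practice R/S is computed over many window sizes and H is read off a log-log regression; the two-scale version here only illustrates the scaling idea, and short-series bias means iid data can estimate slightly above 0.5.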
Interpreting electrically evoked emissions using a finite-element model of the cochlea
NASA Astrophysics Data System (ADS)
Deo, Niranjan V.; Grosh, Karl; Parthasarathi, Anand
2003-10-01
Electrically evoked otoacoustic emissions (EEOAEs) are used to investigate in vivo cochlear electromechanical function. Electrical stimulation through bipolar electrodes placed very close to the basilar membrane (in the scala vestibuli and scala tympani) gives rise to a narrow frequency range of EEOAEs, limited to around 20 kHz when the electrodes are placed near the 18-kHz best frequency place. Model predictions using a three-dimensional inviscid fluid model in conjunction with a middle ear model [S. Puria and J. B. Allen, J. Acoust. Soc. Am. 104, 3463-3481 (1998)] and a simple model for outer hair cell activity [S. Neely and D. Kim, J. Acoust. Soc. Am. 94, 137-146 (1993)] are used to interpret the experimental results. To estimate effect of viscosity, model results are compared with those obtained for a viscous fluid. The models are solved using a 2.5-D finite-element formulation. Predictions show that the high frequency limit of the excitation is determined by the spatial extent of the current stimulus. The global peaks in the EEOAE spectra are interpreted as constructive interference between electrically evoked backward traveling waves and forward traveling waves reflected from the stapes. Steady state response predictions of the model are presented.
Model for the loop voltage of reversed field pinches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarboe, T.R.; Alper, B.
1987-04-01
A simple model is presented that uses the concept of helicity balance to predict the toroidal loop voltage of reversed field pinches (RFP's). Data from the RFP's at Culham (Plasma Phys. Controlled Fusion 27, 1307 (1985)) are used to calibrate and verify the model. The model indicates that most of the helicity dissipation occurs in edge regions that are outside the limiters or in regions where field lines contact the walls. The value of this new interpretation to future RFP and spheromak experiments is discussed.
Bertoluzzi, Luca; Badia-Bou, Laura; Fabregat-Santiago, Francisco; Gimenez, Sixto; Bisquert, Juan
2013-04-18
A simple model is proposed that allows interpretation of the cyclic voltammetry diagrams obtained experimentally for photoactive semiconductors with surface states or catalysts used for fuel production from sunlight. When the system is limited by charge transfer from the traps/catalyst layer and by detrapping, it is shown that only one capacitive peak is observable and is not recoverable in the return voltage scan. If the system is limited only by charge transfer and not by detrapping, two symmetric capacitive peaks can be observed in the cathodic and anodic directions. The model appears as a useful tool for the swift analysis of the electronic processes that limit fuel production.
Jovanovic, Milos; Radovanovic, Sandro; Vukicevic, Milan; Van Poucke, Sven; Delibasic, Boris
2016-09-01
Quantification and early identification of unplanned readmission risk have the potential to improve the quality of care during hospitalization and after discharge. However, high dimensionality, sparsity, and class imbalance of electronic health data and the complexity of risk quantification, challenge the development of accurate predictive models. Predictive models require a certain level of interpretability in order to be applicable in real settings and create actionable insights. This paper aims to develop accurate and interpretable predictive models for readmission in a general pediatric patient population, by integrating a data-driven model (sparse logistic regression) and domain knowledge based on the international classification of diseases 9th-revision clinical modification (ICD-9-CM) hierarchy of diseases. Additionally, we propose a way to quantify the interpretability of a model and inspect the stability of alternative solutions. The analysis was conducted on >66,000 pediatric hospital discharge records from California, State Inpatient Databases, Healthcare Cost and Utilization Project between 2009 and 2011. We incorporated domain knowledge based on the ICD-9-CM hierarchy in a data driven, Tree-Lasso regularized logistic regression model, providing the framework for model interpretation. This approach was compared with traditional Lasso logistic regression resulting in models that are easier to interpret by fewer high-level diagnoses, with comparable prediction accuracy. The results revealed that the use of a Tree-Lasso model was as competitive in terms of accuracy (measured by area under the receiver operating characteristic curve-AUC) as the traditional Lasso logistic regression, but integration with the ICD-9-CM hierarchy of diseases provided more interpretable models in terms of high-level diagnoses. Additionally, interpretations of models are in accordance with existing medical understanding of pediatric readmission. 
Best performing models have similar performances, reaching AUC values of 0.783 and 0.779 for traditional Lasso and Tree-Lasso, respectively. However, information loss of Lasso models is 0.35 bits higher compared to the Tree-Lasso model. We propose a method for building predictive models applicable for the detection of readmission risk based on Electronic Health records. Integration of domain knowledge (in the form of the ICD-9-CM taxonomy) and a data-driven, sparse predictive algorithm (Tree-Lasso Logistic Regression) resulted in an increase of interpretability of the resulting model. The models are interpreted for the readmission prediction problem in the general pediatric population in California, as well as several important subpopulations, and the interpretations of the models comply with existing medical understanding of pediatric readmission. Finally, a quantitative assessment of the interpretability of the models is given, one that goes beyond simple counts of selected low-level features. Copyright © 2016 Elsevier B.V. All rights reserved.
The Critical Power Model as a Potential Tool for Anti-doping
Puchowicz, Michael J.; Mizelman, Eliran; Yogev, Assaf; Koehle, Michael S.; Townsend, Nathan E.; Clarke, David C.
2018-01-01
Existing doping detection strategies rely on direct and indirect biochemical measurement methods focused on detecting banned substances, their metabolites, or biomarkers related to their use. However, the goal of doping is to improve performance, and yet evidence from performance data is not considered by these strategies. The emergence of portable sensors for measuring exercise intensities and of player tracking technologies may enable the widespread collection of performance data. How these data should be used for doping detection is an open question. Herein, we review the basis by which performance models could be used for doping detection, followed by critically reviewing the potential of the critical power (CP) model as a prototypical performance model that could be used in this regard. Performance models are mathematical representations of performance data specific to the athlete. Some models feature parameters with physiological interpretations, changes to which may provide clues regarding the specific doping method. The CP model is a simple model of the power-duration curve and features two physiologically interpretable parameters, CP and W′. We argue that the CP model could be useful for doping detection mainly based on the predictable sensitivities of its parameters to ergogenic aids and other performance-enhancing interventions. However, our argument is counterbalanced by the existence of important limitations and unresolved questions that need to be addressed before the model is used for doping detection. We conclude by providing a simple worked example showing how it could be used and propose recommendations for its implementation. PMID:29928234
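The two-parameter CP model itself is simple enough to fit by hand: with P(t) = CP + W'/t, two maximal efforts of known power and duration determine both parameters. A short sketch with purely hypothetical trial data (not the worked example from the paper):

```python
def critical_power(p1, t1, p2, t2):
    """Solve P(t) = CP + W'/t from two maximal efforts
    (power in watts, duration in seconds)."""
    w_prime = (p1 - p2) / (1.0 / t1 - 1.0 / t2)  # joules
    cp = p1 - w_prime / t1                        # watts
    return cp, w_prime

# hypothetical trials: 400 W held for 180 s, 320 W held for 600 s
cp, w_prime = critical_power(400, 180, 320, 600)
print(cp, w_prime)  # -> cp ≈ 285.7 W, w_prime ≈ 20571 J
```

A doping-detection use would then watch for physiologically implausible jumps in an athlete's fitted CP or W' over time, which is the sensitivity argument made in the abstract.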
Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.
DeCarlo, Lawrence T
2003-02-01
The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means of applying signal detection theory to a variety of research areas.
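Outside SPSS, the unequal-variance model can also be explored directly from hit and false-alarm rates: the slope of the zROC (hit and false-alarm rates transformed to z scores) estimates the noise-to-signal standard deviation ratio. A minimal standard-library sketch; the hit/false-alarm rates below are invented, and the d_a expression is the commonly given zROC-intercept form:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit transform

def zroc_slope_and_da(hits, fas):
    """Least-squares line through the (z(F), z(H)) points; the slope
    estimates sigma_noise / sigma_signal in the unequal-variance model."""
    zh = [z(h) for h in hits]
    zf = [z(f) for f in fas]
    n = len(zh)
    mf, mh = sum(zf) / n, sum(zh) / n
    slope = (sum((a - mf) * (b - mh) for a, b in zip(zf, zh))
             / sum((a - mf) ** 2 for a in zf))
    intercept = mh - slope * mf
    # d_a: a sensitivity index adjusted for unequal variances
    da = intercept * (2.0 / (1.0 + slope ** 2)) ** 0.5
    return slope, da

# hypothetical hit/false-alarm rates at three confidence criteria
hits = [0.90, 0.75, 0.55]
fas = [0.50, 0.25, 0.10]
print(zroc_slope_and_da(hits, fas))  # slope < 1 indicates wider signal variance
```

A slope reliably below 1, as here, is the classic signature that motivates the unequal-variance model over the equal-variance one.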
Formal verification of automated teller machine systems using SPIN
NASA Astrophysics Data System (ADS)
Iqbal, Ikhwan Mohammad; Adzkiya, Dieky; Mukhlash, Imam
2017-08-01
Formal verification is a technique for ensuring the correctness of systems. This work focuses on verifying a model of the Automated Teller Machine (ATM) system against some specifications. We construct the model as a state transition diagram that is suitable for verification. The specifications are expressed as Linear Temporal Logic (LTL) formulas. We use the Simple Promela Interpreter (SPIN) model checker to check whether the model satisfies the formulas. This model checker accepts models written in the Process Meta Language (PROMELA), with specifications given as LTL formulas.
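The core of what SPIN does, exhaustive exploration of a state transition system against a property, can be sketched with an explicit-state search in a few lines. The toy ATM model below is an illustrative assumption, not the PROMELA model from the paper, and the safety property is checked directly rather than translated from LTL:

```python
from collections import deque

# Toy ATM transition system (states and actions are invented for illustration).
TRANSITIONS = {
    "idle":       [("insert_card", "card_in")],
    "card_in":    [("enter_pin", "checking"), ("eject", "idle")],
    "checking":   [("pin_ok", "authed"), ("pin_bad", "card_in")],
    "authed":     [("dispense", "dispensing"), ("eject", "idle")],
    "dispensing": [("take_cash", "idle")],
}

def violates_safety(start="idle"):
    """Explicit-state search for the safety property:
    'dispense' never fires before 'pin_ok' within the current session."""
    seen, queue = set(), deque([(start, False)])  # (state, PIN verified?)
    while queue:
        state, ok = queue.popleft()
        if (state, ok) in seen:
            continue
        seen.add((state, ok))
        for action, nxt in TRANSITIONS.get(state, []):
            if action == "dispense" and not ok:
                return True  # counterexample found
            nxt_ok = ok or action == "pin_ok"
            if action in ("eject", "take_cash"):  # session ends, reset flag
                nxt_ok = False
            queue.append((nxt, nxt_ok))
    return False

print(violates_safety())  # prints False: the model satisfies the property
```

SPIN performs essentially this kind of reachability analysis, but over the product of the PROMELA model with an automaton derived from the negated LTL formula.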
Strategy Space Exploration of a Multi-Agent Model for the Labor Market
NASA Astrophysics Data System (ADS)
de Grande, Pablo; Eguia, Manuel
We present a multi-agent system in which typical labor market mechanisms emerge. Based on a few simple rules, our model allows different interpretative paradigms to be represented and different scenarios to be tried out. We thoroughly explore the space of possible strategies, both for the unemployed and for companies, and analyze the trade-offs between these strategies with regard to global social and economic indicators.
Solares, Santiago D
2015-01-01
This paper introduces a quasi-3-dimensional (Q3D) viscoelastic model and software tool for use in atomic force microscopy (AFM) simulations. The model is based on a 2-dimensional array of standard linear solid (SLS) model elements. The well-known 1-dimensional SLS model is a textbook example in viscoelastic theory but is relatively new in AFM simulation. It is the simplest model that offers a qualitatively correct description of the most fundamental viscoelastic behaviors, namely stress relaxation and creep. However, this simple model does not reflect the correct curvature in the repulsive portion of the force curve, so its application in the quantitative interpretation of AFM experiments is relatively limited. In the proposed Q3D model the use of an array of SLS elements leads to force curves that have the typical upward curvature in the repulsive region, while still offering a very low computational cost. Furthermore, the use of a multidimensional model allows for the study of AFM tips having non-ideal geometries, which can be extremely useful in practice. Examples of typical force curves are provided for single- and multifrequency tapping-mode imaging, for both of which the force curves exhibit the expected features. Finally, a software tool to simulate amplitude and phase spectroscopy curves is provided, which can be easily modified to implement other controls schemes in order to aid in the interpretation of AFM experiments.
Modeling Simple Driving Tasks with a One-Boundary Diffusion Model
Ratcliff, Roger; Strayer, David
2014-01-01
A one-boundary diffusion model was applied to the data from two experiments in which subjects were performing a simple simulated driving task. In the first experiment, the same subjects were tested on two driving tasks using a PC-based driving simulator and the psychomotor vigilance test (PVT). The diffusion model fit the response time (RT) distributions for each task and individual subject well. Model parameters were found to correlate across tasks which suggests common component processes were being tapped in the three tasks. The model was also fit to a distracted driving experiment of Cooper and Strayer (2008). Results showed that distraction altered performance by affecting the rate of evidence accumulation (drift rate) and/or increasing the boundary settings. This provides an interpretation of cognitive distraction whereby conversing on a cell phone diverts attention from the normal accumulation of information in the driving environment. PMID:24297620
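A one-boundary (Wiener) diffusion process is straightforward to simulate by Euler-Maruyama integration: evidence accrues at a drift rate plus Gaussian noise until it first crosses the boundary, and the first-passage times form the predicted RT distribution. A minimal sketch with invented parameter values:

```python
import math
import random

def first_passage_times(drift, boundary=1.0, sigma=1.0, dt=0.002, n=500, seed=1):
    """Simulate a one-boundary diffusion model: evidence accumulates at
    rate `drift` plus Gaussian noise until it first crosses `boundary`."""
    rng = random.Random(seed)
    times = []
    for _ in range(n):
        x, t = 0.0, 0.0
        while x < boundary:
            x += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
            t += dt
        times.append(t)
    return times

mean = lambda xs: sum(xs) / len(xs)
slow = first_passage_times(drift=0.5)  # low drift, e.g. degraded evidence
fast = first_passage_times(drift=1.5)  # high drift: faster evidence accrual
print(mean(slow), mean(fast))          # theoretical means: 2.0 s and 0.67 s
```

Lowering the drift rate, as the abstract reports for cell-phone distraction, lengthens and right-skews the simulated RT distribution, which is exactly the signature the model fits exploit.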
A simple white noise analysis of neuronal light responses.
Chichilnisky, E J
2001-05-01
A white noise technique is presented for estimating the response properties of spiking visual system neurons. The technique is simple, robust, efficient and well suited to simultaneous recordings from multiple neurons. It provides a complete and easily interpretable model of light responses even for neurons that display a common form of response nonlinearity that precludes classical linear systems analysis. A theoretical justification of the technique is presented that relies only on elementary linear algebra and statistics. Implementation is described with examples. The technique and the underlying model of neural responses are validated using recordings from retinal ganglion cells, and in principle are applicable to other neurons. Advantages and disadvantages of the technique relative to classical approaches are discussed.
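The essence of the technique (reverse correlation, or the spike-triggered average) can be sketched briefly: drive a model neuron with Gaussian white noise and average the stimulus segments preceding each spike, which recovers the linear filter up to a scale factor even through a rectifying nonlinearity. The filter and the toy LN neuron below are invented for illustration:

```python
import random

random.seed(0)
FILTER = [0.1, 0.4, 1.0, 0.4, -0.2]  # hypothetical linear receptive field
L = len(FILTER)

# white-noise stimulus and spikes from a linear-nonlinear (LN) model neuron
stim = [random.gauss(0, 1) for _ in range(20000)]
spikes = []
for t in range(L, len(stim)):
    drive = sum(f * stim[t - L + 1 + i] for i, f in enumerate(FILTER))
    rate = max(drive, 0.0) * 0.1          # rectifying nonlinearity
    if random.random() < min(rate, 1.0):  # Poisson-like spike generation
        spikes.append(t)

# spike-triggered average: mean stimulus segment preceding each spike
sta = [sum(stim[t - L + 1 + i] for t in spikes) / len(spikes) for i in range(L)]
print(sta)  # proportional to FILTER, up to a scale factor
```

With real recordings the same averaging applies; the theoretical justification in the article explains why the Gaussian stimulus makes the estimate unbiased despite the nonlinearity.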
Bidault, Xavier; Chaussedent, Stéphane; Blanc, Wilfried
2015-10-21
A simple transferable adaptive model is developed and it allows for the first time to simulate by molecular dynamics the separation of large phases in the MgO-SiO2 binary system, as experimentally observed and as predicted by the phase diagram, meaning that separated phases have various compositions. This is a real improvement over fixed-charge models, which are often limited to an interpretation involving the formation of pure clusters, or involving the modified random network model. Our adaptive model, efficient to reproduce known crystalline and glassy structures, allows us to track the formation of large amorphous Mg-rich Si-poor nanoparticles in an Mg-poor Si-rich matrix from a 0.1MgO-0.9SiO2 melt.
NASA Astrophysics Data System (ADS)
Roy, S. G.; Koons, P. O.; Gerbi, C. C.; Capps, D. K.; Tucker, G. E.; Rogers, Z. A.
2014-12-01
Sophisticated numerical tools exist for modeling geomorphic processes and linking them to tectonic and climatic systems, but they are often seen as inaccessible for users with an exploratory level of interest. We have improved the accessibility of landscape evolution models by producing a simple graphical user interface (GUI) that takes advantage of the Channel-Hillslope Integrated Landscape Development (CHILD) model. Model access is flexible: the user can edit values for basic geomorphic, tectonic, and climate parameters, or obtain greater control by defining the spatiotemporal distributions of those parameters. Users can make educated predictions by choosing their own parametric values for the governing equations and interpreting the results immediately through model graphics. This method of modeling allows users to iteratively build their understanding through experimentation. Use of this GUI is intended for inquiry and discovery-based learning activities. We discuss a number of examples of how the GUI can be used at the upper high school, introductory university, and advanced university level. Effective teaching modules initially focus on an inquiry-based example guided by the instructor. As students become familiar with the GUI and the CHILD model, the class can shift to more student-centered exploration and experimentation. To make model interpretations more robust, digital elevation models can be imported and direct comparisons can be made between CHILD model results and natural topography. The GUI is available online through the University of Maine's Earth and Climate Sciences website, through the Community Surface Dynamics Modeling System (CSDMS) model repository, or by contacting the corresponding author.
Evidence integration in model-based tree search
Solway, Alec; Botvinick, Matthew M.
2015-01-01
Research on the dynamics of reward-based, goal-directed decision making has largely focused on simple choice, where participants decide among a set of unitary, mutually exclusive options. Recent work suggests that the deliberation process underlying simple choice can be understood in terms of evidence integration: Noisy evidence in favor of each option accrues over time, until the evidence in favor of one option is significantly greater than the rest. However, real-life decisions often involve not one, but several steps of action, requiring a consideration of cumulative rewards and a sensitivity to recursive decision structure. We present results from two experiments that leveraged techniques previously applied to simple choice to shed light on the deliberation process underlying multistep choice. We interpret the results from these experiments in terms of a new computational model, which extends the evidence accumulation perspective to multiple steps of action. PMID:26324932
Interpretation of styles of simple stations in Korea
NASA Astrophysics Data System (ADS)
Hwang, Minhye; Shin, Yekyeong
2018-06-01
The purpose of this paper is to offer a stylistic interpretation of the exteriors of simple stations in Korea. A simple station is a type of railway station that was installed where there were too few passengers to justify operating a full station at high cost. It has minimal functions, such as a waiting room, an office, an operating room, and toilets, and such stations were built between the 1910s and the 1960s. The form of the building is as plain as the name "simple station" suggests, which is why reading its style seems easy and obvious. Yet interpretation is also difficult because of the lack of stylistic evidence. Nevertheless, in the relationship between the station and the station tree, the concepts of the Picturesque and of the Palladian style can be found. It is still hard to distinguish whether the building style as a whole is Western or Japanese. The simple station is one of the things that Japan built as Western culture in Korea during the Japanese colonial era, so it is natural that its style of form is complex.
Wenk, H.-R.; Takeshita, T.; Bechler, E.; Erskine, B.G.; Matthies, S.
1987-01-01
The pattern of lattice preferred orientation (texture) in deformed rocks is an expression of the strain path and the acting deformation mechanisms. A first indication of the strain path is given by the symmetry of pole figures: coaxial deformation produces orthorhombic pole figures, while non-coaxial deformation yields monoclinic or triclinic pole figures. More quantitative information about the strain history can be obtained by comparing natural textures with experimental ones and with theoretical models. For this comparison, a representation in the sensitive three-dimensional orientation distribution space is extremely important, and efforts are made to explain this concept. We have been investigating differences between pure shear and simple shear deformation in carbonate rocks and have found considerable agreement between textures produced in plane strain experiments and predictions based on the Taylor model. We were able to simulate the observed changes with strain history (coaxial vs non-coaxial) and the profound texture transition which occurs with increasing temperature. Two natural calcite textures were then selected which we interpreted by comparing them with the experimental and theoretical results. A marble from the Santa Rosa mylonite zone in southern California displays orthorhombic pole figures with patterns consistent with low temperature deformation in pure shear. A limestone from the Tanque Verde detachment fault in Arizona has a monoclinic fabric from which we can interpret that 60% of the deformation occurred by simple shear. © 1987.
Holographic multiverse and conformal invariance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garriga, Jaume; Vilenkin, Alexander, E-mail: jaume.garriga@ub.edu, E-mail: vilenkin@cosmos.phy.tufts.edu
2009-11-01
We consider a holographic description of the inflationary multiverse, according to which the wave function of the universe is interpreted as the generating functional for a lower dimensional Euclidean theory. We analyze a simple model where transitions between inflationary vacua occur through bubble nucleation, and the inflating part of spacetime consists of de Sitter regions separated by thin bubble walls. In this model, we present some evidence that the dual theory is conformally invariant in the UV.
Signal transmission competing with noise in model excitable brains
NASA Astrophysics Data System (ADS)
Marro, J.; Mejias, J. F.; Pinamonti, G.; Torres, J. J.
2013-01-01
This is a short review of recent studies in our group on how weak signals may efficiently propagate in a system with noise-induced excitation-inhibition competition which adapts to the activity at short-time scales and thus induces excitable conditions. Our numerical results on simple mathematical models should hold for many complex networks in nature, including some brain cortical areas. In particular, they serve us here to interpret available psycho-technical data.
Statistical Hypothesis Testing in Intraspecific Phylogeography: NCPA versus ABC
Templeton, Alan R.
2009-01-01
Nested clade phylogeographic analysis (NCPA) and approximate Bayesian computation (ABC) have been used to test phylogeographic hypotheses. Multilocus NCPA tests null hypotheses, whereas ABC discriminates among a finite set of alternatives. The interpretive criteria of NCPA are explicit and allow complex models to be built from simple components. The interpretive criteria of ABC are ad hoc and require the specification of a complete phylogeographic model. The conclusions from ABC are often influenced by implicit assumptions arising from the many parameters needed to specify a complex model. These complex models confound many assumptions so that biological interpretations are difficult. Sampling error is accounted for in NCPA, but ABC ignores important sources of sampling error, which creates pseudo-statistical power. NCPA generates the full sampling distribution of its statistics, but ABC only yields local probabilities, which in turn make it impossible to distinguish between a good fitting model, a non-informative model, and an over-determined model. Both NCPA and ABC use approximations, but convergences of the approximations used in NCPA are well defined whereas those in ABC are not. NCPA can analyze a large number of locations, but ABC cannot. Finally, the dimensionality of the tested hypotheses is known in NCPA, but not for ABC. As a consequence, the “probabilities” generated by ABC are not true probabilities and are statistically non-interpretable. Accordingly, ABC should not be used for hypothesis testing, but simulation approaches are valuable when used in conjunction with NCPA or other methods that do not rely on highly parameterized models. PMID:19192182
NASA Technical Reports Server (NTRS)
Carsey, Frank D.; Garwood, Ronald W.; Roach, Andrew T.
1993-01-01
In this paper we present an interpretation of coarse-resolution passive microwave data for 1989 and 1992 in the context of a simple model of ice-edge retreat, in order to describe the growth of the Nordbukta embayment and the formation and migration of an Odden polynya.
ERIC Educational Resources Information Center
National Oceanic and Atmospheric Administration (DOC), Rockville, MD.
This activity is designed to teach about topographic maps and bathymetric charts. Students are expected to create a topographic map from a model landform, interpret a simple topographic map, and explain the difference between topographic and bathymetric maps. The activity provides learning objectives, a list of needed materials, key vocabulary…
Wintermark, M; Zeineh, M; Zaharchuk, G; Srivastava, A; Fischbein, N
2016-07-01
A neuroradiologist's activity includes many tasks beyond interpreting relative value unit-generating imaging studies. Our aim was to test a simple method to record and quantify the non-relative value unit-generating clinical activity represented by consults and clinical conferences, including tumor boards. Four full-time neuroradiologists, working an average of 50% clinical and 50% academic activity, systematically recorded all the non-relative value unit-generating consults and conferences in which they were involved during 3 months by using a simple, Web-based, computer-based application accessible from smartphones, tablets, or computers. The number and type of imaging studies they interpreted during the same period and the associated relative value units were extracted from our billing system. During 3 months, the 4 neuroradiologists working an average of 50% clinical activity interpreted 4241 relative value unit-generating imaging studies, representing 8152 work relative value units. During the same period, they recorded 792 non-relative value unit-generating study reviews as part of consults and conferences (not including reading room consults), representing 19% of the interpreted relative value unit-generating imaging studies. We propose a simple Web-based smartphone app to record and quantify non-relative value unit-generating activities including consults, clinical conferences, and tumor boards. The quantification of non-relative value unit-generating activities is paramount in this time of a paradigm shift from volume to value. It also represents an important tool for determining staffing levels, which cannot be performed on the basis of relative value unit only, considering the importance of time spent by radiologists on non-relative value unit-generating activities. It may also influence payment models from medical centers to radiology departments or practices. © 2016 by American Journal of Neuroradiology.
Implications of Biospheric Energization
NASA Astrophysics Data System (ADS)
Budding, Edd; Demircan, Osman; Gündüz, Güngör; Emin Özel, Mehmet
2016-07-01
Our physical model relating to the origin and development of lifelike processes from very simple beginnings is reviewed. This molecular ('ABC') process is compared with the chemoton model, noting the role of the autocatalytic tuning to the time-dependent source of energy. This substantiates a Darwinian character to evolution. The system evolves from very simple beginnings to a progressively more highly tuned, energized and complex responding biosphere, that grows exponentially; albeit with a very low net growth factor. Rates of growth and complexity in the evolution raise disturbing issues of inherent stability. Autocatalytic processes can include a fractal character to their development allowing recapitulative effects to be observed. This property, in allowing similarities of pattern to be recognized, can be useful in interpreting complex (lifelike) systems.
Photometric studies of Saturn's ring and eclipses of the Galilean satellites
NASA Technical Reports Server (NTRS)
Brunk, W. E.
1972-01-01
Reliable data defining the photometric function of the Saturn ring system at visual wavelengths are interpreted in terms of a simple scattering model. To facilitate the analysis, new photographic photometry of the ring has been carried out, and homogeneous measurements of the mean surface brightness are presented. The ring model adopted is a plane-parallel slab of isotropically scattering particles; the single-scattering albedo and the perpendicular optical thickness are both arbitrary. Results indicate that primary scattering is inadequate to describe the photometric properties of the ring: multiple scattering predominates for all angles of tilt with respect to the Sun and Earth. In addition, the scattering phase function of the individual particles is significantly anisotropic: they scatter preferentially toward the Sun. Photoelectric photometry of Ganymede during its eclipse by Jupiter indicates that neither a simple reflecting-layer model nor a semi-infinite homogeneous scattering model provides an adequate physical description of Jupiter's atmosphere.
Testing the structure of a hydrological model using Genetic Programming
NASA Astrophysics Data System (ADS)
Selle, Benny; Muttil, Nitin
2011-01-01
Genetic Programming is able to systematically explore many alternative model structures of different complexity from available input and response data. We hypothesised that Genetic Programming can be used to test the structure of hydrological models and to identify dominant processes in hydrological systems. To test this, Genetic Programming was used to analyse a data set from a lysimeter experiment in southeastern Australia. The lysimeter experiment was conducted to quantify the deep percolation response under surface-irrigated pasture to different soil types, watertable depths and water ponding times during surface irrigation. Using Genetic Programming, a simple model of deep percolation was recurrently evolved in multiple Genetic Programming runs. This simple and interpretable model supported the dominant process contributing to deep percolation represented in a previously published conceptual model. Thus, this study shows that Genetic Programming can be used to evaluate the structure of hydrological models and to gain insight into the dominant processes in hydrological systems.
Historical perspective on lead biokinetic models.
Rabinowitz, M
1998-01-01
A historical review of the development of biokinetic models of lead is presented. Biokinetics is interpreted narrowly to mean only physiologic processes happening within the body. Proceeding chronologically, for each epoch the measurements of lead in the body are presented along with mathematical models, in an attempt to trace the convergence of observations from two disparate fields--occupational medicine and radiologic health--into some unified models. Kehoe's early balance studies and the use of radioactive lead tracers are presented. The 1960s saw the joint application of radioactive lead techniques and simple compartmental kinetic models to establish the exchange rates and residence times of lead in body pools. The application of stable isotopes to questions of the magnitudes of respired and ingested inputs required the development of a simple three-pool model. During the 1980s, more elaborate models were developed. One of their key goals was the establishment of the dose-response relationship between exposure to lead and biologic precursors of adverse health effects. PMID:9860905
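A three-pool compartmental model of the kind described reduces to a small system of first-order ODEs, integrated here with a forward-Euler step. All rate constants and the intake value below are illustrative assumptions, not Rabinowitz's fitted values:

```python
# hypothetical first-order rate constants (1/day); illustrative only
K = {
    ("blood", "tissue"): 0.02,  ("tissue", "blood"): 0.01,
    ("blood", "bone"):   0.005, ("bone", "blood"):   0.0005,
}
K_EXCRETE = 0.03   # urinary/fecal loss from blood, 1/day
INTAKE = 50.0      # lead absorbed into blood, ug/day

def simulate(days=2000, dt=0.1):
    """Forward-Euler integration of a blood/soft-tissue/bone pool model."""
    pools = {"blood": 0.0, "tissue": 0.0, "bone": 0.0}
    for _ in range(int(days / dt)):
        flow = {p: 0.0 for p in pools}
        for (src, dst), k in K.items():
            f = k * pools[src]
            flow[src] -= f
            flow[dst] += f
        flow["blood"] += INTAKE - K_EXCRETE * pools["blood"]
        for p in pools:
            pools[p] += dt * flow[p]
    return pools

print(simulate())  # blood approaches steady state; bone keeps accumulating
```

The qualitative behavior mirrors the historical findings: blood turns over quickly, soft tissue equilibrates with it, and bone acts as the slow long-term reservoir.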
Modeling Respiratory Gas Dynamics in the Aviator’s Breathing System. Volume 2. Appendices
1994-05-01
SOFT: a synthetic synchrotron diagnostic for runaway electrons
NASA Astrophysics Data System (ADS)
Hoppe, M.; Embréus, O.; Tinguely, R. A.; Granetz, R. S.; Stahl, A.; Fülöp, T.
2018-02-01
Improved understanding of the dynamics of runaway electrons can be obtained by measurement and interpretation of their synchrotron radiation emission. Models for synchrotron radiation emitted by relativistic electrons are well established, but the question of how various geometric effects—such as magnetic field inhomogeneity and camera placement—influence the synchrotron measurements and their interpretation remains open. In this paper we address this issue by simulating synchrotron images and spectra using the new synthetic synchrotron diagnostic tool SOFT (Synchrotron-detecting Orbit Following Toolkit). We identify the key parameters influencing the synchrotron radiation spot and present scans in those parameters. Using a runaway electron distribution function obtained by Fokker-Planck simulations for parameters from an Alcator C-Mod discharge, we demonstrate that the corresponding synchrotron image is well-reproduced by SOFT simulations, and we explain how it can be understood in terms of the parameter scans. Geometric effects are shown to significantly influence the synchrotron spectrum, and we show that inherent inconsistencies in a simple emission model (i.e. not modeling detection) can lead to incorrect interpretation of the images.
Measurement and modeling of CO2 mass transfer in brine at reservoir conditions
NASA Astrophysics Data System (ADS)
Shi, Z.; Wen, B.; Hesse, M. A.; Tsotsis, T. T.; Jessen, K.
2018-03-01
In this work, we combine measurements and modeling to investigate the application of pressure-decay experiments towards delineation and interpretation of CO2 solubility, uptake and mass transfer in water/brine systems at elevated pressures of relevance to CO2 storage operations in saline aquifers. Accurate measurements and modeling of mass transfer in this context are crucial to an improved understanding of the longer-term fate of CO2 that is injected into the subsurface for storage purposes. Pressure-decay experiments are presented for CO2/water and CO2/brine systems with and without the presence of unconsolidated porous media. We demonstrate, via high-resolution numerical calculations in 2-D, that natural convection will complicate the interpretation of the experimental observations if the particle size is not sufficiently small. In such settings, we demonstrate that simple 1-D interpretations can result in an overestimation of the uptake (diffusivity) by two orders of magnitude. Furthermore, we demonstrate that high-resolution numerical calculations agree well with the experimental observations for settings where natural convection contributes substantially to the overall mass transfer process.
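A purely diffusive 1-D interpretation of a pressure-decay experiment (precisely the simplification the authors caution can badly overestimate uptake when convection operates) can be sketched with an explicit finite-difference scheme: the interface concentration tracks the gas pressure via Henry's law, and the gas cap loses whatever dissolves. All parameter values are illustrative, not the experimental conditions:

```python
def pressure_decay(n_cells=50, depth=0.1, D=2e-9, dt=50.0, steps=20000,
                   p0=1.0, gas_moles_per_p=1.0, henry=0.5):
    """Explicit finite-difference model of a pressure-decay cell: gas pressure
    falls as gas dissolves at the interface and diffuses into the liquid.
    Units are deliberately schematic (Henry constant and gas inventory are
    lumped into dimensionless factors)."""
    dx = depth / n_cells
    c = [0.0] * n_cells          # dissolved concentration profile with depth
    p = p0
    history = [p]
    for _ in range(steps):
        c[0] = henry * p         # interface in equilibrium with the gas cap
        new = c[:]
        for i in range(1, n_cells - 1):
            new[i] = c[i] + D * dt / dx**2 * (c[i+1] - 2*c[i] + c[i-1])
        new[-1] = new[-2]        # no-flux bottom boundary
        flux = D * (c[0] - c[1]) / dx      # diffusive flux into the liquid
        p -= flux * dt / gas_moles_per_p   # the gas cap loses what dissolves
        c = new
        history.append(p)
    return history

h = pressure_decay()
print(h[0], h[-1])  # pressure declines monotonically toward equilibrium
```

Fitting D to an observed decay with this 1-D model is the "simple interpretation" of the abstract; if natural convection actually drives the transfer, the fitted D absorbs the convective enhancement, which is how the two-orders-of-magnitude overestimate arises.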
Ramsey-type spectroscopy in the XUV spectral region
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pirri, A.; European Laboratory for Nonlinear Spectroscopy, Via N. Carrara 1, I-50019 Sesto Fiorentino; Sali, E.
2010-02-02
We report an experimental and theoretical investigation of Ramsey-type spectroscopy with high-order harmonic generation applied to autoionizing states of krypton. The ionization yield, detected by an ion-mass spectrometer, shows the characteristic quantum interference pattern. The behaviour of the fringe contrast was interpreted on the basis of a simple analytic model, which reproduces the experimental data without any free parameters.
Convolutional neural networks for vibrational spectroscopic data analysis.
Acquarelli, Jacopo; van Laarhoven, Twan; Gerretzen, Jan; Tran, Thanh N; Buydens, Lutgarde M C; Marchiori, Elena
2017-02-15
In this work we show that convolutional neural networks (CNNs) can be efficiently used to classify vibrational spectroscopic data and identify important spectral regions. CNNs are the current state-of-the-art in image classification and speech recognition and can learn interpretable representations of the data. These characteristics make CNNs a good candidate for reducing the need for preprocessing and for highlighting important spectral regions, both of which are crucial steps in the analysis of vibrational spectroscopic data. Chemometric analysis of vibrational spectroscopic data often relies on preprocessing methods involving baseline correction, scatter correction and noise removal, which are applied to the spectra prior to model building. Preprocessing is a critical step because even in simple problems using 'reasonable' preprocessing methods may decrease the performance of the final model. We develop a new CNN based method and provide an accompanying publicly available software. It is based on a simple CNN architecture with a single convolutional layer (a so-called shallow CNN). Our method outperforms standard classification algorithms used in chemometrics (e.g. PLS) in terms of accuracy when applied to non-preprocessed test data (86% average accuracy compared to the 62% achieved by PLS), and it achieves better performance even on preprocessed test data (96% average accuracy compared to the 89% achieved by PLS). For interpretability purposes, our method includes a procedure for finding important spectral regions, thereby facilitating qualitative interpretation of results. Copyright © 2016 Elsevier B.V. All rights reserved.
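The building block of such a shallow CNN, a single 1-D convolutional filter followed by a ReLU, also shows directly how an important spectral region can be located: it is where the feature map peaks. A toy sketch with an invented "spectrum" and a hand-written curvature filter (no training, unlike the learned filters of the paper):

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as in CNN layers)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

relu = lambda xs: [max(x, 0.0) for x in xs]

# a toy 'spectrum': flat baseline with one peak around index 30
spectrum = [0.0] * 60
for i, v in enumerate([0.2, 0.7, 1.0, 0.7, 0.2]):
    spectrum[28 + i] = v

peak_detector = [-0.5, 1.0, -0.5]   # responds to local curvature (a band)
fmap = relu(conv1d(spectrum, peak_detector))
important = max(range(len(fmap)), key=fmap.__getitem__)
print(important)  # prints 29: the window centered on the band at index 30
```

In the trained shallow CNN the filters are learned from the spectra, and the same peak-location idea underlies the procedure for highlighting important spectral regions.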
NASA Astrophysics Data System (ADS)
Ogée, J.; Barbour, M. M.; Dewar, R. C.; Wingate, L.; Bert, D.; Bosc, A.; Lambrot, C.; Stievenard, M.; Bariac, T.; Berbigier, P.; Loustau, D.
2007-12-01
High-resolution measurements of the carbon and oxygen stable isotope composition of cellulose in annual tree rings (δ13Ccellulose and δ18Ocellulose, respectively) reveal well-defined seasonal patterns that could contain valuable records of past climate and tree function. Interpreting these signals is nonetheless complex because they not only record the signature of current assimilates, but also depend on carbon allocation dynamics within the trees. Here, we will present a single-substrate model for wood growth in order to interpret qualitatively and quantitatively these seasonal isotopic signals. We will also show how this model can relate to more complex models of phloem transport and cambial activity. The model will then be tested against an isotopic intra-annual chronology collected on a Pinus pinaster tree equipped with point dendrometers and growing on a Carboeurope site where climate, soil and flux variables are also monitored. The empirical δ13Ccellulose and δ18Ocellulose signals exhibit dynamic seasonal patterns with clear differences between years, which makes it suitable for model testing. We will show how our simple model of carbohydrate reserves, forced by sap flow and eddy covariance measurements, enables us to interpret these seasonal and inter-annual patterns. Finally, we will present a sensitivity analysis of the model, showing how gas-exchange parameters, carbon and water pool sizes or wood maturation times affect these isotopic signals. Acknowledgements: this study benefited from the CarboEurope-IP Bray site facilities and was funded by the French INSU programme Eclipse, with an additional support from the INRA department EFPA.
The problem with simple lumped parameter models: Evidence from tritium mean transit times
NASA Astrophysics Data System (ADS)
Stewart, Michael; Morgenstern, Uwe; Gusyev, Maksym; Maloszewski, Piotr
2017-04-01
Simple lumped parameter models (LPMs) based on assuming homogeneity and stationarity in catchments and groundwater bodies are widely used to model and predict hydrological system outputs. However, most systems are not homogeneous or stationary, and errors resulting from disregard of the real heterogeneity and non-stationarity of such systems are not well understood and rarely quantified. As an example, mean transit times (MTTs) of streamflow are usually estimated from tracer data using simple LPMs. The MTT or transit time distribution of water in a stream reveals basic catchment properties such as water flow paths, storage and mixing. Importantly however, Kirchner (2016a) has shown that there can be large (several hundred percent) aggregation errors in MTTs inferred from seasonal cycles in conservative tracers such as chloride or stable isotopes when they are interpreted using simple LPMs (i.e. a range of gamma models or GMs). Here we show that MTTs estimated using tritium concentrations are similarly affected by aggregation errors due to heterogeneity and non-stationarity when interpreted using simple LPMs (e.g. GMs). The tritium aggregation error arises from the strong nonlinearity between tritium concentrations and MTT, whereas for seasonal tracer cycles it is due to the nonlinearity between tracer cycle amplitudes and MTT. In effect, water from young subsystems in the catchment outweighs water from old subsystems. The main difference between the aggregation errors with the different tracers is that with tritium it applies at much greater ages than it does with seasonal tracer cycles. We stress that the aggregation errors arise when simple LPMs are applied (with simple LPMs the hydrological system is assumed to be a homogeneous whole with parameters representing averages for the system).
With well-chosen compound LPMs (which are combinations of simple LPMs) on the other hand, aggregation errors are very much smaller because young and old water flows are treated separately. "Well-chosen" means that the compound LPM is based on hydrologically- and geologically-validated information, and the choice can be assisted by matching simulations to time series of tritium measurements. References: Kirchner, J.W. (2016a): Aggregation in environmental systems - Part 1: Seasonal tracer cycles quantify young water fractions, but not mean transit times, in spatially heterogeneous catchments. Hydrol. Earth Syst. Sci. 20, 279-297. Stewart, M.K., Morgenstern, U., Gusyev, M.A., Maloszewski, P. 2016: Aggregation effects on tritium-based mean transit times and young water fractions in spatially heterogeneous catchments and groundwater systems, and implications for past and future applications of tritium. Submitted to Hydrol. Earth Syst. Sci., 10 October 2016, doi:10.5194/hess-2016-532.
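The aggregation bias can be illustrated with a toy calculation of our own (not the authors'): mix equal flows from two piston-flow subsystems and invert the mixed tritium concentration with a single lumped piston-flow model.

```python
import math

# Toy calculation (ours, not the authors'): tritium decays with transit
# time, so mixing water from a young and an old piston-flow subsystem and
# inverting with a single lumped piston-flow model biases the inferred
# mean transit time (MTT) young -- the aggregation error.

LAMBDA = math.log(2) / 12.32              # tritium decay constant, 1/yr

def piston_conc(mtt, c_input=1.0):
    """Output concentration of a piston-flow system with the given MTT."""
    return c_input * math.exp(-LAMBDA * mtt)

t_young, t_old = 5.0, 100.0               # hypothetical subsystem MTTs, yr
true_mean = 0.5 * (t_young + t_old)       # flow-weighted mean: 52.5 yr
c_mix = 0.5 * (piston_conc(t_young) + piston_conc(t_old))
inferred = -math.log(c_mix) / LAMBDA      # simple-LPM inversion of the mix
print(true_mean, round(inferred, 1))
```

Because exp(-λT) is convex, the young subsystem dominates the mixed concentration, and the inferred MTT (about 17 yr here) falls far below the true 52.5 yr mean, mirroring the aggregation error described above.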
Byass, Peter; Huong, Dao Lan; Minh, Hoang Van
2003-01-01
Verbal autopsy (VA) has become an important tool in the past 20 years for determining cause of death in communities where there is no routine registration. In many cases, expert physicians have been used to interpret the VA findings and so assign individual causes of death. However, this is time-consuming and not always repeatable. Other approaches such as algorithms and neural networks have been developed in some settings. This paper aims to develop a method that is simple, reliable and consistent, which could represent an advance in VA interpretation. This paper describes the development of a Bayesian probability model for VA interpretation as an attempt to find a better approach. This methodology and a preliminary implementation are described, with an evaluation based on VA material from rural Vietnam. The new model was tested against a series of 189 VA interviews from a rural community in Vietnam. Using this very basic model, over 70% of individual causes of death corresponded with those determined by two physicians, increasing to over 80% if those cases ascribed to old age or as being indeterminate by the physicians were excluded. Although there is a clear need to improve the preliminary model and to test more extensively with larger and more varied datasets, these preliminary results suggest that there may be good potential in this probabilistic approach.
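A hedged sketch of the general idea behind such a Bayesian interpretation step (the causes, signs, priors, and likelihoods below are invented placeholders, not the paper's data):

```python
# Hypothetical sketch of Bayesian VA interpretation (our invented numbers,
# not the paper's data): given prior cause-of-death probabilities and
# per-cause probabilities of each reported sign, compute the posterior
# over causes, assuming signs are conditionally independent given the cause.

def posterior(priors, likelihoods, signs_present):
    scores = {}
    for cause, prior in priors.items():
        p = prior
        for sign in signs_present:
            p *= likelihoods[cause].get(sign, 0.01)  # small floor for unseen signs
        scores[cause] = p
    total = sum(scores.values())
    return {cause: p / total for cause, p in scores.items()}

priors = {"malaria": 0.3, "tuberculosis": 0.2, "other": 0.5}
likelihoods = {
    "malaria":      {"fever": 0.9, "cough": 0.2},
    "tuberculosis": {"fever": 0.5, "cough": 0.8},
    "other":        {"fever": 0.2, "cough": 0.2},
}
post = posterior(priors, likelihoods, ["fever", "cough"])
best = max(post, key=post.get)
print(best)
```

The assigned cause is simply the posterior mode; repeatability follows because the same interview responses always yield the same posterior.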
Interpretation of lunar heat flow data. [for estimating bulk uranium abundance
NASA Technical Reports Server (NTRS)
Conel, J. E.; Morton, J. B.
1975-01-01
Lunar heat flow observations at the Apollo 15 and 17 sites can be interpreted to imply bulk U concentrations for the moon of 5 to 8 times those of normal chondrites and 2 to 4 times terrestrial values inferred from the earth's heat flow and the assumption of thermal steady state between surface heat flow and heat production. A simple model of near-surface structure that takes into account the large difference in (highly insulating) regolith thickness between mare and highland provinces is considered. This model predicts atypically high local values of heat flow near the margins of mare regions - possibly a factor of 10 or so higher than the global average. A test of the proposed model using multifrequency microwave techniques appears possible wherein heat flow traverse measurements are made across mare-highland contacts. The theoretical considerations discussed here urge caution in attributing global significance to point heat-flow measurements on the moon.
Glacial morphology and depositional sequences of the Antarctic Continental Shelf
ten Brink, Uri S.; Schneider, Christopher
1995-01-01
Proposes a simple model for the unusual depositional sequences and morphology of the Antarctic continental shelf. It considers the regional stratal geometry and the reversed morphology to be principally the results of time-integrated effects of glacial erosion and sedimentation related to the location of the ice grounding line. The model offers several guidelines for stratigraphic interpretation of the Antarctic shelf and a Northern Hemisphere shelf, both of which were subject to many glacial advances and retreats. -Authors
Transitivity vs. intransitivity in decision making process - an example in quantum game theory
NASA Astrophysics Data System (ADS)
Makowski, Marcin
2009-06-01
We compare two different ways of quantum modification in a simple sequential game called Cat's Dilemma in the context of the debate on intransitive and transitive preferences. This kind of analysis can have essential meaning for research on artificial intelligence (some possibilities are discussed). Nature has both transitive and intransitive properties and perhaps quantum models will be more able to capture this dualism than the classical models. We also present an electoral interpretation of the game.
Nonzero solutions of nonlinear integral equations modeling infectious disease
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, L.R.; Leggett, R.W.
1982-01-01
Sufficient conditions to ensure the existence of periodic solutions to the nonlinear integral equation x(t) = ∫_{t-τ}^{t} f(s, x(s)) ds are given in terms of simple product and product integral inequalities. The equation can be interpreted as a model for the spread of infectious diseases (e.g., gonorrhea or any of the rhinoviruses) if x(t) is the proportion of infectives at time t and f(t,x(t)) is the proportion of new infectives per unit time.
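A numerical illustration of one special case (our sketch, not the paper's fixed-point argument): for an autonomous logistic kernel, constant solutions of the integral equation satisfy a simple algebraic relation, which Picard iteration locates.

```python
# Our numerical sketch (not the paper's analysis): with the autonomous
# kernel f(s, x) = a*x*(1 - x), a constant solution of
#     x(t) = integral from t - tau to t of f(s, x(s)) ds
# must satisfy x = tau*a*x*(1 - x). Picard iteration converges to the
# nonzero root x* = 1 - 1/(a*tau) for 1 < a*tau < 3 (from suitable x0).

def picard_constant(a, tau, x0=0.2, steps=50):
    """Iterate the constant-solution map x -> tau*a*x*(1 - x)."""
    x = x0
    for _ in range(steps):
        x = tau * a * x * (1.0 - x)
    return x

# With a*tau = 2 the endemic equilibrium is x* = 0.5:
# half the population infective at all times.
print(picard_constant(a=2.0, tau=1.0))
```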
NASA Astrophysics Data System (ADS)
Ghosh, Dipak; Sarkar, Sharmila; Sen, Sanjib; Roy, Jaya
1995-06-01
In this paper the behavior of factorial moments with rapidity window size, which is usually explained in terms of "intermittency," has been interpreted by simple quantum statistical properties of the emitting system using the concept of the "modified two-source model" as recently proposed by Ghosh and Sarkar [Phys. Lett. B 278, 465 (1992)]. The analysis has been performed using our own data of 16Ag/Br and 24Ag/Br interactions at a few tens of GeV energy regime.
NASA Astrophysics Data System (ADS)
Orr, Matthew; Hopkins, Philip F.
2018-06-01
I will present a simple model of non-equilibrium star formation and its relation to the scatter in the Kennicutt-Schmidt relation and large-scale star formation efficiencies in galaxies. I will highlight the importance of a hierarchy of timescales, between the galaxy dynamical time, local free-fall time, the delay time of stellar feedback, and temporal overlap in observables, in setting the scatter of the observed star formation rates for a given gas mass. Further, I will talk about how these timescales (and their associated duty-cycles of star formation) influence interpretations of the large-scale star formation efficiency in reasonably star-forming galaxies. Lastly, the connection with galactic centers and out-of-equilibrium feedback conditions will be mentioned.
Ink dating part II: Interpretation of results in a legal perspective.
Koenig, Agnès; Weyermann, Céline
2018-01-01
The development of an ink dating method requires an important investment of resources in order to step from the monitoring of ink ageing on paper to the determination of the actual age of a questioned ink entry. This article aimed at developing and evaluating the potential of three interpretation models to date ink entries in a legal perspective: (1) the threshold model comparing analytical results to tabulated values in order to determine the maximal possible age of an ink entry, (2) the trend tests focusing on the "ageing status" of an ink entry, and (3) the likelihood ratio calculation comparing the probabilities to observe the results under at least two alternative hypotheses. This is the first report showing ink dating interpretation results on a ballpoint pen ink reference population. In the first part of this paper three ageing parameters were selected as promising from the population of 25 ink entries aged for 4 to 304 days: the quantity of phenoxyethanol (PE), the difference between the PE quantities contained in a naturally aged sample and an artificially aged sample (R_NORM) and the solvent loss ratio (R%). In the current part, each model was tested using the three selected ageing parameters. Results showed that threshold definition remains a simple model easily applicable in practice, but that the risk of false positives cannot be completely avoided without reducing significantly the feasibility of the ink dating approaches. The trend tests from the literature showed unreliable results and an alternative had to be developed yielding encouraging results. The likelihood ratio calculation introduced a degree of certainty to the ink dating conclusion in comparison to the threshold approach. The proposed model remains quite simple to apply in practice, but should be further developed in order to yield reliable results in practice. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
Inhomogeneity and velocity fields effects on scattering polarization in solar prominences
NASA Astrophysics Data System (ADS)
Milić, I.; Faurobert, M.
2015-10-01
One of the methods for diagnosing vector magnetic fields in solar prominences is the so-called "inversion" of observed polarized spectral lines. This inversion usually assumes a fairly simple generative model, and in this contribution we aim to study the possible systematic errors that are introduced by this assumption. Using a two-dimensional toy model of a prominence, we first demonstrate the importance of multidimensional radiative transfer and horizontal inhomogeneities. These are able to induce a significant level of polarization in Stokes U without the need for a magnetic field. We then compute the emergent Stokes spectrum from a prominence pervaded by a vector magnetic field and use a simple, one-dimensional model to interpret these synthetic observations. We find that inferred values for the magnetic field vector generally differ from the original ones. Most importantly, the magnetic field might seem more inclined than it really is.
Simple mental addition in children with and without mild mental retardation.
Janssen, R; De Boeck, P; Viaene, M; Vallaeys, L
1999-11-01
The speeded performance on simple mental addition problems of 6- and 7-year-old children with and without mild mental retardation is modeled from a person perspective and an item perspective. On the person side, it was found that a single cognitive dimension spanned the performance differences between the two ability groups. However, a discontinuity, or "jump," was observed in the performance of the normal ability group on the easier items. On the item side, the addition problems were almost perfectly ordered in difficulty according to their problem size. Differences in difficulty were explained by factors related to the difficulty of executing nonretrieval strategies. All findings were interpreted within the framework of Siegler's (e.g., R. S. Siegler & C. Shipley, 1995) model of children's strategy choices in arithmetic. Models from item response theory were used to test the hypotheses. Copyright 1999 Academic Press.
Steven H. Ackers; Raymond J. Davis; Keith A. Olsen; Katie M. Dugger
2015-01-01
Wildlife habitat mapping has evolved at a rapid pace over the last few decades. Beginning with simple, often subjective, hand-drawn maps, habitat mapping now involves complex species distribution models (SDMs) using mapped predictor variables derived from remotely sensed data. For species that inhabit large geographic areas, remote sensing technology is often...
MEG evidence that the central auditory system simultaneously encodes multiple temporal cues.
Simpson, Michael I G; Barnes, Gareth R; Johnson, Sam R; Hillebrand, Arjan; Singh, Krish D; Green, Gary G R
2009-09-01
Speech contains complex amplitude modulations that have envelopes with multiple temporal cues. The processing of these complex envelopes is not well explained by the classical models of amplitude modulation processing. This may be because the evidence for the models typically comes from the use of simple sinusoidal amplitude modulations. In this study we used magnetoencephalography (MEG) to generate source space current estimates of the steady-state responses to simple one-component amplitude modulations and to a two-component amplitude modulation. A two-component modulation introduces the simplest form of modulation complexity into the waveform; the summation of the two-modulation rates introduces a beat-like modulation at the difference frequency between the two modulation rates. We compared the cortical representations of responses to the one-component and two-component modulations. In particular, we show that the temporal complexity in the two-component amplitude modulation stimuli was preserved at the cortical level. The method of stimulus normalization that we used also allows us to interpret these results as evidence that the important feature in sound modulations is the relative depth of one modulation rate with respect to another, rather than the absolute carrier-to-sideband modulation depth. More generally, this may be interpreted as evidence that modulation detection accurately preserves a representation of the modulation envelope. This is an important observation with respect to models of modulation processing, as it suggests that models may need a dynamic processing step to effectively model non-stationary stimuli. We suggest that the classic modulation filterbank model needs to be modified to take these findings into account.
Relative arrival-time upper-mantle tomography and the elusive background mean
NASA Astrophysics Data System (ADS)
Bastow, Ian D.
2012-08-01
The interpretation of seismic tomographic images of upper-mantle seismic wave speed structure is often a matter of considerable debate because the observations can usually be explained by a range of hypotheses, including variable temperature, composition, anisotropy, and the presence of partial melt. An additional problem, often overlooked in tomographic studies using relative as opposed to absolute arrival-times, is the issue of the resulting velocity model's zero mean. In shield areas, for example, relative arrival-time analysis strips off a background mean velocity structure that is markedly fast compared to the global average. Conversely, in active areas, the background mean is often markedly slow compared to the global average. Appreciation of this issue is vital when interpreting seismic tomographic images: 'high' and 'low' velocity anomalies should not necessarily be interpreted, respectively, as 'fast' and 'slow' compared to 'normal mantle'. This issue has been discussed in the seismological literature in detail over the years, yet subsequent tomography studies have still fallen into the trap of mis-interpreting their velocity models. I highlight here some recent examples of this and provide a simple strategy to address the problem using constraints from a recent global tomographic model, and insights from catalogues of absolute traveltime anomalies. Consultation of such absolute measures of seismic wave speed should be routine during regional tomographic studies, if only for the benefit of the broader Earth Science community, who readily follow the red = hot and slow, blue = cold and fast rule of thumb when interpreting the images for themselves.
NASA Technical Reports Server (NTRS)
Cassen, Pat
1991-01-01
Attempts to derive a theoretical framework for the interpretation of the meteoritic record have been frustrated by our incomplete understanding of the fundamental processes that controlled the evolution of the primitive solar nebula. Nevertheless, it is possible to develop qualitative models of the nebula that illuminate its dynamic character, as well as the roles of some key parameters. These models draw on the growing body of observational data on the properties of disks around young, solar-type stars, and are constructed by applying the results of known solutions of protostellar collapse problems; making simple assumptions about the radial variations of nebular variables; and imposing the integral constraints demanded by conservation of mass, angular momentum, and energy. The models so constructed are heuristic, rather than predictive; they are intended to help us think about the nebula in realistic ways, but they cannot provide a definitive description of conditions in the nebula.
ERIC Educational Resources Information Center
Brandt, Silke; Lieven, Elena; Tomasello, Michael
2016-01-01
Children and adults follow cues such as case marking and word order in their assignment of semantic roles in simple transitives (e.g., "the dog chased the cat"). It has been suggested that the same cues are used for the interpretation of complex sentences, such as transitive relative clauses (RCs) (e.g., "that's the dog that chased…
Rudall, Paula J.; Bateman, Richard M.
2010-01-01
Recent phylogenetic reconstructions suggest that axially condensed flower-like structures evolved iteratively in seed plants from either simple or compound strobili. The simple-strobilus model of flower evolution, widely applied to the angiosperm flower, interprets the inflorescence as a compound strobilus. The conifer cone and the gnetalean ‘flower’ are commonly interpreted as having evolved from a compound strobilus by extreme condensation and (at least in the case of male conifer cones) elimination of some structures present in the presumed ancestral compound strobilus. These two hypotheses have profoundly different implications for reconstructing the evolution of developmental genetic mechanisms in seed plants. If different flower-like structures evolved independently, there should intuitively be little commonality of patterning genes. However, reproductive units of some early-divergent angiosperms, including the extant genus Trithuria (Hydatellaceae) and the extinct genus Archaefructus (Archaefructaceae), apparently combine features considered typical of flowers and inflorescences. We re-evaluate several disparate strands of comparative data to explore whether flower-like structures could have arisen by co-option of flower-expressed patterning genes into independently evolved condensed inflorescences, or vice versa. We discuss the evolution of the inflorescence in both gymnosperms and angiosperms, emphasising the roles of heterotopy in dictating gender expression and heterochrony in permitting internodal compression. PMID:20047867
NASA Astrophysics Data System (ADS)
Asiedu, Mercy Nyamewaa; Simhal, Anish; Lam, Christopher T.; Mueller, Jenna; Chaudhary, Usamah; Schmitt, John W.; Sapiro, Guillermo; Ramanujam, Nimmi
2018-02-01
The World Health Organization recommends visual inspection with acetic acid (VIA) and/or Lugol's Iodine (VILI) for cervical cancer screening in low-resource settings. Human interpretation of diagnostic indicators for visual inspection is qualitative, subjective, and has high inter-observer discordance, which could lead both to adverse outcomes for the patient and to unnecessary follow-ups. In this work, we present a simple method for automatic feature extraction and classification for Lugol's Iodine cervigrams acquired with a low-cost, miniature, digital colposcope. Algorithms to preprocess expert-physician-labelled cervigrams and to extract simple but powerful color-based features are introduced. The features are used to train a support vector machine model to classify cervigrams based on expert physician labels. The selected framework achieved a sensitivity, specificity, and accuracy of 89.2%, 66.7% and 80.6%, respectively, against the majority diagnosis of the expert physicians in discriminating cervical intraepithelial neoplasia (CIN+) relative to normal tissues. The proposed classifier also achieved an area under the curve of 84 when trained with the majority diagnosis of the expert physicians. The results suggest that utilizing simple color-based features may enable unbiased automation of VILI cervigrams, opening the door to a full system of low-cost data acquisition complemented with automatic interpretation.
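As a hedged illustration of the kind of color-based feature involved (the pixel values, channel weighting, and threshold below are invented; the actual study trained an SVM on physician-labelled cervigrams):

```python
# Hedged sketch of a color-based feature of the kind described (invented
# pixel values and threshold; the study itself trained an SVM on expert
# physician labels). With VILI, normal glycogen-rich tissue stains dark
# brown while suspicious lesions remain pale yellow, so mean brightness
# in the red and green channels is a crude discriminant.

def mean_channel(pixels, channel):
    """pixels: list of (r, g, b) tuples; channel: 0=r, 1=g, 2=b."""
    return sum(p[channel] for p in pixels) / len(pixels)

def classify(pixels, threshold=150.0):
    """Toy stand-in for the trained classifier: flag pale regions."""
    yellowness = 0.5 * (mean_channel(pixels, 0) + mean_channel(pixels, 1))
    return "suspicious" if yellowness > threshold else "normal"

pale_lesion = [(230, 210, 90), (220, 200, 80)]   # stays yellow under iodine
dark_normal = [(90, 60, 30), (100, 70, 40)]      # stains dark brown
print(classify(pale_lesion), classify(dark_normal))
```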
The use of models to predict potential contamination aboard orbital vehicles
NASA Technical Reports Server (NTRS)
Boraas, Martin E.; Seale, Dianne B.
1989-01-01
A model of fungal growth on air-exposed, nonnutritive solid surfaces, developed for utilization aboard orbital vehicles is presented. A unique feature of this testable model is that the development of a fungal mycelium can facilitate its own growth by condensation of water vapor from its environment directly onto fungal hyphae. The fungal growth rate is limited by the rate of supply of volatile nutrients and fungal biomass is limited by either the supply of nonvolatile nutrients or by metabolic loss processes. The model discussed is structurally simple, but its dynamics can be quite complex. Biofilm accumulation can vary from a simple linear increase to sustained exponential growth, depending on the values of the environmental variable and model parameters. The results of the model are consistent with data from aquatic biofilm studies, insofar as the two types of systems are comparable. It is shown that the model presented is experimentally testable and provides a platform for the interpretation of observational data that may be directly relevant to the question of growth of organisms aboard the proposed Space Station.
Calculation of density of states for modeling photoemission using method of moments
NASA Astrophysics Data System (ADS)
Finkenstadt, Daniel; Lambrakos, Samuel G.; Jensen, Kevin L.; Shabaev, Andrew; Moody, Nathan A.
2017-09-01
Modeling photoemission using the Moments Approach (akin to Spicer's "Three Step Model") is often presumed to follow simple models for the prediction of two critical properties of photocathodes: the yield or "Quantum Efficiency" (QE), and the intrinsic spreading of the beam or "emittance" ɛ_n,rms. The simple models, however, tend to obscure properties of electrons in materials, the understanding of which is necessary for a proper prediction of a semiconductor or metal's QE and ɛ_n,rms. The underlying density-of-states structure is characterized by localized resonance features as well as a universal trend at high energy. Presented in this study is a prototype analysis concerning the density of states (DOS) factor D(E) for copper in bulk to replace the simple three-dimensional form D(E) = (m/π²ℏ³)√(2mE) currently used in the Moments approach. This analysis demonstrates that excited state spectra of atoms, molecules and solids based on density-functional theory can be adapted as useful information for practical applications, as well as providing theoretical interpretation of density-of-states structure, e.g., qualitatively good descriptions of optical transitions in matter, in addition to DFT's utility in providing the optical constants and material parameters also required in the Moments Approach.
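For reference, the "simple three-dimensional form" is the free-electron density of states; a short check of its √E scaling (our sketch; constants in SI units):

```python
import math

# The simple three-dimensional form is the free-electron density of
# states, D(E) = (m / (pi^2 * hbar^3)) * sqrt(2*m*E). This check (ours)
# verifies only its sqrt(E) scaling -- the featureless trend that a
# DFT-derived D(E) for a real metal decorates with resonance structure.

M_E = 9.1093837015e-31        # electron mass, kg
HBAR = 1.054571817e-34        # reduced Planck constant, J*s

def dos_free_electron(energy_joules):
    """States per joule per cubic metre for a free-electron gas."""
    return (M_E / (math.pi ** 2 * HBAR ** 3)) * math.sqrt(2.0 * M_E * energy_joules)

E1 = 1.602176634e-19          # 1 eV in joules
print(dos_free_electron(4.0 * E1) / dos_free_electron(E1))  # quadrupling E should double D(E)
```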
Modeling Electrically Evoked Otoacoustic Emissions
NASA Astrophysics Data System (ADS)
Grosh, K.; Deo, N.; Parthasarathi, A. A.; Nuttall, A. L.; Zheng, J. F.; Ren, T. Y.
2003-02-01
Electrical evoked otoacoustic emissions (EEOAE) are used to investigate in vivo cochlear electromechanical function. Round window electrical stimulation gives rise to a broad frequency EEOAE response, from 100 Hz or below to 40 kHz in guinea pigs. Placing bipolar electrodes very close to the basilar membrane (in the scala vestibuli and scala tympani) gives rise to a much narrower frequency range of EEOAE, limited to around 20 kHz when the electrodes are placed near the 18 kHz best frequency place. Model predictions using a three dimensional fluid model in conjunction with a simple model for outer hair cell (OHC) activity are used to interpret the experimental results. The model is solved using a 2.5D finite-element formulation. Predictions show that the high-frequency limit of the excitation is determined by the spatial extent of the current stimulus (also called the current spread). The global peaks in the EEOAE spectra are interpreted as constructive interference between electrically evoked backward traveling waves and forward traveling waves reflected from the stapes. Steady-state response predictions of the model are presented.
Modelling morphology evolution during solidification of IPP in processing conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pantani, R. (rpantani@unisa.it); De Santis, F. (fedesantis@unisa.it); Speranza, V. (vsperanza@unisa.it)
During polymer processing, crystallization takes place during or soon after flow. In most cases, the flow field dramatically influences both the crystallization kinetics and the crystal morphology. In turn, crystallinity and morphology affect product properties. Consequently, in the last decade, researchers have tried to identify the main parameters determining crystallinity and morphology evolution during solidification in processing conditions. In this work, we present an approach to model flow-induced crystallization (FIC) with the aim of predicting the morphology after processing. The approach is based on: interpretation of the FIC as the effect of molecular stretch on the thermodynamic crystallization temperature; modeling the molecular stretch evolution by means of a model that is simple and easy to implement in polymer processing simulation codes; identification of the effect of flow on nucleation density and spherulite growth rate by means of simple experiments; and determination of the conditions under which fibers form instead of spherulites. Model predictions reproduce most of the features of the final morphology observed in the samples after solidification.
On Global Optimal Sailplane Flight Strategy
NASA Technical Reports Server (NTRS)
Sander, G. J.; Litt, F. X.
1979-01-01
The derivation and interpretation of the necessary conditions that a sailplane cross-country flight has to satisfy to achieve the maximum global flight speed are considered. Simple rules are obtained for two specific meteorological models. The first one uses concentrated lifts of various strengths and unequal distances. The second one takes into account finite, nonuniform space amplitudes for the lifts and allows, therefore, for dolphin-style flight. In both models, altitude constraints consisting of upper and lower limits are shown to be essential to model realistic problems. Numerical examples illustrate the difference with existing techniques based on local optimality conditions.
Atomic Dynamics in Simple Liquid: de Gennes Narrowing Revisited
NASA Astrophysics Data System (ADS)
Wu, Bin; Iwashita, Takuya; Egami, Takeshi
2018-03-01
The de Gennes narrowing phenomenon is frequently observed by neutron or x-ray scattering measurements of the dynamics of complex systems, such as liquids, proteins, colloids, and polymers. The characteristic slowing down of dynamics in the vicinity of the maximum of the total scattering intensity is commonly attributed to enhanced cooperativity. In this Letter, we present an alternative view on its origin through the examination of the time-dependent pair correlation function, the van Hove correlation function, for a model liquid in two, three, and four dimensions. We find that the relaxation time increases monotonically with distance and the dependence on distance varies with dimension. We propose a heuristic explanation of this dependence based on a simple geometrical model. This finding sheds new light on the interpretation of the de Gennes narrowing phenomenon and the α-relaxation time.
Deciphering mRNA Sequence Determinants of Protein Production Rate
NASA Astrophysics Data System (ADS)
Szavits-Nossan, Juraj; Ciandrini, Luca; Romano, M. Carmen
2018-03-01
One of the greatest challenges in biophysical models of translation is to identify coding sequence features that affect the rate of translation and therefore the overall protein production in the cell. We propose an analytic method to solve a translation model based on the inhomogeneous totally asymmetric simple exclusion process, which allows us to unveil simple design principles of nucleotide sequences determining protein production rates. Our solution shows an excellent agreement when compared to numerical genome-wide simulations of S. cerevisiae transcript sequences and predicts that the first 10 codons, which is the ribosome footprint length on the mRNA, together with the value of the initiation rate, are the main determinants of protein production rate under physiological conditions. Finally, we interpret the obtained analytic results based on the evolutionary role of the codons' choice for regulating translation rates and ribosome densities.
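A minimal stochastic sketch of the inhomogeneous TASEP picture of translation (our toy Monte Carlo, not the authors' analytic solution; the hopping rates, initiation rate, and one-site ribosome footprint are simplifying assumptions):

```python
import random

# Toy Monte Carlo sketch of the inhomogeneous totally asymmetric simple
# exclusion process (TASEP) picture of translation -- not the authors'
# analytic solution. Ribosomes initiate at rate alpha, hop to the next
# codon at codon-specific rates if it is empty, and release a protein at
# the last codon. A one-site footprint and the rates are simplifications.

def simulate_tasep(rates, alpha, steps, seed=0):
    rng = random.Random(seed)
    L = len(rates)
    lattice = [0] * L                       # 1 = ribosome on codon i
    completed = 0
    for _ in range(steps):
        for i in reversed(range(L)):        # sweep from stop codon backward
            if lattice[i] and rng.random() < rates[i]:
                if i == L - 1:
                    lattice[i] = 0
                    completed += 1          # one protein finished
                elif not lattice[i + 1]:
                    lattice[i], lattice[i + 1] = 0, 1
        if not lattice[0] and rng.random() < alpha:
            lattice[0] = 1                  # initiation at the first codon
    return completed / steps                # protein production rate

slow_start = [0.2] * 10 + [0.9] * 40        # slow first codons throttle output
fast_start = [0.9] * 50
print(simulate_tasep(slow_start, alpha=0.5, steps=5000))
print(simulate_tasep(fast_start, alpha=0.5, steps=5000))
```

Consistent with the abstract's conclusion, slowing only the first ~10 codons caps the production rate well below that of the uniformly fast sequence, regardless of the downstream codons.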
Semen quality detection using time of flight and acoustic wave sensors
NASA Astrophysics Data System (ADS)
Newton, M. I.; Evans, C. R.; Simons, J. J.; Hughes, D. C.
2007-04-01
The authors report a real-time technique for assessing the number of motile sperm in a semen sample. The time of flight technique uses a flow channel with detection at the end of the channel using quartz crystal microbalances. Data presented suggest that a simple rigid mass model may be used in interpreting the change in resonant frequency using an effective mass for the sperm.
Systematic properties of proton single-particle energies
NASA Astrophysics Data System (ADS)
Mairle, G.
1985-03-01
Single-particle energies of protons in the 1f7/2, 2p3/2, 2p1/2, 1f5/2 and 1g9/2 shells of medium-weight nuclei were determined from proton pickup and stripping experiments. The data reveal a simple linear dependence on mass number A and isospin To of the target nuclei which can be interpreted in terms of an extended Bansal-French model.
How to Detect the Location and Time of a Covert Chemical Attack: A Bayesian Approach
2009-12-01
Inverse Problems, Design and Optimization Symposium 2004, Rio de Janeiro, Brazil. Chan, R., and Yee, E. (1997). A simple model for the probability...sensor interpretation applications and has been successfully applied, for example, to estimate the source strength of pollutant releases in multi...coagulation, and second-order pollutant diffusion in sorption-desorption, are not linear. Furthermore, wide uncertainty bounds exist for several of
Cassels, Susan; Pearson, Cynthia R; Kurth, Ann E; Martin, Diane P; Simoni, Jane M; Matediana, Eduardo; Gloyd, Stephen
2009-07-01
Mathematical models are increasingly used in social and behavioral studies of HIV transmission; however, model structures must be chosen carefully to best answer the question at hand and conclusions must be interpreted cautiously. In Pearson et al. (2007), we presented a simple analytically tractable deterministic model to estimate the number of secondary HIV infections stemming from a population of HIV-positive Mozambicans and to evaluate how the estimate would change under different treatment and behavioral scenarios. In a subsequent application of the model with a different data set, we observed that the model produced an unduly conservative estimate of the number of new HIV-1 infections. In this brief report, our first aim is to describe a revision of the model to correct for this underestimation. Specifically, we recommend adjusting the population-level sexually transmitted infection (STI) parameters to be applicable to the individual-level model specification by accounting for the proportion of individuals uninfected with an STI. In applying the revised model to the original data, we noted an estimated 40 infections/1000 HIV-positive persons per year (versus the original 23 infections/1000 HIV-positive persons per year). In addition, the revised model estimated that highly active antiretroviral therapy (HAART) along with syphilis and herpes simplex virus type 2 (HSV-2) treatments combined could reduce HIV-1 transmission by 72% (versus 86% according to the original model). The second aim of this report is to discuss the advantages and disadvantages of mathematical models in the field and the implications of model interpretation. We caution that simple models should be used for heuristic purposes only. 
Since these models do not account for heterogeneity in the population and significantly simplify HIV transmission dynamics, they should be used to describe general characteristics of the epidemic and demonstrate the importance or sensitivity of parameters in the model.
NASA Technical Reports Server (NTRS)
Godlewski, M. P.; Brandhorst, H. W., Jr.; Lindholm, F. A.; Sah, C. T.
1976-01-01
An experimental method is presented that can be used to interpret the relative roles of bandgap narrowing and recombination processes in the diffused layer. This method involves measuring the device time constant by open-circuit voltage decay and the base region diffusion length by X-ray excitation. A unique illuminated diode method is used to obtain the diode saturation current. These data are interpreted using a simple model to determine individually the minority carrier lifetime and the excess charge. These parameters are then used to infer the relative importance of bandgap narrowing and recombination processes in the diffused layer.
MX Siting Investigation. Gravity Survey - Southern Snake Valley (Ferguson Desert), Utah.
1980-03-28
Topographic Center (DMAHTC), headquartered in Cheyenne, Wyoming. DMAHTC reduces the data to Simple Bouguer Anomaly (see Section A1.4, Appendix A1.0)...Valley, Utah 3; 3 Complete Bouguer Anomaly Contours; 4 Interpreted Gravity Profile SE-3,4; 5 Interpreted Gravity Profile SE...observations and reduced them to Simple Bouguer Anomalies (SBA) for each station as described in Appendix A1.0. Up to three levels of terrain corrections were
NASA Astrophysics Data System (ADS)
Jarzyna, Jadwiga A.; Krakowska, Paulina I.; Puskarczyk, Edyta; Wawrzyniak-Guz, Kamila; Zych, Marcin
2018-03-01
More than 70 rock samples from so-called sweet spots, i.e. the Ordovician Sa Formation and the Silurian Ja Member of the Pa Formation from the Baltic Basin (North Poland), were examined in the laboratory to determine bulk and grain density, total and effective/dynamic porosity, absolute permeability, pore diameter size, total surface area, and natural radioactivity. Results of pyrolysis, i.e. TOC (Total Organic Carbon) together with S1 and S2, parameters used to determine the hydrocarbon generation potential of rocks, were also considered. Elemental composition from chemical analyses and mineral composition from XRD measurements were also included. SCAL analysis, NMR experiments, and Pressure Decay Permeability measurements, together with water immersion porosimetry and the adsorption/desorption of nitrogen vapors method, were carried out along with comprehensive interpretation of the outcomes. Simple and multiple linear statistical regressions were used to recognize mutual relationships between parameters. The observed correlations, and in some cases the large dispersion of data and discrepancies in property values obtained from different methods, were the basis for building a shale gas rock model for well logging interpretation. The model was verified by the results of Monte Carlo modelling of the spectral neutron-gamma log response in comparison with GEM log results.
Ku, Hyung-Keun; Lim, Hyuk-Min; Oh, Kyong-Hwa; Yang, Hyo-Jin; Jeong, Ji-Seon; Kim, Sook-Kyung
2013-03-01
The Bradford assay is a simple method for protein quantitation, but variation in the results between proteins is a matter of concern. In this study, we compared and normalized quantitative values from two models for protein quantitation, where the residues in the protein that bind to anionic Coomassie Brilliant Blue G-250 comprise either Arg and Lys (Method 1, M1) or Arg, Lys, and His (Method 2, M2). Use of the M2 model yielded much more consistent quantitation values compared with use of the M1 model, which exhibited marked overestimations against protein standards.
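The two counting schemes described above reduce to tallying dye-binding residues in a sequence. The sketch below is an illustrative implementation of that counting step only (the sequence is an arbitrary toy example, not from the study):

```python
def bradford_binding_sites(seq, include_his=False):
    """Count residues assumed to bind Coomassie Brilliant Blue G-250.
    M1 counts Arg + Lys; M2 additionally counts His, following the
    two models compared in the abstract above (sketch only)."""
    targets = set("RK") | ({"H"} if include_his else set())
    return sum(1 for aa in seq.upper() if aa in targets)

seq = "MKRHGLDNYRGYSLGNWVCAAK"                       # arbitrary toy sequence
m1 = bradford_binding_sites(seq)                     # Arg + Lys only
m2 = bradford_binding_sites(seq, include_his=True)   # Arg + Lys + His
```

For this toy sequence M2 counts one more site than M1 (the single His), which is the kind of systematic difference the normalization in the study addresses.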
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cimpoesu, Dorin, E-mail: cdorin@uaic.ro; Stoleriu, Laurentiu; Stancu, Alexandru
2013-12-14
We propose a generalized Stoner-Wohlfarth (SW) type model to describe various experimentally observed angular dependencies of the switching field in non-single-domain magnetic particles. Because nonuniform magnetic states are generally characterized by complicated spin configurations with no simple analytical description, we maintain the macrospin hypothesis and phenomenologically include the effects of nonuniformities only in the anisotropy energy, preserving as much as possible the elegance of the SW model, the concept of the critical curve, and its geometric interpretation. We compare the results obtained with our model with full micromagnetic simulations in order to evaluate the performance and limits of our approach.
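The uniform single-domain baseline that this generalized model extends is the classic SW astroid, whose reduced switching field has a closed form. A minimal sketch (the standard textbook formula, not the generalized model of the abstract):

```python
from math import sin, cos, radians

def sw_switching_field(psi_deg):
    """Reduced Stoner-Wohlfarth switching field (the astroid),
    h_sw = (cos^{2/3} psi + sin^{2/3} psi)^{-3/2},
    for a field applied at angle psi to the easy axis, in units of
    the anisotropy field H_K."""
    psi = radians(psi_deg)
    return (cos(psi) ** (2.0 / 3.0) + sin(psi) ** (2.0 / 3.0)) ** -1.5

h_easy = sw_switching_field(0.0)   # field along the easy axis: h = 1
h_45 = sw_switching_field(45.0)    # astroid minimum: h = 0.5
```

The characteristic minimum at 45° (half the easy-axis value) is the angular signature that nonuniform reversal modes distort, motivating the phenomenological anisotropy-energy modification described above.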
Hypo-Elastic Model for Lung Parenchyma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freed, Alan D.; Einstein, Daniel R.
2012-03-01
A simple elastic isotropic constitutive model for the spongy tissue in lung is derived from the theory of hypoelasticity. The model is shown to exhibit a pressure-dependent behavior that has been interpreted by some as indicating extensional anisotropy. In contrast, we show that this behavior arises naturally from an analysis of isotropic hypoelastic invariants and is a likely result of non-linearity, not anisotropy. The response of the model is determined analytically for several boundary value problems used for material characterization. These responses give insight into the material behavior as well as admissible bounds on parameters. The model is characterized against published experimental data for dog lung. Future work includes non-elastic model behavior.
A Simple Model for the Evolution of Multi-Stranded Coronal Loops
NASA Technical Reports Server (NTRS)
Fuentes, M. C. Lopez; Klimchuk, J. A.
2010-01-01
We develop and analyze a simple cellular automaton (CA) model that reproduces the main properties of the evolution of soft X-ray coronal loops. We are motivated by the observation that these loops evolve in three distinguishable phases that suggest the development, maintenance, and decay of a self-organized system. The model is based on the idea that loops are made of elemental strands that are heated by the relaxation of magnetic stress in the form of nanoflares. In this vision, usually called "the Parker conjecture" (Parker 1988), the origin of stress is the displacement of the strand footpoints due to photospheric convective motions. Modeling the response and evolution of the plasma, we obtain synthetic light curves that have the same characteristic properties (intensity, fluctuations, and timescales) as the observed cases. We study the dependence of these properties on the model parameters and find scaling laws that can be used as observational predictions of the model. We discuss the implications of our results for the interpretation of recent loop observations in different wavelengths. Subject headings: Sun: corona - Sun: flares - Sun: magnetic topology - Sun: X-rays, gamma rays
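The strand-and-nanoflare picture above can be caricatured in a few lines: random footpoint driving builds stress on each strand, a threshold crossing releases it as a nanoflare, and the released energy cools. This is an illustrative toy (our sketch; parameter values arbitrary and not from the paper):

```python
import random

def loop_lightcurve(n_strands=100, steps=500, drive=1.0, threshold=20.0,
                    cool=0.9, seed=7):
    """Toy cellular automaton in the spirit of the multi-strand model:
    footpoint driving accumulates stress on each strand; crossing a
    critical threshold releases it as a nanoflare whose energy then
    cools geometrically. The light curve is the summed radiating
    energy of all strands (illustrative sketch only)."""
    rng = random.Random(seed)
    stress = [rng.uniform(0.0, threshold) for _ in range(n_strands)]
    energy = [0.0] * n_strands
    curve = []
    for _ in range(steps):
        for i in range(n_strands):
            stress[i] += drive * rng.random()   # random footpoint driving
            if stress[i] > threshold:           # nanoflare: stress -> heat
                energy[i] += stress[i]
                stress[i] = 0.0
            energy[i] *= cool                   # radiative/conductive cooling
        curve.append(sum(energy))
    return curve

curve = loop_lightcurve()
```

Summing many independently flaring strands yields a fluctuating but sustained light curve, the qualitative behavior the CA model quantifies with scaling laws.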
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, D.; Sarkar, S.; Sen, S.
1995-06-01
In this paper the behavior of factorial moments with rapidity window size, which is usually explained in terms of "intermittency," has been interpreted via simple quantum statistical properties of the emitting system, using the concept of the "modified two-source model" as recently proposed by Ghosh and Sarkar [Phys. Lett. B 278, 465 (1992)]. The analysis has been performed using our own data on ¹⁶O-Ag/Br and ²⁴Mg-Ag/Br interactions in the few tens of GeV energy regime.
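The observable in question is the scaled factorial moment, F_q = ⟨n(n−1)⋯(n−q+1)⟩/⟨n⟩^q, averaged over rapidity bins; its growth with shrinking bin size is the intermittency signal. A minimal sketch of the estimator (toy multiplicity counts, not the authors' emulsion data):

```python
def factorial_moment(counts, q):
    """Horizontally averaged scaled factorial moment
    F_q = <n (n-1) ... (n-q+1)> / <n>^q
    over particle multiplicities in rapidity bins (the standard
    intermittency observable; illustrative implementation)."""
    def falling(n, order):
        out = 1
        for j in range(order):
            out *= (n - j)
        return out
    mean_fact = sum(falling(n, q) for n in counts) / len(counts)
    mean_n = sum(counts) / len(counts)
    return mean_fact / mean_n ** q

flat = factorial_moment([4, 4, 4, 4], q=2)     # no bin-to-bin fluctuations
spiky = factorial_moment([13, 1, 1, 1], q=2)   # strong spikes in one bin
```

Large fluctuations concentrated in narrow bins drive F_q well above its smooth-distribution value, which is the behavior the quantum-statistical two-source model seeks to explain without invoking intermittency.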
A User-Friendly DNA Modeling Software for the Interpretation of Cryo-Electron Microscopy Data.
Larivière, Damien; Galindo-Murillo, Rodrigo; Fourmentin, Eric; Hornus, Samuel; Lévy, Bruno; Papillon, Julie; Ménétret, Jean-François; Lamour, Valérie
2017-01-01
The structural modeling of a macromolecular machine is like a "Lego" approach that is challenged when blocks, like proteins imported from the Protein Data Bank, are to be assembled with an element adopting a serpentine shape, such as DNA templates. DNA must then be built ex nihilo, but modeling approaches are either not user-friendly or slow and tedious. In this method chapter we show how to use GraphiteLifeExplorer, a software with a simple graphical user interface that enables the sketching of free forms of DNA, of any length, at the atomic scale, as fast as drawing a line on a sheet of paper. We took as an example the nucleoprotein complex of DNA gyrase, a bacterial topoisomerase whose structure has been determined using cryo-electron microscopy (Cryo-EM). Using GraphiteLifeExplorer, we could model in one go a 155 bp long and twisted DNA duplex that wraps around DNA gyrase in the cryo-EM map, improving the quality and interpretation of the final model compared to the initially published data.
Understanding disease mechanisms with models of signaling pathway activities.
Sebastian-Leon, Patricia; Vidal, Enrique; Minguez, Pablo; Conesa, Ana; Tarazona, Sonia; Amadoz, Alicia; Armero, Carmen; Salavert, Francisco; Vidal-Puig, Antonio; Montaner, David; Dopazo, Joaquín
2014-10-25
Understanding the aspects of cell functionality that account for disease or drug-action mechanisms is one of the main challenges in the analysis of genomic data and underlies the future implementation of precision medicine. Here we propose a simple probabilistic model in which signaling pathways are separated into elementary sub-pathways or signal transmission circuits (which ultimately trigger cell functions), and gene expression measurements are then transformed into probabilities of activation of these circuits. Using this model, differential activation of such circuits between biological conditions can be estimated. Thus, circuit activation statuses can be interpreted as biomarkers that discriminate among the compared conditions. This type of mechanism-based biomarker accounts for cell functional activities and can easily be associated with disease or drug-action mechanisms. The accuracy of the proposed model is demonstrated with simulations and real datasets. The proposed model provides detailed information that enables the interpretation of disease mechanisms as a consequence of complex combinations of altered gene expression values. Moreover, it offers a framework for suggesting possible ways of therapeutic intervention in a pathologically perturbed system.
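One intuition behind circuit-level activation is that a signal must traverse every node of a sub-pathway, so a single down-regulated gene can collapse the whole circuit's activity. The sketch below illustrates that intuition under an independence assumption; it is not the authors' estimator, and the probabilities are toy values:

```python
from math import prod

def circuit_activation(p_nodes):
    """Probability that a signal traverses a linear signal transmission
    circuit, assuming independent node activation probabilities
    (a simplifying illustration, not the published model)."""
    return prod(p_nodes)

# A single down-regulated node (0.8 -> 0.2) collapses circuit activity.
p_healthy = circuit_activation([0.9, 0.8, 0.95])   # 0.684
p_disease = circuit_activation([0.9, 0.2, 0.95])   # 0.171
```

Comparing such circuit probabilities between conditions is the sense in which activation statuses can act as mechanism-based biomarkers.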
Improved Pseudo-section Representation for CSAMT Data in Geothermal Exploration
NASA Astrophysics Data System (ADS)
Grandis, Hendra; Sumintadireja, Prihadi
2017-04-01
Controlled-Source Audio-frequency Magnetotellurics (CSAMT) is a frequency-domain sounding technique that typically employs a grounded electric dipole as the primary electromagnetic (EM) source to infer the subsurface resistivity distribution. The use of an artificial source provides coherent signals with higher signal-to-noise ratio and overcomes the problems with the randomness and fluctuation of the natural EM fields used in MT. However, being an extension of MT, CSAMT still uses apparent resistivity and phase for data representation. The finite transmitter-receiver distance in CSAMT leads to a somewhat "distorted" response of the subsurface compared to MT data. We propose a simple technique to present CSAMT data as an apparent resistivity pseudo-section with more meaningful information for qualitative interpretation. Tests with synthetic and field CSAMT data showed that the simple technique is valid only for sounding curves exhibiting a transition from high to low to high resistivity (i.e. H-type), prevailing in data from a geothermal prospect. For quantitative interpretation, we recommend the use of the full solution of CSAMT modelling, since our technique is not valid for more general cases.
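The apparent resistivity underlying such pseudo-sections is the standard MT (Cagniard) quantity, ρ_a = |E/H|²/(μ₀ω), which for CSAMT is strictly valid only in the far field of the transmitter. A minimal sketch with a round-trip check over a homogeneous half-space (illustrative values, not the paper's data):

```python
from math import pi, sqrt

def cagniard_apparent_resistivity(e_field, h_field, freq_hz):
    """Far-field (plane-wave) apparent resistivity
    rho_a = |E/H|^2 / (mu0 * omega),
    the quantity plotted in MT/CSAMT pseudo-sections. Near-field
    transmitter effects distort it, as noted in the abstract above."""
    mu0 = 4e-7 * pi
    omega = 2.0 * pi * freq_hz
    return abs(e_field / h_field) ** 2 / (mu0 * omega)

# Round trip: over a homogeneous half-space of resistivity rho,
# the impedance magnitude is |E/H| = sqrt(mu0 * omega * rho).
mu0 = 4e-7 * pi
rho_true, f = 100.0, 10.0                       # ohm-m, Hz
e_over_h = sqrt(mu0 * 2.0 * pi * f * rho_true)  # half-space impedance
rho_a = cagniard_apparent_resistivity(e_over_h, 1.0, f)
```

The round trip recovers the true half-space resistivity exactly; real CSAMT soundings deviate from this because of the finite source-receiver distance.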
Is Seismically Determined Q an Intrinsic Material Property?
NASA Astrophysics Data System (ADS)
Langston, C. A.
2003-12-01
The seismic quality factor, Q, has a well-defined physical meaning as an intrinsic material property associated with a visco-elastic or a non-linear stress-strain constitutive relation for a material. Measurement of Q from seismic waves, however, involves interpreting seismic wave amplitude and phase as deviations from some ideal elastic wave propagation model. Thus, assumptions in the elastic wave propagation model become the basis for attributing anelastic properties to the earth continuum. Scientifically, the resulting Q model derived from seismic data is no more than a hypothesis that needs to be verified by other independent experiments concerning the continuum constitutive law and through careful examination of the truth of the assumptions in the wave propagation model. A case in point concerns the anelasticity of Mississippi embayment sediments in the central U.S. that has important implications for evaluation of earthquake strong ground motions. Previous body wave analyses using converted Sp phases have suggested that Qs is ~30 in the sediments based on simple ray theory assumptions. However, detailed modeling of 1D heterogeneity in the sediments shows that Qs cannot be resolved by the Sp data. An independent experiment concerning the amplitude decay of surface waves propagating in the sediments shows that Qs must be generally greater than 80 but is also subject to scattering attenuation. Apparent Q effects seen in direct P and S waves can also be produced by wave tunneling mechanisms in relatively simple 1D heterogeneity. Heterogeneity is a general geophysical attribute of the earth as shown by many high-resolution data sets and should be used as the first litmus test on assumptions made in seismic Q studies before a Q model can be interpreted as an intrinsic material property.
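The surface-wave experiment described above rests on the standard anelastic decay law A(x) = A₀ exp(−πfx/(Qv)), which can be inverted for Q from two amplitude measurements, ignoring geometric spreading and scattering (exactly the kind of assumption the abstract warns must be verified independently). A round-trip sketch with illustrative values:

```python
from math import exp, log, pi

def q_from_decay(a1, a2, x1, x2, freq, velocity):
    """Invert the anelastic amplitude-decay law
    A(x) = A0 * exp(-pi * f * x / (Q * v))
    for Q, assuming no geometric spreading or scattering losses
    between the two measurement points (illustrative sketch)."""
    return pi * freq * (x2 - x1) / (velocity * log(a1 / a2))

# Round trip with a synthetic Qs of 80 (illustrative surface-wave values).
q_true, f, v = 80.0, 1.0, 0.6                  # Q, Hz, km/s
a1 = exp(-pi * f * 5.0 / (q_true * v))         # amplitude at 5 km
a2 = exp(-pi * f * 25.0 / (q_true * v))        # amplitude at 25 km
q_est = q_from_decay(a1, a2, 5.0, 25.0, f, v)
```

Any unmodeled scattering attenuation lowers a₂ further and biases such an estimate toward smaller Q, which is why the recovered value is an apparent, not necessarily intrinsic, Q.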
Hydrogen Donor-Acceptor Fluctuations from Kinetic Isotope Effects: A Phenomenological Model
Roston, Daniel; Cheatum, Christopher M.; Kohen, Amnon
2012-01-01
Kinetic isotope effects (KIEs) and their temperature dependence can probe the structural and dynamic nature of enzyme-catalyzed proton or hydride transfers. The molecular interpretation of their temperature dependence requires expensive and specialized QM/MM calculations to provide a quantitative molecular understanding. Currently available phenomenological models use a non-adiabatic assumption that is not appropriate for most hydride and proton-transfer reactions, while others require more parameters than the experimental data justify. Here we propose a phenomenological interpretation of KIEs based on a simple method to quantitatively link the size and temperature dependence of KIEs to a conformational distribution of the catalyzed reaction. The present model assumes adiabatic hydrogen tunneling, and by fitting experimental KIE data, the model yields a population distribution for fluctuations of the distance between donor and acceptor atoms. Fits to data from a variety of proton and hydride transfers catalyzed by enzymes and their mutants, as well as non-enzymatic reactions, reveal that steeply temperature-dependent KIEs indicate the presence of at least two distinct conformational populations, each with different kinetic behaviors. We present the results of these calculations for several published cases and discuss how the predictions of the calculations might be experimentally tested. The current analysis does not replace molecular quantum mechanics/molecular mechanics (QM/MM) investigations, but it provides a fast and accessible way to quantitatively interpret KIEs in the context of a Marcus-like model. PMID:22857146
Brandt, Silke; Lieven, Elena; Tomasello, Michael
2016-01-01
Children and adults follow cues such as case marking and word order in their assignment of semantic roles in simple transitives (e.g., the dog chased the cat). It has been suggested that the same cues are used for the interpretation of complex sentences, such as transitive relative clauses (RCs) (e.g., that’s the dog that chased the cat) (Bates, Devescovi, & D’Amico, 1999). We used a pointing paradigm to test German-speaking 3-, 4-, and 6-year-old children’s sensitivity to case marking and word order in their interpretation of simple transitives and transitive RCs. In Experiment 1, case marking was ambiguous. The only cue available was word order. In Experiment 2, case was marked on lexical NPs or demonstrative pronouns. In Experiment 3, case was marked on lexical NPs or personal pronouns. Whereas the younger children mainly followed word order, the older children were more likely to base their interpretations on the more reliable case-marking cue. In most cases, children from both age groups were more likely to use these cues in their interpretation of simple transitives than in their interpretation of transitive RCs. Finally, children paid more attention to nominative case when it was marked on first-person personal pronouns than when it was marked on third-person lexical NPs or demonstrative pronouns, such as der Löwe ‘the-NOM lion’ or der ‘he-NOM.’ They were able to successfully integrate this case-marking cue in their sentence processing even when it appeared late in the sentence. We discuss four potential reasons for these differences across development, constructions, and lexical items. (1) Older children are relatively more sensitive to cue reliability. (2) Word order is more reliable in simple transitives than in transitive RCs. (3) The processing of case marking might initially be item-specific. (4) The processing of case marking might depend on its saliency and position in the sentence. PMID:27019652
On one-parametric formula relating the frequencies of twin-peak quasi-periodic oscillations
NASA Astrophysics Data System (ADS)
Török, Gabriel; Goluchová, Kateřina; Šrámková, Eva; Horák, Jiří; Bakala, Pavel; Urbanec, Martin
2018-01-01
Twin-peak quasi-periodic oscillations (QPOs) are observed in several low-mass X-ray binary systems containing neutron stars. Timing analysis of the X-ray fluxes of more than a dozen such systems reveals remarkable correlations between the frequencies of the two characteristic peaks present in the power density spectra. The individual correlations clearly differ, but they roughly follow a common overall pattern. The high values of the measured QPO frequencies and the strong modulation of the X-ray flux both suggest that the observed correlations are connected to orbital motion in the innermost part of an accretion disc. Several attempts to model these correlations with simple geodesic orbital models or phenomenological relations have failed in the past. We find and explore a surprisingly simple analytic relation that reproduces the individual correlations for a group of several sources through a single parameter. When an additional free parameter is considered within our relation, it reproduces well the data of a large group of 14 sources. The very existence and form of this simple relation support the hypothesis of the orbital origin of QPOs and provide a key for further development of QPO models. We discuss a possible physical interpretation of our relation's parameters and their links to concrete QPO models.
NASA Technical Reports Server (NTRS)
Won, C. C.
1993-01-01
This work describes a modeling and design method whereby a piezoelectric system is formulated by two sets of second-order equations, one for the mechanical system and the other for the electrical system, coupled through the piezoelectric effect. The solution to this electromechanically coupled system gives a physical interpretation of the piezoelectric effect as a piezoelectric transformer that is a part of the piezoelectric system, which transforms the applied mechanical force into a force-controlled current source, and short-circuit mechanical compliance into capacitance. It also transforms the voltage source into a voltage-controlled relative velocity input, and free motional capacitance into mechanical compliance. The formulation and interpretation simplify the modeling of smart structures and lead to physical insight that aids the designer. Due to its physical realization, the smart structural system can be unconditionally stable and can control responses effectively. This new concept has been demonstrated in three numerical examples for a simple piezoelectric system.
Experimental recovery of quantum correlations in absence of system-environment back-action
Xu, Jin-Shi; Sun, Kai; Li, Chuan-Feng; Xu, Xiao-Ye; Guo, Guang-Can; Andersson, Erika; Lo Franco, Rosario; Compagno, Giuseppe
2013-01-01
Revivals of quantum correlations in composite open quantum systems are a useful dynamical feature against detrimental effects of the environment. Their occurrence is attributed to flows of quantum information back and forth from systems to quantum environments. However, revivals also show up in models where the environment is classical, and thus unable to store quantum correlations, and where system-environment back-action is forbidden. This phenomenon opens basic issues about its interpretation involving the role of classical environments, memory effects, collective effects, and system-environment correlations. Moreover, an experimental realization of back-action-free quantum revivals has applicative relevance, as it allows quantum resources to be recovered without resorting to more demanding structured environments and correction procedures. Here we introduce a simple two-qubit model suitable to address these issues. We then report an all-optical experiment which simulates the model and permits us to recover and control, against decoherence, quantum correlations without back-action. We finally give an interpretation of the phenomenon by establishing the roles of the involved parties. PMID:24287554
NASA Astrophysics Data System (ADS)
Ivonin, D. V.; Skrunes, S.; Brekke, C.; Ivanov, A. Yu.
2016-03-01
A simple automatic multipolarization technique for discriminating the main types of thin oil films (of thickness less than the radio-wave skin depth) from natural ones is proposed. It is based on a new multipolarization parameter related to the ratio between the damping in the slick of specially normalized resonant and nonresonant signals, calculated using the normalized radar cross-section model proposed by Kudryavtsev et al. (2003a). The technique is tested on RADARSAT-2 copolarization (VV/HH) synthetic aperture radar images of slicks of a priori known provenance (mineral oils, e.g., emulsion and crude oil, and plant oil, which served to model a natural slick) released during annual oil-on-water exercises in the North Sea in 2011 and 2012. It is shown that the suggested multipolarization parameter provides new capabilities for interpreting slicks visible in synthetic aperture radar images while allowing discrimination between mineral oil and plant oil slicks.
Westö, Johan; May, Patrick J C
2018-05-02
Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multi-filter linear-nonlinear (LN) models and context models. Models are, however, never correct, and they rely on assumptions to keep them simple enough to be interpretable. As a consequence, different models describe different stimulus-response mappings, which may or may not be good approximations of real neural behavior. In the current study, we take up two tasks: First, we introduce new ways to estimate context models with realistic nonlinearities, that is, with logistic and exponential functions. Second, we evaluate context models and multi-filter LN models in terms of how well they describe recorded data from complex cells in cat primary visual cortex. Our results, based on single-spike information and correlation coefficients, indicate that context models outperform corresponding multi-filter LN models of equal complexity (measured in terms of number of parameters), with the best increase in performance being achieved by the novel context models. Consequently, our results suggest that the multi-filter LN-model framework is suboptimal for describing the behavior of complex cells: the context-model framework is clearly superior while still providing interpretable quantifications of neural behavior.
Method and system for automated on-chip material and structural certification of MEMS devices
Sinclair, Michael B.; DeBoer, Maarten P.; Smith, Norman F.; Jensen, Brian D.; Miller, Samuel L.
2003-05-20
A new approach toward MEMS quality control and materials characterization is provided by a combined test structure measurement and mechanical response modeling approach. Simple test structures are cofabricated with the MEMS devices being produced. These test structures are designed to isolate certain types of physical response, so that measurement of their behavior under applied stress can be easily interpreted as quality control and material properties information.
NASA Astrophysics Data System (ADS)
Howell, Robert R.; Radebaugh, Jani; M. C Lopes, Rosaly; Kerber, Laura; Solomonidou, Anezina; Watkins, Bryn
2017-10-01
Using remote sensing of planetary volcanism on objects such as Io to determine eruption conditions is challenging because the emitting region is typically not resolved and because exposed lava cools so quickly. A model of the cooling rate and eruption mechanism is typically used to predict the amount of surface area at different temperatures; that areal distribution is then convolved with a Planck blackbody emission curve, and the predicted spectrum is compared with observation. Often the broad nature of the Planck curve makes interpretation non-unique. However, different eruption mechanisms (for example, cooling fire-fountain droplets vs. cooling flows) have very different area vs. temperature distributions, which can often be characterized by simple power laws. Furthermore, magmas of different compositions have significantly different upper-limit cutoff temperatures. To test these models, in August 2016 and May 2017 we obtained spatially resolved observations of spreading Kilauea pahoehoe flows and fire fountains using a three-wavelength near-infrared prototype camera system. We have measured the area vs. temperature distribution for the flows and find that over a relatively broad temperature range the distribution does follow a power law matching the theoretical predictions. As one approaches the solidus temperature, the observed area drops below the simple model predictions by an amount that seems to vary inversely with the vigor of the spreading rate. At these highest temperatures the simple models are probably inadequate. It appears necessary to model the visco-elastic stretching of the very thin crust which covers even the most recently formed surfaces. That deviation between observations and the simple models may be particularly important when using such remote sensing observations to determine magma eruption temperatures.
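The forward-modeling step described above, a power-law area vs. temperature distribution convolved with a Planck curve, can be sketched in a few lines. This is an illustrative toy (our own parameter choices, not the authors' model or data):

```python
import numpy as np

def planck_radiance(wavelength_m, temps_k):
    """Planck spectral radiance B_lambda(T) in SI units."""
    h, c, kb = 6.626e-34, 2.998e8, 1.381e-23
    x = h * c / (wavelength_m * kb * temps_k)
    return 2.0 * h * c ** 2 / wavelength_m ** 5 / np.expm1(x)

def flow_flux(wavelength_m, t_min, t_max, gamma=2.0, nbins=200):
    """Emission from a cooling surface whose area per temperature bin
    follows a power law dA/dT ~ T^-gamma, convolved with the Planck
    curve (illustrative sketch of the modeling approach above)."""
    temps = np.linspace(t_min, t_max, nbins)
    dt = temps[1] - temps[0]
    weights = temps ** -gamma
    return float(np.sum(weights * planck_radiance(wavelength_m, temps)) * dt)

# A hotter upper cutoff temperature skews the synthetic spectrum toward
# shorter wavelengths (here compared at 1.0 and 2.2 microns).
ratio_hot = flow_flux(1.0e-6, 500.0, 1400.0) / flow_flux(2.2e-6, 500.0, 1400.0)
ratio_cool = flow_flux(1.0e-6, 500.0, 1100.0) / flow_flux(2.2e-6, 500.0, 1100.0)
```

The short-to-long wavelength flux ratio is sensitive to the cutoff temperature, which is the handle such multi-wavelength observations give on magma composition.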
The Productivity Dilemma in Workplace Health Promotion.
Cherniack, Martin
2015-01-01
Worksite-based programs to improve workforce health and well-being (Workplace Health Promotion, WHP) have been advanced as conduits for improved worker productivity and decreased health care costs. There has been a countervailing health-economics contention that return on investment (ROI) does not merit preventive health investment. METHODS/PROCEDURES: Pertinent studies were reviewed and results reconsidered. A simple economic model is presented based on conventional and alternate assumptions used in cost-benefit analysis (CBA), such as discounting and negative value. The issues are presented in the format of three conceptual dilemmas. In some occupations, such as nursing, the utility of patient survival and staff health is undervalued. WHP may miss important components of work-related health risk. Altering assumptions on discounting and eliminating the drag of negative value radically change the CBA value. Simple monetization of a work life and calculation of return on workforce health investment as a simple alternate opportunity involve highly selective interpretations of productivity and utility.
Linear stiff string vibrations in musical acoustics: Assessment and comparison of models.
Ducceschi, Michele; Bilbao, Stefan
2016-10-01
Strings are amongst the most common elements found in musical instruments, and an appropriate physical description of string dynamics is essential to modelling, analysis, and simulation. For linear vibration in a single polarisation, the most common model is based on the Euler-Bernoulli beam equation under tension. In spite of its simple form, such a model gives unbounded phase and group velocities at large wavenumbers, and such behaviour may be interpreted as unphysical. The Timoshenko model has, therefore, been employed in more recent works to overcome this shortcoming. This paper presents a third model based on the shear beam equations. The three models are here assessed and compared with regard to perceptual considerations in musical acoustics.
Fretter, Christoph; Lesne, Annick; Hilgetag, Claus C.; Hütt, Marc-Thorsten
2017-01-01
Simple models of excitable dynamics on graphs are an efficient framework for studying the interplay between network topology and dynamics. This topic is of practical relevance to diverse fields, ranging from neuroscience to engineering. Here we analyze how a single excitation propagates through a random network as a function of the excitation threshold, that is, the relative amount of activity in the neighborhood required for the excitation of a node. We observe that two sharp transitions delineate a region of sustained activity. Using analytical considerations and numerical simulation, we show that these transitions originate from the presence of barriers to propagation and the excitation of topological cycles, respectively, and can be predicted from the network topology. Our findings are interpreted in the context of network reverberations and self-sustained activity in neural systems, which is a question of long-standing interest in computational neuroscience. PMID:28186182
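A minimal sketch of this class of model can make the threshold rule concrete: a single excited node in an Erdős-Rényi random graph, where a resting node fires when the excited fraction of its neighborhood reaches a relative threshold. The three-state (excited, refractory, resting) update rule and all parameter values are our assumptions for illustration, not necessarily those of the paper:

```python
# Hedged sketch of excitable dynamics on a random graph with a relative
# excitation threshold kappa. States: 0 = resting, 1 = excited, 2 = refractory.
import random

def step(states, adj, kappa):
    """One synchronous update of all nodes."""
    new = []
    for node, s in enumerate(states):
        if s == 1:
            new.append(2)            # excited nodes become refractory
        elif s == 2:
            new.append(0)            # refractory nodes recover to resting
        else:
            nbrs = adj[node]
            if nbrs:
                frac = sum(states[m] == 1 for m in nbrs) / len(nbrs)
                new.append(1 if frac >= kappa else 0)
            else:
                new.append(0)        # isolated nodes stay resting
    return new

def run(n=100, p=0.08, kappa=0.1, steps=50, seed=1):
    """Propagate a single initial excitation and record activity over time."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for i in range(n):               # Erdős-Rényi G(n, p) graph
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    states = [0] * n
    states[0] = 1                    # single initial excitation
    activity = [1]
    for _ in range(steps):
        states = step(states, adj, kappa)
        activity.append(sum(s == 1 for s in states))
    return activity

activity = run()
```

Sweeping `kappa` in such a sketch is one way to observe the two sharp transitions the abstract describes: very low thresholds let any neighbor excite a node, while high thresholds create barriers that extinguish the excitation.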
Atomic Dynamics in Simple Liquid: de Gennes Narrowing Revisited
Wu, Bin; Iwashita, Takuya; Egami, Takeshi
2018-03-27
The de Gennes narrowing phenomenon is frequently observed by neutron or x-ray scattering measurements of the dynamics of complex systems, such as liquids, proteins, colloids, and polymers. The characteristic slowing down of dynamics in the vicinity of the maximum of the total scattering intensity is commonly attributed to enhanced cooperativity. In this Letter, we present an alternative view on its origin through the examination of the time-dependent pair correlation function, the van Hove correlation function, for a model liquid in two, three, and four dimensions. We find that the relaxation time increases monotonically with distance and the dependence on distance varies with dimension. We propose a heuristic explanation of this dependence based on a simple geometrical model. Furthermore, this finding sheds new light on the interpretation of the de Gennes narrowing phenomenon and the α-relaxation time.
Reactive extraction of lactic acid with trioctylamine/methylene chloride/n-hexane
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, D.H.; Hong, W.H.
The trioctylamine (TOA)/methylene chloride (MC)/n-hexane system was used as the extraction agent for the extraction of lactic acid. Curves of equilibrium and hydration were obtained at various temperatures and concentrations of TOA. A modified mass action model was proposed to interpret the equilibrium and hydration curves. The reaction mechanism and the corresponding parameters which best represent the equilibrium data were estimated, and the concentration of water in the organic phase was predicted by inserting the parameters into the simple mathematical equation of the modified model. The concentration of MC and the change of temperature were important factors for the extraction and the stripping process. The stripping was performed by a simple distillation which was a combination of temperature-swing regeneration and diluent-swing regeneration. The type of inactive diluent had no influence on the stripping. The stripping efficiencies were about 70%.
Marinsky, J.A.; Baldwin, Robert F.; Reddy, M.M.
1985-01-01
It has been shown that the apparent enhancement of divalent metal ion binding to polyions such as polystyrenesulfonate (PSS) and dextran sulfate (DS) by decreasing the ionic strength of these mixed counterion systems (M2+, M+, X-, polyion) can be anticipated with the Donnan-based model developed by one of us (J.A.M.). Ion-exchange distribution methods have been employed to measure the removal by the polyion of trace divalent metal ion from simple salt (NaClO4)-polyion (NaPSS) mixtures. These data and polyion interaction data published earlier by Mattai and Kwak for the mixed counterion systems MgCl2-LiCl-DS and MgCl2-CsCl-DS have been shown to be amenable to rather precise analysis by this model. ?? 1985 American Chemical Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fanizza, G.; Nugier, F., E-mail: giuseppe.fanizza@ba.infn.it, E-mail: fabienjean.nugier@unibo.it
We present in this paper a new application of the geodesic light-cone (GLC) gauge for weak lensing calculations. Using interesting properties of this gauge, we derive an exact expression of the amplification matrix—involving convergence, magnification and shear—and of the deformation matrix—involving the optical scalars. These expressions are simple and non-perturbative as long as no caustics are created on the past light-cone and are, by construction, free from the thin lens approximation. We apply these general expressions to the example of a Lemaître-Tolman-Bondi (LTB) model with an off-center observer and obtain explicit forms for the lensing quantities as a direct consequence of the non-perturbative transformation between GLC and LTB coordinates. We show their evolution in redshift after a numerical integration, for underdense and overdense LTB models, and interpret their respective variations in the simple non-curvature case.
Perona, Paolo; Dürrenmatt, David J; Characklis, Gregory W
2013-03-30
We propose a theoretical river modeling framework for generating variable flow patterns in diverted streams (i.e., without a reservoir). Using a simple economic model and the principle of equal marginal utility in an inverse fashion, we first quantify the benefit of the water that goes to the environment relative to that of the anthropic activity. Then, we obtain exact expressions for optimal water allocation rules between the two competing uses, as well as the related statistical distributions. These rules are applied using both synthetic and observed streamflow data, to demonstrate that this approach may be useful in 1) generating more natural flow patterns in the river reach downstream of the diversion, thus reducing the ecodeficit; 2) obtaining a more enlightened economic interpretation of Minimum Flow Release (MFR) strategies; and 3) comparing the long-term costs and benefits of variable versus MFR policies and showing the greater ecological sustainability of this new approach. Copyright © 2013 Elsevier Ltd. All rights reserved.
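The equal-marginal-utility principle invoked above has a simple closed form for concave benefit curves. The log-utility forms and weights below are illustrative assumptions, not the paper's calibrated benefit functions: with environmental benefit a·ln(1+e) and withdrawal benefit b·ln(1+Q-e), setting the marginal utilities equal, a/(1+e) = b/(1+Q-e), gives e* = (a(1+Q) - b)/(a+b).

```python
# Hedged sketch of allocating an available flow Q between environmental release e
# and anthropic withdrawal Q - e by equating marginal utilities. The log-utility
# benefit functions and weights a, b are illustrative assumptions.

def optimal_release(q, a=2.0, b=1.0):
    """Release e* maximizing a*ln(1+e) + b*ln(1+q-e), clipped to [0, q]."""
    e = (a * (1.0 + q) - b) / (a + b)
    return min(max(e, 0.0), q)

# A larger environmental weight a shifts more of the flow to the river.
e1 = optimal_release(10.0, a=2.0, b=1.0)   # 7.0 of 10 units released
e2 = optimal_release(10.0, a=1.0, b=1.0)   # 5.0 of 10 units released
```

Applying such a rule day by day to a streamflow series is what produces the variable, more natural release patterns the abstract contrasts with fixed minimum-flow releases.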
Applications of Perron-Frobenius theory to population dynamics.
Li, Chi-Kwong; Schneider, Hans
2002-05-01
By the use of Perron-Frobenius theory, simple proofs are given of the Fundamental Theorem of Demography and of a theorem of Cushing and Yicang on the net reproductive rate occurring in matrix models of population dynamics. The latter result, which is closely related to the Stein-Rosenberg theorem in numerical linear algebra, is further refined with some additional nonnegative matrix theory. When the fertility matrix is scaled by the net reproductive rate, the growth rate of the model is 1. More generally, we show how to achieve a given growth rate for the model by scaling the fertility matrix. Demographic interpretations of the results are given.
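The scaling statement above can be checked numerically: for a projection matrix split as A = F + T (fertility plus survival/transition), the net reproductive rate R0 is the spectral radius of F(I - T)^(-1), and dividing F by R0 yields a model whose growth rate (spectral radius of F/R0 + T) is 1. The small two-stage matrix below is an illustrative example, not one taken from the paper:

```python
# Hedged sketch: net reproductive rate and fertility scaling in a stage-
# structured matrix model. The 2-stage matrices are illustrative assumptions.
import numpy as np

F = np.array([[0.0, 4.0],
              [0.0, 0.0]])      # fertility: stage 2 produces 4 offspring
T = np.array([[0.2, 0.0],
              [0.5, 0.1]])      # survival within and maturation between stages

def spectral_radius(m):
    return max(abs(np.linalg.eigvals(m)))

# Net reproductive rate R0 = spectral radius of F (I - T)^{-1}
R0 = spectral_radius(F @ np.linalg.inv(np.eye(2) - T))

# Scaling the fertility matrix by R0 drives the growth rate to exactly 1.
growth_scaled = spectral_radius(F / R0 + T)
```

For this example R0 = 25/9 ≈ 2.78, and the scaled model's dominant eigenvalue is 1, matching the theorem.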
A color prediction model for imagery analysis
NASA Technical Reports Server (NTRS)
Skaley, J. E.; Fisher, J. R.; Hardy, E. E.
1977-01-01
A simple model has been devised to selectively construct several points within a scene using multispectral imagery. The model correlates black-and-white density values to color components of diazo film so as to maximize the color contrast of two or three points per composite. The CIE (Commission Internationale de l'Eclairage) color coordinate system is used as a quantitative reference to locate these points in color space. Superimposed on this quantitative reference is a perceptual framework which functionally contrasts color values in a psychophysical sense. This methodology permits a more quantitative approach to the manual interpretation of multispectral imagery while resulting in improved accuracy and lower costs.
NASA Astrophysics Data System (ADS)
Chen, Zi-Yu; Chen, Shi; Dan, Jia-Kun; Li, Jian-Feng; Peng, Qi-Xian
2011-10-01
A simple one-dimensional analytical model for electromagnetic emission from an unmagnetized wakefield excited by an intense short-pulse laser in the nonlinear regime has been developed in this paper. The expressions for the spectral and angular distributions of the radiation have been derived. The model suggests that the origin of the radiation can be attributed to the violent sudden acceleration of plasma electrons experiencing the accelerating potential of the laser wakefield. The radiation process could help to provide a qualitative interpretation of existing experimental results, and offers useful information for future laser wakefield experiments.
NASA Astrophysics Data System (ADS)
Zhu, Ying; Tan, Tuck Lee
2016-04-01
An effective and simple analytical method using Fourier transform infrared (FTIR) spectroscopy to distinguish wild-grown, high-quality Ganoderma lucidum (G. lucidum) from cultivated specimens is of essential importance for its quality assurance and medicinal value estimation. Commonly used chemical and analytical methods using the full spectrum are not so effective for detection and interpretation owing to the complexity of the herbal medicine. In this study, two penalized discriminant analysis models, penalized linear discriminant analysis (PLDA) and elastic net (Elnet), using FTIR spectroscopy have been explored for the purpose of discrimination and interpretation. The classification performances of the two penalized models have been compared with two widely used multivariate methods, principal component discriminant analysis (PCDA) and partial least squares discriminant analysis (PLSDA). The Elnet model, involving a combination of L1 and L2 norm penalties, enabled automatic selection of a small number of informative spectral absorption bands and gave an excellent classification accuracy of 99% for discrimination between spectra of wild-grown and cultivated G. lucidum. Its classification performance was superior to that of the PLDA model in a pure L1 setting, and it outperformed the PCDA and PLSDA models using the full wavelength range. The well-performed selection of informative spectral features leads to a substantial reduction in model complexity and improvement in classification accuracy, and it is particularly helpful for quantitative interpretation of the major chemical constituents of G. lucidum regarding its anti-cancer effects.
Learning Natural Selection in 4th Grade with Multi-Agent-Based Computational Models
NASA Astrophysics Data System (ADS)
Dickes, Amanda Catherine; Sengupta, Pratim
2013-06-01
In this paper, we investigate how elementary school students develop multi-level explanations of population dynamics in a simple predator-prey ecosystem, through scaffolded interactions with a multi-agent-based computational model (MABM). The term "agent" in an MABM indicates individual computational objects or actors (e.g., cars), and these agents obey simple rules assigned or manipulated by the user (e.g., speeding up, slowing down, etc.). It is the interactions between these agents, based on the rules assigned by the user, that give rise to emergent, aggregate-level behavior (e.g., formation and movement of a traffic jam). Natural selection is such an emergent phenomenon, which has been shown to be challenging for novices (K-16 students) to understand. Whereas prior research on learning evolutionary phenomena with MABMs has typically focused on high school students and beyond, we investigate how elementary students (4th graders) develop multi-level explanations of some introductory aspects of natural selection—species differentiation and population change—through scaffolded interactions with an MABM that simulates predator-prey dynamics in a simple birds-butterflies ecosystem. We conducted a semi-clinical, interview-based study with ten participants, in which we focused on the following: a) identifying the nature of learners' initial interpretations of salient events or elements of the represented phenomena, b) identifying the roles these interpretations play in the development of their multi-level explanations, and c) determining how attending to different levels of the relevant phenomena can make different mechanisms explicit to the learners. In addition, our analysis also shows that although there were differences between high- and low-performing students (in terms of being able to explain population-level behaviors) in the pre-test, these differences disappeared in the post-test.
Light distribution in diffractive multifocal optics and its optimization.
Portney, Valdemar
2011-11-01
PURPOSE: To expand a geometrical model of diffraction efficiency and its interpretation to the multifocal optic, and to introduce formulas for analysis of far and near light distribution and their application to multifocal intraocular lenses (IOLs) and to diffraction efficiency optimization. SETTING: Medical device consulting firm, Newport Coast, California, USA. DESIGN: Experimental study. METHODS: Application of a geometrical model to the kinoform (a single-focus diffractive optical element) was expanded to a multifocal optic to produce analytical definitions of the light split between far and near images and the light lost to other diffraction orders. RESULTS: The geometrical model gave a simple interpretation of the light split in a diffractive multifocal IOL. An analytical definition of the light split between far, near, and lost light was introduced as curve-fitting formulas. Several examples of application to common multifocal diffractive IOLs were developed; for example, to the change in light split with wavelength. The analytical definition of diffraction efficiency may assist in optimization of multifocal diffractive optics that minimizes light loss. CONCLUSIONS: Formulas for analysis of the light split between different foci of multifocal diffractive IOLs are useful in interpreting the dependence of diffraction efficiency on physical characteristics, such as the blaze heights of the diffractive grooves and the wavelength of light, as well as for optimizing multifocal diffractive optics. Copyright © 2011 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
A mathematical function for the description of nutrient-response curve
Ahmadi, Hamed
2017-01-01
Several mathematical equations have been proposed for modeling nutrient-response curves in animals and humans, justified by goodness of fit and/or the underlying biological mechanism. In this paper, a functional form of a generalized quantitative model based on the Rayleigh distribution principle for description of nutrient-response phenomena is derived. The three parameters governing the curve a) have biological interpretations, b) may be used to calculate reliable estimates of nutrient-response relationships, and c) provide the basis for deriving relationships between nutrient and physiological responses. The new function was successfully applied to fit the nutritional data obtained from 6 experiments covering a wide range of nutrients and responses. An evaluation and comparison were also done based on simulated data sets to check the suitability of the new model and a four-parameter logistic model for describing nutrient responses. This study indicates the usefulness and wide applicability of the newly introduced, simple and flexible model when applied as a quantitative approach to characterizing nutrient-response curves. This new mathematical way to describe nutrient-response data, with some useful biological interpretations, has the potential to be used as an alternative approach in modeling nutrient-response curves to estimate nutrient efficiency and requirements. PMID:29161271
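The abstract does not give the function explicitly, so the Rayleigh-style form below is only an illustrative guess built from the Rayleigh cumulative distribution, not the paper's actual equation. It shows how three parameters (basal response y0, plateau ymax, scale c) can describe a saturating nutrient response:

```python
# Hedged sketch: a Rayleigh-CDF-shaped response curve,
#   y(x) = y0 + (ymax - y0) * (1 - exp(-x^2 / (2 c^2))),
# used purely for illustration; the parameter names and form are assumptions.
import math

def rayleigh_response(x, y0=0.5, ymax=2.0, c=1.5):
    """Nutrient response rising from the basal level y0 toward the asymptote ymax."""
    return y0 + (ymax - y0) * (1.0 - math.exp(-x * x / (2.0 * c * c)))

# Monotone increase with saturation, the qualitative shape of many dose responses.
low, mid, high = rayleigh_response(0.0), rayleigh_response(1.5), rayleigh_response(20.0)
```

Fitting y0, ymax, and c to dose-response data (e.g., by least squares) then yields directly interpretable estimates of the basal level, the maximal response, and the nutrient dose scale.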
Multi-Agent Market Modeling of Foreign Exchange Rates
NASA Astrophysics Data System (ADS)
Zimmermann, Georg; Neuneier, Ralph; Grothmann, Ralph
A market mechanism is basically driven by a superposition of the decisions of many agents optimizing their profit. The economic price dynamics are a consequence of the cumulative excess demand/supply created at this micro level. The behavior of a small number of agents is well understood through game theory. In the case of a large number of agents, one may use the limiting assumption that an individual agent has no influence on the market, which allows the aggregation of agents by statistical methods. In contrast to this restriction, we can omit the assumption of an atomic market structure if we model the market through a multi-agent approach. The contribution of the mathematical theory of neural networks to market price formation is mostly seen on the econometric side: neural networks allow the fitting of high-dimensional nonlinear dynamic models. Furthermore, in our opinion, there is a close relationship between economics and the modeling ability of neural networks, because a neuron can be interpreted as a simple model of decision making. With this in mind, a neural network models the interaction of many decisions and, hence, can be interpreted as the price formation mechanism of a market.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nomura, Yasunori; Salzetta, Nico; Sanches, Fabio
We study the Hilbert space structure of classical spacetimes under the assumption that entanglement in holographic theories determines semiclassical geometry. We show that this simple assumption has profound implications; for example, a superposition of classical spacetimes may lead to another classical spacetime. Despite its unconventional nature, this picture admits the standard interpretation of superpositions of well-defined semiclassical spacetimes in the limit that the number of holographic degrees of freedom becomes large. We illustrate these ideas using a model for the holographic theory of cosmological spacetimes.
Interpretation of OAO-2 ultraviolet light curves of beta Doradus
NASA Technical Reports Server (NTRS)
Hutchinson, J. L.; Lillie, C. F.; Hill, S. J.
1975-01-01
Middle-ultraviolet light curves of beta Doradus, obtained by OAO-2, are presented along with other evidence indicating that the small additional bumps observed on the rising branches of these curves have their origin in shock-wave phenomena in the upper atmosphere of this classical Cepheid. A simple piston-driven spherical hydrodynamic model of the atmosphere is developed to explain the bumps, and the calculations are compared with observations. The model is found to be consistent with the shapes of the light curves as well as with measurements of the H-alpha radial velocities.
Karev, Georgy P; Wolf, Yuri I; Koonin, Eugene V
2003-10-12
The distributions of many genome-associated quantities, including the membership of paralogous gene families, can be approximated with power laws. We are interested in developing mathematical models of genome evolution that adequately account for the shape of these distributions and describe the evolutionary dynamics of their formation. We show that simple stochastic models of genome evolution lead to power-law asymptotics of the protein domain family size distribution. These models, called Birth, Death and Innovation Models (BDIM), represent a special class of balanced birth-and-death processes, in which domain duplication and deletion rates are asymptotically equal up to the second order. The simplest, linear BDIM shows an excellent fit to the observed distributions of domain family size in diverse prokaryotic and eukaryotic genomes. However, the stochastic version of the linear BDIM explored here predicts that the actual size of large paralogous families is reached on an unrealistically long timescale. We show that the introduction of non-linearity, which might be interpreted as interaction of a particular order between individual family members, allows the model to achieve genome evolution rates that are much better compatible with the current estimates of the rates of individual duplication/loss events.
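A linear birth, death and innovation process of the kind formalized above can be simulated directly: each existing domain duplicates or is deleted with balanced per-domain rates, while innovation injects new single-member families. The discrete event loop and the rate values below are illustrative assumptions, not the fitted BDIM of the paper:

```python
# Hedged sketch of a linear BDIM-style process. Per-domain duplication rate lam
# and deletion rate delta are balanced; nu is the genome-level innovation rate.
import random

def simulate_bdim(events=20000, lam=1.0, delta=1.0, nu=0.05, seed=7):
    rng = random.Random(seed)
    families = [1]                         # current domain-family sizes
    for _ in range(events):
        total = sum(families)
        # total rates: duplication lam*total, deletion delta*total, innovation nu
        r = rng.random() * (lam * total + delta * total + nu)
        if r < nu:
            families.append(1)             # innovation: new single-domain family
        else:
            k = rng.randrange(total)       # pick one existing domain uniformly
            for i, n in enumerate(families):
                if k < n:
                    break
                k -= n
            if r - nu < lam * total:
                families[i] += 1           # duplication within that family
            else:
                families[i] -= 1           # deletion of that domain
                if families[i] == 0:
                    families.pop(i)        # the family goes extinct
        if not families:
            families = [1]                 # re-seed so the process never dies out
    return families

sizes = simulate_bdim()
```

Tallying `sizes` into a histogram is how one would compare the simulated family-size distribution against the power-law asymptotics the paper derives analytically.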
NASA Technical Reports Server (NTRS)
Saunders, D. F.; Thomas, G. E. (Principal Investigator); Kinsman, F. E.; Beatty, D. F.
1973-01-01
The author has identified the following significant results. This study was performed to investigate applications of ERTS-1 imagery in commercial reconnaissance for mineral and hydrocarbon resources. ERTS-1 imagery collected over five areas in North America (Montana; Colorado; New Mexico-West Texas; Superior Province, Canada; and North Slope, Alaska) has been analyzed for data content including linears, lineaments, and curvilinear anomalies. Locations of these features were mapped and compared with known locations of mineral and hydrocarbon accumulations. Results were analyzed in the context of a simple-shear, block-coupling model. Data analyses have resulted in detection of new lineaments, some of which may be continental in extent, detection of many curvilinear patterns not generally seen on aerial photos, strong evidence of continental regmatic fracture patterns, and the realization that geological features can be explained in terms of a simple-shear, block-coupling model. The conclusions are that ERTS-1 imagery is of great value in photogeologic/geomorphic interpretations of regional features, and the simple-shear, block-coupling model provides a means of relating data from ERTS imagery to structures that have controlled emplacement of ore deposits and hydrocarbon accumulations, thus providing a basis for a new approach for reconnaissance for mineral, uranium, gas, and oil deposits and structures.
ROC-ing along: Evaluation and interpretation of receiver operating characteristic curves.
Carter, Jane V; Pan, Jianmin; Rai, Shesh N; Galandiuk, Susan
2016-06-01
It is vital for clinicians to understand and correctly interpret the medical statistics used in clinical studies. In this review, we address current issues and focus on delivering a simple yet comprehensive explanation of common research methodology involving receiver operating characteristic (ROC) curves. ROC curves are used most commonly in medicine as a means of evaluating diagnostic tests. Sample data from a plasma test for the diagnosis of colorectal cancer were used to generate a prediction model. These are actual, unpublished data that have been used to describe the calculation of sensitivity, specificity, positive and negative predictive values, and accuracy. ROC curves were generated to determine the accuracy of this plasma test. These curves are generated by plotting the sensitivity (true-positive rate) on the y axis and 1 - specificity (false-positive rate) on the x axis. Curves that approach closest to the coordinate (x = 0, y = 1) are more highly predictive, whereas ROC curves that lie close to the line of equality indicate that the result is no better than that obtained by chance. The optimum sensitivity and specificity can be determined from the graph as the point where the minimum-distance line crosses the ROC curve. This point corresponds to the Youden index (J), a function of sensitivity and specificity commonly used to rate diagnostic tests. The area under the curve is used to quantify the overall ability of a test to discriminate between 2 outcomes. By following these simple guidelines, interpretation of ROC curves will be less difficult, and they can then be interpreted more reliably when writing, reviewing, or analyzing scientific papers. Copyright © 2016 Elsevier Inc. All rights reserved.
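The ROC construction described above can be sketched in a few lines: sweep every candidate threshold, plot (1 - specificity, sensitivity), take the trapezoidal area under the curve, and pick the cutoff maximizing the Youden index J = sensitivity + specificity - 1. The score values below are made up for illustration, not the paper's colorectal-cancer plasma data:

```python
# Hedged sketch of ROC analysis: sensitivity/specificity per threshold,
# trapezoidal AUC, and the Youden-index optimal cutoff. Scores are invented.

def roc_points(pos_scores, neg_scores):
    """Return (fpr, tpr, threshold) triples for every candidate threshold."""
    thresholds = sorted(set(pos_scores + neg_scores), reverse=True)
    pts = [(0.0, 0.0, float('inf'))]
    for t in thresholds:
        tpr = sum(s >= t for s in pos_scores) / len(pos_scores)   # sensitivity
        fpr = sum(s >= t for s in neg_scores) / len(neg_scores)   # 1 - specificity
        pts.append((fpr, tpr, t))
    return pts

def auc_and_youden(pos_scores, neg_scores):
    """Trapezoidal area under the ROC curve plus the Youden-optimal cutoff."""
    pts = roc_points(pos_scores, neg_scores)
    auc = sum((x2 - x1) * (y1 + y2) / 2
              for (x1, y1, _), (x2, y2, _) in zip(pts, pts[1:]))
    j, cutoff = max((tpr - fpr, t) for fpr, tpr, t in pts)        # Youden J
    return auc, j, cutoff

diseased = [0.9, 0.8, 0.7, 0.6, 0.55]   # invented scores for diseased subjects
healthy = [0.5, 0.4, 0.65, 0.3, 0.2]    # invented scores for healthy subjects
auc, j, cutoff = auc_and_youden(diseased, healthy)
```

For this toy data the curve hugs the upper-left corner (AUC 0.92), and the Youden index selects the 0.55 cutoff, illustrating the graphical rule described in the abstract.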
A simple integrated assessment approach to global change simulation and evaluation
NASA Astrophysics Data System (ADS)
Ogutu, Keroboto; D'Andrea, Fabio; Ghil, Michael
2016-04-01
We formulate and study the Coupled Climate-Economy-Biosphere (CoCEB) model, which constitutes the basis of our idealized integrated assessment approach to simulating and evaluating global change. CoCEB is composed of a physical climate module, based on Earth's energy balance, and an economy module that uses endogenous economic growth with physical and human capital accumulation. A biosphere module is likewise under study and will be coupled to the existing two modules. We concentrate on the interactions between the two subsystems: the effect of climate on the economy, via damage functions, and the effect of the economy on climate, via control of greenhouse gas emissions. Simple functional forms of the relation between the two subsystems permit simple interpretations of the coupled effects. The CoCEB model is used to make hypotheses on the long-term effect of investment in emission abatement, and on the comparative efficacy of different approaches to abatement, in particular investing in low-carbon technology, in deforestation reduction, or in carbon capture and storage (CCS). The CoCEB model is very flexible and transparent, and it allows one to easily formulate and compare different functional representations of climate change mitigation policies. Using different mitigation measures and their cost estimates, as found in the literature, one is able to compare these measures in a coherent way.
Solving da Vinci stereopsis with depth-edge-selective V2 cells
Assee, Andrew; Qian, Ning
2007-01-01
We propose a new model for da Vinci stereopsis based on a coarse-to-fine disparity-energy computation in V1 and disparity-boundary-selective units in V2. Unlike previous work, our model contains only binocular cells, relies on distributed representations of disparity, and has a simple V1-to-V2 feedforward structure. We demonstrate with random dot stereograms that the V2 stage of our model is able to determine the location and the eye-of-origin of monocularly occluded regions and improve disparity map computation. We also examine a few related issues. First, we argue that since monocular regions are binocularly defined, they cannot generally be detected by monocular cells. Second, we show that our coarse-to-fine V1 model for conventional stereopsis explains double matching in Panum’s limiting case. This provides computational support to the notion that the perceived depth of a monocular bar next to a binocular rectangle may not be da Vinci stereopsis per se (Gillam et al., 2003). Third, we demonstrate that some stimuli previously deemed invalid have simple, valid geometric interpretations. Our work suggests that studies of da Vinci stereopsis should focus on stimuli more general than the bar-and-rectangle type and that disparity-boundary-selective V2 cells may provide a simple physiological mechanism for da Vinci stereopsis. PMID:17698163
Jarnevich, Catherine S.; Talbert, Marian; Morisette, Jeffrey T.; Aldridge, Cameron L.; Brown, Cynthia; Kumar, Sunil; Manier, Daniel; Talbert, Colin; Holcombe, Tracy R.
2017-01-01
Evaluating the conditions where a species can persist is an important question in ecology both to understand tolerances of organisms and to predict distributions across landscapes. Presence data combined with background or pseudo-absence locations are commonly used with species distribution modeling to develop these relationships. However, there is not a standard method to generate background or pseudo-absence locations, and method choice affects model outcomes. We evaluated combinations of both model algorithms (simple and complex generalized linear models, multivariate adaptive regression splines, Maxent, boosted regression trees, and random forest) and background methods (random, minimum convex polygon, and continuous and binary kernel density estimator (KDE)) to assess the sensitivity of model outcomes to choices made. We evaluated six questions related to model results, including five beyond the common comparison of model accuracy assessment metrics (biological interpretability of response curves, cross-validation robustness, independent data accuracy and robustness, and prediction consistency). For our case study with cheatgrass in the western US, random forest was least sensitive to background choice and the binary KDE method was least sensitive to model algorithm choice. While this outcome may not hold for other locations or species, the methods we used can be implemented to help determine appropriate methodologies for particular research questions.
Water and Solute Flux Simulation Using Hydropedology Survey Data in South African Catchments
NASA Astrophysics Data System (ADS)
Lorentz, Simon; van Tol, Johan; le Roux, Pieter
2017-04-01
Hydropedology surveys include linking soil profile information in hillslope transects in order to define dominant subsurface flow mechanisms and pathways. This information is useful for deriving hillslope response functions, which aid storage and travel time estimates of water and solute movement in the sub-surface. In this way, the "soft" data of the hydropedological survey can be included in simple hydrological models, where detailed modelling of processes and pathways is prohibitive. Hydropedology surveys were conducted in two catchments and the information used to improve the prediction of water and solute responses. Typical hillslope response functions are then derived using a 2-D finite element model of the hydropedological features. Similar response types are mapped. These mapped response units are invoked in a simple SCS-based hydrological and solute transport model to yield water and solute fluxes at the catchment outlets. The first catchment (1.6 km2) comprises commercial forestry in a sedimentary geology of sandstone and mudstone formation while the second catchment (6.1 km2) includes mine waste impoundments in a granitic geology. In this paper, we demonstrate the method of combining hydropedological interpretation with catchment hydrology and solute transport simulation. The forested catchment, with three dominant hillslope response types, has solute response times in excess of 90 days, whereas the granitic responses occur within 10 days. The use of the hydropedological data improves the solute distribution response and storage simulation, compared to simulations without the hydropedology interpretation. The hydrological responses are similar, with and without the use of the hydropedology data, but the simulated distribution of water in the catchment is improved using the techniques demonstrated.
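The SCS curve-number step of such a catchment model can be sketched in a few lines; the curve numbers and storm depth below are illustrative assumptions, not values from the study:

```python
def scs_runoff_mm(precip_mm, curve_number):
    """Direct runoff Q (mm) from storm depth P (mm) via the SCS
    curve-number method, with the standard Ia = 0.2*S initial abstraction."""
    s = 25400.0 / curve_number - 254.0  # potential maximum retention (mm)
    ia = 0.2 * s                        # initial abstraction (mm)
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

# Less permeable hillslope response units get higher curve numbers,
# so the same storm produces more runoff from them.
wet_unit = scs_runoff_mm(50.0, 85.0)
dry_unit = scs_runoff_mm(50.0, 60.0)
print(wet_unit > dry_unit)
```

Mapped hillslope response units would each carry their own curve number, which is how the "soft" hydropedological interpretation enters the water balance.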
England, A.W.
1976-01-01
The microwave emissivity of relatively low-loss media such as snow, ice, frozen ground, and lunar soil is strongly influenced by fine-scale layering and by internal scattering. Radiometric data, however, are commonly interpreted using a model of emission from a homogeneous, dielectric halfspace whose emissivity derives exclusively from dielectric properties. Conclusions based upon these simple interpretations can be erroneous. Examples are presented showing that the emission from fresh or hardpacked snow over either frozen or moist soil is governed dominantly by the size distribution of ice grains in the snowpack. Similarly, the thickness of seasonally frozen soil and the concentration of rock clasts in lunar soil noticeably affect, respectively, the emissivities of northern latitude soils in winter and of the lunar regolith. Petrophysical data accumulated in support of the geophysical interpretation of microwave data must include measurements not only of dielectric properties, but also of geometric factors such as fine-scale layering and size distributions of grains, inclusions, and voids. © 1976 Birkhäuser Verlag.
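The homogeneous-halfspace baseline that the abstract criticizes reduces, at normal incidence and negligible loss, to a one-line Fresnel calculation; the relative permittivities below are rough illustrative values, not measured ones:

```python
import math

def halfspace_emissivity(eps_r):
    """Normal-incidence emissivity of a homogeneous, low-loss dielectric
    halfspace: e = 1 - |r|^2, with Fresnel reflection r = (1 - n)/(1 + n)
    and refractive index n = sqrt(eps_r)."""
    n = math.sqrt(eps_r)
    r = (1.0 - n) / (1.0 + n)
    return 1.0 - r * r

# A low-permittivity medium (dry snow, eps ~ 1.5) is nearly black in this
# model; a denser, wetter medium reflects more and so emits less.
print(halfspace_emissivity(1.5) > halfspace_emissivity(6.0))
```

The abstract's point is that grain-size scattering and layering make real emissivities depart from this single-number prediction.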
Admixture, Population Structure, and F-Statistics.
Peter, Benjamin M
2016-04-01
Many questions about human genetic history can be addressed by examining the patterns of shared genetic variation between sets of populations. A useful methodological framework for this purpose is F-statistics that measure shared genetic drift between sets of two, three, and four populations and can be used to test simple and complex hypotheses about admixture between populations. This article provides context from phylogenetic and population genetic theory. I review how F-statistics can be interpreted as branch lengths or paths and derive new interpretations, using coalescent theory. I further show that the admixture tests can be interpreted as testing general properties of phylogenies, allowing extension of some of these applications to arbitrary phylogenetic trees. The new results are used to investigate the behavior of the statistics under different models of population structure and show how population substructure complicates inference. The results lead to simplified estimators in many cases, and I recommend replacing F3 with the average number of pairwise differences for estimating population divergence. Copyright © 2016 by the Genetics Society of America.
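The two quantities contrasted in the abstract can be computed directly from allele frequencies; the five-locus frequencies below are invented purely for illustration:

```python
# Toy allele frequencies at 5 SNPs for source populations A, B
# and a test population C (hypothetical numbers).
a = [0.1, 0.5, 0.9, 0.3, 0.7]
b = [0.2, 0.4, 0.8, 0.6, 0.5]
c = [0.15, 0.45, 0.85, 0.45, 0.6]

def f3(c, a, b):
    """F3(C; A, B): mean of (c - a)(c - b) over loci. Significantly
    negative values indicate C is admixed between A- and B-like sources."""
    return sum((ci - ai) * (ci - bi) for ci, ai, bi in zip(c, a, b)) / len(c)

def pairwise_diff(a, b):
    """Average pairwise differences per site between two populations:
    mean of a(1 - b) + b(1 - a) over loci."""
    return sum(ai * (1 - bi) + bi * (1 - ai) for ai, bi in zip(a, b)) / len(a)

# Here C sits between A and B at every locus, so F3 is negative.
print(f3(c, a, b) < 0)
print(pairwise_diff(a, b) > 0)
```

The pairwise-difference estimator is the simpler divergence measure the author recommends in place of F3 for that purpose.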
Time-frequency analysis of acoustic scattering from elastic objects
NASA Astrophysics Data System (ADS)
Yen, Nai-Chyuan; Dragonette, Louis R.; Numrich, Susan K.
1990-06-01
A time-frequency analysis of acoustic scattering from elastic objects was carried out using the time-frequency representation based on a modified version of the Wigner distribution function (WDF) algorithm. A simple and efficient processing algorithm was developed, which provides meaningful interpretation of the scattering physics. The time and frequency representation derived from the WDF algorithm was further reduced to a display which is a skeleton plot, called a vein diagram, that depicts the essential features of the form function. The physical parameters of the scatterer are then extracted from this diagram with the proper interpretation of the scattering phenomena. Several examples, based on data obtained from numerically simulated models and laboratory measurements for elastic spheres and shells, are used to illustrate the capability and proficiency of the algorithm.
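A minimal discrete pseudo-Wigner distribution, in the spirit of the WDF-based processing described (the authors' modified algorithm and vein-diagram reduction are not reproduced here, so this is a generic sketch):

```python
import cmath

def wigner(x):
    """Discrete pseudo-Wigner distribution of a complex signal x:
    W[n, k] = sum_m x[n+m] * conj(x[n-m]) * exp(-j*2*pi*(2m)*k/N),
    a common discretization of W(t, f) = integral of
    x(t + tau/2) * conj(x(t - tau/2)) * exp(-j*2*pi*f*tau) dtau.
    The factor 2 in the exponent halves the unambiguous frequency range,
    so a tone appears at its bin or at its alias."""
    n_samp = len(x)
    w = [[0.0] * n_samp for _ in range(n_samp)]
    for n in range(n_samp):
        half = min(n, n_samp - 1 - n)  # largest symmetric lag at this time
        for k in range(n_samp):
            acc = 0j
            for m in range(-half, half + 1):
                acc += (x[n + m] * x[n - m].conjugate()
                        * cmath.exp(-2j * cmath.pi * 2 * m * k / n_samp))
            w[n][k] = acc.real
    return w

# A pure tone concentrates along its own frequency line (or its alias).
n_samp = 32
tone = [cmath.exp(2j * cmath.pi * 5 * t / n_samp) for t in range(n_samp)]
w = wigner(tone)
ridge = max(range(n_samp), key=lambda k: w[n_samp // 2][k])
print(ridge in (5, 21))
```

For scattering data, ridges of such a plot are what a skeletonized "vein diagram" would trace through the form function.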
Effects of chirp on two-dimensional Fourier transform electronic spectra.
Tekavec, Patrick F; Myers, Jeffrey A; Lewis, Kristin L M; Fuller, Franklin D; Ogilvie, Jennifer P
2010-05-24
We examine the effect that pulse chirp has on the shape of two-dimensional electronic spectra through calculations and experiments. For the calculations we use a model two-level electronic system with a solvent interaction represented by a simple Gaussian correlation function and compare the resulting spectra to experiments carried out on an organic dye molecule (Rhodamine 800). Both calculations and experiments show that distortions due to chirp are most significant when the pulses used in the experiment have different amounts of chirp, introducing peak shape asymmetry that could be interpreted as spectrally dependent relaxation. When all pulses have similar chirp the distortions are reduced but still affect the anti-diagonal symmetry of the peak shapes and introduce negative features that could be interpreted as excited state absorption.
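For a transform-limited Gaussian pulse, the stretching produced by residual quadratic spectral phase (group-delay dispersion, GDD) has a closed form, which illustrates why even modest chirp matters at few-tens-of-femtosecond durations; the numbers are illustrative, not taken from the experiment:

```python
import math

def chirped_duration(tau_fs, gdd_fs2):
    """Gaussian width parameter (fs) of an initially transform-limited
    Gaussian pulse after acquiring GDD phi2 (fs^2):
    tau_out = tau * sqrt(1 + (phi2 / tau^2)^2)."""
    return tau_fs * math.sqrt(1.0 + (gdd_fs2 / tau_fs ** 2) ** 2)

# A 15 fs pulse is badly stretched by 500 fs^2 of chirp; giving all pulses
# the *same* chirp does not undo this, hence residual peak-shape distortion.
print(chirped_duration(15.0, 500.0) > 2 * chirped_duration(15.0, 0.0))
```

The formula follows from multiplying the Gaussian spectrum by exp(i*phi2*omega^2/2) and transforming back.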
Thermodynamics and Mechanics of Membrane Curvature Generation and Sensing by Proteins and Lipids
Baumgart, Tobias; Capraro, Benjamin R.; Zhu, Chen; Das, Sovan L.
2014-01-01
Research investigating lipid membrane curvature generation and sensing is a rapidly developing frontier in membrane physical chemistry and biophysics. The fast recent progress is based on the discovery of a plethora of proteins involved in coupling membrane shape to cellular membrane function, the design of new quantitative experimental techniques to study aspects of membrane curvature, and the development of analytical theories and simulation techniques that allow a mechanistic interpretation of quantitative measurements. The present review first provides an overview of important classes of membrane proteins for which function is coupled to membrane curvature. We then survey several mechanisms that are assumed to underlie membrane curvature sensing and generation. Finally, we discuss relatively simple thermodynamic/mechanical models that allow quantitative interpretation of experimental observations. PMID:21219150
Quantifying the influence of sediment source area sampling on detrital thermochronometer data
NASA Astrophysics Data System (ADS)
Whipp, D. M., Jr.; Ehlers, T. A.; Coutand, I.; Bookhagen, B.
2014-12-01
Detrital thermochronology offers a unique advantage over traditional bedrock thermochronology because of its sensitivity to sediment production and transportation to sample sites. In mountainous regions, modern fluvial sediment is often collected and dated to determine the past (10⁵ to >10⁷ year) exhumation history of the upstream drainage area. Though potentially powerful, the interpretation of detrital thermochronometer data derived from modern fluvial sediment is challenging because of spatial and temporal variations in sediment production and transport, and target mineral concentrations. Thermochronometer age prediction models provide a quantitative basis for data interpretation, but it can be difficult to separate variations in catchment bedrock ages from the effects of variable basin denudation and sediment transport. We present two examples of quantitative data interpretation using detrital thermochronometer data from the Himalaya, focusing on the influence of spatial and temporal variations in basin denudation on predicted age distributions. We combine age predictions from the 3D thermokinematic numerical model Pecube with simple models for sediment sampling in the upstream drainage basin area to assess the influence of variations in sediment production by different geomorphic processes or scaled by topographic metrics. We first consider a small catchment from the central Himalaya where bedrock landsliding appears to have affected the observed muscovite 40Ar/39Ar age distributions. Using a simple model of random landsliding with a power-law landslide frequency-area relationship we find that the sediment residence time in the catchment has a major influence on predicted age distributions. In the second case, we compare observed detrital apatite fission-track age distributions from 16 catchments in the Bhutan Himalaya to ages predicted using Pecube and scaled by various topographic metrics. 
Preliminary results suggest that predicted age distributions scaled by the rock uplift rate in Pecube are statistically equivalent to the observed age distributions for ~75% of the catchments, but may improve when scaled by local relief or specific stream power weighted by satellite-derived precipitation. Ongoing work is exploring the effect of scaling by other topographic metrics.
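A random-landsliding sampler of the kind described can be sketched as follows: slide areas follow a Pareto (power-law) frequency-area distribution, and each slide delivers grains from a contiguous patch of a hypothetical along-profile bedrock age map (all numbers are invented for illustration, not from the Himalayan data):

```python
import random

random.seed(42)

# Hypothetical bedrock cooling ages (Ma) along a 1-D catchment profile.
bedrock_ages = [1.0 + 0.05 * i for i in range(200)]

def landslide_detrital_sample(n_slides, beta=1.5, max_frac=0.25):
    """Grains delivered by n_slides random landslides whose sizes follow a
    power-law (Pareto, shape beta) frequency-area distribution, capped at a
    fraction max_frac of the profile length."""
    cap = int(max_frac * len(bedrock_ages))
    grains = []
    for _ in range(n_slides):
        size = min(max(1, int(random.paretovariate(beta))), cap)
        start = random.randrange(len(bedrock_ages) - size + 1)
        grains.extend(bedrock_ages[start:start + size])
    return grains

sample = landslide_detrital_sample(100)
# A few large slides can dominate the sample, biasing the detrital age
# distribution relative to uniform erosion of the whole profile.
print(len(sample) >= 100)
```

Folding sediment residence time into such a sampler (mixing several past samples) is what gives the storage effect the abstract highlights.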
Castorina, P; Delsanto, P P; Guiot, C
2006-05-12
A classification in universality classes of broad categories of phenomenologies, belonging to physics and other disciplines, may be very useful for cross-fertilization among them and for the purpose of pattern recognition and interpretation of experimental data. We present here a simple scheme for the classification of nonlinear growth problems. The success of the scheme in predicting and characterizing the well known Gompertz, West, and logistic models suggests the study of a hitherto unexplored class of nonlinear growth problems.
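The three named growth laws can be checked with a few lines of Euler integration; all saturate at the same carrying capacity K (rate constants and initial size are illustrative):

```python
import math

K = 1.0  # asymptotic size / carrying capacity

logistic = lambda v: v * (1.0 - v / K)                 # dV/dt = aV(1 - V/K)
gompertz = lambda v: v * math.log(K / v)               # dV/dt = aV ln(K/V)
west = lambda v: v ** 0.75 * (1.0 - (v / K) ** 0.25)   # West 3/4-power law

def grow(rate_fn, v0=0.1, dt=0.001, steps=20000):
    """Euler-integrate dV/dt = rate_fn(V) for steps*dt time units."""
    v = v0
    for _ in range(steps):
        v += dt * rate_fn(v)
    return v

# All three classes share the fixed point V = K; they differ only in
# how the trajectory approaches it.
print(all(abs(grow(f) - K) < 0.02 for f in (logistic, gompertz, west)))
```

The classification scheme groups such models by the functional form of the growth term rather than by the shared asymptote.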
Advancements in dynamic kill calculations for blowout wells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kouba, G.E.; MacDougall, G.R.; Schumacher, B.W.
1993-09-01
This paper addresses the development, interpretation, and use of dynamic kill equations. To this end, three simple calculation techniques are developed for determining the minimum dynamic kill rate. Two techniques contain only single-phase calculations and are independent of reservoir inflow performance. Despite these limitations, these two methods are useful for bracketing the minimum flow rates necessary to kill a blowing well. For the third technique, a simplified mechanistic multiphase-flow model is used to determine a most-probable minimum kill rate.
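A bracketing calculation of the single-phase kind described might look like the following; the constant friction factor, fluid density, and well geometry are illustrative assumptions, not the paper's correlations:

```python
import math

def min_kill_rate(res_press_pa, depth_m, rho=1200.0, pipe_id=0.1, f=0.02):
    """Bracket the minimum dynamic kill rate for a single-phase kill fluid:
    find the smallest rate at which hydrostatic head plus frictional
    pressure loss (Darcy-Weisbach with a constant friction factor f)
    balances the reservoir pressure at bottomhole. Returns m^3/s."""
    g = 9.81
    hydrostatic = rho * g * depth_m
    if hydrostatic >= res_press_pa:
        return 0.0  # the static fluid column alone kills the well
    area = math.pi * pipe_id ** 2 / 4.0
    q = 0.0
    while True:
        q += 1e-4  # step the rate up in small increments (m^3/s)
        v = q / area
        friction = f * (depth_m / pipe_id) * rho * v ** 2 / 2.0
        if hydrostatic + friction >= res_press_pa:
            return q

# Overpressured example: friction must make up the hydrostatic deficit.
rate = min_kill_rate(res_press_pa=40e6, depth_m=3000.0)
print(rate > 0)
```

Because it ignores reservoir inflow and multiphase effects, such a number is an upper-bound-style bracket, which is exactly how the abstract positions the single-phase techniques.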
Rotation and plasma stability in the Fitzpatrick-Aydemir model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pustovitov, V. D.
2007-08-15
The rotational stabilization of the resistive wall modes (RWMs) is analyzed within the single-mode cylindrical Fitzpatrick-Aydemir model [R. Fitzpatrick, Phys. Plasmas 9, 3459 (2002)]. Here, the consequences of the Fitzpatrick-Aydemir dispersion relation are derived in terms of the observable growth rate and toroidal rotation frequency of the mode, which allows easy interpretation of the results and comparison with experimental observations. It is shown that this model, developed for the plasma with weak dissipation, predicts the rotational destabilization of RWM in the typical range of the RWM rotation. The model predictions are compared with those obtained in a similar model, but with the Boozer boundary conditions at the plasma surface [A. H. Boozer, Phys. Plasmas 11, 110 (2004)]. Simple experimental tests of the model are proposed.
Phase transitions in the q-voter model with noise on a duplex clique
NASA Astrophysics Data System (ADS)
Chmiel, Anna; Sznajd-Weron, Katarzyna
2015-11-01
We study a nonlinear q-voter model with stochastic noise, interpreted in the social context as independence, on a duplex network. To study the role of the multilevelness in this model we propose three methods of transferring the model from a mono- to a multiplex network. They take into account two criteria: one related to the status of independence (LOCAL vs GLOBAL) and one related to peer pressure (AND vs OR). In order to examine the influence of the presence of more than one level in the social network, we perform simulations on a particularly simple multiplex: a duplex clique, which consists of two fully overlapped complete graphs (cliques). Solving numerically the rate equation and simultaneously conducting Monte Carlo simulations, we provide evidence that even a simple rearrangement into a duplex topology may lead to significant changes in the observed behavior. However, qualitative changes in the phase transitions can be observed for only one of the considered rules: LOCAL&AND. For this rule the phase transition becomes discontinuous for q = 5, whereas for a monoplex such behavior is observed for q = 6. Interestingly, only this rule admits construction of realistic variants of the model, in line with recent social experiments.
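A Monte Carlo sketch of one plausible reading of the LOCAL&AND update is given below; the paper's exact rule definitions are not reproduced here, so the per-level suggestion logic, the parameter values, and the consensus initial condition are all assumptions for illustration:

```python
import random

random.seed(7)

def qvoter_duplex(n=100, q=5, p=0.2, steps=10000):
    """q-voter model with independence (noise p) on a duplex clique,
    under one plausible LOCAL&AND reading: each fully connected level
    produces a suggested opinion (a random opinion with probability p,
    otherwise the opinion of a unanimous q-panel, else the current one),
    and the agent switches only if both levels make the same suggestion."""
    spins = [1] * n          # start from consensus
    others = list(range(n))
    for _ in range(steps):
        i = random.randrange(n)
        suggestions = []
        for _level in range(2):
            if random.random() < p:   # independence acts on this level
                suggestions.append(random.choice((-1, 1)))
            else:                     # conformity needs a unanimous q-panel
                panel = random.sample(others[:i] + others[i + 1:], q)
                vals = {spins[j] for j in panel}
                suggestions.append(vals.pop() if len(vals) == 1 else spins[i])
        if suggestions[0] == suggestions[1]:  # AND: levels must agree
            spins[i] = suggestions[0]
    return sum(spins) / n

m = qvoter_duplex()
print(-1.0 <= m <= 1.0)
```

Scanning p for fixed q and looking for a jump in the stationary magnetization is how the continuous/discontinuous transition distinction would show up in such a simulation.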
Interpreting lateral dynamic weight shifts using a simple inverted pendulum model.
Kennedy, Michael W; Bretl, Timothy; Schmiedeler, James P
2014-01-01
Seventy-five young, healthy adults completed a lateral weight-shifting activity in which each shifted his/her center of pressure (CoP) to visually displayed target locations with the aid of visual CoP feedback. Each subject's CoP data were modeled using a single-link inverted pendulum system with a spring-damper at the joint. This extends the simple inverted pendulum model of static balance in the sagittal plane to lateral weight-shifting balance. The model controlled pendulum angle using PD control and a ramp setpoint trajectory, and weight-shifting was characterized by both shift speed and a non-minimum phase (NMP) behavior metric. This NMP behavior metric examines the force magnitude at shift initiation and provides weight-shifting balance performance information that parallels the examination of peak ground reaction forces in gait analysis. Control parameters were optimized on a subject-by-subject basis to match balance metrics for modeled results to metric values calculated from experimental data. Overall, the model matches experimental data well (average percent error of 0.35% for shifting speed and 0.05% for NMP behavior). These results suggest that the single-link inverted pendulum model can be used effectively to capture lateral weight-shifting balance, as it has been shown to model static balance. Copyright © 2014 Elsevier B.V. All rights reserved.
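The model structure described (single-link inverted pendulum, passive spring-damper at the joint, PD control of lean angle toward a ramp setpoint) can be sketched as below; all gains and anthropometric values are illustrative, not the fitted subject-specific parameters:

```python
import math

def simulate_shift(kp=8000.0, kd=600.0, t_end=3.0, dt=0.001):
    """Single-link inverted pendulum with a joint spring-damper, driven by
    PD control tracking a ramped lean-angle setpoint (a lateral weight
    shift). Returns the final lean angle (rad)."""
    m, cg_height, g = 70.0, 1.0, 9.81   # body mass (kg), CoM height (m)
    inertia = m * cg_height ** 2
    k_spring, c_damp = 300.0, 30.0      # passive joint stiffness/damping
    target, ramp_time = 0.1, 1.0        # setpoint amplitude (rad), ramp (s)
    theta, omega = 0.0, 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        ref = target * min(t / ramp_time, 1.0)
        ref_dot = target / ramp_time if t < ramp_time else 0.0
        torque = kp * (ref - theta) + kd * (ref_dot - omega)
        alpha = (m * g * cg_height * math.sin(theta) - k_spring * theta
                 - c_damp * omega + torque) / inertia
        omega += alpha * dt   # semi-implicit Euler step
        theta += omega * dt
    return theta

# The pendulum settles near (slightly past) the 0.1 rad target because
# gravity adds a destabilizing torque the finite-gain PD must fight.
print(abs(simulate_shift() - 0.1) < 0.02)
```

Optimizing kp and kd per subject so that shift speed and initiation-force metrics match the measured CoP traces is the fitting step the abstract describes.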
Relaxation processes in a low-order three-dimensional magnetohydrodynamics model
NASA Technical Reports Server (NTRS)
Stribling, Troy; Matthaeus, William H.
1991-01-01
The time asymptotic behavior of a Galerkin model of 3D magnetohydrodynamics (MHD) has been interpreted using the selective decay and dynamic alignment relaxation theories. A large number of simulations have been performed that scan a parameter space defined by the rugged ideal invariants, including energy, cross helicity, and magnetic helicity. It is concluded that the time asymptotic state can be interpreted as a relaxation to minimum energy. A simple decay model, based on absolute equilibrium theory, is found to predict a mapping of initial onto time asymptotic states, and to accurately describe the long time behavior of the runs when magnetic helicity is present. Attention is also given to two processes, operating on time scales shorter than selective decay and dynamic alignment, in which the ratio of kinetic to magnetic energy relaxes to values O(1). The faster of the two processes takes states initially dominant in magnetic energy to a state of near-equipartition between kinetic and magnetic energy through power law growth of kinetic energy. The other process takes states initially dominant in kinetic energy to the near-equipartitioned state through exponential growth of magnetic energy.
Global performance enhancements via pedestal optimisation on ASDEX Upgrade
NASA Astrophysics Data System (ADS)
Dunne, M. G.; Frassinetti, L.; Beurskens, M. N. A.; Cavedon, M.; Fietz, S.; Fischer, R.; Giannone, L.; Huijsmans, G. T. A.; Kurzan, B.; Laggner, F.; McCarthy, P. J.; McDermott, R. M.; Tardini, G.; Viezzer, E.; Willensdorfer, M.; Wolfrum, E.; The EUROfusion MST1 Team; The ASDEX Upgrade Team
2017-02-01
Results of experimental scans of heating power, plasma shape, and nitrogen content are presented, with a focus on global performance and pedestal alteration. In detailed scans at low triangularity, it is shown that the increase in stored energy due to nitrogen seeding stems from the pedestal. It is also shown that the confinement increase is driven through the temperature pedestal at the three heating power levels studied. In a triangularity scan, an orthogonal effect of shaping and seeding is observed, where increased plasma triangularity increases the pedestal density, while impurity seeding (carbon and nitrogen) increases the pedestal temperature in addition to this effect. Modelling of these effects was also undertaken, with interpretive and predictive models being employed. The interpretive analysis shows a general agreement of the experimental pedestals in separate power, shaping, and seeding scans with peeling-ballooning theory. Predictive analysis was used to isolate the individual effects, showing that the trends of additional heating power and increased triangularity can be recovered. However, a simple change of the effective charge in the plasma cannot explain the observed levels of confinement improvement in the present models.
Aerogel Algorithm for Shrapnel Penetration Experiments
NASA Astrophysics Data System (ADS)
Tokheim, R. E.; Erlich, D. C.; Curran, D. R.; Tobin, M.; Eder, D.
2004-07-01
To aid in assessing shrapnel produced by laser-irradiated targets, we have performed shrapnel collection "BB gun" experiments in aerogel and have developed a simple analytical model for deceleration of the shrapnel particles in the aerogel. The model is similar in approach to that of Anderson and Ahrens (J. Geophys. Res., 99 El, 2063-2071, Jan. 1994) and accounts for drag, aerogel compaction heating, and the velocity threshold for shrapnel ablation due to conductive heating. Model predictions are correlated with the BB gun results at impact velocities up to a few hundred m/s and with NASA data for impact velocities up to 6 km/s. The model shows promising agreement with the data and will be used to plan and interpret future experiments.
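A deceleration model of the stated form (aerodynamic drag plus a compaction-strength term) can be integrated in a few lines; the material numbers are rough assumptions, and the conductive-heating ablation threshold is omitted:

```python
import math

def penetration_depth(v0, radius=2e-3, rho_p=7800.0, rho_a=100.0,
                      cd=1.0, strength=1e5, dt=1e-6):
    """Depth (m) at which a spherical fragment of initial speed v0 (m/s)
    stops in aerogel, from m*dv/dt = -(0.5*cd*rho_a*v^2 + strength)*A:
    velocity-squared drag plus a constant compaction-strength resistance.
    All material values are illustrative assumptions."""
    area = math.pi * radius ** 2
    mass = rho_p * (4.0 / 3.0) * math.pi * radius ** 3
    v, x = v0, 0.0
    while v > 0.01:  # integrate until the fragment effectively stops
        decel = (0.5 * cd * rho_a * v * v + strength) * area / mass
        v -= decel * dt
        x += v * dt
    return x

# Faster fragments stop deeper, but only logarithmically so once
# drag dominates over the compaction strength.
d_slow, d_fast = penetration_depth(200.0), penetration_depth(500.0)
print(d_fast > d_slow > 0)
```

Inverting depth for impact velocity against calibration shots (the "BB gun" data) is how such a model becomes a shrapnel diagnostic.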
Dynamical systems approach to the study of a sociophysics agent-based model
NASA Astrophysics Data System (ADS)
Timpanaro, André M.; Prado, Carmen P. C.
2011-03-01
The Sznajd model is a Potts-like model that has been studied in the context of sociophysics [1,2] (where spins are interpreted as opinions). In a recent work [3], we generalized the Sznajd model to include asymmetric interactions between the spins (interpreted as biases towards opinions) and used dynamical systems techniques to tackle its mean-field version, given by the flow: η̇_σ = ∑_{σ'=1}^{M} η_σ η_{σ'} (η_σ ρ_{σ'→σ} − η_{σ'} ρ_{σ→σ'}), where η_σ is the proportion of agents with opinion (spin) σ, M is the number of opinions, and ρ_{σ→σ'} is the probability weight for an agent with opinion σ being convinced by another agent with opinion σ'. We made Monte Carlo simulations of the model in a complex network (using Barabási-Albert networks [4]) and they displayed the same attractors as the mean-field. Using linear stability analysis, we were able to determine the mean-field attractor structure analytically and to show that it has connections with well known graph theory problems (maximal independent sets and positive fluxes in directed graphs). Our dynamical systems approach is quite simple and can be used also in other models, like the voter model.
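The mean-field flow above can be integrated numerically in a few lines; the weight matrix and initial condition below are illustrative, not taken from the paper:

```python
def sznajd_flow(eta, rho, dt=0.01, steps=5000):
    """Euler-integrate the generalized Sznajd mean-field flow:
    d(eta_s)/dt = sum_{s'} eta_s*eta_{s'}*(eta_s*rho[s'][s] - eta_{s'}*rho[s][s']),
    where eta[s] is the fraction holding opinion s and rho[a][b] is the
    weight for opinion a being convinced toward opinion b."""
    m_op = len(eta)
    eta = list(eta)
    for _ in range(steps):
        d = [sum(eta[s] * eta[t] * (eta[s] * rho[t][s] - eta[t] * rho[s][t])
                 for t in range(m_op)) for s in range(m_op)]
        eta = [max(0.0, e + dt * de) for e, de in zip(eta, d)]
        total = sum(eta)
        eta = [e / total for e in eta]  # renormalize against numerical drift
    return eta

# Two opinions, symmetric weights: the initial majority takes over,
# while a perfectly balanced start sits on the unstable fixed point.
final = sznajd_flow([0.6, 0.4], [[0.0, 1.0], [1.0, 0.0]])
print(final[0] > 0.99)
```

The attractors of this flow (consensus states and their basins) are what the linear stability analysis in the abstract characterizes.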
The interpretation of polycrystalline coherent inelastic neutron scattering from aluminium
Roach, Daniel L.; Ross, D. Keith; Gale, Julian D.; Taylor, Jon W.
2013-01-01
A new approach to the interpretation and analysis of coherent inelastic neutron scattering from polycrystals (poly-CINS) is presented. This article describes a simulation of the one-phonon coherent inelastic scattering from a lattice model of an arbitrary crystal system. The one-phonon component is characterized by sharp features, determined, for example, by boundaries of the (Q, ω) regions where one-phonon scattering is allowed. These features may be identified with the same features apparent in the measured total coherent inelastic cross section, the other components of which (multiphonon or multiple scattering) show no sharp features. The parameters of the model can then be relaxed to improve the fit between model and experiment. This method is of particular interest where no single crystals are available. To test the approach, the poly-CINS has been measured for polycrystalline aluminium using the MARI spectrometer (ISIS), because both lattice dynamical models and measured dispersion curves are available for this material. The models used include a simple Lennard-Jones model fitted to the elastic constants of this material plus a number of embedded atom method force fields. The agreement obtained suggests that the method demonstrated should be effective in developing models for other materials where single-crystal dispersion curves are not available. PMID:24282332
Zhu, Ying; Tan, Tuck Lee
2016-04-15
An effective and simple analytical method using Fourier transform infrared (FTIR) spectroscopy to distinguish wild-grown high-quality Ganoderma lucidum (G. lucidum) from its cultivated counterpart is of essential importance for its quality assurance and medicinal value estimation. Commonly used chemical and analytical methods using the full spectrum are not so effective for the detection and interpretation due to the chemical complexity of the herbal medicine. In this study, two penalized discriminant analysis models, penalized linear discriminant analysis (PLDA) and elastic net (Elnet), using FTIR spectroscopy have been explored for the purpose of discrimination and interpretation. The classification performances of the two penalized models have been compared with two widely used multivariate methods, principal component discriminant analysis (PCDA) and partial least squares discriminant analysis (PLSDA). The Elnet model involving a combination of L1 and L2 norm penalties enabled an automatic selection of a small number of informative spectral absorption bands and gave an excellent classification accuracy of 99% for discrimination between spectra of wild-grown and cultivated G. lucidum. Its classification performance was superior to that of the PLDA model in a pure L1 setting and outperformed the PCDA and PLSDA models using the full spectrum. The well-performed selection of informative spectral features leads to substantial reduction in model complexity and improvement of classification accuracy, and it is particularly helpful for the quantitative interpretations of the major chemical constituents of G. lucidum regarding its anti-cancer effects. Copyright © 2016 Elsevier B.V. All rights reserved.
Sutton, Ann; Trudeau, Natacha; Morford, Jill; Rios, Monica; Poirier, Marie-Andrée
2010-01-01
Children who require augmentative and alternative communication (AAC) systems while they are in the process of acquiring language face unique challenges because they use graphic symbols for communication. In contrast to the situation of typically developing children, they use different modalities for comprehension (auditory) and expression (visual). This study explored the ability of three- and four-year-old children without disabilities to perform tasks involving sequences of graphic symbols. Thirty participants were asked to transpose spoken simple sentences into graphic symbols by selecting individual symbols corresponding to the spoken words, and to interpret graphic symbol utterances by selecting one of four photographs corresponding to a sequence of three graphic symbols. The results showed that these were not simple tasks for the participants, and few of them performed in the expected manner - only one in transposition, and only one-third of participants in interpretation. Individual response strategies in some cases led to contrasting response patterns. Children at this age level have not yet developed the skills required to deal with graphic symbols even though they have mastered the corresponding spoken language structures.
Enhancements to the Engine Data Interpretation System (EDIS)
NASA Technical Reports Server (NTRS)
Hofmann, Martin O.
1993-01-01
The Engine Data Interpretation System (EDIS) expert system project assists the data review personnel at NASA/MSFC in performing post-test data analysis and engine diagnosis of the Space Shuttle Main Engine (SSME). EDIS uses knowledge of the engine, its components, and simple thermodynamic principles instead of, and in addition to, heuristic rules gathered from the engine experts. EDIS reasons in cooperation with human experts, following roughly the pattern of logic exhibited by human experts. EDIS concentrates on steady-state static faults, such as small leaks, and component degradations, such as pump efficiencies. The objective of this contract was to complete the set of engine component models, integrate heuristic rules into EDIS, integrate the Power Balance Model into EDIS, and investigate modification of the qualitative reasoning mechanisms to allow 'fuzzy' value classification. The result of this contract is an operational version of EDIS. EDIS will become a module of the Post-Test Diagnostic System (PTDS) and will, in this context, provide system-level diagnostic capabilities which integrate component-specific findings provided by other modules.
Formal verification of a microcoded VIPER microprocessor using HOL
NASA Technical Reports Server (NTRS)
Levitt, Karl; Arora, Tejkumar; Leung, Tony; Kalvala, Sara; Schubert, E. Thomas; Windley, Philip; Heckman, Mark; Cohen, Gerald C.
1993-01-01
The Royal Signals and Radar Establishment (RSRE) and members of the Hardware Verification Group at Cambridge University conducted a joint effort to prove the correspondence between the electronic block model and the top level specification of Viper. Unfortunately, the proof became too complex and unmanageable within the given time and funding constraints, and is thus incomplete as of the date of this report. This report describes an independent attempt to use the HOL (Cambridge Higher Order Logic) mechanical verifier to verify Viper. Deriving from recent results in hardware verification research at UC Davis, the approach has been to redesign the electronic block model to make it microcoded and to structure the proof in a series of decreasingly abstract interpreter levels, the lowest being the electronic block level. The highest level is the RSRE Viper instruction set. Owing to the new approach and some results on the proof of generic interpreters as applied to simple microprocessors, this attempt required an effort approximately an order of magnitude less than the previous one.
Climatic impact of Amazon deforestation - a mechanistic model study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ning Zeng; Dickinson, R.E.; Xubin Zeng
1996-04-01
Recent general circulation model (GCM) experiments suggest a drastic change in the regional climate, especially the hydrological cycle, after hypothesized Amazon basinwide deforestation. To facilitate the theoretical understanding of such a change, we develop an intermediate-level model for tropical climatology, including atmosphere-land-ocean interaction. The model consists of linearized steady-state primitive equations with simplified thermodynamics. A simple hydrological cycle is also included. Special attention has been paid to land-surface processes. It generally better simulates tropical climatology and the ENSO anomaly than do many of the previous simple models. The climatic impact of Amazon deforestation is studied in the context of this model. Model results show a much weakened Atlantic Walker-Hadley circulation as a result of the existence of a strong positive feedback loop in the atmospheric circulation system and the hydrological cycle. The regional climate is highly sensitive to albedo change and sensitive to evapotranspiration change. The pure dynamical effect of surface roughness length on convergence is small, but the surface flow anomaly displays intriguing features. Analysis of the thermodynamic equation reveals that the balance between convective heating, adiabatic cooling, and radiation largely determines the deforestation response. Studies of the consequences of hypothetical continuous deforestation suggest that the replacement of forest by desert may be able to sustain a dry climate. Scaling analysis motivated by our modeling efforts also helps to interpret the common results of many GCM simulations. When a simple mixed-layer ocean model is coupled with the atmospheric model, the results suggest a 1°C decrease in SST gradient across the equatorial Atlantic Ocean in response to Amazon deforestation. The magnitude depends on the coupling strength. 66 refs., 16 figs., 4 tabs.
New Age of 3D Geological Modelling or Complexity is not an Issue Anymore
NASA Astrophysics Data System (ADS)
Mitrofanov, Aleksandr
2017-04-01
A geological model has significant value in almost all types of research related to regional mapping, geodynamics and especially to the structural and resource geology of mineral deposits. A well-developed geological model must take into account all vital features of the modelled object without over-simplification and should also adequately represent the interpretation of the geologist. In recent years, with the gradual exhaustion of deposits with relatively simple morphology, geologists all over the world have been faced with the necessity of building representative models for more and more structurally complex objects. Meanwhile, the set of tools used for this has not changed significantly in the last two to three decades. The most widespread method of wireframe geological modelling in use today was developed in the 1990s and is fully based on the engineering-design toolset (so-called CAD). Strings and polygons representing the section-based interpretation are used as an intermediate step in the process of wireframe generation. Despite the significant time required for this type of modelling, it can still provide sufficient results for simple and medium-complexity geological objects. However, with increasing complexity, more and more vital features of the deposit are sacrificed because of the fundamental inability (or much greater modelling time) of CAD-based explicit techniques to develop wireframes of the appropriate complexity. At the same time, an alternative technology, which is not based on the sectional approach and which uses fundamentally different mathematical algorithms, is being actively developed in a variety of other disciplines: medicine, advanced industrial design, and the game and cinema industries. In recent years this implicit technology has been developed for geological modelling purposes, and nowadays it is represented by a very powerful set of tools integrated into almost all major commercial software packages.
Implicit modelling makes it possible to develop geological models that genuinely correspond to complicated geological reality. Models can include fault blocking, complex structural trends and folding; they can be based on an extensive input dataset (such as dense drilling at the mining stage) or, on the other hand, on quite few drillhole intersections with significant input from the geological interpretation of the deposit. In any case, implicit modelling, if used correctly, allows the whole body of geological data to be incorporated and relatively quickly yields easily adjustable, flexible and robust geological wireframes that can serve as a reliable foundation for the following stages of geological investigation. In SRK practice, almost all the wireframe models used for structural and resource geology are now developed with implicit modelling tools, which has significantly increased the speed and quality of geological modelling.
The Productivity Dilemma in Workplace Health Promotion
Cherniack, Martin
2015-01-01
Background. Worksite-based programs to improve workforce health and well-being (Workplace Health Promotion, WHP) have been advanced as conduits for improved worker productivity and decreased health care costs. There has been a countervailing health-economics contention that return on investment (ROI) does not merit preventive health investment. Methods/Procedures. Pertinent studies were reviewed and results reconsidered. A simple economic model is presented based on conventional and alternate assumptions used in cost-benefit analysis (CBA), such as discounting and negative value. The issues are presented in the format of 3 conceptual dilemmas. Principal Findings. In some occupations, such as nursing, the utility of patient survival and staff health is undervalued. WHP may miss important components of work-related health risk. Altering assumptions on discounting and eliminating the drag of negative value radically change the CBA value. Significance. Simple monetization of a work life and calculation of return on workforce health investment as a simple alternate opportunity involve highly selective interpretations of productivity and utility. PMID:26380374
Lunar crater volumes - Interpretation by models of impact cratering and upper crustal structure
NASA Technical Reports Server (NTRS)
Croft, S. K.
1978-01-01
Lunar crater volumes can be divided by size into two general classes with distinctly different functional dependence on diameter. Craters smaller than approximately 12 km in diameter are morphologically simple and increase in volume as the cube of the diameter, while craters larger than about 20 km are complex and increase in volume at a significantly lower rate, implying shallowing. Ejecta and interior volumes are not identical, and their ratio, Schroeter's Ratio (SR), increases from about 0.5 for simple craters to about 1.5 for complex craters. The excess of ejecta volume causing this increase can be accounted for by a discontinuity in lunar crust porosity at 1.5-2 km depth. The diameter range of significant increase in SR corresponds with the diameter range of the transition from simple to complex crater morphology. This observation, combined with theoretical rebound calculations, indicates control of the transition diameter by the porosity structure of the upper crust.
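The two volume-diameter regimes described above can be sketched numerically. In the toy model below only the exponents follow the abstract (cubic for simple craters, a significantly lower exponent for complex ones); the prefactors, the complex-branch exponent of 2.3, and the treatment of the 12-20 km transition are illustrative assumptions.

```python
import math

# Piecewise power-law model of lunar crater interior volume V(D).
# Only the exponents follow the abstract (V ~ D^3 below ~12 km, a
# significantly lower exponent above ~20 km); prefactors, the complex
# exponent 2.3, and the transition treatment are illustrative.
C_SIMPLE, N_SIMPLE = 0.04, 3.0
C_COMPLEX, N_COMPLEX = 0.12, 2.3

def crater_volume(d_km):
    """Interior volume (km^3) versus rim diameter (km)."""
    if d_km <= 12.0:                      # morphologically simple craters
        return C_SIMPLE * d_km ** N_SIMPLE
    if d_km >= 20.0:                      # morphologically complex craters
        return C_COMPLEX * d_km ** N_COMPLEX
    # 12-20 km transition: log-linear blend of the two branches
    t = (d_km - 12.0) / 8.0
    lo = math.log(C_SIMPLE * 12.0 ** N_SIMPLE)
    hi = math.log(C_COMPLEX * 20.0 ** N_COMPLEX)
    return math.exp(lo + t * (hi - lo))

# A log-log slope over the simple branch recovers the cubic exponent:
slope = (math.log(crater_volume(8.0)) - math.log(crater_volume(4.0))) / \
        (math.log(8.0) - math.log(4.0))
print(round(slope, 2))
```

Fitting such slopes to measured interior and ejecta volumes separately is what makes the diameter dependence of Schroeter's Ratio visible.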
Experimental study of the oscillation of spheres in an acoustic levitator.
Andrade, Marco A B; Pérez, Nicolás; Adamowski, Julio C
2014-10-01
The spontaneous oscillation of solid spheres in a single-axis acoustic levitator is experimentally investigated by using a high-speed camera to record the position of the levitated sphere as a function of time. The oscillations in the axial and radial directions are systematically studied by changing the sphere density and the acoustic pressure amplitude. In order to interpret the experimental results, a simple model based on a spring-mass system is applied in the analysis of the sphere's oscillatory behavior. This model requires knowledge of the acoustic pressure distribution, which was obtained numerically by using a linear finite element method (FEM). Additionally, the linear acoustic pressure distribution obtained by FEM was compared with that measured with a laser Doppler vibrometer. The comparison between numerical and experimental pressure distributions shows good agreement for low values of pressure amplitude. When the pressure amplitude is increased, the acoustic pressure distribution becomes nonlinear, producing harmonics of the fundamental frequency. The experimental results for the spheres' oscillations at low pressure amplitudes are consistent with the results predicted by the simple spring-mass model.
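The spring-mass interpretation above can be illustrated with a minimal sketch: near the trapping point the acoustic radiation force is approximately linear in displacement, so the levitated sphere oscillates at the natural frequency of a harmonic oscillator. The density, radius, and stiffness below are assumed round numbers, not values from the study.

```python
import math

# Spring-mass picture of the axial oscillation of an acoustically
# levitated sphere: near the trap the radiation force is ~ -k*z, so the
# sphere is a harmonic oscillator with f = sqrt(k/m) / (2*pi).
# Density, radius and stiffness are assumed round numbers.
rho_sphere = 1200.0                              # kg/m^3 (polymer-like)
radius = 1.0e-3                                  # m
mass = rho_sphere * (4.0 / 3.0) * math.pi * radius ** 3

k = 0.05                                         # N/m, assumed trap stiffness
f_osc = math.sqrt(k / mass) / (2.0 * math.pi)    # natural frequency, Hz
print(f"axial oscillation frequency ~ {f_osc:.1f} Hz")
```

In the paper's approach the stiffness would come from the curvature of the FEM-computed (or measured) acoustic potential at the trapping point rather than being assumed.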
Mergers of Non-spinning Black-hole Binaries: Gravitational Radiation Characteristics
NASA Technical Reports Server (NTRS)
Baker, John G.; Boggs, William D.; Centrella, Joan; Kelly, Bernard J.; McWilliams, Sean T.; vanMeter, James R.
2008-01-01
We present a detailed descriptive analysis of the gravitational radiation from black-hole binary mergers of non-spinning black holes, based on numerical simulations of systems varying from equal-mass to a 6:1 mass ratio. Our primary goal is to present relatively complete information about the waveforms, including all the leading multipolar components, to interested researchers. In our analysis, we pursue the simplest physical description of the dominant features in the radiation, providing an interpretation of the waveforms in terms of an implicit rotating source. This interpretation applies uniformly to the full wavetrain, from inspiral through ringdown. We emphasize strong relationships among the l = m modes that persist through the full wavetrain. Exploring the structure of the waveforms in more detail, we conduct detailed analytic fitting of the late-time frequency evolution, identifying a key quantitative feature shared by the l = m modes among all mass-ratios. We identify relationships, with a simple interpretation in terms of the implicit rotating source, among the evolution of frequency and amplitude, which hold for the late-time radiation. These detailed relationships provide sufficient information about the late-time radiation to yield a predictive model for the late-time waveforms, an alternative to the common practice of modeling by a sum of quasinormal mode overtones. We demonstrate an application of this in a new effective-one-body-based analytic waveform model.
Mergers of nonspinning black-hole binaries: Gravitational radiation characteristics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, John G.; Centrella, Joan; Kelly, Bernard J.
2008-08-15
We present a detailed descriptive analysis of the gravitational radiation from black-hole binary mergers of nonspinning black holes, based on numerical simulations of systems varying from equal mass to a 6:1 mass ratio. Our primary goal is to present relatively complete information about the waveforms, including all the leading multipolar components, to interested researchers. In our analysis, we pursue the simplest physical description of the dominant features in the radiation, providing an interpretation of the waveforms in terms of an implicit rotating source. This interpretation applies uniformly to the full wave train, from inspiral through ringdown. We emphasize strong relationships among the l=m modes that persist through the full wave train. Exploring the structure of the waveforms in more detail, we conduct detailed analytic fitting of the late-time frequency evolution, identifying a key quantitative feature shared by the l=m modes among all mass ratios. We identify relationships, with a simple interpretation in terms of the implicit rotating source, among the evolution of frequency and amplitude, which hold for the late-time radiation. These detailed relationships provide sufficient information about the late-time radiation to yield a predictive model for the late-time waveforms, an alternative to the common practice of modeling by a sum of quasinormal mode overtones. We demonstrate an application of this in a new effective-one-body-based analytic waveform model.
Cylindrically symmetric Green's function approach for modeling the crystal growth morphology of ice.
Libbrecht, K G
1999-08-01
We describe a front-tracking Green's function approach to modeling cylindrically symmetric crystal growth. This method is simple to implement, and with little computer power can adequately model a wide range of physical situations. We apply the method to modeling the hexagonal prism growth of ice crystals, which is governed primarily by diffusion along with anisotropic surface kinetic processes. From ice crystal growth observations in air, we derive measurements of the kinetic growth coefficients for the basal and prism faces as a function of temperature, for supersaturations near the water saturation level. These measurements are interpreted in the context of a model for the nucleation and growth of ice, in which the growth dynamics are dominated by the structure of a disordered layer on the ice surfaces.
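The role of anisotropic surface kinetics can be sketched with a standard Hertz-Knudsen growth law, v = alpha(sigma) * v_kin * sigma, with a nucleation-limited attachment coefficient alpha = A * exp(-sigma0/sigma), a functional form commonly used for faceted ice growth. The parameter values below are assumed orders of magnitude, not the paper's measured coefficients.

```python
import math

# Hertz-Knudsen growth of a facet, v = alpha(sigma) * v_kin * sigma,
# with a nucleation-limited attachment coefficient
# alpha = A * exp(-sigma0 / sigma). All parameter values are assumed
# orders of magnitude, not the paper's measured coefficients.
V_KIN = 2.0e-4        # m/s per unit supersaturation (assumed scale)

def alpha(sigma, A=1.0, sigma0=0.02):
    """Attachment coefficient; strongly suppressed at low supersaturation."""
    return A * math.exp(-sigma0 / sigma)

def facet_velocity(sigma):
    """Facet growth velocity (m/s) at fractional supersaturation sigma."""
    return alpha(sigma) * V_KIN * sigma

for s in (0.005, 0.01, 0.02):
    print(f"sigma = {s:.3f}: v = {facet_velocity(s):.2e} m/s")
```

Giving the basal and prism faces different A and sigma0 values is one way such a kinetic law produces the habit anisotropy that the diffusion model must be coupled to.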
Self-sustained peristaltic waves: Explicit asymptotic solutions
NASA Astrophysics Data System (ADS)
Dudchenko, O. A.; Guria, G. Th.
2012-02-01
A simple nonlinear model for the coupled problem of fluid flow and contractile wall deformation is proposed to describe peristalsis. In the context of the model the ability of a transporting system to perform autonomous peristaltic pumping is interpreted as the ability to propagate sustained waves of wall deformation. Piecewise-linear approximations of nonlinear functions are used to analytically demonstrate the existence of traveling-wave solutions. Explicit formulas are derived which relate the speed of self-sustained peristaltic waves to the rheological properties of the transporting vessel and the transported fluid. The results may contribute to the development of diagnostic and therapeutic procedures for cases of peristaltic motility disorders.
Non-Linear Approach in Kinesiology Should Be Preferred to the Linear--A Case of Basketball.
Trninić, Marko; Jeličić, Mario; Papić, Vladan
2015-07-01
In kinesiology, medicine, biology and psychology, in which research focuses on dynamical self-organized systems, complex connections exist between variables. The non-linear nature of complex systems is discussed and explained using the example of non-linear anthropometric predictors of performance in basketball. Previous studies interpreted relations between anthropometric features and measures of effectiveness in basketball by (a) using linear correlation models, and by (b) including all basketball athletes in the same sample of participants regardless of their playing position. In this paper the significance and character of linear and non-linear relations between simple anthropometric predictors (AP) and performance criteria consisting of situation-related measures of effectiveness (SE) in basketball were determined and evaluated. The sample of participants consisted of top-level junior basketball players divided into three groups according to their playing time (8 minutes or more per game) and playing position: guards (N = 42), forwards (N = 26) and centers (N = 40). Linear and non-linear regression models were calculated simultaneously and separately for each group. The conclusion is clear: non-linear regressions are frequently superior to linear correlations when interpreting the actual logic of association among research variables.
A Two-dimensional Version of the Niblett-Bostick Transformation for Magnetotelluric Interpretations
NASA Astrophysics Data System (ADS)
Esparza, F.
2005-05-01
An imaging technique for two-dimensional magnetotelluric interpretations is developed following the well-known Niblett-Bostick transformation for one-dimensional profiles. The algorithm uses a Hopfield artificial neural network to process series and parallel magnetotelluric impedances along with their analytical influence functions. The adaptive, weighted-average approximation preserves part of the nonlinearity of the original problem. No initial model in the usual sense is required for the recovery of a functional model. Rather, the built-in relationship between model and data automatically considers, all at the same time, many half-spaces whose electrical conductivities vary according to the data. The use of series and parallel impedances, a self-contained pair of invariants of the impedance tensor, avoids the need to decide on the best angles of rotation for TE and TM separations. Field data from a given profile can thus be fed directly into the algorithm without much processing. The solutions offered by the Hopfield neural network correspond to spatial averages computed through rectangular windows that can be chosen at will. Applications of the algorithm to simple synthetic models and to the COPROD2 data set illustrate the performance of the approximation.
On the interpretation of domain averaged Fermi hole analyses of correlated wavefunctions.
Francisco, E; Martín Pendás, A; Costales, Aurora
2014-03-14
Few methods allow for a physically sound analysis of chemical bonds in cases where electron correlation may be a relevant factor. The domain averaged Fermi hole (DAFH) analysis, a tool first proposed by Robert Ponec in the 1990s to provide interpretations of the chemical bonding between two fragments Ω and Ω' that divide real space exhaustively, is one of them. This method allows for a partition of the delocalization index, or bond order, between Ω and Ω' into one-electron contributions, but the chemical interpretation of its parameters has been firmly established only for single-determinant wavefunctions. In this paper we report a general interpretation based on the concept of excluded density that is also valid for correlated descriptions. Both analytical models and actual computations on a set of simple molecules (H2, N2, LiH, and CO) are discussed, and a classification of the possible DAFH situations is presented. Our results show that this kind of analysis may reveal several correlation-assisted bonding patterns that might be difficult to detect using other methods. In agreement with previous knowledge, we find that the effective bond order in covalent links decreases due to localization of electrons driven by Coulomb correlation.
METALLICITY GRADIENTS THROUGH DISK INSTABILITY: A SIMPLE MODEL FOR THE MILKY WAY'S BOXY BULGE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martinez-Valpuesta, Inma; Gerhard, Ortwin, E-mail: imv@mpe.mpg.de, E-mail: gerhard@mpe.mpg.de
2013-03-20
Observations show a clear vertical metallicity gradient in the Galactic bulge, which is often taken as a signature of dissipative processes in the formation of a classical bulge. Various evidence shows, however, that the Milky Way is a barred galaxy with a boxy bulge representing the inner three-dimensional part of the bar. Here we show with a secular evolution N-body model that a boxy bulge formed through bar and buckling instabilities can show vertical metallicity gradients similar to the observed gradient if the initial axisymmetric disk had a comparable radial metallicity gradient. In this framework, the range of metallicities in bulge fields constrains the chemical structure of the Galactic disk at early times before bar formation. Our secular evolution model was previously shown to reproduce inner Galaxy star counts and we show here that it also has cylindrical rotation. We use it to predict a full mean metallicity map across the Galactic bulge from a simple metallicity model for the initial disk. This map shows a general outward gradient on the sky as well as longitudinal perspective asymmetries. We also briefly comment on interpreting metallicity gradient observations in external boxy bulges.
NMR signals within the generalized Langevin model for fractional Brownian motion
NASA Astrophysics Data System (ADS)
Lisý, Vladimír; Tóthová, Jana
2018-03-01
The methods of Nuclear Magnetic Resonance are among the best-developed and most frequently used tools for studying the random motion of particles in different systems, including soft biological tissues. In the long-time limit the current mathematical description of the experiments allows proper interpretation of measurements of normal and anomalous diffusion. The shorter-time dynamics, however, is correctly considered only in a few works that do not go beyond the standard memoryless Langevin description of Brownian motion (BM). In the present work, the attenuation function S(t) for an ensemble of spin-bearing particles in a magnetic-field gradient, expressed in a form applicable to any kind of stationary stochastic dynamics of spins with or without memory, is calculated within the model of fractional BM. The solution of the model for particles trapped in a harmonic potential is obtained in an exceedingly simple way and used for the calculation of S(t). In the limit of free particles coupled to a fractal heat bath, the results compare favorably with experiments acquired in human neuronal tissues. The effect of the trap is demonstrated by introducing a simple model for the generalized diffusion coefficient of the particle.
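For reference, the memoryless limit that the abstract generalizes has a classic closed form: for free normal diffusion in a constant gradient, ln S(t) = -(1/3) gamma^2 g^2 D t^3. A sketch with illustrative values (proton gyromagnetic ratio, water-like D, and an assumed gradient strength):

```python
import math

# Free-diffusion attenuation in a constant magnetic-field gradient
# (the standard memoryless-Langevin limit):
#   ln S(t) = -(gamma * g)^2 * D * t^3 / 3.
# gamma is the proton gyromagnetic ratio; g and D are illustrative.
GAMMA = 2.675e8       # rad s^-1 T^-1, proton
G = 0.05              # T/m, assumed gradient strength
D = 2.0e-9            # m^2/s, water-like self-diffusion coefficient

def attenuation(t):
    """Normalized signal S(t)/S(0) for free (normal) diffusion."""
    return math.exp(-((GAMMA * G) ** 2) * D * t ** 3 / 3.0)

for t in (0.005, 0.01, 0.02):
    print(f"t = {t*1e3:.0f} ms: S = {attenuation(t):.3f}")
```

The fractional-BM treatment in the paper replaces the t^3 behavior with one set by the memory kernel, which is what shorter-time measurements are sensitive to.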
Using energy budgets to combine ecology and toxicology in a mammalian sentinel species
NASA Astrophysics Data System (ADS)
Desforges, Jean-Pierre W.; Sonne, Christian; Dietz, Rune
2017-04-01
Process-driven modelling approaches can resolve many of the shortcomings of traditional descriptive and non-mechanistic toxicology. We developed a simple dynamic energy budget (DEB) model for the mink (Mustela vison), a sentinel species in mammalian toxicology, which coupled animal physiology, ecology and toxicology, in order to mechanistically investigate the accumulation and adverse effects of lifelong dietary exposure to persistent environmental toxicants, most notably polychlorinated biphenyls (PCBs). Our novel mammalian DEB model accurately predicted, based on energy allocations to the interconnected metabolic processes of growth, development, maintenance and reproduction, lifelong patterns in mink growth, reproductive performance and dietary accumulation of PCBs as reported in the literature. Our model results were consistent with empirical data from captive and free-ranging studies in mink and other wildlife and suggest that PCB exposure can have significant population-level impacts resulting from targeted effects on fetal toxicity, kit mortality and growth and development. Our approach provides a simple and cross-species framework to explore the mechanistic interactions of physiological processes and ecotoxicology, thus allowing for a deeper understanding and interpretation of stressor-induced adverse effects at all levels of biological organization.
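The flavor of DEB-style bookkeeping can be conveyed with a caricature of the kappa rule: a fixed fraction of assimilated energy goes to growth plus maintenance, the remainder to reproduction, while a persistent toxicant accumulates with food intake. This is a generic sketch, not the authors' mink model; every parameter is hypothetical.

```python
# Caricature of a kappa-rule dynamic energy budget with dietary uptake of
# a persistent toxicant. Not the authors' mink model: every parameter
# below is hypothetical and chosen only to illustrate the bookkeeping.
KAPPA = 0.8            # fraction of assimilated energy routed to soma
ASSIM = 500.0          # kJ/day assimilated from food
MAINT_PER_KG = 300.0   # kJ/day somatic maintenance per kg body mass
GROWTH_COST = 7000.0   # kJ per kg of new tissue

mass = 0.5             # kg, starting body mass
body_burden = 0.0      # mg of toxicant accumulated
PCB_PER_DAY = 0.05     # mg/day ingested with food (no elimination assumed)

for day in range(365):
    maintenance = MAINT_PER_KG * mass
    growth_flux = max(KAPPA * ASSIM - maintenance, 0.0)  # maintenance first
    mass += growth_flux / GROWTH_COST
    body_burden += PCB_PER_DAY

print(f"mass after 1 yr: {mass:.2f} kg; burden: {body_burden:.1f} mg")
```

Growth stalls where KAPPA * ASSIM equals maintenance (here at about 1.33 kg), while the burden grows linearly, so tissue concentration keeps rising after growth stops, one mechanistic route to the lifelong accumulation patterns the abstract describes.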
NASA Astrophysics Data System (ADS)
Lechner, H. N.; Waite, G. P.; Wauthier, D. C.; Escobar-Wolf, R. P.; Lopez-Hetland, B.
2017-12-01
Geodetic data from an eight-station GPS network at Pacaya volcano, Guatemala, allow us to produce a simple analytical model of deformation sources associated with the 2010 eruption and the eruptive period in 2013-2014. Deformation signals for both eruptive time periods indicate downward vertical and outward horizontal motion at several stations surrounding the volcano. The objective of this research was to better understand the magmatic plumbing system and the sources of this deformation. Because this down-and-out displacement is difficult to explain with a single source, we chose a model that includes a combination of a dike and a spherical source. Our modelling suggests that deformation is dominated by inflation of a shallow dike seated high within the volcanic edifice and deflation of a deeper, spherical source below the SW flank of the volcano. The source parameters for the dike are in good agreement with the observed orientation of recent vent emplacements on the edifice as well as with the horizontal displacement, while the parameters for the deeper spherical source accommodate the downward vertical motion. This study presents GPS observations at Pacaya dating back to 2009 and provides a glimpse of simple models of possible deformation sources.
A Fuzzy Cognitive Model of aeolian instability across the South Texas Sandsheet
NASA Astrophysics Data System (ADS)
Houser, C.; Bishop, M. P.; Barrineau, C. P.
2014-12-01
Characterization of aeolian systems is complicated by rapidly changing surface-process regimes, spatio-temporal scale dependencies, and subjective interpretation of imagery and spatial data. This paper describes the development and application of analytical reasoning to quantify instability of an aeolian environment using scale-dependent information coupled with conceptual knowledge of process and feedback mechanisms. Specifically, a simple Fuzzy Cognitive Model (FCM) for aeolian landscape instability was developed that represents conceptual knowledge of key biophysical processes and feedbacks. Model inputs include satellite-derived surface biophysical and geomorphometric parameters. FCMs are a knowledge-based Artificial Intelligence (AI) technique that merges fuzzy logic and neural computing in which knowledge or concepts are structured as a web of relationships that is similar to both human reasoning and the human decision-making process. Given simple process-form relationships, the analytical reasoning model is able to map the influence of land management practices and the geomorphology of the inherited surface on aeolian instability within the South Texas Sandsheet. Results suggest that FCMs can be used to formalize process-form relationships and information integration analogous to human cognition with future iterations accounting for the spatial interactions and temporal lags across the sand sheets.
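The FCM machinery itself is compact: concept activations in [0, 1], a signed weight matrix of causal links, and a sigmoid update iterated to a fixed point. The three concepts and weights below are hypothetical stand-ins for the satellite-derived biophysical inputs used in the study.

```python
import math

# Minimal Fuzzy Cognitive Map: concept activations in [0, 1], a signed
# weight matrix of causal links, and a sigmoid update iterated to a
# fixed point. Concepts and weights are hypothetical stand-ins.
CONCEPTS = ["vegetation cover", "grazing pressure", "aeolian instability"]
W = [
    [0.0,  0.0, -0.8],   # vegetation suppresses instability
    [-0.6, 0.0,  0.4],   # grazing removes vegetation, adds instability
    [0.0,  0.0,  0.0],   # instability treated as an output here
]                        # W[i][j]: influence of concept i on concept j

def squash(x):
    return 1.0 / (1.0 + math.exp(-(x - 0.5)))

def step(a):
    """One FCM update: A(t+1) = f(A(t) + A(t) . W)."""
    n = len(a)
    return [squash(a[j] + sum(a[i] * W[i][j] for i in range(n)))
            for j in range(n)]

a = [0.9, 0.7, 0.2]      # scenario: good cover but heavy grazing
for _ in range(60):      # iterate until the map settles
    a = step(a)
print({c: round(v, 2) for c, v in zip(CONCEPTS, a)})
```

Running the map per pixel with remotely sensed activations is what turns this reasoning web into a spatial instability surface.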
Stochastic oscillations in models of epidemics on a network of cities
NASA Astrophysics Data System (ADS)
Rozhnova, G.; Nunes, A.; McKane, A. J.
2011-11-01
We carry out an analytic investigation of stochastic oscillations in a susceptible-infected-recovered model of disease spread on a network of n cities. In the model a fraction fjk of individuals from city k commute to city j, where they may infect, or be infected by, others. Starting from a continuous-time Markov description of the model the deterministic equations, which are valid in the limit when the population of each city is infinite, are recovered. The stochastic fluctuations about the fixed point of these equations are derived by use of the van Kampen system-size expansion. The fixed point structure of the deterministic equations is remarkably simple: A unique nontrivial fixed point always exists and has the feature that the fraction of susceptible, infected, and recovered individuals is the same for each city irrespective of its size. We find that the stochastic fluctuations have an analogously simple dynamics: All oscillations have a single frequency, equal to that found in the one-city case. We interpret this phenomenon in terms of the properties of the spectrum of the matrix of the linear approximation of the deterministic equations at the fixed point.
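The deterministic (infinite-population) limit of the commuting model can be sketched directly, and it reproduces the fixed-point property quoted above: the equilibrium fractions come out the same for both cities despite unequal sizes. Parameter values and the commuting matrix are illustrative assumptions.

```python
# Deterministic (infinite-population) limit of a two-city SIR model with
# commuting: residents of city k spend a fraction F[j][k] of their time
# in city j. Parameters and the commuting matrix are illustrative.
BETA, GAMMA_R, MU = 0.5, 0.1, 0.01   # infection, recovery, birth/death rates
N = [1.0e6, 2.0e5]                   # unequal city sizes
F = [[0.9, 0.2],                     # F[j][k]; each column sums to 1
     [0.1, 0.8]]

def derivs(s, i):
    # infected persons and total persons physically present in each city
    pres_i = [sum(F[j][k] * i[k] * N[k] for k in range(2)) for j in range(2)]
    pres_n = [sum(F[j][k] * N[k] for k in range(2)) for j in range(2)]
    lam = [BETA * pres_i[j] / pres_n[j] for j in range(2)]  # force of infection
    force = [sum(F[j][k] * lam[j] for j in range(2)) for k in range(2)]
    ds = [MU - (force[k] + MU) * s[k] for k in range(2)]
    di = [force[k] * s[k] - (GAMMA_R + MU) * i[k] for k in range(2)]
    return ds, di

s, i = [0.99, 0.999], [0.01, 0.001]  # per-capita fractions by home city
dt = 0.2
for _ in range(50000):               # Euler-integrate to the endemic state
    ds, di = derivs(s, i)
    s = [s[k] + dt * ds[k] for k in range(2)]
    i = [i[k] + dt * di[k] for k in range(2)]
print([round(v, 4) for v in s], [round(v, 4) for v in i])
```

Both cities settle at the same fractions regardless of size: s* = (GAMMA_R + MU) / BETA and i* = MU(1 - s*)/(GAMMA_R + MU), the homogeneous fixed point whose linearization carries the single oscillation frequency.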
Yurkin, Alexander; Tozzi, Arturo; Peters, James F; Marijuán, Pedro C
2017-12-01
The present Addendum complements the accompanying paper "Cellular Gauge Symmetry and the Li Organization Principle"; it illustrates a recently-developed geometrical physical model able to assess electronic movements and energetic paths in atomic shells. The model describes a multi-level system of circular, wavy and zigzag paths which can be projected onto a horizontal tape. This model ushers in a visual interpretation of the distribution of atomic electrons' energy levels and the corresponding quantum numbers through rather simple tools, such as compasses, rulers and straightforward calculations. Here we show how this geometrical model, with the due corrections, among them the use of geodetic curves, might be able to describe and quantify the structure and the temporal development of countless physical and biological systems, from Langevin equations for random paths, to symmetry breaks occurring ubiquitously in physical and biological phenomena, to the relationships among different frequencies of EEG electric spikes. Therefore, in our work we explore the possible association of binomial distribution and geodetic curves configuring a uniform approach for the research of natural phenomena, in biology, medicine or the neurosciences.
NASA Astrophysics Data System (ADS)
Beniest, A.; Koptev, A.; Leroy, S. D.
2016-12-01
Anomalous features along the South American and African rifted margins, at depth and at the surface, have been recognised with gravity and magnetic modelling. They include high-velocity/high-density bodies at lower crustal levels and topography variations that are usually interpreted as aborted rifts. We present fully coupled lithosphere-scale numerical models that permit us to explain both features in a relatively simple framework of interaction between a rheologically stratified continental lithosphere and an active mantle plume. We used 2D and 3D numerical models to investigate the impact of the thermo-rheological structure of the continental lithosphere and the initial plume position on continental rifting and breakup processes. Based on the results of our 2D experiments, three main types of continental break-up are revealed: A) mantle plume-induced break-up directly located above the centre of the mantle anomaly; B) mantle plume-induced break-up displaced 50 to 250 km from the initial plume location; and C) self-induced break-up due to convection and/or slab subduction/delamination, considerably shifted (300 to 800 km) from the initial plume position. With our 3D, laterally homogeneous initial setup, we show that a complex system, with the axis of continental break-up shifted hundreds of kilometres from the original plume location, can arise spontaneously from simple and perfectly symmetric preliminary settings. Our modelling demonstrates that fragments of a laterally migrating plume head become glued to the base of the lithosphere and remain on both sides of the newly formed oceanic basin after continental break-up. Underplated plume material soldered into the lower parts of the lithosphere can be interpreted as the high-velocity/high-density magmatic bodies at lower crustal levels. In the very early stages of rifting, the first impingement of the vertically upwelling mantle plume on the lithospheric base leads to surface topographic variations. 
Given the shifted position of the final spreading centre with respect to initial plume position, these topographic variations resemble aborted rifts that are observed on passive margins. Lastly, after continuous extension and transition to the spreading state, strain rate relocalizations develop that can be interpreted as ridge jumps that are commonly observed in nature.
A Simple Mathematical Model for Standard Model of Elementary Particles and Extension Thereof
NASA Astrophysics Data System (ADS)
Sinha, Ashok
2016-03-01
An algebraically (and geometrically) simple model representing the masses of the elementary particles in terms of the interaction (strong, weak, electromagnetic) constants is developed, including the Higgs bosons. The predicted Higgs boson mass is identical to that discovered by the LHC experimental programs, while the possibility of additional Higgs bosons (and their masses) is indicated. The model can be analyzed to explain and resolve many puzzles of particle physics and cosmology, including the neutrino masses and mixing; the origin of the proton mass and the mass difference between the proton and the neutron; the big bang and cosmological inflation; the Hubble expansion; etc. A novel interpretation of the model in terms of quaternions and rotation in the six-dimensional space of the elementary particle interaction-space, or, equivalently, in six-dimensional spacetime, is presented. Interrelations among particle masses are derived theoretically. A new approach for defining the interaction parameters, leading to an elegant and symmetrical diagram, is delineated. Generalization of the model to include supersymmetry is illustrated without recourse to complex mathematical formulation and free from any ambiguity. This Abstract represents some results of the Author's independent theoretical research in particle physics, with possible connection to superstring theory. However, only very elementary mathematics and physics are used in the presentation.
Interpreting high time resolution galactic cosmic ray observations in a diffusive context
NASA Astrophysics Data System (ADS)
Jordan, A.; Spence, H. E.; Blake, J. B.; Shaul, D. A.
2009-12-01
We interpret galactic cosmic ray (GCR) variations near Earth within a diffusive context. The variations occur on time-/size-scales ranging from Forbush decreases (Fds), to substructure embedded within Fds, to smaller amplitude and shorter duration variations during relatively benign interplanetary conditions. We use high time resolution GCR observations from the High Sensitivity Telescope (HIST) on Polar and from the Spectrometer for INTEGRAL (SPI) and also use solar wind plasma and magnetic field observations from ACE and/or Wind. To calculate the coefficient of diffusion, we combine these datasets with a simple convection-diffusion model for relativistic charged particles in a magnetic field. We find reasonable agreement between our and previous estimates of the coefficient. We also show whether changes in the coefficient of diffusion are sufficient to explain the above GCR variations.
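As a generic illustration of how a convection-diffusion balance yields a diffusion coefficient (a simplification, not the authors' exact model): in steady state, outward convection at the solar-wind speed V against inward diffusion with coefficient kappa gives a density profile proportional to exp(-V (R - r) / kappa), so a measured Forbush-decrease depth over a known radial extent fixes kappa. All numbers below are assumptions.

```python
import math

# Steady-state convection-diffusion balance (a generic simplification):
# outward convection at solar-wind speed V against inward diffusion with
# coefficient kappa gives n(r) ~ exp(-V * (R - r) / kappa), so a measured
# Forbush-decrease depth over a known radial extent L fixes kappa.
# All numbers below are assumptions.
V = 450.0e3               # m/s, solar-wind speed during the disturbance
AU = 1.496e11             # m
L = 0.5 * AU              # assumed radial extent of the disturbed region
DEPRESSION = 0.95         # GCR count rate inside / outside (a 5% decrease)

kappa = V * L / (-math.log(DEPRESSION))   # m^2/s
print(f"kappa ~ {kappa:.2e} m^2/s")
```

The order of magnitude that comes out of such estimates is what gets compared against previous determinations of the diffusion coefficient.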
Fortuna Tessera, Venus - Evidence of horizontal convergence and crustal thickening
NASA Technical Reports Server (NTRS)
Vorder Bruegge, R. W.; Head, J. W.
1989-01-01
Structural and tectonic patterns mapped in Fortuna Tessera are interpreted to reflect a change in the style and intensity of deformation from east to west, beginning with simple tessera terrain at relatively low topographic elevations in the east and progressing through increasingly complex deformation patterns and higher topography to Maxwell Montes in the west. These morphologic and topographic patterns are consistent with east-to-west convergence and compression, and the increasing elevations are interpreted to be due to crustal thickening processes associated with the convergent deformational environment. An Airy isostatic model predicts crustal thicknesses of approximately 35 km for the initial tessera terrain and of over 100 km for the Maxwell Montes region. Detailed mapping with Magellan data will permit the deconvolution of individual components and structures in this terrain.
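The Airy crustal-thickness estimate quoted in this abstract follows from a simple buoyancy balance: topography of height h is compensated by a crustal root of thickness r = h * rho_crust / (rho_mantle - rho_crust). A minimal sketch of that balance (the densities and reference thickness here are illustrative assumptions, not values taken from the paper):

```python
def airy_root(elevation_km, rho_crust=2.9, rho_mantle=3.3):
    """Thickness (km) of the crustal root compensating a given elevation
    under Airy isostasy: r = h * rho_c / (rho_m - rho_c)."""
    return elevation_km * rho_crust / (rho_mantle - rho_crust)

def total_crust(elevation_km, reference_crust_km=35.0,
                rho_crust=2.9, rho_mantle=3.3):
    """Total crustal thickness: reference crust plus elevation plus root."""
    return (reference_crust_km + elevation_km
            + airy_root(elevation_km, rho_crust, rho_mantle))
```

With these assumed numbers, terrain standing roughly 9 km above the reference level would imply a total crustal thickness above 100 km, consistent in spirit with the Maxwell Montes estimate.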
Applied statistics in ecology: common pitfalls and simple solutions
E. Ashley Steel; Maureen C. Kennedy; Patrick G. Cunningham; John S. Stanovick
2013-01-01
The most common statistical pitfalls in ecological research are those associated with data exploration, the logic of sampling and design, and the interpretation of statistical results. Although one can find published errors in calculations, the majority of statistical pitfalls result from incorrect logic or interpretation despite correct numerical calculations. There...
Simple atmospheric perturbation models for sonic-boom-signature distortion studies
NASA Technical Reports Server (NTRS)
Ehernberger, L. J.; Wurtele, Morton G.; Sharman, Robert D.
1994-01-01
Sonic-boom propagation from flight level to ground is influenced by wind and speed-of-sound variations resulting from temperature changes in both the mean atmospheric structure and small-scale perturbations. Meteorological behavior generally produces complex combinations of atmospheric perturbations in the form of turbulence, wind shears, up- and down-drafts and various wave behaviors. Differences between the speed of sound at the ground and at flight level will influence the threshold flight Mach number for which the sonic boom first reaches the ground as well as the width of the resulting sonic-boom carpet. Mean atmospheric temperature and wind structure as a function of altitude vary with location and time of year. These average properties of the atmosphere are well-documented and have been used in many sonic-boom propagation assessments. In contrast, smaller scale atmospheric perturbations are also known to modulate the shape and amplitude of sonic-boom signatures reaching the ground, but specific perturbation models have not been established for evaluating their effects on sonic-boom propagation. The purpose of this paper is to present simple examples of atmospheric vertical temperature gradients, wind shears, and wave motions that can guide preliminary assessments of nonturbulent atmospheric perturbation effects on sonic-boom propagation to the ground. The use of simple discrete atmospheric perturbation structures can facilitate the interpretation of the resulting sonic-boom propagation anomalies as well as intercomparisons among varied flight conditions and propagation models.
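The threshold-Mach effect described above can be sketched from the ideal-gas sound speed a = sqrt(gamma * R * T): in a still atmosphere, a boom first reaches the ground roughly when the flight Mach number exceeds the ratio of ground to flight-level sound speed. This is a simplified textbook estimate, not the propagation model used in the paper:

```python
import math

def speed_of_sound(T_kelvin, gamma=1.4, R=287.05):
    """Speed of sound in dry air (m/s) from absolute temperature."""
    return math.sqrt(gamma * R * T_kelvin)

def cutoff_mach(T_flight, T_ground):
    """Approximate threshold flight Mach number for the boom to reach
    the ground in a still atmosphere: a_ground / a_flight."""
    return speed_of_sound(T_ground) / speed_of_sound(T_flight)
```

For a standard-atmosphere tropopause (about 216.65 K) and sea level (288.15 K), this gives a cutoff Mach number of roughly 1.15.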
NONMEMory: a run management tool for NONMEM.
Wilkins, Justin J
2005-06-01
NONMEM is an extremely powerful tool for nonlinear mixed-effect modelling and simulation of pharmacokinetic and pharmacodynamic data. However, it is a console-based application whose output does not lend itself to rapid interpretation or efficient management. NONMEMory has been created to be a comprehensive project manager for NONMEM, providing detailed summary, comparison and overview of the runs comprising a given project, including the display of output data, simple post-run processing, fast diagnostic plots and run output management, complementary to other available modelling aids. Analysis time ought not to be spent on trivial tasks, and NONMEMory's role is to eliminate these as far as possible by increasing the efficiency of the modelling process. NONMEMory is freely available from http://www.uct.ac.za/depts/pha/nonmemory.php.
Automated Discovery and Modeling of Sequential Patterns Preceding Events of Interest
NASA Technical Reports Server (NTRS)
Rohloff, Kurt
2010-01-01
The integration of emerging data manipulation technologies has enabled a paradigm shift in practitioners' abilities to understand and anticipate events of interest in complex systems. Example events of interest include outbreaks of socio-political violence in nation-states. Rather than relying on human-centric modeling efforts that are limited by the availability of subject-matter experts (SMEs), automated data processing technologies have enabled the development of innovative automated complex system modeling and predictive analysis technologies. We introduce one such emerging modeling technology: the sequential pattern methodology. We have applied the sequential pattern methodology to automatically identify patterns of observed behavior that precede outbreaks of socio-political violence such as riots, rebellions, and coups in nation-states. The sequential pattern methodology is a groundbreaking approach to automated complex system model discovery because it generates easily interpretable patterns based on direct observations of sampled factor data, providing a deeper understanding of societal behaviors that is tolerant of observation noise and missing data. The discovered patterns are simple to interpret and mimic human identification of observed trends in temporal data. Discovered patterns also provide an automated forecasting ability: we discuss an example of using discovered patterns coupled with a rich data environment to forecast various types of socio-political violence in nation-states.
NASA Astrophysics Data System (ADS)
Durang, Xavier; Henkel, Malte
2017-12-01
Motivated by an analogy with the spherical model of a ferromagnet, the three Arcetri models are defined. They present new universality classes, either for the growth of interfaces or for lattice gases, distinct from the common Edwards-Wilkinson and Kardar-Parisi-Zhang universality classes. Their non-equilibrium evolution can be studied through the exact computation of their two-time correlators and responses. In both interpretations, the first model has a critical point in any dimension and shows simple ageing at and below criticality; the exact universal exponents are found. The second and third models are solved at zero temperature, in one dimension, where both show logarithmic sub-ageing, of which several distinct types are identified. Physically, the second model describes a lattice gas and the third describes interface growth. A clear physical picture of the successive time and length scales of the sub-ageing process emerges.
High-Performance Liquid Chromatography in the Undergraduate Chemical Engineering Laboratory
ERIC Educational Resources Information Center
Frey, Douglas D.; Guo, Hui; Karnik, Nikhila
2013-01-01
This article describes the assembly of a simple, low-cost, high-performance liquid chromatography (HPLC) system and its use in the undergraduate chemical engineering laboratory course to perform simple experiments. By interpreting the results from these experiments students are able to gain significant experience in the general method of…
Simple Techniques for Microclimate Measurement.
ERIC Educational Resources Information Center
Unwin, D. M.
1978-01-01
Describes simple ways of measuring the very local climate near the ground, and explains what these measurements mean. Equipment included a solar radiometer, a dew point instrument, and a thermocouple psychrometer. Examples are given of field measurements taken with some of the equipment and the results and their interpretation are discussed.…
A Gauge-generalized Solution for Non-Keplerian Motion in the Frenet-Serret Frame
NASA Astrophysics Data System (ADS)
Garber, Darren D.
2009-05-01
The customary modeling of perturbed planetary and spacecraft motion as a continuous sequence of unperturbed two-body orbits (instantaneous ellipses) is conveniently assigned a physical interpretation through the Keplerian and Delaunay elements and complemented mathematically by the Lagrange-type equations which describe the evolution of these variables. If, however, the actual motion is very non-Keplerian (i.e., the perturbed orbit varies greatly from a two-body orbit), then its modeling by a sequence of conics is not necessarily optimal in terms of its mathematical description and its resulting physical interpretation. Since, in principle, a curve of any type can be represented as a sequence of points from a family of curves of any other type (Efroimsky 2005), alternate non-conic curves can be utilized to better describe the perturbed non-Keplerian motion of the body, both mathematically and with a physically relevant interpretation. Non-Keplerian motion exists in both celestial mechanics and astrodynamics, as evidenced by the complex interactions within star clusters and by spacecraft accelerating via ion propulsion, solar sails, and electro-dynamic tethers. For these cases, the sequence of simple orbits used to describe the motion is based not on conics but on a family of spirals. The selection of spirals as the underlying simple motion is supported by the fact that it is unnecessary to describe the motion in terms of instantaneous orbits tangent to the actual trajectory (Efroimsky 2002, Newman & Efroimsky 2003), and at times there is an advantage in deviating from osculation in order to greatly simplify the resulting mathematics via gauge freedom (Efroimsky & Goldreich 2003, Slabinski 2003, Gurfil 2004).
From these two principles, (1) spirals as instantaneous orbits, and (2) controlled deviation from osculation, new planetary equations are derived for new non-osculating elements in the Frenet-Serret frame with the gauge function as a measure of non-osculation.
A population genetic interpretation of GWAS findings for human quantitative traits
Bullaughey, Kevin; Hudson, Richard R.; Sella, Guy
2018-01-01
Human genome-wide association studies (GWASs) are revealing the genetic architecture of anthropometric and biomedical traits, i.e., the frequencies and effect sizes of variants that contribute to heritable variation in a trait. To interpret these findings, we need to understand how genetic architecture is shaped by basic population genetics processes, notably by mutation, natural selection, and genetic drift. Because many quantitative traits are subject to stabilizing selection and because genetic variation that affects one trait often affects many others, we model the genetic architecture of a focal trait that arises under stabilizing selection in a multidimensional trait space. We solve the model for the phenotypic distribution and allelic dynamics at steady state and derive robust, closed-form solutions for summary statistics of the genetic architecture. Our results provide a simple interpretation for missing heritability and why it varies among traits. They predict that the distribution of variances contributed by loci identified in GWASs is well approximated by a simple functional form that depends on a single parameter: the expected contribution to genetic variance of a strongly selected site affecting the trait. We test this prediction against the results of GWASs for height and body mass index (BMI) and find that it fits the data well, allowing us to make inferences about the degree of pleiotropy and mutational target size for these traits. Our findings help to explain why the GWAS for height explains more of the heritable variance than the similarly sized GWAS for BMI and to predict the increase in explained heritability with study sample size. Considering the demographic history of European populations, in which these GWASs were performed, we further find that most of the associations they identified likely involve mutations that arose shortly before or during the Out-of-Africa bottleneck at sites with selection coefficients around s = 10⁻³. PMID:29547617
Bastien, Renaud; Meroz, Yasmine
2016-12-01
Nutation is an oscillatory movement that plants display during their development. Despite its ubiquity among plant movements, the relation between the observed movement and the underlying biological mechanisms remains unclear. Here we show that the kinematics of the full organ in 3D give a simple picture of plant nutation, in which the orientation of the curvature along the main axis of the organ aligns with the direction of maximal differential growth. Within this framework we reexamine the validity of widely used experimental measurements of the apical tip as markers of growth dynamics. We show that though this relation is correct under certain conditions, it does not generally hold, and is not sufficient to uncover the specific role of each mechanism. As an example we re-interpret previous experimental observations using our model.
NASA Astrophysics Data System (ADS)
Frassi, Chiara
2016-04-01
Three main tectono-metamorphic units are classically recognized along the Himalayan belt: the Lesser Himalayan (LH), the Greater Himalayan sequence (GHS), and the Tibetan Sedimentary sequence (TSS). The GHS may be interpreted as a low-viscosity tabular body of mid-crustal rocks extruded southward in Miocene times beneath the Tibetan plateau between two parallel, opposite-sense crustal-scale shear zones: the Main Central Thrust (MCT) at the base and the South Tibetan Detachment system at the top. The pre-/syn-shearing mineral assemblage documented within these crustal-scale shear zones indicates that the metamorphic grade increases toward the core of the GHS, producing an inverted thermal gradient at the base of the slab and a normal one at the top. In addition, thermal profiles estimated using both petrology-based and microstructure/fabric-based thermometers indicate that the metamorphic isograds are condensed. Although horizontal extension and vorticity estimates collected across the GHS could be strongly biased by the criteria used to define the map position of the MCT, published vorticity data document general shear flow (1 > Wk > 0) within the slab, with the pure-shear component of flow slightly predominant within the core of the GHS, whereas the simple-shear component seems to dominate at the top of the slab. The lower boundary of the GHS records a general shear flow with comparable contributions of simple and pure shearing. The associated crustal extrusion is compatible with a Couette-Poiseuille velocity flow profile, as assumed in crustal-scale channel flow-type models. In this study, quartz c-axis petrofabrics, vorticity, and deformation-temperature studies are integrated with microstructural and metamorphic studies to locate the MCT and to document the spatial distribution of ductile deformation patterns across the lower portion of the GHS exposed in the Chaudabise river valley in western Nepal.
My results indicate that the Main Central Thrust is located ˜5 km structurally below the previously mapped locations. Deformation temperature increases up structural section from ˜450°C to ˜650°C and overlaps with peak metamorphic temperature, indicating that the penetrative shearing responsible for the exhumation of the GHS occurred close to peak metamorphic conditions. I interpret the telescoping and the inversion of the paleo-isotherms at the base of the GHS as produced mainly by sub-simple shearing (Wm = 0.88-1) pervasively distributed through the lower portion of the GHS. The results are consistent with hybrid channel flow-type models in which the boundary between the lower and upper portions of the GHS, broadly corresponding to the tectono-metamorphic discontinuity recently documented in west Nepal, represents the limit between buried material, affected by dominant simple shearing, and exhumed material, affected by a general flow dominated by pure shearing. This interpretation is consistent with recent models suggesting the simultaneous operation of channel flow- and critical wedge-type processes at different structural depths.
Bardhan, Jaydeep P
2008-10-14
The importance of molecular electrostatic interactions in aqueous solution has motivated extensive research into physical models and numerical methods for their estimation. The computational costs associated with simulations that include many explicit water molecules have driven the development of implicit-solvent models, with generalized-Born (GB) models among the most popular of these. In this paper, we analyze a boundary-integral equation interpretation for the Coulomb-field approximation (CFA), which plays a central role in most GB models. This interpretation offers new insights into the nature of the CFA, which traditionally has been assessed using only a single point charge in the solute. The boundary-integral interpretation of the CFA allows the use of multiple point charges, or even continuous charge distributions, leading naturally to methods that eliminate the interpolation inaccuracies associated with the Still equation. This approach, which we call boundary-integral-based electrostatic estimation by the CFA (BIBEE/CFA), is most accurate when the molecular charge distribution generates a smooth normal displacement field at the solute-solvent boundary, and CFA-based GB methods perform similarly. Conversely, both methods are least accurate for charge distributions that give rise to rapidly varying or highly localized normal displacement fields. Supporting this analysis are comparisons of the reaction-potential matrices calculated using GB methods and boundary-element-method (BEM) simulations. An approximation similar to BIBEE/CFA exhibits complementary behavior, with superior accuracy for charge distributions that generate rapidly varying normal fields and poorer accuracy for distributions that produce smooth fields. This approximation, BIBEE by preconditioning (BIBEE/P), essentially generates initial guesses for preconditioned Krylov-subspace iterative BEMs. 
Thus, iterative refinement of the BIBEE/P results recovers the BEM solution; excellent agreement is obtained in only a few iterations. The boundary-integral-equation framework may also provide a means to derive rigorous results explaining how the empirical correction terms in many modern GB models significantly improve accuracy despite their simple analytical forms.
What can we learn from PISA?: Investigating PISA's approach to scientific literacy
NASA Astrophysics Data System (ADS)
Schwab, Cheryl Jean
This dissertation is an investigation of the relationship between the multidimensional conception of scientific literacy and its assessment. The Programme for International Student Assessment (PISA), developed under the auspices of the Organisation for Economic Co-operation and Development (OECD), offers a unique opportunity to evaluate the assessment of scientific literacy. PISA developed a continuum of performance for scientific literacy across three competencies (i.e., process, content, and situation). Foundational to the interpretation of the PISA science assessment is PISA's definition of scientific literacy, which I argue incorporates three themes drawn from history: (a) a scientific way of thinking, (b) the everyday relevance of science, and (c) scientific literacy for all students. Three coordinated studies were conducted to investigate the validity of the PISA science assessment and offer insight into the development of items to assess scientific literacy. Multidimensional models of the internal structure of the PISA 2003 science items were found not to reflect the complex character of PISA's definition of scientific literacy. Although the multidimensional models across the three competencies significantly decreased the G² statistic relative to the unidimensional model, high correlations between the dimensions suggest that the dimensions are similar. A cognitive analysis of student verbal responses to PISA science items revealed that students were using competencies of scientific literacy, but the competencies were not elicited by the PISA science items at the depth required by PISA's definition of scientific literacy. Although student responses contained only knowledge of scientific facts and simple scientific concepts, students were using more complex skills to interpret and communicate their responses. Finally, the investigation of different scoring approaches and item response models illustrated different ways to interpret student responses to assessment items. 
These analyses highlighted the complexities of students' responses to the PISA science items and the use of the ordered partition model to accommodate different but equal item responses. The results of the three investigations are used to discuss ways to improve the development and interpretation of PISA's science items.
Fuel cells and the theory of metals.
NASA Technical Reports Server (NTRS)
Bocciarelli, C. V.
1972-01-01
Metal theory is used to study the role of metal catalysts in electrocatalysis, with particular reference to alkaline hydrogen-oxygen fuel cells. Use is made of a simple model, analogous to that used to interpret field emission in vacuum. Theoretical values for all the quantities in the Tafel equation are obtained in terms of bulk properties of the metal catalysts (such as free electron densities and Fermi level). The reasons why some processes are reversible (H-electrodes) and some irreversible (O-electrodes) are identified. Selection rules for desirable properties of catalytic materials are established.
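The Tafel equation mentioned in this abstract relates electrode overpotential to current density; in its common logarithmic form, eta = b * log10(i / i0), where i0 is the exchange current density and b the Tafel slope. A minimal sketch of that relation (the parameter values below are illustrative, not those derived in the paper):

```python
import math

def tafel_overpotential(i, i0, slope_b):
    """Tafel relation: overpotential eta = b * log10(i / i0).
    i: current density, i0: exchange current density, slope_b: Tafel slope (V/decade)."""
    return slope_b * math.log10(i / i0)
```

For example, with an assumed slope of 0.12 V/decade, raising the current density four decades above i0 yields an overpotential of about 0.48 V.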
Entropy, Ergodicity, and Stem Cell Multipotency
NASA Astrophysics Data System (ADS)
Ridden, Sonya J.; Chang, Hannah H.; Zygalakis, Konstantinos C.; MacArthur, Ben D.
2015-11-01
Populations of mammalian stem cells commonly exhibit considerable cell-cell variability. However, the functional role of this diversity is unclear. Here, we analyze expression fluctuations of the stem cell surface marker Sca1 in mouse hematopoietic progenitor cells using a simple stochastic model and find that the observed dynamics naturally lie close to a critical state, thereby producing a diverse population that is able to respond rapidly to environmental changes. We propose an information-theoretic interpretation of these results that views cellular multipotency as an instance of maximum entropy statistical inference.
Saturated Fats and Cardiovascular Disease: Interpretations Not as Simple as They Once Were.
Bier, Dennis M
2016-09-09
Historically, the so-called "lipid hypothesis" has focused on the detrimental role of saturated fats per se in enhancing the risk of cardiovascular disease (CVD). Recently, a body of new information and systematic analyses of available data have questioned the simple interpretation of the relationship of dietary saturated fats, and of individual saturated fatty acids, to CVD risk. Thus, current assessments of the risks of dietary fat consumption that emphasize the confounding nature of the dietary macronutrients substituted for dietary saturated fats, and that give broader recognition to the effect of patterns of food intake as a whole, are the most productive approach to an overall healthy diet.
NASA Astrophysics Data System (ADS)
Bellerby, Tim
2014-05-01
Model Integration System (MIST) is an open-source environmental modelling programming language that directly incorporates data parallelism. The language is designed to enable straightforward programming structures, such as nested loops and conditional statements, to be directly translated into sequences of whole-array (or, more generally, whole-data-structure) operations. MIST thus enables the programmer to use well-understood constructs, directly relating to the mathematical structure of the model, without having to explicitly vectorize code or worry about details of parallelization. A range of common modelling operations is supported by dedicated language structures operating on cell neighbourhoods rather than individual cells (e.g., the 3x3 local neighbourhood needed to implement an averaging image filter can be simply accessed from within a loop traversing all image pixels). This facility hides details of inter-process communication behind more mathematically relevant descriptions of model dynamics. The MIST automatic vectorization/parallelization process serves both to distribute work among available nodes and to control storage requirements for intermediate expressions, enabling operations on very large domains for which memory availability may be an issue. MIST is designed to facilitate efficient interpreter-based implementations. A prototype open-source interpreter is available, coded in standard FORTRAN 95, with tools to rapidly integrate existing FORTRAN 77 or 95 code libraries. The language is formally specified and thus not limited to a FORTRAN implementation or to an interpreter-based approach. A MIST-to-FORTRAN compiler is under development, and volunteers are sought to create an ANSI C implementation. Parallel processing is currently implemented using OpenMP; however, the parallelization code is fully modularised and could be replaced with implementations using other libraries. GPU implementation is potentially possible.
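The 3x3 neighbourhood averaging that MIST expresses as a single whole-array construct corresponds, in conventional scalar code, to an explicit double loop over pixels. A pure-Python sketch of the operation being abstracted (not MIST syntax; border handling here is a simplifying assumption):

```python
def mean_filter_3x3(image):
    """Average each interior pixel over its 3x3 neighbourhood.
    Border pixels are left unchanged in this sketch."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]  # copy input; borders stay as-is
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            out[i][j] = sum(image[i + di][j + dj]
                            for di in (-1, 0, 1)
                            for dj in (-1, 0, 1)) / 9.0
    return out
```

In MIST, the equivalent would be a single neighbourhood expression; the language's vectorizer, not the programmer, decides how the loop is distributed.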
NASA Astrophysics Data System (ADS)
Pekşen, Ertan; Yas, Türker; Kıyak, Alper
2014-09-01
We examine the one-dimensional direct current method in anisotropic earth formations. We derive an analytic expression for a simple, two-layered anisotropic earth model. We also compute the response of a horizontally layered anisotropic earth with the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the parameters of a layered anisotropic earth model, such as horizontal and vertical resistivities and thickness. Particle swarm optimization is a nature-inspired meta-heuristic algorithm. The proposed method finds model parameters quite successfully on both synthetic and field data. However, adding 5% Gaussian noise to the synthetic data increases the ambiguity of the model parameter values. For this reason, the results should be checked with a number of statistical tests. In this study, we use probability density functions within a 95% confidence interval, the parameter variation at each iteration, and the frequency distribution of the model parameters to reduce the ambiguity. The results are promising, and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.
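Particle swarm optimization, as used above to estimate resistivities and thicknesses, maintains a swarm of candidate parameter vectors that are attracted toward each particle's own best position and the swarm's global best. A minimal generic sketch of the algorithm (illustrative inertia and acceleration constants; not the authors' implementation or their misfit function):

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer.
    f: objective to minimize; bounds: list of (lo, hi) per parameter,
    e.g. horizontal/vertical resistivities and layer thickness."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity update: inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # move, clamping to the search bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In an inversion context, f would be the data misfit between observed apparent resistivities and the forward response of the trial layered model.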
On the representability problem and the physical meaning of coarse-grained models
NASA Astrophysics Data System (ADS)
Wagner, Jacob W.; Dama, James F.; Durumeric, Aleksander E. P.; Voth, Gregory A.
2016-07-01
In coarse-grained (CG) models where certain fine-grained (FG, i.e., atomistic resolution) observables are not directly represented, one can nonetheless identify indirect CG observables that capture the FG observable's dependence on CG coordinates. Often, in these cases, it appears that a CG observable can be defined by analogy to an all-atom or FG observable, but the similarity is misleading and significantly undermines the interpretation of both bottom-up and top-down CG models. Such problems emerge especially clearly in the framework of systematic bottom-up CG modeling, where a direct and transparent correspondence between FG and CG variables establishes precise conditions for consistency between CG observables and underlying FG models. Here we present and investigate these representability challenges and illustrate them, within the bottom-up conceptual framework, for several simple analytically tractable polymer models. The examples give special focus to the observables of configurational internal energy, entropy, and pressure, which have been at the root of controversy in the CG literature, and also discuss observables that would seem to be entirely missing in the CG representation but can nonetheless be correlated with CG behavior. Though we investigate these problems in the framework of systematic coarse-graining, the lessons apply to top-down CG modeling as well, with crucial implications for simulation at constant pressure and surface tension and for the interpretation of structural and thermodynamic correlations in comparison to experiment.
The Phyre2 web portal for protein modelling, prediction and analysis
Kelley, Lawrence A; Mezulis, Stefans; Yates, Christopher M; Wass, Mark N; Sternberg, Michael JE
2017-01-01
Summary: Phyre2 is a suite of tools available on the web to predict and analyse protein structure, function and mutations. The focus of Phyre2 is to provide biologists with a simple and intuitive interface to state-of-the-art protein bioinformatics tools. Phyre2 replaces Phyre, the original version of the server, for which we previously published a protocol. In this updated protocol, we describe Phyre2, which uses advanced remote homology detection methods to build 3D models, predict ligand binding sites, and analyse the effect of amino-acid variants (e.g. nsSNPs) for a user's protein sequence. Users are guided through results by a simple interface at a level of detail determined by them. This protocol will guide a user from submitting a protein sequence to interpreting the secondary and tertiary structure of their models, their domain composition and model quality. A range of additional tools is described to find a protein structure in a genome, to submit a large number of sequences at once, and to automatically run weekly searches for proteins that are difficult to model. The server is available at http://www.sbg.bio.ic.ac.uk/phyre2. A typical structure prediction will be returned between 30 minutes and 2 hours after submission. PMID:25950237
Annual forest inventory estimates based on the moving average
Francis A. Roesch; James R. Steinman; Michael T. Thompson
2002-01-01
Three interpretations of the simple moving average estimator, as applied to the USDA Forest Service's annual forest inventory design, are presented. A corresponding approach to composite estimation over arbitrarily defined land areas and time intervals is given for each interpretation, under the assumption that the investigator is armed with only the spatial/...
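The simple moving average estimator discussed here combines the most recent annual panel estimates into a single value with equal weights. A minimal sketch of the estimator (the panel values in the usage note are hypothetical, and none of the paper's three interpretations or composite extensions are implemented):

```python
def moving_average(panel_estimates, window=5):
    """Equal-weight moving average over the most recent annual panels.
    If fewer panels than `window` are available, average what exists."""
    recent = panel_estimates[-window:]
    return sum(recent) / len(recent)
```

For example, five annual panel estimates of 10, 12, 14, 16, and 18 (in arbitrary units) yield a moving-average estimate of 14.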
Building a Smart Portal for Astronomy
NASA Astrophysics Data System (ADS)
Derriere, S.; Boch, T.
2011-07-01
The development of a portal for accessing astronomical resources is not an easy task. The ever-increasing complexity of the data products can result in very complex user interfaces, requiring considerable effort and learning from the user in order to perform searches. This is often a design choice in which the user must explicitly set many constraints while the portal search logic remains simple. We investigated a different approach, where the query interface is kept as simple as possible (ideally a single text field, as in a Google search) and the search logic is made much more complex in order to interpret the query in a relevant manner. We will present the implications of this approach in terms of interpretation and categorization of the query parameters (related to astronomical vocabularies), translation (mapping) of these concepts into the portal components' metadata, identification of query schemes and use cases matching the input parameters, and delivery of query results to the user.
NASA Astrophysics Data System (ADS)
Cheyney, S.; Fishwick, S.; Hill, I. A.; Linford, N. T.
2015-08-01
Despite the development of advanced processing and interpretation tools for magnetic data sets in the fields of mineral and hydrocarbon industries, these methods have not achieved similar levels of adoption for archaeological or very near surface surveys. Using a synthetic data set we demonstrate that certain methodologies and assumptions used to successfully invert more regional-scale data can lead to large discrepancies between the true and recovered depths when applied to archaeological-type anomalies. We propose variations to the current approach, analysing the choice of the depth-weighting function, mesh design and parameter constraints, to develop an appropriate technique for the 3-D inversion of archaeological-scale data sets. The results show a successful recovery of a synthetic scenario, as well as a case study of a Romano-Celtic temple in the UK. For the case study, the final susceptibility model is compared with two coincident ground penetrating radar surveys, showing a high correlation with the comparative depth slices. The new approach takes interpretation of archaeological data sets beyond a simple 2-D visual interpretation based on pattern recognition.
Cole, James C.; Harris, Anita G.; Wahl, Ronald R.
1997-01-01
This map displays interpreted structural and stratigraphic relations among the Paleozoic and older rocks of the Nevada Test Site region beneath the Miocene volcanic rocks and younger alluvium in the Yucca Flat and northern Frenchman Flat basins. These interpretations are based on a comprehensive examination and review of data for more than 77 drillholes that penetrated part of the pre-Tertiary basement beneath these post-middle Miocene structural basins. Biostratigraphic data from conodont fossils were newly obtained for 31 of these holes, and a thorough review of all prior microfossil paleontologic data is incorporated in the analysis. Subsurface relationships are interpreted in light of a revised regional geologic framework synthesized from detailed geologic mapping in the ranges surrounding Yucca Flat, from comprehensive stratigraphic studies in the region, and from additional detailed field studies on and around the Nevada Test Site. All available data indicate that the subsurface geology of Yucca Flat is considerably more complicated than previous interpretations have suggested. The western part of the basin, in particular, is underlain by relics of the eastward-vergent Belted Range thrust system that are folded back toward the west and thrust by local, west-vergent contractional structures of the CP thrust system. Field evidence from the ranges surrounding the north end of Yucca Flat indicates that two significant strike-slip faults track southward beneath the post-middle Miocene basin fill, but their subsurface traces cannot be closely defined from the available evidence. In contrast, the eastern part of the Yucca Flat basin is interpreted to be underlain by a fairly simple north-trending, broad syncline in the pre-Tertiary units.
Far fewer data are available for the northern Frenchman Flat basin, but regional analysis indicates the pre-Tertiary structure there should also be relatively simple and not affected by thrusting. This new interpretation has implications for ground water flow through pre-Tertiary rocks beneath the Yucca Flat and northern Frenchman Flat areas, and has consequences for ground water modeling and model validation. Our data indicate that the Mississippian Chainman Shale is not a laterally extensive confining unit in the western part of the basin because it is folded back onto itself by the convergent structures of the Belted Range and CP thrust systems. Early and Middle Paleozoic limestone and dolomite are present beneath most of both basins and, regardless of structural complications, are interpreted to form a laterally continuous and extensive carbonate aquifer. The structural culmination that marks the French Peak accommodation zone along the topographic divide between the two basins provides a lateral pathway through highly fractured rock between the volcanic aquifers of Yucca Flat and the regional carbonate aquifer. This pathway may accelerate the migration of ground-water contaminants introduced by underground nuclear testing toward discharge areas beyond the Nevada Test Site boundaries. Predictive three-dimensional models of hydrostratigraphic units and ground-water flow in the pre-Tertiary rocks of subsurface Yucca Flat are likely to be unrealistic due to the extreme structural complexities. The interpretation of hydrologic and geochemical data obtained from monitoring wells will be difficult to extrapolate through the flow system until more is known about the continuity of hydrostratigraphic units.
Mira variables: An informal review
NASA Technical Reports Server (NTRS)
Wing, R. F.
1980-01-01
The structure of the Mira variables is discussed with particular emphasis on the extent of their observable atmospheres, the various methods for measuring the sizes of these atmospheres, and the manner in which the size changes through the cycle. The results obtained by direct, photometric, and spectroscopic methods are compared, and the problems of interpretation are addressed. Also described is a simple model for the atmospheric structure and motions of Miras based on recent observations of the doubling of infrared molecular lines. This model, consisting of two atmospheric layers plus a circumstellar shell, provides a physically plausible picture of the atmosphere which is consistent with the photometrically measured magnitude and temperature variations as well as the spectroscopic data.
Inelastic response of silicon to shock compression.
Higginbotham, A; Stubley, P G; Comley, A J; Eggert, J H; Foster, J M; Kalantar, D H; McGonegle, D; Patel, S; Peacock, L J; Rothman, S D; Smith, R F; Suggit, M J; Wark, J S
2016-04-13
The elastic and inelastic response of [001]-oriented silicon to laser compression has been a topic of considerable discussion for well over a decade, yet there has been little progress in understanding the basic behaviour of this apparently simple material. We present experimental x-ray diffraction data showing complex elastic strain profiles in laser-compressed samples on nanosecond timescales. We also present molecular dynamics and elasticity code modelling which suggests that a pressure-induced phase transition is the cause of the previously reported 'anomalous' elastic waves. Moreover, this interpretation allows for measurement of the kinetic timescales for the transition. This model is also discussed in the wider context of reported deformation of silicon under rapid compression in the literature.
Drift and observations in cosmic-ray modulation, 1
NASA Technical Reports Server (NTRS)
Potgieter, M. S.
1985-01-01
It is illustrated that a relatively simple drift model can, in contrast with no-drift models, simultaneously fit proton and electron spectra observed in 1965-66 and 1977, using a single set of modulation parameters except for a change in the IMF polarity. This result, together with the observation of Evenson and Meyer that electrons recovered more rapidly than protons after 1980, in contrast with what Burger and Swanenburg observed in 1968-72, is interpreted as a charge-sign-dependent effect due to the occurrence of drift in cosmic-ray modulation. The same set of parameters produces a shift in the phase and amplitude of the diurnal anisotropy vector, consistent with observations in 1969-71 and 1980-81.
Radiating dipoles in photonic crystals
Busch; Vats; John; Sanders
2000-09-01
The radiation dynamics of a dipole antenna embedded in a photonic crystal are modeled by an initially excited harmonic oscillator coupled to a non-Markovian bath of harmonic oscillators representing the colored electromagnetic vacuum within the crystal. Realistic coupling constants based on the natural modes of the photonic crystal, i.e., Bloch waves and their associated dispersion relation, are derived. For simple model systems, well-known results such as decay times and emission spectra are reproduced. This approach enables direct incorporation of realistic band structure computations into studies of radiative emission from atoms and molecules within photonic crystals. We therefore provide a predictive and interpretative tool for experiments in both the microwave and optical regimes.
Gönci, Balázs; Németh, Valéria; Balogh, Emeric; Szabó, Bálint; Dénes, Ádám; Környei, Zsuzsanna; Vicsek, Tamás
2010-12-20
Because of its relevance to everyday life, the spreading of viral infections has been of central interest in a variety of scientific communities involved in fighting, preventing and theoretically interpreting epidemic processes. Recent large scale observations have resulted in major discoveries concerning the overall features of the spreading process in systems with highly mobile susceptible units, but virtually no data are available about observations of infection spreading for a very large number of immobile units. Here we present the first detailed quantitative documentation of percolation-type viral epidemics in a highly reproducible in vitro system consisting of tens of thousands of virtually motionless cells. We use a confluent astroglial monolayer in a Petri dish and induce productive infection in a limited number of cells with a genetically modified herpesvirus strain. This approach allows extremely high-resolution tracking of the spatio-temporal development of the epidemic. We show that a simple model is capable of reproducing the basic features of our observations, i.e., the observed behaviour is likely to be applicable to many different kinds of systems. Statistical physics inspired approaches to our data, such as the fractal dimension of the infected clusters as well as their size distribution, seem to fit into a percolation-theory-based interpretation. We suggest that our observations may be used to model epidemics in more complex systems, which are difficult to study in isolation. PMID:21187920
Interpreting experimental data on egg production--applications of dynamic differential equations.
France, J; Lopez, S; Kebreab, E; Dijkstra, J
2013-09-01
This contribution focuses on applying mathematical models based on systems of ordinary first-order differential equations to synthesize and interpret data from egg production experiments. Models based on linear systems of differential equations are contrasted with those based on nonlinear systems. Regression equations arising from analytical solutions to linear compartmental schemes are considered as candidate functions for describing egg production curves, together with aspects of parameter estimation. Extant candidate functions are reviewed, a role for growth functions such as the Gompertz equation suggested, and a function based on a simple new model outlined. Structurally, the new model comprises a single pool with an inflow and an outflow. Compartmental simulation models based on nonlinear systems of differential equations, and thus requiring numerical solution, are next discussed, and aspects of parameter estimation considered. This type of model is illustrated in relation to development and evaluation of a dynamic model of calcium and phosphorus flows in layers. The model consists of 8 state variables representing calcium and phosphorus pools in the crop, stomachs, plasma, and bone. The flow equations are described by Michaelis-Menten or mass action forms. Experiments that measure Ca and P uptake in layers fed different calcium concentrations during shell-forming days are used to evaluate the model. In addition to providing a useful management tool, such a simulation model also provides a means to evaluate feeding strategies aimed at reducing excretion of potential pollutants in poultry manure to the environment.
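The single-pool scheme described above can be sketched in a few lines. This is a hypothetical minimal version, not the authors' fitted model: the pool fills at a constant inflow and drains in proportion to its contents, so the output (egg production) rate rises toward a plateau along the analytic curve y(t) = inflow*(1 - exp(-k*t)).

```python
# Minimal single-pool sketch (hypothetical parameters, not fitted to data):
# the pool Q obeys dQ/dt = inflow - k*Q, and the egg production rate is k*Q.

def single_pool_curve(inflow=1.0, k=0.1, dt=0.1, steps=1000):
    q = 0.0                # pool contents, starting empty
    rates = []
    for _ in range(steps):
        q += (inflow - k * q) * dt   # Euler step for dQ/dt = inflow - k*Q
        rates.append(k * q)          # outflow = egg production rate
    return rates

rates = single_pool_curve()
# the curve rises monotonically and saturates at the inflow rate
```

Nonlinear compartmental schemes like the calcium/phosphorus model described next differ only in that the flow terms become Michaelis-Menten or mass-action forms and the system must be integrated numerically in the same way.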
Introducing Multisensor Satellite Radiance-Based Evaluation for Regional Earth System Modeling
NASA Technical Reports Server (NTRS)
Matsui, T.; Santanello, J.; Shi, J. J.; Tao, W.-K.; Wu, D.; Peters-Lidard, C.; Kemp, E.; Chin, M.; Starr, D.; Sekiguchi, M.;
2014-01-01
Earth System modeling has become more complex, and its evaluation using satellite data has also become more difficult due to model and data diversity. Therefore, the fundamental methodology of using satellite direct measurements with instrumental simulators should be addressed especially for modeling community members lacking a solid background of radiative transfer and scattering theory. This manuscript introduces principles of multisatellite, multisensor radiance-based evaluation methods for a fully coupled regional Earth System model: NASA-Unified Weather Research and Forecasting (NU-WRF) model. We use a NU-WRF case study simulation over West Africa as an example of evaluating aerosol-cloud-precipitation-land processes with various satellite observations. NU-WRF-simulated geophysical parameters are converted to the satellite-observable raw radiance and backscatter under nearly consistent physics assumptions via the multisensor satellite simulator, the Goddard Satellite Data Simulator Unit. We present varied examples of simple yet robust methods that characterize forecast errors and model physics biases through the spatial and statistical interpretation of various satellite raw signals: infrared brightness temperature (Tb) for surface skin temperature and cloud top temperature, microwave Tb for precipitation ice and surface flooding, and radar and lidar backscatter for aerosol-cloud profiling simultaneously. Because raw satellite signals integrate many sources of geophysical information, we demonstrate user-defined thresholds and a simple statistical process to facilitate evaluations, including the infrared-microwave-based cloud types and lidar/radar-based profile classifications.
Thermodynamic Entropy and the Accessible States of Some Simple Systems
ERIC Educational Resources Information Center
Sands, David
2008-01-01
Comparison of the thermodynamic entropy with Boltzmann's principle shows that under conditions of constant volume the total number of arrangements in a simple thermodynamic system with temperature-independent constant-volume heat capacity, C, is T^(C/k). A physical interpretation of this function is given for three such systems: an…
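The counting claim can be recovered by a short standard argument, sketched here under the abstract's own assumptions (constant volume, temperature-independent C):

```latex
dS = \frac{C\,dT}{T}
\;\Longrightarrow\;
S = C\ln T + \text{const},
\qquad
\Omega = e^{S/k} \propto e^{(C/k)\ln T} = T^{C/k}.
```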
A Novel Method for Discovering Fuzzy Sequential Patterns Using the Simple Fuzzy Partition Method.
ERIC Educational Resources Information Center
Chen, Ruey-Shun; Hu, Yi-Chung
2003-01-01
Discusses sequential patterns, data mining, knowledge acquisition, and fuzzy sequential patterns described by natural language. Proposes a fuzzy data mining technique to discover fuzzy sequential patterns by using the simple partition method which allows the linguistic interpretation of each fuzzy set to be easily obtained. (Author/LRW)
ERIC Educational Resources Information Center
Ginsberg, Edw S.
2018-01-01
The compatibility of the Newtonian formulation of mechanical energy and the transformation equations of Galilean relativity is demonstrated for three simple examples of motion treated in most introductory physics courses (free fall, a frictionless inclined plane, and a mass/spring system). Only elementary concepts and mathematics, accessible to…
A Multilevel Multiset Time-Series Model for Describing Complex Developmental Processes
Ma, Xin; Shen, Jianping
2017-01-01
The authors sought to develop an analytical platform where multiple sets of time series can be examined simultaneously. This multivariate platform capable of testing interaction effects among multiple sets of time series can be very useful in empirical research. The authors demonstrated that the multilevel framework can readily accommodate this analytical capacity. Given their intention to use the multilevel multiset time-series model to pursue complicated research purposes, their resulting model is relatively simple to specify, to run, and to interpret. These advantages make the adoption of their model relatively effortless as long as researchers have the basic knowledge and skills in working with multilevel growth modeling. With multiple potential extensions of their model, the establishment of this analytical platform for analysis of multiple sets of time series can inspire researchers to pursue far more advanced research designs to address complex developmental processes in reality. PMID:29881094
A Simple Pythagorean Interpretation of E^2 = p^2c^2 + (mc^2)^2
NASA Astrophysics Data System (ADS)
Tobar, J. A.; Vargas, E. L.; Andrianarijaona, V. M.
2015-03-01
We consider the relationship between the relativistic energy, the momentum, and the rest energy, E^2 = p^2c^2 + (mc^2)^2, and use geometrical means to analyze each individual portion in a spatial setting. The equation suggests that pc and mc^2 could be thought of as the two axes of a plane. According to de Broglie's hypothesis, λ = h/p, the pc-axis is connected to the wave properties of a moving object, and subsequently the mc^2-axis is connected to the particle properties. Consequently, these two axes could represent the wave and particle properties of the moving object. An overview of possible models and meaningful interpretations will be presented. The authors wish to give special thanks to the Pacific Union College Student Senate in Angwin, California, for their financial support.
The roar of Yasur: Handheld audio recorder monitoring of Vanuatu volcanic vent activity
NASA Astrophysics Data System (ADS)
Lorenz, Ralph D.; Turtle, Elizabeth P.; Howell, Robert; Radebaugh, Jani; Lopes, Rosaly M. C.
2016-08-01
We describe how near-field audio recording using a pocket digital sound recorder can usefully document volcanic activity, demonstrating the approach at Yasur, Vanuatu, in May 2014. Prominent emissions peak at 263 Hz, interpreted as an organ-pipe mode. High-pass filtering was found to usefully discriminate volcano vent noise from wind noise, and autocorrelation of the high-pass acoustic power reveals prominent exhalation intervals of 2.5, 4, and 8 s, with a number of larger explosive events at 200 s intervals. We suggest that this compact and inexpensive audio instrumentation can usefully supplement other field monitoring such as seismic or infrasound measurements. A simple estimate of acoustic power interpreted with a dipole jet-noise model yielded vent velocities too low to be compatible with pyroclast emission, suggesting difficulties with this approach at audio frequencies (perhaps due to acoustic absorption by volcanic gases).
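The interval-finding step can be reconstructed schematically. This is a hedged sketch on a synthetic signal with illustrative parameters, not the Yasur field data: autocorrelate the (high-pass filtered) acoustic power series and read the dominant repeat interval off the first strong non-zero-lag peak.

```python
# Autocorrelation of a synthetic acoustic power envelope with a 2.5 s burst
# period; real recordings would first be high-pass filtered and squared.

def autocorr(x):
    n = len(x)
    m = sum(x) / n
    xc = [v - m for v in x]
    var = sum(v * v for v in xc)
    return [sum(xc[i] * xc[i + lag] for i in range(n - lag)) / var
            for lag in range(n // 2)]

fs = 10.0        # samples per second of the power envelope
period_s = 2.5   # simulated exhalation interval
power = [1.0 if i % int(period_s * fs) == 0 else 0.0 for i in range(500)]

ac = autocorr(power)
lag = max(range(5, len(ac)), key=lambda k: ac[k])  # skip the zero-lag peak
interval = lag / fs                                # recovered interval, seconds
```

Secondary peaks at multiples of the fundamental lag mirror the family of intervals (2.5, 4, 8 s) reported in the abstract when several periodicities coexist.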
Origins of the temperature dependence of hammerhead ribozyme catalysis.
Peracchi, A
1999-01-01
The difficulties in interpreting the temperature dependence of protein enzyme reactions are well recognized. Here, hammerhead ribozyme cleavage was investigated under single-turnover conditions between 0 and 60 degrees C as a model for RNA-catalyzed reactions. Under the adopted conditions, the chemical step appears to be rate-limiting. However, the observed rate of cleavage is affected by pre-catalytic equilibria involving deprotonation of an essential group and binding of at least one low-affinity Mg2+ ion. Thus, the apparent entropy and enthalpy of activation include contributions from the temperature dependence of these equilibria, precluding a simple physical interpretation of the observed activation parameters. Similar pre-catalytic equilibria likely contribute to the observed activation parameters for ribozyme reactions in general. The Arrhenius plot for the hammerhead reaction is substantially curved over the temperature range considered, which suggests the occurrence of a conformational change of the ribozyme ground state around physiological temperatures. PMID:10390528
Network-induced oscillatory behavior in material flow networks and irregular business cycles
NASA Astrophysics Data System (ADS)
Helbing, Dirk; Lämmer, Stefen; Witt, Ulrich; Brenner, Thomas
2004-11-01
Network theory is rapidly changing our understanding of complex systems, but the relevance of topological features for the dynamic behavior of metabolic networks, food webs, production systems, information networks, or cascade failures of power grids remains to be explored. Based on a simple model of supply networks, we offer an interpretation of instabilities and oscillations observed in biological, ecological, economic, and engineering systems. We find that most supply networks display damped oscillations, even when their units—and linear chains of these units—behave in a nonoscillatory way. Moreover, networks of damped oscillators tend to produce growing oscillations. This surprising behavior offers, for example, a different interpretation of business cycles and of oscillating or pulsating processes. The network structure of material flows itself turns out to be a source of instability, and cyclical variations are an inherent feature of decentralized adjustments.
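The damped-oscillator character of a single supply unit can be checked with a toy linearization. This is a hedged sketch, not the paper's full network model: inventory N changes with production minus demand, production Q adapts toward a target that falls with inventory, and near the steady state the Jacobian of (N, Q) has complex eigenvalues with negative real part, i.e. damped oscillations.

```python
# Eigenvalues of a linearised single supply unit:
#   dN/dt = Q,   dQ/dt = -(k*N + c*Q)/tau
# (illustrative parameters; complex eigenvalues with Re < 0 mean the unit
# rings down rather than diverging).

import cmath

k, c, tau = 1.0, 0.5, 1.0
# Jacobian of the linearised (N, Q) dynamics
a11, a12 = 0.0, 1.0
a21, a22 = -k / tau, -c / tau

tr = a11 + a22
det = a11 * a22 - a12 * a21
disc = tr * tr - 4.0 * det
lam1 = (tr + cmath.sqrt(disc)) / 2.0
lam2 = (tr - cmath.sqrt(disc)) / 2.0
# Re(lam) < 0 with Im(lam) != 0: damped oscillation
```

The paper's point is that coupling many such individually damped units in a network can push eigenvalues across the imaginary axis, turning damped ringing into growing oscillations.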
Diffusion processes in tumors: A nuclear medicine approach
NASA Astrophysics Data System (ADS)
Amaya, Helman
2016-07-01
The number of counts used in nuclear medicine imaging techniques only provides physical information about the disintegration of the nuclei present in the radiotracer molecules that were taken up in a particular anatomical region; it is not truly metabolic information. For this reason a mathematical method was used to find a correlation between the number of counts and 18F-FDG mass concentration. This correlation allows a better interpretation of the results obtained in the study of diffusive processes in an agar phantom, and based on it, an image from the PETCETIX DICOM sample image set from the OsiriX-viewer software was processed. PET-CT gradient magnitude and Laplacian images could show direct information on diffusive processes for radiopharmaceuticals that enter cells by simple diffusion. In the case of the radiopharmaceutical 18F-FDG it is necessary to include pharmacokinetic models to make a correct interpretation of the gradient magnitude and Laplacian-of-counts images.
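The two image operations named above can be sketched with plain central differences (an assumption on our part; the study's actual processing pipeline is not specified here): on a 2-D array of counts, the gradient magnitude highlights concentration fronts while the 5-point Laplacian highlights local sources and sinks of the diffusing tracer.

```python
# Central-difference gradient magnitude and 5-point Laplacian on a 2-D
# counts array (border pixels left at zero for simplicity).

def gradient_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

def laplacian(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (img[y][x + 1] + img[y][x - 1] +
                         img[y + 1][x] + img[y - 1][x] - 4.0 * img[y][x])
    return out

counts = [[float(x * x) for x in range(5)] for _ in range(5)]  # quadratic ramp
g = gradient_magnitude(counts)
lap = laplacian(counts)
```

For Fickian diffusion the Laplacian of concentration is proportional to its time derivative, which is what makes these two maps diagnostic for tracers entering cells by simple diffusion.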
A scene-analysis approach to remote sensing. [San Francisco, California
NASA Technical Reports Server (NTRS)
Tenenbaum, J. M. (Principal Investigator); Fischler, M. A.; Wolf, H. C.
1978-01-01
The author has identified the following significant results. Geometric correspondence between a sensed image and a symbolic map is established in an initial stage of processing by adjusting parameters of a sensor model so that the image features predicted from the map optimally match corresponding features extracted from the sensed image. Information in the map is then used to constrain where to look in an image, what to look for, and how to interpret what is seen. For simple monitoring tasks involving multispectral classification, these constraints significantly reduce computation, simplify interpretation, and improve the utility of the resulting information. Previously intractable tasks requiring spatial and textural analysis may become straightforward in the context established by the map knowledge. The use of map-guided image analysis in monitoring the volume of water in a reservoir, the number of boxcars in a railyard, and the number of ships in a harbor is demonstrated.
Self organising hypothesis networks: a new approach for representing and structuring SAR knowledge
2014-01-01
Background Combining different sources of knowledge to build improved structure activity relationship models is not easy owing to the variety of knowledge formats and the absence of a common framework to interoperate between learning techniques. Most of the current approaches address this problem by using consensus models that operate at the prediction level. We explore the possibility of directly combining these sources at the knowledge level, with the aim of harvesting potentially increased synergy at an earlier stage. Our goal is to design a general methodology to facilitate knowledge discovery and produce accurate and interpretable models. Results To combine models at the knowledge level, we propose to decouple the learning phase from the knowledge application phase using a pivot representation (lingua franca) based on the concept of hypothesis. A hypothesis is a simple and interpretable knowledge unit. Regardless of its origin, knowledge is broken down into a collection of hypotheses. These hypotheses are subsequently organised into a hierarchical network. This unification makes it possible to combine different sources of knowledge within a common formalised framework. The approach allows us to create a synergistic system between different forms of knowledge, and new algorithms can be applied to leverage this unified model. This first article focuses on the general principle of the Self Organising Hypothesis Network (SOHN) approach in the context of binary classification problems, along with an illustrative application to the prediction of mutagenicity. Conclusion It is possible to represent knowledge in the unified form of a hypothesis network, allowing interpretable predictions with performance comparable to mainstream machine learning techniques. This new approach offers the potential to combine knowledge from different sources into a common framework in which high-level reasoning and meta-learning can be applied; these latter perspectives will be explored in future work.
PMID:24959206
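The hierarchical organisation of hypotheses can be illustrated schematically. This is invented toy data, not the SOHN algorithm itself: each hypothesis is an interpretable unit characterised by the set of examples it covers, and hypotheses are ordered by coverage subsumption, more general ones sitting above more specific ones.

```python
# Toy hypothesis network: hypothesis -> set of covered example ids
# (hypothetical names and coverages, for illustration only).

hypotheses = {
    "aromatic":       {1, 2, 3, 4},
    "nitro":          {2, 3},
    "nitro_aromatic": {3},
}

def parents(name):
    """More general hypotheses: those whose coverage strictly contains
    this hypothesis' coverage."""
    cov = hypotheses[name]
    return sorted(h for h, c in hypotheses.items() if cov < c)
```

Here `parents("nitro_aromatic")` returns both broader hypotheses, so the specific unit hangs below the general ones in the network, which is the kind of structure that supports interpretable predictions.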
Complexity of life via collective mind
NASA Technical Reports Server (NTRS)
Zak, Michail
2004-01-01
The collective mind is introduced as a set of simple intelligent units (say, neurons or interacting agents) which can communicate by exchange of information without explicit global control. Incomplete information is compensated by a sequence of random guesses symmetrically distributed around expectations with prescribed variances. Both the expectations and variances are invariants characterizing the whole class of agents. These invariants are stored as parameters of the collective mind, while they contribute to the dynamical formalism of the agents' evolution and, in particular, to the reflective chains of their nested abstract images of the selves and non-selves. The proposed model consists of a system of stochastic differential equations in the Langevin form representing the motor dynamics, and the corresponding Fokker-Planck equation representing the mental dynamics (motor dynamics describes the motion in physical space, while mental dynamics simulates the evolution of initial errors in terms of the probability density). The main departure of this model from Newtonian and statistical physics is a feedback from the mental to the motor dynamics which makes the Fokker-Planck equation nonlinear. Interpretations of this model from mathematical and physical viewpoints, as well as possible interpretations from biological, psychological, and social viewpoints, are discussed. The model is illustrated by the dynamics of a dialog.
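The mental-to-motor feedback can be illustrated with a deliberately simplified, hypothetical ensemble simulation (not Zak's actual equations): each agent takes a Langevin step whose drift pulls it toward the current ensemble expectation, so a statistic of the evolving probability density ("mental" dynamics) feeds back into the individual trajectories ("motor" dynamics).

```python
# Mean-field Langevin toy model: drift toward the ensemble expectation
# (illustrative parameters; Euler-Maruyama integration).

import random

random.seed(0)
agents = [random.gauss(0.0, 1.0) for _ in range(2000)]
dt, sigma, gain = 0.01, 0.2, 1.0

for _ in range(500):
    mean = sum(agents) / len(agents)        # ensemble expectation (the feedback)
    agents = [x + gain * (mean - x) * dt
              + sigma * random.gauss(0.0, dt ** 0.5)
              for x in agents]

mu = sum(agents) / len(agents)
spread = (sum((x - mu) ** 2 for x in agents) / len(agents)) ** 0.5
# the ensemble contracts toward its own expectation instead of dispersing
```

Because the drift depends on the density's own mean, the corresponding Fokker-Planck equation is nonlinear, which is the structural point of the abstract.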
NASA Astrophysics Data System (ADS)
Falvo, Cyril
2018-02-01
The theory of linear and non-linear infrared response of vibrational Holstein polarons in one-dimensional lattices is presented in order to identify the spectral signatures of self-trapping phenomena. Using a canonical transformation, the optical response is computed from the small-polaron point of view, which is valid in the anti-adiabatic limit. Two types of phonon baths are considered, optical phonons and acoustical phonons, and simple expressions are derived for the infrared response. It is shown that for the case of optical phonons, the linear response can directly probe the polaron density of states. The model is used to interpret the experimental spectrum of crystalline acetanilide in the C=O range. For the case of acoustical phonons, it is shown that two bound states can be observed in the two-dimensional infrared spectrum at low temperature. At high temperature, analysis of the time dependence of the two-dimensional infrared spectrum indicates that bath-mediated correlations slow down spectral diffusion. The model is used to interpret the experimental linear spectroscopy of model α-helix and β-sheet polypeptides. This work shows that the Davydov Hamiltonian cannot explain the observations in the NH stretching range.
Charge and energy dependence of the residence time of cosmic ray nuclei below 15 GeV/nucleon
NASA Technical Reports Server (NTRS)
Soutoul, A.; Engelmann, J. J.; Ferrando, P.; Koch-Miramond, L.; Masse, P.; Webber, W. R.
1985-01-01
The relative abundance of nuclear species measured in cosmic rays at Earth has often been interpreted with the simple leaky box model. For this model to be consistent, an essential requirement is that the escape length does not depend on the nuclear species. The discrepancy between escape length values derived from iron secondaries and from the B/C ratio was identified by Garcia-Munoz and his co-workers using a large amount of experimental data. Ormes and Protheroe found a similar trend in the HEAO data, although they questioned its significance against uncertainties. They also showed that the change in the B/C ratio values implies a decrease of the residence time of cosmic rays at low energies, in conflict with the diffusive-convective picture. These conclusions crucially depend on the partial cross section values and their uncertainties. Recently, new accurate cross sections of key importance for propagation calculations have been measured. Their statistical uncertainties are often better than 4% and their values significantly different from those previously accepted. Here, these new cross sections are used to compare the observed B/(C+O) and (Sc to Cr)/Fe ratios to those predicted with the simple leaky box model.
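The role of the escape length in this argument can be sketched with the textbook steady-state form of the simple leaky box model (illustrative grammages below, not the measured cross sections): a secondary species is produced by spallation of primaries and lost by escape and interaction, giving a secondary-to-primary ratio that rises with escape length, which is why B/C pins down lambda_esc.

```python
# Steady-state leaky box secondary/primary ratio:
#   S/P = (lam_esc / lam_prod) / (1 + lam_esc / lam_sec)
# lam_esc: escape length; lam_prod: spallation production path length;
# lam_sec: interaction length of the secondary (all in g/cm^2, illustrative).

def sec_to_prim(lam_esc, lam_prod, lam_sec):
    return (lam_esc / lam_prod) / (1.0 + lam_esc / lam_sec)

ratio_low = sec_to_prim(lam_esc=5.0, lam_prod=50.0, lam_sec=10.0)
ratio_high = sec_to_prim(lam_esc=10.0, lam_prod=50.0, lam_sec=10.0)
```

Since lam_prod is set by the partial cross sections, a few-percent shift in those cross sections translates directly into a shifted inferred escape length, which is the sensitivity the abstract emphasises.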
AFOSR Contractors Meeting in Propulsion Held in Atlantic City, New Jersey on 14 - 18 June 1993
1994-04-20
…number is found to be much weaker, but not universal. Previous results in compressible shear layers are difficult to interpret as a consequence of … turbulence. Reference 9 provides a new interpretation of measured spectra of reacting species in turbulence. REFERENCES: 1. Bilger, R.W., Phys. Fluids A … A simple interpretation of Eq. (2) is that θ(n)(t) is drawn to its two neighbors at a rate proportional to its separation from them. Note that as…
NASA Technical Reports Server (NTRS)
Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.
1981-01-01
The initial phase of a program to determine the best interpretation strategy and sensor configuration for a radar remote sensing system for geologic applications is discussed. In this phase, terrain modeling and radar image simulation were used to perform parametric sensitivity studies. A relatively simple computer-generated terrain model is presented, and the data base, backscatter file, and transfer function for digital image simulation are described. Sets of images are presented that simulate the results obtained with an X-band radar from an altitude of 800 km and at three different terrain-illumination angles. The simulations include power maps, slant-range images, ground-range images, and ground-range images with statistical noise incorporated. It is concluded that digital image simulation and computer modeling provide cost-effective methods for evaluating terrain variations and sensor parameter changes, for predicting results, and for defining optimum sensor parameters.
Deformations of the Almheiri-Polchinski model
NASA Astrophysics Data System (ADS)
Kyono, Hideki; Okumura, Suguru; Yoshida, Kentaroh
2017-03-01
We study deformations of the Almheiri-Polchinski (AP) model by employing the Yang-Baxter deformation technique. The general deformed AdS2 metric becomes a solution of a deformed AP model. In particular, the dilaton potential is deformed from a simple quadratic form to a hyperbolic function-type potential similarly to integrable deformations. A specific solution is a deformed black hole solution. Because the deformation makes the spacetime structure around the boundary change drastically and a new naked singularity appears, the holographic interpretation is far from trivial. The Hawking temperature is the same as the undeformed case but the Bekenstein-Hawking entropy is modified due to the deformation. This entropy can also be reproduced by evaluating the renormalized stress tensor with an appropriate counter-term on the regularized screen close to the singularity.
NASA Technical Reports Server (NTRS)
Abbey, Craig K.; Eckstein, Miguel P.
2002-01-01
We consider estimation and statistical hypothesis testing on classification images obtained from the two-alternative forced-choice experimental paradigm. We begin with a probabilistic model of task performance for simple forced-choice detection and discrimination tasks. Particular attention is paid to general linear filter models because these models lead to a direct interpretation of the classification image as an estimate of the filter weights. We then describe an estimation procedure for obtaining classification images from observer data. A number of statistical tests are presented for testing various hypotheses from classification images based on some more compact set of features derived from them. As an example of how the methods we describe can be used, we present a case study investigating detection of a Gaussian bump profile.
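The linear-filter account described in this abstract lends itself to a small simulation. The sketch below is illustrative only (the filter, trial counts, and the chosen-minus-unchosen estimator are assumptions, not taken from the paper): it generates two-alternative forced-choice trials for a linear observer and recovers the filter weights as a classification image.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_pix = 20000, 16

# Hypothetical linear observer: a fixed unit-norm filter over pixels.
true_filter = rng.normal(size=n_pix)
true_filter /= np.linalg.norm(true_filter)

# Two noise fields per 2AFC trial; the signal is added to interval 0.
noise = rng.normal(size=(n_trials, 2, n_pix))
signal = 0.5 * true_filter
resp = noise @ true_filter            # (n_trials, 2) internal responses
resp[:, 0] += signal @ true_filter    # signal boosts interval 0
correct = resp[:, 0] > resp[:, 1]     # observer picks the larger response

# Classification image: mean chosen-interval noise minus unchosen noise.
chosen = np.where(correct[:, None], noise[:, 0], noise[:, 1])
unchosen = np.where(correct[:, None], noise[:, 1], noise[:, 0])
cimg = chosen.mean(axis=0) - unchosen.mean(axis=0)

corr = np.corrcoef(cimg, true_filter)[0, 1]
```

With enough trials the estimated image is nearly proportional to the true filter, which is exactly the "classification image as an estimate of the filter weights" interpretation the abstract describes.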
Sequential Inverse Problems Bayesian Principles and the Logistic Map Example
NASA Astrophysics Data System (ADS)
Duan, Lian; Farmer, Chris L.; Moroz, Irene M.
2010-09-01
Bayesian statistics provides a general framework for solving inverse problems, but is not without interpretation and implementation problems. This paper discusses difficulties arising from the fact that forward models are always in error to some extent. Using a simple example based on the one-dimensional logistic map, we argue that, when implementation problems are minimal, the Bayesian framework is quite adequate. In this paper the Bayesian Filter is shown to be able to recover excellent state estimates in the perfect model scenario (PMS) and to distinguish the PMS from the imperfect model scenario (IMS). Through a quantitative comparison of the way in which the observations are assimilated in both the PMS and the IMS scenarios, we suggest that one can, sometimes, measure the degree of imperfection.
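The perfect-model scenario discussed above can be reproduced in miniature with a bootstrap particle filter on the logistic map. This is a hedged sketch, not the authors' implementation; the particle count, jitter, and noise level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def logistic(x, r=4.0):
    # One-dimensional logistic map; r = 4 is in the chaotic regime.
    return r * x * (1.0 - x)

# "True" trajectory plus noisy observations (perfect-model scenario).
T, obs_sd = 60, 0.05
x_true = np.empty(T)
x_true[0] = 0.3
for t in range(1, T):
    x_true[t] = logistic(x_true[t - 1])
y = x_true + obs_sd * rng.normal(size=T)

# Bootstrap particle filter; a small process jitter avoids degeneracy.
n_part, jitter = 2000, 1e-3
parts = rng.uniform(0.0, 1.0, n_part)
est = np.empty(T)
for t in range(T):
    if t > 0:
        parts = np.clip(logistic(parts) + jitter * rng.normal(size=n_part), 0.0, 1.0)
    w = np.exp(-0.5 * ((y[t] - parts) / obs_sd) ** 2)   # Gaussian likelihood
    w /= w.sum()
    est[t] = np.dot(w, parts)                           # posterior mean
    parts = rng.choice(parts, size=n_part, p=w)         # multinomial resampling

rmse = np.sqrt(np.mean((est[10:] - x_true[10:]) ** 2))
```

In the perfect-model scenario the filtered estimate tracks the true state at roughly the observation-noise level; running the filter with a slightly wrong map parameter r would give a toy version of the imperfect-model scenario.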
Interpretation of pH-activity profiles for acid-base catalysis from molecular simulations.
Dissanayake, Thakshila; Swails, Jason M; Harris, Michael E; Roitberg, Adrian E; York, Darrin M
2015-02-17
The measurement of reaction rate as a function of pH provides essential information about mechanism. These rates are sensitive to the pK(a) values of amino acids directly involved in catalysis that are often shifted by the enzyme active site environment. Experimentally observed pH-rate profiles are usually interpreted using simple kinetic models that allow estimation of "apparent pK(a)" values of presumed general acid and base catalysts. One of the underlying assumptions in these models is that the protonation states are uncorrelated. In this work, we introduce the use of constant pH molecular dynamics simulations in explicit solvent (CpHMD) with replica exchange in the pH-dimension (pH-REMD) as a tool to aid in the interpretation of pH-activity data of enzymes and to test the validity of different kinetic models. We apply the methods to RNase A, a prototype acid-base catalyst, to predict the macroscopic and microscopic pK(a) values, as well as the shape of the pH-rate profile. Results for apo and cCMP-bound RNase A agree well with available experimental data and suggest that deprotonation of the general acid and protonation of the general base are not strongly coupled in transphosphorylation and hydrolysis steps. Stronger coupling, however, is predicted for the Lys41 and His119 protonation states in apo RNase A, leading to the requirement for a microscopic kinetic model. This type of analysis may be important for other catalytic systems where the active forms of the implicated general acid and base are oppositely charged and more highly correlated. These results suggest a new way for CpHMD/pH-REMD simulations to bridge the gap with experiments to provide a molecular-level interpretation of pH-activity data in studies of enzyme mechanisms.
Imbalanced target prediction with pattern discovery on clinical data repositories.
Chan, Tak-Ming; Li, Yuxi; Chiau, Choo-Chiap; Zhu, Jane; Jiang, Jie; Huo, Yong
2017-04-20
Clinical data repositories (CDR) have great potential to improve outcome prediction and risk modeling. However, most clinical studies require careful study design, dedicated data collection efforts, and sophisticated modeling techniques before a hypothesis can be tested. We aim to bridge this gap, so that clinical domain users can perform first-hand prediction on existing repository data without complicated handling, and obtain insightful patterns of imbalanced targets before a formal study is conducted. We specifically target interpretability for domain users, so that the model can be conveniently explained and applied in clinical practice. We propose an interpretable pattern model that is tolerant of noise (missing values) in practical data. To address the challenge of imbalanced targets of interest in clinical research, e.g., death rates of only a few percent, the geometric mean of sensitivity and specificity (G-mean) is employed as the optimization criterion, for which a simple but effective heuristic algorithm is developed. We compared pattern discovery to clinically interpretable methods on two retrospective clinical datasets, containing 14.9% one-year deaths in the thoracic dataset and 9.1% deaths in the cardiac dataset. Despite the imbalance challenge evident in other methods, pattern discovery consistently shows competitive cross-validated prediction performance. Compared to logistic regression, Naïve Bayes, and decision trees, pattern discovery achieves statistically significantly (p-values < 0.01, Wilcoxon signed rank test) better averaged testing G-means and F1-scores (harmonic mean of precision and sensitivity). Without requiring sophisticated technical processing or tweaking of the data, the prediction performance of pattern discovery is consistently comparable to the best achievable performance. Pattern discovery has thus been demonstrated to be robust and valuable for target prediction on existing clinical data repositories with imbalance and noise.
The prediction results and interpretable patterns can provide insights in an agile and inexpensive way for potential formal studies.
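The G-mean criterion used above is simple to compute from a binary confusion matrix. A minimal sketch (the confusion-matrix counts are invented for illustration, not from the paper's datasets):

```python
import math

def gmean_f1(tp, fn, tn, fp):
    """G-mean and F1-score from binary confusion-matrix counts."""
    sensitivity = tp / (tp + fn)      # true-positive rate (recall)
    specificity = tn / (tn + fp)      # true-negative rate
    precision = tp / (tp + fp)
    gmean = math.sqrt(sensitivity * specificity)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return gmean, f1

# An imbalanced split (about 9% positives), with invented counts:
g, f = gmean_f1(tp=8, fn=2, tn=90, fp=10)
```

Because G-mean multiplies the two class-conditional rates, a classifier that ignores the rare class scores near zero even when its raw accuracy is high, which is why the criterion suits imbalanced targets.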
Interpretation of pH-activity Profiles for Acid-Base Catalysis from Molecular Simulations
Dissanayake, Thakshila; Swails, Jason; Harris, Michael E.; Roitberg, Adrian E.; York, Darrin M.
2015-01-01
The measurement of reaction rate as a function of pH provides essential information about mechanism. These rates are sensitive to the pKa values of amino acids directly involved in catalysis that are often shifted by the enzyme active site environment. Experimentally observed pH-rate profiles are usually interpreted using simple kinetic models that allow estimation of “apparent pKa” values of presumed general acid and base catalysts. One of the underlying assumptions in these models is that the protonation states are uncorrelated. In the present work, we introduce the use of constant pH molecular dynamics simulations in explicit solvent (CpHMD) with replica exchange in the pH-dimension (pH-REMD) as a tool to aid in the interpretation of pH-activity data of enzymes, and test the validity of different kinetic models. We apply the methods to RNase A, a prototype acid/base catalyst, to predict the macroscopic and microscopic pKa values, as well as the shape of the pH-rate profile. Results for apo and cCMP-bound RNase A agree well with available experimental data, and suggest that deprotonation of the general acid and protonation of the general base are not strongly coupled in transphosphorylation and hydrolysis steps. Stronger coupling, however, is predicted for the Lys41 and His119 protonation states in apo RNase A, leading to the requirement for a microscopic kinetic model. This type of analysis may be important for other catalytic systems where the active forms of implicated general acid and base are oppositely charged and more highly correlated. These results suggest a new way for CpHMD/pH-REMD simulations to bridge the gap with experiments to provide a molecular-level interpretation of pH-activity data in studies of enzyme mechanisms. PMID:25615525
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spence, R.D.; Godbee, H.W.; Tallent, O.K.
1991-01-01
Despite the demonstrated importance of diffusion control in leaching, other mechanisms have been observed to play a role, and leaching from porous solid bodies is not simple diffusion. As yet, only simple diffusion theory has been developed well enough for extrapolation. This well-developed diffusion theory, used in data analysis by ANSI/ANS-16.1 and the NEWBOX program, can help in extrapolating and predicting the performance of solidified waste forms over decades and centuries, but the limitations and increased uncertainty of doing so must be understood. Treating leaching as a semi-infinite medium problem, as done in the Cote model, results in simpler equations but limits application to early leaching behavior, when less than 20% of a given component has been leached.
Hidden patterns of reciprocity.
Syi
2014-03-21
Reciprocity can help the evolution of cooperation. To model both types of reciprocity, we need the concept of strategy. In the case of direct reciprocity there are four second-order action rules (Simple Tit-for-tat, Contrite Tit-for-tat, Pavlov, and Grim Trigger) that are able to promote cooperation. In the case of indirect reciprocity the key component of cooperation is the assessment rule. There are, again, four elementary second-order assessment rules (Image Scoring, Simple Standing, Stern Judging, and Shunning). The eight concepts can be formalized in an ontologically thin way: we need only an action predicate and a value function, two agent concepts, and the constant of goodness. The formalism helps us to discover that the action and assessment rules can be paired and that they show the same patterns. The logic of these patterns can be interpreted with the concept of punishment, which has an inherently paradoxical nature.
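Some of the action rules named above can be sketched as memory-one functions of the previous round. This is an illustrative formalization, not the paper's predicate-based one; Contrite Tit-for-tat is omitted because it additionally requires standing information.

```python
from enum import Enum

class Move(Enum):
    C = "cooperate"
    D = "defect"

def tit_for_tat(my_last, opp_last):
    # Simple Tit-for-tat: copy the opponent's previous action.
    return opp_last

def pavlov(my_last, opp_last):
    # Win-stay, lose-shift: cooperate iff both players did the same thing.
    return Move.C if my_last == opp_last else Move.D

def grim_trigger(my_last, opp_last, triggered):
    # Defect forever once any defection has been observed; the boolean
    # "triggered" flag carries the rule's one bit of memory forward.
    triggered = triggered or opp_last is Move.D
    return (Move.D if triggered else Move.C), triggered
```

Each rule conditions only on the last round, which is what makes them "second-order" in the sense of needing just an action predicate over the two agents.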
On simulation of no-slip condition in the method of discrete vortices
NASA Astrophysics Data System (ADS)
Shmagunov, O. A.
2017-10-01
When modeling flows of an incompressible fluid, it is convenient sometimes to use the method of discrete vortices (MDV), where the continuous vorticity field is approximated by a set of discrete vortex elements moving in the velocity field. The vortex elements have a clear physical interpretation, they do not require the construction of grids and are automatically adaptive, since they concentrate in the regions of greatest interest and successfully describe the flows of a non-viscous fluid. The possibility of using MDV in simulating flows of a viscous fluid was considered in the previous papers using the examples of flows past bodies with sharp edges with the no-penetration condition at solid boundaries. However, the appearance of vorticity on smooth boundaries requires the no-slip condition to be met when MDV is realized, which substantially complicates the initially simple method. In this connection, an approach is considered that allows solving the problem by simple means.
Fire forbids fifty-fifty forest
Staal, Arie; Hantson, Stijn; Holmgren, Milena; Pueyo, Salvador; Bernardi, Rafael E.; Flores, Bernardo M.; Xu, Chi; Scheffer, Marten
2018-01-01
Recent studies have interpreted patterns of remotely sensed tree cover as evidence that forest with intermediate tree cover might be unstable in the tropics, as it will tip into either a closed forest or a more open savanna state. Here we show that across all continents the frequency of wildfires rises sharply as tree cover falls below ~40%. Using a simple empirical model, we hypothesize that the steepness of this pattern causes intermediate tree cover (30‒60%) to be unstable for a broad range of assumptions on tree growth and fire-driven mortality. We show that across all continents, observed frequency distributions of tropical tree cover are consistent with this hypothesis. We argue that percolation of fire through an open landscape may explain the remarkably universal rise of fire frequency around a critical tree cover, but we show that simple percolation models cannot predict the actual threshold quantitatively. The fire-driven instability of intermediate states implies that tree cover will not change smoothly with climate or other stressors and shifts between closed forest and a state of low tree cover will likely tend to be relatively sharp and difficult to reverse. PMID:29351323
Fire forbids fifty-fifty forest.
van Nes, Egbert H; Staal, Arie; Hantson, Stijn; Holmgren, Milena; Pueyo, Salvador; Bernardi, Rafael E; Flores, Bernardo M; Xu, Chi; Scheffer, Marten
2018-01-01
Recent studies have interpreted patterns of remotely sensed tree cover as evidence that forest with intermediate tree cover might be unstable in the tropics, as it will tip into either a closed forest or a more open savanna state. Here we show that across all continents the frequency of wildfires rises sharply as tree cover falls below ~40%. Using a simple empirical model, we hypothesize that the steepness of this pattern causes intermediate tree cover (30‒60%) to be unstable for a broad range of assumptions on tree growth and fire-driven mortality. We show that across all continents, observed frequency distributions of tropical tree cover are consistent with this hypothesis. We argue that percolation of fire through an open landscape may explain the remarkably universal rise of fire frequency around a critical tree cover, but we show that simple percolation models cannot predict the actual threshold quantitatively. The fire-driven instability of intermediate states implies that tree cover will not change smoothly with climate or other stressors and shifts between closed forest and a state of low tree cover will likely tend to be relatively sharp and difficult to reverse.
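The idea that fire percolates through an open landscape can be illustrated with a toy site-percolation model. In this sketch the grid size, trial counts, and cover values are illustrative assumptions (and, as the abstract itself notes, such simple models do not predict the real threshold quantitatively): fire is ignited along one edge of a random landscape and we ask whether it spans to the opposite edge.

```python
import random
from collections import deque

def fire_spans(p_open, size=40, rng=random.Random(2)):
    """Does fire ignited along the top edge reach the bottom edge?

    Open (non-tree) cells are flammable; fire spreads between
    4-connected open cells. The shared seeded rng keeps runs
    deterministic for this sketch.
    """
    grid = [[rng.random() < p_open for _ in range(size)] for _ in range(size)]
    seen = set((0, c) for c in range(size) if grid[0][c])
    queue = deque(seen)
    while queue:
        r, c = queue.popleft()
        if r == size - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < size and 0 <= nc < size and grid[nr][nc] \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def spanning_fraction(p_open, trials=30):
    return sum(fire_spans(p_open) for _ in range(trials)) / trials
```

Above the square-lattice site-percolation threshold (roughly 0.59 open fraction, i.e. tree cover near 40%), spanning fires become common, mirroring the sharp rise of fire frequency at low tree cover described above.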
Functional renormalization group analysis of tensorial group field theories on Rd
NASA Astrophysics Data System (ADS)
Geloun, Joseph Ben; Martini, Riccardo; Oriti, Daniele
2016-07-01
Rank-d tensorial group field theories are quantum field theories (QFTs) defined on a group manifold G^(×d), which represent a nonlocal generalization of standard QFT and a candidate formalism for quantum gravity, since, when endowed with appropriate data, they can be interpreted as defining a field-theoretic description of the fundamental building blocks of quantum spacetime. Their renormalization analysis is crucial both for establishing their consistency as quantum field theories and for studying the emergence of continuum spacetime and geometry from them. In this paper, we study the renormalization group flow of two simple classes of tensorial group field theories (TGFTs), defined for the group G = R for arbitrary rank, both without and with gauge invariance conditions, by means of functional renormalization group techniques. The issue of IR divergences is tackled by the definition of a proper thermodynamic limit for TGFTs. We map the phase diagram of such models, in a simple truncation, and identify both UV and IR fixed points of the RG flow. Encouragingly, for all the models we study, we find evidence for the existence of a phase transition of condensation type.
NASA Astrophysics Data System (ADS)
Asfahani, J.; Tlas, M.
2015-10-01
An easy and practical method for interpreting residual gravity anomalies due to simple geometrically shaped models, such as cylinders and spheres, is proposed in this paper. The method is based on both the deconvolution technique and the simplex algorithm for linear optimization, to most effectively estimate the model parameters, e.g., the depth from the surface to the center of a buried structure (sphere or horizontal cylinder) or the depth from the surface to the top of a buried object (vertical cylinder), and the amplitude coefficient, from the residual gravity anomaly profile. The method was tested on synthetic data sets corrupted by different levels of white Gaussian random noise to demonstrate its capability and reliability. The results show that the parameter values estimated by this method are close to the assumed true parameter values. The validity of the method is also demonstrated using real field residual gravity anomalies from Cuba and Sweden. Comparable and acceptable agreement is shown between the results derived by this method and those derived from the real field data.
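The forward model behind such interpretations is a simple analytic expression. The sketch below generates a synthetic sphere anomaly and recovers depth and amplitude by brute-force least squares; the paper's actual scheme combines deconvolution with the simplex algorithm, so the grid scan here is only an illustrative stand-in, and all parameter values are invented.

```python
import numpy as np

def sphere_anomaly(x, amp, depth):
    # Residual gravity of a buried sphere: shape-factor exponent 3/2
    # (a horizontal cylinder would use exponent 1 instead).
    return amp * depth / (x ** 2 + depth ** 2) ** 1.5

x = np.linspace(-50.0, 50.0, 101)               # profile coordinate (m)
obs = sphere_anomaly(x, amp=2000.0, depth=10.0)  # synthetic "observed" data

# Brute-force least-squares scan over an (amplitude, depth) grid.
amps = np.linspace(500.0, 4000.0, 141)
depths = np.linspace(2.0, 30.0, 141)
best = min(
    ((np.sum((obs - sphere_anomaly(x, a, d)) ** 2), a, d)
     for a in amps for d in depths),
    key=lambda t: t[0],
)
_, amp_est, depth_est = best
```

On noise-free synthetic data the scan recovers the assumed depth and amplitude, which is the same parameter-recovery check the abstract describes performing before adding Gaussian noise.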
NASA Astrophysics Data System (ADS)
Pradas, Marc; Pumir, Alain; Huber, Greg; Wilkinson, Michael
2017-07-01
Chaos is widely understood as being a consequence of sensitive dependence upon initial conditions. This is the result of an instability in phase space, which separates trajectories exponentially. Here, we demonstrate that this criterion should be refined. Despite their overall intrinsic instability, trajectories may be very strongly convergent in phase space over extremely long periods, as revealed by our investigation of a simple chaotic system (a realistic model for small bodies in a turbulent flow). We establish that this strong convergence is a multi-faceted phenomenon, in which the clustering is intense, widespread and balanced by lacunarity of other regions. Power laws, indicative of scale-free features, characterize the distribution of particles in the system. We use large-deviation and extreme-value statistics to explain the effect. Our results show that the interpretation of the ‘butterfly effect’ needs to be carefully qualified. We argue that the combination of mixing and clustering processes makes our specific model relevant to understanding the evolution of simple organisms. Lastly, this notion of convergent chaos, which implies the existence of conditions for which uncertainties are unexpectedly small, may also be relevant to the valuation of insurance and futures contracts.
Konovalov, Arkady; Krajbich, Ian
2016-01-01
Organisms appear to learn and make decisions using different strategies known as model-free and model-based learning; the former is mere reinforcement of previously rewarded actions and the latter is a forward-looking strategy that involves evaluation of action-state transition probabilities. Prior work has used neural data to argue that both model-based and model-free learners implement a value comparison process at trial onset, but model-based learners assign more weight to forward-looking computations. Here using eye-tracking, we report evidence for a different interpretation of prior results: model-based subjects make their choices prior to trial onset. In contrast, model-free subjects tend to ignore model-based aspects of the task and instead seem to treat the decision problem as a simple comparison process between two differentially valued items, consistent with previous work on sequential-sampling models of decision making. These findings illustrate a problem with assuming that experimental subjects make their decisions at the same prescribed time. PMID:27511383
NASA Astrophysics Data System (ADS)
Raman, Kumar; Casey, Dan; Callahan, Debra; Clark, Dan; Fittinghoff, David; Grim, Gary; Hatchett, Steve; Hinkel, Denise; Jones, Ogden; Kritcher, Andrea; Seek, Scott; Suter, Larry; Merrill, Frank; Wilson, Doug
2016-10-01
In experiments with cryogenic deuterium-tritium (DT) fuel layers at the National Ignition Facility (NIF), an important technique for visualizing the stagnated fuel assembly is to image the 6-12 MeV neutrons created by scatters of the 14 MeV hotspot neutrons in the surrounding cold fuel. However, such down-scattered neutron images are difficult to interpret without a model of the fuel assembly, because of the nontrivial neutron kinematics involved in forming the images. For example, the dominant scattering modes are at angles other than forward scattering and the 14 MeV neutron fluence is not uniform. Therefore, the intensity patterns in these images usually do not correspond in a simple way to patterns in the fuel distribution, even for simple fuel distributions. We describe our efforts to model synthetic images from ICF design simulations with data from the National Ignition Campaign and after. We discuss the insight this gives, both to understand how well the models are predicting fuel asymmetries and to inform how to optimize the diagnostic for the types of fuel distributions being predicted. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Marsh, Herbert W; Scalas, L Francesca; Nagengast, Benjamin
2010-06-01
Self-esteem, typically measured by the Rosenberg Self-Esteem Scale (RSE), is one of the most widely studied constructs in psychology. Nevertheless, there is broad agreement that a simple unidimensional factor model, consistent with the original design and typical application in applied research, does not provide an adequate explanation of RSE responses. However, there is no clear agreement about what alternative model is most appropriate, or even a clear rationale for how to test competing interpretations. Three alternative interpretations exist: (a) 2 substantively important trait factors (positive and negative self-esteem), (b) 1 trait factor and ephemeral method artifacts associated with positively or negatively worded items, or (c) 1 trait factor and stable response-style method factors associated with item wording. We have posited 8 alternative models and structural equation model tests based on longitudinal data (4 waves of data across 8 years with a large, representative sample of adolescents). Longitudinal models provide no support for the unidimensional model, undermine support for the 2-factor model, and clearly refute claims that wording effects are ephemeral, but they provide good support for models positing 1 substantive (self-esteem) factor and response-style method factors that are stable over time. This longitudinal methodological approach has not only resolved these long-standing issues in self-esteem research but also has broad applicability to most psychological assessments based on self-reports with a mix of positively and negatively worded items.
Submillimeter Galaxy Number Counts and Magnification by Galaxy Clusters
NASA Astrophysics Data System (ADS)
Lima, Marcos; Jain, Bhuvnesh; Devlin, Mark; Aguirre, James
2010-07-01
We present an analytical model that reproduces measured galaxy number counts from surveys in the wavelength range of 500 μm-2 mm. The model involves a single high-redshift galaxy population with a Schechter luminosity function that has been gravitationally lensed by galaxy clusters in the mass range 10^13-10^15 M_sun. This simple model reproduces both the low-flux and the high-flux end of the number counts reported by the BLAST, SCUBA, AzTEC, and South Pole Telescope (SPT) surveys. In particular, our model accounts for the most luminous galaxies detected by SPT as the result of high magnifications by galaxy clusters (magnification factors of 10-30). This interpretation implies that submillimeter (submm) and millimeter surveys of this population may prove to be a useful addition to ongoing cluster detection surveys. The model also implies that the bulk of submm galaxies detected at wavelengths larger than 500 μm lie at redshifts greater than 2.
Model interpretation of type III radio burst characteristics. I - Spatial aspects
NASA Technical Reports Server (NTRS)
Reiner, M. J.; Stone, R. G.
1988-01-01
The ways that the finite size of the source region and directivity of the emitted radiation modify the observed characteristics of type III radio bursts as they propagate through the interplanetary medium are investigated. A simple model that simulates the radio source region is developed to provide insight into the spatial behavior of the parameters that characterize radio bursts. The model is used to demonstrate that observed radio azimuths are systematically displaced from the geometric centroid of the exciter electron beam in such a way as to cause trajectories of the radio bursts to track back to the observer at low frequencies, rather than to follow expected Archimedean spiral-like paths. The source region model is used to investigate the spatial behavior of the peak intensities of radio bursts, and it is found that the model can qualitatively account for both the frequency dependence and the east-west asymmetry of the observed peak flux densities.
Mohammed, Asadig; Murugan, Jeff; Nastase, Horatiu
2012-11-02
We present an embedding of the three-dimensional relativistic Landau-Ginzburg model for condensed matter systems in an N = 6, U(N) × U(N) Chern-Simons-matter theory [the Aharony-Bergman-Jafferis-Maldacena model] by consistently truncating the latter to an Abelian effective field theory encoding the collective dynamics of O(N) of the O(N(2)) modes. In fact, depending on the vacuum expectation value on one of the Aharony-Bergman-Jafferis-Maldacena scalars, a mass deformation parameter μ and the Chern-Simons level number k, our Abelianization prescription allows us to interpolate between the Abelian Higgs model with its usual multivortex solutions and a φ^4 theory. We sketch a simple condensed matter model that reproduces all the salient features of the Abelianization. In this context, the Abelianization can be interpreted as giving a dimensional reduction from four dimensions.
In silico method for modelling metabolism and gene product expression at genome scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lerman, Joshua A.; Hyduke, Daniel R.; Latif, Haythem
2012-07-03
Transcription and translation use raw materials and energy generated metabolically to create the macromolecular machinery responsible for all cellular functions, including metabolism. A biochemically accurate model of molecular biology and metabolism will facilitate comprehensive and quantitative computations of an organism's molecular constitution as a function of genetic and environmental parameters. Here we formulate a model of metabolism and macromolecular expression. Prototyping it using the simple microorganism Thermotoga maritima, we show our model accurately simulates variations in cellular composition and gene expression. Moreover, through in silico comparative transcriptomics, the model allows the discovery of new regulons and improvement of the genome and transcription-unit annotations. Our method presents a framework for investigating molecular biology and cellular physiology in silico and may allow quantitative interpretation of multi-omics data sets in the context of an integrated biochemical description of an organism.
Manipulation Capabilities with Simple Hands
2010-01-01
allowing it to interpret online kinesthetic data, addressing two objectives: • Grasp classification: distinguish between successful and unsuccessful ... determining the grasp outcome before the grasping process is complete, by using the entire time series or kinesthetic signature of the grasping process. As the grasp proceeds and additional kinesthetic data accumulates, the confidence also increases. In some cases ...
Some Marginalist Intuition Concerning the Optimal Commodity Tax Problem
ERIC Educational Resources Information Center
Brett, Craig
2006-01-01
The author offers a simple intuition that can be exploited to derive and to help interpret some canonical results in the theory of optimal commodity taxation. He develops and explores the principle that the marginal social welfare loss per last unit of tax revenue generated be equalized across tax instruments. A simple two-consumer,…
Sound propagation from a simple source in a wind tunnel
NASA Technical Reports Server (NTRS)
Cole, J. E., III
1975-01-01
The nature of the acoustic field of a simple source in a wind tunnel under flow conditions was examined theoretically and experimentally. The motivation of the study was to establish aspects of the theoretical framework for interpreting acoustic data taken in wind tunnels using in-wind microphones. Three distinct investigations were performed and are described in detail.
Solares, Santiago D.
2015-11-26
This study introduces a quasi-3-dimensional (Q3D) viscoelastic model and software tool for use in atomic force microscopy (AFM) simulations. The model is based on a 2-dimensional array of standard linear solid (SLS) model elements. The well-known 1-dimensional SLS model is a textbook example in viscoelastic theory but is relatively new in AFM simulation. It is the simplest model that offers a qualitatively correct description of the most fundamental viscoelastic behaviors, namely stress relaxation and creep. However, this simple model does not reflect the correct curvature in the repulsive portion of the force curve, so its application in the quantitative interpretation of AFM experiments is relatively limited. In the proposed Q3D model the use of an array of SLS elements leads to force curves that have the typical upward curvature in the repulsive region, while still offering a very low computational cost. Furthermore, the use of a multidimensional model allows for the study of AFM tips having non-ideal geometries, which can be extremely useful in practice. Examples of typical force curves are provided for single- and multifrequency tapping-mode imaging, for both of which the force curves exhibit the expected features. Lastly, a software tool to simulate amplitude and phase spectroscopy curves is provided, which can be easily modified to implement other control schemes in order to aid in the interpretation of AFM experiments.
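The stress-relaxation behavior that the SLS element captures can be written in closed form. A minimal sketch with illustrative stiffness and time-constant values (not those of the paper's Q3D tool):

```python
import numpy as np

def sls_relaxation_modulus(t, k_e, k_m, tau):
    """Standard linear solid (Zener) relaxation modulus.

    k_e : equilibrium (long-time) stiffness of the lone spring
    k_m : Maxwell-arm stiffness, relaxing with time constant tau
    """
    return k_e + k_m * np.exp(-t / tau)

t = np.linspace(0.0, 5.0, 501)
G = sls_relaxation_modulus(t, k_e=1.0, k_m=4.0, tau=0.5)
```

The modulus decays monotonically from the instantaneous value k_e + k_m toward the equilibrium value k_e, which is the textbook stress-relaxation response the abstract refers to; creep under constant load is the complementary behavior.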
Major Fault Patterns in Zanjan State of Iran Based on the GECO Global Geoid Model
NASA Astrophysics Data System (ADS)
Beheshty, Sayyed Amir Hossein; Abrari Vajari, Mohammad; Raoufikelachayeh, SeyedehSusan
2016-04-01
A new Earth Gravitational Model (GECO), complete to degree 2190, has been developed that incorporates EGM2008 and the latest GOCE-based satellite solutions. Satellite gradiometry data provide more sensitive information on the long- and medium-wavelength components of the gravity field than conventional satellite tracking data. Hence, by utilizing this new technique, more accurate, reliable, and higher-degree/order spherical harmonic expansions of the gravity field can be achieved. Gravity gradients can also be useful in geophysical interpretation and prospecting. We present the concept of gravity gradients with some simple interpretations. MATLAB-based computer programs were developed and utilized for determining the gravity and gradient components of the gravity field using the GGMs, followed by a case study in Zanjan State of Iran. Our numerical studies show strong (more than 72%) correlations between gravity anomalies and the diagonal elements of the gradient tensor. Strong correlations were also revealed between the components of the deflection of the vertical and the off-diagonal elements, as well as between the horizontal gradient and the magnitude of the deflection of the vertical; the same geophysical interpretation can therefore be stated for the gravity gradient components too. Based on this information, we clearly distinguished two large faults north and south of Zanjan city, and several minor faults were also detected in the study area. Our mathematical derivations support some of these correlations.
Item Selection, Evaluation, and Simple Structure in Personality Data
Pettersson, Erik; Turkheimer, Eric
2010-01-01
We report an investigation of the genesis and interpretation of simple structure in personality data using two very different self-reported data sets. The first consists of a set of relatively unselected lexical descriptors, whereas the second is based on responses to a carefully constructed instrument. In both data sets, we explore the degree of simple structure by comparing factor solutions to solutions from simulated data constructed to have either strong or weak simple structure. The analysis demonstrates that there is little evidence of simple structure in the unselected items, and a moderate degree among the selected items. In both instruments, however, much of the simple structure that could be observed originated in a strong dimension of positive vs. negative evaluation. PMID:20694168
Cell fate regulation in the shoot meristem.
Laux, T; Mayer, K F
1998-04-01
The shoot meristem is a proliferative centre containing pluripotent stem cells that are the ultimate source of all cells and organs continuously added to the growing shoot. The progeny of the stem cells have two developmental options, either to renew the stem cell population or to leave the meristem and to differentiate, possibly according to signals from more mature tissue. The destiny of each cell depends on its position within the dynamic shoot meristem. Genetic data suggest a simple model in which graded positional information is provided by antagonistic gene functions and is interpreted by genes which regulate cell fate.
Interference phenomena at backscattering by ice crystals of cirrus clouds.
Borovoi, Anatoli; Kustova, Natalia; Konoshonkin, Alexander
2015-09-21
It is shown that light backscattering by hexagonal ice crystals of cirrus clouds is formed, within the physical-optics approximation, by both diffraction and interference phenomena. Diffraction determines the angular width of the backscattering peak, and interference produces the interference rings inside the peak. Using a simple model for distortion of the pristine hexagonal shape, we show that shape distortion leads both to oscillations of the scattering (Mueller) matrix within the backscattering peak and to a strong increase of the depolarization, color, and lidar ratios needed for the interpretation of lidar signals.
The Influence of AN Interacting Vacuum Energy on the Gravitational Collapse of a Star Fluid
NASA Astrophysics Data System (ADS)
Campos, M.
2014-02-01
To explain the accelerated expansion of the universe, models with interacting dark components have been considered in the literature. Generally, the dark energy component is physically interpreted as the vacuum energy. However, on the other side of the same coin, the influence of the vacuum energy on gravitational collapse is a topic of scientific interest. Based on a simple assumption about the collapse rate of the matter fluid density, which is altered by the inclusion of a vacuum energy component that interacts with the matter fluid, we study the final fate of the collapse process.
The static response of a bowed inclined hot wire
NASA Technical Reports Server (NTRS)
Smits, A. J.
1984-01-01
The directional sensitivity of a bowed, inclined hot wire is investigated using a simple model for the convective heat transfer. The static response is analyzed for subsonic and supersonic flows. It is shown that the effects of both end conduction and wire bowing are greater in supersonic flow. Regardless of the Mach number, however, these two phenomena have distinctly different effects; end conduction appears to be responsible for reducing the nonlinearity of the response, whereas bowing increases the directional sensitivity. Comparison with the available data suggests that the analysis is useful for interpreting the experimental results.
Extreme ultraviolet quantum efficiency of opaque alkali halide photocathodes on microchannel plates
NASA Technical Reports Server (NTRS)
Siegmund, O. H. W.; Everman, E.; Vallerga, J. V.; Lampton, M.
1988-01-01
Comprehensive measurements are presented for the quantum detection efficiency (QDE) of the microchannel plate materials CsI, KBr, KCl, and MgF2, over the 44-1800 A wavelength range. QDEs in excess of 40 percent are achieved by several materials in specific wavelength regions of the EUV. Structure is noted in the wavelength dependence of the QDE that is directly related to the valence-band/conduction-band gap energy and the onset of atomic-like resonant transitions. A simple photocathode model allows interpretation of these features, together with the QDE efficiency variation, as a function of illumination angle.
Satomura, Hironori; Adachi, Kohei
2013-07-01
To facilitate the interpretation of canonical correlation analysis (CCA) solutions, procedures have been proposed in which CCA solutions are orthogonally rotated to a simple structure. In this paper, we consider oblique rotation for CCA to provide solutions that are much easier to interpret, though only orthogonal rotation is allowed in the existing formulations of CCA. Our task is thus to reformulate CCA so that its solutions have the freedom of oblique rotation. Such a task can be achieved using Yanai's (Jpn. J. Behaviormetrics 1:46-54, 1974; J. Jpn. Stat. Soc. 11:43-53, 1981) generalized coefficient of determination for the objective function to be maximized in CCA. The resulting solutions are proved to include the existing orthogonal ones as special cases and to be rotated obliquely without affecting the objective function value, where ten Berge's (Psychometrika 48:519-523, 1983) theorems on suborthonormal matrices are used. A real data example demonstrates that the proposed oblique rotation can provide simple, easily interpreted CCA solutions.
On the representability problem and the physical meaning of coarse-grained models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, Jacob W.; Dama, James F.; Durumeric, Aleksander E. P.
2016-07-28
In coarse-grained (CG) models where certain fine-grained (FG, i.e., atomistic resolution) observables are not directly represented, one can nonetheless identify indirect CG observables that capture the FG observable’s dependence on CG coordinates. Often, in these cases it appears that a CG observable can be defined by analogy to an all-atom or FG observable, but the similarity is misleading and significantly undermines the interpretation of both bottom-up and top-down CG models. Such problems emerge especially clearly in the framework of systematic bottom-up CG modeling, where a direct and transparent correspondence between FG and CG variables establishes precise conditions for consistency between CG observables and underlying FG models. Here we present and investigate these representability challenges and illustrate them via the bottom-up conceptual framework for several simple analytically tractable polymer models. The examples place special focus on the observables of configurational internal energy, entropy, and pressure, which have been at the root of controversy in the CG literature, and also discuss observables that would seem to be entirely missing in the CG representation but can nonetheless be correlated with CG behavior. Though we investigate these problems in the framework of systematic coarse-graining, the lessons apply to top-down CG modeling as well, with crucial implications for simulation at constant pressure and surface tension and for the interpretation of structural and thermodynamic correlations for comparison to experiment.
Inelastic response of silicon to shock compression
Higginbotham, A.; Stubley, P. G.; Comley, A. J.; Eggert, J. H.; Foster, J. M.; Kalantar, D. H.; McGonegle, D.; Patel, S.; Peacock, L. J.; Rothman, S. D.; Smith, R. F.; Suggit, M. J.; Wark, J. S.
2016-01-01
The elastic and inelastic response of [001] oriented silicon to laser compression has been a topic of considerable discussion for well over a decade, yet there has been little progress in understanding the basic behaviour of this apparently simple material. We present experimental x-ray diffraction data showing complex elastic strain profiles in laser compressed samples on nanosecond timescales. We also present molecular dynamics and elasticity code modelling which suggests that a pressure induced phase transition is the cause of the previously reported ‘anomalous’ elastic waves. Moreover, this interpretation allows for measurement of the kinetic timescales for transition. This model is also discussed in the wider context of the reported deformation of silicon under rapid compression in the literature. PMID:27071341
NASA Technical Reports Server (NTRS)
Hohenemser, K. H.; Crews, S. T.
1972-01-01
A two-bladed 16-inch hingeless rotor model was built and tested outside and inside a 24 by 24 inch wind tunnel test section at collective pitch settings up to 5 deg and rotor advance ratios up to 0.4. The rotor model has a simple eccentric mechanism to provide progressing or regressing cyclic pitch excitation. The flapping responses were compared to analytically determined responses which included flap-bending elasticity but excluded rotor wake effects. Substantial systematic deviations of the measured responses from the computed responses were found, which were interpreted as effects of the interaction of the blades with a rotating asymmetrical wake.
NASA Astrophysics Data System (ADS)
Macedo-Filho, A.; Alves, G. A.; Costa Filho, R. N.; Alves, T. F. A.
2018-04-01
We investigated the susceptible-infected-susceptible model on a square lattice in the presence of a conjugated field based on recently proposed reactivating dynamics. Reactivating dynamics consists of reactivating the infection by adding one infected site, chosen randomly whenever the infection dies out, which prevents the dynamics from being trapped in the absorbing state. We show that the reactivating dynamics can be interpreted as the usual dynamics performed in the presence of an effective conjugated field, named the reactivating field. The reactivating field scales as the inverse of the number of lattice vertices n, which vanishes in the thermodynamic limit and does not affect any scaling properties, including those related to the conjugated field.
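A minimal sketch of the reactivating dynamics described above, with illustrative rates and lattice size (not the parameters of the study):

```python
import numpy as np

# Sketch of SIS dynamics on a periodic square lattice with the reactivating
# rule: whenever the infection dies out, one randomly chosen site is
# re-infected, so the absorbing state is never reached.
rng = np.random.default_rng(1)
L, lam, steps = 16, 0.8, 2000          # lattice side, infection rate, updates
state = np.zeros((L, L), dtype=bool)
state[L // 2, L // 2] = True           # single initial infected site

for _ in range(steps):
    i, j = rng.integers(L, size=2)     # random-site sequential update
    if state[i, j]:                    # infected site recovers
        state[i, j] = False
    else:                              # susceptible: infect via neighbours
        n_inf = (state[(i - 1) % L, j] + state[(i + 1) % L, j] +
                 state[i, (j - 1) % L] + state[i, (j + 1) % L])
        if rng.random() < lam * n_inf / 4:
            state[i, j] = True
    if not state.any():                # reactivating field: revive one site
        a, b = rng.integers(L, size=2)
        state[a, b] = True

print(state.sum() >= 1)  # → True
```

By construction at least one infected site survives every update, which is exactly the trapping-avoidance property the abstract describes.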
Rheology of dilute suspensions of red blood cells: experimental and theoretical approaches
NASA Astrophysics Data System (ADS)
Drochon, A.
2003-05-01
Shear viscosity measurements with dilute suspensions of red blood cells are interpreted using a microrheological model that relates the bulk measurements to the physical properties of the suspended cells. It is thus possible to quantify the average deformability of an RBC population in terms of a mean value of the membrane shear elastic modulus E_s. The values obtained for normal cells are in good agreement with those given in the literature. The method makes it possible to discriminate between normal and altered (diamide- or glutaraldehyde-treated) cells or pathological cells (scleroderma). The predictions of the microrheological model, based on analytic calculations, are also compared with the numerical results of Ramanujan and Pozrikidis (JFM 361, 1998) for dilute suspensions of capsules in simple shear flow.
A model for the wind of the M supergiant VX Sagittarii
NASA Astrophysics Data System (ADS)
Pijpers, F. P.
1990-11-01
The velocity distribution of the stellar wind from the M supergiant VX Sgr deduced from interferometric measurements of maser lines by Chapman and Cohen (1986) has been modeled using the linearized theory of stellar winds driven by short-period sound waves proposed by Pijpers and Hearn (1989) and the theory of stellar winds driven by short-period shocks proposed by Pijpers and Habing (1989). The effect of the radiative forces on the dust formed in the wind is included in a simple way. Good agreement with the observations is obtained for a range of parameters in the theory. A series of observations of the maser lines at intervals of one or a few days may provide additional constraints on the interpretation.
How old is this bird? The age distribution under some phase sampling schemes.
Hautphenne, Sophie; Massaro, Melanie; Taylor, Peter
2017-12-01
In this paper, we use a finite-state continuous-time Markov chain with one absorbing state to model an individual's lifetime. Under this model, the time of death follows a phase-type distribution, and the transient states of the Markov chain are known as phases. We then attempt to provide an answer to the simple question "What is the conditional age distribution of the individual, given its current phase"? We show that the answer depends on how we interpret the question, and in particular, on the phase observation scheme under consideration. We then apply our results to the computation of the age pyramid for the endangered Chatham Island black robin Petroica traversi during the monitoring period 2007-2014.
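The phase-type setup can be sketched in a few lines. The two-phase generator below is hypothetical, but the mean-lifetime formula, alpha (-T)^{-1} 1, is the standard phase-type result:

```python
import numpy as np

# Sketch of a lifetime modeled by a phase-type distribution: a CTMC with
# transient "phases" and one absorbing (death) state. The generator T over
# the transient states below is made up for illustration.
alpha = np.array([1.0, 0.0])      # initial phase distribution: start in phase 1
T = np.array([[-2.0, 1.0],        # sub-generator over transient phases;
              [0.0, -1.0]])       # absorption rates are -row sums

# Expected time to absorption (mean lifetime): alpha @ inv(-T) @ 1
mean_lifetime = alpha @ np.linalg.inv(-T) @ np.ones(2)
print(mean_lifetime)  # → 1.0
```

Conditioning on the current phase, as the abstract asks, amounts to restricting such computations to the sub-chain observed at the sampling time.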
Evolution of the cerebellum as a neuronal machine for Bayesian state estimation
NASA Astrophysics Data System (ADS)
Paulin, M. G.
2005-09-01
The cerebellum evolved in association with the electric sense and vestibular sense of the earliest vertebrates. Accurate information provided by these sensory systems would have been essential for precise control of orienting behavior in predation. A simple model shows that individual spikes in electrosensory primary afferent neurons can be interpreted as measurements of prey location. Using this result, I construct a computational neural model in which the spatial distribution of spikes in a secondary electrosensory map forms a Monte Carlo approximation to the Bayesian posterior distribution of prey locations given the sense data. The neural circuit that emerges naturally to perform this task resembles the cerebellar-like hindbrain electrosensory filtering circuitry of sharks and other electrosensory vertebrates. The optimal filtering mechanism can be extended to handle dynamical targets observed from a dynamical platform; that is, to construct an optimal dynamical state estimator using spiking neurons. This may provide a generic model of cerebellar computation. Vertebrate motion-sensing neurons have specific fractional-order dynamical characteristics that allow Bayesian state estimators to be implemented elegantly and efficiently, using simple operations with asynchronous pulses, i.e. spikes. The computational neural models described in this paper represent a novel kind of particle filter, using spikes as particles. The models are specific and make testable predictions about computational mechanisms in cerebellar circuitry, while providing a plausible explanation of cerebellar contributions to aspects of motor control, perception and cognition.
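The spikes-as-particles idea is, at its core, a particle filter. A generic bootstrap particle filter for a drifting one-dimensional target, with made-up noise parameters rather than the paper's electrosensory model, might look like:

```python
import numpy as np

# Illustrative bootstrap particle filter: particles approximate the Bayesian
# posterior over prey location given noisy sensor observations, in the spirit
# of the spike-based Monte Carlo approximation described above.
rng = np.random.default_rng(2)
n, sigma_obs, sigma_move = 500, 0.5, 0.1   # assumed noise parameters
true_x = 1.0
particles = rng.uniform(-5, 5, size=n)      # prior over prey location

for _ in range(20):
    true_x += sigma_move * rng.normal()     # target drifts
    obs = true_x + sigma_obs * rng.normal() # noisy "afferent" measurement
    particles += sigma_move * rng.normal(size=n)            # predict step
    w = np.exp(-0.5 * ((obs - particles) / sigma_obs) ** 2) # likelihood weights
    w /= w.sum()
    particles = rng.choice(particles, size=n, p=w)          # resample step

print(round(float(particles.mean()), 2))    # posterior mean near true_x
```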
Modeling and Circumventing the Effect of Sediments and Water Column on Receiver Functions
NASA Astrophysics Data System (ADS)
Audet, P.
2017-12-01
Teleseismic P-wave receiver functions are routinely used to resolve crust and mantle structure in various geologic settings. Receiver functions are approximations to the Earth's Green's functions and are composed of various scattered phase arrivals, depending on the complexity of the underlying Earth structure. For simple structure, the dominant arrivals (converted and back-scattered P-to-S phases) are well separated in time and can be reliably used in estimating crustal velocity structure. In the presence of sedimentary layers, strong reverberations typically produce high-amplitude oscillations that contaminate the early part of the wave train and receiver functions can be difficult to interpret in terms of underlying structure. The effect of a water column also limits the interpretability of under-water receiver functions due to the additional acoustic wave propagating within the water column that can contaminate structural arrivals. We perform numerical modeling of teleseismic Green's functions and receiver functions using a reflectivity technique for a range of Earth models that include thin sedimentary layers and overlying water column. These modeling results indicate that, as expected, receiver functions are difficult to interpret in the presence of sediments, but the contaminating effect of the water column is dependent on the thickness of the water layer. To circumvent these effects and recover source-side structure, we propose using an approach based on transfer function modeling that bypasses receiver functions altogether and estimates crustal properties directly from the waveforms (Frederiksen and Delayney, 2015). Using this approach, reasonable assumptions about the properties of the sedimentary layer can be included in forward calculations of the Green's functions that are convolved with radial waveforms to predict vertical waveforms. 
Exploration of model space using a Monte Carlo-style search with least-squares waveform misfits can be performed to estimate any model parameter of interest, including those of the sedimentary or water layer. We show how this method can be applied to OBS data using broadband stations from the Cascadia Initiative to recover oceanic plate structure.
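The Monte Carlo-style search over model space can be sketched generically; the one-parameter "forward model" below is a stand-in damped sinusoid, not a reflectivity computation:

```python
import numpy as np

# Toy Monte Carlo search: draw random candidate models, keep the one with
# the smallest least-squares misfit to the "observed" waveform.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)

def forward(v):
    # hypothetical one-parameter forward model (stand-in for reflectivity)
    return np.exp(-t) * np.sin(2 * np.pi * v * t)

observed = forward(3.0)            # synthetic data, true parameter v = 3.0

best_v, best_misfit = None, np.inf
for _ in range(2000):
    v = rng.uniform(1.0, 5.0)      # random draw from the model space
    misfit = np.sum((forward(v) - observed) ** 2)
    if misfit < best_misfit:
        best_v, best_misfit = v, misfit

print(abs(best_v - 3.0) < 0.05)  # → True
```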
Cosmic-ray streaming and anisotropies
NASA Technical Reports Server (NTRS)
Forman, M. A.; Gleeson, L. J.
1975-01-01
The paper is concerned with the differential current densities and anisotropies that exist in the interplanetary cosmic-ray gas, and in particular with a correct formulation and simple interpretation of the momentum equation that describes these on a local basis. Two examples of the use of this equation in the interpretation of previous data are given. It is demonstrated that in interplanetary space, the electric-field drifts and convective flow parallel to the magnetic field of cosmic-ray particles combine as a simple convective flow with the solar wind, and that there exist diffusive currents and transverse gradient drift currents. Thus direct reference to the interplanetary electric-field drifts is eliminated, and the study of steady-state and transient cosmic-ray anisotropies is both more systematic and simpler.
Saito, Ryoichi; Koyano, Saho; Dorin, Misato; Higurashi, Yoshimi; Misawa, Yoshiki; Nagano, Noriyuki; Kaneko, Takamasa; Moriya, Kyoji
2015-01-01
We investigated the performance of a phenotypic test, the Carbapenemase Detection Set (MAST-CDS), for the identification of carbapenemase-producing Enterobacteriaceae. Our results indicated that MAST-CDS is rapid, easily performed, simple to interpret, and highly sensitive for the identification of carbapenemase producers, particularly imipenemase producers. Copyright © 2014 Elsevier B.V. All rights reserved.
Land management in the American southwest: a state-and-transition approach to ecosystem complexity.
Bestelmeyer, Brandon T; Herrick, Jeffrey E; Brown, Joel R; Trujillo, David A; Havstad, Kris M
2004-07-01
State-and-transition models are increasingly being used to guide rangeland management. These models provide a relatively simple, management-oriented way to classify land condition (state) and to describe the factors that might cause a shift to another state (a transition). There are many formulations of state-and-transition models in the literature. The version we endorse does not adhere to any particular generalities about ecosystem dynamics, but it includes consideration of several kinds of dynamics and management response to them. In contrast to previous uses of state-and-transition models, we propose that models can, at present, be most effectively used to specify and qualitatively compare the relative benefits and potential risks of different management actions (e.g., fire and grazing) and other factors (e.g., invasive species and climate change) on specified areas of land. High spatial and temporal variability and complex interactions preclude the meaningful use of general quantitative models. Forecasts can be made on a case-by-case basis by interpreting qualitative and quantitative indicators, historical data, and spatially structured monitoring data based on conceptual models. We illustrate how science-based conceptual models are created using several rangeland examples that vary in complexity. In doing so, we illustrate the implications of designating plant communities and states in models, accounting for varying scales of pattern in vegetation and soils, interpreting the presence of plant communities on different soils, and dealing with our uncertainty about how those communities were assembled and how they will change in the future. We conclude with observations about how models have helped to improve management decision-making.
How to interpret methylation sensitive amplified polymorphism (MSAP) profiles?
Fulneček, Jaroslav; Kovařík, Aleš
2014-01-06
DNA methylation plays a key role in development, contributes to genome stability, and may also respond to external factors supporting adaptation and evolution. To connect different types of stimuli with particular biological processes, it is important to identify genome regions with altered 5-methylcytosine distribution at a genome-wide scale. Many researchers are using the simple, reliable, and relatively inexpensive Methylation Sensitive Amplified Polymorphism (MSAP) method, which is particularly useful in studies of epigenetic variation. However, electrophoretic patterns produced by the method are rather difficult to interpret, particularly when the MspI and HpaII isoschizomers are used, because these enzymes are methylation-sensitive and any C within the CCGG recognition motif can be methylated in plant DNA. Here, we evaluate MSAP patterns with respect to current knowledge of the enzyme activities and the level and distribution of 5-methylcytosine in plant and vertebrate genomes. We discuss potential caveats related to complex MSAP patterns and provide clues regarding how to interpret them. We further show that addition of a combined HpaII + MspI digestion would assist in the interpretation of the most controversial MSAP pattern, represented by a signal in the HpaII but not in the MspI profile. We recommend a modification of the MSAP protocol that unambiguously distinguishes putative hemimethylated mCCGG sites from internal CmCGG sites. We believe that this perspective, together with the simple protocol improvement, will assist in correct MSAP data interpretation.
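The classic MSAP interpretation table can be encoded as a simple lookup. This is the textbook simplification whose caveats the paper discusses, not a replacement for the combined-digest protocol:

```python
# Simplified MSAP lookup sketch: band presence/absence in the HpaII and
# MspI profiles mapped to the conventional methylation interpretation.
# Real patterns are more ambiguous than this classic table suggests.
MSAP_TABLE = {
    (True,  True):  "unmethylated CCGG",
    (True,  False): "hemimethylated external cytosine (mCCGG)",
    (False, True):  "methylated internal cytosine (CmCGG)",
    (False, False): "full methylation or absence of the CCGG site",
}

def interpret(hpaii_band: bool, mspi_band: bool) -> str:
    """Return the conventional reading of an MSAP band pattern."""
    return MSAP_TABLE[(hpaii_band, mspi_band)]

print(interpret(False, True))  # → methylated internal cytosine (CmCGG)
```

The (True, False) row is exactly the "controversial" pattern the abstract mentions, which the recommended HpaII + MspI combined digest helps resolve.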
Weber, Alain; Braybrook, Siobhan; Huflejt, Michal; Mosca, Gabriella; Routier-Kierzkowska, Anne-Lise; Smith, Richard S
2015-06-01
Growth in plants results from the interaction between genetic and signalling networks and the mechanical properties of cells and tissues. There has been a recent resurgence in research directed at understanding the mechanical aspects of growth, and their feedback on genetic regulation. This has been driven in part by the development of new micro-indentation techniques to measure the mechanical properties of plant cells in vivo. However, the interpretation of indentation experiments remains a challenge, since the measured force results from a combination of turgor pressure, cell wall stiffness, and cell and indenter geometry. In order to interpret the measurements, an accurate mechanical model of the experiment is required. Here, we used a plant cell system with a simple geometry, Nicotiana tabacum Bright Yellow-2 (BY-2) cells, to examine the sensitivity of micro-indentation to a variety of mechanical and experimental parameters. Using a finite-element mechanical model, we found that, for indentations of a few microns on turgid cells, the measurements were mostly sensitive to turgor pressure and the radius of the cell, and not to the exact indenter shape or elastic properties of the cell wall. By complementing indentation experiments with osmotic experiments to measure the elastic strain in turgid cells, we could fit the model to both turgor pressure and cell wall elasticity. This allowed us to interpret apparent stiffness values in terms of meaningful physical parameters that are relevant for morphogenesis. © The Author 2015. Published by Oxford University Press on behalf of the Society for Experimental Biology.
Shimatani, Ichiro Ken; Yoda, Ken; Katsumata, Nobuhiro; Sato, Katsufumi
2012-01-01
To analyze an animal's movement trajectory, a basic model is required that satisfies the following conditions: the model must have an ecological basis and its parameters must have ecological interpretations; a broad range of movement patterns can be explained by the model; and the equations and probability distributions in the model should be mathematically tractable. Random walk models used in previous studies do not necessarily satisfy these requirements, partly because movement trajectories are often more oriented or tortuous than expected from the models. By improving the modeling of turning angles, this study aims to propose a basic movement model. On the basis of the recently developed circular auto-regressive model, we introduced a new movement model and extended its applicability to capture the asymmetric effects of external factors such as wind. The model was applied to GPS trajectories of a seabird (Calonectris leucomelas) to demonstrate its applicability to various movement patterns and to explain how the model parameters are ecologically interpreted under a general conceptual framework for movement ecology. Although it is based on a simple extension of a generalized linear model to circular variables, the proposed model enables us to evaluate the effects of external factors on movement separately from the animal's internal state. For example, maximum likelihood estimates and model selection suggested that in one homing flight section, the seabird intended to fly toward the island, but misjudged its navigation and was driven off-course by strong winds, while in the subsequent flight section, the seabird reset the focal direction, navigated the flight under strong wind conditions, and succeeded in approaching the island.
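A circular AR(1) turning-angle model of the general kind described can be simulated in a few lines. The persistence and noise values are illustrative, not the fitted seabird parameters:

```python
import numpy as np

# Sketch of a trajectory whose turning angle follows an AR(1) process:
# each step's heading change depends on the previous change plus noise,
# producing locally smooth but tortuous tracks.
rng = np.random.default_rng(4)
rho, sigma_turn, n = 0.6, 0.3, 500   # persistence, noise sd (rad), steps

theta, turn = 0.0, 0.0               # heading and turning angle (rad)
xs, ys = [0.0], [0.0]
for _ in range(n):
    turn = rho * turn + sigma_turn * rng.normal()       # AR(1) turning angle
    theta = (theta + turn + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    xs.append(xs[-1] + np.cos(theta))                   # unit-speed step
    ys.append(ys[-1] + np.sin(theta))

print(len(xs) == n + 1)  # → True
```

External covariates such as wind would enter as additional terms in the turning-angle update, which is how the paper separates external effects from the animal's internal state.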
1980-05-01
be used in any application in which its movement is likely to be ambiguously interpreted. (Example--the manipulation required is opposite to that...fixtures shall be sufficient to permit unambiguous labeling, indicator interpretation, and convenient bulb removal. 5.2.2.3.3 Coding - Simple...for low-resolution imagery interpretation applications (Imagery Interpretation Equipment, 1975, Ch. 4), line spacing need not be closer than needed to subtend 1 minute-of
NASA Technical Reports Server (NTRS)
Vangenderen, J. L. (Principal Investigator); Lock, B. F.
1976-01-01
The author has identified the following significant results. It was found that color composite transparencies and monocular magnification provided the best base for land use interpretation. New methods for determining optimum sample sizes and analyzing interpretation accuracy levels were developed. All stages of the methodology were assessed, in the operational sense, during the production of a 1:250,000 rural land use map of Murcia Province, Southeast Spain.
Shahaf, Goded; Pratt, Hillel
2013-01-01
In this work we demonstrate the principles of a systematic approach to modeling the neurophysiologic processes underlying a behavioral function. The modeling is based upon a flexible simulation tool, which enables parametric specification of the underlying neurophysiologic characteristics. While the impact of selecting specific parameters is of interest, in this work we focus on the insights which emerge from rather accepted assumptions regarding neuronal representation. We show that harnessing even such simple assumptions enables the derivation of significant insights regarding the nature of the neurophysiologic processes underlying behavior. We demonstrate our approach in some detail by modeling the behavioral go/no-go task. We further demonstrate the practical significance of this simplified modeling approach in interpreting experimental data: the manifestation of these processes in the EEG and ERP literature of normal and abnormal (ADHD) function, as well as a comprehensive analysis of relevant ERP data. In fact, we show that from the model-based spatiotemporal segregation of the processes it is possible to derive simple yet effective, theory-based EEG markers differentiating normal and ADHD subjects. We summarize by claiming that the neurophysiologic processes modeled for the go/no-go task are part of a limited set of neurophysiologic processes which underlie, in a variety of combinations, any behavioral function with a measurable operational definition. Such neurophysiologic processes could be sampled directly from EEG on the basis of model-based spatiotemporal segregation.
NASA Astrophysics Data System (ADS)
Marshall, J. A.; Elitzur, M.; Armus, L.; Diaz-Santos, T.; Charmandaris, V.
2018-05-01
We present models of deeply buried ultraluminous infrared galaxy (ULIRG) spectral energy distributions (SEDs) and use them to construct a three-dimensional diagram for diagnosing the nature of observed ULIRGs. Our goal is to construct a suite of SEDs for a very simple model ULIRG structure, and to explore how well this simple model can (by itself) explain the full range of observed ULIRG properties. We use our diagnostic to analyze archival Spitzer Space Telescope Infrared Spectrograph data of ULIRGs and find that: (1) in general, our model does provide a comprehensive explanation of the distribution of mid-IR ULIRG properties; (2) >75% (in some cases 100%) of the bolometric luminosities of the most deeply buried ULIRGs must be powered by a dust-enshrouded active galactic nucleus; (3) an unobscured “keyhole” view through ≲10% of the obscuring medium surrounding a deeply buried ULIRG is sufficient to make it appear nearly unobscured in the mid-IR; (4) the observed absence of deeply buried ULIRGs with large polycyclic aromatic hydrocarbon (PAH) equivalent widths is naturally explained by our models, showing that deep absorption features are “filled-in” by small quantities of foreground unobscured PAH emission (e.g., from the host galaxy disk) at the level of ∼1% the bolometric nuclear luminosity. The modeling and analysis we present will also serve as a powerful tool for interpreting the high angular resolution spectra of high-redshift sources to be obtained with the James Webb Space Telescope.
Moulin, Emmanuel; Grondel, Sébastien; Assaad, Jamal; Duquenne, Laurent
2008-12-01
The work described in this paper is intended to present a simple and efficient way of modeling a full Lamb wave emission and reception system. The emitter behavior and the Lamb wave generation are predicted using a two-dimensional (2D) hybrid finite element-normal mode expansion model. Then the receiver electrical response is obtained from a finite element computation with prescribed displacements. A numerical correction is applied to the 2D results in order to account for the in-plane radiation divergence caused by the finite length of the emitter. The advantage of this modular approach is that realistic configurations can be simulated without performing cumbersome modeling and time-consuming computations. It also provides insight into the physical interpretation of the results. A good agreement is obtained between predicted and measured signals. The range of application of the method is discussed.
Simulation of linear mechanical systems
NASA Technical Reports Server (NTRS)
Sirlin, S. W.
1993-01-01
A dynamics and controls analyst is typically presented with a structural dynamics model and must perform various input/output tests and design control laws. The required time/frequency simulations need to be done many times as models change and control designs evolve. This paper examines some simple ways that open and closed loop frequency and time domain simulations can be done using the special structure of the system equations usually available. Routines were developed to run under Pro-Matlab in a mixture of the Pro-Matlab interpreter and FORTRAN (using the .mex facility). These routines are often orders of magnitude faster than trying the typical 'brute force' approach of using built-in Pro-Matlab routines such as bode. This makes the analyst's job easier since not only does an individual run take less time, but much larger models can be attacked, often allowing the whole model reduction step to be eliminated.
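The "special structure" speed-up has a simple modern analogue: for a modally decomposed system, the frequency response is a sum of second-order modal terms evaluated per mode, instead of solving a full linear system at each frequency as a generic bode-style routine would. The two-mode numbers below are made up:

```python
import numpy as np

# Illustrative modal frequency-response evaluation for a structure with
# two hypothetical modes (frequencies, damping, and gains are made up).
omega_n = np.array([1.0, 5.0])   # modal natural frequencies (rad/s)
zeta = np.array([0.02, 0.05])    # modal damping ratios
b = np.array([1.0, 0.5])         # modal input gains
c = np.array([1.0, 0.3])         # modal output gains

def freq_response(w):
    # sum of second-order modal transfer functions c_i b_i / (s^2 + 2 z w_n s + w_n^2)
    s = 1j * w
    return np.sum(c * b / (s ** 2 + 2 * zeta * omega_n * s + omega_n ** 2))

# response near the first resonance dominates the off-resonance response
print(abs(freq_response(1.0)) > abs(freq_response(10.0)))  # → True
```

Because each frequency point costs only a vector operation per mode, sweeping thousands of frequencies over very large models stays cheap, which is the essence of the speed advantage the abstract reports.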
Saddles and dynamics in a solvable mean-field model
NASA Astrophysics Data System (ADS)
Angelani, L.; Ruocco, G.; Zamponi, F.
2003-05-01
We use the saddle-approach, recently introduced in the numerical investigation of simple model liquids, in the analysis of a mean-field solvable system. The investigated system is the k-trigonometric model, a k-body interaction mean field system, that generalizes the trigonometric model introduced by Madan and Keyes [J. Chem. Phys. 98, 3342 (1993)] and that has been recently introduced to investigate the relationship between thermodynamics and topology of the configuration space. We find a close relationship between the properties of saddles (stationary points of the potential energy surface) visited by the system and the dynamics. In particular the temperature dependence of saddle order follows that of the diffusivity, both having an Arrhenius behavior at low temperature and a similar shape in the whole temperature range. Our results confirm the general usefulness of the saddle-approach in the interpretation of dynamical processes taking place in interacting systems.
Full quantum mechanical analysis of atomic three-grating Mach–Zehnder interferometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanz, A.S., E-mail: asanz@iff.csic.es; Davidović, M.; Božić, M.
2015-02-15
Atomic three-grating Mach–Zehnder interferometry constitutes an important tool to probe fundamental aspects of quantum theory. There is, however, a remarkable gap in the literature between the oversimplified models and the robust numerical simulations used to describe the corresponding experiments. Consequently, the former usually lead to paradoxical scenarios, such as the wave–particle dual behavior of atoms, while the latter make data analysis in simple terms difficult. Here these issues are tackled by means of a simple grating working model consisting of evenly-spaced Gaussian slits. As is shown, this model suffices to explore and explain such experiments both analytically and numerically, giving a good account of the full atomic journey inside the interferometer and hence making the physics involved less mysterious. More specifically, it provides a clear and unambiguous picture of the wavefront splitting that takes place inside the interferometer, illustrating how the momentum along each emerging diffraction order is well defined even though the wave function itself still displays a rather complex shape. To this end, the local transverse momentum is also introduced in this context as a reliable analytical tool. The splitting, apart from being a key issue in understanding atomic Mach–Zehnder interferometry, also demonstrates at a fundamental level how wave and particle aspects are always present in the experiment, without incurring any contradiction or interpretive paradox. On the other hand, at a practical level, the generality and versatility of the model and methodology presented make them suitable for attacking analogous problems in a simple manner after convenient tuning.
Highlights: • A simple model is proposed to analyze experiments based on atomic Mach–Zehnder interferometry. • The model can be easily handled both analytically and computationally. • A theoretical analysis based on the combination of the position and momentum representations is considered. • Wave and particle aspects are shown to coexist within the same experiment, thus removing the old wave-corpuscle dichotomy. • A good agreement between numerical simulations and experimental data is found without appealing to best-fit procedures.
Social dilemmas among supergenes: intragenomic sexual conflict and a selfing solution in Oenothera
Brown, Sam P.; Levin, Donald A.
2012-01-01
Recombination is a powerful policing mechanism to control intragenomic cheats. The ‘parliament of the genes’ can often rapidly block driving genes from cheating during meiosis. But what if the genome parliament is reduced to only two members, or supergenes? Using a series of simple game-theoretic models inspired by the peculiar genetics of Oenothera sp., we illustrate that a two-supergene genome (α and β) can produce a number of surprising evolutionary dynamics, including increases in lineage longevity following a transition from sexuality (outcrossing) to asexuality (clonal self-fertilization). We end by interpreting the model in the broader context of the evolution of mutualism, which highlights that greater α-β cooperation in the self-fertilizing model can be viewed as an example of partner fidelity driving multi-lineage cooperation. PMID:22133211
Shape coexistence and the role of axial asymmetry in 72Ge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ayangeakaa, A. D.; Janssens, R. F.; Wu, C. Y.
2016-01-22
The quadrupole collectivity of low-lying states and the anomalous behavior of the 0⁺₂ and 2⁺₃ levels in 72Ge are investigated via projectile multi-step Coulomb excitation with GRETINA and CHICO-2. A total of forty-six E2 and M1 matrix elements connecting fourteen low-lying levels were determined using the least-squares search code GOSIA. Evidence for triaxiality and shape coexistence, based on the model-independent shape invariants deduced from the Kumar–Cline sum rule, is presented. Moreover, these are interpreted using a simple two-state mixing model as well as multi-state mixing calculations carried out within the framework of the triaxial rotor model. Our results represent a significant milestone toward the understanding of the unusual structure of this nucleus.
NASA Technical Reports Server (NTRS)
Ogallagher, J. J.
1973-01-01
A simple one-dimensional time-dependent diffusion-convection model for the modulation of cosmic rays is presented. This model predicts that the observed intensity at a given time is approximately equal to the intensity given by the time-independent diffusion-convection solution under the interplanetary conditions that existed a time τ in the past, U(t₀) = U_s(t₀ - τ), where τ is the average time spent by a particle inside the modulating cavity. Delay times in excess of several hundred days are possible with reasonable modulation parameters. Interpretation of phase lags observed during the 1969 to 1970 solar maximum in terms of this model suggests that the modulating region is probably not less than 10 a.u. and may be as much as 35 a.u. in extent.
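The model's central prediction, U(t₀) = U_s(t₀ - τ), is easy to illustrate numerically: the observed intensity today is simply the steady-state value a lag τ ago. A minimal sketch with an invented steady-state intensity curve, not actual cosmic-ray data:

```python
import numpy as np

# Hypothetical steady-state modulated intensity driven by solar activity:
# intensity falls as activity rises. The time-dependent model then says
# the observed intensity today equals the steady-state value a lag tau ago.
t = np.linspace(0.0, 2000.0, 2001)            # days (1-day spacing)
activity = np.sin(2 * np.pi * t / 4000.0)     # toy rising solar activity
U_steady = 1.0 - 0.3 * activity               # steady-state intensity U_s(t)

tau = 300.0   # average residence time in the modulating cavity (days)
U_observed = np.interp(t - tau, t, U_steady)  # U(t) = U_s(t - tau)
# e.g. the intensity observed on day 1000 equals U_s evaluated on day 700
```

With delay times of several hundred days, phase lags of this kind are exactly what the abstract invokes to bound the size of the modulating region.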
NASA Astrophysics Data System (ADS)
Morris, Kevin Peter
Accurate mapping of geological structures is important in numerous applications, ranging from mineral exploration through to hydrogeological modelling. Remotely sensed data can provide synoptic views of study areas, enabling mapping of geological units within the area. Structural information may be derived from such data using standard manual photo-geologic interpretation techniques, although these are often inaccurate and incomplete. The aim of this thesis is, therefore, to compile a suite of automated and interactive computer-based analysis routines designed to help the user map geological structure. These are examined and integrated in the context of an expert system. The data used in this study include Digital Elevation Model (DEM) and Airborne Thematic Mapper images, both with a spatial resolution of 5 m, for a 5 x 5 km area surrounding Llyn Cowlyd, Snowdonia, North Wales. The geology of this area comprises folded and faulted Ordovician sediments intruded throughout by dolerite sills, providing a stringent test for the automated and semi-automated procedures. The DEM is used to highlight geomorphological features which may represent surface expressions of the sub-surface geology. The DEM is created from digitized contours, for which kriging is found to provide the best interpolation routine, based on a number of quantitative measures. Lambertian shading and the creation of slope and change-of-slope datasets are shown to provide the most successful enhancement of DEMs, in terms of highlighting a range of key geomorphological features. The digital image data are used to identify rock outcrops as well as lithologically controlled features in the land cover. To this end, a series of standard spectral enhancements of the images is examined. In this respect, the least correlated 3-band composite and a principal component composite are shown to give the best visual discrimination of geological and vegetation cover types.
Automatic edge detection (followed by line thinning and extraction) and manual interpretation techniques are used to identify a set of 'geological primitives' (linear or arc features representing lithological boundaries) within these data. Inclusion of the DEM data provides the three-dimensional co-ordinates of these primitives, enabling a least-squares fit to be employed to calculate dip and strike values, based, initially, on the assumption of a simple, linearly dipping structural model. A very large number of scene 'primitives' is identified using these procedures, only some of which have geological significance. Knowledge-based rules are therefore used to identify those that are relevant. For example, rules are developed to identify lake edges, forest boundaries, forest tracks, rock-vegetation boundaries, and areas of geomorphological interest. Confidence in the geological significance of some of the geological primitives is increased where they are found independently in both the DEM and remotely sensed data. The dip and strike values derived in this way are compared to information taken from the published geological map for this area, as well as measurements taken in the field. Many results are shown to correspond closely to those taken from the map and in the field, with an error of < 1°. These data and rules are incorporated into an expert system which, initially, produces a simple model of the geological structure. The system also provides a graphical user interface for manual control and interpretation, where necessary. Although the system currently only supports a relatively simple structural model (linearly dipping with faulting), in the future it will be possible to extend the system to model more complex features, such as anticlines, synclines, thrusts, nappes, and igneous intrusions.
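The dip/strike computation described above can be sketched as a least-squares plane fit through the 3-D primitive coordinates. A minimal illustration under the thesis's linearly-dipping assumption; the synthetic points and axis conventions (x = east, y = north) are assumptions for the example:

```python
import math
import numpy as np

def dip_and_strike(xyz):
    """Least-squares plane fit z = a*x + b*y + c through 3-D boundary
    points, then dip and strike from the fitted gradient (a, b).
    Convention assumed: x = east, y = north, azimuths clockwise from north."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    dip = math.degrees(math.atan(math.hypot(a, b)))        # angle of steepest descent
    dip_azimuth = math.degrees(math.atan2(-a, -b)) % 360   # downslope direction
    strike = (dip_azimuth - 90.0) % 360                    # right-hand rule
    return dip, strike

# Synthetic bed dipping 30 degrees due east:
slope = math.tan(math.radians(30.0))
pts = np.array([[x, y, -slope * x] for x in range(5) for y in range(5)], float)
dip, strike = dip_and_strike(pts)   # expect dip near 30, strike near 0 (north)
```

Against clean synthetic data the fit is exact; the sub-1° errors reported in the thesis reflect real extraction noise rather than the fitting step itself.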
A simple conceptual model to interpret the 100 000 years dynamics of paleo-climate records
NASA Astrophysics Data System (ADS)
Quiroga Lombard, C. S.; Balenzuela, P.; Braun, H.; Chialvo, D. R.
2010-10-01
Spectral analyses performed on records of cosmogenic nuclides reveal a group of dominant spectral components during the Holocene period. Only a few of them are related to known solar cycles, i.e., the De Vries/Suess, Gleissberg and Hallstatt cycles. The origin of the others remains uncertain. On the other hand, time series of North Atlantic atmospheric/sea surface temperatures during the last ice age display the existence of repeated large-scale warming events, called Dansgaard-Oeschger (DO) events, spaced around multiples of 1470 years. The De Vries/Suess and Gleissberg cycles, with periods close to 1470/7 (~210) and 1470/17 (~86.5) years, have been proposed to explain these observations. In this work we find that a conceptual bistable model forced with the De Vries/Suess and Gleissberg cycles plus noise displays a group of dominant frequencies similar to those obtained in the Fourier spectra of paleo-climate records during the Holocene. Moreover, we show that by simply changing the noise amplitude in the model we obtain power spectra similar to those of the GISP2 δ18O record (Greenland Ice Sheet Project 2) during the last ice age. These results give a general dynamical framework which allows us to interpret the main characteristics of paleo-climate records from the last 100 000 years.
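A minimal sketch of such a conceptual bistable model; the specific double-well drift, forcing amplitudes, and noise level below are illustrative assumptions, not the authors' calibrated values:

```python
import numpy as np

# Overdamped particle in a double-well potential V(x) = x^4/4 - x^2/2,
# forced by the De Vries/Suess (~210 yr) and Gleissberg (~86.5 yr) cycles
# plus white noise (Euler-Maruyama integration).
rng = np.random.default_rng(0)
dt, n = 0.05, 200_000                 # years per step, number of steps
t = np.arange(n) * dt
forcing = (0.15 * np.sin(2 * np.pi * t / 210.0)
           + 0.15 * np.sin(2 * np.pi * t / 86.5))
sigma = 0.6                           # noise amplitude: the tuning knob
noise = sigma * np.sqrt(dt) * rng.standard_normal(n)

x = np.empty(n)
x[0] = -1.0                           # start in one well (e.g. "cold" state)
for i in range(n - 1):
    drift = x[i] - x[i]**3 + forcing[i]          # bistable drift term
    x[i + 1] = x[i] + drift * dt + noise[i]

# The power spectrum np.abs(np.fft.rfft(x - x.mean()))**2 reshapes as
# sigma is varied, mimicking the Holocene vs. glacial regimes.
```

The key qualitative behavior is noise-assisted hopping between the two wells, phase-locked to the weak periodic forcing; varying only sigma moves spectral weight among the forcing harmonics.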
Mid-infrared Integrated-light Photometry Of LMC Star Clusters
NASA Astrophysics Data System (ADS)
Pessev, Peter; Goudfrooij, P.; Puzia, T.; Chandar, R.
2008-03-01
Massive star clusters (Galactic globular clusters and populous clusters in the Magellanic Clouds) are the best available approximation of Simple Stellar Populations (SSPs). Since the stellar populations in these nearby objects are studied in detail, they provide fundamental age/metallicity templates for the interpretation of galaxy properties and for testing and calibrating SSP models. Magellanic Cloud clusters are particularly important since they populate a region of the age/metallicity parameter space that is not easily accessible in our Galaxy. We present the first mid-IR integrated-light measurements for six LMC clusters, based on our Spitzer IRAC imaging program. Since we are targeting a specific group of intermediate-age clusters, our imaging goes deeper than the SAGE-LMC survey data. We present a literature compilation of cluster properties along with a multi-wavelength integrated-light photometry database spanning from the optical (Johnson U band) to the mid-IR (IRAC Channel 4). These data provide an important empirical baseline for the interpretation of galaxy colors in the mid-IR (especially for high-z objects whose integrated light is dominated by TP-AGB star emission). They are also a valuable tool to check SSP model predictions in the intermediate-age regime and provide calibration data for the next generation of SSP models.
Indirect detection of neutrino portal dark matter
NASA Astrophysics Data System (ADS)
Batell, Brian; Han, Tao; Shams Es Haghi, Barmak
2018-05-01
We investigate the feasibility of the indirect detection of dark matter in a simple model using the neutrino portal. The model is very economical, with right-handed neutrinos generating neutrino masses through the type-I seesaw mechanism and simultaneously mediating interactions with dark matter. Given the small neutrino Yukawa couplings expected in a type-I seesaw, direct detection and accelerator probes of dark matter in this scenario are challenging. However, dark matter can efficiently annihilate to right-handed neutrinos, which then decay via active-sterile mixing through the weak interactions, leading to a variety of indirect astronomical signatures. We derive the existing constraints on this scenario from Planck cosmic microwave background measurements, Fermi dwarf spheroidal galaxy and Galactic center gamma-ray observations, and AMS-02 antiproton observations, and we also discuss the future prospects of Fermi and the Cherenkov Telescope Array. Thermal annihilation rates are already being probed for dark matter lighter than about 50 GeV, and this can be extended to dark matter masses of 100 GeV and beyond in the future. This scenario can also provide a dark matter interpretation of the Fermi Galactic center gamma-ray excess, and we confront this interpretation with other indirect constraints. Finally we discuss some of the exciting implications of extensions of the minimal model with large neutrino Yukawa couplings and Higgs portal couplings.
Approaches to the structural modelling of insect wings.
Wootton, R J; Herbert, R C; Young, P G; Evans, K E
2003-01-01
Insect wings lack internal muscles, and the orderly, necessary deformations which they undergo in flight and folding are in part remotely controlled, in part encoded in their structure. This factor is crucial in understanding their complex, extremely varied morphology. Models have proved particularly useful in clarifying the facilitation and control of wing deformation. Their development has followed a logical sequence from conceptual models through physical and simple analytical to numerical models. All have value provided their limitations are realized and constant comparisons made with the properties and mechanical behaviour of real wings. Numerical modelling by the finite element method is by far the most time-consuming approach, but has real potential in analysing the adaptive significance of structural details and interpreting evolutionary trends. Published examples are used to review the strengths and weaknesses of each category of model, and a summary is given of new work using finite element modelling to investigate the vibration properties and response to impact of hawkmoth wings. PMID:14561349
Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A
2012-03-15
To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
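The LASSO's ability to produce sparse, interpretable models can be sketched with a minimal numpy implementation: iterative soft-thresholding (ISTA) on a least-squares objective. This is an illustration of the selection mechanism only; an actual NTCP model would use a logistic likelihood and a cross-validated regularization strength, and the data below are synthetic:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=5000):
    """LASSO via iterative soft-thresholding: minimizes
    0.5*||X w - y||^2 + lam*||w||_1. The soft-threshold step sets
    weak coefficients exactly to zero, which is what makes the
    resulting model sparse and easy to interpret."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ w - y)              # gradient of the smooth part
        z = w - g / L                      # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return w

# Synthetic "dose factors": only features 0 and 2 truly predict the outcome
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.standard_normal(100)
w = lasso_ista(X, y, lam=5.0)
# The two predictive features keep large weights; the rest shrink to ~0
```

Stepwise selection reaches a similar sparse form but by greedy inclusion/exclusion, which is what makes its selections less stable under resampling, as the cross-validation comparison in the study indicates.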
Mathematical modelling of tissue formation in chondrocyte filter cultures.
Catt, C J; Schuurman, W; Sengers, B G; van Weeren, P R; Dhert, W J A; Please, C P; Malda, J
2011-12-17
In the field of cartilage tissue engineering, filter cultures are a frequently used three-dimensional differentiation model. However, understanding of the governing processes of in vitro growth and development of tissue in these models is limited. Therefore, this study aimed to further characterise these processes by means of an approach combining both experimental and applied mathematical methods. A mathematical model was constructed, consisting of partial differential equations predicting the distribution of cells and glycosaminoglycans (GAGs), as well as the overall thickness of the tissue. Experimental data was collected to allow comparison with the predictions of the simulation and refinement of the initial models. Healthy mature equine chondrocytes were expanded and subsequently seeded on collagen-coated filters and cultured for up to 7 weeks. Resulting samples were characterised biochemically, as well as histologically. The simulations showed a good representation of the experimentally obtained cell and matrix distribution within the cultures. The mathematical results indicate that the experimental GAG and cell distribution is critically dependent on the rate at which the cell differentiation process takes place, which has important implications for interpreting experimental results. This study demonstrates that large regions of the tissue are inactive in terms of proliferation and growth of the layer. In particular, this would imply that higher seeding densities will not significantly affect the growth rate. A simple mathematical model was developed to predict the observed experimental data and enable interpretation of the principal underlying mechanisms controlling growth-related changes in tissue composition.
NASA Astrophysics Data System (ADS)
Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi
2018-03-01
This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches ranging from a traditional analytical solution (the Theis approach) to more sophisticated numerical models with automatically calibrated input parameters are applied. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogeneous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns in all seven monitoring locations is instead achieved when medium- and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights into the representativeness of estimates of hydrogeological properties based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.
Souillard-Mandar, William; Davis, Randall; Rudin, Cynthia; Au, Rhoda; Libon, David J.; Swenson, Rodney; Price, Catherine C.; Lamar, Melissa; Penney, Dana L.
2015-01-01
The Clock Drawing Test – a simple pencil and paper test – has been used for more than 50 years as a screening tool to differentiate normal individuals from those with cognitive impairment, and has proven useful in helping to diagnose cognitive dysfunction associated with neurological disorders such as Alzheimer’s disease, Parkinson’s disease, and other dementias and conditions. We have been administering the test using a digitizing ballpoint pen that reports its position with considerable spatial and temporal precision, making available far more detailed data about the subject’s performance. Using pen stroke data from these drawings categorized by our software, we designed and computed a large collection of features, then explored the tradeoffs in performance and interpretability in classifiers built using a number of different subsets of these features and a variety of different machine learning techniques. We used traditional machine learning methods to build prediction models that achieve high accuracy. We operationalized widely used manual scoring systems so that we could use them as benchmarks for our models. We worked with clinicians to define guidelines for model interpretability, and constructed sparse linear models and rule lists designed to be as easy to use as scoring systems currently used by clinicians, but more accurate. While our models will require additional testing for validation, they offer the possibility of substantial improvement in detecting cognitive impairment earlier than currently possible, a development with considerable potential impact in practice. PMID:27057085
Souillard-Mandar, William; Davis, Randall; Rudin, Cynthia; Au, Rhoda; Libon, David J; Swenson, Rodney; Price, Catherine C; Lamar, Melissa; Penney, Dana L
2016-03-01
The Clock Drawing Test - a simple pencil and paper test - has been used for more than 50 years as a screening tool to differentiate normal individuals from those with cognitive impairment, and has proven useful in helping to diagnose cognitive dysfunction associated with neurological disorders such as Alzheimer's disease, Parkinson's disease, and other dementias and conditions. We have been administering the test using a digitizing ballpoint pen that reports its position with considerable spatial and temporal precision, making available far more detailed data about the subject's performance. Using pen stroke data from these drawings categorized by our software, we designed and computed a large collection of features, then explored the tradeoffs in performance and interpretability in classifiers built using a number of different subsets of these features and a variety of different machine learning techniques. We used traditional machine learning methods to build prediction models that achieve high accuracy. We operationalized widely used manual scoring systems so that we could use them as benchmarks for our models. We worked with clinicians to define guidelines for model interpretability, and constructed sparse linear models and rule lists designed to be as easy to use as scoring systems currently used by clinicians, but more accurate. While our models will require additional testing for validation, they offer the possibility of substantial improvement in detecting cognitive impairment earlier than currently possible, a development with considerable potential impact in practice.
Episodic strain accumulation in southern california.
Thatcher, W
1976-11-12
Reexamination of horizontal geodetic data in the region of recently discovered aseismic uplift has demonstrated that equally unusual horizontal crustal deformation accompanied the development of the uplift. During this time interval compressive strains were oriented roughly normal to the San Andreas fault, suggesting that the uplift produced little shear strain accumulation across this fault. On the other hand, the orientation of the anomalous shear straining is consistent with strain accumulation across north-dipping range-front thrusts like the San Fernando fault. Accordingly, the horizontal and vertical crustal deformation disclosed by geodetic observation is interpreted as a short epoch of rapid strain accumulation on these frontal faults. If this interpretation is correct, thrust-type earthquakes will eventually release the accumulated strains, but the geodetic data examined here cannot be used to estimate when these events might occur. However, observation of an unusual sequence of tilts prior to 1971 on a level line lying to the north of the magnitude 6.4 San Fernando earthquake offers some promise for precursor monitoring. The data are adequately explained by a simple model of up-dip aseismic slip propagation toward the 1971 epicentral region. These observations and the simple model that accounts for them suggest a conceptually straightforward monitoring scheme to search for similar uplift and tilt precursors within the uplifted region. Such premonitory effects could be detected by a combination of frequently repeated short (30 to 70 km in length) level line measurements, precise gravity traverses, and continuously recording gravimeters sited to the north of the active frontal thrust faults. Once identified, such precursors could be closely followed in space and time, and might then provide effective warnings of impending, potentially destructive earthquakes.
Reduced modeling of signal transduction – a modular approach
Koschorreck, Markus; Conzelmann, Holger; Ebert, Sybille; Ederer, Michael; Gilles, Ernst Dieter
2007-01-01
Background: Combinatorial complexity is a challenging problem in the detailed and mechanistic mathematical modeling of signal transduction. This subject has been discussed intensively and a lot of progress has been made within the last few years. A software tool (BioNetGen) was developed which allows an automatic rule-based set-up of mechanistic model equations. In many cases these models can be reduced by an exact domain-oriented lumping technique. However, the resulting models can still consist of a very large number of differential equations. Results: We introduce a new reduction technique which allows building modularized and highly reduced models. Compared to existing approaches, further reduction of signal transduction networks is possible. The method also provides a new modularization criterion, which allows the model to be dissected into smaller modules, called layers, that can be modeled independently. Hallmarks of the approach are conservation relations within each layer and connection of layers by signal flows instead of mass flows. The reduced model can be formulated directly, without previous generation of detailed model equations. It can be understood and interpreted intuitively, as model variables are macroscopic quantities that are converted by rates following simple kinetics. The proposed technique is applicable without using complex mathematical tools and even without detailed knowledge of the mathematical background. However, we provide a detailed mathematical analysis to show the performance and limitations of the method. For physiologically relevant parameter domains the transient as well as the stationary errors caused by the reduction are negligible. Conclusion: The new layer-based reduced modeling method allows building modularized and strongly reduced models of signal transduction networks. Reduced model equations can be directly formulated and are intuitively interpretable.
Additionally, the method provides very good approximations especially for macroscopic variables. It can be combined with existing reduction methods without any difficulties. PMID:17854494
BioModels Database: a repository of mathematical models of biological processes.
Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas
2013-01-01
BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed, quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to correspond to the original publication and manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are accepted in SBML and CellML formats, stored in SBML, and available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from selected elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models become available in BioModels Database in regular releases, about every 4 months.
Effects of salt-related mode conversions on subsalt prospecting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogilvie, J.S.; Purnell, G.W.
1996-03-01
Mode conversion of waves during seismic reflection surveys has generally been considered a minor phenomenon that can be neglected in data processing and interpretation. However, in subsalt prospecting, the contrast in material properties at the salt/sediment interface is often great enough that significant P-to-S and/or S-to-P conversion occurs. The resulting converted waves can be both a help and a hindrance for subsalt prospecting. A case history from the Mississippi Canyon area of the Gulf of Mexico demonstrates strong converted-wave reflections from the base of salt that complicate the evaluation of a subsalt prospect using 3-D seismic data. Before and after stack, the converted-wave reflections are evident in 2-D and 3-D surveys across the prospect. Ray-tracing synthetic common midpoint (CMP) gathers provides some useful insights about the occurrence of these waves, but elastic-wave-equation modeling is even more useful. While the latter is more time-consuming, even in 2-D, it also provides a more realistic simulated seismic survey across the prospect, which helps to reveal how some converted waves survive the processes of CMP stack and migration, and thereby present possible pitfalls to an unwary interpreter. The insights gained from the synthetic data suggest some simple techniques that can assist an interpreter in the 3-D interpretation of subsalt events.
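The kinematics behind those converted-wave events follow from Snell's law at the salt interface: conversion preserves the horizontal ray parameter, so the P and S branches leave the interface at different angles and arrive with different moveout. A minimal sketch with illustrative velocities, not values from the case history:

```python
import math

# Illustrative sediment/salt velocities (m/s); the large Vp contrast that
# makes salt a strong reflector also makes mode conversion efficient.
vp_sed, vp_salt, vs_salt = 2500.0, 4500.0, 2600.0

def refracted_angle(theta_in_deg, v_in, v_out):
    """Snell's law for any transmitted or converted branch:
    sin(theta_out)/v_out = sin(theta_in)/v_in (shared ray parameter)."""
    p = math.sin(math.radians(theta_in_deg)) / v_in   # ray parameter
    return math.degrees(math.asin(p * v_out))

theta_pp = refracted_angle(20.0, vp_sed, vp_salt)   # P transmitted as P
theta_ps = refracted_angle(20.0, vp_sed, vs_salt)   # P converted to S
# The converted S ray travels more steeply (smaller angle) than the
# transmitted P ray, so PS and PP base-of-salt events have different
# moveout and can stack in at misleading times.
```

This angle difference is exactly why a converted-wave reflection can survive CMP stacking tuned for P-wave moveout and masquerade as a deeper subsalt event.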
Uncovering Oscillations, Complexity, and Chaos in Chemical Kinetics Using Mathematica
NASA Astrophysics Data System (ADS)
Ferreira, M. M. C.; Ferreira, W. C., Jr.; Lino, A. C. S.; Porto, M. E. G.
1999-06-01
Unlike reactions with no peculiar temporal behavior, oscillatory reactions have concentrations that can rise and fall spontaneously in a cyclic or disorganized fashion. In this article, the software Mathematica is used for a theoretical study of the kinetic mechanisms of oscillating and chaotic reactions. A first simple example is introduced through a three-step reaction, called the Lotka model, which exhibits temporal behavior characterized by damped oscillations. The phase-plane method of dynamical systems theory is introduced for a geometric interpretation of the reaction kinetics without solving the differential rate equations. The equations are later solved numerically using the built-in routine NDSolve and the results are plotted. The next example, still with a very simple mechanism, is the Lotka-Volterra model reaction, which oscillates indefinitely. The kinetic process and rate equations are also represented by a three-step reaction mechanism. The most important difference between this and the former reaction is that the undamped oscillation has two autocatalytic steps instead of one. The periods of oscillation are obtained by using the discrete Fourier transform (DFT), a well-known tool in spectroscopy, although not so common in this context. In the last section, it is shown how a simple model of biochemical interactions can be useful for understanding the complex behavior of important biological systems. The model consists of two allosteric enzymes coupled in series and activated by their own products. This reaction scheme is important for explaining many metabolic mechanisms, such as glycolytic oscillations in muscle, yeast glycolysis, and the periodic synthesis of cyclic AMP. A few of many possible dynamic behaviors are exemplified through a prototype glycolytic enzymatic reaction proposed by Decroly and Goldbeter. By simply modifying the initial concentrations, limit cycles, chaos, and birhythmicity are computationally obtained and visualized.
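The workflow the article carries out in Mathematica (numerical integration followed by a DFT to extract the oscillation period) can be sketched equivalently in Python; the Lotka-Volterra rate constants and initial concentrations below are illustrative:

```python
import numpy as np

def lotka_volterra(state, a=1.0, b=1.0, c=1.0, d=1.0):
    """Rate equations of the Lotka-Volterra model reaction:
    dx/dt = a*x - b*x*y, dy/dt = c*x*y - d*y."""
    x, y = state
    return np.array([a * x - b * x * y, c * x * y - d * y])

def rk4(f, x0, dt, n):
    """Fixed-step 4th-order Runge-Kutta integration (stand-in for NDSolve)."""
    xs = np.empty((n, len(x0)))
    xs[0] = x0
    for i in range(n - 1):
        k1 = f(xs[i])
        k2 = f(xs[i] + 0.5 * dt * k1)
        k3 = f(xs[i] + 0.5 * dt * k2)
        k4 = f(xs[i] + dt * k3)
        xs[i + 1] = xs[i] + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return xs

dt, n = 0.01, 8192
traj = rk4(lotka_volterra, np.array([1.2, 0.8]), dt, n)

# Period from the DFT: locate the dominant frequency of the x(t) signal
spec = np.abs(np.fft.rfft(traj[:, 0] - traj[:, 0].mean()))
freqs = np.fft.rfftfreq(n, dt)
period = 1.0 / freqs[spec.argmax()]
# For small oscillations about the fixed point the period is close to 2*pi
```

The same integrate-then-transform pattern extends directly to the Decroly-Goldbeter enzyme system, where the DFT distinguishes simple limit cycles from birhythmic and chaotic regimes.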
Maclaren, Oliver J.; Sneyd, James; Crampin, Edmund J.
2012-01-01
Secretion from the salivary glands is driven by osmosis following the establishment of osmotic gradients between the lumen, the cell and the interstitium by active ion transport. We consider a dynamic model of osmotically-driven primary saliva secretion, and use singular perturbation approaches and scaling assumptions to reduce the model. Our analysis shows that isosmotic secretion is the most efficient secretion regime, and that this holds for single isolated cells and for multiple cells assembled into an acinus. For typical parameter variations, we rule out any significant synergistic effect on total water secretion of an acinar arrangement of cells about a single shared lumen. Conditions for the attainment of isosmotic secretion are considered, and we derive an expression for how the concentration gradient between the interstitium and the lumen scales with water and chloride transport parameters. Aquaporin knockout studies are interpreted in the context of our analysis and further investigated using simulations of transport efficiency with different membrane water permeabilities. We conclude that recent claims that aquaporin knockout studies can be interpreted as evidence against a simple osmotic mechanism are not supported by our work. Many of the results that we obtain are independent of specific transporter details, and our analysis can be easily extended to apply to models that use other proposed ionic mechanisms of saliva secretion. PMID:22258315
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berryman, James G.; Grechka, Vladimir
2006-07-08
A model study on fractured systems was performed using a concept that treats isotropic cracked systems as ensembles of cracked grains, by analogy to isotropic polycrystalline elastic media. The approach has two advantages: (a) the averaging performed is ensemble averaging, thus avoiding the criticism legitimately leveled at most effective medium theories of quasistatic elastic behavior for cracked media based on volume concentrations of inclusions. Since crack effects are largely independent of the volume they occupy in the composite, such a non-volume-based method offers an appealingly simple modeling alternative. (b) The second advantage is that both polycrystals and fractured media are stiffer than might otherwise be expected, due to natural bridging effects of the strong components. These same effects have also often been interpreted as crack-crack screening in high-crack-density fractured media, but there is no inherent conflict between these two interpretations of this phenomenon. Results of the study are somewhat mixed. The spread in elastic constants observed in a set of numerical experiments is found to be very comparable to the spread in values contained between the Reuss and Voigt bounds for the polycrystal model. However, computed Hashin-Shtrikman bounds are much too tight to be in agreement with the numerical data, showing that polycrystals of cracked grains tend to violate some implicit assumptions of the Hashin-Shtrikman bounding approach. Nevertheless, the self-consistent estimates obtained for the random polycrystal model are very good estimators of the observed average behavior.
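The Reuss and Voigt bounds mentioned above are simply the harmonic and arithmetic averages of the grain stiffnesses, so the spread between them is easy to illustrate. A minimal sketch (the grain moduli below are made-up values for illustration, not results from the study):

```python
# Voigt (uniform-strain) and Reuss (uniform-stress) bounds for the
# effective modulus of an aggregate of grains. Illustrative moduli in GPa.
moduli = [30.0, 18.0, 44.0, 25.0]

voigt = sum(moduli) / len(moduli)                   # arithmetic mean (upper bound)
reuss = len(moduli) / sum(1.0 / m for m in moduli)  # harmonic mean (lower bound)
spread = voigt - reuss                              # the Reuss-Voigt spread
```

Any physically admissible effective modulus for the aggregate, including the self-consistent estimate, must fall inside this interval; the harmonic mean never exceeds the arithmetic mean.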
Strong neutron-γ competition above the neutron threshold in the decay of 70Co
Spyrou, A.; Liddick, S. N.; Naqvi, F.; ...
2016-09-29
The β-decay intensity of 70Co was measured for the first time using the technique of total absorption spectroscopy. The large β-decay Q value [12.3(3) MeV] offers a rare opportunity to study β-decay properties over a broad energy range. Two surprising features were observed in the experimental results, namely, the large fragmentation of the β intensity at high energies, as well as the strong competition between γ rays and neutrons, up to more than 2 MeV above the neutron-separation energy. The data are compared to two theoretical calculations: the shell model and the quasiparticle random phase approximation (QRPA). Both models seem to be missing significant strength at high excitation energies. Possible interpretations of this discrepancy are discussed. The shell model is used for a detailed nuclear structure interpretation and helps to explain the observed γ-neutron competition. The comparison to the QRPA calculations is done as a means to test a model that provides global β-decay properties for astrophysical calculations. Our work demonstrates the importance of performing detailed comparisons to experimental results, beyond simple half-life comparisons. Finally, a realistic and robust description of the β-decay intensity is crucial for our understanding of nuclear structure as well as of r-process nucleosynthesis.
Onyśk, Agnieszka; Boczkowska, Maja
2017-01-01
Simple Sequence Repeat (SSR) markers are among the most frequently used molecular markers in studies of crop diversity and population structure, owing to their uniform distribution in the genome, high polymorphism, reproducibility, and codominant character. Additional advantages are the possibility of automated analysis and simple interpretation of the results. The M13-tagged PCR reaction significantly reduces the cost of analysis on automatic genetic analyzers. Here, we also provide a short protocol for SSR data analysis.
Kar, Supratik; Gajewicz, Agnieszka; Puzyn, Tomasz; Roy, Kunal; Leszczynski, Jerzy
2014-09-01
Nanotechnology has evolved as a frontrunner in the development of modern science. Current studies have established the toxicity of some nanoparticles to humans and the environment. Lack of sufficient data and the low adequacy of experimental protocols hinder comprehensive risk assessment of nanoparticles (NPs). In the present work, metal electronegativity (χ), the charge of the metal cation corresponding to a given oxide (χox), and the atomic number and valence electron number of the metal have been used as simple molecular descriptors to build quantitative structure-toxicity relationship (QSTR) models for predicting the cytotoxicity of metal oxide NPs to the bacterium Escherichia coli. These descriptors can be obtained readily from the molecular formula and the periodic table. It has been shown that the simple molecular descriptor χox can efficiently encode the cytotoxicity of metal oxides, leading to models with high statistical quality as well as interpretability. Based on this model and previously published experimental results, we have hypothesized the most probable mechanism of the cytotoxicity of metal oxide nanoparticles to E. coli. Moreover, the information required for descriptor calculation is independent of the size range of the NPs, nullifying a significant problem, namely that various physical properties of NPs change over different size ranges. Copyright © 2014 Elsevier Inc. All rights reserved.
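A one-descriptor QSTR of the kind described reduces to a simple linear regression of toxicity on the descriptor, with R² reporting the statistical quality. A minimal sketch; the descriptor and toxicity values below are illustrative placeholders, not the measured E. coli data:

```python
import numpy as np

# One-descriptor QSTR sketch: cytotoxicity regressed on a single
# periodic-table descriptor (here called chi_ox). Placeholder numbers.
chi_ox = np.array([1.2, 1.8, 2.1, 2.9, 3.4, 4.0])
log_toxicity = np.array([1.1, 1.9, 2.0, 3.1, 3.3, 4.2])

slope, intercept = np.polyfit(chi_ox, log_toxicity, 1)   # least-squares line
pred = slope * chi_ox + intercept

# coefficient of determination: how much variance the descriptor explains
ss_res = ((log_toxicity - pred) ** 2).sum()
ss_tot = ((log_toxicity - log_toxicity.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot
```

A high R² with a single descriptor is what makes such models both statistically sound and directly interpretable.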
Modelling of ‘sub-atomic’ contrast resulting from back-bonding on Si(111)-7×7
Jarvis, Samuel P; Rashid, Mohammad A
2016-01-01
It has recently been shown that ‘sub-atomic’ contrast can be observed during NC-AFM imaging of the Si(111)-7×7 substrate with a passivated tip, resulting in triangular shaped atoms [Sweetman et al. Nano Lett. 2014, 14, 2265]. The symmetry of the features, and the well-established nature of the dangling bond structure of the silicon adatom, means that in this instance the contrast cannot arise from the orbital structure of the atoms, and it was suggested by simple symmetry arguments that the contrast could only arise from the backbonding symmetry of the surface adatoms. However, no modelling of the system has been performed in order to understand the precise origin of the contrast. In this paper we provide a detailed explanation for the ‘sub-atomic’ contrast observed on Si(111)-7×7 using a simple model based on Lennard-Jones potentials, coupled with a flexible tip, as proposed by Hapala et al. [Phys. Rev. B 2014, 90, 085421] in the context of interpreting sub-molecular contrast. Our results show a striking similarity to experimental results, and demonstrate how ‘sub-atomic’ contrast can arise from a flexible tip exploring an asymmetric potential created by the positioning of the surrounding surface atoms. PMID:27547610
Roche, Daniel B; Buenavista, Maria T; Tetchner, Stuart J; McGuffin, Liam J
2011-07-01
The IntFOLD server is a novel independent server that integrates several cutting edge methods for the prediction of structure and function from sequence. Our guiding principles behind the server development were as follows: (i) to provide a simple unified resource that makes our prediction software accessible to all and (ii) to produce integrated output for predictions that can be easily interpreted. The output for predictions is presented as a simple table that summarizes all results graphically via plots and annotated 3D models. The raw machine readable data files for each set of predictions are also provided for developers, which comply with the Critical Assessment of Methods for Protein Structure Prediction (CASP) data standards. The server comprises an integrated suite of five novel methods: nFOLD4, for tertiary structure prediction; ModFOLD 3.0, for model quality assessment; DISOclust 2.0, for disorder prediction; DomFOLD 2.0 for domain prediction; and FunFOLD 1.0, for ligand binding site prediction. Predictions from the IntFOLD server were found to be competitive in several categories in the recent CASP9 experiment. The IntFOLD server is available at the following web site: http://www.reading.ac.uk/bioinf/IntFOLD/.
A Proposed Benchmark Problem for Scatter Calculations in Radiographic Modelling
NASA Astrophysics Data System (ADS)
Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.
2009-03-01
Code validation is a permanent concern in computer modelling, and has been addressed repeatedly in eddy current and ultrasonic modelling. A good benchmark problem is sufficiently simple to be taken into account by various codes without strong requirements on geometry representation capabilities; focuses on a few or even a single aspect of the problem at hand, to facilitate interpretation and to avoid compound errors compensating one another; yields a quantitative result; and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radiographic modelling, the prediction of scattered radiation. Many NDT applications cannot neglect scattered radiation, and the scatter calculation is thus important for faithfully simulating the inspection situation. Our benchmark problem covers the wall thickness range of 10 to 50 mm for single wall inspections, with energies ranging from 100 to 500 keV in the first stage, and up to 1 MeV with wall thicknesses up to 70 mm in the extended stage. A simple plate geometry is sufficient for this purpose, and the scatter data are compared at the photon level, without a film model, which allows for comparisons with reference codes like MCNP. We compare results of three Monte Carlo codes (McRay, Sindbad and Moderato) as well as an analytical first-order scattering code (VXI), and confront them with results obtained with MCNP. The comparison with an analytical scatter model provides insights into the application domain where this kind of approach can successfully replace Monte Carlo calculations.
Thermal regime of permafrost at Prudhoe Bay, Alaska
Lachenbruch, A.H.; Sass, J.H.; Marshall, B.V.; Moses, T.H.
1982-01-01
Temperature measurements through permafrost in the oil field at Prudhoe Bay, Alaska, combined with laboratory measurements of the thermal conductivity of drill cuttings, permit an evaluation of in situ thermal properties and an understanding of the general factors that control the geothermal regime. A sharp contrast in temperature gradient at ~600 m represents a contrast in thermal conductivity caused by the downward change from interstitial ice to interstitial water at the base of permafrost under near steady-state conditions. Interpretation of the gradient contrast in terms of a simple model for the conductivity of an aggregate yields the mean ice content and thermal conductivities for the frozen and thawed sections (8.1 and 4.7 mcal/cm·s·°C, respectively). These results yield a heat flow of ~1.3 HFU, which is similar to other values on the Alaskan Arctic Coast; the anomalously deep permafrost is a result of the anomalously high conductivity of the siliceous ice-rich sediments. Curvature in the upper 160 m of the temperature profiles represents a warming of ~1.8 °C in the mean surface temperature, and a net accumulation of 5-6 kcal/cm² by the solid earth surface during the last 100 years or so. Rising sea level and thawing sea cliffs probably caused the shoreline to advance tens of kilometers in the last 20,000 years, inundating a portion of the continental shelf that is presently the target of intensive oil exploration. A simple conduction model suggests that this recently inundated region is underlain by near-melting ice-rich permafrost to depths of 300-500 m; its presence is important to seismic interpretations in oil exploration and to engineering considerations in oil production. With confirmation of the permafrost configuration by offshore drilling, heat-conduction models can yield reliable new information on the chronology of arctic shorelines.
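The link between the gradient contrast and the conductivity contrast follows directly from steady-state conduction: the same heat flow q crosses both sections, so q = k·(dT/dz) in each, and the gradients scale inversely with the conductivities. A short check using the values quoted above:

```python
# Steady-state conduction through frozen and thawed permafrost sections:
# the same heat flow q crosses both, so the gradient contrast at ~600 m
# mirrors the conductivity contrast, grad_thawed/grad_frozen = k_f/k_t.
q = 1.3e-6            # heat flow, cal/(cm^2 s)  (~1.3 HFU, from the text)
k_frozen = 8.1e-3     # cal/(cm s degC), ice-bonded section
k_thawed = 4.7e-3     # cal/(cm s degC), below the base of permafrost

grad_frozen = q / k_frozen * 1e5    # degC per km (1 km = 1e5 cm)
grad_thawed = q / k_thawed * 1e5    # degC per km
contrast = grad_thawed / grad_frozen
```

With these numbers the gradient roughly doubles (from about 16 to about 28 °C/km) across the base of permafrost, which is the sharp kink seen in the temperature profiles.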
NASA Technical Reports Server (NTRS)
Marchese, Anthony J.; Dryer, Frederick L.
1997-01-01
This program supports the engineering design, data analysis, and data interpretation requirements for the study of initially single component, spherically symmetric, isolated droplet combustion studies. Experimental emphasis is on the study of simple alcohols (methanol, ethanol) and alkanes (n-heptane, n-decane) as fuels with time dependent measurements of drop size, flame-stand-off, liquid-phase composition, and finally, extinction. Experiments have included bench-scale studies at Princeton, studies in the 2.2 and 5.18 drop towers at NASA-LeRC, and both the Fiber Supported Droplet Combustion (FSDC-1, FSDC-2) and the free Droplet Combustion Experiment (DCE) studies aboard the shuttle. Test matrix and data interpretation are performed through spherically-symmetric, time-dependent numerical computations which embody detailed sub-models for physical and chemical processes. The computed burning rate, flame stand-off, and extinction diameter are compared with the respective measurements for each individual experiment. In particular, the data from FSDC-1 and subsequent space-based experiments provide the opportunity to compare all three types of data simultaneously with the computed parameters. Recent numerical efforts are extending the computational tools to consider time dependent, axisymmetric 2-dimensional reactive flow situations.
NASA Astrophysics Data System (ADS)
Essa, Khalid S.; Elhussein, Mahmoud
2018-04-01
A new, efficient approach is presented for estimating the parameters that control source dimensions from magnetic anomaly profile data using the particle swarm optimization (PSO) algorithm. The PSO algorithm is applied to interpret magnetic anomaly profiles through a formula for isolated sources embedded in the subsurface. The model parameters estimated here are the depth of the body, the amplitude coefficient, the angle of effective magnetization, the shape factor, and the horizontal coordinates of the source. The parameters recovered by the present technique, particularly the depths of the buried structures, were found to be in excellent agreement with the true parameters. The root mean square (RMS) error is used as the criterion for the misfit between observed and computed anomalies. Inversion of noise-free synthetic data, of noisy synthetic data containing different levels of random noise (5, 10, 15 and 20%), of multiple structures, and of two real field datasets from the USA and Egypt demonstrates the viability of the approach. The final parameter estimates agree with those given in the published literature and with geologic results.
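The inversion loop is conceptually simple: a swarm of candidate parameter vectors evolves toward the set that minimizes the RMS misfit. The sketch below is a generic PSO fitting a hypothetical two-parameter anomaly (amplitude and depth); the forward model is a stand-in, not the paper's formula, and the swarm constants are conventional textbook values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward model for an isolated buried source: a simple
# amplitude/depth anomaly shape, NOT the specific formula of the paper.
def forward(x, K, z):
    return K * z / (x**2 + z**2)

x = np.linspace(-10.0, 10.0, 41)
observed = forward(x, 100.0, 3.0)          # noise-free synthetic "data"

def rms(p):
    K, z = p
    return np.sqrt(np.mean((observed - forward(x, K, z)) ** 2))

# Minimal particle swarm: positions, velocities, personal and global bests.
n, iters = 30, 200
pos = rng.uniform([0.0, 0.5], [200.0, 10.0], size=(n, 2))
vel = np.zeros((n, 2))
pbest = pos.copy()
pbest_f = np.array([rms(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    # inertia 0.7, cognitive and social weights 1.5 (standard choices)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([rms(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

K_est, z_est = gbest   # recovered amplitude coefficient and depth
```

On a smooth two-parameter misfit surface like this one the swarm typically collapses onto the true (K, z) within a few hundred iterations, which is the behaviour the abstract reports for the full multi-parameter problem.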
Adaptation, saturation, and physiological masking in single auditory-nerve fibers.
Smith, R L
1979-01-01
Results are reviewed concerning some effects, at a unit's characteristic frequency, of a short-term conditioning stimulus on the responses to perstimulatory and poststimulatory test tones. A phenomenological equation is developed from the poststimulatory results and shown to be consistent with the perstimulatory results. According to the results and equation, the response to a test tone equals the unconditioned or unadapted response minus the decrement produced by adaptation to the conditioning tone. Furthermore, the decrement is proportional to the driven response to the conditioning tone and does not depend on sound intensity per se. The equation has a simple interpretation in terms of two processes in cascade--a static saturating nonlinearity followed by additive adaptation. Results are presented to show that this functional model is sufficient to account for the "physiological masking" produced by wide-band backgrounds. According to this interpretation, a sufficiently intense background produces saturation. Consequently, a superimposed test tone causes no change in response. In addition, when the onset of the background precedes the onset of the test tone, the total firing rate is reduced by adaptation. Evidence is reviewed concerning the possible correspondence between the variables in the model and intracellular events in the auditory periphery.
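The cascade just described, a static saturating nonlinearity followed by additive adaptation, can be written in a few lines. The functional forms and constants below are illustrative stand-ins (the abstract does not give the fitted equation), but they reproduce the qualitative behaviour: the decrement tracks the driven response to the conditioner, and an intense background saturates the nonlinearity so a superimposed test tone adds almost nothing:

```python
# Cascade model sketch: static saturating nonlinearity, then additive
# adaptation. R_max, theta and alpha are illustrative, not fitted values.
R_max, theta = 200.0, 1.0   # saturation rate (spikes/s), half-max intensity
alpha = 0.4                 # fraction of the conditioner's driven response
                            # subtracted by adaptation

def unadapted(I):
    # saturating nonlinearity: ~linear at low I, saturates at R_max
    return R_max * I / (I + theta)

def adapted(I_test, I_cond):
    # decrement is proportional to the DRIVEN response to the conditioner,
    # not to conditioner intensity per se
    return unadapted(I_test) - alpha * unadapted(I_cond)

# "physiological masking": on an intense background the nonlinearity is
# saturated, so adding a test tone barely changes the response
loud = 100.0
masked_increment = unadapted(loud + 1.0) - unadapted(loud)
```

The same unit intensity that drives ~100 spikes/s in quiet produces a response increment of well under one spike/s on the saturating background, which is the saturation account of masking given in the abstract.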
Colbourn, E A; Roskilly, S J; Rowe, R C; York, P
2011-10-09
This study investigated the utility and potential advantages of gene expression programming (GEP)--a new development in evolutionary computing for modelling data and automatically generating equations that describe the cause-and-effect relationships in a system--as applied to four types of pharmaceutical formulation, and compared the models with those generated by neural networks, a technique now widely used in formulation development. Both methods were capable of discovering subtle and non-linear relationships within the data, with no requirement for the user to specify the functional forms that should be used. Although the neural networks rapidly developed models with higher values of the ANOVA R², these were black-box models that provided little insight into the key relationships. However, GEP, although significantly slower at developing models, generated relatively simple equations describing the relationships that could be interpreted directly. The results indicate that GEP can be considered an effective and efficient modelling technique for formulation data. Copyright © 2011 Elsevier B.V. All rights reserved.
Pragmatic model of patient satisfaction in general practice: progress towards a theory.
Baker, R
1997-01-01
A major problem in the measurement of patient satisfaction is the lack of an adequate theory to explain the meaning of satisfaction, and hence how it should be measured and how the findings are interpreted. Because of the lack of a fully developed theory, when developing patient satisfaction questionnaires for use in general practice, a simple model was used. This model was pragmatic in that it linked together empirical evidence about patient satisfaction without recourse to more general social or psychological theory of behaviour, other than to define satisfaction as an attitude. Several studies with the questionnaires confirm in general the components of the model. However, the importance of personal care had not been sufficiently emphasised, and therefore the model has been revised. It can now serve as a basis for future research into patient satisfaction, in particular as a stimulus for investigating the links between components of the model and underlying psychological or other behavioural theories. PMID:10177036
Stochastic Spectral Descent for Discrete Graphical Models
Carlson, David; Hsieh, Ya-Ping; Collins, Edo; ...
2015-12-14
Interest in deep probabilistic graphical models has increased in recent years, due to their state-of-the-art performance on many machine learning applications. Such models are typically trained with the stochastic gradient method, which can take a significant number of iterations to converge. Since the computational cost of gradient estimation is prohibitive even for modestly sized models, training becomes slow and practically usable models are kept small. In this paper we propose a new, largely tuning-free algorithm to address this problem. Our approach derives novel majorization bounds based on the Schatten norm. Intriguingly, the minimizers of these bounds can be interpreted as gradient methods in a non-Euclidean space. We thus propose using a stochastic gradient method in non-Euclidean space. We provide simple conditions under which our algorithm is guaranteed to converge, and demonstrate empirically that our algorithm leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.
Three-dimensional spatial grouping affects estimates of the illuminant
NASA Astrophysics Data System (ADS)
Perkins, Kenneth R.; Schirillo, James A.
2003-12-01
The brightnesses (i.e., perceived luminances) of surfaces within a three-dimensional scene are contingent on both the luminances and the spatial arrangement of the surfaces. Observers viewed a CRT through a haploscope that presented simulated achromatic surfaces in three dimensions. They set a test patch to be ~33% more intense than a comparison patch to match the comparison patch in brightness, which is consistent with viewing a real scene with a simple lighting interpretation from which to estimate a different level of illumination in each depth plane. Randomly positioning each surface in either depth plane minimized any simple lighting interpretation, concomitantly reducing brightness differences to ~8.5%, although the immediate surrounds of the test and comparison patches continued to differ by a 5:1 luminance ratio.
Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet
2010-10-24
Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights on why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and the forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models.
Knowledge of parameter sensitivities can provide the necessary context to determine how modeling results should be interpreted in biological systems.
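Local sensitivity of the kind discussed above is commonly computed as the normalized derivative of the model output with respect to each parameter, estimated by central finite differences. The sketch below uses a toy occupancy-style expression function as a stand-in for a thermodynamic transcription model (the function and parameter values are illustrative, not the authors' model):

```python
import numpy as np

# Toy stand-in for a thermodynamic transcription model: expression as a
# fractional-occupancy ratio in activator weight q_act, repressor weight
# q_rep, and an activator cooperativity factor coop.
def expression(params):
    q_act, q_rep, coop = params
    return (q_act * coop) / (1.0 + q_act * coop + q_rep)

def sensitivities(f, params, h=1e-4):
    # normalized local sensitivities d ln f / d ln p_i, by central
    # finite differences with a relative perturbation h
    params = np.asarray(params, dtype=float)
    out = []
    for i in range(len(params)):
        up, dn = params.copy(), params.copy()
        up[i] *= (1.0 + h)
        dn[i] *= (1.0 - h)
        out.append((f(up) - f(dn)) / (2.0 * h * f(params)))
    return np.array(out)

s = sensitivities(expression, [2.0, 5.0, 1.5])   # [q_act, q_rep, coop]
```

Because q_act and coop enter only as a product here, their sensitivities are identical, a small illustration of how model structure alone (not biology) can make a cooperativity parameter no more identifiable than the activator weight it multiplies.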
Collective decision making in cohesive flocks
NASA Astrophysics Data System (ADS)
Bhattacharya, K.; Vicsek, Tamás
2010-09-01
Most of us have been fascinated by the eye-catching displays of collectively moving animals. Schools of fish can move in a rather orderly fashion and then change direction amazingly abruptly. There are many further examples, from both the living and the non-living world, of phenomena in which many interacting, permanently moving units seem to arrive at a common behavioural pattern within a short time. As a paradigm of this type of phenomenon we consider the problem of how birds arrive at a decision resulting in their synchronized landing. We introduce a simple model to interpret this process. Collective motion prior to landing is modelled using a simple self-propelled particle (SPP) system with a new kind of boundary condition, while the tendency and the sudden propagation of the intention of landing are introduced through rules analogous to the random field Ising model in an external field. We show that our approach is capable of capturing the most relevant features of collective decision making in a system of units with a variance of individual intentions under an increasing level of pressure to switch states. We find that, as a function of the few parameters of our model, the collective switching from the flying to the landing state is indeed much sharper than the distribution of individual landing intentions. The transition is accompanied by a number of interesting features discussed in this paper.
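The SPP core of such models is very compact: each particle repeatedly adopts the average heading of its neighbours, perturbed by noise. The sketch below implements only this generic Vicsek-style alignment rule (the paper's landing-specific boundary condition and random-field Ising term are not included), with conventional parameter choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Generic Vicsek-style SPP update: each particle adopts the circular mean
# heading of neighbours within radius r, plus uniform angular noise.
N, L, r, v, eta, steps = 100, 10.0, 1.0, 0.1, 0.1, 300
pos = rng.uniform(0.0, L, (N, 2))
theta = rng.uniform(-np.pi, np.pi, N)

def order_parameter(theta):
    # magnitude of the mean heading vector: 0 = disordered, 1 = aligned
    return float(np.abs(np.exp(1j * theta).mean()))

for _ in range(steps):
    # pairwise displacements with periodic (toroidal) boundaries
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neigh = (d ** 2).sum(-1) < r ** 2           # neighbour mask, self included
    # circular mean of neighbour headings, via complex phases
    mean_dir = np.angle((np.exp(1j * theta)[None, :] * neigh).sum(axis=1))
    theta = mean_dir + eta * rng.uniform(-np.pi, np.pi, N)
    pos = (pos + v * np.stack([np.cos(theta), np.sin(theta)], axis=1)) % L

phi = order_parameter(theta)   # high phi: the flock has reached consensus
```

At this low noise level the order parameter climbs from near zero (random headings) to close to one, i.e. the units converge on a common direction of motion, which is the ordered flying state on which the landing decision is then superimposed.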
Hydrograph matching method for measuring model performance
NASA Astrophysics Data System (ADS)
Ewen, John
2011-09-01
Despite all the progress made over the years on developing automatic methods for analysing hydrographs and measuring the performance of rainfall-runoff models, automatic methods cannot yet match the power and flexibility of the human eye and brain. Very simple approaches are therefore being developed that mimic the way hydrologists inspect and interpret hydrographs, including the way that patterns are recognised, links are made by eye, and hydrological responses and errors are studied and remembered. In this paper, a dynamic programming algorithm originally designed for use in data mining is customised for use with hydrographs. It generates sets of "rays" that are analogous to the visual links made by the hydrologist's eye when linking features or times in one hydrograph to the corresponding features or times in another hydrograph. One outcome from this work is a new family of performance measures called "visual" performance measures. These can measure differences in amplitude and timing, including the timing errors between simulated and observed hydrographs in model calibration. To demonstrate this, two visual performance measures, one based on the Nash-Sutcliffe Efficiency and the other on the mean absolute error, are used in a total of 34 split-sample calibration-validation tests for two rainfall-runoff models applied to the Hodder catchment, northwest England. The customised algorithm, called the Hydrograph Matching Algorithm, is very simple to apply; it is given in a few lines of pseudocode.
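The alignment idea is easy to prototype. The sketch below uses a standard dynamic-time-warping recurrence as a stand-in for the published Hydrograph Matching Algorithm (whose exact pseudocode differs); the backtracked index pairs play the role of the "rays", and their offsets give a simple timing-error measure:

```python
import numpy as np

def match(obs, sim):
    """Dynamic-programming alignment of two hydrographs (DTW stand-in)."""
    n, m = len(obs), len(sim)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(obs[i - 1] - sim[j - 1])
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    # backtrack the optimal path: each (i, j) pair is one "ray"
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    path.reverse()
    return D[n, m], path

# simulated flow is the observed hydrograph delayed by two time steps
obs = [0.0, 1.0, 3.0, 8.0, 5.0, 2.0, 1.0, 0.0, 0.0, 0.0]
sim = [0.0, 0.0, 0.0, 1.0, 3.0, 8.0, 5.0, 2.0, 1.0, 0.0]

cost, path = match(obs, sim)
timing_error = float(np.mean([j - i for i, j in path]))  # mean ray offset
```

Here the rays correctly pair the observed peak with the delayed simulated peak, and the mean ray offset recovers the two-step lag diluted by the undelayed end points; a pure amplitude measure such as Nash-Sutcliffe would only register a large error.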
GIS Toolsets for Planetary Geomorphology and Landing-Site Analysis
NASA Astrophysics Data System (ADS)
Nass, Andrea; van Gasselt, Stephan
2015-04-01
Modern Geographic Information Systems (GIS) allow expert and lay users alike to load and position geographic data and perform simple to highly complex surface analyses. For many applications, dedicated and ready-to-use GIS tools are available in standard software systems, while other applications require the modular combination of available basic tools to answer more specific questions. This also applies to analyses in modern planetary geomorphology, where many such basic tools can be combined to build complex analysis tools, e.g. in image and terrain-model analysis. Apart from the simple application of sets of different tools, many complex tasks require a more sophisticated design for storing and accessing data using databases (e.g. ArcHydro for hydrological data analysis). In planetary sciences, complex database-driven models are often required to efficiently analyse potential landing sites or store rover data, but geologic mapping data can also be stored and accessed more efficiently using database models rather than stand-alone shapefiles. For landing-site analyses, relief and surface-roughness estimates are two common concepts of particular interest, and for both, a number of different definitions co-exist. We here present an advanced toolset for the analysis of image and terrain-model data with an emphasis on the extraction of landing site characteristics using established criteria. We provide working examples and focus particularly on the concept of terrain roughness as it is interpreted in geomorphology and engineering studies.
Limitations to the Measurement of Oxygen Concentrations by HRTEM Imposed by Surface Roughness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lupini, Andrew R; Chisholm, Matthew F; van Benthem, Klaus
2005-01-01
In an article published recently in Microscopy and Microanalysis (Jia et al., 2004), it was claimed that aberration-corrected high resolution transmission electron microscopy (HRTEM) allows the quantitative measurement of oxygen concentrations in ceramic materials with atomic resolution. Similar claims have recently appeared elsewhere, based on images obtained through aberration correction (Jia et al., 2003; Jia & Urban, 2004) or very high voltages (Zhang et al., 2003). Seeing oxygen columns is a significant achievement of great importance (Spence, 2003) that will doubtlessly allow some exciting new science; however, other models could provide a better explanation for some of the experimental data than variations in the oxygen concentration. Quantification of the oxygen concentrations was attempted by comparing experimental images with simulations in which the fractional occupancy in individual oxygen columns was reduced. The results were interpreted as representing nonstoichiometry within the bulk and at grain boundaries. This is plausible because previous studies have shown that grain boundaries can be nonstoichiometric (Kim et al., 2001), and it is indeed possible that oxygen vacancies are present at boundaries or in the bulk. However, is this the only possible interpretation? We show that for the thicknesses considered a better match to the images is obtained using a simple model of surface damage in which atoms are removed from the surface, which would usually be interpreted as surface damage or local thickness variation (from ion milling, for example).
Rennie, Waverly; Phetsouvanh, Rattanaxay; Lupisan, Socorro; Vanisaveth, Viengsay; Hongvanthong, Bouasy; Phompida, Samlane; Alday, Portia; Fulache, Mila; Lumagui, Richard; Jorgensen, Pernille; Bell, David; Harvey, Steven
2007-01-01
The usefulness of rapid diagnostic tests (RDT) in malaria case management depends on the accuracy of the diagnoses they provide. Despite their apparent simplicity, previous studies indicate that RDT accuracy is highly user-dependent. As malaria RDTs will frequently be used in remote areas with little supervision or support, minimising mistakes is crucial. This paper describes the development of new instructions (job aids) to improve health worker performance, based on observations of common errors made by remote health workers and villagers in preparing and interpreting RDTs, in the Philippines and Laos. Initial preparation using the instructions provided by the manufacturer was poor, but improved significantly with the job aids (e.g. correct use both of the dipstick and cassette increased in the Philippines by 17%). However, mistakes in preparation remained commonplace, especially for dipstick RDTs, as did mistakes in interpretation of results. A short orientation on correct use and interpretation further improved accuracy, from 70% to 80%. The results indicate that apparently simple diagnostic tests can be poorly performed and interpreted, but provision of clear, simple instructions can reduce these errors. Preparation of appropriate instructions and training as well as monitoring of user behaviour are an essential part of rapid test implementation.
A Simple Pythagorean Interpretation of E² = p²c² + (mc²)²
NASA Astrophysics Data System (ADS)
Tobar, J. A.; Guillen, C. I.; Vargas, E. L.; Andrianarijaona, V. M.
2015-04-01
We consider the relationship between the relativistic energy, the momentum, and the rest energy, E² = p²c² + (mc²)², and use geometrical means to analyze each term in a spatial setting. The equation suggests that pc and mc² can be thought of as the two axes of a plane. De Broglie's hypothesis, λ = h/p, suggests that the pc-axis is connected to the wave properties of a moving object, and correspondingly that the mc²-axis is connected to its particle properties, such as its moment of inertia. These two axes could therefore represent the wave and particle (matter) properties of the moving object. An overview of possible models and meaningful interpretations, consistent with Dirac's prediction of the electron's magnetic moment, will be presented. The authors wish to give special thanks to the Pacific Union College Student Senate in Angwin, California, for their financial support.
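The geometry described in this abstract can be written down directly; a minimal worked version follows (the mixing angle θ is an illustrative symbol, not taken from the abstract):

```latex
% Energy-momentum relation as a Pythagorean (right-triangle) identity:
E^2 = (pc)^2 + (mc^2)^2
% i.e., E is the hypotenuse of a right triangle whose legs are pc and mc^2.
% The de Broglie relation ties the pc leg to the wave picture:
pc = \frac{hc}{\lambda}, \qquad \lambda = \frac{h}{p}
% The opening angle \theta between the hypotenuse E and the mc^2 leg then
% measures how "wave-like" the motion is; with p = \gamma m v and E = \gamma m c^2,
\tan\theta = \frac{pc}{mc^2} = \beta\gamma, \qquad
\cos\theta = \frac{mc^2}{E} = \frac{1}{\gamma}
```

In this reading, a particle at rest lies entirely along the mc² axis (θ = 0), while an ultrarelativistic particle approaches the pc axis (θ → π/2).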
Paillet, Frederick L.; Singhroy, V.H.; Hansen, D.T.; Pierce, R.R.; Johnson, A.I.
2002-01-01
Integration of geophysical data obtained at various scales can bridge the gap between localized data from boreholes and site-wide data from regional survey profiles. Specific approaches to such analysis include: 1) comparing geophysical measurements in boreholes with the same measurement made from the surface; 2) regressing geophysical data obtained in boreholes with water-sample data from screened intervals; 3) using multiple, physically independent measurements in boreholes to develop multivariate response models for surface geophysical surveys; 4) defining subsurface cell geometry for most effective survey inversion methods; and 5) making geophysical measurements in boreholes to serve as independent verification of geophysical interpretations. Integrated analysis of surface electromagnetic surveys and borehole geophysical logs at a study site in south Florida indicates that salinity of water in the surficial aquifers is controlled by a simple wedge of seawater intrusion along the coast and by a complex pattern of upward brine seepage from deeper aquifers throughout the study area. This interpretation was verified by drilling three additional test boreholes in carefully selected locations.
Diffusion processes in tumors: A nuclear medicine approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amaya, Helman, E-mail: haamayae@unal.edu.co
The number of counts used in nuclear medicine imaging techniques only provides physical information about the disintegration of the nuclei present in the radiotracer molecules taken up in a particular anatomical region; it is not true metabolic information. For this reason, a mathematical method was used to find a correlation between the number of counts and ¹⁸F-FDG mass concentration. This correlation allows a better interpretation of the results obtained in the study of diffusive processes in an agar phantom, and based on it, an image from the PETCETIX DICOM sample image set from the OsiriX-viewer software was processed. PET-CT gradient-magnitude and Laplacian images can show direct information on diffusive processes for radiopharmaceuticals that enter cells by simple diffusion. In the case of the radiopharmaceutical ¹⁸F-FDG, it is necessary to include pharmacokinetic models to make a correct interpretation of the gradient-magnitude and Laplacian count images.
NASA Astrophysics Data System (ADS)
Wong, K. W.; Ching, W. Y.
1989-04-01
We discuss a variety of experimental observations which are consistent with the theory of the excitonic-enhancement model (EEM) presented earlier. The experimental works discussed are: (1) isotope substitution; (2) fluorinated YBa₂Cu₃O₇₋ₓ; (3) infrared optical spectra; (4) specific heat and tunneling gap; (5) Hall effect and nuclear spin relaxation; (6) positron annihilation; (7) ultrasound velocity and sound attenuation; (8) Meissner effect and critical current; (9) antiferromagnetism and oxygen deficiency; (10) flux quantization; and (11) photoemission. A simple stoichiometric interpretation of the existing high-temperature superconducting oxides, based on the specific stacking of chemical subsystems, is also presented. It is argued that according to EEM theory, a superconducting oxide must contain two stable oxides: one having excitonic levels, such as Cu₂O; the other having an intrinsic hole population at the top of the valence band, such as CuO. A systematic search for other potential high-Tc compounds is also suggested.
Enhancement of magnetocaloric effect in the Gd₂Al phase by Co alloying
Huang, Z. Y.; Fu, H.; Hadimani, R. L.; ...
2014-11-14
Helicopter rotor and engine sizing for preliminary performance estimation
NASA Technical Reports Server (NTRS)
Talbot, P. D.; Bowles, J. V.; Lee, H. C.
1986-01-01
Methods are presented for estimating some of the more fundamental design variables of single-rotor helicopters (tip speed, blade area, disk loading, and installed power) based on design requirements (speed, weight, fuselage drag, and design hover ceiling). The well-known constraints of advancing-blade compressibility and retreating-blade stall are incorporated into the estimation process, based on an empirical interpretation of rotor performance data from large-scale wind-tunnel tests. Engine performance data are presented and correlated with a simple model usable for preliminary design. When approximate results are required quickly, these methods may be more convenient to use and provide more insight than large digital computer programs.
The Second Law and Quantum Physics
NASA Astrophysics Data System (ADS)
Bennett, Charles H.
2008-08-01
In this talk, I discuss the mystery of the second law and its relation to quantum information. There are many explanations of the second law, mostly satisfactory and not mutually exclusive. Here, I advocate quantum mechanics and quantum information as something that, through entanglement, helps resolve the paradox or the puzzle of the origin of the second law. I will discuss the interpretation called quantum Darwinism and how it helps explain why our world seems so classical, and what it has to say about the permanence or transience of information. And I will discuss a simple model illustrating why systems away from thermal equilibrium tend to be more complicated.
He, Temple; Habib, Salman
2013-09-01
Simple dynamical systems--with a small number of degrees of freedom--can behave in a complex manner due to the presence of chaos. Such systems are most often (idealized) limiting cases of more realistic situations. Isolating a small number of dynamical degrees of freedom in a realistically coupled system generically yields reduced equations with terms that can have a stochastic interpretation. In situations where both noise and chaos can potentially exist, it is not immediately obvious how Lyapunov exponents, key to characterizing chaos, should be properly defined. In this paper, we show how to do this in a class of well-defined noise-driven dynamical systems, derived from an underlying Hamiltonian model.
The Extensibility of an Interpreted Language Using Plugin Libraries
NASA Astrophysics Data System (ADS)
Herceg, Dorde; Radaković, Davorka
2011-09-01
Dynamic geometry software (DGS) comprises computer programs that allow one to create and manipulate geometrical drawings. Such programs are mostly used in teaching and studying geometry. However, DGS can also be used to develop interactive drawings not directly related to geometry. Examples include teaching materials for numerical mathematics at secondary school and university levels, or interactive mathematical games for elementary school children. Such applications often surpass the intended purposes of the DGS and may require complicated programming on the part of the user. In this paper we present a simple plug-in model which enables easy development and deployment of interactive GUI components for "Geometrijica", a DGS we are developing on Silverlight.
Gravity survey and depth to bedrock in Carson Valley, Nevada-California
Maurer, D.K.
1985-01-01
Gravity data were obtained from 460 stations in Carson Valley, Nevada and California. The data have been interpreted to obtain a map of approximate depth to bedrock for use in a ground-water model of the valley. This map delineates the shape of the alluvium-filled basin and shows that the maximum depth to bedrock exceeds 5,000 feet on the west side of the valley. A north-south-trending offset in the bedrock surface shows that the Carson Valley-Pine Nut Mountain block has not been tilted to the west as a simple unit, but comprises several smaller blocks. (USGS)
Solvent induced temperature dependencies of NMR parameters of hydrogen bonded anionic clusters
NASA Astrophysics Data System (ADS)
Golubev, Nikolai S.; Shenderovich, Ilja G.; Tolstoy, Peter M.; Shchepkin, Dmitry N.
2004-07-01
The solvent-induced temperature dependence of NMR parameters (proton and fluorine chemical shifts, and the two-bond scalar spin-spin coupling constant across the hydrogen bridge, ²ʰJ(F,F)) for the dihydrogen trifluoride anion, (FH)₂F⁻, in a polar aprotic solvent, CDF₃/CDF₂Cl, is reported and discussed. The results are interpreted in terms of a simple electrostatic model, accounting for the decrease of electrostatic repulsion between the two negatively charged fluorine atoms on placement in a dielectric medium. The conclusion is drawn that a polar medium causes some contraction of hydrogen bonds in ionic clusters, combined with a decrease of hydrogen-bond asymmetry.
Determining whether metals nucleate homogeneously on graphite: A case study with copper
Appy, David; Lei, Huaping; Han, Yong; ...
2014-11-05
In this study, we observe that Cu clusters grow on surface terraces of graphite as a result of physical vapor deposition in ultrahigh vacuum. We show that the observation is incompatible with a variety of models incorporating homogeneous nucleation and calculations of atomic-scale energetics. An alternative explanation, ion-mediated heterogeneous nucleation, is proposed and validated, both with theory and experiment. This serves as a case study in identifying when and whether the simple, common observation of metal clusters on carbon-rich surfaces can be interpreted in terms of homogeneous nucleation. We describe a general approach for making system-specific and laboratory-specific predictions.
A Point-Like Picture of the Hydrogen Atom
NASA Astrophysics Data System (ADS)
Faghihi, F.; Jangjoo, A.; Khani, M.
A point-like picture of the Schrödinger solution for the hydrogen atom is worked out to emphasize that "point-like particles" may be described by a "probability wave function". In each case, the three-dimensional shape of |Ψ_nlm(r_n, cosθ)|² is plotted and the paths of the point-like electron (more precisely, of the reduced mass of the pair of particles) are described in each closed shell. Finally, the orbital shapes of molecules are given according to the present simple model. In our opinion, the interpretation of the Correspondence Principle, a basic principle in all elementary quantum texts, deserves to be reviewed.
Robust controller designs for second-order dynamic system: A virtual passive approach
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh
1990-01-01
A robust controller design is presented for second-order dynamic systems. The controller is model-independent and itself is a virtual second-order dynamic system. Conditions on actuator and sensor placements are identified for controller designs that guarantee overall closed-loop stability. The dynamic controller can be viewed as a virtual passive damping system that serves to stabilize the actual dynamic system. The control gains are interpreted as virtual mass, spring, and dashpot elements that play the same roles as actual physical elements in stability analysis. Position, velocity, and acceleration feedback are considered. Simple examples are provided to illustrate the physical meaning of this controller design.
Interpreting fMRI data: maps, modules and dimensions
Op de Beeck, Hans P.; Haushofer, Johannes; Kanwisher, Nancy G.
2009-01-01
Neuroimaging research over the past decade has revealed a detailed picture of the functional organization of the human brain. Here we focus on two fundamental questions that are raised by the detailed mapping of sensory and cognitive functions and illustrate these questions with findings from the object-vision pathway. First, are functionally specific regions that are located close together best understood as distinct cortical modules or as parts of a larger-scale cortical map? Second, what functional properties define each cortical map or module? We propose a model in which overlapping continuous maps of simple features give rise to discrete modules that are selective for complex stimuli. PMID:18200027
Effects of active links on epidemic transmission over social networks
NASA Astrophysics Data System (ADS)
Zhu, Guanghu; Chen, Guanrong; Fu, Xinchu
2017-02-01
A new epidemic model with two infection periods is developed to account for human behavior in social networks, where newly infected individuals gradually restrict most of their future contacts or are quarantined, causing the infectivity to change from a degree-dependent form to a constant. The corresponding dynamics are formulated as a set of ordinary differential equations (ODEs) via a mean-field approximation. The effects of diverse infectivity on the epidemic dynamics are examined, with a behavioral interpretation of the basic reproduction number. Results show that such simple adaptive reactions largely determine the impact of network structure on epidemics. In particular, a theorem proposed by Lajmanovich and Yorke in 1976 is generalized so that it can be applied to the analysis of epidemic models with multiple compartments, especially network-coupled ODE systems.
Model reduction for Space Station Freedom
NASA Technical Reports Server (NTRS)
Williams, Trevor
1992-01-01
Model reduction is an important practical problem in the control of flexible spacecraft, and a considerable amount of work has been carried out on this topic. Two of the best known methods developed are modal truncation and internal balancing. Modal truncation is simple to implement but can give poor results when the structure possesses clustered natural frequencies, as often occurs in practice. Balancing avoids this problem but has the disadvantages of high computational cost, possible numerical sensitivity problems, and no physical interpretation for the resulting balanced 'modes'. The purpose of this work is to examine the performance of the subsystem balancing technique developed by the investigator when tested on a realistic flexible space structure, in this case a model of the Permanently Manned Configuration (PMC) of Space Station Freedom. This method retains the desirable properties of standard balancing while overcoming the three difficulties listed above. It achieves this by first decomposing the structural model into subsystems of highly correlated modes. Each subsystem is approximately uncorrelated from all others, so balancing them separately and then combining yields comparable results to balancing the entire structure directly. The operation count reduction obtained by the new technique is considerable: a factor of roughly r² if the system decomposes into r equal subsystems. Numerical accuracy is also improved significantly, as the matrices being operated on are of reduced dimension, and the modes of the reduced-order model now have a clear physical interpretation; they are, to first order, linear combinations of repeated-frequency modes.
Quantum Theories of Self-Localization
NASA Astrophysics Data System (ADS)
Bernstein, Lisa Joan
In the classical dynamics of coupled oscillator systems, nonlinearity leads to the existence of stable solutions in which energy remains localized for all time. Here the quantum-mechanical counterpart of classical self-localization is investigated in the context of two model systems. For these quantum models, the terms corresponding to classical nonlinearities modify a subset of the stationary quantum states to be particularly suited to the creation of nonstationary wavepackets that localize energy for long times. The first model considered here is the Quantized Discrete Self-Trapping model (QDST), a system of anharmonic oscillators with linear dispersive coupling used to model local modes of vibration in polyatomic molecules. A simple formula is derived for a particular symmetry class of QDST systems which gives an analytic connection between quantum self-localization and classical local modes. This formula is also shown to be useful in the interpretation of the vibrational spectra of some molecules. The second model studied is the Frohlich/Einstein Dimer (FED), a two-site system of anharmonically coupled oscillators based on the Frohlich Hamiltonian and motivated by the theory of Davydov solitons in biological protein. The Born-Oppenheimer perturbation method is used to obtain approximate stationary state wavefunctions with error estimates for the FED at the first excited level. A second approach is used to reduce the first excited level FED eigenvalue problem to a system of ordinary differential equations. A simple theory of low-energy self-localization in the FED is discussed. The quantum theories of self-localization in the intrinsic QDST model and the extrinsic FED model are compared.
How to interpret Methylation Sensitive Amplified Polymorphism (MSAP) profiles?
2014-01-01
Background DNA methylation plays a key role in development, contributes to genome stability, and may also respond to external factors supporting adaptation and evolution. To connect different types of stimuli with particular biological processes, identifying genome regions with altered 5-methylcytosine distribution at a genome-wide scale is important. Many researchers use the simple, reliable, and relatively inexpensive Methylation Sensitive Amplified Polymorphism (MSAP) method, which is particularly useful in studies of epigenetic variation. However, electrophoretic patterns produced by the method are rather difficult to interpret, particularly when the MspI and HpaII isoschizomers are used, because these enzymes are methylation-sensitive and any C within the CCGG recognition motif can be methylated in plant DNA. Results Here, we evaluate MSAP patterns with respect to current knowledge of the enzyme activities and the level and distribution of 5-methylcytosine in plant and vertebrate genomes. We discuss potential caveats related to complex MSAP patterns and provide clues regarding how to interpret them. We further show that the addition of a combined HpaII + MspI digestion would assist in the interpretation of the most controversial MSAP pattern, represented by a signal in the HpaII but not in the MspI profile. Conclusions We recommend a modification of the MSAP protocol that unambiguously discerns between putative hemimethylated mCCGG and internal CmCGG sites. We believe that our view and this simple improvement will assist in correct MSAP data interpretation. PMID:24393618
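The banding logic discussed here reduces to a small decision table. A sketch in Python follows; the interpretation strings are the standard MSAP reading, with the controversial HpaII-only class flagged as ambiguous, as the paper argues. The function and label names are illustrative:

```python
def interpret_msap(hpaii_band: bool, mspi_band: bool) -> str:
    """Standard interpretation of one CCGG site from an MSAP fragment pair.

    hpaii_band / mspi_band: whether a band is present in the HpaII / MspI
    electrophoretic profile for this fragment.
    """
    if hpaii_band and mspi_band:
        # Both isoschizomers cut: no blocking methylation detected.
        return "unmethylated CCGG"
    if mspi_band:
        # MspI cuts CmCGG; HpaII is blocked by internal methylation.
        return "internal cytosine methylation (CmCGG)"
    if hpaii_band:
        # The controversial class: usually read as hemimethylated mCCGG,
        # but ambiguous -- confirm with a combined HpaII + MspI digest.
        return "putative hemimethylated mCCGG (ambiguous)"
    # Neither enzyme cuts: cannot distinguish hypermethylation from a
    # missing or mutated restriction site.
    return "uninformative: hypermethylation or absence of the CCGG site"

for pattern in [(True, True), (False, True), (True, False), (False, False)]:
    print(pattern, "->", interpret_msap(*pattern))
```

The explicit "ambiguous" and "uninformative" labels are the point: treating either class as a definite methylation state is exactly the interpretive error the paper warns against.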
Building an Open-source Simulation Platform of Acoustic Radiation Force-based Breast Elastography
Wang, Yu; Peng, Bo; Jiang, Jingfeng
2017-01-01
Ultrasound-based elastography including strain elastography (SE), acoustic radiation force impulse (ARFI) imaging, point shear wave elastography (pSWE) and supersonic shear imaging (SSI) have been used to differentiate breast tumors among other clinical applications. The objective of this study is to extend a previously published virtual simulation platform built for ultrasound quasi-static breast elastography toward acoustic radiation force-based breast elastography. Consequently, the extended virtual breast elastography simulation platform can be used to validate image pixels with known underlying soft tissue properties (i.e. "ground truth") in complex, heterogeneous media, enhancing confidence in elastographic image interpretations. The proposed virtual breast elastography system inherited four key components from the previously published virtual simulation platform: an ultrasound simulator (Field II), a mesh generator (Tetgen), a finite element solver (FEBio) and a visualization and data processing package (VTK). Using a simple message passing mechanism, functionalities have now been extended to acoustic radiation force-based elastography simulations. Examples involving three different numerical breast models with increasing complexity (one uniform model, one simple inclusion model and one virtual complex breast model derived from magnetic resonance imaging data) were used to demonstrate capabilities of this extended virtual platform. Overall, simulation results were compared with the published results. In the uniform model, the estimated shear wave speed (SWS) values were within 4% of the predetermined SWS values. In the simple inclusion and the complex breast models, SWS values of all hard inclusions in soft backgrounds were slightly underestimated, similar to what has been reported.
The elastic contrast values and visual observation show that ARFI images have higher spatial resolution, while SSI images can provide higher inclusion-to-background contrast. In summary, our initial results were consistent with our expectations and with what has been reported in the literature. The proposed (open-source) simulation platform can serve as a single gateway to perform many elastographic simulations in a transparent manner, thereby promoting collaborative developments. PMID:28075330
Building an open-source simulation platform of acoustic radiation force-based breast elastography
NASA Astrophysics Data System (ADS)
Wang, Yu; Peng, Bo; Jiang, Jingfeng
2017-03-01
Ultrasound-based elastography including strain elastography, acoustic radiation force impulse (ARFI) imaging, point shear wave elastography and supersonic shear imaging (SSI) have been used to differentiate breast tumors among other clinical applications. The objective of this study is to extend a previously published virtual simulation platform built for ultrasound quasi-static breast elastography toward acoustic radiation force-based breast elastography. Consequently, the extended virtual breast elastography simulation platform can be used to validate image pixels with known underlying soft tissue properties (i.e. ‘ground truth’) in complex, heterogeneous media, enhancing confidence in elastographic image interpretations. The proposed virtual breast elastography system inherited four key components from the previously published virtual simulation platform: an ultrasound simulator (Field II), a mesh generator (Tetgen), a finite element solver (FEBio) and a visualization and data processing package (VTK). Using a simple message passing mechanism, functionalities have now been extended to acoustic radiation force-based elastography simulations. Examples involving three different numerical breast models with increasing complexity—one uniform model, one simple inclusion model and one virtual complex breast model derived from magnetic resonance imaging data, were used to demonstrate capabilities of this extended virtual platform. Overall, simulation results were compared with the published results. In the uniform model, the estimated shear wave speed (SWS) values were within 4% compared to the predetermined SWS values. In the simple inclusion and the complex breast models, SWS values of all hard inclusions in soft backgrounds were slightly underestimated, similar to what has been reported. The elastic contrast values and visual observation show that ARFI images have higher spatial resolution, while SSI images can provide higher inclusion-to-background contrast. 
In summary, our initial results were consistent with our expectations and with what has been reported in the literature. The proposed (open-source) simulation platform can serve as a single gateway to perform many elastographic simulations in a transparent manner, thereby promoting collaborative developments.
NASA Astrophysics Data System (ADS)
Huang, Qingdao; Qian, Hong
2009-09-01
We establish a mathematical model for a cellular biochemical signaling module in terms of a planar differential equation system. The signaling process is carried out by two phosphorylation-dephosphorylation reaction steps that share a common kinase and phosphatase with saturated enzyme kinetics. The pair of equations is particularly simple in the present mathematical formulation, but singular. A complete mathematical analysis is developed based on an elementary perturbation theory. The dynamics exhibit the canonical competition behavior in addition to bistability. Although competition is widely understood in the ecological context, we are not aware of a demonstration of the full range of competition behavior in a simple biochemical signaling network. The competition dynamics have broad implications for cellular processes such as cell differentiation and cancer immunoediting. The concepts of homogeneous and heterogeneous multisite phosphorylation are introduced and their corresponding dynamics compared: there is no bistability in a heterogeneous dual-phosphorylation system. A stochastic interpretation is also provided that further gives an intuitive understanding of the bistable behavior inside cells.
On a Continuum Limit for Loop Quantum Cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corichi, Alejandro; Center for Fundamental Theory, Institute for Gravitation and the Cosmos, Pennsylvania State University, University Park PA 16802; Vukasinac, Tatjana
2008-03-06
The use of non-regular representations of the Heisenberg-Weyl commutation relations has proved to be useful for studying conceptual and technical issues in quantum gravity. Of particular relevance is the study of Loop Quantum Cosmology (LQC), a symmetry-reduced theory related to Loop Quantum Gravity that is based on a non-regular, polymeric representation. Recently, a soluble model was used by Ashtekar, Corichi and Singh to study the relation between Loop Quantum Cosmology and the standard Wheeler-DeWitt theory and, in particular, the passage to the limit in which the auxiliary parameter (interpreted as 'quantum geometry discreteness') is sent to zero in the hope of getting rid of this 'regulator' that dictates the LQC dynamics at each 'scale'. In this note we outline the first steps toward reformulating this question within the program developed by the authors for studying the continuum limit of polymeric theories, which was successfully applied to simple systems such as the simple harmonic oscillator.
Martins, Raquel R; McCracken, Andrew W; Simons, Mirre J P; Henriques, Catarina M; Rera, Michael
2018-02-05
The Smurf Assay (SA) was initially developed in the model organism Drosophila melanogaster, where a dramatic increase of intestinal permeability has been shown to occur during aging (Rera et al., 2011). We have since validated the protocol in multiple other model organisms (Dambroise et al., 2016) and have utilized the assay to further our understanding of aging (Tricoire and Rera, 2015; Rera et al., 2018). The SA has now also been used by other labs to assess intestinal barrier permeability (Clark et al., 2015; Katzenberger et al., 2015; Barekat et al., 2016; Chakrabarti et al., 2016; Gelino et al., 2016). The SA in itself is simple; however, numerous small details can have a considerable impact on its experimental validity and subsequent interpretation. Here, we provide a detailed update on the SA technique and explain how to catch a Smurf while avoiding the most common experimental fallacies.
Is internal friction friction?
Savage, J.C.; Byerlee, J.D.; Lockner, D.A.
1996-01-01
Mogi [1974] proposed a simple model of the incipient rupture surface to explain the Coulomb failure criterion. We show here that this model can plausibly be extended to explain the Mohr failure criterion. In Mogi's model the incipient rupture surface immediately before fracture consists of areas across which material integrity is maintained (intact areas) and areas across which it is not (cracks). The strength of the incipient rupture surface is made up of the inherent strength of the intact areas plus the frictional resistance to sliding offered by the cracked areas. Although the coefficient of internal friction (slope of the strength versus normal stress curve) depends upon both the frictional and inherent strengths, the phenomenon of internal friction can be identified with the frictional part. The curvature of the Mohr failure envelope is interpreted as a consequence of differences in damage (cracking) accumulated in prefailure loading at different confining pressures.
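The partition of strength that Mogi's model describes can be written out explicitly; in this sketch the symbols (a for the intact area fraction, s for inherent strength, μ for the sliding friction coefficient) are illustrative, not taken from the paper:

```latex
% Coulomb failure criterion: shear strength linear in normal stress,
% with cohesion \tau_0 and coefficient of internal friction \mu_i:
\tau = \tau_0 + \mu_i \, \sigma_n
% Mogi-style partition of the incipient rupture surface: a fraction a is
% intact (inherent strength s); the cracked fraction (1-a) resists
% sliding only by friction:
\tau = a\,s + (1 - a)\,\mu\,\sigma_n
% Matching terms identifies the cohesion with the intact areas,
% \tau_0 = a\,s, and "internal friction" with the frictional part,
% \mu_i = (1 - a)\,\mu.  A normal-stress dependence of the intact
% fraction, a(\sigma_n), reflecting different prefailure damage at
% different confining pressures, bends the envelope \tau(\sigma_n),
% yielding the curved Mohr criterion.
```

This makes the paper's conclusion concrete: the phenomenon called internal friction is identified with the genuinely frictional term, even though the measured slope mixes both contributions.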
Social dilemmas among supergenes: intragenomic sexual conflict and a selfing solution in Oenothera.
Brown, Sam P; Levin, Donald A
2011-12-01
Recombination is a powerful policing mechanism to control intragenomic cheats. The "parliament of the genes" can often rapidly block driving genes from cheating during meiosis. But what if the genome parliament is reduced to only two members, or supergenes? Using a series of simple game-theoretic models inspired by the peculiar genetics of Oenothera sp., we illustrate that a two-supergene genome (α and β) can produce a number of surprising evolutionary dynamics, including increases in lineage longevity following a transition from sexuality (outcrossing) to asexuality (clonal self-fertilization). We end by interpreting the model in the broader context of the evolution of mutualism, which highlights that greater α-β cooperation in the self-fertilizing model can be viewed as an example of partner fidelity driving multilineage cooperation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patil, Abhijit A.; Pandey, Yogendra Narayan; Doxastakis, Manolis
2014-10-01
The acid-catalyzed deprotection of glassy poly(4-hydroxystyrene-co-tert-butyl acrylate) films was studied with infrared absorbance spectroscopy and stochastic simulations. Experimental data were interpreted with a simple description of subdiffusive acid transport coupled to second-order acid loss. This model predicts key attributes of observed deprotection rates, such as fast reaction at short times, slow reaction at long times, and a nonlinear dependence on acid loading. Fickian diffusion is approached by increasing the post-exposure bake temperature or adding plasticizing agents to the polymer resin. These findings demonstrate that acid mobility and overall deprotection kinetics are coupled to glassy matrix dynamics. To complement the analysis of bulk kinetics, acid diffusion lengths were calculated from the anomalous transport model and compared with nanopattern line widths. The consistent scaling between experiments and simulations suggests that the anomalous diffusion model could be further developed into a predictive lithography tool.
Hyperopic photorefractive keratectomy and central islands
NASA Astrophysics Data System (ADS)
Gobbi, Pier Giorgio; Carones, Francesco; Morico, Alessandro; Vigo, Luca; Brancato, Rosario
1998-06-01
We have evaluated the refractive evolution in patients treated with hyperopic PRK to assess the extent of the initial overcorrection and the time constant of regression. To this end, the time history of the refractive error (i.e., the difference between achieved and intended refractive correction) has been fitted by means of an exponential statistical model, giving information characterizing the surgical procedure with a direct clinical meaning. Both hyperopic and myopic PRK procedures have been analyzed by this method. The analysis of the fitting model parameters shows that hyperopic PRK patients exhibit a definitely higher initial overcorrection than myopic ones, and a regression time constant which is much longer. A common mechanism is proposed to be responsible for the refractive outcomes in hyperopic treatments and in myopic patients exhibiting significant central islands. The interpretation is in terms of superhydration of the central cornea, and is based on a simple physical model evaluating the amount of centripetal compression in the apical cornea.
NASA Astrophysics Data System (ADS)
De Filippis, G.; Cataudella, V.; Mishchenko, A. S.; Nagaosa, N.; Fierro, A.; de Candia, A.
2015-02-01
The transport properties at finite temperature of crystalline organic semiconductors are investigated, within the Su-Schrieffer-Heeger model, by combining an exact diagonalization technique, Monte Carlo approaches, and a maximum entropy method. The temperature-dependent mobility data measured in single crystals of rubrene are successfully reproduced: a crossover from super- to subdiffusive motion occurs in the range 150 ≤T ≤200 K , where the mean free path becomes of the order of the lattice parameter and strong memory effects start to appear. We provide an effective model, which can successfully explain features of the absorption spectra at low frequencies. The observed response to slowly varying electric field is interpreted by means of a simple model where the interaction between the charge carrier and lattice polarization modes is simulated by a harmonic interaction between a fictitious particle and an electron embedded in a viscous fluid.
De Filippis, G; Cataudella, V; Mishchenko, A S; Nagaosa, N; Fierro, A; de Candia, A
2015-02-27
The transport properties at finite temperature of crystalline organic semiconductors are investigated, within the Su-Schrieffer-Heeger model, by combining an exact diagonalization technique, Monte Carlo approaches, and a maximum entropy method. The temperature-dependent mobility data measured in single crystals of rubrene are successfully reproduced: a crossover from super- to subdiffusive motion occurs in the range 150≤T≤200 K, where the mean free path becomes of the order of the lattice parameter and strong memory effects start to appear. We provide an effective model, which can successfully explain features of the absorption spectra at low frequencies. The observed response to slowly varying electric field is interpreted by means of a simple model where the interaction between the charge carrier and lattice polarization modes is simulated by a harmonic interaction between a fictitious particle and an electron embedded in a viscous fluid.
Is Decoupling GDP Growth from Environmental Impact Possible?
Ward, James D; Sutton, Paul C; Werner, Adrian D; Costanza, Robert; Mohr, Steve H; Simmons, Craig T
2016-01-01
The argument that human society can decouple economic growth (defined as growth in Gross Domestic Product, GDP) from growth in environmental impacts is appealing. If such decoupling is possible, it means that GDP growth is a sustainable societal goal. Here we show that the decoupling concept can be interpreted using an easily understood model of economic growth and environmental impact. The simple model is compared to historical data and modelled projections to demonstrate that growth in GDP ultimately cannot be decoupled from growth in material and energy use. It is therefore misleading to develop growth-oriented policy around the expectation that decoupling is possible. We also note that GDP is increasingly seen as a poor proxy for societal wellbeing. GDP growth is therefore a questionable societal goal. Society can sustainably improve wellbeing, including the wellbeing of its natural assets, but only by discarding GDP growth as the goal in favor of more comprehensive measures of societal wellbeing.
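A minimal numerical sketch of the kind of model the abstract describes (parameters invented for illustration, not taken from the study): environmental impact is GDP times resource intensity, and if GDP grows faster than intensity declines, impact keeps rising even though intensity falls every year ("relative" but not "absolute" decoupling).

```python
# Toy decoupling model (illustrative only): impact = GDP x resource intensity.
# Intensity declines 1%/yr while GDP grows 3%/yr, so impact still grows.

def project(gdp0, intensity0, gdp_growth, intensity_decline, years):
    gdp, intensity = gdp0, intensity0
    impacts = []
    for _ in range(years):
        impacts.append(gdp * intensity)
        gdp *= 1 + gdp_growth
        intensity *= 1 - intensity_decline
    return impacts

impacts = project(gdp0=100.0, intensity0=1.0,
                  gdp_growth=0.03, intensity_decline=0.01, years=50)
# Net factor per year is 1.03 * 0.99 > 1, so impact rises monotonically.
```

Absolute decoupling would require the intensity decline rate to exceed the GDP growth rate indefinitely, which is the possibility the paper argues against.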
The Vaigat Rock Avalanche Laboratory, west-central Greenland
NASA Astrophysics Data System (ADS)
Dunning, S.; Rosser, N. J.; Szczucinski, W.; Norman, E. C.; Benjamin, J.; Strzelecki, M.; Long, A. J.; Drewniak, M.
2013-12-01
Rock avalanches have unusually high mobility and pose both an immediate hazard and far-field impacts associated with dam breach, glacier collapse and, where they run out into water, tsunami. Such secondary hazards can often pose higher risks than the original landslide. The prediction of future threats posed by potential rock avalanches is heavily reliant upon understanding of the physics derived from an interpretation of deposits left by previous events, yet drawing comparisons between multiple events is normally challenging because interactions with complex mountainous terrain make the deposits from each event unique. As such, numerical models and the interpretation of the underlying physics which govern landslide mobility are commonly case-specific and poorly suited to extrapolation beyond the single events the model is tuned to. Here we present a high-resolution LiDAR and hyperspectral dataset captured across a unique cluster of large rock avalanche source areas and deposits in the Vaigat strait, west-central Greenland. Vaigat offers the unprecedented opportunity to model a sample of > 15 rock avalanches of various ages sourced from an 80 km coastal escarpment. At Vaigat many of the key variables (topography, geology, post-glacial history) are held constant across all landslides, providing the chance to investigate the variations in dynamics and emplacement style related to variable landslide volume, drop-heights, and thinning/spreading over relatively simple, unrestricted run-out zones both onto land and into water. Our data suggest that this region represents excellent preservation of landslide deposits, and hence is well suited to calibrate numerical models of run-out dynamics. We use these data to aid the interpretation of deposit morphology, structure, lithology and run-out characteristics in more complex settings.
Uniquely, we are also able to calibrate our models using a far-field dataset of well-preserved tsunami run-up deposits resulting from the 21 November 2000 Paatuut landslide. The study was funded by Polish National Science Centre grant No. 2011/01/B/ST10/01553, and project UK NERC ARSF IG13-15.
SUBMILLIMETER GALAXY NUMBER COUNTS AND MAGNIFICATION BY GALAXY CLUSTERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lima, Marcos; Jain, Bhuvnesh; Devlin, Mark
2010-07-01
We present an analytical model that reproduces measured galaxy number counts from surveys in the wavelength range of 500 μm-2 mm. The model involves a single high-redshift galaxy population with a Schechter luminosity function that has been gravitationally lensed by galaxy clusters in the mass range 10^13-10^15 M_sun. This simple model reproduces both the low-flux and the high-flux end of the number counts reported by the BLAST, SCUBA, AzTEC, and South Pole Telescope (SPT) surveys. In particular, our model accounts for the most luminous galaxies detected by SPT as the result of high magnifications by galaxy clusters (magnification factors of 10-30). This interpretation implies that submillimeter (submm) and millimeter surveys of this population may prove to be a useful addition to ongoing cluster detection surveys. The model also implies that the bulk of submm galaxies detected at wavelengths larger than 500 μm lie at redshifts greater than 2.
NASA Technical Reports Server (NTRS)
Courtin, Regis; Wagener, Richard; Mckay, Christopher P.; Caldwell, John; Fricke, Karl-Heinrich
1991-01-01
The theoretical model developed by McKay et al. (1989) to characterize the size distribution, thermal structure, and chemical composition of the stratospheric haze of Titan is applied to new 220-335-nm albedo measurements obtained with the long-wavelength prime camera of the IUE during August 1987. Data and model predictions are presented in extensive graphs and discussed in detail. It is shown that a simple model with particles of one size at a given altitude does not accurately reproduce the observed features in all spectral regions, but that good general agreement is obtained using a model with a uniformly mixed layer at 150-600 km and a bimodal distribution of small 'polymer' haze particles (radius less than 20 nm) and larger haze particles (radius 100-500 nm). The number densities implied by this model require, however, a mechanism such as electrostatic charging or reaction kinetics to inhibit coagulation of the smaller particles.
Brian hears: online auditory processing using vectorization over channels.
Fontaine, Bertrand; Goodman, Dan F M; Benichoux, Victor; Brette, Romain
2011-01-01
The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in "Brian Hears," a library for the spiking neural network simulator package "Brian." This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations.
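A dependency-free structural sketch of the filterbank idea described above: one simple filter per frequency channel, all driven by the same input signal. The channel count, cutoffs, and one-pole low-pass filters here are invented for illustration; Brian Hears itself uses far more realistic cochlear filters and vectorizes the per-channel updates as NumPy array operations rather than Python loops.

```python
# Sketch of a bank of per-channel filters (one-pole low-pass), the structure
# that "Brian Hears" vectorizes across channels. Parameters are illustrative.
import math

fs = 8000.0                                          # sample rate (Hz)
cutoffs = [100.0 * 2 ** (k / 2) for k in range(10)]  # 10 channels, ~100 Hz-2.3 kHz
alphas = [math.exp(-2 * math.pi * fc / fs) for fc in cutoffs]

def filterbank(signal):
    """Run every channel's filter over the signal; returns channels x samples."""
    states = [0.0] * len(alphas)
    out = [[] for _ in alphas]
    for x in signal:
        for c, a in enumerate(alphas):
            # y[n] = a * y[n-1] + (1 - a) * x[n]  (exponential smoothing)
            states[c] = a * states[c] + (1 - a) * x
            out[c].append(states[c])
    return out

signal = [math.sin(2 * math.pi * 50.0 * n / fs) for n in range(400)]
channels = filterbank(signal)
```

The inner loop over channels is exactly what the paper replaces with a single vectorized update, so that interpretation overhead is paid once per sample rather than once per channel.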
Rasulev, Bakhtiyor; Kusić, Hrvoje; Leszczynska, Danuta; Leszczynski, Jerzy; Koprivanac, Natalija
2010-05-01
The goal of the study was to predict toxicity in vivo caused by aromatic compounds structured with a single benzene ring and the presence or absence of different substituent groups such as hydroxyl-, nitro-, amino-, methyl-, methoxy-, etc., by using QSAR/QSPR tools. A Genetic Algorithm and multiple regression analysis were applied to select the descriptors and to generate the correlation models. The most predictive model is shown to be the 3-variable model which also has a good ratio of the number of descriptors and their predictive ability to avoid overfitting. The main contributions to the toxicity were shown to be the polarizability weighted MATS2p and the number of certain groups C-026 descriptors. The GA-MLRA approach showed good results in this study, which allows the building of a simple, interpretable and transparent model that can be used for future studies of predicting toxicity of organic compounds to mammals.
Modeling molecular mechanisms in the axon
NASA Astrophysics Data System (ADS)
de Rooij, R.; Miller, K. E.; Kuhl, E.
2017-03-01
Axons are living systems that display highly dynamic changes in stiffness, viscosity, and internal stress. However, the mechanistic origin of these phenomenological properties remains elusive. Here we establish a computational mechanics model that interprets cellular-level characteristics as emergent properties from molecular-level events. We create an axon model of discrete microtubules, which are connected to neighboring microtubules via discrete crosslinking mechanisms that obey a set of simple rules. We explore two types of mechanisms: passive and active crosslinking. Our passive and active simulations suggest that the stiffness and viscosity of the axon increase linearly with the crosslink density, and that both are highly sensitive to the crosslink detachment and reattachment times. Our model explains how active crosslinking with dynein motors generates internal stresses and actively drives axon elongation. We anticipate that our model will allow us to probe a wide variety of molecular phenomena—both in isolation and in interaction—to explore emergent cellular-level features under physiological and pathological conditions.
Cozzi-Lepri, Alessandro; Prosperi, Mattia C F; Kjær, Jesper; Dunn, David; Paredes, Roger; Sabin, Caroline A; Lundgren, Jens D; Phillips, Andrew N; Pillay, Deenan
2011-01-01
Whether statistical analysis of the correlation between genotypic data and virological response can yield a score for a specific antiretroviral (lopinavir/r in this analysis) that improves on the prediction of viral load response given by existing expert-based interpretation systems (IS) remains largely unanswered. We used the data of the patients from the UK Collaborative HIV Cohort (UK CHIC) Study for whom genotypic data were stored in the UK HIV Drug Resistance Database (UK HDRD) to construct a training/validation dataset of treatment change episodes (TCE). We used the average square error (ASE) on a 10-fold cross-validation and on a test dataset (the EuroSIDA TCE database) to compare the performance of a newly derived lopinavir/r score with that of the 3 most widely used expert-based interpretation rules (ANRS, HIVDB and Rega). Our analysis identified mutations V82A, I54V, K20I and I62V, which were associated with reduced viral response, and mutations I15V and V91S, which determined lopinavir/r hypersensitivity. All models performed equally well (ASE on test ranging between 1.1 and 1.3, p = 0.34). We fully explored the potential of linear regression to construct a simple predictive model for lopinavir/r-based TCE. Although the performance of our proposed score was similar to that of already existing IS, previously unrecognized lopinavir/r-associated mutations were identified. The analysis illustrates an approach of validation of expert-based IS that could be used in the future for other antiretrovirals and in other settings outside HIV research.
Ponce, Carlos; Bravo, Carolina; Alonso, Juan Carlos
2014-01-01
Studies evaluating agri-environmental schemes (AES) usually focus on responses of single species or functional groups. Analyses are generally based on simple habitat measurements but ignore food availability and other important factors. This can limit our understanding of the ultimate causes determining the reactions of birds to AES. We investigated these issues in detail and throughout the main seasons of a bird's annual cycle (mating, postfledging and wintering) in a dry cereal farmland in a Special Protection Area for farmland birds in central Spain. First, we modeled four bird response parameters (abundance, species richness, diversity and “Species of European Conservation Concern” [SPEC]-score), using detailed food availability and vegetation structure measurements (food models). Second, we fitted new models, built using only substrate composition variables (habitat models). Whereas habitat models revealed that both, fields included and not included in the AES benefited birds, food models went a step further and included seed and arthropod biomass as important predictors, respectively, in winter and during the postfledging season. The validation process showed that food models were on average 13% better (up to 20% in some variables) in predicting bird responses. However, the cost of obtaining data for food models was five times higher than for habitat models. This novel approach highlighted the importance of food availability-related causal processes involved in bird responses to AES, which remained undetected when using conventional substrate composition assessment models. Despite their higher costs, measurements of food availability add important details to interpret the reactions of the bird community to AES interventions and thus facilitate evaluating the real efficiency of AES programs. PMID:25165523
Kwok, Oi-Man; Underhill, Andrea T.; Berry, Jack W.; Luo, Wen; Elliott, Timothy R.; Yoon, Myeongsun
2008-01-01
The use and quality of longitudinal research designs has increased over the past two decades, and new approaches for analyzing longitudinal data, including multi-level modeling (MLM) and latent growth modeling (LGM), have been developed. The purpose of this paper is to demonstrate the use of MLM and its advantages in analyzing longitudinal data. Data from a sample of individuals with intra-articular fractures of the lower extremity from the University of Alabama at Birmingham’s Injury Control Research Center is analyzed using both SAS PROC MIXED and SPSS MIXED. We start our presentation with a discussion of data preparation for MLM analyses. We then provide example analyses of different growth models, including a simple linear growth model and a model with a time-invariant covariate, with interpretation for all the parameters in the models. More complicated growth models with different between- and within-individual covariance structures and nonlinear models are discussed. Finally, information related to MLM analysis such as online resources is provided at the end of the paper. PMID:19649151
Lambron, Julien; Rakotonjanahary, Josué; Loisel, Didier; Frampas, Eric; De Carli, Emilie; Delion, Matthieu; Rialland, Xavier; Toulgoat, Frédérique
2016-02-01
Magnetic resonance (MR) images from children with optic pathway glioma (OPG) are complex. We initiated this study to evaluate the accuracy of MR imaging (MRI) interpretation and to propose a simple and reproducible imaging classification for MRI. We randomly selected 140 MRIs from among 510 MRIs performed on 104 children diagnosed with OPG in France from 1990 to 2004. These images were reviewed independently by three radiologists (F.T., 15 years of experience in neuroradiology; D.L., 25 years of experience in pediatric radiology; and J.L., 3 years of experience in radiology) using a classification derived from the Dodge and modified Dodge classifications. Intra- and interobserver reliabilities were assessed using the Bland-Altman method and the kappa coefficient. These reviews allowed the definition of reliable criteria for MRI interpretation. The reviews showed intraobserver variability and large discrepancies among the three radiologists (kappa coefficient varying from 0.11 to 1). These variabilities were too large for the interpretation to be considered reproducible over time or among observers. A consensual analysis, taking into account all observed variabilities, allowed the development of a definitive interpretation protocol. Using this revised protocol, we observed consistent intra- and interobserver results (kappa coefficient varying from 0.56 to 1). The mean interobserver difference for the solid portion of the tumor with contrast enhancement was 0.8 cm³ (limits of agreement = -16 to 17). We propose simple and precise rules for improving the accuracy and reliability of MRI interpretation for children with OPG. Further studies will be necessary to investigate the possible prognostic value of this approach.
Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Abotteen, K. M. (Principal Investigator)
1980-01-01
The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportion estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.
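A hedged sketch of sequential Bayesian proportion estimation in its simplest form, a Beta-Binomial update applied one analyst label at a time. This is a textbook stand-in, not the actual LACIE Phase 3 procedure, and the 0/1 labels below are invented.

```python
# Sequential Bayesian proportion estimation with a Beta-Binomial model
# (illustrative stand-in for the Bayes sequential procedure in the abstract).

def sequential_beta_update(labels, a=1.0, b=1.0):
    """Update a Beta(a, b) prior on the crop proportion one label at a time.

    labels: iterable of 0/1 analyst labels (e.g. not-wheat / wheat).
    Returns the posterior mean of the proportion after each observation.
    """
    means = []
    for y in labels:
        a += y
        b += 1 - y
        means.append(a / (a + b))
    return means

means = sequential_beta_update([1, 0, 1, 1, 0, 1])
# After 4 positives and 2 negatives on a uniform prior: (1+4)/(2+6) = 0.625
```

The sequential form is what allows sampling to stop early once the posterior is tight enough, which is the efficiency argument behind such procedures.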
Inferring Nonlinear Neuronal Computation Based on Physiologically Plausible Inputs
McFarland, James M.; Cui, Yuwei; Butts, Daniel A.
2013-01-01
The computation represented by a sensory neuron's response to stimuli is constructed from an array of physiological processes both belonging to that neuron and inherited from its inputs. Although many of these physiological processes are known to be nonlinear, linear approximations are commonly used to describe the stimulus selectivity of sensory neurons (i.e., linear receptive fields). Here we present an approach for modeling sensory processing, termed the Nonlinear Input Model (NIM), which is based on the hypothesis that the dominant nonlinearities imposed by physiological mechanisms arise from rectification of a neuron's inputs. Incorporating such ‘upstream nonlinearities’ within the standard linear-nonlinear (LN) cascade modeling structure implicitly allows for the identification of multiple stimulus features driving a neuron's response, which become directly interpretable as either excitatory or inhibitory. Because its form is analogous to an integrate-and-fire neuron receiving excitatory and inhibitory inputs, model fitting can be guided by prior knowledge about the inputs to a given neuron, and elements of the resulting model can often result in specific physiological predictions. Furthermore, by providing an explicit probabilistic model with a relatively simple nonlinear structure, its parameters can be efficiently optimized and appropriately regularized. Parameter estimation is robust and efficient even with large numbers of model components and in the context of high-dimensional stimuli with complex statistical structure (e.g. natural stimuli). We describe detailed methods for estimating the model parameters, and illustrate the advantages of the NIM using a range of example sensory neurons in the visual and auditory systems. We thus present a modeling framework that can capture a broad range of nonlinear response functions while providing physiologically interpretable descriptions of neural computation. PMID:23874185
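A minimal numerical sketch of the NIM's core structure as the abstract describes it: the firing rate is a nonlinear function of a weighted sum of rectified subunit inputs, with positive weights acting as excitation and negative weights as inhibition. The filters, stimulus, and softplus output nonlinearity below are invented for illustration, not fitted parameters from the paper.

```python
# Sketch of the Nonlinear Input Model structure: rate = F(sum_j w_j * relu(k_j . s)),
# with w_j > 0 excitatory and w_j < 0 inhibitory. All numbers are illustrative.
import math

def relu(x):
    return max(0.0, x)

def nim_rate(stimulus, filters, weights, offset=0.0):
    g = sum(w * relu(sum(k_i * s_i for k_i, s_i in zip(k, stimulus)))
            for k, w in zip(filters, weights))
    return math.log(1.0 + math.exp(g + offset))  # softplus output nonlinearity

stim = [1.0, -0.5, 0.2]
filters = [[0.8, 0.1, 0.0],    # excitatory subunit filter
           [-0.2, 0.9, 0.3]]   # inhibitory subunit filter
weights = [1.0, -0.5]
rate = nim_rate(stim, filters, weights)
```

Because each subunit is rectified before summation, a fitted subunit is directly interpretable as excitatory or inhibitory, which is the point the abstract emphasizes over plain linear receptive fields.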
Hierarchy of Certain Types of DNA Splicing Systems
NASA Astrophysics Data System (ADS)
Yusof, Yuhani; Sarmin, Nor Haniza; Goode, T. Elizabeth; Mahmud, Mazri; Heng, Fong Wan
A Head splicing system (H-system) consists of a finite set of strings (words) written over a finite alphabet, along with a finite set of rules that acts on the strings by iterated cutting and pasting to create a splicing language. Any interpretation that is aligned with Tom Head's original idea is one in which the strings represent double-stranded deoxyribonucleic acid (dsDNA) and the rules represent the cutting and pasting action of restriction enzymes and ligase, respectively. A new way of writing the rule sets is adopted so as to make the biological interpretation transparent. This approach is used in a formal language-theoretic analysis of the hierarchy of certain classes of splicing systems, namely simple, semi-simple and semi-null splicing systems. The relations between such systems and their associated languages are given as theorems, corollaries and counterexamples.
Computer model analysis of the radial artery pressure waveform.
Schwid, H A; Taylor, L A; Smith, N T
1987-10-01
Simultaneous measurements of aortic and radial artery pressures are reviewed, and a model of the cardiovascular system is presented. The model is based on resonant networks for the aorta and axillo-brachial-radial arterial system. The model chosen is a simple one, in order to make interpretation of the observed relationships clear. Despite its simplicity, the model produces realistic aortic and radial artery pressure waveforms. It demonstrates that the resonant properties of the arterial wall significantly alter the pressure waveform as it is propagated from the aorta to the radial artery. Although the mean and end-diastolic radial pressures are usually accurate estimates of the corresponding aortic pressures, the systolic pressure at the radial artery is often much higher than that of the aorta due to overshoot caused by the resonant behavior of the radial artery. The radial artery dicrotic notch is predominantly dependent on the axillo-brachial-radial arterial wall properties, rather than on the aortic valve or peripheral resistance. Hence the use of the radial artery dicrotic notch as an estimate of end systole is unreliable. The rate of systolic upstroke, dP/dt, of the radial artery waveform is a function of many factors, making it difficult to interpret. The radial artery waveform usually provides accurate estimates for mean and diastolic aortic pressures; for all other measurements it is an inadequate substitute for the aortic pressure waveform. In the presence of low forearm peripheral resistance the mean radial artery pressure may significantly underestimate the mean aortic pressure, as explained by a voltage divider model.
Visible Geology - Interactive online geologic block modelling
NASA Astrophysics Data System (ADS)
Cockett, R.
2012-12-01
Geology is a highly visual science, and many disciplines require spatial awareness and manipulation. For example, interpreting cross-sections, geologic maps, or plotting data on a stereonet all require various levels of spatial abilities. These skills are often not focused on in undergraduate geoscience curricula and many students struggle with spatial relations, manipulations, and penetrative abilities (e.g. Titus & Horsman, 2009). A newly developed program, Visible Geology, allows for students to be introduced to many geologic concepts and spatial skills in a virtual environment. Visible Geology is a web-based, three-dimensional environment where students can create and interrogate their own geologic block models. The program begins with a blank model; users then add geologic beds (with custom thickness and color) and can add geologic deformation events like tilting, folding, and faulting. Additionally, simple intrusive dikes can be modelled, as well as unconformities. Students can also explore the interaction of geology with topography by drawing elevation contours to produce their own topographic models. Students can not only spatially manipulate their model, but can create cross-sections and boreholes to practice their visual penetrative abilities. Visible Geology is easy to access and use, with no downloads required, so it can be incorporated into current, paper-based, lab activities. Sample learning activities are being developed that target introductory and structural geology curricula with learning objectives such as relative geologic history, fault characterization, apparent dip and thickness, interference folding, and stereonet interpretation. Visible Geology provides a richly interactive and immersive environment for students to explore geologic concepts and practice their spatial skills. Figure: screenshot of Visible Geology showing folding and faulting interactions on a ridge topography.
Free oscillations in a climate model with ice-sheet dynamics
NASA Technical Reports Server (NTRS)
Kallen, E.; Crafoord, C.; Ghil, M.
1979-01-01
A study of stable periodic solutions to a simple nonlinear model of the ocean-atmosphere-ice system is presented. The model has two dependent variables: ocean-atmosphere temperature and latitudinal extent of the ice cover. No explicit dependence on latitude is considered in the model. Hence all variables depend only on time and the model consists of a coupled set of nonlinear ordinary differential equations. The globally averaged ocean-atmosphere temperature in the model is governed by the radiation balance. The reflectivity to incoming solar radiation, i.e., the planetary albedo, includes separate contributions from sea ice and from continental ice sheets. The major physical mechanisms active in the model are (1) albedo-temperature feedback, (2) continental ice-sheet dynamics and (3) precipitation-rate variations. The model has three-equilibrium solutions, two of which are linearly unstable, while one is linearly stable. For some choices of parameters, the stability picture changes and sustained, finite-amplitude oscillations obtain around the previously stable equilibrium solution. The physical interpretation of these oscillations points to the possibility of internal mechanisms playing a role in glaciation cycles.
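An illustrative toy in the spirit of the model described above, not the Kallen et al. equations: a coupled pair of nonlinear ordinary differential equations for a globally averaged temperature T and an ice extent L, with the albedo-temperature feedback (more ice raises planetary albedo and cools the system) integrated by forward Euler. All coefficients are invented, and this toy does not reproduce the paper's sustained oscillations.

```python
# Toy coupled ocean-atmosphere-ice system (nondimensional, illustrative only):
# radiation balance drives T; ice grows when cold and decays otherwise.

def step(T, L, dt=0.01):
    albedo = 0.3 + 0.4 * L                 # ice raises planetary albedo
    dT = (1.0 - albedo) - 0.7 * T          # absorbed radiation minus emission
    dL = 0.5 * (0.5 - T) - 0.2 * L         # ice grows below T = 0.5, decays above
    return T + dt * dT, max(0.0, min(1.0, L + dt * dL))  # L clamped to [0, 1]

T, L = 0.0, 0.0
history = [(T, L)]
for _ in range(5000):
    T, L = step(T, L)
    history.append((T, L))
```

Even this stripped-down version shows the structural point of the abstract: with no explicit latitude dependence, the model reduces to a small set of coupled nonlinear ODEs in time whose stability depends on the feedback coefficients.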
A Methodology to Separate and Analyze a Seismic Wide Angle Profile
NASA Astrophysics Data System (ADS)
Weinzierl, Wolfgang; Kopp, Heidrun
2010-05-01
General solutions of inverse problems can often be obtained through the introduction of probability distributions to sample the model space. We present a simple approach of defining an a priori space in a tomographic study and retrieve the velocity-depth posterior distribution by a Monte Carlo method. Utilizing a fitting routine designed for very low statistics to set up and analyze the obtained tomography results, it is possible to statistically separate the velocity-depth model space derived from the inversion of seismic refraction data. An example of a profile acquired in the Lesser Antilles subduction zone reveals the effectiveness of this approach. The resolution analysis of the structural heterogeneity includes a divergence analysis which proves to be capable of dissecting long wide-angle profiles for deep crust and upper mantle studies. The complete information of any parameterised physical system is contained in the a posteriori distribution. Methods for analyzing and displaying key properties of the a posteriori distributions of highly nonlinear inverse problems are therefore essential in the scope of any interpretation. From this study we infer several conclusions concerning the interpretation of the tomographic approach. By calculating global as well as singular misfits of velocities we are able to map different geological units along a profile. Comparing velocity distributions with the result of a tomographic inversion along the profile we can mimic the subsurface structures in their extent and composition. The possibility of gaining a priori information for seismic refraction analysis by a simple solution to an inverse problem and subsequent resolution of structural heterogeneities through a divergence analysis is a new and simple way of defining a priori space and estimating the a posteriori mean and covariance in singular and general form.
The major advantage of a Monte Carlo based approach in our case study is the obtained knowledge of velocity depth distributions. Certainly the decision of where to extract velocity information on the profile for setting up a Monte Carlo ensemble is limiting the a priori space. However, the general conclusion of analyzing the velocity field according to distinct reference distributions gives us the possibility to define the covariance according to any geological unit if we have a priori information on the velocity depth distributions. Using the wide angle data recorded across the Lesser Antilles arc, we are able to resolve a shallow feature like the backstop by a robust and simple divergence analysis. We demonstrate the effectiveness of the new methodology to extract some key features and properties from the inversion results by including information concerning the confidence level of results.
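A hedged, drastically reduced sketch of what "sampling the model space" means here: a one-parameter inverse problem (a single layer velocity v inferred from noisy travel times t = d / v) sampled with a Metropolis random walk, from which a posterior mean follows. The real tomographic problem has many more parameters and a ray-tracing forward model; the offsets, noise level, and step size below are invented.

```python
# Metropolis sampling of a 1-D velocity model space (illustrative only).
import math
import random

random.seed(0)
distances = [10.0, 20.0, 30.0]            # source-receiver offsets (km)
true_v = 5.0                              # km/s, used only to synthesize data
data = [d / true_v + random.gauss(0, 0.05) for d in distances]

def log_likelihood(v):
    # Gaussian misfit between observed and predicted travel times
    return -sum((t - d / v) ** 2 for d, t in zip(distances, data)) / (2 * 0.05 ** 2)

samples, v = [], 3.0                      # start deliberately far from the truth
for _ in range(20000):
    proposal = v + random.gauss(0, 0.1)
    if proposal > 0:
        log_ratio = log_likelihood(proposal) - log_likelihood(v)
        if random.random() < math.exp(min(0.0, log_ratio)):
            v = proposal
    samples.append(v)

burned = samples[5000:]                   # discard burn-in
posterior_mean = sum(burned) / len(burned)
```

The ensemble of retained samples plays the role of the velocity-depth distributions in the abstract: its mean and spread are the a posteriori mean and covariance for this toy problem.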
NASA Astrophysics Data System (ADS)
Ginsberg, Edw. S.
2018-02-01
The compatibility of the Newtonian formulation of mechanical energy and the transformation equations of Galilean relativity is demonstrated for three simple examples of motion treated in most introductory physics courses (free fall, a frictionless inclined plane, and a mass/spring system). Only elementary concepts and mathematics, accessible to students at that level, are used. Emphasis is on pedagogy and concepts related to the transformation properties of potential energy.
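The free-fall case discussed above can be checked numerically: in the ground frame and in a frame moving downward at constant speed u, the work done by gravity equals the change in kinetic energy, even though both quantities take different values in the two frames. The masses, speeds, and times below are arbitrary.

```python
# Galilean-frame check of the work-energy theorem for free fall from rest.
g, m, u, t = 9.8, 2.0, 3.0, 1.7   # gravity, mass, frame speed, elapsed time

v = g * t                          # speed in the ground frame
h = 0.5 * g * t ** 2               # distance fallen in the ground frame

# Ground frame: W = m g h should equal the kinetic-energy change.
work = m * g * h
dKE = 0.5 * m * v ** 2

# Frame moving downward at u: velocities and displacement transform,
# W' = m g (h + u t) should equal the transformed kinetic-energy change.
v_prime_0, v_prime = u, v + u
h_prime = h + u * t
work_prime = m * g * h_prime
dKE_prime = 0.5 * m * (v_prime ** 2 - v_prime_0 ** 2)
```

Algebraically, dKE' = ½m((v+u)² - u²) = ½mv² + muv and W' = mgh + mgu·t = ½mv² + muv, so the two frames agree term by term, which is the compatibility the paper demonstrates with elementary mathematics.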
Characteristics of pattern formation and evolution in approximations of Physarum transport networks.
Jones, Jeff
2010-01-01
Most studies of pattern formation place particular emphasis on its role in the development of complex multicellular body plans. In simpler organisms, however, pattern formation is intrinsic to growth and behavior. Inspired by one such organism, the true slime mold Physarum polycephalum, we present examples of complex emergent pattern formation and evolution formed by a population of simple particle-like agents. Using simple local behaviors based on chemotaxis, the mobile agent population spontaneously forms complex and dynamic transport networks. By adjusting simple model parameters, maps of characteristic patterning are obtained. Certain areas of the parameter mapping yield particularly complex long term behaviors, including the circular contraction of network lacunae and bifurcation of network paths to maintain network connectivity. We demonstrate the formation of irregular spots and labyrinthine and reticulated patterns by chemoattraction. Other Turing-like patterning schemes were obtained by using chemorepulsion behaviors, including the self-organization of regular periodic arrays of spots, and striped patterns. We show that complex pattern types can be produced without resorting to the hierarchical coupling of reaction-diffusion mechanisms. We also present network behaviors arising from simple pre-patterning cues, giving simple examples of how the emergent pattern formation processes evolve into networks with functional and quasi-physical properties including tensionlike effects, network minimization behavior, and repair to network damage. The results are interpreted in relation to classical theories of biological pattern formation in natural systems, and we suggest mechanisms by which emergent pattern formation processes may be used as a method for spatially represented unconventional computation.
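A minimal agent-based sketch of this kind of chemotaxis-driven aggregation (not the paper's model; the sensing geometry, grid size, and all parameters are illustrative) fits in a few lines:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 64, 200                          # grid size, number of agents
trail = np.zeros((N, N))                # chemoattractant field
pos = rng.uniform(0, N, (M, 2))
ang = rng.uniform(0, 2 * np.pi, M)

def sense(p, a, offset, dist=3.0):
    """Sample the trail at a sensor offset from each agent's heading."""
    s = p + dist * np.stack([np.cos(a + offset), np.sin(a + offset)], 1)
    ij = np.floor(s).astype(int) % N
    return trail[ij[:, 0], ij[:, 1]]

for _ in range(100):
    # Chemotaxis: turn toward the strongest of three forward samples.
    left, mid, right = (sense(pos, ang, o) for o in (-0.4, 0.0, 0.4))
    ang += 0.3 * ((right > mid) & (right > left)) \
         - 0.3 * ((left > mid) & (left > right))
    # Move, deposit chemoattractant, and wrap at the boundaries.
    pos = (pos + np.stack([np.cos(ang), np.sin(ang)], 1)) % N
    ij = np.floor(pos).astype(int)
    np.add.at(trail, (ij[:, 0], ij[:, 1]), 1.0)
    trail *= 0.9                        # decay (diffusion omitted for brevity)
```

Even this stripped-down loop shows the key ingredient: structure in the trail field emerges purely from local sensing and deposition, with no global blueprint.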
Untangling Slab Dynamics Using 3-D Numerical and Analytical Models
NASA Astrophysics Data System (ADS)
Holt, A. F.; Royden, L.; Becker, T. W.
2016-12-01
Increasingly sophisticated numerical models have enabled us to make significant strides in identifying the key controls on how subducting slabs deform. For example, 3-D models have demonstrated that subducting plate width, and the related strength of toroidal flow around the plate edge, exerts a strong control on both the curvature and the rate of migration of the trench. However, the results of numerical subduction models can be difficult to interpret, and many first order dynamics issues remain at least partially unresolved. Such issues include the dominant controls on trench migration, the interdependence of asthenospheric pressure and slab dynamics, and how nearby slabs influence each other's dynamics. We augment 3-D, dynamically evolving finite element models with simple, analytical force-balance models to distill the physics associated with subduction into more manageable parts. We demonstrate that for single, isolated subducting slabs much of the complexity of our fully numerical models can be encapsulated by simple analytical expressions. Rates of subduction and slab dip correlate strongly with the asthenospheric pressure difference across the subducting slab. For double subduction, an additional slab gives rise to more complex mantle pressure and flow fields, and significantly extends the range of plate kinematics (e.g., convergence rate, trench migration rate) beyond those present in single slab models. Despite these additional complexities, we show that much of the dynamics of such multi-slab systems can be understood using the physics illuminated by our single slab study, and that a force-balance method can be used to relate intra-plate stress to viscous pressure in the asthenosphere and coupling forces at plate boundaries. This method has promise for rapid modeling of large systems of subduction zones on a global scale.
Modular Bundle Adjustment for Photogrammetric Computations
NASA Astrophysics Data System (ADS)
Börlin, N.; Murtiyoso, A.; Grussenmeyer, P.; Menna, F.; Nocerino, E.
2018-05-01
In this paper we investigate how the residuals in bundle adjustment can be split into a composition of simple functions. According to the chain rule, the Jacobian (linearisation) of the residual can be formed as a product of the Jacobians of the individual steps. When implemented, this enables a modularisation of the computation of the bundle adjustment residuals and Jacobians where each component has limited responsibility. This allows simple replacement of components to, e.g., implement different projection or rotation models by exchanging a module. The technique has previously been used to implement bundle adjustment in the open-source package DBAT (Börlin and Grussenmeyer, 2013) based on the Photogrammetric and Computer Vision interpretations of the Brown (1971) lens distortion model. In this paper, we applied the technique to investigate how affine distortions can be used to model the projection of a tilt-shift lens. Two extended distortion models were implemented to test the hypothesis that the ordering of the affine and lens distortion steps can be changed to reduce the size of the residuals of a tilt-shift lens calibration. Results on synthetic data confirm that the ordering of the affine and lens distortion steps matters and is detectable by DBAT. However, when applied to a real camera calibration data set of a tilt-shift lens, no difference between the extended models was seen. This suggests that the tested hypothesis is false and that other effects need to be modelled to better explain the projection. The relatively low implementation effort needed to generate the models suggests that the technique can be used to investigate other novel projection models in photogrammetry, including modelling changes in the 3D geometry to better understand the tilt-shift lens.
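The modularisation idea, where each step returns its value together with its Jacobian and the residual Jacobian is the chain-rule product, can be sketched as follows. The three steps here (translation, pinhole projection, focal scaling) are simplified stand-ins, not DBAT's actual modules:

```python
import numpy as np

# Each module returns its value and its Jacobian (simplified stand-ins).
def translate(x, t=np.array([0.1, -0.2, 5.0])):
    return x + t, np.eye(3)

def project(x):                     # pinhole projection onto the z = 1 plane
    u = x[:2] / x[2]
    J = np.array([[1 / x[2], 0.0, -x[0] / x[2] ** 2],
                  [0.0, 1 / x[2], -x[1] / x[2] ** 2]])
    return u, J

def scale(u, f=800.0):              # focal-length scaling
    return f * u, f * np.eye(2)

def residual_and_jacobian(x):
    y1, J1 = translate(x)
    y2, J2 = project(y1)
    y3, J3 = scale(y2)
    return y3, J3 @ J2 @ J1         # chain rule: product of module Jacobians

x0 = np.array([0.2, 0.1, 1.0])
r, J = residual_and_jacobian(x0)
```

Swapping in a different projection or distortion model then only requires replacing one module and its local Jacobian; the composition logic is untouched.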
Tungsten isotope evidence that mantle plumes contain no contribution from the Earth's core
NASA Astrophysics Data System (ADS)
Scherstén, Anders; Elliott, Tim; Hawkesworth, Chris; Norman, Marc
2004-01-01
Osmium isotope ratios provide important constraints on the sources of ocean-island basalts, but two very different models have been put forward to explain such data. One model interprets 187Os-enrichments in terms of a component of recycled oceanic crust within the source material. The other model infers that interaction of the mantle with the Earth's outer core produces the isotope anomalies and, as a result of coupled 186Os-187Os anomalies, put time constraints on inner-core formation. Like osmium, tungsten is a siderophile (`iron-loving') element that preferentially partitioned into the Earth's core during core formation but is also `incompatible' during mantle melting (it preferentially enters the melt phase), which makes it further depleted in the mantle. Tungsten should therefore be a sensitive tracer of core contributions in the source of mantle melts. Here we present high-precision tungsten isotope data from the same set of Hawaiian rocks used to establish the previously interpreted 186Os-187Os anomalies and on selected South African rocks, which have also been proposed to contain a core contribution. None of the samples that we have analysed have a negative tungsten isotope value, as predicted from the core-contribution model. This rules out a simple core-mantle mixing scenario and suggests that the radiogenic osmium in ocean-island basalts can better be explained by the source of such basalts containing a component of recycled crust.
Surface vibrational structure at alkane liquid/vapor interfaces
NASA Astrophysics Data System (ADS)
Esenturk, Okan; Walker, Robert A.
2006-11-01
Broadband vibrational sum frequency spectroscopy (VSFS) has been used to examine the surface structure of alkane liquid/vapor interfaces. The alkanes range in length from n-nonane (C9H20) to n-heptadecane (C17H36), and all liquids except heptadecane are studied at temperatures well above their bulk (and surface) freezing temperatures. Intensities of vibrational bands in the CH stretching region acquired under different polarization conditions show systematic, chain length dependent changes. Data provide clear evidence of methyl group segregation at the liquid/vapor interface, but two different models of alkane chain structure can predict chain length dependent changes in band intensities. Each model leads to a different interpretation of the extent to which different chain segments contribute to the anisotropic interfacial region. One model postulates that changes in vibrational band intensities arise solely from a reduced surface coverage of methyl groups as alkane chain length increases. The additional methylene groups at the surface must be randomly distributed and make no net contribution to the observed VSF spectra. The second model considers a simple statistical distribution of methyl and methylene groups populating a three dimensional, interfacial lattice. This statistical picture implies that the VSF signal arises from a region extending several functional groups into the bulk liquid, and that the growing fraction of methylene groups in longer chain alkanes bears responsibility for the observed spectral changes. The data and resulting interpretations provide clear benchmarks for emerging theories of molecular structure and organization at liquid surfaces, especially for liquids lacking strong polar ordering.
A causal examination of the effects of confounding factors on multimetric indices
Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William; Mitchell, Brian R.; Guntenspergen, Glenn R.
2013-01-01
The development of multimetric indices (MMIs) as a means of providing integrative measures of ecosystem condition is becoming widespread. An increasingly recognized problem for the interpretability of MMIs is controlling for the potentially confounding influences of environmental covariates. Most common approaches to handling covariates are based on simple notions of statistical control, leaving the causal implications of covariates and their adjustment unstated. In this paper, we use graphical models to examine some of the potential impacts of environmental covariates on the observed signals between human disturbance and potential response metrics. Using simulations based on various causal networks, we show how environmental covariates can both obscure and exaggerate the effects of human disturbance on individual metrics. We then examine from a causal interpretation standpoint the common practice of adjusting ecological metrics for environmental influences using only the set of sites deemed to be in reference condition. We present and examine the performance of an alternative approach to metric adjustment that uses the whole set of sites and models both environmental and human disturbance effects simultaneously. The findings from our analyses indicate that failing to model and adjust metrics can result in a systematic bias towards those metrics in which environmental covariates function to artificially strengthen the metric–disturbance relationship resulting in MMIs that do not accurately measure impacts of human disturbance. We also find that a “whole-set modeling approach” requires fewer assumptions and is more efficient with the given information than the more commonly applied “reference-set” approach.
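The central point, that an environmental covariate correlated with disturbance can artificially strengthen a metric's apparent disturbance signal and that modeling the whole set of sites lets one adjust for it, can be sketched with a simulated causal network. The coefficients are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
disturb = rng.normal(size=n)                 # human disturbance gradient
covar = 0.8 * disturb + rng.normal(size=n)   # covariate linked to disturbance

# The metric responds weakly to disturbance but strongly to the covariate,
# so the raw correlation exaggerates the disturbance effect.
metric = 0.2 * disturb + 1.0 * covar + rng.normal(size=n)
raw_r = np.corrcoef(disturb, metric)[0, 1]

# Whole-set adjustment: regress the covariate out using every site,
# then re-examine the disturbance signal.
beta = np.polyfit(covar, metric, 1)
adjusted = metric - np.polyval(beta, covar)
adj_r = np.corrcoef(disturb, adjusted)[0, 1]
```

In this configuration the unadjusted correlation is several times larger than the adjusted one, illustrating how an unmodeled covariate can bias metric selection toward artificially strengthened metric-disturbance relationships.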
Detonation shock dynamics with an acceleration relation for nitromethane and TATB
NASA Astrophysics Data System (ADS)
Swift, Damian; Kraus, Richard; Mulford, Roberta; White, Stephen
2015-06-01
The propagation of curved detonation waves has been treated phenomenologically through models of the speed D of a detonation wave as a function of its curvature K, in the Whitham-Bdzil-Lambourn model, also known as detonation shock dynamics. D(K) relations, and the edge angle with adjacent material, have been deduced from the steady shape of detonation waves in long rods and slabs of explosive. Nonlinear D(K) relations have proven necessary to interpret data from charges of different diameter, and even then the D(K) relation may not transfer between diameters. This is an indication that the D(K) relation oversimplifies the kinematics. It is also possible to interpret wave-shape data in terms of an acceleration relation, as used in Brun's Jouguet relaxe model. One form of acceleration behavior is to couple an asymptotic D(K) relation with a time-dependent relaxation toward it from the instantaneous, local speed. This approach is also capable of modeling overdriving of a detonation by a booster. Using archival data for the TATB-based explosive EDC35 and for nitromethane, we found that a simple linear asymptotic D(K) relation with a constant relaxation rate was able to reproduce the experimental wave-shapes better, with fewer parameters, than a nonlinear instantaneous D(K) relation. This work was performed in part under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
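The acceleration-relation idea, a linear asymptotic D(K) relation toward which the instantaneous detonation speed relaxes at a constant rate, can be sketched as a simple ODE integration. All coefficients are illustrative, not the calibrated EDC35 or nitromethane values:

```python
# Relaxation toward a linear asymptotic D(K) relation (illustrative values).
D_cj, alpha, tau = 8.8, 2.0, 0.5    # CJ speed, D(K) slope, relaxation time

def d_asym(K):
    return D_cj - alpha * K         # linear asymptotic D(K) relation

def evolve(D0, K, t_end=5.0, dt=1e-3):
    """Explicit-Euler relaxation of the instantaneous speed toward d_asym."""
    D = D0
    for _ in range(int(t_end / dt)):
        D += dt * (d_asym(K) - D) / tau
    return D

D_over = evolve(D0=9.5, K=0.1)      # overdriven: starts above the asymptote
D_under = evolve(D0=7.5, K=0.1)     # underdriven: starts below it
```

Both trajectories converge to the same asymptotic speed for a given curvature, which is how this formulation accommodates overdriving by a booster while preserving the steady D(K) behavior.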
Synthesis and Spectra of Vanadium Complexes.
ERIC Educational Resources Information Center
Ophardt, Charles E.; Stupgia, Sean
1984-01-01
Describes an experiment which illustrates simple synthetic techniques, redox principles in synthesis reactions, interpretation of visible spectra using Orgel diagrams, and the spectrochemical series. The experiment is suitable for the advanced undergraduate inorganic chemistry laboratory. (JN)
Understanding Singular Vectors
ERIC Educational Resources Information Center
James, David; Botteron, Cynthia
2013-01-01
matrix yields a surprisingly simple, heuristical approximation to its singular vectors. There are correspondingly good approximations to the singular values. Such rules of thumb provide an intuitive interpretation of the singular vectors that helps explain why the SVD is so…
Mapping land cover from satellite images: A basic, low cost approach
NASA Technical Reports Server (NTRS)
Elifrits, C. D.; Barney, T. W.; Barr, D. J.; Johannsen, C. J.
1978-01-01
Simple, inexpensive methodologies developed for mapping general land cover and land use categories from LANDSAT images are reported. One methodology, a stepwise, interpretive, direct tracing technique was developed through working with university students from different disciplines with no previous experience in satellite image interpretation. The technique results in maps that are very accurate in relation to actual land cover and relative to the small investment in skill, time, and money needed to produce the products.
Incorporating uncertainty into medical decision making: an approach to unexpected test results.
Bianchi, Matt T; Alexander, Brian M; Cash, Sydney S
2009-01-01
The utility of diagnostic tests derives from the ability to translate the population concepts of sensitivity and specificity into information that will be useful for the individual patient: the predictive value of the result. As the array of available diagnostic testing broadens, there is a temptation to de-emphasize history and physical findings and defer to the objective rigor of technology. However, diagnostic test interpretation is not always straightforward. One significant barrier to routine use of probability-based test interpretation is the uncertainty inherent in pretest probability estimation, the critical first step of Bayesian reasoning. The context in which this uncertainty presents the greatest challenge is when test results oppose clinical judgment. It is this situation when decision support would be most helpful. The authors propose a simple graphical approach that incorporates uncertainty in pretest probability and has specific application to the interpretation of unexpected results. This method quantitatively demonstrates how uncertainty in disease probability may be amplified when test results are unexpected (opposing clinical judgment), even for tests with high sensitivity and specificity. The authors provide a simple nomogram for determining whether an unexpected test result suggests that one should "switch diagnostic sides." This graphical framework overcomes the limitation of pretest probability uncertainty in Bayesian analysis and guides decision making when it is most challenging: interpretation of unexpected test results.
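The underlying calculation, Bayes' theorem applied over a band of pretest probabilities rather than a point estimate, makes the amplification of uncertainty by an unexpected result concrete. The numbers below are illustrative, not the authors' nomogram:

```python
import numpy as np

def post_prob(pretest, sens, spec, positive=True):
    """Posterior disease probability from Bayes' theorem."""
    if positive:
        return sens * pretest / (sens * pretest + (1 - spec) * (1 - pretest))
    return (1 - sens) * pretest / ((1 - sens) * pretest + spec * (1 - pretest))

# An unexpected NEGATIVE result against strong clinical suspicion:
# propagate a band of pretest probabilities instead of a point estimate.
pretest_band = np.linspace(0.70, 0.95, 5)
post_band = [post_prob(p, sens=0.95, spec=0.90, positive=False)
             for p in pretest_band]
```

With these numbers, a pretest probability of 0.70 falls to about 0.11 after the negative result, while a pretest of 0.95 only falls to about 0.51: the posterior band is wider than the pretest band, and whether one should "switch diagnostic sides" depends entirely on where within the pretest band the true probability lies.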
Finite element model for MOI applications using A-V formulation
NASA Astrophysics Data System (ADS)
Xuan, L.; Shanker, B.; Udpa, L.; Shih, W.; Fitzpatrick, G.
2001-04-01
Magneto-optic imaging (MOI) is a relatively new sensor technology, an extension of bubble memory technology to NDT, that produces easy-to-interpret, real-time analog images. MOI systems use a magneto-optic (MO) sensor to produce analog images of magnetic flux leakage from surface and subsurface defects. The instrument's capability in detecting the relatively weak magnetic fields associated with subsurface defects depends on the sensitivity of the magneto-optic sensor. The availability of a theoretical model that can simulate MOI system performance is extremely important for optimization of the MOI sensor and hardware system. A nodal finite element model based on a magnetic vector potential formulation has been developed for simulating the MOI phenomenon. This model has been used for predicting the magnetic fields in a simple test geometry with corrosion dome defects. In the case of test samples with multiple discontinuities, a more robust model using the magnetic vector potential Ā and the electrical scalar potential V is required. In this paper, a finite element model based on the A-V formulation is developed to model complex circumferential cracks under aluminum rivets in a dimpled countersink.
Xu, Yun; Muhamadali, Howbeer; Sayqal, Ali; Dixon, Neil; Goodacre, Royston
2016-10-28
Partial least squares (PLS) is one of the most commonly used supervised modelling approaches for analysing multivariate metabolomics data. PLS is typically employed as either a regression model (PLS-R) or a classification model (PLS-DA). However, in metabolomics studies it is common to investigate multiple, potentially interacting, factors simultaneously following a specific experimental design. Such data often cannot be considered as a "pure" regression or a classification problem. Nevertheless, these data have often still been treated as a regression or classification problem and this could lead to ambiguous results. In this study, we investigated the feasibility of designing a hybrid target matrix Y that better reflects the experimental design than simple regression or binary class membership coding commonly used in PLS modelling. The new design of Y coding was based on the same principle used by structural modelling in machine learning techniques. Two real metabolomics datasets were used as examples to illustrate how the new Y coding can improve the interpretability of the PLS model compared to classic regression/classification coding.
Zonostrophic instability driven by discrete particle noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
St-Onge, D. A.; Krommes, J. A.
The consequences of discrete particle noise for a system possessing a possibly unstable collective mode are discussed. It is argued that a zonostrophic instability (of homogeneous turbulence to the formation of zonal flows) occurs just below the threshold for linear instability. The scenario provides a new interpretation of the random forcing that is ubiquitously invoked in stochastic models such as the second-order cumulant expansion or stochastic structural instability theory; neither intrinsic turbulence nor coupling to extrinsic turbulence is required. A representative calculation of the zonostrophic neutral curve is made for a simple two-field model of toroidal ion-temperature-gradient-driven modes. To the extent that the damping of zonal flows is controlled by the ion-ion collision rate, the point of zonostrophic instability is independent of that rate. Published by AIP Publishing.
Back-and-forth micromotion of aqueous droplets in a dc electric field.
Kurimura, Tomo; Ichikawa, Masatoshi; Takinoue, Masahiro; Yoshikawa, Kenichi
2013-10-01
Recently, it was reported that an aqueous droplet in an oil phase exhibited rhythmic back-and-forth motion under stationary dc voltage on the order of 100 V. Here, we demonstrate that the threshold voltage for inducing such oscillation is successfully decreased to the order of 10 V through downsizing of the experimental system. Notably, the threshold electric field tends to decrease with a nonlinear scaling relationship accompanying the downsizing. We derive a simple theoretical model to interpret the system-size dependence of the threshold voltage. This model equation suggests a unique effect of additional noise, which is qualitatively characterized in an actual experiment as a kind of coherence resonance. Our result would provide insight into the construction of micrometer-sized self-commutating motors and actuators in microfluidic and micromechanical devices.
Zonostrophic instability driven by discrete particle noise
St-Onge, D. A.; Krommes, J. A.
2017-04-01
The consequences of discrete particle noise for a system possessing a possibly unstable collective mode are discussed. It is argued that a zonostrophic instability (of homogeneous turbulence to the formation of zonal flows) occurs just below the threshold for linear instability. The scenario provides a new interpretation of the random forcing that is ubiquitously invoked in stochastic models such as the second-order cumulant expansion or stochastic structural instability theory; neither intrinsic turbulence nor coupling to extrinsic turbulence is required. A representative calculation of the zonostrophic neutral curve is made for a simple two-field model of toroidal ion-temperature-gradient-driven modes. To the extent that the damping of zonal flows is controlled by the ion-ion collision rate, the point of zonostrophic instability is independent of that rate. Published by AIP Publishing.
Evaluation of generalized degrees of freedom for sparse estimation by replica method
NASA Astrophysics Data System (ADS)
Sakata, A.
2016-12-01
We develop a method to evaluate the generalized degrees of freedom (GDF) for linear regression with sparse regularization. The GDF is a key factor in model selection, and thus its evaluation is useful in many modelling applications. An analytical expression for the GDF is derived using the replica method in the large-system-size limit with random Gaussian predictors. The resulting formula has a universal form that is independent of the type of regularization, providing us with a simple interpretation. Within the framework of replica symmetric (RS) analysis, GDF has a physical meaning as the effective fraction of non-zero components. The validity of our method in the RS phase is supported by the consistency of our results with previous mathematical results. The analytical results in the RS phase are calculated numerically using the belief propagation algorithm.
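The RS-phase interpretation of the GDF as the effective number of non-zero components can be checked numerically in the simplest setting, soft-thresholding with an orthogonal design, where the covariance definition of the GDF and the expected count of non-zeros should agree. This is a Monte Carlo sketch, not the replica calculation itself:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma, lam = 200, 1.0, 1.0
mu = np.zeros(n)
mu[:20] = 3.0                        # sparse true means (orthogonal design)

def soft(y, t):
    """Soft-thresholding: the lasso estimator under an orthogonal design."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

# Monte Carlo GDF: (1 / sigma^2) * sum_i cov(y_i, muhat_i).
reps = 4000
Y = mu + sigma * rng.normal(size=(reps, n))
Muhat = soft(Y, lam)
gdf = ((Y - mu) * (Muhat - Muhat.mean(0))).mean(0).sum() / sigma ** 2
nonzero = (Muhat != 0).mean(0).sum()  # expected number of non-zero components
```

Up to Monte Carlo error, `gdf` matches `nonzero`, the known degrees-of-freedom result for the lasso with orthogonal predictors and the finite-size analogue of the effective non-zero fraction described above.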
Model-based gene set analysis for Bioconductor.
Bauer, Sebastian; Robinson, Peter N; Gagneur, Julien
2011-07-01
Gene Ontology and other forms of gene-category analysis play a major role in the evaluation of high-throughput experiments in molecular biology. Single-category enrichment analysis procedures such as Fisher's exact test tend to flag large numbers of redundant categories as significant, which can complicate interpretation. We have recently developed an approach called model-based gene set analysis (MGSA) that substantially reduces the number of redundant categories returned by gene-category analysis. In this work, we present the Bioconductor package mgsa, which makes the MGSA algorithm available to users of the R language. Our package provides a simple and flexible application programming interface for applying the approach. The mgsa package has been made available as part of Bioconductor 2.8. It is released under the conditions of the Artistic License 2.0. Contact: peter.robinson@charite.de; julien.gagneur@embl.de.
Bailey, John D; Harrington, Constance A
2006-04-01
Past research has established that terminal buds of Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) seedlings from many seed sources have a chilling requirement of about 1200 h at 0-5 degrees C; once chilled, temperatures > 5 degrees C force bud burst via accumulation of heat units. We tested this sequential bud-burst model in the field to determine whether terminal buds of trees in cooler microsites, which receive less heat forcing, develop more slowly than those in warmer microsites. For three years we monitored terminal bud development in young saplings as well as soil and air temperatures on large, replicated plots in a harvest unit; plots differed in microclimate based on the amount of harvest residue and shade from neighboring stands. In two of three years, trees on cooler microsites broke bud 2 to 4 days earlier than those on warmer microsites, despite receiving less heat forcing from March to May each year. A simple sequential model did not predict cooler sites having earlier bud burst, nor did it correctly predict the order of bud burst across the three years. We modified the basic heat-forcing model to initialize, or reset to zero, the accumulation of heat units whenever significant freezing events (≥ 3 degree-hours per day below 0 degrees C) occurred; this modified model correctly predicted the sequence of bud burst across years. Soil temperature alone or in combination with air temperature did not improve our predictions of bud burst. Past models of bud burst have relied heavily on data from controlled experiments with simple temperature patterns; analysis of more variable temperature patterns from our 3-year field trial, however, indicated that simple models of bud burst are inaccurate.
More complex models that incorporate chilling hours, heat forcing, photoperiod, and the occurrence of freeze events in the spring may be needed to predict the effects of future silvicultural treatments as well as to interpret the implications of climate-change scenarios. Developing and testing new models will require data from both field and controlled-environment experiments.
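The modification described above, resetting heat-unit accumulation whenever a significant freeze event occurs, can be sketched directly. The thresholds and the synthetic temperature series are illustrative, not the study's calibrated values:

```python
def bud_burst_day(temps, forcing_req=250.0, base=5.0, freeze_reset=3.0):
    """Day the heat-unit target is met; accumulation resets to zero on any
    day with >= freeze_reset degree-hours below 0 C (the modified model)."""
    heat = 0.0
    for day, (t_mean, freeze_dh) in enumerate(temps):
        if freeze_dh >= freeze_reset:
            heat = 0.0                       # reset on a significant freeze
        heat += max(t_mean - base, 0.0)      # daily heat forcing above base
        if heat >= forcing_req:
            return day
    return None

# Hypothetical spring: steady warming, then one hard freeze on day 30.
days = [(5.0 + 0.3 * d, 0.0) for d in range(120)]
plain = bud_burst_day(days)                  # no freeze events
days[30] = (5.0 + 0.3 * 30, 5.0)             # freeze resets the accumulation
reset = bud_burst_day(days)
```

A single freeze event pushes predicted bud burst later, which is the qualitative behavior the modified model needs in order to reorder predictions between warm and cool microsites.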
NASA Astrophysics Data System (ADS)
Oses, Corey; Isayev, Olexandr; Toher, Cormac; Curtarolo, Stefano; Tropsha, Alexander
Historically, materials discovery has been driven by a laborious trial-and-error process. The growth of materials databases and emerging informatics approaches finally offer the opportunity to transform this practice into data- and knowledge-driven rational design, accelerating the discovery of novel materials exhibiting desired properties. Using data from the AFLOW repository for high-throughput ab-initio calculations, we have generated Quantitative Materials Structure-Property Relationship (QMSPR) models to predict critical materials properties, including the metal/insulator classification, band gap energy, and bulk modulus. The prediction accuracy obtained with these QMSPR models approaches that of the training data for virtually any stoichiometric inorganic crystalline material. We attribute the success and universality of these models to the construction of new materials descriptors, referred to as the universal Property-Labeled Material Fragments (PLMF). This representation affords straightforward model interpretation in terms of simple heuristic design rules that could guide rational materials design. This proof-of-concept study demonstrates the power of materials informatics to dramatically accelerate the search for new materials.
Interpretation of Ground Temperature Anomalies in Hydrothermal Discharge Areas
NASA Astrophysics Data System (ADS)
Price, A. N.; Lindsey, C.; Fairley, J. P., Jr.
2017-12-01
Researchers have long noted the potential for shallow hydrothermal fluids to perturb near-surface temperatures. Several investigators have made qualitative or semi-quantitative use of elevated surface temperatures; for example, in snowfall calorimetry, or for tracing subsurface flow paths. However, little effort has been expended to develop a quantitative framework connecting surface temperature observations with conditions in the subsurface. Here, we examine an area of shallow subsurface flow at Burgdorf Hot Springs, in the Payette National Forest, north of McCall, Idaho USA. We present a simple analytical model that uses easily-measured surface data to infer the temperatures of laterally-migrating shallow hydrothermal fluids. The model is calibrated using shallow ground temperature measurements and overburden thickness estimates from seismic refraction studies. The model predicts conditions in the shallow subsurface, and suggests that the Biot number may place a more important control on the expression of near-surface thermal perturbations than previously thought. In addition, our model may have application in inferring difficult-to-measure parameters, such as shallow subsurface discharge from hydrothermal springs.
Games among relatives revisited.
Allen, Benjamin; Nowak, Martin A
2015-08-07
We present a simple model for the evolution of social behavior in family-structured, finite sized populations. Interactions are represented as evolutionary games describing frequency-dependent selection. Individuals interact more frequently with siblings than with members of the general population, as quantified by an assortment parameter r, which can be interpreted as "relatedness". Other models, mostly of spatially structured populations, have shown that assortment can promote the evolution of cooperation by facilitating interaction between cooperators, but this effect depends on the details of the evolutionary process. For our model, we find that sibling assortment promotes cooperation in stringent social dilemmas such as the Prisoner's Dilemma, but not necessarily in other situations. These results are obtained through straightforward calculations of changes in gene frequency. We also analyze our model using inclusive fitness. We find that the quantity of inclusive fitness does not exist for general games. For special games, where inclusive fitness exists, it provides less information than the straightforward analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
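For the donation-game form of the Prisoner's Dilemma, the effect of the assortment parameter r reduces to a simple payoff comparison. Under the interaction assumption above, and with illustrative payoff values, cooperation is favored exactly when r exceeds the cost-to-benefit ratio:

```python
# Donation-game Prisoner's Dilemma with assortment r: a cooperator's partner
# cooperates with probability r + (1 - r) * x, a defector's with (1 - r) * x,
# where x is the cooperator frequency (b, c values are illustrative).
b, c = 3.0, 1.0                        # benefit and cost of cooperation

def payoffs(x, r):
    f_coop = b * (r + (1 - r) * x) - c
    f_defect = b * (1 - r) * x
    return f_coop, f_defect

# For this specification, f_coop - f_defect = b * r - c for every x,
# so cooperation is favored exactly when r > c / b.
fc_hi, fd_hi = payoffs(x=0.5, r=0.5)   # r above c / b = 1/3
fc_lo, fd_lo = payoffs(x=0.5, r=0.2)   # r below c / b
```

This frequency-independent threshold is special to the stringent donation game; as the abstract notes, assortment need not promote cooperation in other games, where the payoff difference does depend on x.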
Levine, Matthew E; Albers, David J; Hripcsak, George
2016-01-01
Time series analysis methods have been shown to reveal clinical and biological associations in data collected in the electronic health record. We wish to develop reliable high-throughput methods for identifying adverse drug effects that are easy to implement and produce readily interpretable results. To move toward this goal, we used univariate and multivariate lagged regression models to investigate associations between twenty pairs of drug orders and laboratory measurements. Multivariate lagged regression models exhibited higher sensitivity and specificity than univariate lagged regression in the 20 examples, and incorporating autoregressive terms for labs and drugs produced more robust signals in cases of known associations among the 20 example pairings. Moreover, including inpatient admission terms in the model attenuated the signals for some cases of unlikely associations, demonstrating how multivariate lagged regression models' explicit handling of context-based variables can provide a simple way to probe for health-care processes that confound analyses of EHR data.
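A minimal version of such a multivariate lagged regression, a lab value regressed on lagged drug-order indicators plus an autoregressive lab term, can be sketched on simulated data. The three-day effect and all magnitudes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
T, lag_true, effect = 500, 3, 2.0
drug = (rng.random(T) < 0.2).astype(float)    # daily drug-order indicator
lab = rng.normal(size=T)                      # baseline lab variation
lab[lag_true:] += effect * drug[:-lag_true]   # drug shifts the lab 3 days on

# lab_t regressed on drug_{t-1..t-5} plus one autoregressive lab term (OLS).
L = 5
idx = np.arange(L, T)
X = np.column_stack([np.ones(len(idx))]
                    + [drug[idx - k] for k in range(1, L + 1)]
                    + [lab[idx - 1]])
beta, *_ = np.linalg.lstsq(X, lab[idx], rcond=None)
best_lag = int(np.argmax(np.abs(beta[1:L + 1]))) + 1
```

The coefficient profile across lags localizes the effect in time; in a full analysis one would add inpatient-admission and other context terms in the same design matrix, which is how the multivariate form attenuates confounded signals.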
Transformation of MT Resistivity Sections into Geologically Meaningful Images
NASA Astrophysics Data System (ADS)
Park, S. K.
2004-05-01
Earthscope offers an unprecedented opportunity for interdisciplinary studies of North America. In addition to a continent-wide seismic study, it includes the acquisition of magnetotelluric (MT) data at many of the Bigfoot array sites. Earthscope will thus provide a uniform 3-D MT survey over regional scales when completed. MT interpreters will be able to include 3-D regional effects in their models for the first time, even when interpreting local studies. However, the full value of the interdisciplinary nature of Earthscope will be realized only if MT sections and maps are useful to other earth scientists. The standard final product from any 2-D or 3-D MT interpretation is a spatial distribution of electrical resistivity. Inference of the physicochemical state from bulk resistivity is complicated because a variety of factors influence the property, including temperature, intrinsic conduction of silicates, and small amounts of interconnected conducting materials (e.g., graphite, metallic minerals, partial melt, fluid). Here, I use petrophysical measurements and a petrological model to transform a resistivity section into cross sections of temperature and partial melt fraction in the mantle beneath the Sierra Nevada. In this manner, I am able to separate the contributions of increasing temperature and melt fraction to the bulk resistivity. Predicted melt fractions match observations from xenoliths relatively well, but temperatures are systematically 200 °C higher than those observed. A small amount of dissolved hydrogen (~70 ppm H/Si) lowers the predicted temperatures to match those from the xenoliths, however. I conclude that while this transformation is a simple first step based on many assumptions, initial results are promising.
A Comprehensive View Of Taiwan Orogeny From TAIGER Perspective
NASA Astrophysics Data System (ADS)
Wu, F. T.; Kuochen, H.; McIntosh, K. D.; Okaya, D. A.; Lavier, L. L.
2012-12-01
Arc-continent collision is one of the basic mechanisms for building continental masses. Taiwan is young and very active. Based on known geology, a multi-disciplinary geophysical experiment was designed to image the orogeny in action. Logistics for R/V Langseth, OBS, and PASSCAL instruments were complex; nevertheless, the fieldwork was completed within the project period. The resulting dataset allows us to map the structures of the shallow crust and the upper mantle. The amount of data gathered is large; some key observations and current interpretations are: (I) Observation: crustal roots on both the Eurasian and Philippine Sea plates, with a high-velocity rise in between. Interpretation: deformation throughout the lithosphere on both sides of the initial suture; shortening of the lithosphere near the plate boundary produces the high-velocity rise. (II) Observation: an upper-mantle high-velocity anomaly coincides with steep east-dipping Wadati-Benioff seismicity in southern Taiwan; the anomaly continues part of the way to central Taiwan but is aseismic; under northern Taiwan the anomaly is very weak and disorganized. Interpretation: active subduction in the south (up to 22.8°N) and possibly eclogitization in the lower crust and delamination in central Taiwan. (III) Observation: low Vp/Vs and low resistivity in the core of the Central Range. Interpretation: dry, felsic rocks at relatively high temperature (up to 750°C). (IV) Observation: strong SKS splitting (~2 sec) with trend-parallel fast axes. Interpretation: shearing throughout the uppermost mantle. Preliminary 2-D geodynamic modeling reproduces the primary observed features from a simple initial model of an arc impinging on a continental margin.
Using bioimpedance spectroscopy parameters as real-time feedback during tDCS.
Nejadgholi, Isar; Caytak, Herschel; Bolic, Miodrag
2016-08-01
An exploratory analysis is carried out to investigate the feasibility of using BioImpedance Spectroscopy (BIS) parameters, measured on the scalp, as real-time feedback during Transcranial Direct Current Stimulation (tDCS). tDCS has been shown to be a potential treatment for neurological disorders. However, this technique is not considered a reliable clinical treatment, due to the lack of a measurable indicator of treatment efficacy. Although the voltage applied to the head is very simple to measure during a tDCS session, changes in voltage are difficult to interpret in terms of variables that affect clinical outcome. BIS parameters are considered potential feedback parameters because: 1) they are shown to be associated with the DC voltage applied to the head; 2) they are interpretable in terms of the conductive and capacitive properties of head tissues; 3) the physical interpretation of BIS measurements makes them amenable to adjustment by clinically controllable variables; 4) BIS parameters are measurable in a cost-effective and safe way and do not interfere with DC stimulation. This research indicates that a quadratic regression model can predict the DC voltage between anode and cathode based on parameters extracted from BIS measurements. These parameters are extracted by fitting the measured BIS spectra to an equivalent electrical circuit model. The effect of clinical tDCS variables on BIS parameters needs to be investigated in future work. This work suggests that BIS is a potential method for monitoring a tDCS session in order to adjust, tailor, or personalize tDCS treatment protocols.
Cold, warm, and composite (cool) cosmic string models
NASA Astrophysics Data System (ADS)
Carter, B.
1994-01-01
The dynamical behaviour of a cosmic string is strongly affected by any reduction of the effective string tension T below the constant value, T = m² say, that typifies a simple, longitudinally Lorentz-invariant Goto-Nambu type string model, where m is a fixed mass scale determined by the internal structure of an underlying Nielsen-Olesen type vacuum vortex. Such a reduction of tension occurs in the standard ``warm'' cosmic string model in which the effect of thermal perturbations of a simple Goto-Nambu model is represented by an effective tension T given in terms of the corresponding effective temperature, Θ say, by T² = m²(m² − πΘ²/3). A qualitatively similar though analytically more complicated tension reduction phenomenon occurs in ``cold'' conducting cosmic string models of the kind whose existence was first proposed by Witten, where the role of the temperature is played by an effective mass or chemical potential μ that is constructed as the scalar magnitude of the energy momentum covector obtained as the gradient of the phase ϕ of a bosonic condensate in the core of the vacuum vortex. The present article describes the construction and essential mechanical properties of a new category of composite ``cool'' cosmic string models that are intermediate between these ``warm'' and ``cold'' limit cases. These composite models are the string analogues of the standard Landau model for a two-constituent finite temperature superfluid, and as such involve two independent currents interpretable as that of the entropy on the one hand and that of the bosonic condensate on the other. It is surmised that the stationary (in particular ring) equilibrium states of such ``cool'' cosmic strings may be of cosmological significance.
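The warm-string tension law quoted above, T² = m²(m² − πΘ²/3), can be checked numerically. This is a minimal sketch in units where m = 1, not code from the paper; it just exhibits the monotonic tension reduction with temperature and the temperature ceiling Θ_max = m√(3/π) beyond which T² would go negative:

```python
import math

M = 1.0  # fixed mass scale m, set to 1 for illustration

def tension(theta):
    """Effective tension T(Theta) from T^2 = m^2 (m^2 - pi*Theta^2 / 3).
    Real only below Theta_max = m * sqrt(3/pi)."""
    t_squared = M**2 * (M**2 - math.pi * theta**2 / 3.0)
    if t_squared < 0:
        raise ValueError("effective temperature above the model's limit")
    return math.sqrt(t_squared)
```

At Θ = 0 the tension reduces to the Goto-Nambu value T = m², consistent with the limit described in the abstract.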
The detection of high-velocity outflows from M8E-IR
NASA Technical Reports Server (NTRS)
Mitchell, George F.; Allen, Mark; Beer, Reinhard; Dekany, Richard; Huntress, Wesley
1988-01-01
A high-resolution (0.059/cm) M band (4.6 micron) spectrum of the embedded young stellar object M8E-IR is presented and discussed. The spectrum shows strong absorption to large blueshifts in the rotational lines of the fundamental vibrational band, v = 1-0, of CO. The absorption is interpreted as being due to gas near to, and flowing from, the central object. The outflowing gas is warm (95-330 K) and consists of discrete velocity components with the very high velocities of 90, 130, 150, and 160 km/s. On the basis of a simple model, it is estimated that the observed outflows are less than 100 yr old.
The Canyonlands Grabens Revisited, with a New Interpretation of Graben Geometry
NASA Astrophysics Data System (ADS)
Schultz, R. A.; Moore, J. M.
1996-03-01
The relative scale between faults and faulted-layer thickness is critical to the mechanical behavior of faults and fault populations on any planetary body. Due to their fresh, relatively uneroded morphology and simple structural setting, the terrestrial Canyonlands grabens provide a unique opportunity to critically investigate the geometry, growth, interaction, and scaling relationships of normal faults. Symmetrical models have traditionally been used to describe these grabens, but field observations of stratigraphic offsets require asymmetric graben cross-sectional geometry. Topographic profiles reveal differential stratigraphic offsets, graben floor-tilts, and possible roll-over anticlines as well as footwall uplifts. Relationships between the asymmetric graben geometry and brittle-layer thickness are currently being investigated.
The cooling rate dependence of cation distributions in CoFe2O4
NASA Technical Reports Server (NTRS)
De Guire, Mark R.; O'Handley, Robert C.; Kalonji, Gretchen
1989-01-01
The room-temperature cation distributions in bulk CoFe2O4 samples, cooled at rates between less than 0.01 and about 1000 °C/sec, have been determined using Mossbauer spectroscopy in an 80-kOe magnetic field. With increasing cooling rate, the quenched structure departs increasingly from the mostly ordered cation distribution ordinarily observed at room temperature. However, the cation disorder appears to saturate just short of a random distribution at very high cooling rates. These results are interpreted in terms of a simple relaxation model of cation redistribution kinetics. The disordered cation distributions should lead to increased magnetization and decreased coercivity in CoFe2O4.
Profit intensity and cases of non-compliance with the law of demand/supply
NASA Astrophysics Data System (ADS)
Makowski, Marcin; Piotrowski, Edward W.; Sładkowski, Jan; Syska, Jacek
2017-05-01
We consider properties of the measurement intensity ρ of a random variable for which the probability density function represented by the corresponding Wigner function attains negative values on a part of the domain. We consider a simple economic interpretation of this problem. This model is used to present the applicability of the method to the analysis of the negative probability on markets where there are anomalies in the law of supply and demand (e.g. Giffen's goods). It turns out that the new conditions to optimize the intensity ρ require a new strategy. We propose a strategy (so-called à rebours strategy) based on the fixed point method and explore its effectiveness.
Huynh-Thu, Vân Anh; Saeys, Yvan; Wehenkel, Louis; Geurts, Pierre
2012-07-01
Univariate statistical tests are widely used for biomarker discovery in bioinformatics. These procedures are simple and fast, and their output is easily interpretable by biologists, but they can only identify variables that provide a significant amount of information in isolation from the other variables. As biological processes are expected to involve complex interactions between variables, univariate methods thus potentially miss some informative biomarkers. Variable relevance scores provided by machine learning techniques, however, are potentially able to highlight multivariate interacting effects, but unlike the p-values returned by univariate tests, these relevance scores are usually not statistically interpretable. This lack of interpretability hampers the determination of a relevance threshold for extracting a feature subset from the rankings and also prevents the wide adoption of these methods by practitioners. We evaluated several existing and novel procedures that extract relevant features from rankings derived from machine learning approaches. These procedures replace the relevance scores with measures that can be interpreted in a statistical way, such as p-values, false discovery rates, or family-wise error rates, for which it is easier to determine a significance level. Experiments were performed on several artificial problems as well as on real microarray datasets. Although the methods differ in terms of computing times and the tradeoff they achieve between false positives and false negatives, some of them greatly help in the extraction of truly relevant biomarkers and should thus be of great practical interest for biologists and physicians. As a side conclusion, our experiments also clearly highlight that using model performance as a criterion for feature selection is often counter-productive. Python source code of all tested methods, as well as the MATLAB scripts used for data simulation, can be found in the Supplementary Material.
Interpreting single jet measurements in Pb$+$Pb collisions at the LHC
Spousta, Martin; Cole, Brian
2016-01-27
Results are presented from a phenomenological analysis of recent measurements of jet suppression and modifications of jet fragmentation functions in Pb+Pb collisions at the LHC. Particular emphasis is placed on the impact of the differences between quark and gluon jet quenching on the transverse momentum (p_T^jet) dependence of the jet R_AA and on the fragmentation functions, D(z). Primordial quark and gluon parton distributions were obtained from PYTHIA8 and were parameterized using simple power-law functions and extensions to the power-law function which were found to better describe the PYTHIA8 parton spectra. A simple model for the quark energy loss based on the shift formalism is used to model R_AA and D(z) using both analytic results and direct Monte-Carlo sampling of the PYTHIA parton spectra. The model is capable of describing the full p_T^jet, rapidity, and centrality dependence of the measured jet R_AA using three effective parameters. A key result from the analysis is that the D(z) modifications observed in the data, excluding the enhancement at low z, may result primarily from the different quenching of the quarks and gluons. Furthermore, the model is also capable of reproducing the charged hadron R_AA at high transverse momentum. Predictions are made for the jet R_AA at large rapidities where it has not yet been measured and for the rapidity dependence of D(z).
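A minimal sketch of the shift formalism mentioned above: for a power-law parton spectrum dN/dp_T ∝ p_T^(−n), shifting each jet down in energy by a constant S gives R_AA(p_T) = ((p_T + S)/p_T)^(−n). The spectral index and the quark/gluon shifts below are illustrative numbers, not the paper's fitted parameters:

```python
def raa_power_law(pt, shift, n):
    """R_AA for a pure power-law spectrum pT^-n under a constant
    downward energy shift: the quenched yield at pT equals the
    unquenched yield at pT + shift."""
    return ((pt + shift) / pt) ** (-n)

def raa_two_component(pt, f_quark, s_quark, s_gluon, n):
    """Quark/gluon mixture: gluons are assigned a larger shift, so a
    gluon-rich sample is more suppressed at the same pT."""
    return (f_quark * raa_power_law(pt, s_quark, n)
            + (1.0 - f_quark) * raa_power_law(pt, s_gluon, n))
```

Because the quark fraction grows with p_T and rapidity, a fixed pair of shifts already produces a rising, rapidity-dependent R_AA, which is the qualitative mechanism the analysis exploits.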
MX Siting Investigation Gravity Survey - Ralston Valley, Nevada.
1981-08-20
Center (DMAHTC), headquartered in Cheyenne, Wyoming. DMAHTC reduces the data to Simple Bouguer Anomaly (see Section A1.4, Appendix A1.0). The Defense...LIST OF DRAWINGS: Drawing 1, Complete Bouguer Anomaly Contours (in pocket); Drawing 2, Depth to Rock, Interpreted from Gravity Data (end of report)...REDUCTION: DMAHTC obtained the basic observations for the new stations and reduced them to Simple Bouguer Anomalies (SBA) as described in Appendix A1.0
Kao, Ping; Parhi, Purnendu; Krishnan, Anandi; Noh, Hyeran; Haider, Waseem; Tadigadapa, Srinivas; Allara, David L.; Vogler, Erwin A.
2010-01-01
The maximum capacity of a hydrophobic adsorbent is interpreted in terms of square or hexagonal (cubic and face-centered-cubic, FCC) interfacial packing models of adsorbed blood proteins in a way that accommodates experimental measurements by the solution-depletion method and quartz-crystal-microbalance (QCM) for the human proteins serum albumin (HSA, 66 kDa), immunoglobulin G (IgG, 160 kDa), fibrinogen (Fib, 341 kDa), and immunoglobulin M (IgM, 1000 kDa). A simple analysis shows that adsorbent capacity is capped by a fixed mass/volume (e.g. mg/mL) surface-region (interphase) concentration and not molar concentration. Nearly analytical agreement between the packing models and experiment suggests that, at surface saturation, the above-mentioned proteins assemble within the interphase in a manner that approximates a well-ordered array. HSA saturates a hydrophobic adsorbent with the equivalent of a single square- or hexagonally-packed layer of hydrated molecules whereas the larger proteins occupy two or more layers, depending on the specific protein under consideration and analytical method used to measure adsorbate mass (solution depletion or QCM). Square or hexagonal (cubic and FCC) packing models cannot be clearly distinguished by comparison to experimental data. QCM measurement of adsorbent capacity is shown to be significantly different than that measured by solution depletion for similar hydrophobic adsorbents. The underlying reason is traced to the fact that QCM measures the contributions of core protein, water of hydration, and interphase water whereas solution depletion measures only the contribution of core protein. It is further shown that thickness of the interphase directly measured by QCM systematically exceeds that inferred from solution-depletion measurements, presumably because the static model used to interpret solution depletion does not accurately capture the complexities of the viscoelastic interfacial environment probed by QCM. PMID:21035180
Kao, Ping; Parhi, Purnendu; Krishnan, Anandi; Noh, Hyeran; Haider, Waseem; Tadigadapa, Srinivas; Allara, David L; Vogler, Erwin A
2011-02-01
The maximum capacity of a hydrophobic adsorbent is interpreted in terms of square or hexagonal (cubic and face-centered-cubic, FCC) interfacial packing models of adsorbed blood proteins in a way that accommodates experimental measurements by the solution-depletion method and quartz-crystal-microbalance (QCM) for the human proteins serum albumin (HSA, 66 kDa), immunoglobulin G (IgG, 160 kDa), fibrinogen (Fib, 341 kDa), and immunoglobulin M (IgM, 1000 kDa). A simple analysis shows that adsorbent capacity is capped by a fixed mass/volume (e.g. mg/mL) surface-region (interphase) concentration and not molar concentration. Nearly analytical agreement between the packing models and experiment suggests that, at surface saturation, the above-mentioned proteins assemble within the interphase in a manner that approximates a well-ordered array. HSA saturates a hydrophobic adsorbent with the equivalent of a single square- or hexagonally-packed layer of hydrated molecules whereas the larger proteins occupy two or more layers, depending on the specific protein under consideration and analytical method used to measure adsorbate mass (solution depletion or QCM). Square or hexagonal (cubic and FCC) packing models cannot be clearly distinguished by comparison to experimental data. QCM measurement of adsorbent capacity is shown to be significantly different than that measured by solution depletion for similar hydrophobic adsorbents. The underlying reason is traced to the fact that QCM measures the contributions of core protein, water of hydration, and interphase water whereas solution depletion measures only the contribution of core protein. It is further shown that thickness of the interphase directly measured by QCM systematically exceeds that inferred from solution-depletion measurements, presumably because the static model used to interpret solution depletion does not accurately capture the complexities of the viscoelastic interfacial environment probed by QCM.
NASA Astrophysics Data System (ADS)
Sarout, Joël
2012-04-01
For the first time, a comprehensive and quantitative analysis of the domains of validity of popular wave propagation theories for porous/cracked media is provided. The case of a simple, yet versatile rock microstructure is detailed. The microstructural parameters controlling the applicability of the scattering theories, the effective medium theories, the quasi-static (Gassmann limit) and dynamic (inertial) poroelasticity are analysed in terms of pore/crack characteristic size, geometry, and connectivity. To this end, a new permeability model is devised combining the hydraulic radius and percolation concepts. The predictions of this model are compared to published micromechanical models of permeability for the limiting cases of capillary tubes and penny-shaped cracks. It is also compared to published experimental data on natural rocks in these limiting cases. It explicitly accounts for pore space topology around the percolation threshold and far above it. Thanks to this permeability model, the scattering, squirt-flow and Biot cut-off frequencies are quantitatively compared. This comparison leads to an explicit mapping of the domains of validity of these wave propagation theories as a function of the rock's actual microstructure. How this mapping impacts seismic, geophysical and ultrasonic wave velocity data interpretation is discussed. The methodology demonstrated here and the outcomes of this analysis are meant to constitute a quantitative guide for the selection of the most suitable modelling strategy to be employed for prediction and/or interpretation of rock elastic properties in laboratory- or field-scale applications when information regarding the rock's microstructure is available.
Spatial Structure of Evolutionary Models of Dialects in Contact
Murawaki, Yugo
2015-01-01
Phylogenetic models, originally developed to model evolutionary history in biology, have been applied to a wide range of cultural data including natural language lexicons, manuscripts, folktales, material cultures, and religions. A fundamental question regarding the application of phylogenetic inference is whether trees are an appropriate approximation of cultural evolutionary history. Their validity in cultural applications has been scrutinized, particularly with respect to the lexicons of dialects in contact. Phylogenetic models organize evolutionary data into a series of branching events through time. However, branching events are typically not included in dialectological studies to interpret the distributions of lexical terms. Instead, dialectologists have offered spatial interpretations to represent lexical data. For example, new lexical items that emerge in a politico-cultural center are likely to spread to peripheries, but not vice versa. To explore the question of the tree model’s validity, we present a simple simulation model in which dialects form a spatial network and share lexical items through contact rather than through common ancestors. We input several network topologies to the model to generate synthetic data. We then analyze the synthesized data using conventional phylogenetic techniques. We found that a group of dialects can be considered tree-like even if it has not evolved in a temporally tree-like manner but has a temporally invariant, spatially tree-like structure. In addition, the simulation experiments appear to reproduce unnatural results observed in reconstructed trees for real data. These results motivate further investigation into the spatial structure of the evolutionary history of dialect lexicons as well as other cultural characteristics. PMID:26221958
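A toy version of the contact simulation described above can be sketched as follows. The chain topology, innovation/borrowing rates, and all variable names are illustrative assumptions, not the paper's model: dialects sit on a spatial network and copy lexical variants from neighbours, with no ancestral tree anywhere in the generating process:

```python
import random

random.seed(1)

N_DIALECTS, N_ITEMS, STEPS = 6, 40, 2000
# Chain-shaped spatial network: dialect i is in contact with i-1 and i+1.
neighbours = {i: [j for j in (i - 1, i + 1) if 0 <= j < N_DIALECTS]
              for i in range(N_DIALECTS)}
# lexicon[d][k] is dialect d's variant of meaning k (an integer label).
lexicon = [[k for k in range(N_ITEMS)] for _ in range(N_DIALECTS)]

for _ in range(STEPS):
    d = random.randrange(N_DIALECTS)
    k = random.randrange(N_ITEMS)
    if random.random() < 0.1:                 # innovation: a new variant
        lexicon[d][k] = random.randrange(10_000)
    else:                                     # borrowing through contact
        src = random.choice(neighbours[d])
        lexicon[d][k] = lexicon[src][k]

def shared(a, b):
    """Fraction of meanings on which dialects a and b use the same variant."""
    return sum(x == y for x, y in zip(lexicon[a], lexicon[b])) / N_ITEMS
```

Feeding the resulting shared-vocabulary matrix to a tree-building method is the step at which a spatially tree-like but temporally invariant structure can masquerade as descent with branching.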
Constraining Slab Breakoff Induced Magmatism through Numerical Modelling
NASA Astrophysics Data System (ADS)
Freeburn, R.; Van Hunen, J.; Maunder, B. L.; Magni, V.; Bouilhol, P.
2015-12-01
Post-collisional magmatism is markedly different in nature and composition than pre-collisional magmas. This is widely interpreted to mark a change in the thermal structure of the system due to the loss of the oceanic slab (slab breakoff), allowing a different source to melt. Early modelling studies suggest that when breakoff takes place at depths shallower than the overriding lithosphere, magmatism occurs through both the decompression of upwelling asthenosphere into the slab window and the thermal perturbation of the overriding lithosphere (Davies & von Blanckenburg, 1995; van de Zedde & Wortel, 2001). Interpretations of geochemical data which invoke slab breakoff as a means of generating magmatism mostly assume these shallow depths. However, more recent modelling results suggest that slab breakoff is likely to occur deeper (e.g. Andrews & Billen, 2009; Duretz et al., 2011; van Hunen & Allen, 2011). Here we test the extent to which slab breakoff is a viable mechanism for generating melting in post-collisional settings. Using 2-D numerical models we conduct a parametric study, producing models displaying a range of dynamics with breakoff depths ranging from 150-300 km. Key models are further analysed to assess the extent of melting. We consider the mantle wedge above the slab to be hydrated, and compute the melt fraction by using a simple parameterised solidus. Our models show that breakoff at shallow depths can generate a short-lived (< 3 Myr) pulse of mantle melting, through the hydration of hotter, undepleted asthenosphere flowing in from behind the detached slab. However, our results do not display the widespread, prolonged style of magmatism observed in many post-collisional areas, suggesting that this magmatism may be generated via alternative mechanisms. This further implies that using magmatic observations to constrain slab breakoff is not straightforward.
Ash, A; Schwartz, M; Payne, S M; Restuccia, J D
1990-11-01
Medical record review is increasing in importance as the need to identify and monitor utilization and quality-of-care problems grows. To conserve resources, reviews are usually performed on a subset of cases. If judgment is used to identify subgroups for review, this raises the following questions: How should subgroups be determined, particularly since the locus of problems can change over time? What standard of comparison should be used in interpreting rates of problems found in subgroups? How can population problem rates be estimated from observed subgroup rates? How can the bias be avoided that arises because reviewers know that selected cases are suspected of having problems? How can changes in problem rates over time be interpreted when evaluating intervention programs? Simple random sampling, an alternative to subgroup review, overcomes the problems implied by these questions but is inefficient. The Self-Adapting Focused Review System (SAFRS), introduced and described here, provides an adaptive approach to record selection that is based upon model-weighted probability sampling. It retains the desirable inferential properties of random sampling while allowing reviews to be concentrated on cases currently thought most likely to be problematic. Model development and evaluation are illustrated using hospital data to predict inappropriate admissions.
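The model-weighted probability sampling underlying SAFRS can be sketched as follows. The "model score", the selection-probability rule, and the synthetic problem process are all assumptions for illustration; the point is that when every record's selection probability is known, a Horvitz-Thompson estimator recovers the population problem rate without bias even though review effort is concentrated on high-risk cases:

```python
import random

random.seed(0)

# Synthetic records: each has a model risk score in [0, 1] and a true
# "problem" flag whose probability rises with the score.
records = [{"score": random.random()} for _ in range(5000)]
for r in records:
    r["problem"] = random.random() < 0.6 * r["score"]

# Known, model-informed selection probability for every record
# (never zero, so every record can in principle be reviewed).
BASE, SLOPE = 0.05, 0.5
for r in records:
    r["p_select"] = BASE + SLOPE * r["score"]

# Draw the review sample with those probabilities.
sample = [r for r in records if random.random() < r["p_select"]]

# Horvitz-Thompson estimate: weight each sampled record by 1/p_select.
ht_total = sum(r["problem"] / r["p_select"] for r in sample)
ht_rate = ht_total / len(records)
true_rate = sum(r["problem"] for r in records) / len(records)
```

This is what "retains the desirable inferential properties of random sampling" amounts to: bias is avoided by design probabilities, not by sampling uniformly.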
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aston, D.; Awaji, N.; D'Amore, J.
A model incorporating K* resonance contributions and simple backgrounds is shown to quantitatively reproduce the mass dependence of the partial-wave amplitudes governing the production and decay of the K̄⁰π⁺π⁻ system. A fit of this model to these amplitudes confirms the resonance interpretations of the well-established 1⁺ K₁(1400), the 2⁺ K₂*(1430), the 3⁻ K₃*(1780), and the less well-known 1⁻ states, the K*(1410) and the K*(1790). The 4⁺ amplitudes are shown to be consistent with the production and decay of the 4⁺ K₄*(2060). A second 2⁺ enhancement at a mass of approximately 1.95 GeV/c² can be interpreted as resonant and may be the radial excitation of the K₂*(1430) or the triplet partner of the K₄*(2060). New measurements of the masses, widths, and branching ratios of these states are given, and the implications of these data for the spectroscopy of the nonstrange meson sector are discussed.
A PRIMER ON UNIFYING DEBRIS DISK MORPHOLOGIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Eve J.; Chiang, Eugene, E-mail: evelee@berkeley.edu, E-mail: echiang@astro.berkeley.edu
A “minimum model” for debris disks consists of a narrow ring of parent bodies, secularly forced by a single planet on a possibly eccentric orbit, colliding to produce dust grains that are perturbed by stellar radiation pressure. We demonstrate how this minimum model can reproduce a wide variety of disk morphologies imaged in scattered starlight. Five broad categories of disk shape can be captured: “rings,” “needles,” “ships-and-wakes,” “bars,” and “moths (a.k.a. fans),” depending on the viewing geometry. Moths can also sport “double wings.” We explain the origin of morphological features from first principles, exploring the dependence on planet eccentricity, disk inclination dispersion, and the parent-body orbital phases at which dust grains are born. A key determinant in disk appearance is the degree to which dust grain orbits are apsidally aligned. Our study of a simple steady-state (secularly relaxed) disk should serve as a reference for more detailed models tailored to individual systems. We use the intuition gained from our guidebook of disk morphologies to interpret, informally, the images of a number of real-world debris disks. These interpretations suggest that the farthest reaches of planetary systems are perturbed by eccentric planets, possibly just a few Earth masses each.
Ultrasonography of the eye and orbit.
Eisenberg, H M
1985-11-01
The eye and orbit are excellent subjects for ultrasonic evaluation. Examination and interpretation are relatively simple procedures. The normal ultrasonic anatomy of the eye and orbit is presented. Some examples of ocular and orbital pathology are discussed also.
Interpretation of magnetotelluric resistivity and phase soundings over horizontal layers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patella, D.
1976-02-01
The present paper deals with a new inverse method for quantitatively interpreting magnetotelluric apparent resistivity and phase-lag sounding curves over horizontally stratified earth sections. The recurrent character of the general formula relating the wave impedance of an (n−1)-layered medium to that of an n-layered medium suggests the use of the method of reduction to a lower boundary plane, as originally termed by Koefoed in the case of dc resistivity soundings. The layering parameters are so directly derived by a simple iterative procedure. The method is applicable for any number of layers but only when both apparent resistivity and phase-lag sounding curves are jointly available. Moreover no sophisticated algorithm is required: a simple desk electronic calculator together with a sheet of two-layer apparent resistivity and phase-lag master curves are sufficient to reproduce earth sections which, in the range of equivalence, are all consistent with field data.
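The inverse method above operates on apparent resistivity and phase-lag curves whose forward model is the standard 1-D magnetotelluric recursion over horizontal layers (the same recurrent impedance formula the abstract exploits). A minimal sketch of that forward recursion, not of the reduction procedure itself, is:

```python
import cmath
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def mt_surface_impedance(rhos, thicknesses, freq_hz):
    """Surface impedance of horizontal layers (Wait recursion).
    rhos: layer resistivities in ohm·m, top to bottom (last = half-space);
    thicknesses: layer thicknesses in metres, one fewer than rhos."""
    omega = 2.0 * math.pi * freq_hz
    ks = [cmath.sqrt(1j * omega * MU0 / r) for r in rhos]
    z = 1j * omega * MU0 / ks[-1]              # basement half-space impedance
    for j in range(len(thicknesses) - 1, -1, -1):
        zj = 1j * omega * MU0 / ks[j]          # intrinsic impedance of layer j
        t = cmath.tanh(ks[j] * thicknesses[j])
        z = zj * (z + zj * t) / (zj + z * t)   # the recurrent formula
    return z

def apparent_resistivity_phase(rhos, thicknesses, freq_hz):
    """Apparent resistivity (ohm·m) and phase (degrees) at the surface."""
    z = mt_surface_impedance(rhos, thicknesses, freq_hz)
    rho_a = abs(z) ** 2 / (2.0 * math.pi * freq_hz * MU0)
    return rho_a, math.degrees(cmath.phase(z))
```

Over a uniform half-space the apparent resistivity equals the true resistivity and the phase is 45°, which is a useful sanity check on any implementation.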
A psychological model of mental disorder.
Kinderman, Peter
2005-01-01
A coherent conceptualization of the role of psychological factors is of great importance in understanding mental disorder. Academic articles and professional reports alluding to psychological models of the etiology of mental disorder are becoming increasingly common, and there is evidence of a marked policy shift toward the provision of psychological therapies and interventions. This article discusses the relationship between biological, social, and psychological factors in the causation and treatment of mental disorder. It argues that simple biological reductionism is not scientifically justified, and also that the specific role of psychological processes within the biopsychosocial model requires further elaboration. The biopsychosocial model is usually interpreted as implying that biological, psychological, and social factors are co-equal partners in the etiology of mental disorder. The psychological model of mental disorder presented here suggests that disruption or dysfunction in psychological processes is a final common pathway in the development of mental disorder. These processes include, but are not limited to, cognitive processes. The model proposes that biological and social factors, together with a person's individual experiences, lead to mental disorder through their conjoint effects on those psychological processes. Implications for research, interventions, and policy are discussed.
Lei, Chon Lok; Wang, Ken; Clerx, Michael; Johnstone, Ross H; Hortigon-Vinagre, Maria P; Zamora, Victor; Allan, Andrew; Smith, Godfrey L; Gavaghan, David J; Mirams, Gary R; Polonchuk, Liudmila
2017-01-01
Human induced pluripotent stem cell derived cardiomyocytes (iPSC-CMs) have applications in disease modeling, cell therapy, drug screening and personalized medicine. Computational models can be used to interpret experimental findings in iPSC-CMs, provide mechanistic insights, and translate these findings to adult cardiomyocyte (CM) electrophysiology. However, different cell lines display different expression of ion channels, pumps and receptors, and show differences in electrophysiology. In this exploratory study, we use a mathematical model based on iPSC-CMs from Cellular Dynamics International (CDI, iCell), and compare its predictions to novel experimental recordings made with the Axiogenesis Cor.4U line. We show that tailoring this model to the specific cell line, even using limited data and a relatively simple approach, leads to improved predictions of baseline behavior and response to drugs. This demonstrates the need and the feasibility to tailor models to individual cell lines, although a more refined approach will be needed to characterize individual currents, address differences in ion current kinetics, and further improve these results.
Cancer survival: an overview of measures, uses, and interpretation.
Mariotto, Angela B; Noone, Anne-Michelle; Howlader, Nadia; Cho, Hyunsoon; Keel, Gretchen E; Garshell, Jessica; Woloshin, Steven; Schwartz, Lisa M
2014-11-01
Survival statistics are of great interest to patients, clinicians, researchers, and policy makers. Although seemingly simple, survival can be confusing: there are many different survival measures with a plethora of names and statistical methods developed to answer different questions. This paper aims to describe and disseminate different survival measures and their interpretation in less technical language. In addition, we introduce templates to summarize cancer survival statistics, organized by their specific purpose: research and policy versus prognosis and clinical decision making. Published by Oxford University Press 2014.
Cancer Survival: An Overview of Measures, Uses, and Interpretation
Noone, Anne-Michelle; Howlader, Nadia; Cho, Hyunsoon; Keel, Gretchen E.; Garshell, Jessica; Woloshin, Steven; Schwartz, Lisa M.
2014-01-01
Survival statistics are of great interest to patients, clinicians, researchers, and policy makers. Although seemingly simple, survival can be confusing: there are many different survival measures, with a plethora of names and statistical methods developed to answer different questions. This paper aims to describe and disseminate different survival measures and their interpretation in less technical language. In addition, we introduce templates to summarize cancer survival statistics, organized by their specific purpose: research and policy versus prognosis and clinical decision making. PMID:25417231
Substituent and solvent effects on electronic spectra of some substituted phenoxyacetic acids.
Shanthi, M; Kabilan, S
2007-06-01
The effects of substituents and solvents have been studied through the absorption spectra of nearly 19 para- and ortho-substituted phenoxyacetic acids in the range of 200-400 nm. The effects of substituents on the absorption spectra of the compounds under present investigation are interpreted by correlating absorption frequencies with simple and extended Hammett equations. The effects of solvent polarity and hydrogen bonding on the absorption spectra are interpreted by means of the Kamlet equation, and the results are discussed.
Substituent and solvent effects on electronic spectra of some substituted phenoxyacetic acids
NASA Astrophysics Data System (ADS)
Shanthi, M.; Kabilan, S.
2007-06-01
The effects of substituents and solvents have been studied through the absorption spectra of nearly 19 para- and ortho-substituted phenoxyacetic acids in the range of 200-400 nm. The effects of substituents on the absorption spectra of the compounds under present investigation are interpreted by correlating absorption frequencies with simple and extended Hammett equations. The effects of solvent polarity and hydrogen bonding on the absorption spectra are interpreted by means of the Kamlet equation, and the results are discussed.
NASA Astrophysics Data System (ADS)
Suganya, Krishnasamy; Kabilan, Senthamaraikannan
2004-04-01
The effects of substituents and solvents have been studied through the absorption spectra of nearly 23 ortho- and para-N-(substitutedphenyl)benzene sulphonamides in the range of 200-400 nm. The effects of substituents on the absorption spectra of the compounds under present investigation are interpreted by correlating absorption frequencies with simple and extended Hammett equations. The effects of solvent polarity and hydrogen bonding on the absorption spectra are interpreted by means of the Kamlet equation, and the results are discussed.
On geodesics of the rotation group SO(3)
NASA Astrophysics Data System (ADS)
Novelia, Alyssa; O'Reilly, Oliver M.
2015-11-01
Geodesics on SO(3) are characterized by constant angular velocity motions and as great circles on a three-sphere. The former interpretation is widely used in optometry and the latter features in the interpolation of rotations in computer graphics. The simplicity of these two disparate interpretations belies the complexity of the corresponding rotations. Using a quaternion representation for a rotation, we present a simple proof of the equivalence of the aforementioned characterizations and a straightforward method to establish features of the corresponding rotations.
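The equivalence described above can be checked numerically: sampling the great-circle (slerp) path between unit quaternions yields successive rotations of equal angle, i.e. constant angular velocity. A minimal sketch, assuming the [w, x, y, z] quaternion convention (not the authors' code):

```python
import numpy as np

def quat_slerp(q0, q1, s):
    """Great-circle interpolation on the unit 3-sphere: the geodesic that
    the abstract identifies with a constant-angular-velocity rotation."""
    dot = np.clip(np.dot(q0, q1), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < 1e-12:
        return q0.copy()
    return (np.sin((1.0 - s) * theta) * q0 + np.sin(s * theta) * q1) / np.sin(theta)

q0 = np.array([1.0, 0.0, 0.0, 0.0])                              # identity
q1 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])  # 90 deg about z
path = [quat_slerp(q0, q1, s) for s in np.linspace(0.0, 1.0, 11)]

# Constant angular velocity: each successive step rotates by the same angle.
# (For unit quaternions, dot(qa, qb) = cos(half the relative rotation angle).)
steps = [2.0 * np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
         for a, b in zip(path, path[1:])]
```

The ten step angles come out identical and sum to the full 90-degree rotation, which is the content of the equivalence proved in the paper.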
NASA Astrophysics Data System (ADS)
Nikolić, Hrvoje
Most physicists do not have patience for reading long and obscure interpretation arguments and disputes. Hence, to attract the attention of a wider physics community, in this paper various old and new aspects of quantum interpretations are explained in a concise and simple (almost trivial) form. About the “Copenhagen” interpretation, we note that there are several different versions of it and explain how to make sense of the “local nonreality” interpretation. About the many-worlds interpretation (MWI), we explain that it is neither local nor nonlocal, that it cannot explain the Born rule, that it suffers from the preferred-basis problem, and that quantum suicide cannot be used to test it. About the Bohmian interpretation, we explain that it is analogous to dark matter, use it to explain that there is no big difference between nonlocal correlation and nonlocal causation, and use some condensed-matter ideas to outline how nonrelativistic Bohmian theory could be a theory of everything. We also explain how different interpretations can be used to demystify the delayed-choice experiment, to resolve the problem of time in quantum gravity, and to provide alternatives to quantum nonlocality. Finally, we explain why life is compatible with the second law.
Exploring the Dynamics of Cell Processes through Simulations of Fluorescence Microscopy Experiments
Angiolini, Juan; Plachta, Nicolas; Mocskos, Esteban; Levi, Valeria
2015-01-01
Fluorescence correlation spectroscopy (FCS) methods are powerful tools for unveiling the dynamical organization of cells. For simple cases, such as molecules passively moving in a homogeneous medium, FCS analysis yields analytical functions that can be fitted to the experimental data to recover the phenomenological rate parameters. Unfortunately, many dynamical processes in cells do not follow these simple models, and in many instances it is not possible to obtain an analytical function through a theoretical analysis of a more complex model. In such cases, experimental analysis can be combined with Monte Carlo simulations to aid in interpretation of the data. In response to this need, we developed a method called FERNET (Fluorescence Emission Recipes and Numerical routines Toolkit) based on Monte Carlo simulations and the MCell-Blender platform, which was designed to treat the reaction-diffusion problem under realistic scenarios. This method enables us to set complex geometries of the simulation space, distribute molecules among different compartments, and define interspecies reactions with selected kinetic constants, diffusion coefficients, and species brightness. We apply this method to simulate single- and multiple-point FCS, photon-counting histogram analysis, raster image correlation spectroscopy, and two-color fluorescence cross-correlation spectroscopy. We believe that this new program could be very useful for predicting and understanding the output of fluorescence microscopy experiments. PMID:26039162
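For the "simple cases" mentioned above, the analytical function being fitted is the standard FCS autocorrelation for free 3D diffusion in a Gaussian focal volume. A minimal sketch (the structure parameter s = 5 is an assumed, typical value, not from the paper):

```python
import numpy as np

def fcs_3d_diffusion(tau, N, tau_D, s=5.0):
    """Standard analytic FCS curve for free 3D diffusion:
    G(tau) = (1/N) * (1 + tau/tau_D)^-1 * (1 + tau/(s^2 tau_D))^-1/2,
    with N molecules in the focal volume on average, diffusion time tau_D,
    and structure parameter s (axial/lateral beam-waist ratio, assumed)."""
    tau = np.asarray(tau, dtype=float)
    return (1.0 / N) / (1.0 + tau / tau_D) / np.sqrt(1.0 + tau / (s ** 2 * tau_D))

# Amplitude at zero lag is 1/N, which is how FCS counts molecules.
g0 = fcs_3d_diffusion(0.0, N=10, tau_D=1e-3)  # 1/10 = 0.1
```

When the process deviates from this form (anomalous diffusion, binding, compartments), no such closed form may exist, which is the gap the abstract's Monte Carlo toolkit addresses.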
NASA Astrophysics Data System (ADS)
Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.
2014-11-01
We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.
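The paper's central caution, that unrelated red-noise light curves routinely show high cross-correlations, can be reproduced with a short simulation. This sketch uses the standard random-phase (Timmer-Koenig style) generator for a power-law power spectral density; it is an illustration of the pitfall, not the authors' implementation:

```python
import numpy as np

def powerlaw_lightcurve(n, beta, rng):
    """Simulate an evenly sampled light curve whose power spectral density
    follows f^(-beta), by drawing random Fourier phases."""
    freqs = np.fft.rfftfreq(n, d=1.0)[1:]
    amplitudes = freqs ** (-beta / 2.0)
    phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
    spectrum = np.concatenate(([0.0], amplitudes * np.exp(1j * phases)))
    lc = np.fft.irfft(spectrum, n=n)
    return (lc - lc.mean()) / lc.std()   # zero mean, unit variance

def peak_crosscorr(a, b):
    """Peak of the normalized cross-correlation over all lags."""
    return np.correlate(a, b, mode="full").max() / a.size

# Independent steep-PSD (beta = 2) light curves still show high peak
# cross-correlations, which is why a Monte Carlo significance estimate
# is needed before interpreting a peak as a physical connection.
rng = np.random.default_rng(0)
n, beta, trials = 256, 2.0, 200
peaks = [peak_crosscorr(powerlaw_lightcurve(n, beta, rng),
                        powerlaw_lightcurve(n, beta, rng))
         for _ in range(trials)]
frac_high = float(np.mean(np.array(peaks) > 0.5))
```

The distribution of `peaks` is exactly the kind of null distribution against which an observed cross-correlation must be compared.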
NASA Astrophysics Data System (ADS)
Chu, H.; Baldocchi, D. D.
2017-12-01
FLUXNET, the global network of eddy covariance tower sites, provides valuable datasets of direct, in situ measurements of fluxes and ancillary variables that are used across different disciplines and applications. Aerodynamic roughness parameters (i.e., roughness length and zero-plane displacement height) are among the parameters that can be derived from flux-tower data and are crucial for land surface models and flux footprint models. Because aerodynamic roughness is tightly associated with canopy structure (e.g., canopy height, leaf area), such parameters could potentially serve as an alternative metric for detecting changes in canopy structure (e.g., changes of leaf area in deciduous ecosystems). This study proposes a simple approach for deriving aerodynamic roughness from flux-tower data, and tests its suitability and robustness in detecting the seasonality of canopy structure. We run tests across a broad range of deciduous forests, and compare the seasonality derived from aerodynamic roughness (i.e., starting and ending dates of the leaf-on period and peak-foliage period) against that obtained from remote sensing or in situ leaf area measurements. Our findings show that aerodynamic roughness generally captures the timing of changes in leaf area in deciduous forests. Yet, caution needs to be exercised when interpreting the absolute values of the roughness estimates.
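One simple way to derive a roughness length from flux-tower data is to invert the neutral logarithmic wind profile using the measured mean wind and friction velocity. This is a sketch of the general idea under stated assumptions (neutral stability, known displacement height), not necessarily the authors' exact approach; the numbers are illustrative tall-forest values:

```python
import numpy as np

def roughness_length(u, z, u_star, d, k=0.4):
    """Invert the neutral log wind profile u = (u*/k) * ln((z - d)/z0)
    for the roughness length z0, given mean wind u at height z,
    friction velocity u_star, displacement height d, and the
    von Karman constant k."""
    return (z - d) / np.exp(k * u / u_star)

# Illustrative values: 30 m measurement height over a forest with a
# 20 m zero-plane displacement (leaf-on conditions assumed).
z0 = roughness_length(u=3.0, z=30.0, u_star=0.5, d=20.0)  # roughly 0.9 m
```

As the canopy leafs out or sheds, u/u_star at a fixed height shifts, so z0 (and d) track the seasonal change in canopy structure, which is the signal the study exploits.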
Unmixing of spectral components affecting AVIRIS imagery of Tampa Bay
NASA Astrophysics Data System (ADS)
Carder, Kendall L.; Lee, Z. P.; Chen, Robert F.; Davis, Curtiss O.
1993-09-01
According to Kirk's as well as Morel and Gentili's Monte Carlo simulations, the popular simple expression R ≈ 0.33 bb/a, relating subsurface irradiance reflectance (R) to the ratio of the backscattering coefficient (bb) to the absorption coefficient (a), is not valid for bb/a > 0.25. This means that it may no longer be valid for values of remote-sensing reflectance (the above-surface ratio of water-leaving radiance to downwelling irradiance) where Rrs > 0.01. Since no simple Rrs expression had been developed for very turbid waters, we developed one based in part on Monte Carlo simulations and empirical adjustments to an Rrs model and applied it to rather turbid coastal waters near Tampa Bay to evaluate its utility for unmixing the optical components affecting the water-leaving radiance. With the high spectral (10 nm) and spatial (20 m2) resolution of Airborne Visible-InfraRed Imaging Spectrometer (AVIRIS) data, the water depth and bottom type were deduced using the model for shallow waters. This research demonstrates the need for further research to improve interpretations of scenes with highly variable turbid waters, and it emphasizes the utility of high spectral-resolution data such as AVIRIS for better understanding complicated coastal environments such as the west Florida shelf.
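The simple expression quoted above, together with its stated validity limit, fits in a few lines; the coefficient values in the example are illustrative, not data from the paper:

```python
def subsurface_reflectance(bb, a):
    """The 'popular simple expression' R ~ 0.33 * bb / a for subsurface
    irradiance reflectance, given backscattering (bb) and absorption (a)
    coefficients. Per the Monte Carlo results the abstract cites, the
    linear form is not valid for bb/a > 0.25 (very turbid water), so we
    refuse to apply it there."""
    ratio = bb / a
    if ratio > 0.25:
        raise ValueError("bb/a > 0.25: simple linear model not valid")
    return 0.33 * ratio

# Illustrative coefficients (units of 1/m), clear-water regime:
r = subsurface_reflectance(bb=0.01, a=0.1)  # bb/a = 0.1 -> R about 0.033
```

It is precisely the turbid regime rejected here that motivates the paper's empirically adjusted Rrs model.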
A Simple But Tricky Experiment.
ERIC Educational Resources Information Center
Grosu, I.; Baltag, O.
1994-01-01
Describes an experiment that uses a bottle, a cork, and a wooden match to study students' explanations of what they observe to reveal misunderstandings about pressure and to produce some incorrect interpretations such as creation of a gradient of pressure. (DDR)
The Statistics of wood assays for preservative retention
Patricia K. Lebow; Scott W. Conklin
2011-01-01
This paper covers general statistical concepts that apply to interpreting wood assay retention values. In particular, since wood assays are typically obtained from a single composited sample, the statistical aspects, including advantages and disadvantages, of simple compositing are covered.
Evolution of complex fruiting-body morphologies in homobasidiomycetes.
Hibbett, David S; Binder, Manfred
2002-01-01
The fruiting bodies of homobasidiomycetes include some of the most complex forms that have evolved in the fungi, such as gilled mushrooms, bracket fungi and puffballs ('pileate-erect') forms. Homobasidiomycetes also include relatively simple crust-like 'resupinate' forms, however, which account for ca. 13-15% of the described species in the group. Resupinate homobasidiomycetes have been interpreted either as a paraphyletic grade of plesiomorphic forms or a polyphyletic assemblage of reduced forms. The former view suggests that morphological evolution in homobasidiomycetes has been marked by independent elaboration in many clades, whereas the latter view suggests that parallel simplification has been a common mode of evolution. To infer patterns of morphological evolution in homobasidiomycetes, we constructed phylogenetic trees from a dataset of 481 species and performed ancestral state reconstruction (ASR) using parsimony and maximum likelihood (ML) methods. ASR with both parsimony and ML implies that the ancestor of the homobasidiomycetes was resupinate, and that there have been multiple gains and losses of complex forms in the homobasidiomycetes. We also used ML to address whether there is an asymmetry in the rate of transformations between simple and complex forms. Models of morphological evolution inferred with ML indicate that the rate of transformations from simple to complex forms is about three to six times greater than the rate of transformations in the reverse direction. A null model of morphological evolution, in which there is no asymmetry in transformation rates, was rejected. These results suggest that there is a 'driven' trend towards the evolution of complex forms in homobasidiomycetes. PMID:12396494
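The rate asymmetry described above can be made concrete with a toy two-state model of lineages switching between simple (resupinate) and complex (pileate-erect) forms; the rates below are illustrative stand-ins for the fitted ML estimates:

```python
def stationary_complex_fraction(rate_gain, rate_loss):
    """Two-state Markov sketch of the 'driven trend': lineages switch
    simple -> complex at rate_gain and complex -> simple at rate_loss.
    At stationarity the fraction of complex forms is
    rate_gain / (rate_gain + rate_loss). Rates here are illustrative,
    not the values inferred in the paper."""
    return rate_gain / (rate_gain + rate_loss)

# With simple -> complex transformations 3-6x faster (the abstract's
# estimated asymmetry), complex forms dominate at stationarity:
low = stationary_complex_fraction(3.0, 1.0)   # 0.75
high = stationary_complex_fraction(6.0, 1.0)  # about 0.86
```

A symmetric null model (rate_gain = rate_loss) would give 0.5, which is the hypothesis the paper rejects.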
To naturalize or not to naturalize? An issue for cognitive science as well as anthropology.
Stenning, Keith
2012-07-01
Several of Beller, Bender, and Medin's (2012) issues are as relevant within cognitive science as between it and anthropology. Knowledge-rich human mental processes impose hermeneutic tasks, both on subjects and on researchers. Psychology's current philosophy of science is ill suited to analyzing these: its demand for "stimulus control" needs to give way to "negotiation of mutual interpretation." Cognitive science has ways to address these issues, as does anthropology. An example from my own work is about how defeasible logics are mathematical models of some aspects of simple hermeneutic processes. They explain processing relative to databases of knowledge and belief, that is, content. A specific example is syllogistic reasoning, which raises issues of experimenters' interpretations of subjects' reasoning. Science, especially since the advent of understandings of computation, does not have to be reductive. How does this approach transfer onto anthropological topics? Recent cognitive science approaches to anthropological topics have taken a reductive stance in terms of modules. We end with some speculations about a different cognitive approach to, for example, religion. Copyright © 2012 Cognitive Science Society, Inc.
Day, Troy
2016-04-01
Epigenetic inheritance is the transmission of nongenetic material such as gene expression levels, RNA and other biomolecules from parents to offspring. There is a growing realization that such forms of inheritance can play an important role in evolution. Bacteria represent a prime example of epigenetic inheritance because a large array of cellular components is transmitted to offspring, in addition to genetic material. Interestingly, there is an extensive and growing empirical literature showing that many bacteria can form 'persister' cells that are phenotypically resistant or tolerant to antibiotics, but most of these results are not interpreted within the context of epigenetic inheritance. Instead, persister cells are usually viewed as a genetically encoded bet-hedging strategy that has evolved in response to a fluctuating environment. Here I show, using a relatively simple model, that many of these empirical findings can be more simply understood as arising from a combination of epigenetic inheritance and cellular noise. I therefore suggest that phenotypic drug tolerance in bacteria might represent one of the best-studied examples of evolution under epigenetic inheritance. © 2016 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Martínez-Casado, R.; Vega, J. L.; Sanz, A. S.; Miret-Artés, S.
2007-08-01
The study of diffusion and low-frequency vibrational motions of particles on metal surfaces is of paramount importance; it provides valuable information on the nature of the adsorbate-substrate and substrate-substrate interactions. In particular, the experimental broadening observed in the diffusive peak with increasing coverage is usually interpreted in terms of a dipole-dipole-like interaction among adsorbates via extensive molecular dynamics calculations within the Langevin framework. Here we present an alternative way to interpret this broadening by means of a purely stochastic description, namely the interacting single-adsorbate approximation, where two noise sources are considered: (1) a Gaussian white noise accounting for the surface friction and temperature, and (2) a white shot noise replacing the interaction potential between adsorbates. Standard Langevin numerical simulations for flat and corrugated surfaces (with a separable potential) illustrate the dynamics of Na atoms on a Cu(100) surface which fit fairly well to the analytical expressions issued from simple models (free particle and anharmonic oscillator) when the Gaussian approximation is assumed. A similar broadening is also expected for the frustrated translational mode peaks.
Use of diagnostics in wound management.
Romanelli, Marco; Miteva, Maria; Romanelli, Paolo; Barbanera, Sabrina; Dini, Valentina
2013-03-01
Wound healing research has progressed impressively over the past years. New insights into the pathogenesis of different chronic wounds and the study of novel treatments have made wound healing a model disorder and have revealed basic cellular and molecular mechanisms underlying chronic wounds. Although the observation is obvious and simple, interpretations by different observers can be quite variable. Interpretations of severity, and of change in severity with treatment, may differ considerably between patients and practitioners. In this review we provide a comprehensive view of different aspects of wound diagnostics, including clinical measurement, new biomarkers in wound pathology, protease evaluation, and future noninvasive sensor-based devices. Wound caregivers are in the unique position of being able to observe wound changes and describe them with knowledge and strict methodology, but also with the wide range of available wound diagnostic devices. The complexity of severity assessment in wound healing is reflected by the multiple clinical scores available. The best objective methods used to evaluate cutaneous tissue repair should have a high specificity and sensitivity and a low inter- and intraobserver variation.
Development of a Global Multilayered Cloud Retrieval System
NASA Technical Reports Server (NTRS)
Huang, J.; Minnis, P.; Lin, B.; Yi, Y.; Ayers, J. K.; Khaiyer, M. M.; Arduini, R.; Fan, T.-F
2004-01-01
A more rigorous multilayered cloud retrieval system (MCRS) has been developed to improve the determination of high cloud properties in multilayered clouds. The MCRS attempts a more realistic interpretation of the radiance field than earlier methods because it explicitly resolves the radiative transfer that would produce the observed radiances. A two-layer cloud model was used to simulate multilayered cloud radiative characteristics. Despite the use of a simplified two-layer cloud reflectance parameterization, the MCRS clearly produced a more accurate retrieval of ice water path than the simple differencing techniques used in the past. More satellite data and ground observations need to be used to test the MCRS. The MCRS methods are quite appropriate for interpreting the radiances when the high cloud has a relatively large optical depth (τ_i > 2). For thinner ice clouds, a more accurate retrieval might be possible using infrared methods. Selection of an ice cloud retrieval and a variety of other issues must be explored before a complete global application of this technique can be implemented. Nevertheless, the initial results look promising.
Endo- vs. exogenous shocks and relaxation rates in book and music “sales”
NASA Astrophysics Data System (ADS)
Lambiotte, R.; Ausloos, M.
2006-04-01
In this paper, we analyse the response of music and book sales to an external field and to buyer herding. We distinguish endogenous and exogenous shocks. We focus on some case studies, whose data have been collected from rankings on amazon.com. We show that an ensemble of equivalent systems quantitatively responds in the same way to a similar “external shock”, indicating roads to universality features. In contrast to Sornette et al. [Phys. Rev. Lett. 93 (2004) 228701], who seemed to find power-law behaviours, in particular at long times, a law interpreted in terms of epidemic activity, we observe that the relaxation process can just as well be seen as an exponential one that saturates toward an asymptotic state, itself different from the pre-shock state. By studying an ensemble of 111 shocks, on books or records, we show that exogenous and endogenous shocks are discriminated by their short-time behaviour: the relaxation time appears to be about half as long for endogenous shocks as for exogenous ones. We interpret the finding through a simple thermodynamic model with a dissipative force.
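The exponential relaxation toward a new asymptotic state, with endogenous shocks relaxing roughly twice as fast as exogenous ones, can be sketched as follows (all parameter values are illustrative, not fitted to the Amazon rank data):

```python
import numpy as np

def sales_response(t, s_pre, s_shock, s_post, tau):
    """Exponential relaxation after a shock at t = 0, saturating toward
    an asymptotic state s_post that differs from the pre-shock level
    s_pre, as the abstract describes."""
    t = np.asarray(t, dtype=float)
    return np.where(t < 0, s_pre,
                    s_post + (s_shock - s_post) * np.exp(-t / tau))

t = np.linspace(-5.0, 30.0, 200)
# Endogenous shock: relaxation time about half that of an exogenous one.
endo = sales_response(t, s_pre=1.0, s_shock=10.0, s_post=2.0, tau=2.0)
exo = sales_response(t, s_pre=1.0, s_shock=10.0, s_post=2.0, tau=4.0)
```

Plotting the two curves on a log scale makes the short-time discrimination between shock types visible as a factor-of-two difference in slope.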
Laxmisan, A.; McCoy, A.B.; Wright, A.; Sittig, D.F.
2012-01-01
Objective Clinical summarization, the process by which relevant patient information is electronically summarized and presented at the point of care, is of increasing importance given the increasing volume of clinical data in electronic health record systems (EHRs). There is a paucity of research on electronic clinical summarization, including the capabilities of currently available EHR systems. Methods We compared different aspects of general clinical summary screens used in twelve different EHR systems using a previously described conceptual model: AORTIS (Aggregation, Organization, Reduction, Interpretation and Synthesis). Results We found a wide variation in the EHRs’ summarization capabilities: all systems were capable of simple aggregation and organization of limited clinical content, but only one demonstrated an ability to synthesize information from the data. Conclusion Improvement of the clinical summary screen functionality for currently available EHRs is necessary. Further research should identify strategies and methods for creating easy to use, well-designed clinical summary screens that aggregate, organize and reduce all pertinent patient information as well as provide clinical interpretations and synthesis as required. PMID:22468161
NASA Astrophysics Data System (ADS)
Rathbun, K.; Ukstins, I.; Drop, S.
2017-12-01
Monturaqui Crater is a small (~350 m diameter), simple meteorite impact crater located in the Atacama Desert of northern Chile that was emplaced in Ordovician granite overlain by discontinuous Pliocene ignimbrite. Ejecta deposits are granite and ignimbrite, with lesser amounts of dark impact melt and rare tektites and iron shale. The impact restructured existing drainage systems in the area, which have subsequently eroded through the ejecta. Satellite-based mapping and modeling, including a synthesis of photographic satellite imagery and ASTER thermal infrared imagery in ArcGIS, were used to construct a basic geological interpretation of the site with special emphasis on understanding ejecta distribution patterns. This was combined with field-based mapping to construct a high-resolution geologic map of the crater and its ejecta blanket and to field-check the satellite-based geologic interpretation. The satellite- and modeling-based interpretation suggests a well-preserved crater with an intact, heterogeneous ejecta blanket that has been subjected to moderate erosion. In contrast, field mapping shows that the crater has a heavily eroded rim and ejecta blanket, and the ejecta is more heterogeneous than previously thought. In addition, the erosion rate at Monturaqui is much higher than erosion rates reported elsewhere in the Atacama Desert. The bulk compositions of the target rocks at Monturaqui are similar and the ejecta deposits are highly heterogeneous, so distinguishing between them with remote sensing is less effective than with direct field observations. In particular, the resolution of available imagery for the site is too low to resolve critical details that are readily apparent in the field on the scale of tens of centimeters, and which significantly alter the geologic interpretation. The limiting factors for effective remote interpretation at Monturaqui are its target composition and crater size relative to the resolution of the remote sensing methods employed.
This suggests that satellite-based mapping of ejecta may have limited utility at small craters due to limitations in source resolution compared to the geology of the site in question.
Whole-Range Assessment: A Simple Method for Analysing Allelopathic Dose-Response Data
An, Min; Pratley, J. E.; Haig, T.; Liu, D.L.
2005-01-01
Based on the typical biological responses of an organism to allelochemicals (hormesis), concepts of whole-range assessment and inhibition index were developed for improved analysis of allelopathic data. Examples of their application are presented using data drawn from the literature. The method is concise and comprehensive, and makes data grouping and multiple comparisons simple, logical, and possible. It improves data interpretation, enhances research outcomes, and is a statistically efficient summary of the plant response profiles. PMID:19330165
MX Siting Investigation, Gravity Survey - Delamar Valley, Nevada.
1981-07-20
reduces the data to Simple Bouguer Anomaly (see Section A1.4, Appendix A1.0). The Defense Mapping Agency Aerospace Center (DMAAC), St. Louis, Missouri... Drawings: (1) Complete Bouguer Anomaly Contours; (2) Depth to Rock, Interpreted from Gravity Data (in pocket at end of report). Section 2.0, Gravity Data Reduction: DMAHTC/GSS obtained the basic observations for the new stations and reduced them to Simple Bouguer
NASA Astrophysics Data System (ADS)
Vicsek, Tamas
1997-03-01
It is demonstrated that a wide range of experimental results on biological motion can be successfully interpreted in terms of statistical physics motivated models taking into account the relevant microscopic details of motor proteins and allowing analytic solutions. Two important examples are considered, i) the motion of a single kinesin molecule along microtubules inside individual cells and ii) muscle contraction which is a macroscopic phenomenon due to the collective action of a large number of myosin heads along actin filaments. i) Recently individual two-headed kinesin molecules have been studied in in vitro motility assays revealing a number of their peculiar transport properties. Here we propose a simple and robust model for the kinesin stepping process with elastically coupled Brownian heads showing all of these properties. The analytic treatment of our model results in a very good fit to the experimental data and practically has no free parameters. ii) Myosin is an ATPase enzyme that converts the chemical energy stored in ATP molecules into mechanical work. During muscle contraction, the myosin cross-bridges attach to the actin filaments and exert force on them yielding a relative sliding of the actin and myosin filaments. In this paper we present a simple mechanochemical model for the cross-bridge interaction involving the relevant kinetic data and providing simple analytic solutions for the mechanical properties of muscle contraction, such as the force-velocity relationship or the relative number of the attached cross-bridges. So far the only analytic formula which could be fitted to the measured force-velocity curves has been the well known Hill equation containing parameters lacking clear microscopic origin. 
The main advantages of our new approach are that it explicitly connects the mechanical data with the kinetic data and the concentration of the ATP and ATPase products and as such it leads to new analytic solutions which agree extremely well with a wide range of experimental curves, while the parameters of the corresponding expressions have well defined microscopic meaning.
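For comparison, the Hill force-velocity relation discussed above is easy to state; a minimal sketch with illustrative (not fitted) parameter values:

```python
def hill_velocity(F, F0, a, b):
    """The classic Hill force-velocity relation (F + a)(v + b) = b(F0 + a),
    solved for shortening velocity v at load F, with isometric (stall)
    force F0. This is the empirical curve the abstract contrasts with its
    mechanochemical model: a and b are fit constants lacking a clear
    microscopic origin. Values used below are illustrative."""
    return b * (F0 + a) / (F + a) - b

v_max = hill_velocity(0.0, F0=1.0, a=0.25, b=0.3)    # unloaded shortening speed
v_stall = hill_velocity(1.0, F0=1.0, a=0.25, b=0.3)  # v = 0 at the stall force F0
```

The abstract's claim is that an equally good fit can be obtained from a model whose parameters map onto cross-bridge kinetics and ATP concentration, rather than onto the opaque constants a and b.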
Notes on interpretation of geophysical data over areas of mineralization in Afghanistan
Drenth, Benjamin J.
2011-01-01
Afghanistan has the potential to contain substantial metallic mineral resources. Although valuable mineral deposits have been identified, much of the country's potential remains unknown. Geophysical surveys, particularly those conducted from airborne platforms, are a well-accepted and cost-effective method for obtaining information on the geological setting of a given area. This report summarizes interpretive findings from various geophysical surveys over selected mineral targets in Afghanistan, highlighting what existing data tell us. These interpretations are mainly qualitative in nature, because of the low resolution of available geophysical data. Geophysical data and simple interpretations are included for these six areas and deposit types: (1) Aynak: Sedimentary-hosted copper; (2) Zarkashan: Porphyry copper; (3) Kundalan: Porphyry copper; (4) Dusar Shaida: Volcanic-hosted massive sulphide; (5) Khanneshin: Carbonatite-hosted rare earth element; and (6) Chagai Hills: Porphyry copper.
Mechanical specific energy versus depth of cut in rock cutting and drilling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yaneng; Zhang, Wu; Gamwo, Isaac
The relationship between Mechanical Specific Energy (MSE) and the Rate of Penetration (ROP), or equivalently the depth of cut per revolution, provides an important measure for strategizing a drilling operation. This study explores how MSE evolves with depth of cut, and presents a concerted effort that encompasses analytical, computational and experimental approaches. A simple model for the relationship between MSE and cutting depth is first derived with consideration of the wear progression of a circular cutter. This is an extension of Detournay and Defourny's phenomenological cutting model. Wear is modeled as a flat contact area at the bottom of a cutter, referred to as a wear flat; in past work the wear flat has often been considered fixed during cutting, but during a drilling operation by a full bit that consists of multiple circular cutters, the wear flat length may increase because of the various wear mechanisms involved. The wear progression of cutters generally results in reduced efficiency, with either increased MSE or decreased ROP. Also, an accurate estimate of the removed rock volume is found to be important for the evaluation of MSE. The derived model is compared with experimental results from a single circular cutter, for cutting soft rock under ambient pressure with the actual depth measured with a micrometer, and for cutting high-strength rock under high pressure with the actual cutting area measured by a confocal microscope. Lastly, the model is employed to interpret the evolution of MSE with depth of cut for a full drilling bit under confining pressure. The general form of the developed model's equation is found to describe the experimental data well and can be applied to interpret drilling data for a full bit.
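The qualitative behavior of MSE versus depth of cut for a blunt cutter can be sketched in the spirit of the Detournay-Defourny decomposition: the cutting force scales with depth of cut, while friction on a wear flat contributes a term that does not, so MSE decays toward the intrinsic specific energy as depth of cut grows. This is a sketch of that idea only; the symbols and numbers are illustrative, not the paper's derived model or fitted values:

```python
def mse_vs_depth(d, eps, mu, sigma, wear_flat):
    """Illustrative blunt-cutter MSE model: the cutting component
    contributes the intrinsic specific energy eps independent of depth,
    while the frictional contact on a wear flat of fixed length adds a
    term that is diluted as depth of cut d increases:
        MSE = eps + mu * sigma * wear_flat / d
    (mu: friction coefficient, sigma: contact stress; all values
    hypothetical, units arbitrary but consistent)."""
    return eps + mu * sigma * wear_flat / d

shallow = mse_vs_depth(0.5, eps=50.0, mu=0.6, sigma=30.0, wear_flat=2.0)
deep = mse_vs_depth(5.0, eps=50.0, mu=0.6, sigma=30.0, wear_flat=2.0)
```

This reproduces the familiar drilling-efficiency picture: MSE is high at small depth of cut and asymptotes toward eps, and wear flat growth shifts the whole curve upward.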
Li, Bangde; Hayes, John E; Ziegler, Gregory R
2014-09-01
Designed experiments provide product developers feedback on the relationship between formulation and consumer acceptability. While actionable, this approach typically assumes a simple psychophysical relationship between ingredient concentration and perceived intensity. This assumption may not be valid, especially in cases where perceptual interactions occur. Additional information can be gained by considering the liking-intensity function, as single ingredients can influence more than one perceptual attribute. Here, 20 coffee-flavored dairy beverages were formulated using a fractional mixture design that varied the amount of coffee extract, fluid milk, sucrose, and water. Overall liking (liking) was assessed by 388 consumers using an incomplete block design (4 out of 20 prototypes) to limit fatigue; all participants also rated the samples for intensity of coffee flavor (coffee), milk flavor (milk), sweetness (sweetness) and thickness (thickness). Across product means, the concentration variables explained 52% of the variance in liking in main-effects multiple regression. The amount of sucrose (β = 0.46) and milk (β = 0.46) contributed significantly to the model (p's <0.02) while coffee extract (β = -0.17; p = 0.35) did not. A comparable model based on perceived intensity explained 63% of the variance in mean liking; sweetness (β = 0.53) and milk (β = 0.69) contributed significantly to the model (p's <0.04), while the influence of coffee flavor (β = 0.48) was positive but only marginally significant (p = 0.09). Since a strong linear relationship existed between coffee extract concentration and coffee flavor, this discrepancy between the two models was unexpected, and probably indicates that adding more coffee extract also adds a negative attribute, e.g. too much bitterness. In summary, modeling liking as a function of both perceived intensity and physical concentration provides a richer interpretation of consumer data.
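The standardized coefficients (β) reported above come from ordinary least squares on z-scored predictors and response. A minimal sketch on synthetic data; the attribute names and effect sizes merely echo the abstract's pattern and are not the study's data:

```python
import numpy as np

def standardized_betas(X, y):
    """Standardized (beta) regression coefficients, the form reported in
    the abstract: z-score each predictor and the response, then fit OLS."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta

# Hypothetical ratings: liking driven mostly by 'milk' and 'sweetness',
# weakly by 'coffee' (column order: sweetness, milk, coffee).
rng = np.random.default_rng(7)
X = rng.normal(size=(200, 3))
y = 0.53 * X[:, 0] + 0.69 * X[:, 1] + 0.2 * X[:, 2] \
    + rng.normal(scale=0.3, size=200)
betas = standardized_betas(X, y)
```

Because the coefficients are standardized, they can be compared across predictors measured on different scales, which is what makes the concentration-based and intensity-based models in the abstract directly comparable.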
Mechanical specific energy versus depth of cut in rock cutting and drilling
Zhou, Yaneng; Zhang, Wu; Gamwo, Isaac; ...
2017-12-07
The relationship between Mechanical Specific Energy (MSE) and the Rate of Penetration (ROP), or equivalently the depth of cut per revolution, provides an important measure for strategizing a drilling operation. This study explores how MSE evolves with depth of cut, and presents a concerted effort that encompasses analytical, computational and experimental approaches. A simple model for the relationship between MSE and cutting depth is first derived with consideration of the wear progression of a circular cutter. This is an extension of Detournay and Defourny's phenomenological cutting model. Wear is modeled as a flat contact area at the bottom of a cutter, referred to as a wear flat; in the past this wear flat has often been considered fixed during cutting. During a drilling operation by a full bit that consists of multiple circular cutters, however, the wear flat length may increase because of the various wear mechanisms involved. The wear progression of cutters generally results in reduced efficiency, with either increased MSE or decreased ROP. Also, an accurate estimate of the removed rock volume is found to be important for the evaluation of MSE. The derived model is compared with experimental results from a single circular cutter, for cutting soft rock under ambient pressure with the actual depth measured by a micrometer, and for cutting high-strength rock under high pressure with the actual cutting area measured by a confocal microscope. Lastly, the model is employed to interpret the evolution of MSE with depth of cut for a full drilling bit under confining pressure. The general form of the developed model's equation is found to describe the experimental data well and can be applied to interpret drilling data for a full bit.
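The qualitative behavior can be sketched with a Detournay-and-Defourny-style decomposition: a cutting force proportional to depth of cut plus a frictional force on the wear flat, so MSE falls toward the rock's intrinsic specific energy as depth grows. This is an illustrative simplification, not the paper's exact formulation, and all parameter values below are assumed:

```python
# Sketch: MSE(d) = eps + mu*sigma*ell/d for a worn cutter of unit width,
# where the frictional term on the wear flat dominates at shallow cuts.
def mse(d, eps=60e6, mu=0.6, sigma=200e6, ell=1e-3):
    """Mechanical specific energy (Pa) at depth of cut d (m).

    eps   : intrinsic specific energy of the rock (Pa)  -- assumed value
    mu    : friction coefficient on the wear flat       -- assumed value
    sigma : contact stress on the wear flat (Pa)        -- assumed value
    ell   : wear-flat length (m)                        -- assumed value
    """
    # Per unit width: cutting force = eps*d, frictional force = mu*sigma*ell,
    # and MSE is total force divided by the cut cross-section (d * width).
    return eps + mu * sigma * ell / d

shallow, deep = mse(0.2e-3), mse(2e-3)
print(shallow / 1e6, deep / 1e6)  # MSE falls toward eps as depth of cut grows
```

A longer wear flat `ell` (wear progression) raises the whole curve, which matches the reduced efficiency described in the abstract.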
Web-Based Model Visualization Tools to Aid in Model Optimization and Uncertainty Analysis
NASA Astrophysics Data System (ADS)
Alder, J.; van Griensven, A.; Meixner, T.
2003-12-01
Individuals applying hydrologic models need quick, easy-to-use visualization tools to assess and understand model performance. We present here the Interactive Hydrologic Modeling (IHM) visualization toolbox. The IHM utilizes high-speed Internet access, the portability of the web and the increasing power of modern computers to provide an online toolbox for quick and easy model result visualization. This visualization interface allows for the interpretation and analysis of Monte-Carlo and batch model simulation results. Often, a given project will generate several thousand or even hundreds of thousands of simulations. This large number of simulations creates a challenge for post-simulation analysis. IHM addresses this problem by loading all of the data into a database with a web interface that can dynamically generate graphs for the user according to their needs. IHM currently supports: a global sample statistics table (e.g. sum of squares error, sum of absolute differences, etc.), a top-ten simulations table and graphs, graphs of an individual simulation using time-step data, objective-based dotty plots, threshold-based parameter cumulative density function graphs (as used in the regional sensitivity analysis of Spear and Hornberger) and 2D error surface graphs of the parameter space. IHM is suitable for everything from the simplest bucket model to the largest set of Monte-Carlo model simulations with a multi-dimensional parameter and model output space. By using a web interface, IHM offers the user complete flexibility: they can be anywhere in the world using any operating system. IHM can be a time- and money-saving alternative to producing graphs or conducting analyses that may not be informative, or being forced to purchase expensive and proprietary software. IHM is a simple, free method of interpreting and analyzing batch model results, and is suitable for novice to expert hydrologic modelers.
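The post-processing IHM automates can be sketched in a few lines: given a batch of Monte-Carlo runs, compute a sum-of-squares objective per run, extract a top-ten table, and split runs at an objective threshold (the basis of dotty plots and Spear-and-Hornberger-style CDFs). The one-parameter model and observations here are toy stand-ins, not a real hydrologic model:

```python
import numpy as np

# Toy Monte-Carlo batch analysis: objective per run, top-ten, threshold split.
rng = np.random.default_rng(1)
obs = np.array([1.0, 2.0, 3.0, 4.0])

def model(k):                       # stand-in one-parameter "hydrologic" model
    return k * np.array([1, 2, 3, 4], dtype=float)

params = rng.uniform(0.5, 1.5, 5000)                # Monte-Carlo sample
sims = np.array([model(k) for k in params])
sse = np.sum((sims - obs) ** 2, axis=1)             # objective per simulation

top10 = params[np.argsort(sse)[:10]]                # top-ten simulations table
behavioural = params[sse < np.percentile(sse, 10)]  # threshold split for CDFs
print(round(top10.mean(), 3), behavioural.size)
```

Plotting `sse` against `params` gives the dotty plot; the empirical CDFs of `behavioural` versus the rest give the regional sensitivity analysis view.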
Assessing map accuracy in a remotely sensed, ecoregion-scale cover map
Edwards, T.C.; Moisen, Gretchen G.; Cutler, D.R.
1998-01-01
Landscape- and ecoregion-based conservation efforts increasingly use a spatial component to organize data for analysis and interpretation. A challenge particular to remotely sensed cover maps generated from these efforts is how best to assess the accuracy of the cover maps, especially when they can exceed 1000s of km2 in size. Here we develop and describe a methodological approach for assessing the accuracy of large-area cover maps, using as a test case the 21.9 million ha cover map developed for Utah Gap Analysis. As part of our design process, we first reviewed the effect of intracluster correlation and a simple cost function on the relative efficiency of cluster sample designs to simple random designs. Our design ultimately combined clustered and subsampled field data stratified by ecological modeling unit and accessibility (hereafter a mixed design). We next outline estimation formulas for simple map accuracy measures under our mixed design and report results for eight major cover types and the three ecoregions mapped as part of the Utah Gap Analysis. Overall accuracy of the map was 83.2% (SE=1.4). Within ecoregions, accuracy ranged from 78.9% to 85.0%. Accuracy by cover type varied, ranging from a low of 50.4% for barren to a high of 90.6% for man modified. In addition, we examined gains in efficiency of our mixed design compared with a simple random sample approach. In regard to precision, our mixed design was more precise than a simple random design, given fixed sample costs. We close with a discussion of the logistical constraints facing attempts to assess the accuracy of large-area, remotely sensed cover maps.
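The basic accuracy measures are computed from an error (confusion) matrix of mapped versus reference classes. The matrix below is purely illustrative, not the Utah data, and the standard error shown is the simple-random-sampling form; the paper's mixed cluster design requires design-based estimators instead:

```python
import numpy as np

# Overall and per-class accuracy from an illustrative 3-class error matrix.
cm = np.array([[80,  5,  2],     # rows: mapped class, cols: reference class
               [ 6, 70,  9],
               [ 3,  8, 60]])

n = cm.sum()
overall = np.trace(cm) / n                      # overall proportion correct
users = np.diag(cm) / cm.sum(axis=1)            # per-mapped-class (user's) accuracy
se_srs = np.sqrt(overall * (1 - overall) / n)   # SE under simple random sampling
print(round(overall, 3), np.round(users, 3), round(se_srs, 4))
```

Under a cluster design, intracluster correlation inflates this SE by a design effect, which is exactly the efficiency trade-off the abstract weighs against per-sample cost.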
BatSLAM: Simultaneous localization and mapping using biomimetic sonar.
Steckel, Jan; Peremans, Herbert
2013-01-01
We propose to combine a biomimetic navigation model which solves a simultaneous localization and mapping task with a biomimetic sonar mounted on a mobile robot to address two related questions. First, can robotic sonar sensing lead to intelligent interactions with complex environments? Second, can we model sonar based spatial orientation and the construction of spatial maps by bats? To address these questions we adapt the mapping module of RatSLAM, a previously published navigation system based on computational models of the rodent hippocampus. We analyze the performance of the proposed robotic implementation operating in the real world. We conclude that the biomimetic navigation model operating on the information from the biomimetic sonar allows an autonomous agent to map unmodified (office) environments efficiently and consistently. Furthermore, these results also show that successful navigation does not require the readings of the biomimetic sonar to be interpreted in terms of individual objects/landmarks in the environment. We argue that the system has applications in robotics as well as in the field of biology as a simple, first-order model for sonar based spatial orientation and map building.
NASA Technical Reports Server (NTRS)
Butner, Harold M.
1999-01-01
Our understanding of the inter-relationship between the collapsing cloud envelope and the disk has been greatly altered. While the dominant star formation models invoke free-fall collapse and an r^(-1.5) density profile, other star formation models are possible. These models invoke either different cloud starting conditions or the mediating effects of magnetic fields to alter the cloud geometry during collapse. To test these models, it is necessary to understand the envelope's physical structure. However, the discovery of disks around young stellar objects, based on millimeter observations, makes a simple interpretation of the emission complicated. Depending on the wavelength, the disk or the envelope could dominate the emission from a star. In addition, the discovery of planets around other stars has made understanding the disks in their own right quite important. Many star formation models predict that disks should form naturally as the star is forming. In many cases, the information we derive about disk properties depends implicitly on the assumed envelope properties. How to understand the two components and their interaction with each other is a key problem of current star formation.
NASA Astrophysics Data System (ADS)
Hamim, Salah Uddin Ahmed
Nanoindentation involves probing a hard diamond tip into a material, where the load and the displacement experienced by the tip are recorded continuously. This load-displacement data is a direct function of the material's innate stress-strain behavior. Thus, it is theoretically possible to extract mechanical properties of a material through nanoindentation. However, due to the various nonlinearities associated with nanoindentation, the process of interpreting load-displacement data into material properties is difficult. Although simple elastic behavior can be characterized easily, a method to characterize complicated material behavior such as nonlinear viscoelasticity is still lacking. In this study, a nanoindentation-based material characterization technique is developed to characterize soft materials exhibiting nonlinear viscoelasticity. The nanoindentation experiment was modeled in finite element analysis software (ABAQUS), where nonlinear viscoelastic behavior was incorporated using a user-defined subroutine (UMAT). The model parameters were calibrated using a process called inverse analysis. In this study, a surrogate model-based approach was used for the inverse analysis. The different factors affecting the surrogate model performance are analyzed in order to optimize performance with respect to computational cost.
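Surrogate-based inverse analysis can be sketched in miniature: evaluate the expensive simulator at a few design points, fit a cheap surrogate, then calibrate the parameter by minimizing the surrogate-versus-experiment misfit. Here a known quadratic stands in for the finite element model, and the single "stiffness" parameter is hypothetical:

```python
import numpy as np

# Stand-in for the expensive FE simulation (assumption: a known quadratic).
def simulator(E):
    return 2.0 * E ** 2 + 1.0     # "peak load" as a function of a parameter E

design = np.linspace(0.5, 2.0, 6)            # a handful of expensive runs
responses = np.array([simulator(E) for E in design])
coeffs = np.polyfit(design, responses, 2)    # cheap polynomial surrogate

target = simulator(1.3)                      # synthetic "experimental" datum
grid = np.linspace(0.5, 2.0, 1501)
misfit = (np.polyval(coeffs, grid) - target) ** 2
E_hat = grid[np.argmin(misfit)]              # calibrated parameter
print(round(E_hat, 3))
```

The factors the thesis analyzes (number and placement of design points, surrogate form) control how faithfully `coeffs` reproduces the simulator, and hence the quality of `E_hat`.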
Bayesian model calibration of ramp compression experiments on Z
NASA Astrophysics Data System (ADS)
Brown, Justin; Hund, Lauren
2017-06-01
Bayesian model calibration (BMC) is a statistical framework to estimate inputs for a computational model in the presence of multiple uncertainties, making it well suited to dynamic experiments which must be coupled with numerical simulations to interpret the results. Often, dynamic experiments are diagnosed using velocimetry and this output can be modeled using a hydrocode. Several calibration issues unique to this type of scenario including the functional nature of the output, uncertainty of nuisance parameters within the simulation, and model discrepancy identifiability are addressed, and a novel BMC process is proposed. As a proof of concept, we examine experiments conducted on Sandia National Laboratories' Z-machine which ramp compressed tantalum to peak stresses of 250 GPa. The proposed BMC framework is used to calibrate the cold curve of Ta (with uncertainty), and we conclude that the procedure results in simple, fast, and valid inferences. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
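A minimal Bayesian-calibration loop can be sketched with a toy forward model and a random-walk Metropolis sampler; this is an illustration of the BMC idea only, not the paper's hydrocode setup, and all data and parameters below are synthetic:

```python
import numpy as np

# Calibrate one slope parameter of a linear forward model against noisy
# synthetic "velocimetry", returning a posterior mean with uncertainty.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 50)
truth = 3.0
data = truth * t + rng.normal(0, 0.05, t.size)    # synthetic observations

def log_post(theta, sigma=0.05):
    resid = data - theta * t                      # flat prior assumed
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

samples, theta = [], 1.0
lp = log_post(theta)
for _ in range(5000):                             # random-walk Metropolis
    prop = theta + rng.normal(0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[1000:])                   # drop burn-in
print(round(post.mean(), 2), round(post.std(), 3))
```

The paper's additional complications (functional velocity output, nuisance parameters, model discrepancy) enter through a richer likelihood, but the sampling skeleton is the same.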
Brian Hears: Online Auditory Processing Using Vectorization Over Channels
Fontaine, Bertrand; Goodman, Dan F. M.; Benichoux, Victor; Brette, Romain
2011-01-01
The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in “Brian Hears,” a library for the spiking neural network simulator package “Brian.” This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations. PMID:21811453
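The vectorization idea can be shown with a deliberately simple filterbank: one first-order low-pass recursion applied to all channels at once through array operations, so the loop over channels disappears and only the time loop remains. Real cochlear models use gammatone-style cascades; the filter and cutoffs here are assumptions for illustration:

```python
import numpy as np

# One-pole low-pass filterbank, vectorized over 16 frequency channels.
fs = 8000.0
t = np.arange(400) / fs                               # 50 ms of signal
sound = np.sin(2 * np.pi * 440 * t)

cf = np.logspace(np.log10(100), np.log10(2000), 16)   # per-channel cutoffs
alpha = np.exp(-2 * np.pi * cf / fs)                  # per-channel coefficient

out = np.zeros((t.size, cf.size))
state = np.zeros(cf.size)
for n in range(t.size):            # time loop stays; channel loop is gone
    state = alpha * state + (1 - alpha) * sound[n]    # all channels at once
    out[n] = state
print(out.shape)
```

Because each time step updates every channel with one vectorized expression, the interpreter overhead per sample is amortized over all channels, which is the point the abstract makes about high-level languages.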
Development and application of a hillslope hydrologic model
Blain, C.A.; Milly, P.C.D.
1991-01-01
A vertically integrated two-dimensional lateral flow model of soil moisture has been developed. Derivation of the governing equation is based on a physical interpretation of hillslope processes. The lateral subsurface-flow model permits variability of precipitation and evapotranspiration, and allows arbitrary specification of soil-moisture retention properties. Variable slope, soil thickness, and saturation are all accommodated. The numerical solution method, a Crank-Nicolson, finite-difference, upstream-weighted scheme, is simple and robust. A small catchment in northeastern Kansas is the subject of an application of the lateral subsurface-flow model. Calibration of the model using observed discharge provides estimates of the active porosity (0.1 cm3/cm3) and of the saturated horizontal hydraulic conductivity (40 cm/hr). The latter figure is at least an order of magnitude greater than the vertical hydraulic conductivity associated with the silty clay loam soil matrix. The large value of hydraulic conductivity derived from the calibration is suggestive of macropore-dominated hillslope drainage. The corresponding value of active porosity agrees well with a published average value of the difference between total porosity and field capacity for a silty clay loam. © 1991.
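The core of such a scheme can be illustrated with a Crank-Nicolson step for a 1-D linear diffusion equation; the actual hillslope model adds upstream weighting, variable slope and thickness, and source terms, so this is only the implicit/explicit averaging skeleton, with arbitrary grid and diffusivity values:

```python
import numpy as np

# Crank-Nicolson step: (I - r*L) u_new = (I + r*L) u_old, Dirichlet ends.
nx, dx, dt, D = 21, 1.0, 0.5, 1.0
r = D * dt / (2 * dx ** 2)

u = np.zeros(nx)
u[nx // 2] = 1.0                       # initial moisture pulse mid-domain

A = np.eye(nx) * (1 + 2 * r)           # implicit (new-time) operator
B = np.eye(nx) * (1 - 2 * r)           # explicit (old-time) operator
for i in range(1, nx - 1):
    A[i, i - 1] = A[i, i + 1] = -r
    B[i, i - 1] = B[i, i + 1] = r
for i in (0, nx - 1):                  # fixed (zero) boundary values
    A[i, :] = 0.0
    A[i, i] = 1.0
    B[i, :] = 0.0

for _ in range(50):                    # march the pulse forward in time
    u = np.linalg.solve(A, B @ u)
print(round(u.max(), 4))
```

The tridiagonal solve per step is what makes the scheme cheap; the averaging of old and new time levels is what gives Crank-Nicolson its second-order accuracy in time.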
Robust mislabel logistic regression without modeling mislabel probabilities.
Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun
2018-03-01
Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt a robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
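The weighting scheme can be sketched in a simplified form: refit a weighted logistic score equation where each observation is weighted by its fitted likelihood raised to the power γ, so labels the current fit finds improbable (suspected mislabels) are down-weighted. This is an illustration in the spirit of the proposal, not the paper's exact estimating equation, and the data, learning rate, and mislabel rate are invented:

```python
import numpy as np

# Synthetic logistic data with 10% flipped labels; compare gamma = 0
# (plain logistic score) with gamma = 1 (likelihood-weighted score).
rng = np.random.default_rng(3)
n = 400
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-2 * x))             # true slope beta = 2, no intercept
y = (rng.uniform(size=n) < p).astype(float)
flip = rng.uniform(size=n) < 0.1         # 10% mislabeled responses
y[flip] = 1 - y[flip]

def fit(gamma, iters=500, lr=1.0):
    b = 0.0
    for _ in range(iters):               # gradient ascent on the weighted score
        q = 1 / (1 + np.exp(-b * x))
        lik = np.where(y == 1, q, 1 - q)
        w = lik ** gamma                 # gamma = 0 recovers plain logistic
        b += lr * np.mean(w * (y - q) * x)
    return b

b_mle, b_gamma = fit(0.0), fit(1.0)
print(round(b_mle, 2), round(b_gamma, 2))
```

Mislabeled points at large |x| receive tiny likelihoods and hence tiny weights, so the weighted fit suffers less of the attenuation that mislabeling inflicts on the plain maximum-likelihood slope.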
NASA Astrophysics Data System (ADS)
Stramaglia, S.; Pellicoro, M.; Angelini, L.; Amico, E.; Aerts, H.; Cortés, J. M.; Laureys, S.; Marinazzo, D.
2017-04-01
Dynamical models implemented on the large scale architecture of the human brain may shed light on how function arises from the underlying structure. This is the case notably for simple abstract models, such as the Ising model. We compare the spin correlations of the Ising model and the empirical functional brain correlations, both at the single link level and at the modular level, and show that their match increases at the modular level in anesthesia, in line with recent results and theories. Moreover, we show that at the peak of the specific heat (the critical state), the spin correlations are minimally shaped by the underlying structural network, explaining how the best match between structure and function is obtained at the onset of criticality, as previously observed. These findings confirm that brain dynamics under anesthesia shows a departure from criticality and could open the way to novel perspectives when the conserved magnetization is interpreted in terms of a homeostatic principle imposed to neural activity.
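The machinery behind "the peak of the specific heat" can be shown in miniature: Metropolis dynamics for an Ising model on a fixed network, scanning temperature and computing C = Var(E) / (N T²) (with J = kB = 1) to locate the finite-size critical region. A 4x4 periodic lattice stands in here for the large-scale brain network; sweep counts and temperatures are arbitrary choices:

```python
import numpy as np

# Metropolis Ising on a 4x4 periodic lattice; specific heat per spin vs T.
rng = np.random.default_rng(4)
L = 4
N = L * L
neigh = [[(i // L) * L + (i + 1) % L, (i // L) * L + (i - 1) % L,
          (i + L) % N, (i - L) % N] for i in range(N)]

def specific_heat(T, sweeps=2000):
    s = rng.choice([-1, 1], N)
    E_samples = []
    for sweep in range(sweeps):
        for i in range(N):                       # one Metropolis sweep
            dE = 2 * s[i] * sum(s[j] for j in neigh[i])
            if dE <= 0 or rng.uniform() < np.exp(-dE / T):
                s[i] *= -1
        if sweep >= sweeps // 2:                 # sample after equilibration
            E = -sum(s[i] * s[j] for i in range(N) for j in neigh[i]) / 2
            E_samples.append(E)
    return np.array(E_samples, dtype=float).var() / (N * T ** 2)

Ts = [1.5, 2.3, 4.0]
C = [specific_heat(T) for T in Ts]
print([round(c, 3) for c in C])  # largest value expected near T ~ 2.3
```

Replacing the lattice adjacency with an empirical structural connectome, and comparing spin correlations with functional correlations at each temperature, reproduces the paper's setup in outline.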
Simple mathematical law benchmarks human confrontations.
Johnson, Neil F; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto
2013-12-10
Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.
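The benchmark in question is a power-law-like relationship for event severity and timing. A standard building block for such analyses is the continuous maximum-likelihood estimate of a power-law exponent, alpha = 1 + n / sum(log(x / xmin)), sketched here on synthetic data; this Clauset-style estimator is illustrative and is not claimed to be the paper's exact procedure:

```python
import numpy as np

# Draw from p(x) ~ x^(-alpha) for x >= xmin by inverse-transform sampling,
# then recover the exponent with the continuous MLE.
rng = np.random.default_rng(5)
alpha_true, xmin, n = 2.5, 1.0, 20000

u = rng.uniform(size=n)
x = xmin * (1 - u) ** (-1 / (alpha_true - 1))   # inverse of the power-law CDF

alpha_hat = 1 + n / np.sum(np.log(x / xmin))    # maximum-likelihood exponent
print(round(alpha_hat, 2))
```

Applied per dataset, such an exponent estimate is the kind of quantity a cross-domain benchmark can be checked against.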
Forsdyke, D. R.
1971-01-01
1. Rat lymph-node cells were incubated in serum and medium 199 with [5-3H]uridine or [5-3H]cytidine and acid-precipitable radioactivity was measured. Results were interpreted in terms of an isotope-dilution model. 2. Both serum and medium 199 contained pools that inhibited radioactive labelling in a competitive manner. The serum activity was diffusible and inhibited labelling with [3H]cytidine more than with [3H]uridine; in these respects the activity resembled cytidine (14μm). 3. The pools in serum and plasma were the same size; however, the rate of labelling was greater in plasma, owing to a diffusible factor. 4. Paradoxically, relatively simple media (Earle's salts and Eagle's minimum essential) appeared to have a larger pool than the more complex pyrimidine-containing medium 199; this suggests a contribution to the pool by cells in the simple media. 5. In the absence of pools the average cell was capable of incorporating 2000 radioactive nucleoside molecules/s. PMID:4947658
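The isotope-dilution reading of these data can be sketched with a simple competition formula: incorporation of the radioactive nucleoside is diluted by an unlabelled pool, so observed labelling scales as tracer / (tracer + pool). The maximal rate of 2000 molecules/s per cell comes from the abstract; the tracer and pool concentrations below are arbitrary illustrations:

```python
# Competitive isotope-dilution model of nucleoside labelling.
def labelling(tracer, pool, vmax=2000.0):
    """Incorporated radioactive molecules/s per cell.

    tracer, pool in the same (arbitrary) concentration units; vmax is the
    incorporation rate in the absence of any diluting pool (from the abstract).
    """
    return vmax * tracer / (tracer + pool)

print(labelling(10.0, 0.0), labelling(10.0, 10.0))
```

A larger unlabelled pool (as inferred for serum and the simple media) lowers the apparent labelling at fixed tracer concentration, which is the competitive inhibition pattern the paper reports.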
Complex networks as an emerging property of hierarchical preferential attachment.
Hébert-Dufresne, Laurent; Laurence, Edward; Allard, Antoine; Young, Jean-Gabriel; Dubé, Louis J
2015-12-01
Real complex systems are not rigidly structured; no clear rules or blueprints exist for their construction. Yet, amidst their apparent randomness, complex structural properties universally emerge. We propose that an important class of complex systems can be modeled as an organization of many embedded levels (potentially infinite in number), all of them following the same universal growth principle known as preferential attachment. We give examples of such hierarchy in real systems, for instance, in the pyramid of production entities of the film industry. More importantly, we show how real complex networks can be interpreted as a projection of our model, from which their scale independence, their clustering, their hierarchy, their fractality, and their navigability naturally emerge. Our results suggest that complex networks, viewed as growing systems, can be quite simple, and that the apparent complexity of their structure is largely a reflection of their unobserved hierarchical nature.
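The growth rule applied at every embedded level can be shown at a single level: preferential attachment, where each new node links to an existing node with probability proportional to its degree. The edge-list sampling trick below is a standard implementation device, and the network size is an arbitrary choice:

```python
import numpy as np

# Single-level preferential attachment: sampling a uniform entry from the
# edge-endpoint list selects nodes with probability proportional to degree.
rng = np.random.default_rng(6)
targets = [0, 1, 0, 1]          # start from one (doubled) edge between 0 and 1
n_nodes = 5000
for new in range(2, n_nodes):
    t = targets[rng.integers(len(targets))]   # degree-proportional choice
    targets += [new, t]                       # record both endpoints

degree = np.bincount(targets)
print(degree.max(), float(np.median(degree)))
```

The heavy-tailed degree sequence (a few hubs, a majority of degree-1 nodes) is the scale independence the paper derives as a projection of the same rule operating across hierarchical levels.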
The very low frequency power spectrum of Centaurus X-3
NASA Technical Reports Server (NTRS)
Gruber, D. E.
1988-01-01
The long-term variability of Cen X-3 on time scales ranging from days to years has been examined by combining data obtained by the HEAO 1 A-4 instrument with data from Vela 5B. A simple interpretation of the data is made in terms of the standard alpha-disk model of accretion disk structure and dynamics. Assuming that the low-frequency variance represents the inherent variability of the mass transfer from the companion, the decline in power at higher frequencies results from the leveling of radial structure in the accretion disk through viscous mixing. The shape of the observed power spectrum is shown to be in excellent agreement with a calculation based on a simplified form of this model. The observed low-frequency power spectrum of Cen X-3 is consistent with a disk in which viscous mixing occurs about as rapidly as possible and on the largest scale possible.
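The qualitative spectral shape discussed here, strong low-frequency power declining toward higher frequencies, can be illustrated with a periodogram of a red-noise light curve; the random-walk process and all parameters below are arbitrary stand-ins, not a fit to the HEAO 1 or Vela 5B data:

```python
import numpy as np

# Periodogram of a random-walk "light curve": power falls roughly as 1/f^2.
rng = np.random.default_rng(7)
n = 4096
flux = np.cumsum(rng.normal(size=n))        # integrated white noise

power = np.abs(np.fft.rfft(flux - flux.mean())) ** 2
freq = np.fft.rfftfreq(n, d=1.0)
low = power[(freq > 0) & (freq < 0.01)].mean()
high = power[freq > 0.1].mean()
print(low > high)
```

In the alpha-disk interpretation, viscous mixing in the disk plays the role of the integration here, smoothing fast fluctuations in the mass transfer and leaving the observed decline of power with frequency.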
Is the negative glow plasma of a direct current glow discharge negatively charged?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogdanov, E. A.; Saifutdinov, A. I.; Demidov, V. I., E-mail: Vladimir.Demidov@mail.wvu.edu
A classic problem in gas discharge physics is discussed: what is the sign of the charge density in the negative glow region of a glow discharge? It is shown that the traditional interpretation in textbooks on gas discharge physics, which states that the negative glow plasma carries a negative charge, is based on analogies with a simple one-dimensional model of the discharge. Because real glow discharges with a positive column are always two-dimensional, the transversal (radial) term in the divergence of the electric field can provide a non-monotonic axial profile of the charge density in the plasma while maintaining a positive sign. A numerical calculation of a glow discharge is presented, showing a positive space charge in the negative glow under conditions where a one-dimensional model of the discharge would predict a negative space charge.
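The paper's point rests on Gauss's law in cylindrical coordinates, rho/eps0 = dEz/dz + (1/r) d(r Er)/dr: the axial term alone (the 1-D picture) can be negative while the radial term keeps the total positive. The field values below are an arbitrary analytic illustration, not a discharge solution:

```python
# Sign of the space charge from the two terms of div E (cylindrical symmetry).
eps0 = 8.854e-12        # vacuum permittivity, F/m

dEz_dz = -100.0         # assumed axial term (V/m^2); alone it implies rho < 0
a = 80.0                # assumed radial field Er = a*r (V/m per m)
radial_term = 2 * a     # for Er = a*r, (1/r) d(r*Er)/dr = 2a everywhere

rho = eps0 * (dEz_dz + radial_term)
print(rho > 0)          # positive space charge despite the negative axial term
```

This is exactly the mechanism the abstract describes: a 1-D analysis sees only `dEz_dz` and infers a negative charge that the full 2-D divergence does not support.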
NASA Technical Reports Server (NTRS)
Haralick, R. M.
1982-01-01
The facet model was used to accomplish step edge detection. The essence of the facet model is that any analysis made on the basis of the pixel values in some neighborhood has its final authoritative interpretation relative to the underlying grey tone intensity surface, of which the neighborhood pixel values are observed noisy samples. Pixels which are part of regions have simple grey tone intensity surfaces over their areas. Pixels which have an edge in them have complex grey tone intensity surfaces over their areas. Specifically, an edge moves through a pixel only if there is some point in the pixel's area having a zero crossing of the second directional derivative taken in the direction of a non-zero gradient at the pixel's center. To determine whether or not a pixel should be marked as a step edge pixel, its underlying grey tone intensity surface was estimated on the basis of the pixels in its neighborhood.
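The surface-fitting step can be sketched directly: fit a bivariate quadratic to each 3x3 neighborhood by least squares and read the gradient off the first-order coefficients. The full facet test additionally locates a zero crossing of the second directional derivative along the gradient inside the pixel; this sketch shows only the facet fit and gradient, on two hand-made patches:

```python
import numpy as np

# Least-squares quadratic facet over a 3x3 neighborhood (basis 1, x, y, x^2, xy, y^2).
xs, ys = np.meshgrid([-1, 0, 1], [-1, 0, 1])
X = np.column_stack([np.ones(9), xs.ravel(), ys.ravel(),
                     xs.ravel() ** 2, xs.ravel() * ys.ravel(), ys.ravel() ** 2])

def facet_gradient(patch):
    """Gradient magnitude of the fitted facet at the center pixel."""
    k, *_ = np.linalg.lstsq(X, patch.ravel(), rcond=None)
    return np.hypot(k[1], k[2])           # first-order facet coefficients

flat = np.full((3, 3), 5.0)               # interior of a region: zero gradient
step = np.array([[0, 0, 10],              # vertical step edge through the patch
                 [0, 0, 10],
                 [0, 0, 10]], dtype=float)
print(facet_gradient(flat), facet_gradient(step))
```

Because the fit is to the estimated underlying surface rather than to raw differences of noisy pixels, the derivative estimates inherit the noise averaging of the least-squares facet, which is the model's central idea.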