Simple and detailed conceptual model diagram and associated narrative for ammonia, dissolved oxygen, flow alteration, herbicides, insecticides, ionic strength, metals, nutrients, pH, physical habitat, sediments, temperature, and unspecified toxic chemicals.
Complexity-aware simple modeling.
Gómez-Schiavon, Mariana; El-Samad, Hana
2018-02-26
Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models, as well as their assumption of modularity and insulation, makes them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach, complexity-aware simple modeling, that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.
Adaptive exponential integrate-and-fire model as an effective description of neuronal activity.
Brette, Romain; Gerstner, Wulfram
2005-11-01
We introduce a two-dimensional integrate-and-fire model that combines an exponential spike mechanism with an adaptation equation, based on recent theoretical findings. We describe a systematic method to estimate its parameters with simple electrophysiological protocols (current-clamp injection of pulses and ramps) and apply it to a detailed conductance-based model of a regular spiking neuron. Our simple model correctly predicts the timing of 96% of the spikes (+/-2 ms) of the detailed model in response to injection of noisy synaptic conductances. The model is especially reliable in high-conductance states, typical of cortical activity in vivo, in which intrinsic conductances were found to have a reduced role in shaping spike trains. These results are promising because this simple model has enough expressive power to reproduce qualitatively several electrophysiological classes described in vitro.
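The two-dimensional model described above pairs an exponential spike mechanism with an adaptation variable. A minimal forward-Euler sketch of such an adaptive exponential integrate-and-fire (AdEx) neuron is given below; all parameter values and the input current are illustrative assumptions, not the values fitted by Brette and Gerstner.

```python
import math

def simulate_adex(I=0.5e-9, t_max=0.5, dt=1e-5):
    """Forward-Euler AdEx neuron; returns spike times (s)."""
    # Membrane parameters (illustrative): capacitance, leak, rest
    C, g_L, E_L = 200e-12, 10e-9, -70e-3
    V_T, Delta_T = -50e-3, 2e-3          # exponential threshold and slope
    a, tau_w, b = 2e-9, 0.1, 20e-12      # adaptation coupling, time constant, jump
    V_reset, V_spike = -58e-3, 0.0       # reset voltage and spike cutoff
    V, w = E_L, 0.0
    spike_times = []
    for n in range(int(round(t_max / dt))):
        dV = (-g_L * (V - E_L)
              + g_L * Delta_T * math.exp((V - V_T) / Delta_T)
              - w + I) / C
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_spike:                 # spike: reset V, increment adaptation
            spike_times.append(n * dt)
            V = V_reset
            w += b
    return spike_times

spikes = simulate_adex()
```

On each spike the voltage is reset and the adaptation current w is incremented, which is what produces spike-frequency adaptation in this class of models.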
Magneto-hydrodynamic modeling of gas discharge switches
NASA Astrophysics Data System (ADS)
Doiphode, P.; Sakthivel, N.; Sarkar, P.; Chaturvedi, S.
2002-12-01
We have performed one-dimensional, time-dependent magneto-hydrodynamic modeling of fast gas-discharge switches. The model has been applied to both high- and low-pressure switches, involving a cylindrical argon-filled cavity. It is assumed that the discharge is initiated in a small channel near the axis of the cylinder. Joule heating in this channel rapidly raises its temperature and pressure. This drives a radial shock wave that heats and ionizes the surrounding low-temperature region, resulting in progressive expansion of the current channel. Our model is able to reproduce this expansion. However, significant difference of detail is observed, as compared with a simple model reported in the literature. In this paper, we present details of our simulations, a comparison with results from the simple model, and a physical interpretation for these differences. This is a first step towards development of a detailed 2-D model for such switches.
Comparison of different objective functions for parameterization of simple respiration models
M.T. van Wijk; B. van Putten; D.Y. Hollinger; A.D. Richardson
2008-01-01
The eddy covariance measurements of carbon dioxide fluxes collected around the world offer a rich source for detailed data analysis. Simple, aggregated models are attractive tools for gap filling, budget calculation, and upscaling in space and time. Key in the application of these models is their parameterization and a robust estimate of the uncertainty and reliability...
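As a toy illustration of how the choice of objective function enters the parameterization of a simple respiration model, the sketch below fits a Q10 model, R(T) = R10 * Q10**((T - 10)/10), by brute-force grid search under two objectives: sum of squared errors and sum of absolute errors. The model form, grid, and noise-free synthetic data are assumptions for illustration; with noisy flux data the two criteria generally yield different parameter estimates.

```python
def q10_model(T, R10, Q10):
    # Respiration at temperature T given base rate R10 and sensitivity Q10
    return R10 * Q10 ** ((T - 10.0) / 10.0)

def fit(temps, fluxes, objective):
    # Crude but transparent brute-force grid search over (R10, Q10)
    best, best_cost = None, float("inf")
    for i in range(1, 61):
        R10 = 0.05 * i                    # 0.05 .. 3.0
        for j in range(1, 41):
            Q10 = 1.0 + 0.05 * j          # 1.05 .. 3.0
            residuals = [q10_model(T, R10, Q10) - f
                         for T, f in zip(temps, fluxes)]
            cost = objective(residuals)
            if cost < best_cost:
                best, best_cost = (R10, Q10), cost
    return best

sse = lambda r: sum(e * e for e in r)     # least squares
sae = lambda r: sum(abs(e) for e in r)    # absolute errors

temps = [0, 5, 10, 15, 20, 25]
true = [q10_model(T, 1.0, 2.0) for T in temps]  # noise-free "data"
fit_sse = fit(temps, true, sse)
fit_sae = fit(temps, true, sae)
```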
Simple model of inhibition of chain-branching combustion processes
NASA Astrophysics Data System (ADS)
Babushok, Valeri I.; Gubernov, Vladimir V.; Minaev, Sergei S.; Miroshnichenko, Taisia P.
2017-11-01
A simple kinetic model is suggested to describe the inhibition and extinction of flame propagation in systems with chain-branching reactions typical of hydrocarbon combustion. The model combines a generalised chain-branching combustion model with a one-stage reaction describing the thermal mode of flame propagation, plus added inhibition reaction steps. Inhibitor addition suppresses the radical overshoot in the flame and shifts the reaction from the chain-branching mode to a thermal mode of flame propagation. As the inhibitor concentration increases, the chain-branching reaction transitions to a straight-chain (non-branching) reaction. The inhibition part of the model comprises a block of three reactions describing the influence of the inhibitor, and heat losses are incorporated via Newtonian cooling. Flame extinction results from the reduced heat release of the inhibited reaction and the suppression of the radical overshoot, with the reaction rate further decreased by falling temperature and mixture dilution. Results of modelling laminar premixed methane/air flames inhibited by potassium bicarbonate with a detailed gas-phase kinetic model are compared with results obtained using the suggested simple model. The detailed-model calculations demonstrate the following modes of the combustion process: (1) flame propagation with chain-branching reaction (with radical overshoot; inhibitor addition decreases the overshoot towards the equilibrium level); (2) saturation of the chemical influence of the inhibitor; and (3) transition to the thermal mode of flame propagation (non-branching chain reaction). The suggested simple kinetic model qualitatively reproduces these modes of flame propagation under inhibitor addition.
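A crude isothermal caricature (an assumption of this note, not the authors' kinetic model) can show the basic effect: adding a first-order inhibition step to a branching-terminating radical balance lowers the radical level the system attains.

```python
def peak_radical(k_inhib, inhibitor, k_branch=50.0, k_term=5.0,
                 y0=1e-4, dt=1e-4, t_max=2.0):
    """Peak of a scaled radical concentration Y under
    dY/dt = k_branch*Y*(1 - Y) - k_term*Y - k_inhib*I*Y (toy model)."""
    y, peak, t = y0, y0, 0.0
    while t < t_max:
        dy = k_branch * y * (1.0 - y) - k_term * y - k_inhib * inhibitor * y
        y += dt * dy          # forward Euler
        peak = max(peak, y)
        t += dt
    return peak

peak_clean = peak_radical(k_inhib=30.0, inhibitor=0.0)      # no inhibitor
peak_inhibited = peak_radical(k_inhib=30.0, inhibitor=1.0)  # inhibited
```

With the illustrative rates above, the inhibition term acts like an extra termination channel, so the attained radical level drops sharply, a crude analogue of the suppressed overshoot described in the abstract.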
Physical models of collective cell motility: from cell to tissue
NASA Astrophysics Data System (ADS)
Camley, B. A.; Rappel, W.-J.
2017-03-01
In this article, we review physics-based models of collective cell motility. We discuss a range of techniques at different scales, ranging from models that represent cells as simple self-propelled particles to phase field models that can represent a cell’s shape and dynamics in great detail. We also extensively review the ways in which cells within a tissue choose their direction, the statistics of cell motion, and some simple examples of how cell-cell signaling can interact with collective cell motility. This review also covers in more detail selected recent works on collective cell motion of small numbers of cells on micropatterns, in wound healing, and the chemotaxis of clusters of cells.
CADDIS Volume 2. Sources, Stressors and Responses: Metals - Simple Conceptual Model Diagram
Introduction to the metals module, when to list metals as a candidate cause, ways to measure metals, simple and detailed conceptual diagrams for metals, metals module references and literature reviews.
A simple physical model for forest fire spread
E. Koo; P. Pagni; J. Woycheese; S. Stephens; D. Weise; J. Huff
2005-01-01
Based on energy conservation and detailed heat transfer mechanisms, a simple physical model for fire spread is presented for the limit of one-dimensional steady-state contiguous spread of a line fire in a thermally-thin uniform porous fuel bed. The solution for the fire spread rate is found as an eigenvalue from this model with appropriate boundary conditions through a...
A simple stochastic weather generator for ecological modeling
A.G. Birt; M.R. Valdez-Vivas; R.M. Feldman; C.W. Lafon; D. Cairns; R.N. Coulson; M. Tchakerian; W. Xi; Jim Guldin
2010-01-01
Stochastic weather generators are useful tools for exploring the relationship between organisms and their environment. This paper describes a simple weather generator that can be used in ecological modeling projects. We provide a detailed description of methodology, and links to full C++ source code (http://weathergen.sourceforge.net) required to implement or modify...
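A common core of simple weather generators like the one described is a first-order, two-state Markov chain for daily precipitation occurrence, with wet-day amounts drawn from a skewed distribution. The sketch below is a minimal Python analogue; the transition probabilities and mean wet-day amount are illustrative assumptions, not the generator's calibrated values.

```python
import random

def generate_precip(n_days, p_wet_given_dry=0.3, p_wet_given_wet=0.6,
                    mean_amount=5.0, seed=42):
    """Daily precipitation series (mm) from a two-state Markov chain."""
    rng = random.Random(seed)
    wet = False
    series = []
    for _ in range(n_days):
        # Persistence: wet days are more likely to follow wet days
        p = p_wet_given_wet if wet else p_wet_given_dry
        wet = rng.random() < p
        # Wet-day amounts drawn from an exponential distribution (mm)
        series.append(rng.expovariate(1.0 / mean_amount) if wet else 0.0)
    return series

rain = generate_precip(365)
```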
CADDIS Volume 2. Sources, Stressors and Responses: Metals - Detailed Conceptual Model Diagram
Introduction to the metals module, when to list metals as a candidate cause, ways to measure metals, simple and detailed conceptual diagrams for metals, metals module references and literature reviews.
Anthropogenic heat flux: advisable spatial resolutions when input data are scarce
NASA Astrophysics Data System (ADS)
Gabey, A. M.; Grimmond, C. S. B.; Capel-Timms, I.
2018-02-01
Anthropogenic heat flux (QF) may be significant in cities, especially under low solar irradiance and at night. It is of interest to many practitioners including meteorologists, city planners and climatologists. QF estimates at fine temporal and spatial resolution can be derived from models that use varying amounts of empirical data. This study compares simple and detailed models in a European megacity (London) at 500 m spatial resolution. The simple model (LQF) uses spatially resolved population data and national energy statistics. The detailed model (GQF) additionally uses local energy, road network and workday population data. The Fractions Skill Score (FSS) and bias are used to rate the skill with which the simple model reproduces the spatial patterns and magnitudes of QF, and its sub-components, from the detailed model. LQF skill was consistently good across 90% of the city, away from the centre and major roads. The remaining 10% contained elevated emissions and "hot spots" representing 30-40% of the total city-wide energy. This structure was lost because it requires workday population, spatially resolved building energy consumption and/or road network data. Daily total building and traffic energy consumption estimates from national data were within ± 40% of local values. Progressively coarser spatial resolutions to 5 km improved skill for total QF, but important features (hot spots, transport network) were lost at all resolutions when residential population controlled spatial variations. The results demonstrate that simple QF models should be applied with conservative spatial resolution in cities that, like London, exhibit time-varying energy use patterns.
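The simple (LQF-style) approach above, redistributing national energy statistics over gridded population, can be sketched in a few lines. All numbers below are illustrative assumptions, not London values.

```python
def qf_wm2(population, annual_energy_per_capita_kwh, cell_area_m2=500 ** 2):
    """Mean anthropogenic heat flux (W m^-2) for one grid cell."""
    # Convert annual per-capita energy (kWh) to mean power (W): 8760 h/yr
    mean_power_w = annual_energy_per_capita_kwh * 1000.0 / 8760.0
    # Distribute total cell power over the 500 m x 500 m cell area
    return population * mean_power_w / cell_area_m2

# Hypothetical cell: 2000 residents, 15 000 kWh per capita per year
cell_flux = qf_wm2(population=2000, annual_energy_per_capita_kwh=15000)
```

Because this scales only with residential population, it cannot reproduce the workday hot spots and road-network structure that the detailed model captures, which is the limitation the study quantifies.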
Simple animal models for amyotrophic lateral sclerosis drug discovery.
Patten, Shunmoogum A; Parker, J Alex; Wen, Xiao-Yan; Drapeau, Pierre
2016-08-01
Simple animal models have enabled great progress in uncovering the disease mechanisms of amyotrophic lateral sclerosis (ALS) and are helping in the selection of therapeutic compounds through chemical genetic approaches. Within this article, the authors provide a concise overview of simple model organisms, C. elegans, Drosophila and zebrafish, which have been employed to study ALS and discuss their value to ALS drug discovery. In particular, the authors focus on innovative chemical screens that have established simple organisms as important models for ALS drug discovery. There are several advantages of using simple animal model organisms to accelerate drug discovery for ALS. It is the authors' particular belief that the amenability of simple animal models to various genetic manipulations, the availability of a wide range of transgenic strains for labelling motoneurons and other cell types, combined with live imaging and chemical screens should allow for new detailed studies elucidating early pathological processes in ALS and subsequent drug and target discovery.
Use of paired simple and complex models to reduce predictive bias and quantify uncertainty
NASA Astrophysics Data System (ADS)
Doherty, John; Christensen, Steen
2011-12-01
Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, this reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often facilitate good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, yielding insights into the costs of model simplification and into how some of these costs may be reduced.
It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.
Directions for computational mechanics in automotive crashworthiness
NASA Technical Reports Server (NTRS)
Bennett, James A.; Khalil, T. B.
1993-01-01
The automotive industry has used computational methods for crashworthiness since the early 1970's. These methods have ranged from simple lumped parameter models to full finite element models. The emergence of the full finite element models in the mid 1980's has significantly altered the research direction. However, there remains a need for both simple, rapid modeling methods and complex detailed methods. Some directions for continuing research are discussed.
Directions for computational mechanics in automotive crashworthiness
NASA Astrophysics Data System (ADS)
Bennett, James A.; Khalil, T. B.
1993-08-01
The automotive industry has used computational methods for crashworthiness since the early 1970's. These methods have ranged from simple lumped parameter models to full finite element models. The emergence of the full finite element models in the mid 1980's has significantly altered the research direction. However, there remains a need for both simple, rapid modeling methods and complex detailed methods. Some directions for continuing research are discussed.
CADDIS Volume 2. Sources, Stressors and Responses: Dissolved Oxygen - Simple Conceptual Diagram
Introduction to the dissolved oxygen module, when to list dissolved oxygen as a candidate cause, ways to measure dissolved oxygen, simple and detailed conceptual model diagrams for dissolved oxygen, references for the dissolved oxygen module.
CADDIS Volume 2. Sources, Stressors and Responses: Flow Alteration - Simple Conceptual Diagram
Introduction to the flow alteration module, when to list flow alteration as a candidate cause, ways to measure flow alteration, simple and detailed conceptual model diagrams for flow alteration, flow alteration module references and literature reviews.
CADDIS Volume 2. Sources, Stressors and Responses: Dissolved Oxygen - Detailed Conceptual Diagram
Introduction to the dissolved oxygen module, when to list dissolved oxygen as a candidate cause, ways to measure dissolved oxygen, simple and detailed conceptual model diagrams for dissolved oxygen, references for the dissolved oxygen module.
CADDIS Volume 2. Sources, Stressors and Responses: Flow Alteration - Detailed Conceptual Diagram
Introduction to the flow alteration module, when to list flow alteration as a candidate cause, ways to measure flow alteration, simple and detailed conceptual model diagrams for flow alteration, flow alteration module references and literature reviews.
Complex versus simple models: ion-channel cardiac toxicity prediction.
Mistry, Hitesh B
2018-01-01
There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was assigned a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set, for each classification scheme, was assessed via leave-one-out cross-validation. Overall, the Bnet model performed as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the third. These results highlight the importance of benchmarking complex versus simple models and encourage the development of simple models.
Comparisons of CTH simulations with measured wave profiles for simple flyer plate experiments
Thomas, S. A.; Veeser, L. R.; Turley, W. D.; ...
2016-06-13
We conducted detailed 2-dimensional hydrodynamics calculations to assess the quality of simulations commonly used to design and analyze simple shock compression experiments. Such simple shock experiments also contain data where dynamic properties of materials are integrated together, so we wished to assess how well the chosen hydrodynamic code captures both the simple parts of the experiments and the integral parts. We began with very simple shock experiments, in which we examined the effects of the equation of state and the compressional and tensile strength models. We then increased complexity to include spallation in copper and iron and a solid-solid phase transformation in iron, to assess the quality of the damage and phase-transformation simulations. For experiments with a window, the responses of both the sample and the window are integrated together, providing a good test of the material models. While the CTH physics models are not perfect and do not reproduce all experimental details well, we find the models useful; the simulations are adequate for understanding much of the dynamic process and for planning experiments. However, higher complexity in the simulations, such as adding in spall, led to greater differences between simulation and experiment. Lastly, this comparison of simulation to experiment may help guide future development of hydrodynamics codes so that they better capture the underlying physics.
DOT National Transportation Integrated Search
1976-04-30
A simple and a more detailed mathematical model for the simulation of train collisions are presented. The study presents considerable insight as to the causes and consequences of train motions on impact. Comparison of model predictions with two full ...
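A lumped-parameter collision model of the kind referred to above can be sketched as two rigid car masses coupled by a crush spring that acts only in compression. The masses, stiffness, and closing speed below are illustrative assumptions, not values from the study.

```python
def simulate_impact(m1=80e3, m2=80e3, v1=10.0, v2=0.0, k=5e6,
                    dt=1e-4, t_max=5.0):
    """Semi-implicit Euler simulation of a two-car impact.
    Returns final velocities (m/s) and the maximum crush (m)."""
    x1, x2 = 0.0, 0.0                # cars start just touching
    max_crush = 0.0
    for _ in range(int(t_max / dt)):
        crush = x1 - x2                        # crush-zone compression (m)
        f = k * crush if crush > 0.0 else 0.0  # spring only resists compression
        v1 += dt * (-f / m1)                   # striking car decelerates
        v2 += dt * (f / m2)                    # struck car accelerates
        x1 += dt * v1
        x2 += dt * v2
        max_crush = max(max_crush, crush)
    return v1, v2, max_crush

v1_final, v2_final, crush_max = simulate_impact()
```

For equal masses and a purely elastic crush spring the velocities are nearly exchanged; replacing the spring with a hysteretic force law is the usual next step toward modeling energy absorbed in crush.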
Ahern, Thomas P; Sprague, Brian L; Bissell, Michael C S; Miglioretti, Diana L; Buist, Diana S M; Braithwaite, Dejana; Kerlikowske, Karla
2017-06-01
Background: The utility of incorporating detailed family history into breast cancer risk prediction hinges on its independent contribution to breast cancer risk. We evaluated associations between detailed family history and breast cancer risk while accounting for breast density. Methods: We followed 222,019 participants ages 35 to 74 in the Breast Cancer Surveillance Consortium, of whom 2,456 developed invasive breast cancer. We calculated standardized breast cancer risks within joint strata of breast density and simple (first-degree female relative) or detailed (first-degree, second-degree, or first- and second-degree female relative) breast cancer family history. We fit log-binomial models to estimate age-specific breast cancer associations for simple and detailed family history, accounting for breast density. Results: Simple first-degree family history was associated with increased breast cancer risk compared with no first-degree history [risk ratio (RR), 1.5; 95% confidence interval (CI), 1.0-2.1 at age 40; RR, 1.5; 95% CI, 1.3-1.7 at age 50; RR, 1.4; 95% CI, 1.2-1.6 at age 60; RR, 1.3; 95% CI, 1.1-1.5 at age 70]. Breast cancer associations with detailed family history were strongest for women with first- and second-degree family history compared with no history (RR, 1.9; 95% CI, 1.1-3.2 at age 40); this association weakened in higher age groups (RR, 1.2; 95% CI, 0.88-1.5 at age 70). Associations did not change substantially when adjusted for breast density. Conclusions: Even with adjustment for breast density, a history of breast cancer in both first- and second-degree relatives is more strongly associated with breast cancer than simple first-degree family history. Impact: Future efforts to improve breast cancer risk prediction models should evaluate detailed family history as a risk factor. Cancer Epidemiol Biomarkers Prev; 26(6); 938-44. ©2017 AACR.
Architecture with GIDEON, A Program for Design in Structural DNA Nanotechnology
Birac, Jeffrey J.; Sherman, William B.; Kopatsch, Jens; Constantinou, Pamela E.; Seeman, Nadrian C.
2012-01-01
We present geometry-based design strategies for DNA nanostructures. The strategies have been implemented in GIDEON, a Graphical Integrated Development Environment for OligoNucleotides. GIDEON has a highly flexible graphical user interface that facilitates the development of simple yet precise models and the evaluation of strains therein. Models are built from undistorted B-DNA double-helical domains. Simple point-and-click manipulations of the model allow the minimization of strain in the phosphate-backbone linkages between these domains and the identification of any steric clashes that might occur as a result. Detailed analysis of 3D triangles yields clear predictions of the strains associated with triangles of different sizes. We have carried out experiments confirming that 3D triangles form well only when their geometrical strain is less than a 4% deviation from the estimated relaxed structure. Thus geometry-based techniques alone, without energetic considerations, can be used to explain general trends in DNA structure formation. We have used GIDEON to build detailed models of double-crossover and triple-crossover molecules, evaluating the non-planarity associated with base tilt and junction misalignments. Computer modeling using a graphical user interface overcomes the limited precision of physical models for larger systems, and the limited interaction rate associated with earlier, command-line-driven software. PMID:16630733
Wang, Yi-Shan; Potts, Jonathan R
2017-03-07
Recent advances in animal tracking have allowed us to uncover the drivers of movement in unprecedented detail. This has enabled modellers to construct ever more realistic models of animal movement, which aid in uncovering detailed patterns of space use in animal populations. Partial differential equations (PDEs) provide a popular tool for mathematically analysing such models. However, their construction often relies on simplifying assumptions which may greatly affect the model outcomes. Here, we analyse the effect of various PDE approximations on the analysis of some simple movement models, including a biased random walk, central-place foraging processes and movement in heterogeneous landscapes. Perhaps the most commonly-used PDE method dates back to a seminal paper of Patlak from 1953. However, our results show that this can be a very poor approximation in even quite simple models. On the other hand, more recent methods, based on transport equation formalisms, can provide more accurate results, as long as the kernel describing the animal's movement is sufficiently smooth. When the movement kernel is not smooth, we show that both the older and newer methods can lead to quantitatively misleading results. Our detailed analysis will aid future researchers in the appropriate choice of PDE approximation for analysing models of animal movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Numerical and Experimental Study of Damage Growth in a Composite Laminate
NASA Technical Reports Server (NTRS)
McElroy, Mark; Ratcliffe, James; Czabaj, Michael; Wang, John; Yuan, Fuh-Gwo
2014-01-01
The present study has three goals: (1) perform an experiment where a simple laminate damage process can be characterized in high detail; (2) evaluate the performance of existing commercially available laminate damage simulation tools by modeling the experiment; (3) observe and understand the underlying physics of damage in a composite honeycomb sandwich structure subjected to low-velocity impact. A quasi-static indentation experiment has been devised to provide detailed information about a simple mixed-mode damage growth process. The test specimens consist of an aluminum honeycomb core with a cross-ply laminate facesheet supported on a stiff uniform surface. When the sample is subjected to an indentation load, the honeycomb core provides support to the facesheet, resulting in a gradual and stable damage growth process in the skin. This enables real-time observation as a matrix crack forms, propagates through a ply, and then causes a delamination. Finite element analyses were conducted in ABAQUS/Explicit 6.13 that used continuum and cohesive modeling techniques to simulate facesheet damage and a geometric and material nonlinear model to simulate core crushing. The high fidelity of the experimental data allows a detailed investigation and discussion of the accuracy of each numerical modeling approach.
Probability, statistics, and computational science.
Beerenwinkel, Niko; Siebourg, Juliane
2012-01-01
In this chapter, we review basic concepts from probability theory and computational statistics that are fundamental to evolutionary genomics. We provide a very basic introduction to statistical modeling and discuss general principles, including maximum likelihood and Bayesian inference. Markov chains, hidden Markov models, and Bayesian network models are introduced in more detail as they occur frequently and in many variations in genomics applications. In particular, we discuss efficient inference algorithms and methods for learning these models from partially observed data. Several simple examples are given throughout the text, some of which point to models that are discussed in more detail in subsequent chapters.
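A small concrete instance of the Markov-chain machinery mentioned above is computing the stationary distribution of a two-state chain by iterating the transition matrix. The transition probabilities below are illustrative assumptions, e.g. a toy two-state segmentation of a sequence.

```python
# P[i][j]: probability of moving from state i to state j
P = [[0.9, 0.1],
     [0.2, 0.8]]

def step(dist, P):
    """One step of the chain: multiply the distribution by P."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0]          # start in state 0 with certainty
for _ in range(200):       # iterate until (numerically) stationary
    dist = step(dist, P)
```

For this chain the stationary distribution can also be read off in closed form as (0.2, 0.1) normalized, i.e. (2/3, 1/3), so the iteration serves as a check on the formula.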
Quantitative Modeling of Earth Surface Processes
NASA Astrophysics Data System (ADS)
Pelletier, Jon D.
This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.
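One of the core equations such a textbook covers, the 1-D hillslope diffusion equation dz/dt = D d2z/dx2, can be solved with a few lines of forward-Euler code. The scarp profile and diffusivity below are illustrative assumptions, not an exercise from the book.

```python
def diffuse_hillslope(z, D=0.01, dx=1.0, dt=10.0, steps=500):
    """Explicit finite-difference solution of dz/dt = D d2z/dx2.
    Endpoints are held fixed. Stability needs D*dt/dx**2 <= 0.5 (here 0.1)."""
    z = list(z)
    for _ in range(steps):
        curv = [0.0] * len(z)
        for i in range(1, len(z) - 1):
            curv[i] = (z[i - 1] - 2.0 * z[i] + z[i + 1]) / dx ** 2
        z = [zi + dt * D * ci for zi, ci in zip(z, curv)]
    return z

# A 1 m fault scarp: lower flat, step, upper flat (elevations in m)
profile = [0.0] * 10 + [1.0] * 10
smoothed = diffuse_hillslope(profile)
```

The sharp step relaxes toward a smooth ramp, the classic degraded-scarp form used to date fault offsets.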
Selective and directional actuation of elastomer films using chained magnetic nanoparticles
NASA Astrophysics Data System (ADS)
Mishra, Sumeet R.; Dickey, Michael D.; Velev, Orlin D.; Tracy, Joseph B.
2016-01-01
We report selective and directional actuation of elastomer films utilizing magnetic anisotropy introduced by chains of Fe3O4 magnetic nanoparticles (MNPs). Under uniform magnetic fields or field gradients, dipolar interactions between the MNPs favor magnetization along the chain direction and cause selective lifting. This mechanism is described using a simple model. Electronic supplementary information (ESI) available: Two videos for actuation while rotating the sample, experimental details of nanoparticle synthesis, polymer composite preparation, and alignment and bending studies, details of the theoretical model of actuation, and supplemental figures for understanding the behavior of rotating samples and results from modelling. See DOI: 10.1039/c5nr07410j
NASA Astrophysics Data System (ADS)
Wong, Tony E.; Bakker, Alexander M. R.; Ruckert, Kelsey; Applegate, Patrick; Slangen, Aimée B. A.; Keller, Klaus
2017-07-01
Simple models can play pivotal roles in the quantification and framing of uncertainties surrounding climate change and sea-level rise. They are computationally efficient, transparent, and easy to reproduce. These qualities also make simple models useful for the characterization of risk. Simple model codes are increasingly distributed as open source, as well as actively shared and guided. Alas, computer codes used in the geosciences can often be hard to access, run, modify (e.g., with regards to assumptions and model components), and review. Here, we describe the simple model framework BRICK (Building blocks for Relevant Ice and Climate Knowledge) v0.2 and its underlying design principles. The paper adds detail to an earlier published model setup and discusses the inclusion of a land water storage component. The framework largely builds on existing models and allows for projections of global mean temperature as well as regional sea levels and coastal flood risk. BRICK is written in R and Fortran. BRICK gives special attention to the model values of transparency, accessibility, and flexibility in order to mitigate the above-mentioned issues while maintaining a high degree of computational efficiency. We demonstrate the flexibility of this framework through simple model intercomparison experiments. Furthermore, we demonstrate that BRICK is suitable for risk assessment applications by using a didactic example in local flood risk management.
Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity
Harbin Li; Steven G. McNulty
2007-01-01
Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL...
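The structure of a simple mass balance for a critical acid load can be sketched as plain arithmetic: base cation inputs (deposition plus weathering) minus net uptake minus the critical leaching of acid neutralizing capacity. The term names and values below are generic illustrative assumptions (units eq ha^-1 yr^-1), not the parameterization analyzed in the paper.

```python
def critical_acid_load(bc_deposition, bc_weathering, bc_uptake,
                       anc_leaching_crit):
    """Simple mass balance: CAL = BC_dep + BC_w - Bc_u - ANC_le(crit)."""
    return bc_deposition + bc_weathering - bc_uptake - anc_leaching_crit

cal = critical_acid_load(bc_deposition=400.0, bc_weathering=500.0,
                         bc_uptake=300.0, anc_leaching_crit=200.0)

# Exceedance: how far actual acid deposition (here assumed 900) is above CAL
exceedance = max(0.0, 900.0 - cal)
```

Because each term carries its own measurement or upscaling error, uncertainty in the computed load is driven by the sum of those term uncertainties, which is exactly what an uncertainty analysis of the SMBE has to quantify.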
Calculation of tip clearance effects in a transonic compressor rotor
NASA Technical Reports Server (NTRS)
Chima, R. V.
1996-01-01
The flow through the tip clearance region of a transonic compressor rotor (NASA rotor 37) was computed and compared to aerodynamic probe and laser anemometer data. Tip clearance effects were modeled both by gridding the clearance gap and by using a simple periodicity model across the ungridded gap. The simple model was run with both the full gap height and with half the gap height to simulate a vena-contracta effect. Comparisons between computed and measured performance maps and downstream profiles were used to validate the models and to assess the effects of gap height on the simple clearance model. Recommendations were made concerning the use of the simple clearance model. Detailed comparisons were made between the gridded clearance gap solution and the laser anemometer data near the tip at two operating points. The computed results agreed fairly well with the data but overpredicted the extent of the casing separation and underpredicted the wake decay rate. The computations were then used to describe the interaction of the tip vortex, the passage shock, and the casing boundary layer.
Comparing fire spread algorithms using equivalence testing and neutral landscape models
Brian R. Miranda; Brian R. Sturtevant; Jian Yang; Eric J. Gustafson
2009-01-01
We demonstrate a method to evaluate the degree to which a meta-model approximates spatial disturbance processes represented by a more detailed model across a range of landscape conditions, using neutral landscapes and equivalence testing. We illustrate this approach by comparing burn patterns produced by a relatively simple fire spread algorithm with those generated by...
A Simple Forecasting Model Linking Macroeconomic Policy to Industrial Employment Demand.
ERIC Educational Resources Information Center
Malley, James R.; Hady, Thomas F.
A study further detailed a model linking monetary and fiscal policy to industrial employment in metropolitan and nonmetropolitan areas of four United States regions. The model was used to simulate the impacts on area and regional employment of three events in the economy: changing real gross national product (GNP) via monetary policy, holding the…
A biochemically semi-detailed model of auxin-mediated vein formation in plant leaves.
Roussel, Marc R; Slingerland, Martin J
2012-09-01
We present here a model intended to capture the biochemistry of vein formation in plant leaves. The model consists of three modules. Two of these modules, those describing auxin signaling and transport in plant cells, are biochemically detailed. We couple these modules to a simple model for PIN (auxin efflux carrier) protein localization based on an extracellular auxin sensor. We study the single-cell responses of this combined model in order to verify proper functioning of the modeled biochemical network. We then assemble a multicellular model from the single-cell building blocks. We find that the model can, under some conditions, generate files of polarized cells, but not true veins. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
CADDIS Volume 2. Sources, Stressors and Responses: Flow Alteration
Introduction to the flow alteration module, when to list flow alteration as a candidate cause, ways to measure flow alteration, simple and detailed conceptual model diagrams for flow alteration, flow alteration module references and literature reviews.
CADDIS Volume 2. Sources, Stressors and Responses: Dissolved Oxygen
Introduction to the dissolved oxygen module, when to list dissolved oxygen as a candidate cause, ways to measure dissolved oxygen, simple and detailed conceptual model diagrams for dissolved oxygen, references for the dissolved oxygen module.
Design Considerations for Heavily-Doped Cryogenic Schottky Diode Varactor Multipliers
NASA Technical Reports Server (NTRS)
Schlecht, E.; Maiwald, F.; Chattopadhyay, G.; Martin, S.; Mehdi, I.
2001-01-01
Diode modeling for Schottky varactor frequency multipliers above 500 GHz is presented with special emphasis placed on simple models and fitted equations for rapid circuit design. Temperature- and doping-dependent mobility, resistivity, and avalanche current multiplication and breakdown are presented. Next is a discussion of static junction current, including the effects of tunneling as well as thermionic emission. These results have been compared to detailed measurements made down to 80 K on diodes fabricated at JPL, followed by a discussion of the effect on multiplier efficiency. Finally, a simple model of current saturation in the undepleted active layer suitable for inclusion in harmonic balance simulators is derived.
Millimeter wave satellite communication studies. Results of the 1981 propagation modeling effort
NASA Technical Reports Server (NTRS)
Stutzman, W. L.; Tsolakis, A.; Dishman, W. K.
1982-01-01
Theoretical modeling associated with rain effects on millimeter wave propagation is detailed. Three areas of work are discussed. A simple model for prediction of rain attenuation is developed and evaluated. A method for computing scattering from single rain drops is presented. A complete multiple scattering model is described which permits accurate calculation of the effects on dual polarized signals passing through rain.
NASA Technical Reports Server (NTRS)
Yoshikawa, K. K.
1978-01-01
The semiclassical transition probability was incorporated in the simulation for energy exchange between rotational and translational energy. The results provide details on the fundamental mechanisms of gas kinetics where analytical methods were impractical. The validity of the local Maxwellian assumption and relaxation time, rotational-translational energy transition, and a velocity analysis of the inelastic collision were discussed in detail.
Detailed modeling analysis for soot formation and radiation in microgravity gas jet diffusion flames
NASA Technical Reports Server (NTRS)
Ku, Jerry C.; Tong, Li; Greenberg, Paul S.
1995-01-01
Radiation heat transfer in combustion systems has been receiving increasing interest. In the case of hydrocarbon fuels, a significant portion of the radiation comes from soot particles, justifying the need for a detailed soot formation model and radiation transfer calculations. For laminar gas jet diffusion flames, results from this project (4/1/91 to 8/22/95) and another NASA study show that flame shape, soot concentration, and radiation heat fluxes are substantially different under microgravity conditions. Our emphasis is on including detailed soot transport models and a detailed solution for radiation heat transfer, and on coupling them with the flame structure calculations. In this paper, we will discuss the following three specific areas: (1) Comparing two existing soot formation models and identifying possible improvements; (2) A simple yet reasonably accurate approach to calculating total radiative properties and/or fluxes over the spectral range; and (3) Investigating the convergence of iterations between the flame structure solver and the radiation heat transfer solver.
Compact divided-pupil line-scanning confocal microscope for investigation of human tissues
NASA Astrophysics Data System (ADS)
Glazowski, Christopher; Peterson, Gary; Rajadhyaksha, Milind
2013-03-01
Divided-pupil line-scanning confocal microscopy (DPLSCM) can provide a simple and low-cost approach for imaging of human tissues with pathology-like nuclear and cellular detail. Using results from a multidimensional numerical model of DPLSCM, we found optimal pupil configurations for improved axial sectioning, as well as control of speckle noise in the case of reflectance imaging. The modeling results guided the design and construction of a simple (10 component) microscope, packaged within the footprint of an iPhone, and capable of cellular resolution. We present the optical design with experimental video-images of in-vivo human tissues.
A detailed comparison of optimality and simplicity in perceptual decision-making
Shen, Shan; Ma, Wei Ji
2017-01-01
Two prominent ideas in the study of decision-making have been that organisms behave near-optimally, and that they use simple heuristic rules. These principles might be operating in different types of tasks, but this possibility cannot be fully investigated without a direct, rigorous comparison within a single task. Such a comparison was lacking in most previous studies, because a) the optimal decision rule was simple; b) no simple suboptimal rules were considered; c) it was unclear what was optimal, or d) a simple rule could closely approximate the optimal rule. Here, we used a perceptual decision-making task in which the optimal decision rule is well-defined and complex, and makes qualitatively distinct predictions from many simple suboptimal rules. We find that all simple rules tested fail to describe human behavior, that the optimal rule accounts well for the data, and that several complex suboptimal rules are indistinguishable from the optimal one. Moreover, we found evidence that the optimal model is close to the true model: first, the better the trial-to-trial predictions of a suboptimal model agree with those of the optimal model, the better that suboptimal model fits; second, our estimate of the Kullback-Leibler divergence between the optimal model and the true model is not significantly different from zero. When observers receive no feedback, the optimal model still describes behavior best, suggesting that sensory uncertainty is implicitly represented and taken into account. Beyond the task and models studied here, our results have implications for best practices of model comparison. PMID:27177259
Ponce, Carlos; Bravo, Carolina; Alonso, Juan Carlos
2014-01-01
Studies evaluating agri-environmental schemes (AES) usually focus on responses of single species or functional groups. Analyses are generally based on simple habitat measurements but ignore food availability and other important factors. This can limit our understanding of the ultimate causes determining the reactions of birds to AES. We investigated these issues in detail and throughout the main seasons of a bird's annual cycle (mating, postfledging and wintering) in a dry cereal farmland in a Special Protection Area for farmland birds in central Spain. First, we modeled four bird response parameters (abundance, species richness, diversity and “Species of European Conservation Concern” [SPEC] score), using detailed food availability and vegetation structure measurements (food models). Second, we fitted new models, built using only substrate composition variables (habitat models). Whereas habitat models revealed that fields both included in and excluded from the AES benefited birds, food models went a step further and included seed and arthropod biomass as important predictors in winter and during the postfledging season, respectively. The validation process showed that food models were on average 13% better (up to 20% for some variables) at predicting bird responses. However, the cost of obtaining data for food models was five times higher than for habitat models. This novel approach highlighted the importance of food availability-related causal processes involved in bird responses to AES, which remained undetected when using conventional substrate composition assessment models. Despite their higher costs, measurements of food availability add important details to interpret the reactions of the bird community to AES interventions and thus facilitate evaluating the real efficiency of AES programs. PMID:25165523
ERIC Educational Resources Information Center
Brady, Timothy F.; Tenenbaum, Joshua B.
2013-01-01
When remembering a real-world scene, people encode both detailed information about specific objects and higher order information like the overall gist of the scene. However, formal models of change detection, like those used to estimate visual working memory capacity, assume observers encode only a simple memory representation that includes no…
A Simple Model of Global Aerosol Indirect Effects
NASA Technical Reports Server (NTRS)
Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, Kirsty; Carslaw, Kenneth; Pierce, Jeffrey; Bauer, Susanne; Adams, Peter
2013-01-01
Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth's energy balance are from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models, but lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically based model expresses the aerosol indirect effect (AIE) using analytic representations of cloud and aerosol distributions and processes. Although the simple model is able to produce estimates of AIEs that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates by the simple model are sensitive to preindustrial cloud condensation nuclei concentration, preindustrial accumulation mode radius, width of the accumulation mode, size of primary particles, cloud thickness, primary and secondary anthropogenic emissions, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Estimates of present-day AIEs as low as -5 W/sq m and as high as -0.3 W/sq m are obtained for plausible sets of parameter values. Estimates are surprisingly linear in emissions. The estimates depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models, which adds to understanding of the dependence of AIE uncertainty on uncertainty in parameter values.
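The cloud-albedo sensitivity at the heart of such simple AIE models is often written with the textbook Twomey approximation. The sketch below uses that standard relation with illustrative parameter values; the droplet concentrations, `solar`, and `cloud_fraction` are assumptions for the example, not the paper's calibrated inputs:

```python
import math

def albedo_change(albedo, n_pd, n_pi):
    """Twomey approximation: d(albedo) ~ albedo*(1 - albedo)/3 * ln(N_pd / N_pi),
    where N_pd and N_pi are present-day and preindustrial droplet concentrations."""
    return albedo * (1.0 - albedo) / 3.0 * math.log(n_pd / n_pi)

def forcing(albedo=0.5, n_pi=50.0, n_pd=100.0, solar=340.0, cloud_fraction=0.3):
    """Global-mean shortwave forcing (W/m^2) from the cloud-albedo change.
    Brighter clouds reflect more sunlight, so the forcing is negative."""
    return -solar * cloud_fraction * albedo_change(albedo, n_pd, n_pi)
```

Note that doubling droplet number from a low preindustrial baseline already yields a forcing of a few W/m^2 in this crude single-layer picture, which is why the simple-model estimates above are so sensitive to the assumed preindustrial aerosol state.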
Modeling filtration and fouling with a microstructured membrane filter
NASA Astrophysics Data System (ADS)
Cummings, Linda; Sanaei, Pejman
2017-11-01
Membrane filters find widespread use in diverse applications such as A/C systems and water purification. While the details of the filtration process may vary significantly, the broad challenge of efficient filtration is the same: to achieve finely-controlled separation at low power consumption. The obvious resolution to the challenge would appear simple: use the largest pore size consistent with the separation requirement. However, the membrane characteristics (and hence the filter performance) are far from constant over its lifetime: the particles removed from the feed are deposited within and on the membrane filter, fouling it and degrading the performance over time. The processes by which this occurs are complex, and depend on several factors, including: the internal structure of the membrane and the type of particles in the feed. We present a model for fouling of a simple microstructured membrane, and investigate how the details of the microstructure affect the filtration efficiency. Our idealized membrane consists of bifurcating pores, arranged in a layered structure, so that the number (and size) of pores changes in the depth of the membrane. In particular, we address how the details of the membrane microstructure affect the filter lifetime, and the total throughput. NSF DMS 1615719.
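The layered, bifurcating pore geometry described above lends itself to a back-of-the-envelope hydraulic-resistance estimate: Hagen-Poiseuille resistance for each pore, pores in parallel within a layer, layers in series. The sketch below is a minimal illustration under those assumptions; the radii, lengths, and pore counts are invented for the example, not taken from the authors' model:

```python
import math

def layer_resistance(radius, length, n_pores, mu=1e-3):
    """Hagen-Poiseuille resistance of one layer: n_pores identical
    cylindrical pores in parallel (mu: fluid viscosity, Pa*s)."""
    single = 8.0 * mu * length / (math.pi * radius ** 4)
    return single / n_pores

def membrane_resistance(radii, lengths, counts):
    """Layers in series: total resistance is the sum of layer resistances."""
    return sum(layer_resistance(r, l, n) for r, l, n in zip(radii, lengths, counts))

# Bifurcating structure: each layer doubles the pore count and shrinks the radius.
radii = [4e-6, 3e-6, 2e-6]     # pore radii per layer (m), illustrative
lengths = [10e-6] * 3          # layer depths (m), illustrative
counts = [1, 2, 4]             # pores per layer in one unit cell
R_total = membrane_resistance(radii, lengths, counts)
```

The r**-4 dependence makes the fouling sensitivity vivid: halving every pore radius (deposited particles narrowing the pores) multiplies the total resistance by 16.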
A simple Lagrangian forecast system with aviation forecast potential
NASA Technical Reports Server (NTRS)
Petersen, R. A.; Homan, J. H.
1983-01-01
A trajectory forecast procedure is developed which uses geopotential tendency fields obtained from a simple, multiple layer, potential vorticity conservative isentropic model. This model can objectively account for short-term advective changes in the mass field when combined with fine-scale initial analyses. This procedure for producing short-term, upper-tropospheric trajectory forecasts employs a combination of a detailed objective analysis technique, an efficient mass advection model, and a diagnostically proven trajectory algorithm, none of which require extensive computer resources. Results of initial tests are presented, which indicate an exceptionally good agreement for trajectory paths entering the jet stream and passing through an intensifying trough. It is concluded that this technique not only has potential for aiding in route determination, fuel use estimation, and clear air turbulence detection, but also provides an example of the types of short range forecasting procedures which can be applied at local forecast centers using simple algorithms and a minimum of computer resources.
NASA Technical Reports Server (NTRS)
Sayood, K.; Chen, Y. C.; Wang, X.
1992-01-01
During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.
NASA Technical Reports Server (NTRS)
Poole, L. R.; Huckins, E. K., III
1972-01-01
A general theory on mathematical modeling of elastic parachute suspension lines during the unfurling process was developed. Massless-spring modeling of suspension-line elasticity was evaluated in detail. For this simple model, equations which govern the motion were developed and numerically integrated. The results were compared with flight test data. In most regions, agreement was satisfactory. However, poor agreement was obtained during periods of rapid fluctuations in line tension.
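The massless-spring treatment of line elasticity evaluated in the study amounts to a tension law that is linear in stretch and zero when the line is slack. Below is a minimal sketch of that idea for a single payload, with an added damping term and invented parameter values; it is not a reproduction of the paper's unfurling equations or flight-test conditions:

```python
def line_tension(k, natural_length, length):
    """Massless-spring elasticity: tension proportional to stretch;
    a slack line carries no load (it cannot push)."""
    stretch = length - natural_length
    return k * stretch if stretch > 0.0 else 0.0

def drop_on_line(mass=10.0, k=500.0, natural_length=5.0, damping=20.0,
                 g=9.81, dt=1e-4, t_end=5.0):
    """Semi-implicit Euler integration of a payload released at the
    line's natural length; x is the distance below the attachment point."""
    x, v = natural_length, 0.0
    for _ in range(int(t_end / dt)):
        a = g - (line_tension(k, natural_length, x) + damping * v) / mass
        v += a * dt
        x += v * dt
    return x
```

With damping the payload settles at natural_length + m*g/k; without it, the undamped oscillation in line tension persists, echoing the rapid tension fluctuations where the simple model disagreed with flight data.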
Photo-Modeling and Cloud Computing. Applications in the Survey of Late Gothic Architectural Elements
NASA Astrophysics Data System (ADS)
Casu, P.; Pisu, C.
2013-02-01
This work proposes the application of the latest methods of photo-modeling to the study of Gothic architecture in Sardinia. The aim is to consider the versatility and ease of use of such documentation tools in order to study architecture and its ornamental details. The paper illustrates a procedure of integrated survey and restitution, with the purpose of obtaining an accurate 3D model of some Gothic portals. We combined the contact survey and the photographic survey oriented to photo-modelling. The software used is 123D Catch by Autodesk, a free web-based Image Based Modelling (IBM) system that requires a few simple steps to produce a mesh from a set of unoriented photos. We tested the application on four portals, working at different scales of detail: at first the whole portal and then the different architectural elements that compose it. We were able to model all the elements and to quickly extrapolate simple sections, in order to make a comparison between the moldings, highlighting similarities and differences. Working in different sites at different scales of detail allowed us to test the procedure under different conditions of exposure, sunshine, accessibility, degradation of surface, and type of material, and with different equipment and operators, showing whether the final result could be affected by these factors. We tested a procedure, articulated in a few repeatable steps, that can be applied, with the right corrections and adaptations, to similar cases and/or larger or smaller elements.
ERIC Educational Resources Information Center
Atwood, Ronald K.; Atwood, Virginia A.
1997-01-01
Details a study that tests the effectiveness of brief instruction on the causes of night and day and the seasons. Employs simple, inexpensive models. Findings are useful for science teacher educators. Contains 32 references. (DDR)
Modeling an explosion: the devil is in the details
Peter W. Hart; Alan W. Rudie
2011-01-01
The Chemical Safety and Hazards Investigation Board has recently encouraged chemical engineering faculty to address student knowledge about reactive hazards in their curricula. This paper presents a simple approach that may be used to illustrate the importance of these types of safety considerations.
Levels of detail analysis of microwave scattering from human head models for brain stroke detection
2017-01-01
In this paper, we present a microwave scattering analysis from multiple human head models. This study incorporates different levels of detail in the human head models and their effect on the microwave scattering phenomenon. Two levels of detail are taken into account: (i) a simplified ellipse-shaped head model and (ii) an anatomically realistic head model, implemented using 2-D geometry. In addition, the heterogeneous and frequency-dispersive behavior of the brain tissues has also been incorporated in our head models. It is identified during this study that the microwave scattering phenomenon changes significantly once the complexity of the head model is increased by incorporating more details using a magnetic resonance imaging database. It is also found that the microwave scattering results match in both types of head model (i.e., geometrically simple and anatomically realistic) once the measurements are made in the structurally simplified regions. However, the results diverge considerably in the complex areas of the brain due to the arbitrarily shaped interfaces of tissue layers in the anatomically realistic head model. After incorporating various levels of detail, the solution of the subject microwave scattering problem and the measurement of transmitted and backscattered signals were obtained using the finite element method. Mesh convergence analysis was also performed to achieve error-free results with a minimum number of mesh elements and a lower degree of freedom in fast computational time. The results were promising and the E-field values converged for both simple and complex geometrical models. However, the E-field difference between the two types of head model at the same reference point differed considerably in magnitude. At a complex location, a high difference value of 0.04236 V/m was measured, compared to the simple location, where it turned out to be 0.00197 V/m.
This study also provides a comparative analysis of direct and iterative solvers for finding the solution of the subject microwave scattering problem in minimum computational time and memory. It is seen from this study that microwave imaging may effectively be utilized for the detection, localization and differentiation of different types of brain stroke. The simulation results verified that microwave imaging can be efficiently exploited to study the significant contrast between electric field values of normal and abnormal brain tissues for the investigation of brain anomalies. In the end, a specific absorption rate analysis was carried out to compare the effects of microwave signals on the different types of head model using a factor of safety for brain tissues. After careful study of the various inversion methods in practice for microwave head imaging, it is also suggested that the contrast source inversion method may be more suitable and computationally efficient for such problems. PMID:29177115
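A direct-versus-iterative solver comparison of the kind described can be illustrated generically on a small symmetric positive-definite system. The 1-D Laplacian below merely stands in for the head-model FEM matrices and is purely illustrative; the hand-written conjugate gradient is the textbook algorithm, not the solvers used in the study:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook conjugate gradient for a symmetric positive-definite A.
    Iterates until the residual norm drops below tol."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# A small SPD system stands in for a discretized field problem (illustrative only).
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian
b = np.ones(n)

x_direct = np.linalg.solve(A, b)       # direct solver: factorize once, exact to round-off
x_iter = conjugate_gradient(A, b)      # iterative solver: cheap matrix-vector products
```

The trade-off the study measures appears even here: the direct solve pays the factorization cost up front, while the iterative solver's cost and memory scale with the number of matrix-vector products needed to reach the tolerance.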
Soft modes in the perceptron model for jamming.
NASA Astrophysics Data System (ADS)
Franz, Silvio
I will show how a well-known neural network model, the perceptron, provides a simple solvable model of glassy behavior and jamming. The glassy minima of the energy function of this model can be studied in full analytic detail. This allows the identification of two kinds of soft modes: the first associated with the existence of a marginal glass phase and a hierarchical structure of the energy landscape, the second associated with isostaticity and the marginality of jamming. These results highlight the universality of the spectrum of normal modes in disordered systems, and open the way toward a detailed analytical understanding of the vibrational spectrum of low-temperature glasses. This work was supported by a Grant from the Simons Foundation (454941 to Silvio Franz).
Using CONTENT 1.5 to analyze an SIR model for childhood infectious diseases
NASA Astrophysics Data System (ADS)
Su, Rui; He, Daihai
2008-11-01
In this work, we introduce CONTENT 1.5, a standard software package for the analysis of dynamical systems. A simple model for childhood infectious diseases is used as an example. The detailed steps to obtain the bifurcation structures of the system are given. These bifurcation structures can be used to explain the observed dynamical transitions in measles incidence.
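The SIR model underlying such bifurcation analyses can be sketched in a few lines; the transmission and recovery rates below are illustrative, not the values studied in the paper:

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of the classic SIR model.
    s, i, r are susceptible/infected/recovered fractions of the population;
    beta is the transmission rate, gamma the recovery rate."""
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    dr = gamma * i
    return s + ds * dt, i + di * dt, r + dr * dt

def simulate(beta=0.5, gamma=0.1, days=200, dt=0.1):
    """Integrate an outbreak and track the epidemic peak (illustrative parameters)."""
    s, i, r = 0.99, 0.01, 0.0
    peak = i
    for _ in range(int(days / dt)):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
        peak = max(peak, i)
    return s, i, r, peak
```

Continuation packages like CONTENT take equations of exactly this form and trace how the equilibria and periodic orbits change as beta (or a seasonal forcing amplitude) is varied, which is what produces the bifurcation structures the paper uses to explain measles dynamics.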
A Novel Shape Parameterization Approach
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper presents a novel parameterization approach for complex shapes suitable for a multidisciplinary design optimization application. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft objects animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity analysis tools (e.g., nonlinear computational fluid dynamics and detailed finite element modeling). This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, and camber. The results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, performance, and a simple propulsion module.
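The first concept, parameterizing shape perturbations rather than the geometry itself, can be sketched as a baseline grid plus weighted perturbation fields. This is a generic illustration, not the soft-object animation algorithm of the paper; the "wing" grid and mode shapes are invented for the example:

```python
import numpy as np

def deformed_shape(baseline, basis, weights):
    """Shape = baseline + sum_i w_i * basis_i.
    baseline: (n_points, 3) grid; basis: (n_params, n_points, 3) perturbation
    fields; weights: (n_params,) design variables. Because the perturbations
    (not the geometry) are parameterized, weights = 0 recovers the baseline
    exactly, and the sensitivity d(shape)/d(w_i) is simply basis[i]."""
    return baseline + np.tensordot(weights, basis, axes=1)

# Illustrative example: a flat "wing" grid with a twist-like and a camber-like mode.
baseline = np.zeros((4, 3))
baseline[:, 0] = np.linspace(0.0, 1.0, 4)                        # chordwise coordinate
twist = np.zeros((4, 3)); twist[:, 2] = baseline[:, 0]           # linear z ramp
camber = np.zeros((4, 3)); camber[:, 2] = baseline[:, 0] * (1 - baseline[:, 0])
shape = deformed_shape(baseline, np.stack([twist, camber]), np.array([0.1, 0.5]))
```

Because the same weighted-perturbation formula applies to any point cloud, CFD and finite element grids can indeed be deformed "in a similar manner", and the analytic sensitivities fall out for free, which is the payoff the abstract highlights for gradient-based optimization.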
Studies of aerothermal loads generated in regions of shock/shock interaction in hypersonic flow
NASA Technical Reports Server (NTRS)
Holden, Michael S.; Moselle, John R.; Lee, Jinho
1991-01-01
Experimental studies were conducted to examine the aerothermal characteristics of shock/shock/boundary-layer interaction regions generated by single and multiple incident shocks. The experiments were conducted over a Mach number range from 6 to 19 for a range of Reynolds numbers to obtain both laminar and turbulent interaction regions. Detailed heat transfer and pressure measurements were made for a range of interaction types and incident shock strengths over a transverse cylinder, with emphasis on the Type III and Type IV interaction regions. The measurements were compared with the simple Edney, Keyes, and Hains models for a range of interaction configurations and freestream conditions. The complex flowfields and aerothermal loads generated by multiple-shock impingement, while not generating as large peak loads, provide important test cases for code prediction. The detailed heat transfer and pressure measurements provided a good basis for evaluating the accuracy of simple prediction methods and detailed numerical solutions for laminar and transitional regions of shock/shock interaction.
Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2000-01-01
This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in the same manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminate plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.
NASA Astrophysics Data System (ADS)
Strassmann, Kuno M.; Joos, Fortunat
2018-05-01
The Bern Simple Climate Model (BernSCM) is a free open-source re-implementation of a reduced-form carbon cycle-climate model which has been used widely in previous scientific work and IPCC assessments. BernSCM represents the carbon cycle and climate system with a small set of equations for the heat and carbon budget, the parametrization of major nonlinearities, and the substitution of complex component systems with impulse response functions (IRFs). The IRF approach allows cost-efficient yet accurate substitution of detailed parent models of climate system components with near-linear behavior. Illustrative simulations of scenarios from previous multimodel studies show that BernSCM is broadly representative of the range of the climate-carbon cycle response simulated by more complex and detailed models. Model code (in Fortran) was written from scratch with transparency and extensibility in mind, and is provided open source. BernSCM makes scientifically sound carbon cycle-climate modeling available for many applications. Supporting up to decadal time steps with high accuracy, it is suitable for studies with high computational load and for coupling with integrated assessment models (IAMs), for example. Further applications include climate risk assessment in a business, public, or educational context and the estimation of CO2 and climate benefits of emission mitigation options.
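The impulse-response-function substitution described above is typically a sum of exponential modes convolved with the forcing history. The sketch below shows the mechanics with placeholder coefficients; the `A` and `TAU` values are illustrative stand-ins, not BernSCM's calibrated parameters:

```python
import math

# Impulse response as a sum of exponential modes: r(t) = sum_i a_i * exp(-t / tau_i).
# An infinite timescale encodes the fraction that never decays. Coefficients below
# are illustrative placeholders, not BernSCM's calibrated values.
A = [0.2, 0.3, 0.3, 0.2]
TAU = [float('inf'), 300.0, 30.0, 5.0]

def irf(t):
    """Fraction of a unit impulse remaining after time t (years)."""
    return sum(a * (1.0 if math.isinf(tau) else math.exp(-t / tau))
               for a, tau in zip(A, TAU))

def convolve_emissions(emissions, dt=1.0):
    """Perturbation remaining in the reservoir at each step: the discrete
    convolution of the emission history with the impulse response."""
    return [sum(emissions[k] * irf((n - k) * dt) * dt for k in range(n + 1))
            for n in range(len(emissions))]
```

This is the sense in which an IRF is a "cost-efficient yet accurate substitution": the detailed parent model is run once to fit the a_i and tau_i, after which each scenario costs only a convolution instead of a full component-model integration.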
Zhang, Y; Joines, W T; Jirtle, R L; Samulski, T V
1993-08-01
The magnitude of E-field patterns generated by an annular array prototype device has been calculated and measured. Two models were used to describe the radiating sources: a simple linear dipole and a stripline antenna model. The stripline model includes detailed geometry of the actual antennas used in the prototype and an estimate of the antenna current based on microstrip transmission line theory. This more detailed model yields better agreement with the measured field patterns, reducing the rms discrepancy by a factor of about 6 (from approximately 23 to 4%) in the central region of interest where the SEM is within 25% of the maximum. We conclude that accurate modeling of source current distributions is important for determining SEM distributions associated with such heating devices.
Anisotropy of fluctuation dynamics of proteins with an elastic network model.
Atilgan, A R; Durell, S R; Jernigan, R L; Demirel, M C; Keskin, O; Bahar, I
2001-01-01
Fluctuations about the native conformation of proteins have proven to be suitably reproduced with a simple elastic network model, which has shown excellent agreement with a number of different properties for a wide variety of proteins. This scalar model simply investigates the magnitudes of motion of individual residues in the structure. To use the elastic model approach further for developing the details of protein mechanisms, it becomes essential to expand this model to include the added details of the directions of individual residue fluctuations. In this paper a new tool is presented for this purpose and applied to the retinol-binding protein, which indicates enhanced flexibility in the region of entry to the ligand binding site and for the portion of the protein binding to its carrier protein. PMID:11159421
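The directional extension described here corresponds to an anisotropic elastic network: residues within a cutoff are joined by springs, and mean-square fluctuations follow from the pseudo-inverse of the resulting Hessian. A minimal sketch, with an illustrative cutoff, spring constant, and toy helix standing in for a real protein:

```python
import numpy as np

def anm_fluctuations(coords, cutoff=13.0, gamma=1.0):
    """Anisotropic elastic network sketch: springs between residues within
    a cutoff. Returns per-residue mean-square fluctuations (trace of the
    3x3 diagonal blocks of the pseudo-inverse Hessian). Parameters are
    illustrative, not taken from the paper."""
    n = len(coords)
    H = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 <= cutoff ** 2:
                k = -gamma * np.outer(d, d) / r2      # 3x3 super-element
                H[3*i:3*i+3, 3*j:3*j+3] = k
                H[3*j:3*j+3, 3*i:3*i+3] = k
                H[3*i:3*i+3, 3*i:3*i+3] -= k
                H[3*j:3*j+3, 3*j:3*j+3] -= k
    # Loose rcond discards the six rigid-body zero modes.
    Hinv = np.linalg.pinv(H, rcond=1e-10)
    return np.array([np.trace(Hinv[3*i:3*i+3, 3*i:3*i+3]) for i in range(n)])

# Toy helix of 10 "residues"; chain ends should fluctuate more than the core.
chain = np.array([[5 * np.cos(0.6 * i), 5 * np.sin(0.6 * i), 1.5 * i]
                  for i in range(10)], float)
msf = anm_fluctuations(chain)
```

The directions of motion are recoverable from the same pseudo-inverse: the 3x3 diagonal blocks (not just their traces) give each residue's fluctuation covariance.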
In Vitro and In Silico Risk Assessment in Acquired Long QT Syndrome: The Devil Is in the Details.
Lee, William; Windley, Monique J; Vandenberg, Jamie I; Hill, Adam P
2017-01-01
Acquired long QT syndrome, mostly a result of drug block of the Kv11.1 potassium channel in the heart, is characterized by delayed cardiac myocyte repolarization, prolongation of the QT interval on the ECG, syncope, and sudden cardiac death due to the polymorphic ventricular arrhythmia Torsade de Pointes (TdP). In recent years, efforts have been underway through the Comprehensive in vitro Proarrhythmia Assay (CiPA) initiative to develop better tests for this drug-induced arrhythmia, based in part on in silico simulations of pharmacological disruption of repolarization. However, drug binding to Kv11.1 is more complex than a simple binary molecular reaction, meaning that simple steady-state measures of potency are poor surrogates for risk. As a result, there is a plethora of mechanistic detail describing the drug/Kv11.1 interaction (such as drug binding kinetics, state preference, temperature dependence, and trapping) that needs to be considered when developing in silico models for risk prediction. In addition, other factors, such as the multichannel pharmacological profile and the nature of the ventricular cell models used in simulations, also need to be considered in the search for the optimum in silico approach. Here we consider how much mechanistic detail needs to be included for in silico models to predict risk accurately and, further, how much of this detail can be retrieved from protocols that are practical to implement in high-throughput screens as part of the next generation of preclinical in silico drug screening approaches.
Robert R. Ziemer
1979-01-01
For years, the principal objective of evapotranspiration research has been to calculate the loss of water under varying conditions of climate, soil, and vegetation. The early simple empirical methods have generally been replaced by more detailed models which more closely represent the physical and biological processes involved. Monteith's modification of the...
Estimation of surface temperature in remote pollution measurement experiments
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Tiwari, S. N.
1978-01-01
A simple algorithm has been developed for estimating the actual surface temperature by applying corrections to the effective brightness temperature measured by radiometers mounted on remote sensing platforms. Corrections to effective brightness temperature are computed using an accurate radiative transfer model for the 'basic atmosphere' and several modifications of this caused by deviations of the various atmospheric and surface parameters from their base model values. Model calculations are employed to establish simple analytical relations between the deviations of these parameters and the additional temperature corrections required to compensate for them. Effects of simultaneous variation of two parameters are also examined. Use of these analytical relations instead of detailed radiative transfer calculations for routine data analysis results in a severalfold reduction in computation costs.
Towards a Model for Protein Production Rates
NASA Astrophysics Data System (ADS)
Dong, J. J.; Schmittmann, B.; Zia, R. K. P.
2007-07-01
In the process of translation, ribosomes read the genetic code on an mRNA and assemble the corresponding polypeptide chain. The ribosomes perform discrete directed motion which is well modeled by a totally asymmetric simple exclusion process (TASEP) with open boundaries. Using Monte Carlo simulations and a simple mean-field theory, we discuss the effect of one or two "bottlenecks" (i.e., slow codons) on the production rate of the final protein. Confirming and extending previous work by Chou and Lakatos, we find that the location and spacing of the slow codons can affect the production rate quite dramatically. In particular, we observe a novel "edge" effect, i.e., an interaction of a single slow codon with the system boundary. We focus in detail on ribosome density profiles and provide a simple explanation for the length scale which controls the range of these interactions.
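The open-boundary TASEP with a bottleneck can be sketched with a simple random-sequential Monte Carlo: particles enter at rate alpha, hop right at rate 1 (reduced at the slow codon), and exit at rate beta. The rates, lattice size, and bottleneck position below are illustrative, not the paper's values.

```python
import random

def tasep_current(L=100, alpha=0.8, beta=0.8, slow_site=None, p_slow=0.2,
                  steps=200000, seed=1):
    """Monte Carlo for the open-boundary TASEP: ribosomes enter at rate
    alpha, hop right at rate 1 (p_slow at the bottleneck bond), and exit
    at rate beta. Returns the measured exit current (approximate)."""
    rng = random.Random(seed)
    lattice = [0] * L
    exits = 0
    for _ in range(steps):
        i = rng.randrange(L + 1)          # pick a bond (0 = entry, L = exit)
        if i == 0:
            if lattice[0] == 0 and rng.random() < alpha:
                lattice[0] = 1
        elif i == L:
            if lattice[L - 1] == 1 and rng.random() < beta:
                lattice[L - 1] = 0
                exits += 1
        else:
            hop = p_slow if slow_site == i else 1.0
            if lattice[i - 1] == 1 and lattice[i] == 0 and rng.random() < hop:
                lattice[i - 1] = 0
                lattice[i] = 1
    return exits / steps * (L + 1)        # current per unit time (approx)

j_uniform = tasep_current()               # maximal-current phase, J ~ 1/4
j_bottleneck = tasep_current(slow_site=50)
```

A single slow bond in the bulk noticeably suppresses the steady-state current, consistent with the bottleneck effects discussed in the abstract.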
Gastroschisis Simulation Model: Pre-surgical Management Technical Report.
Rosen, Orna; Angert, Robert M
2017-03-22
This technical report describes the creation of a gastroschisis model for a newborn. This is a simple, low-cost task trainer that provides the opportunity for Neonatology providers, including fellows, residents, nurse practitioners, physician assistants, and nurses, to practice the management of a baby with gastroschisis after birth and prior to surgery. Included is a suggested checklist with which the model can be employed. The details can be modified to suit different learning objectives.
A Simple Close Range Photogrammetry Technique to Assess Soil Erosion in the Field
USDA-ARS?s Scientific Manuscript database
Evaluating the performance of a soil erosion prediction model depends on the ability to accurately measure the gain or loss of sediment in an area. Recent development in acquiring detailed surface elevation data (DEM) makes it feasible to assess soil erosion and deposition spatially. Digital photogr...
Ice phase in altocumulus clouds over Leipzig: remote sensing observations and detailed modeling
NASA Astrophysics Data System (ADS)
Simmel, M.; Bühl, J.; Ansmann, A.; Tegen, I.
2015-09-01
The present work combines remote sensing observations and detailed cloud modeling to investigate two altocumulus cloud cases observed over Leipzig, Germany. A suite of remote sensing instruments was able to detect primary ice at rather high temperatures of -6 °C. For comparison, a second mixed phase case at about -25 °C is introduced. To further look into the details of cloud microphysical processes, a simple dynamics model of the Asai-Kasahara (AK) type is combined with detailed spectral microphysics (SPECS) forming the model system AK-SPECS. Vertical velocities are prescribed to force the dynamics, as well as main cloud features, to be close to the observations. Subsequently, sensitivity studies with respect to ice microphysical parameters are carried out with the aim to quantify the most important sensitivities for the cases investigated. For the cases selected, the liquid phase is mainly determined by the model dynamics (location and strength of vertical velocity), whereas the ice phase is much more sensitive to the microphysical parameters (ice nucleating particle (INP) number, ice particle shape). The choice of ice particle shape may induce large uncertainties that are on the same order as those for the temperature-dependent INP number distribution.
Ice phase in altocumulus clouds over Leipzig: remote sensing observations and detailed modelling
NASA Astrophysics Data System (ADS)
Simmel, M.; Bühl, J.; Ansmann, A.; Tegen, I.
2015-01-01
The present work combines remote sensing observations and detailed cloud modeling to investigate two altocumulus cloud cases observed over Leipzig, Germany. A suite of remote sensing instruments was able to detect primary ice at rather warm temperatures of -6 °C. For comparison, a second mixed phase case at about -25 °C is introduced. To further look into the details of cloud microphysical processes, a simple dynamics model of the Asai-Kasahara type is combined with detailed spectral microphysics, forming the model system AK-SPECS. Vertical velocities are prescribed to force the dynamics, as well as main cloud features, to be close to the observations. Subsequently, sensitivity studies with respect to ice microphysical parameters are carried out with the aim to quantify the most important sensitivities for the cases investigated. For the cases selected, the liquid phase is mainly determined by the model dynamics (location and strength of vertical velocity), whereas the ice phase is much more sensitive to the microphysical parameters (ice nuclei (IN) number, ice particle shape). The choice of ice particle shape may induce large uncertainties that are on the same order as those for the temperature-dependent IN number distribution.
Renton, Michael
2011-01-01
Background and aims: Simulations that integrate sub-models of important biological processes can be used to ask questions about optimal management strategies in agricultural and ecological systems. Building sub-models with more detail and aiming for greater accuracy and realism may seem attractive, but is likely to be more expensive and time-consuming and to result in more complicated models that lack transparency. This paper illustrates a general integrated approach for constructing models of agricultural and ecological systems that is based on the principle of starting simple and then directly testing for the need to add additional detail and complexity.
Methodology: The approach is demonstrated using LUSO (Land Use Sequence Optimizer), an agricultural system analysis framework based on simulation and optimization. A simple sensitivity analysis and functional perturbation analysis are used to test to what extent LUSO's crop–weed competition sub-model affects the answers to a number of questions at the scale of the whole farming system regarding optimal land-use sequencing strategies and resulting profitability.
Principal results: The need for accuracy in the crop–weed competition sub-model within LUSO depended to a small extent on the parameter being varied, but more importantly and interestingly on the type of question being addressed with the model. Only a small part of the crop–weed competition model actually affects the answers to these questions.
Conclusions: This study illustrates an example application of the proposed integrated approach for constructing models of agricultural and ecological systems based on testing whether complexity needs to be added to address particular questions of interest. We conclude that this example clearly demonstrates the potential value of the general approach. Advantages of this approach include minimizing costs and resources required for model construction, keeping models transparent and easy to analyse, and ensuring the model is well suited to address the question of interest. PMID:22476477
Crises and Collective Socio-Economic Phenomena: Simple Models and Challenges
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe
2013-05-01
Financial and economic history is strewn with bubbles and crashes, booms and busts, crises and upheavals of all sorts. Understanding the origin of these events is arguably one of the most important problems in economic theory. In this paper, we review recent efforts to include heterogeneities and interactions in models of decision. We argue that the so-called Random Field Ising model (RFIM) provides a unifying framework to account for many collective socio-economic phenomena that lead to sudden ruptures and crises. We discuss different models that can capture potentially destabilizing self-referential feedback loops, induced either by herding, i.e. reference to peers, or trending, i.e. reference to the past, and that account for some of the phenomenology missing in the standard models. We discuss some empirically testable predictions of these models, for example robust signatures of RFIM-like herding effects, or the logarithmic decay of spatial correlations of voting patterns. One of the most striking results, inspired by statistical physics methods, is that Adam Smith's invisible hand can fail badly at solving simple coordination problems. We also insist on the issue of time scales, which can be extremely long in some cases and prevent socially optimal equilibria from being reached. As a theoretical challenge, the study of decision rules that violate so-called "detailed balance" is needed to decide whether conclusions based on current models (which all assume detailed balance) are indeed robust and generic.
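The abrupt collective shifts discussed above can be illustrated with a zero-temperature RFIM sweep: each agent adopts +1 once the global incentive, peer pressure, and its idiosyncratic field add up to a positive push, and flips propagate as avalanches. The couplings, disorder strength, and ring topology are illustrative choices, not taken from the paper.

```python
import random

def rfim_sweep(n=200, J=1.0, sigma=0.5, seed=3):
    """Zero-temperature RFIM on a ring (illustrative parameters): agent i
    flips to +1 once h + J*(sum of neighbor states) + f_i > 0, where f_i
    is a frozen idiosyncratic field. Sweeping h upward yields avalanches
    and an abrupt collective shift of the average opinion."""
    rng = random.Random(seed)
    f = [rng.gauss(0, sigma) for _ in range(n)]
    s = [-1] * n
    magnetization = []
    hs = [i * 0.01 - 2.0 for i in range(401)]   # h swept from -2 to +2
    for h in hs:
        changed = True
        while changed:                           # relax avalanches fully
            changed = False
            for i in range(n):
                local = h + J * (s[i - 1] + s[(i + 1) % n]) + f[i]
                if s[i] == -1 and local > 0:
                    s[i] = 1
                    changed = True
        magnetization.append(sum(s) / n)
    return hs, magnetization

hs, m = rfim_sweep()
```

Because spins only flip upward during the sweep, the magnetization curve is monotone and, for weak disorder, jumps discontinuously: the herding-induced rupture the abstract describes.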
Nonlinear Modeling by Assembling Piecewise Linear Models
NASA Technical Reports Server (NTRS)
Yao, Weigang; Liou, Meng-Sing
2013-01-01
To preserve the nonlinearity of a full-order system over a parameter range of interest, we propose a simple modeling approach that assembles a set of piecewise local solutions, including first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
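The assembly of local Taylor models with radial-basis-function weights can be sketched as follows; the sin(x) target, sample spacing, and kernel width are illustrative stand-ins for the aerodynamic problem.

```python
import numpy as np

def rbf_blend_model(samples, width=0.3):
    """Blend local linear models with Gaussian RBF weights (sketch).
    samples: list of (x0, f(x0), df/dx at x0); each defines a local
    first-order Taylor model f(x) ~ f0 + g0*(x - x0)."""
    def predict(x):
        w = np.array([np.exp(-((x - x0) / width) ** 2)
                      for x0, _, _ in samples])
        w /= w.sum()                      # normalized blending weights
        local = np.array([f0 + g0 * (x - x0) for x0, f0, g0 in samples])
        return float(w @ local)
    return predict

# Nonlinear target f(x) = sin(x); sample the value and slope at a few
# states, then blend the local linearizations in between.
xs = np.linspace(0.0, np.pi, 9)
model = rbf_blend_model([(x, np.sin(x), np.cos(x)) for x in xs])
```

Between sampling states the prediction smoothly interpolates the neighboring linearizations, which is what lets the assembled model track nonlinear behavior that any single linear model would miss.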
Potential flow theory and operation guide for the panel code PMARC
NASA Technical Reports Server (NTRS)
Ashby, Dale L.; Dudley, Michael R.; Iguchi, Steve K.; Browne, Lindsey; Katz, Joseph
1991-01-01
The theoretical basis for PMARC, a low-order potential-flow panel code for modeling complex three-dimensional geometries, is outlined. Several of the advanced features currently included in the code, such as internal flow modeling, a simple jet model, and a time-stepping wake model, are discussed in some detail. The code is written using adjustable size arrays so that it can be easily redimensioned for the size problem being solved and the computer hardware being used. An overview of the program input is presented, with a detailed description of the input available in the appendices. Finally, PMARC results for a generic wing/body configuration are compared with experimental data to demonstrate the accuracy of the code. The input file for this test case is given in the appendices.
NASA Technical Reports Server (NTRS)
Holbrook, G. T.; Dunham, D. M.
1985-01-01
Detailed pressure distribution measurements were made for 11 twist configurations of a unique, multisegmented wing model having an aspect ratio of 7 and a taper ratio of 1. These configurations encompassed span loads ranging from that of an untwisted wing to simple flapped wings both with and without upper-surface spoilers attached. For each of the wing twist configurations, electronic scanning pressure transducers were used to obtain 580 surface pressure measurements over the wing in about 0.1 sec. Integrated pressure distribution measurements compared favorably with force-balance measurements of lift on the model when the model centerbody lift was included. Complete plots and tabulations of the pressure distribution data for each wing twist configuration are provided.
On the Modeling of Thermal Radiation at the Top Surface of a Vacuum Arc Remelting Ingot
NASA Astrophysics Data System (ADS)
Delzant, P.-O.; Baqué, B.; Chapelle, P.; Jardy, A.
2018-02-01
Two models have been implemented for calculating the thermal radiation emitted at the ingot top in the VAR process, namely, a crude model that considers only radiative heat transfer between the free surface and electrode tip and a more detailed model that describes all radiative exchanges between the ingot, electrode, and crucible wall using a radiosity method. From the results of the second model, it is found that the radiative heat flux at the ingot top may depend heavily on the arc gap length and the electrode radius, but remains almost unaffected by variations of the electrode height. Both radiation models have been integrated into a CFD numerical code that simulates the growth and solidification of a VAR ingot. The simulation of a Ti-6-4 alloy melt shows that use of the detailed radiation model leads to some significant modification of the simulation results compared with the simple model. This is especially true during the hot-topping phase, where the top radiation plays an increasingly important role compared with the arc energy input. Thus, while the crude model has the advantage of its simplicity, use of the detailed model should be preferred.
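The radiosity method used in the detailed model reduces to solving a linear system J = E + diag(rho) F J for the surface radiosities. A minimal sketch, with a textbook two-surface enclosure standing in for the ingot/electrode/crucible geometry:

```python
import numpy as np

def radiosity(emitted, reflectance, F):
    """Solve J = E + diag(rho) F J for gray diffuse surfaces, where E is
    the emitted flux, rho the reflectances, and F the view-factor matrix."""
    n = len(emitted)
    A = np.eye(n) - np.diag(reflectance) @ F
    return np.linalg.solve(A, np.asarray(emitted, float))

# Illustrative case: two infinite parallel plates, each seeing only the
# other (F12 = F21 = 1); closed-form answer J1 = (E1 + r1*E2)/(1 - r1*r2).
F = np.array([[0.0, 1.0], [1.0, 0.0]])
J = radiosity(emitted=[1.0, 0.0], reflectance=[0.5, 0.5], F=F)
```

The crude model in the abstract corresponds to keeping only a single exchange term of this system, which is why it diverges from the detailed model once the crucible wall contribution matters.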
Development of Maps of Simple and Complex Cells in the Primary Visual Cortex
Antolík, Ján; Bednar, James A.
2011-01-01
Hubel and Wiesel (1962) classified primary visual cortex (V1) neurons as either simple, with responses modulated by the spatial phase of a sine grating, or complex, i.e., largely phase invariant. Much progress has been made in understanding how simple cells develop, and there are now detailed computational models establishing how they can form topographic maps ordered by orientation preference. There are also models of how complex cells can develop using outputs from simple cells with different phase preferences, but no model of how a topographic orientation map of complex cells could be formed based on the actual connectivity patterns found in V1. Addressing this question is important, because the majority of existing developmental models of simple-cell maps group neurons selective to similar spatial phases together, which is contrary to experimental evidence, and makes it difficult to construct complex cells. Overcoming this limitation is not trivial, because mechanisms responsible for map development drive receptive fields (RF) of nearby neurons to be highly correlated, while co-oriented RFs of opposite phases are anti-correlated. In this work, we model V1 as two topographically organized sheets representing cortical layers 4 and 2/3. Only layer 4 receives direct thalamic input. Both sheets are connected with narrow feed-forward and feedback connectivity. Only layer 2/3 contains strong long-range lateral connectivity, in line with current anatomical findings. Initially all weights in the model are random, and each is modified via a Hebbian learning rule. The model develops smooth, matching orientation preference maps in both sheets. Layer 4 units become simple cells, with phase preference arranged randomly, while those in layer 2/3 are primarily complex cells. To our knowledge this model is the first explaining how simple cells can develop with random phase preference, and how maps of complex cells can develop, using only realistic patterns of connectivity.
PMID:21559067
Simple stochastic model for El Niño with westerly wind bursts
Thual, Sulian; Majda, Andrew J.; Chen, Nan; Stechmann, Samuel N.
2016-01-01
Atmospheric wind bursts in the tropics play a key role in the dynamics of the El Niño Southern Oscillation (ENSO). A simple modeling framework is proposed that summarizes this relationship and captures major features of the observational record while remaining physically consistent and amenable to detailed analysis. Within this simple framework, wind burst activity evolves according to a stochastic two-state Markov switching–diffusion process that depends on the strength of the western Pacific warm pool, and is coupled to simple ocean–atmosphere processes that are otherwise deterministic, stable, and linear. A simple model with this parameterization and no additional nonlinearities reproduces a realistic ENSO cycle with intermittent El Niño and La Niña events of varying intensity and strength as well as realistic buildup and shutdown of wind burst activity in the western Pacific. The wind burst activity has a direct causal effect on the ENSO variability: in particular, it intermittently triggers regular El Niño or La Niña events, super El Niño events, or no events at all, which enables the model to capture observed ENSO statistics such as the probability density function and power spectrum of eastern Pacific sea surface temperatures. The present framework provides further theoretical and practical insight on the relationship between wind burst activity and the ENSO. PMID:27573821
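The two-state Markov switching-diffusion parameterization can be sketched as follows; the switching rates, damping, and noise amplitudes are invented for illustration and are not the paper's fitted values.

```python
import math
import random

def simulate_wind_bursts(T=50000, dt=0.1, seed=7):
    """Two-state Markov switching-diffusion sketch (illustrative rates):
    the wind burst amplitude a follows an Ornstein-Uhlenbeck process whose
    noise level depends on a hidden quiet/active Markov state."""
    rng = random.Random(seed)
    state, a = 0, 0.0
    switch = {0: 0.01, 1: 0.05}       # per-unit-time switching rates
    sigma = {0: 0.1, 1: 1.0}          # quiet vs active noise amplitude
    out = []
    for _ in range(T):
        if rng.random() < switch[state] * dt:
            state = 1 - state          # Markov jump between the two regimes
        a += -0.2 * a * dt + sigma[state] * math.sqrt(dt) * rng.gauss(0, 1)
        out.append((state, a))
    return out

traj = simulate_wind_bursts()
```

In the full model the switching rates additionally depend on the warm pool strength, which is what couples the burst activity to the ocean state; here they are held fixed for simplicity.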
ERIC Educational Resources Information Center
Jaques, David
1981-01-01
Argues that games with a simple communication structure and/or an abstract content have more virtues than games which introduce too many details into the roles and scenario. Four such "simple" games are described, one in detail, and four references are listed. (LLS)
Modeling epidemics on adaptively evolving networks: A data-mining perspective.
Kattis, Assimakis A; Holiday, Alexander; Stoica, Ana-Andreea; Kevrekidis, Ioannis G
2016-01-01
The exploration of epidemic dynamics on dynamically evolving ("adaptive") networks poses nontrivial challenges to the modeler, such as the determination of a small number of informative statistics of the detailed network state (that is, a few "good observables") that usefully summarize the overall (macroscopic, systems-level) behavior. Obtaining reduced, accurate models of small size in terms of these few statistical observables--that is, trying to coarse-grain the full network epidemic model to a small but useful macroscopic one--is even more daunting. Here we describe a data-based approach to solving the first challenge: the detection of a few informative collective observables of the detailed epidemic dynamics. This is accomplished through Diffusion Maps (DMAPS), a recently developed data-mining technique. We illustrate the approach through simulations of a simple mathematical model of epidemics on a network: a model known to exhibit complex temporal dynamics. We discuss potential extensions of the approach, as well as possible shortcomings.
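A bare-bones Diffusion Maps computation can be sketched in a few lines: build a Gaussian kernel on pairwise distances, normalize it into a Markov matrix, and read collective coordinates off its leading eigenvectors. The kernel scale and the toy one-dimensional data set below are illustrative.

```python
import numpy as np

def diffusion_maps(X, eps):
    """Bare-bones diffusion map: Gaussian kernel on pairwise distances,
    Markov (row) normalization, then eigendecomposition. Returns the
    eigenvalues and eigenvectors sorted by decreasing eigenvalue."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    return vals.real[order], vecs.real[:, order]

# Points on a one-dimensional curve embedded in 2-D; the first nontrivial
# eigenvector should recover the curve's internal coordinate.
t = np.linspace(0.0, 3.0, 60)
X = np.stack([np.cos(t), np.sin(t)], axis=1)
vals, vecs = diffusion_maps(X, eps=0.1)
phi1 = vecs[:, 1]      # first nontrivial diffusion coordinate
```

The leading eigenvector is constant (eigenvalue 1); the next few eigenvectors play the role of the "good observables" the abstract refers to, here recovering the single intrinsic coordinate of the toy data.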
Automatic network coupling analysis for dynamical systems based on detailed kinetic models.
Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich
2005-10-01
We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
NASA Astrophysics Data System (ADS)
Legates, David R.; Junghenn, Katherine T.
2018-04-01
Many local weather station networks that measure a number of meteorological variables (i.e., mesonetworks) have recently been established, with soil moisture occasionally being part of the suite of measured variables. These mesonetworks provide data from which detailed estimates of various hydrological parameters, such as precipitation and reference evapotranspiration, can be made which, when coupled with simple surface characteristics available from soil surveys, can be used to obtain estimates of soil moisture. The question is: can meteorological data be used with a simple hydrologic model to estimate daily soil moisture accurately at a mesonetwork site? Using a state-of-the-art mesonetwork that also includes soil moisture measurements across the US State of Delaware, the efficacy of a simple, modified Thornthwaite/Mather-based daily water balance model based on these mesonetwork observations to estimate site-specific soil moisture is determined. Results suggest that the model works reasonably well for most well-drained sites and provides good qualitative estimates of measured soil moisture, often near the accuracy of the soil moisture instrumentation. The model has particular trouble in that it cannot properly simulate the slow drainage that occurs in poorly drained soils after heavy rains, and interception loss, resulting from grass not being short-cropped as expected, also adversely affects the simulation. However, the model could be tuned to accommodate some non-standard siting characteristics.
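A minimal Thornthwaite/Mather-style daily bucket, in the spirit of the model described above; the field capacity and forcing series are illustrative, and the slow-drainage and interception effects the abstract identifies as problem areas are deliberately omitted:

```python
def daily_water_balance(precip, pet, capacity=150.0):
    """Minimal Thornthwaite/Mather-style daily bucket (illustrative):
    rain tops up soil moisture S, any excess above field capacity drains
    immediately, and actual ET is PET scaled by relative storage S/capacity."""
    s = capacity
    series = []
    for p, e in zip(precip, pet):
        s = min(s + p, capacity)        # infiltration; excess runs off/drains
        s -= min(e * s / capacity, s)   # AET limited by available storage
        series.append(s)
    return series

# A 10-day dry spell, one 60 mm storm, then another dry spell (all in mm):
p = [0.0] * 10 + [60.0] + [0.0] * 10
e = [4.0] * 21
sm = daily_water_balance(p, e)
```

The instantaneous-drainage step is exactly the simplification that fails in poorly drained soils: real storage can sit above field capacity for days after heavy rain, which this bucket cannot represent.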
Do the Details Matter? Comparing Performance Forecasts from Two Computational Theories of Fatigue
2009-12-01
CFD study of a simple orifice pulse tube cooler
NASA Astrophysics Data System (ADS)
Zhang, X. B.; Qiu, L. M.; Gan, Z. H.; He, Y. L.
2007-05-01
Pulse tube coolers (PTC) have the advantages of long life and low vibration over conventional cryocoolers, such as G-M and Stirling coolers, because of the absence of moving parts at low temperature. This paper performs a two-dimensional axisymmetric computational fluid dynamic (CFD) simulation of a GM-type simple orifice PTC (OPTC). The detailed modeling process and the general results, such as the phase difference between velocity and pressure at the cold end, the temperature profiles along the wall, and the temperature oscillations at the cold end with different heat loads, are presented. Emphasis is placed on analyzing the complicated phenomena of multi-dimensional flow and heat transfer in the pulse tube under conditions of oscillating pressure. A swirling flow pattern in the pulse tube is observed and the mechanism of its formation is analyzed in detail, which is further validated by modeling a basic PTC. The swirl causes undesirable mixing in the thermally stratified fluid and is partially responsible for the poor overall performance of the cooler, such as an unsteady cold-end temperature.
Huang, Yuan-sheng; Yang, Zhi-rong; Zhan, Si-yan
2015-06-18
To investigate the use of simple pooling and the bivariate model in meta-analyses of diagnostic test accuracy (DTA) published in Chinese journals (January to November, 2014), compare the differences in results from these two models, and explore the impact of between-study variability of sensitivity and specificity on the differences. DTA meta-analyses were searched through the Chinese Biomedical Literature Database (January to November, 2014). Details of the models and the data for the fourfold tables were extracted. Descriptive analysis was conducted to investigate the prevalence of the simple pooling method and the bivariate model in the included literature. Data were re-analyzed with the two models respectively. Differences in the results were examined by the Wilcoxon signed rank test. How the result differences were affected by between-study variability of sensitivity and specificity, expressed by I2, was explored. In total, 55 systematic reviews, containing 58 DTA meta-analyses, were included, and 25 DTA meta-analyses were eligible for re-analysis. Simple pooling was used in 50 (90.9%) systematic reviews and the bivariate model in 1 (1.8%). The remaining 4 (7.3%) articles used other models for pooling sensitivity and specificity, or pooled neither of them. Of the reviews simply pooling sensitivity and specificity, 41 (82.0%) were at risk of wrongly using the Meta-DiSc software. The differences in medians of sensitivity and specificity between the two models were both 0.011 (P < 0.001 and P = 0.031, respectively). Greater differences were found as the I2 of sensitivity or specificity became larger, especially when I2 > 75%. Most DTA meta-analyses published in Chinese journals (January to November, 2014) combine sensitivity and specificity by simple pooling. The Meta-DiSc software can pool sensitivity and specificity only through a fixed-effect model, but a high proportion of authors think it can implement a random-effects model. Simple pooling tends to underestimate the results compared with the bivariate model.
The greater the between-study variance is, the more likely simple pooling is to show a larger deviation. It is necessary to improve knowledge of the statistical methods and software for meta-analyses of DTA data.
Empirical testing of an analytical model predicting electrical isolation of photovoltaic modules
NASA Astrophysics Data System (ADS)
Garcia, A., III; Minning, C. P.; Cuddihy, E. F.
A major design requirement for photovoltaic modules is that the encapsulation system be capable of withstanding large DC potentials without electrical breakdown. Presented is a simple analytical model which can be used to estimate material thickness to meet this requirement for a candidate encapsulation system or to predict the breakdown voltage of an existing module design. A series of electrical tests to verify the model are described in detail. The results of these verification tests confirmed the utility of the analytical model for preliminary design of photovoltaic modules.
A simple model of bipartite cooperation for ecological and organizational networks.
Saavedra, Serguei; Reed-Tsochas, Felix; Uzzi, Brian
2009-01-22
In theoretical ecology, simple stochastic models that satisfy two basic conditions about the distribution of niche values and feeding ranges have proved successful in reproducing the overall structural properties of real food webs, using species richness and connectance as the only input parameters. Recently, more detailed models have incorporated higher levels of constraint in order to reproduce the actual links observed in real food webs. Here, building on previous stochastic models of consumer-resource interactions between species, we propose a highly parsimonious model that can reproduce the overall bipartite structure of cooperative partner-partner interactions, as exemplified by plant-animal mutualistic networks. Our stochastic model of bipartite cooperation uses simple specialization and interaction rules, and only requires three empirical input parameters. We test the bipartite cooperation model on ten large pollination data sets that have been compiled in the literature, and find that it successfully replicates the degree distribution, nestedness and modularity of the empirical networks. These properties are regarded as key to understanding cooperation in mutualistic networks. We also apply our model to an extensive data set of two classes of company engaged in joint production in the garment industry. Using the same metrics, we find that the network of manufacturer-contractor interactions exhibits similar structural patterns to plant-animal pollination networks. This surprising correspondence between ecological and organizational networks suggests that the simple rules of cooperation that generate bipartite networks may be generic, and could prove relevant in many different domains, ranging from biological systems to human society.
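A heavily simplified, hypothetical sketch of a niche-style stochastic generator for a bipartite interaction matrix, in the spirit of (but not identical to) the specialization and interaction rules described above; the rules, parameters and counts here are illustrative assumptions, not the published model.

```python
import random

def bipartite_niche_model(n_plants, n_animals, mean_range, seed=0):
    """Toy niche-style generator for a bipartite 0/1 interaction matrix.
    Each animal interacts with every plant whose niche value lies within
    its feeding range (a simplification of the published rules)."""
    rng = random.Random(seed)  # seeded for reproducibility
    plant_niche = [rng.random() for _ in range(n_plants)]
    links = [[0] * n_plants for _ in range(n_animals)]
    for a in range(n_animals):
        centre = rng.random()                 # animal's niche position
        width = rng.uniform(0, 2 * mean_range)  # its interaction range
        for p, v in enumerate(plant_niche):
            if abs(v - centre) <= width / 2:
                links[a][p] = 1
    return links

m = bipartite_niche_model(n_plants=20, n_animals=10, mean_range=0.3)
```

Structural metrics such as degree distribution, nestedness and modularity would then be computed on matrices like `m` and compared against empirical pollination networks.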
Modelling the complete operation of a free-piston shock tunnel for a low enthalpy condition
NASA Astrophysics Data System (ADS)
McGilvray, M.; Dann, A. G.; Jacobs, P. A.
2013-07-01
Only a limited number of free-stream flow properties can be measured at the nozzle exit of hypersonic impulse facilities. This poses challenges for experimenters when subsequently analysing experimental data obtained from these facilities. Typically in a reflected shock tunnel, a simple analysis requiring modest computational resources is used to calculate quasi-steady gas properties. This simple analysis combines the initial fill conditions and experimental measurements in analytical calculations of each major flow process, using forward coupling with minor corrections to include processes that are not directly modelled. However, this simplistic approach leads to an unknown level of discrepancy from the true flow properties. To explore the accuracy of the simple modelling technique, this paper details the use of transient one- and two-dimensional numerical simulations of a complete facility to obtain more refined free-stream flow properties from a free-piston reflected shock tunnel operating at low-enthalpy conditions. These calculations were verified by comparison with experimental data obtained from the facility. For the condition and facility investigated, the test conditions at the nozzle exit produced with the simple modelling technique agree with the time- and space-averaged results of the complete facility calculations to within the accuracy of the experimental measurements.
Prediction of nearfield jet entrainment by an interactive mixing/afterburning model
NASA Technical Reports Server (NTRS)
Dash, S. M.; Pergament, H. S.; Wilmoth, R. G.
1978-01-01
The development of a computational model (BOAT) for calculating nearfield jet entrainment, and its application to the prediction of nozzle boattail pressures, is discussed. BOAT accounts for the detailed turbulence and thermochemical processes occurring in the nearfield shear layers of jet engine (and rocket) exhaust plumes while interfacing with the inviscid exhaust and external flowfield regions in an overlaid, interactive manner. The ability of the model to analyze simple free shear flows is assessed by detailed comparisons with fundamental laboratory data. The overlaid methodology and the entrainment correction employed to yield the effective plume boundary conditions are assessed via application of BOAT in conjunction with the codes comprising the NASA/LRC patched viscous/inviscid model for determining nozzle boattail drag for subsonic/transonic external flows. Comparisons between the predictions and data on underexpanded laboratory cold air jets are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, William H., E-mail: millerwh@berkeley.edu; Cotton, Stephen J., E-mail: StephenJCotton47@gmail.com
2015-04-07
It is noted that the recently developed symmetrical quasi-classical (SQC) treatment of the Meyer-Miller (MM) model for the simulation of electronically non-adiabatic dynamics provides a good description of detailed balance, even though the dynamics which results from the classical MM Hamiltonian is “Ehrenfest dynamics” (i.e., the force on the nuclei is an instantaneous coherent average over all electronic states). This is seen to be a consequence of the SQC windowing methodology for “processing” the results of the trajectory calculation. For a particularly simple model discussed here, this is shown to be true regardless of the choice of windowing function employed in the SQC model, and for a more realistic full classical molecular dynamics simulation, it is seen to be maintained correctly for very long times.
Complex Autocatalysis in Simple Chemistries.
Virgo, Nathaniel; Ikegami, Takashi; McGregor, Simon
2016-01-01
Life on Earth must originally have arisen from abiotic chemistry. Since the details of this chemistry are unknown, we wish to understand, in general, which types of chemistry can lead to complex, lifelike behavior. Here we show that even very simple chemistries in the thermodynamically reversible regime can self-organize to form complex autocatalytic cycles, with the catalytic effects emerging from the network structure. We demonstrate this with a very simple but thermodynamically reasonable artificial chemistry model. By suppressing the direct reaction from reactants to products, we obtain the simplest kind of autocatalytic cycle, resulting in exponential growth. When these simple first-order cycles are prevented from forming, the system achieves superexponential growth through more complex, higher-order autocatalytic cycles. This leads to nonlinear phenomena such as oscillations and bistability, the latter of which is of particular interest regarding the origins of life.
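The contrast drawn above between first-order (exponential) and higher-order (superexponential) autocatalytic growth can be sketched with a toy forward-Euler integration. The rate laws and constants below are illustrative stand-ins, not the paper's reaction-network model.

```python
# First-order autocatalysis dx/dt = k*x gives exponential growth;
# a higher-order cycle, sketched here as dx/dt = k*x**2, grows
# superexponentially (finite-time blow-up at t = 1/(k*x0)).

def integrate(rate, x0=1.0, k=0.5, dt=1e-3, t_end=1.0):
    """Forward-Euler integration of dx/dt = rate(k, x)."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * rate(k, x)
    return x

first_order = integrate(lambda k, x: k * x)       # ~ x0 * exp(k*t)
second_order = integrate(lambda k, x: k * x * x)  # ~ x0 / (1 - k*x0*t)

print(first_order, second_order)
```

At t = 1 with k = 0.5 and x0 = 1, the analytic values are e^0.5 ≈ 1.649 and 1/(1 - 0.5) = 2, so the second-order cycle has already overtaken the exponential one well before its blow-up time.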
NASA Astrophysics Data System (ADS)
Jaiswal, D.; Long, S.; Parton, W. J.; Hartman, M.
2012-12-01
A coupled modeling system comprising a crop growth model (BioCro) and a biogeochemical model (DayCent) has been developed to assess the two-way interactions between plant growth and biogeochemistry. Crop growth in BioCro is simulated using a detailed mechanistic biochemical and biophysical multi-layer canopy model, with dry biomass partitioned into different plant organs according to phenological stage. Using hourly weather records, the model partitions light between dynamically changing sunlit and shaded portions of the canopy and computes carbon and water exchange with the atmosphere and through the canopy for each hour of the day, each day of the year. The model has been parameterized for the bioenergy crops sugarcane, Miscanthus and switchgrass, and validation has shown it to predict growth cycles and partitioning of biomass to a high degree of accuracy. As such it provides an ideal input for a soil biogeochemical model. DayCent is an established model for predicting long-term changes in soil C & N and soil-atmosphere exchanges of greenhouse gases. At present, DayCent uses a relatively simple productivity model. In this project BioCro has replaced this simple model to provide DayCent with a productivity and growth model equal in detail to its biogeochemistry. Dynamic coupling of these two models to produce CroCent allows for differential C:N ratios of litter fall (based on the rates of senescence of different plant organs) and calibration of the model for realistic plant productivity in a mechanistic way. A process-based approach to modeling plant growth is needed for bioenergy crops because research on these crops (especially second-generation feedstocks) has started only recently, and detailed agronomic information on growth, yield and management is too limited for effective empirical models. The coupled model provides a means to test and improve the model against high-resolution data, such as that obtained by eddy covariance, and to explore the yield implications of different crop and soil management.
Analysis of NASA JP-4 fire tests data and development of a simple fire model
NASA Technical Reports Server (NTRS)
Raj, P.
1980-01-01
The temperature, velocity and species concentration data obtained during the NASA fire tests (3 m, 7.5 m and 15 m diameter JP-4 fires) were analyzed. Based on this data analysis, a simple theoretical model was formulated to predict the temperature and velocity profiles in JP-4 fires. The theoretical model, which does not take into account the detailed chemistry of combustion, is capable of predicting the extent of necking of the fire near its base.
Horizon Brightness Revisited: Measurements and a Model of Clear-Sky Radiances
1994-07-20
Clear daytime skies persistently display a subtle local maximum of radiance near the astronomical horizon. Spectroradiometry and digital image analysis confirm this maximum’s reality, and they show that its angular width and elevation vary with solar elevation, azimuth relative to the Sun, and aerosol optical depth. Many existing models of atmospheric scattering do not generate this near-horizon radiance maximum, but a simple second-order scattering model does, and it reproduces many of the maximum’s details.
NASA Technical Reports Server (NTRS)
Appleby, J. F.; Van Blerkom, D. J.
1975-01-01
The article details an inhomogeneous reflecting layer (IRFL) model designed to survey absorption line behavior from a Squires-like cloud cover (which is characterized by convection cell structure). Computational problems and procedures are discussed in detail. The results show trends usually opposite to those predicted by a simple reflecting layer model. Per cent equivalent width variations for the tower model are usually somewhat greater for weak than for relatively strong absorption lines, with differences of a factor of about two or three. IRFL equivalent width variations do not differ drastically as a function of geometry when the total volume of absorbing gas is held constant. The IRFL results are in many instances consistent with observed equivalent width variations of Jupiter, Saturn, and Venus.
Automated adaptive inference of phenomenological dynamical models.
Daniels, Bryan C; Nemenman, Ilya
2015-08-21
Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved.
NASA Astrophysics Data System (ADS)
Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.
2015-08-01
Outdoor large-scale cultural sites are sensitive to environmental, natural and human-made factors, implying an imminent need for spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, quite different actors are involved in Cultural Heritage research (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that 5D modelling (3D geometry plus time plus levels of detail) is ideally required for the preservation and assessment of outdoor large-scale cultural sites; it is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed, based on a spatio-temporal aggregation of 3D digital models that incorporates a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of detail at the next time instances and which at lower ones. In this way, dynamic change-history maps are created, indicating the spatial probabilities of regions needing further 3D modelling at forthcoming instances. Using these maps, a predictive assessment can be made, that is, surfaces within the objects can be localized where a high-accuracy reconstruction process needs to be activated at forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information.
The open source 3DCityDB incorporating a PostgreSQL geo-database is used to manage and manipulate 3D data and their semantics.
Wiederholt, Ruscena; Bagstad, Kenneth J.; McCracken, Gary F.; Diffendorfer, Jay E.; Loomis, John B.; Semmens, Darius J.; Russell, Amy L.; Sansone, Chris; LaSharr, Kelsie; Cryan, Paul; Reynoso, Claudia; Medellin, Rodrigo A.; Lopez-Hoffman, Laura
2017-01-01
Given rapid changes in agricultural practice, it is critical to understand how alterations in ecological, technological, and economic conditions over time and space impact ecosystem services in agroecosystems. Here, we present a benefit transfer approach to quantify cotton pest-control services provided by a generalist predator, the Mexican free-tailed bat (Tadarida brasiliensis mexicana), in the southwestern United States. We show that pest-control estimates derived using (1) a compound spatial–temporal model – which incorporates spatial and temporal variability in crop pest-control service values – are likely to exhibit less error than those derived using (2) a simple-spatial model (i.e., a model that extrapolates values derived for one area directly, without adjustment, to other areas) or (3) a simple-temporal model (i.e., a model that extrapolates data from a few points in time over longer time periods). Using our compound spatial–temporal approach, the annualized pest-control value was $12.2 million, in contrast to an estimate of $70.1 million (5.7 times greater), obtained from the simple-spatial approach. Using estimates from one year (simple-temporal approach) revealed large value differences (0.4 times smaller to 2 times greater). Finally, we present a detailed protocol for valuing pest-control services, which can be used to develop robust pest-control transfer functions for generalist predators in agroecosystems.
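The ratio reported in this abstract between the simple-spatial and compound spatial-temporal estimates can be checked with one line of arithmetic:

```python
# Annualized pest-control values from the abstract (USD per year).
compound = 12.2e6        # compound spatial-temporal estimate
simple_spatial = 70.1e6  # simple-spatial estimate
ratio = simple_spatial / compound
print(round(ratio, 1))   # 5.7, matching the "5.7 times greater" figure
```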
2013-11-01
duration, or shock-pulse shape. Used in this computational study is a coarse-grained model of the lipid vesicle as a simplified model of a cell. … realistic detail but to focus on a simple model of the major constituent of a cell membrane, the phospholipid bilayer. In this work, we studied the …
Incorporating inductances in tissue-scale models of cardiac electrophysiology
NASA Astrophysics Data System (ADS)
Rossi, Simone; Griffith, Boyce E.
2017-09-01
In standard models of cardiac electrophysiology, including the bidomain and monodomain models, local perturbations can propagate at infinite speed. We address this unrealistic property by developing a hyperbolic bidomain model that is based on a generalization of Ohm's law with a Cattaneo-type model for the fluxes. Further, we obtain a hyperbolic monodomain model in the case that the intracellular and extracellular conductivity tensors have the same anisotropy ratio. In one spatial dimension, the hyperbolic monodomain model is equivalent to a cable model that includes axial inductances, and the relaxation times of the Cattaneo fluxes are strictly related to these inductances. A purely linear analysis shows that the inductances are negligible, but models of cardiac electrophysiology are highly nonlinear, and linear predictions may not capture the fully nonlinear dynamics. In fact, contrary to the linear analysis, we show that for simple nonlinear ionic models, an increase in conduction velocity is obtained for small and moderate values of the relaxation time. A similar behavior is also demonstrated with biophysically detailed ionic models. Using the Fenton-Karma model along with a low-order finite element spatial discretization, we numerically analyze differences between the standard monodomain model and the hyperbolic monodomain model. In a simple benchmark test, we show that the propagation of the action potential is strongly influenced by the alignment of the fibers with respect to the mesh in both the parabolic and hyperbolic models when using relatively coarse spatial discretizations. Accurate predictions of the conduction velocity require computational mesh spacings on the order of a single cardiac cell. We also compare the two formulations in the case of spiral break up and atrial fibrillation in an anatomically detailed model of the left atrium, and we examine the effect of intracellular and extracellular inductances on the virtual electrode phenomenon.
Ghorbani, Maryam; Mohammad-Rafiee, Farshid
2011-01-01
We develop a simple elastic model to study the conformation of DNA in the nucleosome core particle. The model considers the changes in the energy of the covalent bonds that connect the base pairs of each strand of the DNA double helix, as well as the lateral displacements and rotations of adjacent base pairs. We show that, because of the rigidity of the covalent bonds in the sugar-phosphate backbones, the base pair parameters are highly correlated; in particular, a strong twist-roll-slide correlation in the conformation of the nucleosomal DNA is clearly observed in the calculated results. This simple model succeeds in accounting for the detailed features of the structure of the nucleosomal DNA, particularly its most important base pair parameters, roll and slide, in good agreement with the experimental results. PMID:20972223
Predictability in community dynamics.
Blonder, Benjamin; Moulton, Derek E; Blois, Jessica; Enquist, Brian J; Graae, Bente J; Macias-Fauria, Marc; McGill, Brian; Nogué, Sandra; Ordonez, Alejandro; Sandel, Brody; Svenning, Jens-Christian
2017-03-01
The coupling between community composition and climate change spans a gradient from no lags to strong lags. The no-lag hypothesis is the foundation of many ecophysiological models, correlative species distribution modelling and climate reconstruction approaches. Simple lag hypotheses have become prominent in disequilibrium ecology, proposing that communities track climate change following a fixed function or with a time delay. However, more complex dynamics are possible and may lead to memory effects and alternate unstable states. We develop graphical and analytic methods for assessing these scenarios and show that these dynamics can appear in even simple models. The overall implications are that (1) complex community dynamics may be common and (2) detailed knowledge of past climate change and community states will often be necessary yet sometimes insufficient to make predictions of a community's future state. © 2017 John Wiley & Sons Ltd/CNRS.
Leveraging the UML Metamodel: Expressing ORM Semantics Using a UML Profile
DOE Office of Scientific and Technical Information (OSTI.GOV)
CUYLER,DAVID S.
2000-11-01
Object Role Modeling (ORM) techniques produce a detailed domain model from the perspective of the business owner/customer. The typical process begins with a set of simple sentences reflecting facts about the business. The output of the process is a single model representing primarily the persistent information needs of the business. This type of model contains little, if any, reference to a targeted computerized implementation: it is a model of business entities, not of software classes. Through well-defined procedures, an ORM model can be transformed into a high-quality object or relational schema.
VisTrails SAHM: visualization and workflow management for species habitat modeling
Morisette, Jeffrey T.; Jarnevich, Catherine S.; Holcombe, Tracy R.; Talbert, Colin B.; Ignizio, Drew A.; Talbert, Marian; Silva, Claudio; Koop, David; Swanson, Alan; Young, Nicholas E.
2013-01-01
The Software for Assisted Habitat Modeling (SAHM) has been created to both expedite habitat modeling and help maintain a record of the various input data, pre- and post-processing steps and modeling options incorporated in the construction of a species distribution model through the established workflow management and visualization VisTrails software. This paper provides an overview of the VisTrails:SAHM software including a link to the open source code, a table detailing the current SAHM modules, and a simple example modeling an invasive weed species in Rocky Mountain National Park, USA.
ERIC Educational Resources Information Center
Wholeben, Brent E.
This volume is an exposition of a mathematical modeling technique for use in the evaluation and solution of complex educational problems at all levels. It explores in detail the application of simple algebraic techniques to such issues as program reduction, fiscal rollbacks, and computer curriculum planning. Part I ("Introduction to the…
López-Guerra, Enrique A
2014-01-01
We examine different approaches to model viscoelasticity within atomic force microscopy (AFM) simulation. Our study ranges from very simple linear spring–dashpot models to more sophisticated nonlinear systems that are able to reproduce fundamental properties of viscoelastic surfaces, including creep, stress relaxation and the presence of multiple relaxation times. Some of the models examined have been previously used in AFM simulation, but their applicability to different situations has not yet been examined in detail. The behavior of each model is analyzed here in terms of force–distance curves, dissipated energy and any inherent unphysical artifacts. We focus in this paper on single-eigenmode tip–sample impacts, but the models and results can also be useful in the context of multifrequency AFM, in which the tip trajectories are very complex and there is a wider range of sample deformation frequencies (descriptions of tip–sample model behaviors in the context of multifrequency AFM require detailed studies and are beyond the scope of this work). PMID:25551043
Preliminary results from a four-working space, double-acting piston, Stirling engine controls model
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Lorenzo, C. F.
1980-01-01
A four-working-space, double-acting piston, Stirling engine simulation is being developed for controls studies. The development method is to construct two simulations: one for detailed fluid behavior, and a second model with simple fluid behavior but containing the four-working-space aspects and engine inertias; to validate these models separately; and then to upgrade the four-working-space model by incorporating the detailed fluid behavior model for all four working spaces. The single working space (SWS) model contains the detailed fluid dynamics. It has seven control volumes in which continuity, energy, and pressure loss effects are simulated. Comparison of the SWS model with experimental data shows reasonable agreement in net power versus speed characteristics for various mean pressure levels in the working space. The four working space (FWS) model was built to observe the behavior of the whole engine. The drive dynamics and vehicle inertia effects are simulated. To reduce calculation time, only three volumes are used in each working space and the gas temperatures are fixed (no energy equation). Comparison of the FWS model's predicted power with experimental data shows reasonable agreement. Since all four working spaces are simulated, the unique capabilities of the model are exercised to examine working-fluid supply transients, short-circuit transients, and piston-ring leakage effects.
Combination of geodetic measurements by means of a multi-resolution representation
NASA Astrophysics Data System (ADS)
Goebel, G.; Schmidt, M. G.; Börger, K.; List, H.; Bosch, W.
2010-12-01
Recent and in particular current satellite gravity missions provide important contributions to global Earth gravity models, and these global models can be refined by airborne and terrestrial gravity observations. The most common representation of a gravity field model, in terms of spherical harmonics, has the disadvantages that it is difficult to represent small spatial details and that it cannot handle data gaps appropriately. An adequate modeling using a multi-resolution representation (MRP) is necessary in order to exploit the highest degree of information from all these measurements. The MRP provides a simple hierarchical framework for identifying the properties of a signal. The procedure starts from the measurements, performs the decomposition into frequency-dependent detail signals by applying a pyramidal algorithm, and allows for data compression and filtering, i.e. data manipulations. Since different geodetic measurement types (terrestrial, airborne, spaceborne) cover different parts of the frequency spectrum, it seems reasonable to calculate the detail signals of the lower levels mainly from satellite data, the detail signals of the medium levels mainly from airborne data, and the detail signals of the higher levels mainly from terrestrial data. A concept is presented for how these different measurement types can be combined within the MRP. In this presentation, the basic principles, strategies and concepts for the generation of MRPs are shown. Examples of regional gravity field determination are presented.
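The pyramidal decomposition described above can be illustrated with a one-level Haar-style split of a 1-D signal into a coarse approximation (averages) and a detail signal (differences), from which the signal is exactly reconstructable. Real gravity-field MRPs use spherical basis functions, so this toy is only a structural analogy.

```python
# One pyramid level: pairwise averages (approximation) and pairwise
# half-differences (detail). Applying decompose() repeatedly to the
# approximation yields the multi-resolution hierarchy.

def decompose(signal):
    approx = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def reconstruct(approx, detail):
    out = []
    for m, d in zip(approx, detail):
        out += [m + d, m - d]  # invert average/difference exactly
    return out

sig = [4.0, 2.0, 5.0, 9.0, 1.0, 3.0, 2.0, 2.0]
a, d = decompose(sig)
assert reconstruct(a, d) == sig  # lossless round trip
```

Filtering or compression then amounts to manipulating the detail coefficients `d` at each level before reconstruction.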
Modelling tidewater glacier calving: from detailed process models to simple calving laws
NASA Astrophysics Data System (ADS)
Benn, Doug; Åström, Jan; Zwinger, Thomas; Todd, Joe; Nick, Faezeh
2017-04-01
The simple calving laws currently used in ice sheet models do not adequately reflect the complexity and diversity of calving processes. To be effective, calving laws must be grounded in a sound understanding of how calving actually works. We have developed a new approach to formulating calving laws, using a) the Helsinki Discrete Element Model (HiDEM) to explicitly model fracture and calving processes, and b) the full-Stokes continuum model Elmer/Ice to identify critical stress states associated with HiDEM calving events. A range of observed calving processes emerges spontaneously from HiDEM in response to variations in ice-front buoyancy and the size of subaqueous undercuts, and we show that HiDEM calving events are associated with characteristic stress patterns simulated in Elmer/Ice. Our results open the way to developing calving laws that properly reflect the diversity of calving processes, and provide a framework for a unified theory of the calving process continuum.
Teufel, Christoph; Fletcher, Paul C
2016-10-01
Computational models have become an integral part of basic neuroscience and have facilitated some of the major advances in the field. More recently, such models have also been applied to the understanding of disruptions in brain function. In this review, using examples and a simple analogy, we discuss the potential for computational models to inform our understanding of brain function and dysfunction. We argue that they may provide, in unprecedented detail, an understanding of the neurobiological and mental basis of brain disorders and that such insights will be key to progress in diagnosis and treatment. However, there are also potential problems attending this approach. We highlight these and identify simple principles that should always govern the use of computational models in clinical neuroscience, noting especially the importance of a clear specification of a model's purpose and of the mapping between mathematical concepts and reality. © The Author (2016). Published by Oxford University Press on behalf of the Guarantors of Brain.
Modeling the atmospheric chemistry of TICs
NASA Astrophysics Data System (ADS)
Henley, Michael V.; Burns, Douglas S.; Chynwat, Veeradej; Moore, William; Plitz, Angela; Rottmann, Shawn; Hearn, John
2009-05-01
An atmospheric chemistry model that describes the behavior and disposition of environmentally hazardous compounds discharged into the atmosphere was coupled with the transport and diffusion model, SCIPUFF. The atmospheric chemistry model was developed by reducing a detailed atmospheric chemistry mechanism to a simple empirical effective degradation rate term (keff) that is a function of important meteorological parameters such as solar flux, temperature, and cloud cover. Empirically derived keff functions that describe the degradation of target toxic industrial chemicals (TICs) were derived by statistically analyzing data generated from the detailed chemistry mechanism run over a wide range of (typical) atmospheric conditions. To assess and identify areas to improve the developed atmospheric chemistry model, sensitivity and uncertainty analyses were performed to (1) quantify the sensitivity of the model output (TIC concentrations) with respect to changes in the input parameters and (2) improve, where necessary, the quality of the input data based on sensitivity results. The model predictions were evaluated against experimental data. Chamber data were used to remove the complexities of dispersion in the atmosphere.
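The reduction described above replaces a detailed chemical mechanism with a single effective first-order degradation rate keff, so a TIC concentration decays as C(t) = C0·exp(-keff·t). The functional form of keff below (a photolysis term plus a thermal term) is an illustrative assumption, not the fitted functions from the paper.

```python
import math

def keff(temp_K, solar_flux):
    """Hypothetical empirical effective degradation rate (1/s):
    a solar-flux-driven photolysis term plus a temperature-driven
    thermal term. Coefficients are illustrative only."""
    return 1e-4 * solar_flux + 2e-3 * math.exp(-300.0 / temp_K)

def concentration(c0, temp_K, solar_flux, t_seconds):
    """First-order decay under the effective-rate approximation."""
    return c0 * math.exp(-keff(temp_K, solar_flux) * t_seconds)

c = concentration(1.0, 298.0, 1.0, 3600.0)
print(f"fraction remaining after 1 h: {c:.3f}")
```

In a transport model such as SCIPUFF, a keff of this kind would be evaluated from the local meteorology (solar flux, temperature, cloud cover) at each step of the plume calculation.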
Gauge Theories of Vector Particles
DOE R&D Accomplishments Database
Glashow, S. L.; Gell-Mann, M.
1961-04-24
The possibility of generalizing the Yang-Mills trick is examined. Thus we seek theories of vector bosons invariant under continuous groups of coordinate-dependent linear transformations. All such theories may be expressed as superpositions of certain "simple" theories; we show that each "simple theory is associated with a simple Lie algebra. We may introduce mass terms for the vector bosons at the price of destroying the gauge-invariance for coordinate-dependent gauge functions. The theories corresponding to three particular simple Lie algebras - those which admit precisely two commuting quantum numbers - are examined in some detail as examples. One of them might play a role in the physics of the strong interactions if there is an underlying super-symmetry, transcending charge independence, that is badly broken. The intermediate vector boson theory of weak interactions is discussed also. The so-called "schizon" model cannot be made to conform to the requirements of partial gauge-invariance.
PVWatts Version 1 Technical Reference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobos, A. P.
2013-10-01
The NREL PVWatts(TM) calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and makes several hidden assumptions about performance parameters. This technical reference details the individual sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimation.
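The chain of sub-models can be caricatured in a few lines. This sketch uses the widely known PVWatts-style DC power expression with illustrative default parameters; the defaults and the single loss factor are assumptions, not NREL's documented hidden values:

```python
def pv_power_ac(poa_wm2, cell_temp_c, pdc0_w=4000.0,
                gamma_per_c=-0.005, system_losses=0.14, inv_eff=0.96):
    """Simplified PVWatts-style AC power estimate (W).

    pdc0_w: nameplate DC rating; gamma_per_c: temperature coefficient of
    power; the real calculator chains more detailed irradiance, thermal,
    and inverter sub-models than this single expression.
    """
    pdc = pdc0_w * (poa_wm2 / 1000.0) * (1.0 + gamma_per_c * (cell_temp_c - 25.0))
    return max(pdc, 0.0) * (1.0 - system_losses) * inv_eff
```

At reference conditions (1000 W/m², 25 °C cell temperature) the estimate reduces to nameplate rating times the loss and inverter factors.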
Toward micro-scale spatial modeling of gentrification
NASA Astrophysics Data System (ADS)
O'Sullivan, David
A simple preliminary model of gentrification is presented. The model is based on an irregular cellular automaton architecture drawing on the concept of proximal space, which is well suited to the spatial externalities present in housing markets at the local scale. The rent gap hypothesis on which the model's cell transition rules are based is discussed. The model's transition rules are described in detail. Practical difficulties in configuring and initializing the model are described and its typical behavior reported. Prospects for further development of the model are discussed. The current model structure, while inadequate, is well suited to further elaboration and the incorporation of other interesting and relevant effects.
The Use of Modeling Approach for Teaching Exponential Functions
NASA Astrophysics Data System (ADS)
Nunes, L. F.; Prates, D. B.; da Silva, J. M.
2017-12-01
This work discusses the teaching and learning of mathematical content related to exponential functions in a group of freshman students enrolled in the first semester of the Science and Technology Bachelor's program (STB) of the Federal University of Jequitinhonha and Mucuri Valleys (UFVJM). As a contextualization tool strongly advocated in the literature, the modelling approach was used as an educational tool to contextualize the teaching-learning process of exponential functions for these students. To this end, simple models built with the GeoGebra software were used, and Didactic Engineering was adopted as the research methodology to provide a qualitative evaluation of the investigation and its results. As a consequence of this detailed research, some interesting aspects of the teaching and learning process were observed, discussed and described.
Proposed best practice for projects that involve modelling and simulation.
O'Kelly, Michael; Anisimov, Vladimir; Campbell, Chris; Hamilton, Sinéad
2017-03-01
Modelling and simulation has been used in many ways when developing new treatments. To be useful and credible, it is generally agreed that modelling and simulation should be undertaken according to some kind of best practice. A number of authors have suggested elements required for best practice in modelling and simulation. Elements that have been suggested include the pre-specification of goals, assumptions, methods, and outputs. However, a project that involves modelling and simulation could be simple or complex and could be of relatively low or high importance to the project. It has been argued that the level of detail and the strictness of pre-specification should be allowed to vary, depending on the complexity and importance of the project. This best practice document does not prescribe how to develop a statistical model. Rather, it describes the elements required for the specification of a project and requires that the practitioner justify in the specification the omission of any of the elements and, in addition, justify the level of detail provided about each element. This document is an initiative of the Special Interest Group for modelling and simulation. The Special Interest Group for modelling and simulation is a body open to members of Statisticians in the Pharmaceutical Industry and the European Federation of Statisticians in the Pharmaceutical Industry. Examples of a very detailed specification and a less detailed specification are included as appendices. Copyright © 2016 John Wiley & Sons, Ltd.
Overview of Rotating Cavitation and Cavitation Surge in the Fastrac Engine LOX Turbopump
NASA Technical Reports Server (NTRS)
Zoladz, Thomas; Turner, Jim (Technical Monitor)
2001-01-01
Observations regarding rotating cavitation and cavitation surge experienced during the development of the Fastrac 60 Klbf engine turbopump are discussed. Detailed observations from the analysis of both water flow and liquid oxygen test data are offered. Scaling and general comparison of rotating cavitation between water flow and liquid oxygen testing are discussed. Complex data features linking the localized rotating cavitation mechanism of the inducer to system surge components are described in detail. Finally a description of a simple lumped-parameter hydraulic system model developed to better understand observed data is given.
Concordance Between Current Job and Usual Job in Occupational and Industry Groupings
Luckhaupt, Sara E.; Cohen, Martha A.; Calvert, Geoffrey M.
2015-01-01
Objective To determine whether current job is a reasonable surrogate for usual job. Methods Data from the 2010 National Health Interview Survey were utilized to determine concordance between current and usual jobs for workers employed within the past year. Concordance was quantitated by kappa values for both simple and detailed industry and occupational groups. Good agreement is considered to be present when kappa values exceed 60. Results Overall kappa values ± standard errors were 74.5 ± 0.5 for simple industry, 72.4 ± 0.5 for detailed industry, 76.3 ± 0.4 for simple occupation, 73.7 ± 0.5 for detailed occupation, and 80.4 ± 0.6 for very broad occupational class. Sixty-five of 73 detailed industry groups and 78 of 81 detailed occupation groups evaluated had good agreement between current and usual jobs. Conclusions Current job can often serve as a reliable surrogate for usual job in epidemiologic studies. PMID:23969506
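The concordance statistic used here is unweighted Cohen's kappa (note that the reported values, e.g. 74.5, are kappa × 100). A minimal sketch of the computation for two codings of the same workers:

```python
from collections import Counter

def cohens_kappa(current, usual):
    """Unweighted Cohen's kappa between two categorical codings.

    current, usual: equal-length sequences of category labels
    (e.g. industry or occupation codes) for the same workers.
    """
    assert len(current) == len(usual)
    n = len(current)
    po = sum(c == u for c, u in zip(current, usual)) / n       # observed agreement
    pc, pu = Counter(current), Counter(usual)
    categories = set(pc) | set(pu)
    pe = sum((pc[k] / n) * (pu[k] / n) for k in categories)    # chance agreement
    return (po - pe) / (1.0 - pe)
```

For example, with codings [0, 0, 1, 1] vs. [0, 0, 1, 0], observed agreement is 0.75, chance agreement is 0.5, and kappa is 0.5 (i.e. 50 on the paper's scale).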
Cloud fluid models of gas dynamics and star formation in galaxies
NASA Technical Reports Server (NTRS)
Struck-Marcell, Curtis; Scalo, John M.; Appleton, P. N.
1987-01-01
The large dynamic range of star formation in galaxies, and the apparently complex environmental influences involved in triggering or suppressing star formation, challenge our understanding. The key to this understanding may be the detailed study of simple physical models for the dominant nonlinear interactions in interstellar cloud systems. One such model, a generalized Oort-model cloud fluid, is described, and two simple applications of it are explored. The first of these is the relaxation of an isolated volume of cloud fluid following a disturbance. Though very idealized, this closed-box study suggests a physical mechanism for starbursts, based on the approximate commensurability of massive cloud lifetimes and cloud collisional growth times. The second application is the modeling of colliding ring galaxies. In this case, the driving processes operating on a dynamical timescale interact with the local cloud processes operating on the above timescale. The result is a variety of interesting nonequilibrium behaviors, including spatial variations of star formation that do not depend monotonically on gas density.
Calibrating White Dwarf Asteroseismic Fitting Techniques
NASA Astrophysics Data System (ADS)
Castanheira, B. G.; Romero, A. D.; Bischoff-Kim, A.
2017-03-01
The main goal of looking for intrinsic variability in stars is the unique opportunity to study their internal structure. Once we have extracted independent modes from the data, it appears to be a simple matter of comparing the period spectrum with those from theoretical model grids to learn the inner structure of that star. However, asteroseismology is much more complicated than this simple description. We must account not only for observational uncertainties in period determination, but most importantly for the limitations of the model grids, coming from the uncertainties in the constitutive physics, and of the fitting techniques. In this work, we will discuss results of numerical experiments where we used different independently calculated model grids (white dwarf cooling models WDEC and fully evolutionary LPCODE-PUL) and fitting techniques to fit synthetic stars. The advantage of using synthetic stars is that we know the details of their interior structure, so we can assess how well our models and fitting techniques are able to recover the interior structure, as well as the stellar parameters.
Homogenized modeling methodology for 18650 lithium-ion battery module under large deformation
Tang, Liang; Cheng, Pengle
2017-01-01
Effective lithium-ion battery module modeling has become a bottleneck for full-size electric vehicle crash safety numerical simulation. Modeling every single cell in detail would be costly. However, computational accuracy could be lost if the module is modeled by using a simple bulk material or rigid body. To solve this critical engineering problem, a general method to establish a computational homogenized model for the cylindrical battery module is proposed. A single battery cell model is developed and validated through radial compression and bending experiments. To analyze the homogenized mechanical properties of the module, a representative unit cell (RUC) is extracted with the periodic boundary condition applied on it. An elastic–plastic constitutive model is established to describe the computational homogenized model for the module. Two typical packing modes, i.e., cubic dense packing and hexagonal packing for the homogenized equivalent battery module (EBM) model, are targeted for validation compression tests, as well as the models with detailed single cell description. Further, the homogenized EBM model is confirmed to agree reasonably well with the detailed battery module (DBM) model for different packing modes with a length scale of up to 15 × 15 cells and 12% deformation where the short circuit takes place. The suggested homogenized model for battery module makes way for battery module and pack safety evaluation for full-size electric vehicle crashworthiness analysis. PMID:28746390
NASA Technical Reports Server (NTRS)
Balakrishna, S.; Goglia, G. L.
1979-01-01
The details of the efforts to synthesize a control-compatible multivariable model of a liquid nitrogen cooled, gaseous nitrogen operated, closed circuit, cryogenic pressure tunnel are presented. The synthesized model was transformed into a real-time cryogenic tunnel simulator, and the model was validated by comparing its responses to the actual responses of the 0.3 m transonic cryogenic tunnel, using the quasi-steady-state and transient responses of the model and the tunnel. The global nature of the simple, explicit, lumped multivariable model of a closed circuit cryogenic tunnel is demonstrated.
Millimeter wave radiative transfer studies for precipitation measurements
NASA Technical Reports Server (NTRS)
Vivekanandan, J.; Evans, Frank
1989-01-01
Scattering calculations using the discrete dipole approximation and vector radiative transfer calculations were performed to model multiparameter radar return and passive microwave emission for a simple model of a winter storm. The issue of dendrite riming was addressed by computing scattering properties of thin ice disks with varying bulk density. It was shown that C-band multiparameter radar contains information about particle density and the number concentration of the ice particles. The radiative transfer modeling indicated that polarized multifrequency passive microwave emission may be used to infer some properties of ice hydrometeors. Detailed radar modeling and vector radiative transfer modeling is in progress to enhance the understanding of simultaneous radar and radiometer measurements, as in the case of the proposed TRMM field program. A one-dimensional cloud model will be used to simulate the storm structure in detail and study the microphysics, such as size and density. Multifrequency polarized radiometer measurements from the SSMI satellite instrument will be analyzed in relation to dual-frequency and dual-polarization radar measurements.
Qualitative methods in quantum theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Migdal, A.B.
The author feels that the solution of most problems in theoretical physics begins with the application of qualitative methods - dimensional estimates and estimates made from simple models, the investigation of limiting cases, the use of the analytic properties of physical quantities, etc. This book proceeds in this spirit, rather than in a formal, mathematical way with no traces of the sweat involved in the original work left to show. The chapters are entitled Dimensional and model approximations, Various types of perturbation theory, The quasi-classical approximation, Analytic properties of physical quantities, Methods in the many-body problem, and Qualitative methods in quantum field theory. Each chapter begins with a detailed introduction, in which the physical meaning of the results obtained in that chapter is explained in a simple way. 61 figures. (RWR)
The objective of this research is to test the utility of simple functions of spatially integrated and temporally averaged ground water residence times in shallow "groundwatersheds" with field observations and more detailed computer simulations. The residence time of water in the...
Application Note: Power Grid Modeling With Xyce.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sholander, Peter E.
This application note describes how to model steady-state power flows and transient events in electric power grids with the SPICE-compatible Xyce TM Parallel Electronic Simulator developed at Sandia National Labs. It provides a brief tutorial on the basic devices (branches, bus shunts, transformers and generators) found in power grids. The focus is on the features supported and assumptions made by the Xyce models for power grid elements. It then provides a detailed explanation, including working Xyce netlists, for simulating some simple power grid examples such as the IEEE 14-bus test case.
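The steady-state problem such tools solve can be illustrated with the standard DC power-flow approximation (our own sketch in Python; Xyce itself works from SPICE-style netlists, not this linear system):

```python
import numpy as np

def dc_power_flow(b_matrix, p_inj, slack=0):
    """Solve the DC power-flow approximation P = B * theta.

    b_matrix: susceptance-weighted Laplacian of the grid (n x n);
    p_inj: net real-power injections at each bus; the slack bus,
    whose angle is fixed at zero, absorbs the mismatch.
    """
    n = len(p_inj)
    keep = [i for i in range(n) if i != slack]
    b_red = b_matrix[np.ix_(keep, keep)]          # remove slack row/column
    theta = np.zeros(n)
    theta[keep] = np.linalg.solve(b_red, np.asarray(p_inj, float)[keep])
    return theta                                  # bus voltage angles (rad)
```

For a 3-bus chain with line susceptance 10 per unit on each branch, injecting +1 at bus 1 and -1 at bus 2 yields angles (0, 0, -0.1), i.e. one unit of flow on the line between buses 1 and 2.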
Interacting particle systems in time-dependent geometries
NASA Astrophysics Data System (ADS)
Ali, A.; Ball, R. C.; Grosskinsky, S.; Somfai, E.
2013-09-01
Many complex structures and stochastic patterns emerge from simple kinetic rules and local interactions, and are governed by scale invariance properties in combination with effects of the global geometry. We consider systems that can be described effectively by space-time trajectories of interacting particles, such as domain boundaries in two-dimensional growth or river networks. We study trajectories embedded in time-dependent geometries, and the main focus is on uniformly expanding or decreasing domains for which we obtain an exact mapping to simple fixed domain systems while preserving the local scale invariance properties. This approach was recently introduced in Ali et al (2013 Phys. Rev. E 87 020102(R)) and here we provide a detailed discussion on its applicability for self-affine Markovian models, and how it can be adapted to self-affine models with memory or explicit time dependence. The mapping corresponds to a nonlinear time transformation which converges to a finite value for a large class of trajectories, enabling an exact analysis of asymptotic properties in expanding domains. We further provide a detailed discussion of different particle interactions and generalized geometries. All our findings are based on exact computations and are illustrated numerically for various examples, including Lévy processes and fractional Brownian motion.
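The nonlinear time transformation can be illustrated with a sketch of our own (the paper treats general self-affine processes; this is the simplest Brownian case in a linearly expanding one-dimensional domain):

```latex
% Sketch: Brownian motion in a domain expanding as L(t) = L_0 (1 + a t).
% Rescaling x \to y = x / L(t) maps the process to a fixed domain evolving
% in the transformed time \tau(t):
\tau(t) = \int_0^t \frac{L_0^2}{L(s)^2}\, ds
        = \int_0^t \frac{ds}{(1 + a s)^2}
        = \frac{1}{a}\left(1 - \frac{1}{1 + a t}\right)
        \;\xrightarrow{\; t \to \infty \;}\; \frac{1}{a}.
```

Because tau converges, the entire infinite-time evolution in the expanding domain corresponds to a finite time window of the fixed-domain process, which is what makes the exact asymptotic analysis possible.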
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xue, Yuxin; Suto, Yasushi; Taruya, Atsushi
The angle between the stellar spin and the planetary orbit axes (the spin-orbit angle) is supposed to carry valuable information concerning the initial condition of planetary formation and subsequent migration history. Indeed, current observations of the Rossiter-McLaughlin effect have revealed a wide range of spin-orbit misalignments for transiting exoplanets. We examine in detail the tidal evolution of a simple system comprising a Sun-like star and a hot Jupiter adopting the equilibrium tide and the inertial wave dissipation effects simultaneously. We find that the combined tidal model works as a very efficient realignment mechanism; it predicts three distinct states of the spin-orbit angle (i.e., parallel, polar, and antiparallel orbits) for a while, but the latter two states eventually approach the parallel spin-orbit configuration. The intermediate spin-orbit angles as measured in recent observations are difficult to obtain. Therefore the current model cannot reproduce the observed broad distribution of the spin-orbit angles, at least in its simple form. This indicates that the observed diversity of the spin-orbit angles may emerge from more complicated interactions with outer planets and/or may be the consequence of the primordial misalignment between the protoplanetary disk and the stellar spin, which requires future detailed studies.
Kanematsu, Nobuyuki
2009-03-07
Dose calculation for radiotherapy with protons and heavier ions deals with a large volume of path integrals involving a scattering power of body tissue. This work provides a simple model for such demanding applications. There is an approximate linearity between RMS end-point displacement and range of incident particles in water, empirically found in measurements and detailed calculations. This fact was translated into a simple linear formula, from which the scattering power that is only inversely proportional to the residual range was derived. The simplicity enabled the analytical formulation for ions stopping in water, which was designed to be equivalent with the extended Highland model and agreed with measurements within 2% or 0.02 cm in RMS displacement. The simplicity will also improve the efficiency of numerical path integrals in the presence of heterogeneity.
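The claimed linearity can be checked with a back-of-the-envelope Fermi-Eyges calculation (our reconstruction, with k a medium-dependent constant rather than a value from the paper): inserting a scattering power inversely proportional to the residual range into the displacement-variance integral gives

```latex
% Scattering power T at depth s for a particle of range R (residual range R - s):
% T(s) = k / (R - s). Fermi-Eyges end-point displacement variance:
\sigma_y^2(R) = \int_0^R T(s)\,(R - s)^2\, ds
             = k \int_0^R (R - s)\, ds
             = \frac{k R^2}{2}
\quad\Longrightarrow\quad
\sigma_y(R) = \sqrt{k/2}\;\, R ,
```

i.e. the RMS end-point displacement grows linearly with range, consistent with the empirical linearity the simple model is built on.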
Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R
2016-01-25
Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances, however at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod with insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3kPa (SD 13.4), 12.52kPa (SD 11.9) and 9.6kPa (SD 9.3) for barefoot, shod, and insole conditions respectively. The simplified model design could be produced in <1h compared to >3h in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility, however further validity testing around a range of therapeutic footwear types is required. Copyright © 2015 Elsevier Ltd. All rights reserved.
A continuum membrane model for small deformations of a spider orb-web
NASA Astrophysics Data System (ADS)
Morassi, Antonino; Soler, Alejandro; Zaera, Ramón
2017-09-01
In this paper we propose a continuum membrane model for the infinitesimal deformation of a spider web. The model is derived in the simple context of axially-symmetric webs formed by radial threads connected with circumferential threads belonging to concentric circles. Under suitable assumption on the tensile pre-stress acting in the referential configuration, the out-of-plane static equilibrium and the free transverse and in-plane vibration of a supported circular orb-web are studied in detail. The accuracy of the model in describing a discrete spider web is numerically investigated.
Selected bibliography on the modeling and control of plant processes
NASA Technical Reports Server (NTRS)
Viswanathan, M. M.; Julich, P. M.
1972-01-01
A bibliography of information pertinent to the problem of simulating plants is presented. Detailed simulations of constituent pieces are necessary to justify simple models which may be used for analysis. Thus, this area of study is necessary to support the Earth Resources Program. The report sums up the present state of the problem of simulating vegetation. This area holds the hope of major benefits to mankind through understanding the ecology of a region and in improving agricultural yield.
1983-01-13
Naval .1 Ordnance Systems Command ) codes are detailed propagation simulations mostly at lower frequencies . These are combined with WEPH code phenomenology...AD B062349L. Scope /Abstract: This report describes a simple model for predicting the loads on box-like target structures subject to air blast. A... model and applying it to targets which can be approximated by a series of rectangular parallelopipeds. In this report the physical phenomena of high
Image denoising for real-time MRI.
Klosowski, Jakob; Frahm, Jens
2017-03-01
To develop an image noise filter suitable for MRI in real time (acquisition and display), which preserves small isolated details and efficiently removes background noise without introducing blur, smearing, or patch artifacts. The proposed method extends the nonlocal means algorithm to adapt the influence of the original pixel value according to a simple measure for patch regularity. Detail preservation is improved by a compactly supported weighting kernel that closely approximates the commonly used exponential weight, while an oracle step ensures efficient background noise removal. Denoising experiments were conducted on real-time images of healthy subjects reconstructed by regularized nonlinear inversion from radial acquisitions with pronounced undersampling. The filter leads to a signal-to-noise ratio (SNR) improvement of at least 60% without noticeable artifacts or loss of detail. The method visually compares to more complex state-of-the-art filters such as the block-matching three-dimensional filter and in certain cases better matches the underlying noise model. Acceleration of the computation to more than 100 complex frames per second using graphics processing units is straightforward. The sensitivity of nonlocal means to small details can be significantly increased by the simple strategies presented here, which allows partial restoration of SNR in iteratively reconstructed images without introducing a noticeable time delay or image artifacts. Magn Reson Med 77:1340-1352, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
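The core mechanism, nonlocal means with an adapted self-weight, can be sketched in one dimension. This toy version uses the classic "set the center weight to the maximum neighbour weight" rule as a stand-in for the paper's regularity-based adaptation, and omits the oracle step and compact kernel:

```python
import numpy as np

def nlm_1d(signal, patch=2, h=0.1):
    """Toy 1D nonlocal-means filter with an adapted center-pixel weight.

    Each sample is replaced by a weighted average of all samples, with
    weights decaying in the squared distance between local patches.
    Illustrative only; the published filter is considerably more refined.
    """
    x = np.pad(np.asarray(signal, float), patch, mode="edge")
    n = len(signal)
    out = np.empty(n)
    for i in range(n):
        pi = x[i:i + 2 * patch + 1]
        # mean squared patch distance to every other location
        d = np.array([np.mean((pi - x[j:j + 2 * patch + 1]) ** 2)
                      for j in range(n)])
        w = np.exp(-d / h ** 2)
        # adapt the self-weight: never let the noisy center dominate
        w[i] = 0.0
        w[i] = w.max() if w.max() > 0 else 1.0
        out[i] = np.sum(w * np.asarray(signal, float)) / np.sum(w)
    return out
```

On a noise-free constant signal the filter is exact (it returns the signal unchanged), which is a useful sanity check for any weighting scheme.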
NASA Technical Reports Server (NTRS)
Huning, J. R.; Logan, T. L.; Smith, J. H.
1982-01-01
The potential of using digital satellite data to establish a cloud cover data base for the United States, one that would provide detailed information on the temporal and spatial variability of cloud development, is studied. Key elements include: (1) interfacing GOES data from the University of Wisconsin Meteorological Data Facility with the Jet Propulsion Laboratory's VICAR image processing system and IBIS geographic information system; (2) creation of a registered multitemporal GOES data base; (3) development of a simple normalization model to compensate for sun angle; (4) creation of a variable-size georeference grid that provides detailed cloud information in selected areas and summarized information in other areas; and (5) development of a cloud/shadow model which details the percentage of each grid cell that is cloud and shadow covered, and the percentage of cloud or shadow opacity. In addition, model calculations of insolation were compared with measured values at selected test sites, and preliminary requirements were developed for a large-scale data base of cloud cover statistics.
Quantitative Imaging of Microwave Electric Fields through Near-Field Scanning Microwave Microscopy
NASA Astrophysics Data System (ADS)
Dutta, S. K.; Vlahacos, C. P.; Steinhauer, D. E.; Thanawalla, A.; Feenstra, B. J.; Wellstood, F. C.; Anlage, Steven M.; Newman, H. S.
1998-03-01
The ability to non-destructively image electric field patterns generated by operating microwave devices (e.g. filters, antennas, circulators, etc.) would greatly aid in the design and testing of these structures. Such detailed information can be used to reconcile discrepancies between simulated behavior and experimental data (such as scattering parameters). The near-field scanning microwave microscope we present uses a coaxial probe to provide a simple, broadband method of imaging electric fields (S. M. Anlage et al., IEEE Trans. Appl. Supercond. 7, 3686 (1997); see http://www.csr.umd.edu/research/hifreq/micr_microscopy.html). The signal that is measured is related to the incident electric flux normal to the face of the center conductor of the probe, allowing different components of the field to be measured by orienting the probe appropriately. By using a simple model of the system, we can also convert raw data to absolute electric field. Detailed images of standing waves on copper microstrip will be shown and compared to theory.
Reduced modeling of signal transduction – a modular approach
Koschorreck, Markus; Conzelmann, Holger; Ebert, Sybille; Ederer, Michael; Gilles, Ernst Dieter
2007-01-01
Background Combinatorial complexity is a challenging problem in detailed and mechanistic mathematical modeling of signal transduction. This subject has been discussed intensively and a lot of progress has been made within the last few years. A software tool (BioNetGen) was developed which allows an automatic rule-based set-up of mechanistic model equations. In many cases these models can be reduced by an exact domain-oriented lumping technique. However, the resulting models can still consist of a very large number of differential equations. Results We introduce a new reduction technique, which allows building modularized and highly reduced models. Compared to existing approaches further reduction of signal transduction networks is possible. The method also provides a new modularization criterion, which allows the model to be dissected into smaller modules, called layers, that can be modeled independently. Hallmarks of the approach are conservation relations within each layer and connection of layers by signal flows instead of mass flows. The reduced model can be formulated directly without previous generation of detailed model equations. It can be understood and interpreted intuitively, as model variables are macroscopic quantities that are converted by rates following simple kinetics. The proposed technique is applicable without using complex mathematical tools and even without detailed knowledge of the mathematical background. However, we provide a detailed mathematical analysis to show performance and limitations of the method. For physiologically relevant parameter domains the transient as well as the stationary errors caused by the reduction are negligible. Conclusion The new layer based reduced modeling method allows building modularized and strongly reduced models of signal transduction networks. Reduced model equations can be directly formulated and are intuitively interpretable. 
Additionally, the method provides very good approximations, especially for macroscopic variables. It can be combined with existing reduction methods without difficulty. PMID:17854494
Giovannini, Giannina; Sbarciog, Mihaela; Steyer, Jean-Philippe; Chamy, Rolando; Vande Wouwer, Alain
2018-05-01
Hydrogen has been found to be an important intermediate during anaerobic digestion (AD) and a key variable for process monitoring, as it gives valuable information about the stability of the reactor. However, simple dynamic models describing the evolution of hydrogen are not commonplace. In this work, such a dynamic model is derived using a systematic data-driven approach, which consists of a principal component analysis to deduce the dimension of the minimal reaction subspace explaining the data, followed by an identification of the kinetic parameters in the least-squares sense. The procedure requires the availability of informative data sets. When the available data do not fulfill this condition, the model can still be built from simulated data, obtained using a detailed model such as ADM1. This dynamic model could be exploited in monitoring and control applications after a re-identification of the parameters using actual process data. As an example, the model is used in the framework of a control strategy, and is also fitted to experimental data from raw industrial wine processing wastewater. Copyright © 2018 Elsevier Ltd. All rights reserved.
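As a rough illustration of the two-step procedure described above, the sketch below runs a PCA (via SVD) on synthetic concentration time-series to count the significant reaction directions, then identifies a kinetic rate constant by least squares. The first-order kinetics, the species, and all numerical values are invented for the example; they are not the paper's AD model.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic concentration time-series (species x time); in practice these
# would be measurements from the digester or ADM1 simulations.
t = np.linspace(0.0, 10.0, 50)
true_k = 0.8
substrate = 5.0 * np.exp(-true_k * t)   # assumed first-order decay
product = 5.0 - substrate               # mass balance
data = np.vstack([substrate, product])

# Step 1: PCA on mean-centered data to find the dimension of the
# minimal reaction subspace (number of significant singular values).
centered = data - data.mean(axis=1, keepdims=True)
s = np.linalg.svd(centered, compute_uv=False)
n_reactions = int(np.sum(s / s[0] > 1e-6))

# Step 2: least-squares identification of the kinetic parameter.
def residual(k):
    return 5.0 * np.exp(-k * t) - substrate

fit = least_squares(residual, x0=[0.1])
print(n_reactions, fit.x[0])
```

With noiseless data the SVD reveals a single reaction direction and the fit recovers the true rate constant; with real data, the singular-value threshold becomes the practical design choice.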
NASA Astrophysics Data System (ADS)
Holway, Kevin; Thaxton, Christopher S.; Calantoni, Joseph
2012-11-01
Morphodynamic models of coastal evolution require relatively simple parameterizations of sediment transport for application over larger scales. Calantoni and Thaxton (2008) [6] presented a transport parameterization for bimodal distributions of coarse quartz grains derived from detailed boundary layer simulations for sheet flow and near sheet flow conditions. The simulation results, valid over a range of wave forcing conditions and large- to small-grain diameter ratios, were successfully parameterized with a simple power law that allows for the prediction of the transport rates of each size fraction. Here, we have applied the simple power law to a two-dimensional cellular automaton to simulate sheet flow transport. Model results are validated with experiments performed in the small oscillating flow tunnel (S-OFT) at the Naval Research Laboratory at Stennis Space Center, MS, in which sheet flow transport was generated with a bed composed of a bimodal distribution of non-cohesive grains. The work presented suggests that, under the conditions specified, algorithms that incorporate the power law may correctly reproduce laboratory bed surface measurements of bimodal sheet flow transport while inherently incorporating vertical mixing by size.
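A minimal sketch of how a power-law parameterization can assign a transport rate to each size fraction of a bimodal bed. The coefficient and exponent here are placeholders, not the fitted values of Calantoni and Thaxton (2008):

```python
def transport_rate(shields, fraction, a=11.0, b=1.65):
    """Illustrative power-law transport rate for one size fraction.

    `a` and `b` are placeholder coefficient and exponent; the values
    fitted to the boundary-layer simulations are not reproduced here.
    `shields` is a wave-driven Shields number, `fraction` the bed
    fraction occupied by this grain size.
    """
    return a * fraction * shields ** b

# Bimodal bed: 60% coarse, 40% fine, under the same wave forcing.
shields = 0.9
rates = [transport_rate(shields, f) for f in (0.6, 0.4)]
total = sum(rates)
print(rates, total)
```

In the cellular automaton, rates of this form would set how many grains of each fraction a cell hands to its neighbor per update step.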
Weak lensing shear and aperture mass from linear to non-linear scales
NASA Astrophysics Data System (ADS)
Munshi, Dipak; Valageas, Patrick; Barber, Andrew J.
2004-05-01
We describe the predictions for the smoothed weak lensing shear, γs, and aperture mass, Map, of two simple analytical models of the density field: the minimal tree model and the stellar model. Both models give identical results for the statistics of the three-dimensional density contrast smoothed over spherical cells and only differ by the detailed angular dependence of the many-body density correlations. We have shown in previous work that they also yield almost identical results for the probability distribution function (PDF) of the smoothed convergence, κs. We find that the two models give rather close results for both the shear and the positive tail of the aperture mass. However, we note that at small angular scales (θs <~ 2 arcmin) the tail of the PDF for negative Map shows a strong variation between the two models, and the stellar model actually breaks down for θs <~ 0.4 arcmin and Map < 0. This shows that the statistics of the aperture mass provides a very precise probe of the detailed structure of the density field, as it is sensitive to both the amplitude and the detailed angular behaviour of the many-body correlations. On the other hand, the minimal tree model shows good agreement with numerical simulations over all the scales and redshifts of interest, while both models provide a good description of the PDF of the smoothed shear components. Therefore, the shear and the aperture mass provide robust and complementary tools to measure the cosmological parameters as well as the detailed statistical properties of the density field.
Martins, Raquel R; McCracken, Andrew W; Simons, Mirre J P; Henriques, Catarina M; Rera, Michael
2018-02-05
The Smurf Assay (SA) was initially developed in the model organism Drosophila melanogaster, where a dramatic increase of intestinal permeability has been shown to occur during aging (Rera et al., 2011). We have since validated the protocol in multiple other model organisms (Dambroise et al., 2016) and have utilized the assay to further our understanding of aging (Tricoire and Rera, 2015; Rera et al., 2018). The SA has now also been used by other labs to assess intestinal barrier permeability (Clark et al., 2015; Katzenberger et al., 2015; Barekat et al., 2016; Chakrabarti et al., 2016; Gelino et al., 2016). The SA in itself is simple; however, numerous small details can have a considerable impact on its experimental validity and subsequent interpretation. Here, we provide a detailed update on the SA technique and explain how to catch a Smurf while avoiding the most common experimental fallacies.
Convection driven zonal flows and vortices in the major planets.
Busse, F. H.
1994-06-01
The dynamical properties of convection in rotating cylindrical annuli and spherical shells are reviewed. Simple theoretical models and experimental simulations of planetary convection through the use of the centrifugal force in the laboratory are emphasized. The model of columnar convection in a cylindrical annulus not only serves as a guide to the dynamical properties of convection in rotating spheres; it is also of interest as a basic physical system that exhibits several dynamical properties in their most simple form. The generation of zonal mean flows is discussed in some detail and examples of recent numerical computations are presented. The exploration of the parameter space for the annulus model is not yet complete, and the theoretical exploration of convection in rotating spheres is still in its beginning phase. Quantitative comparisons with the observations of the dynamics of planetary atmospheres will have to await the inclusion in the models of the effects of magnetic fields and of deviations from the Boussinesq approximation.
Transition from Gaseous Compounds to Aerosols in Titan's Atmosphere
NASA Technical Reports Server (NTRS)
Lebonnois, Sebastien; Bakes, E. L. O.; McKay, Christopher P.; DeVincenzi, Donald (Technical Monitor)
2002-01-01
We investigate the chemical transition of simple molecules like C2H2 and HCN into aerosol particles in the context of Titan's atmosphere. Experiments that synthesize analogs (tholins) for these aerosols can help understand and constrain these polymerization mechanisms. Using information available from these experiments, we suggest chemical pathways that can link simple molecules to macromolecules that will be the precursors to aerosol particles: polymers of acetylene and cyanoacetylene, polycyclic aromatics (PAHs), polymers of HCN and other nitriles, and polyynes. Although our goal here is not to build a detailed kinetic model for this transition, we propose parameterizations to estimate the production rates of these macromolecules, their C/N and C/H ratios, and the loss of parent molecules (C2H2, HCN, HC3N and other nitriles, C6H6) from the gas phase to the haze. We use a one-dimensional photochemical model of Titan's atmosphere to estimate the formation rate of precursor macromolecules. We find a production zone slightly below 200 km altitude, with a total production rate of 4 x 10^-14 g cm^-2 s^-1 and C/N ≈ 4. These results are compared with experimental data and with the requirements of microphysical models. The Cassini/Huygens mission will bring a detailed picture of the haze distribution and properties, which will be a great challenge for our understanding of these chemical processes.
Criticality of Adaptive Control Dynamics
NASA Astrophysics Data System (ADS)
Patzelt, Felix; Pawelzik, Klaus
2011-12-01
We show that stabilization of a dynamical system can annihilate observable information about its structure. This mechanism induces critical points as attractors in locally adaptive control. It also reveals that previously reported criticality in simple controllers is caused by adaptation and not by other controller details. We apply these results to a real-system example: human balancing behavior. A model of predictive adaptive closed-loop control subject to some realistic constraints is introduced and shown to reproduce experimental observations in unprecedented detail. Our results suggest that observed error distributions in between the Lévy and Gaussian regimes may reflect a nearly optimal compromise between the elimination of random local trends and rare large errors.
Vibration Control of Deployable Astromast Boom: Preliminary Experiments
NASA Technical Reports Server (NTRS)
Swaminadham, M.; Hamilton, David A.
1994-01-01
This paper deals with the dynamic characterization of a flexible aerospace solar boom. The modeling issues and sine dwell vibration testing to determine natural frequencies and mode shapes of a continuous-longeron deployable ASTROMAST lattice boom are discussed. The details of the proof-of-concept piezoelectric active vibration experiments on a simple cantilever beam to control its vibrations are presented. Control parameters such as the voltage to the controller crystal and its location are investigated to determine the effectiveness of the control element in suppressing selected resonant vibrations of the test specimen. Details of this experiment and plans for its future adaptation to the prototype structure are also discussed.
Interpreting fMRI data: maps, modules and dimensions
Op de Beeck, Hans P.; Haushofer, Johannes; Kanwisher, Nancy G.
2009-01-01
Neuroimaging research over the past decade has revealed a detailed picture of the functional organization of the human brain. Here we focus on two fundamental questions that are raised by the detailed mapping of sensory and cognitive functions and illustrate these questions with findings from the object-vision pathway. First, are functionally specific regions that are located close together best understood as distinct cortical modules or as parts of a larger-scale cortical map? Second, what functional properties define each cortical map or module? We propose a model in which overlapping continuous maps of simple features give rise to discrete modules that are selective for complex stimuli. PMID:18200027
Load partitioning in Al{sub 2}O{sub 3}-Al composites with three-dimensional periodic architecture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, M. L.; Rao, R.; Almer, J. D.
2009-05-01
Interpenetrating composites are created by infiltration of liquid aluminum into three-dimensional (3-D) periodic Al{sub 2}O{sub 3} preforms with simple tetragonal symmetry produced by direct-write assembly. Volume-averaged lattice strains in the Al{sub 2}O{sub 3} phase of the composite are measured by synchrotron X-ray diffraction for various uniaxial compression stresses up to -350 MPa. Load transfer, found by diffraction to occur from the metal phase to the ceramic phase, is in general agreement with simple rule-of-mixture models and in better agreement with more complex, 3-D finite-element models that account for metal plasticity and details of the geometry of both phases. Spatially resolved diffraction measurements show variations in load transfer at two different positions within the composite.
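The "simple rule-of-mixture models" that the diffraction data are compared against can be sketched as an iso-strain estimate. The moduli below are textbook values for alumina and aluminum, and the volume fraction is illustrative rather than taken from the paper:

```python
# Iso-strain rule-of-mixtures estimate of load partitioning in an
# Al2O3-Al interpenetrating composite under uniaxial compression.
E_ceramic = 380e9   # Pa, alumina (textbook value)
E_metal = 70e9      # Pa, aluminum (textbook value)
v_ceramic = 0.4     # ceramic volume fraction (assumed)

applied_stress = -350e6  # Pa, maximum compression in the experiment

# Composite modulus and phase stresses under equal strain.
E_comp = v_ceramic * E_ceramic + (1 - v_ceramic) * E_metal
strain = applied_stress / E_comp
sigma_ceramic = E_ceramic * strain
sigma_metal = E_metal * strain

# The stiffer ceramic phase carries a disproportionate share of the load.
print(round(sigma_ceramic / applied_stress, 2))
```

The volume-weighted phase stresses recover the applied stress exactly, which is the consistency check the diffraction-derived lattice strains are tested against.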
Simulation studies of self-organization of microtubules and molecular motors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jian, Z.; Karpeev, D.; Aranson, I. S.
We perform Monte Carlo type simulation studies of self-organization of microtubules interacting with molecular motors. We model microtubules as stiff polar rods of equal length exhibiting anisotropic diffusion in the plane. The molecular motors are implicitly introduced by specifying certain probabilistic collision rules resulting in realignment of the rods. This approximation of the complicated microtubule-motor interaction by a simple instant collision allows us to bypass the 'computational bottlenecks' associated with the details of the diffusion and the dynamics of motors and the reorientation of microtubules. Consequently, we are able to perform simulations of large ensembles of microtubules and motors on a very large time scale. This simple model reproduces all important phenomenology observed in in vitro experiments: formation of vortices for low motor density, and raylike asters and bundles for higher motor density.
Using Simplistic Shape/Surface Models to Predict Brightness in Estimation Filters
NASA Astrophysics Data System (ADS)
Wetterer, C.; Sheppard, D.; Hunt, B.
The prerequisite for using brightness (radiometric flux intensity) measurements in an estimation filter is to have a measurement function that accurately predicts a space object's brightness for variations in the parameters of interest. These parameters include changes in attitude and articulations of particular components (e.g. solar panel east-west offsets to direct sun-tracking). Typically, shape models and bidirectional reflectance distribution functions are combined to provide this forward light curve modeling capability. To achieve precise orbit predictions with the inclusion of shape/surface-dependent forces such as radiation pressure, relatively complex and sophisticated modeling is required. Unfortunately, increasing the complexity of the models makes it difficult to estimate all those parameters simultaneously, because changes in light curve features can now be explained by variations in a number of different properties. The classic example of this is the connection between the albedo and the area of a surface. If, however, the desire is to extract information about a single and specific parameter or feature from the light curve, a simple shape/surface model can be used. This paper details an example of this where a complex model is used to create simulated light curves, and then a simple model is used in an estimation filter to extract a particular feature of interest. In order for this to be successful, however, the simple model must first be constructed using training data where the feature of interest is known, or at least known to be constant.
NASA Astrophysics Data System (ADS)
De Lucas, Javier
2015-03-01
A simple geometrical model for calculating the effective emissivity in blackbody cylindrical cavities has been developed. The back ray tracing technique and the Monte Carlo method have been employed, making use of a suitable set of coordinates and auxiliary planes. In these planes, the trajectories of individual photons in the successive reflections between the cavity points are followed in detail. The theoretical model is implemented using simple numerical tools, programmed in Microsoft Visual Basic for Applications and Excel. The algorithm is applied to isothermal and non-isothermal diffuse cylindrical cavities with a lid; however, the basic geometrical structure can be generalized to a cylindro-conical shape and specular reflection. Additionally, the numerical algorithm and the program source code can be used, with minor changes, for determining the distribution of the cavity points where photon absorption takes place. This distribution could be applied to the study of the influence of thermal gradients on the effective emissivity profiles, for example. Validation is performed by analyzing the convergence of the Monte Carlo method as a function of the number of trials and by comparison with published results of different authors.
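A heavily simplified version of the Monte Carlo idea: if the cavity geometry is collapsed into a single per-bounce escape probability (a crude stand-in for the paper's detailed back ray tracing), the effective emissivity follows from counting absorbed photons. All numbers are illustrative:

```python
import random

def effective_emissivity(surface_eps, escape_prob, n_rays=200_000, seed=1):
    """Monte Carlo estimate of cavity effective emissivity.

    Each incoming ray is either absorbed at a wall (probability
    surface_eps per bounce) or, if reflected, leaves through the
    aperture with probability escape_prob. By Kirchhoff's law the
    cavity absorptance equals its effective emissivity.
    """
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(n_rays):
        while True:
            if rng.random() < surface_eps:   # absorbed at the wall
                absorbed += 1
                break
            if rng.random() < escape_prob:   # reflected out of the aperture
                break
    return absorbed / n_rays

eps_eff = effective_emissivity(0.7, 0.1)
print(eps_eff)
```

The analytic value for this toy geometry is eps/(eps + (1-eps)*p), so the cavity raises an apparent emissivity of 0.7 to roughly 0.96; convergence with `n_rays` mirrors the validation strategy of the paper.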
The distribution of density in supersonic turbulence
NASA Astrophysics Data System (ADS)
Squire, Jonathan; Hopkins, Philip F.
2017-11-01
We propose a model for the statistics of the mass density in supersonic turbulence, which plays a crucial role in star formation and the physics of the interstellar medium (ISM). The model is derived by considering the density to be arranged as a collection of strong shocks of width ˜ M^{-2}, where M is the turbulent Mach number. With two physically motivated parameters, the model predicts all density statistics for M>1 turbulence: the density probability distribution and its intermittency (deviation from lognormality), the density variance-Mach number relation, power spectra and structure functions. For the proposed model parameters, reasonable agreement is seen between model predictions and numerical simulations, albeit within the large uncertainties associated with current simulation results. More generally, the model could provide a useful framework for more detailed analysis of future simulations and observational data. Due to the simple physical motivations for the model in terms of shocks, it is straightforward to generalize to more complex physical processes, which will be helpful in future more detailed applications to the ISM. We see good qualitative agreement between such extensions and recent simulations of non-isothermal turbulence.
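For context, the lognormal baseline from which the proposed model measures intermittent deviations can be written down directly. The sketch below uses the standard variance-Mach relation sigma_s^2 = ln(1 + b^2 M^2) with an assumed forcing parameter b, and checks normalization and mass conservation numerically:

```python
import numpy as np

def lognormal_density_pdf(s, mach, b=0.4):
    """Standard lognormal baseline for s = ln(rho/rho0) in isothermal
    supersonic turbulence; the paper's shock-based model adds the
    intermittent deviations that this sketch omits. b is an assumed
    forcing parameter."""
    var = np.log(1.0 + (b * mach) ** 2)
    mean = -0.5 * var   # enforces <rho> = rho0
    return np.exp(-(s - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

s = np.linspace(-8, 8, 4001)
ds = s[1] - s[0]
pdf = lognormal_density_pdf(s, mach=10.0)

norm = pdf.sum() * ds                       # should be ~1
mean_density = (np.exp(s) * pdf).sum() * ds  # should be ~1 (mass conservation)
print(norm, mean_density)
```

The mean of s is pinned at -sigma_s^2/2 precisely so that the volume-weighted mean density stays at rho0, which is the constraint any intermittency correction must also respect.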
An executable specification for the message processor in a simple combining network
NASA Technical Reports Server (NTRS)
Middleton, David
1995-01-01
While the primary function of the network in a parallel computer is to communicate data between processors, it is often useful if the network can also perform rudimentary calculations. That is, some simple processing ability in the network itself, particularly for performing parallel prefix computations, can reduce both the volume of data being communicated and the computational load on the processors proper. Unfortunately, typical implementations of such networks require a large fraction of the hardware budget, and so combining networks are viewed as being impractical. The FFP Machine has such a combining network, and various characteristics of the machine allow a good deal of simplification in the network design. Despite being simple in construction, however, the network relies on many subtle details to work correctly. This paper describes an executable model of the network which will serve several purposes. It provides a complete and detailed description of the network which can substantiate its ability to support necessary functions. It provides an environment in which algorithms to be run on the network can be designed and debugged more easily than they could be on physical hardware. Finally, it provides the foundation for exploring the design of the message receiving facility which connects the network to the individual processors.
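The parallel prefix computation mentioned above is the classic example of in-network combining. A serialized sketch of the up-sweep/down-sweep pattern follows (not the FFP Machine's actual message protocol, and assuming a power-of-two number of processors):

```python
def prefix_scan(values):
    """Tree-structured parallel prefix (Blelloch-style up/down sweep),
    the kind of rudimentary in-network calculation a combining network
    performs; serialized in plain Python for clarity.
    Assumes len(values) is a power of two."""
    n = len(values)
    tree = list(values)
    # Up-sweep: internal "switches" combine the two messages arriving
    # from below, as a combining network does on the way to the root.
    step = 1
    while step < n:
        for i in range(2 * step - 1, n, 2 * step):
            tree[i] += tree[i - step]
        step *= 2
    # Down-sweep: distribute partial sums back toward the processors.
    tree[n - 1] = 0
    step = n // 2
    while step >= 1:
        for i in range(2 * step - 1, n, 2 * step):
            left = tree[i - step]
            tree[i - step] = tree[i]
            tree[i] += left
        step //= 2
    return tree  # exclusive prefix sums

print(prefix_scan([3, 1, 7, 0, 4, 1, 6, 3]))
```

Each up-sweep level corresponds to one layer of combining switches, so the whole scan completes in O(log n) network traversals rather than n processor-to-processor hops.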
An assessment on convective and radiative heat transfer modelling in tubular solid oxide fuel cells
NASA Astrophysics Data System (ADS)
Sánchez, D.; Muñoz, A.; Sánchez, T.
Four models of convective and radiative heat transfer inside tubular solid oxide fuel cells are presented in this paper, all of them applicable to multidimensional simulations. The work is aimed at assessing whether it is necessary to use a very detailed and complicated model to simulate heat transfer inside this kind of device and, for those cases when simple models can be used, the errors are estimated and compared to those of the more complex models. For the convective heat transfer, two models are presented. One of them accounts for the variation of the film coefficient as a function of local temperature and composition. This model gives a local value for the heat transfer coefficients and establishes the thermal entry length. The second model employs an average value of the transfer coefficient, which is applied to the whole length of the duct being studied. It is concluded that, unless there is a need to calculate local temperatures, a simple model can be used to evaluate the global performance of the cell with satisfactory accuracy. For the radiation heat transfer, two models are again presented. One of them considers radial radiation exclusively and thus neglects radiative exchange between adjacent cells. The second model accounts for radiation in all directions but increases substantially the complexity of the problem. For this case, it is concluded that deviations between the two models are higher than for convection. In fact, using a simple model can lead to a non-negligible underestimation of the temperature of the cell.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Läsker, Ronald; Van de Ven, Glenn; Ferrarese, Laura, E-mail: laesker@mpia.de
2014-01-01
In an effort to secure, refine, and supplement the relation between central supermassive black hole masses, M {sub •}, and the bulge luminosities of their host galaxies, L {sub bul}, we obtained deep, high spatial resolution K-band images of 35 nearby galaxies with securely measured M {sub •}, using the wide-field WIRCam imager at the Canada-France-Hawaii-Telescope. A dedicated data reduction and sky subtraction strategy was adopted to estimate the brightness and structure of the sky, a critical step when tracing the light distribution of extended objects in the near-infrared. From the final image product, bulge and total magnitudes were extracted via two-dimensional profile fitting. As a first order approximation, all galaxies were modeled using a simple Sérsic-bulge+exponential-disk decomposition. However, we found that such models did not adequately describe the structure that we observed in a large fraction of our sample galaxies which often include cores, bars, nuclei, inner disks, spiral arms, rings, and envelopes. In such cases, we adopted profile modifications and/or more complex models with additional components. The derived bulge magnitudes are very sensitive to the details and number of components used in the models, although total magnitudes remain almost unaffected. Usually, but not always, the luminosities and sizes of the bulges are overestimated when a simple bulge+disk decomposition is adopted in lieu of a more complex model. Furthermore, we found that some spheroids are not well fit when the ellipticity of the Sérsic model is held fixed. This paper presents the details of the image processing and analysis, while we discuss how model-induced biases and systematics in bulge magnitudes impact the M {sub •}-L {sub bul} relation in a companion paper.
Uncertainty in temperature-based determination of time of death
NASA Astrophysics Data System (ADS)
Weiser, Martin; Erdmann, Bodo; Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Mall, Gita; Zachow, Stefan
2018-03-01
Temperature-based estimation of time of death (ToD) can be performed either with the help of simple phenomenological models of corpse cooling or with detailed mechanistic (thermodynamic) heat transfer models. The latter are much more complex, but allow a higher accuracy of ToD estimation, as in principle all relevant cooling mechanisms can be taken into account. The potentially higher accuracy depends on the accuracy of tissue and environmental parameters as well as on the geometric resolution. We investigate the impact of parameter variations and geometry representation on the estimated ToD. For this, numerical simulation of analytic heat transport models is performed on a highly detailed 3D corpse model, which has been segmented and geometrically reconstructed from a computed tomography (CT) data set, differentiating various organs and tissue types. From that and prior information available on thermal parameters and their variability, we identify the most crucial parameters to measure or estimate, and obtain an a priori uncertainty quantification for the ToD.
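A sketch of the "simple phenomenological" end of the modeling spectrum: a Marshall-Hoare-type double-exponential cooling curve inverted by bisection to recover the time since death. The rate constants and temperatures are illustrative, not forensically calibrated values:

```python
import math

def cooling_curve(t, T0=37.2, T_env=18.0, k=0.08, p=0.4):
    """Double-exponential (Marshall-Hoare type) corpse cooling curve.
    T0: initial core temperature (C), T_env: ambient temperature (C),
    k, p: cooling/plateau rate constants (1/h). All values illustrative."""
    frac = (p * math.exp(-k * t) - k * math.exp(-p * t)) / (p - k)
    return T_env + (T0 - T_env) * frac

def time_since_death(T_measured, lo=0.0, hi=48.0):
    # The curve is monotonically decreasing, so bisection suffices.
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if cooling_curve(mid) > T_measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_est = time_since_death(30.0)   # measured core temperature of 30 C
print(t_est)
```

The uncertainty question the paper addresses is visible even here: perturbing k or T_env shifts t_est by hours, which is why parameter sensitivity dominates the error budget of either model class.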
NASA Astrophysics Data System (ADS)
Pradeep, Krishna; Poiroux, Thierry; Scheer, Patrick; Juge, André; Gouget, Gilles; Ghibaudo, Gérard
2018-07-01
This work details the analysis of wafer level global process variability in 28 nm FD-SOI using split C-V measurements. The proposed approach initially evaluates the native on wafer process variability using efficient extraction methods on split C-V measurements. The on-wafer threshold voltage (VT) variability is first studied and modeled using a simple analytical model. Then, a statistical model based on the Leti-UTSOI compact model is proposed to describe the total C-V variability in different bias conditions. This statistical model is finally used to study the contribution of each process parameter to the total C-V variability.
Bieri, Michael; d'Auvergne, Edward J; Gooley, Paul R
2011-06-01
Investigation of protein dynamics on the ps-ns and μs-ms timeframes provides detailed insight into the mechanisms of enzymes and the binding properties of proteins. Nuclear magnetic resonance (NMR) is an excellent tool for studying protein dynamics at atomic resolution. Analysis of relaxation data using model-free analysis can be a tedious and time-consuming process, which requires good knowledge of scripting procedures. The software relaxGUI was developed for fast and simple model-free analysis and is fully integrated into the software package relax. It is written in Python and uses wxPython to build the graphical user interface (GUI) for maximum performance and multi-platform use. This software allows the analysis of NMR relaxation data with ease and the generation of publication-quality graphs as well as color-coded images of molecular structures. The interface is designed for simple data analysis and management. The software was tested and validated against the command-line version of relax.
Ground temperature measurement by PRT-5 for maps experiment
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Tiwari, S. N.
1978-01-01
A simple algorithm and computer program were developed for determining the actual surface temperature from the effective brightness temperature as measured remotely by a radiation thermometer called PRT-5. This procedure allows the computation of atmospheric correction to the effective brightness temperature without performing detailed radiative transfer calculations. Model radiative transfer calculations were performed to compute atmospheric corrections for several values of the surface and atmospheric parameters individually and in combination. Polynomial regressions were performed between the magnitudes or deviations of these parameters and the corresponding computed corrections to establish simple analytical relations between them. Analytical relations were also developed to represent combined correction for simultaneous variation of parameters in terms of their individual corrections.
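The regression step can be illustrated as follows: fit a low-order polynomial between an atmospheric parameter and the correction a radiative-transfer model would predict, then apply it to a measured brightness temperature. The data, parameter choice, and coefficients below are synthetic, not from the paper:

```python
import numpy as np

# Hypothetical atmospheric parameter deviations (e.g., column water
# vapor in g/cm^2) and the corresponding brightness-temperature
# corrections (K) that model radiative transfer runs would produce.
w = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
dT = np.array([0.6, 1.3, 2.1, 3.0, 4.0, 5.1])   # synthetic values

# Fit a quadratic, as the paper relates parameter deviations to
# computed corrections via polynomial regression.
coeffs = np.polyfit(w, dT, deg=2)

def surface_temperature(T_brightness, water_vapor):
    """Actual surface temperature = measured brightness temperature
    plus the regression-estimated atmospheric correction."""
    return T_brightness + np.polyval(coeffs, water_vapor)

print(surface_temperature(295.0, 1.75))
```

Once the polynomial is in hand, the correction costs one evaluation instead of a full radiative transfer calculation, which is exactly the point of the procedure.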
Watcharapong Tachajapong; Jesse Lozano; Shankar Mahalingam; Xiangyang Zhou; David R. Weise
2008-01-01
Crown fire initiation is studied using simple experiments and detailed physical modeling based on large eddy simulation (LES). Experiments conducted thus far reveal that crown fuel ignition via surface fire occurs when the crown base is within the continuous flame region and does not occur when the crown base is located in the hot plume gas region of the surface...
Communication: Symmetrical quasi-classical analysis of linear optical spectroscopy
NASA Astrophysics Data System (ADS)
Provazza, Justin; Coker, David F.
2018-05-01
The symmetrical quasi-classical approach for propagation of a many degree of freedom density matrix is explored in the context of computing linear spectra. Calculations on a simple two state model for which exact results are available suggest that the approach gives a qualitative description of peak positions, relative amplitudes, and line broadening. Short time details in the computed dipole autocorrelation function result in exaggerated tails in the spectrum.
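The basic relation being exercised, that a linear spectrum is the Fourier transform of the dipole autocorrelation function, can be sketched for a single damped oscillation (frequency and dephasing rate chosen arbitrarily):

```python
import numpy as np

# Linear absorption spectrum as the Fourier transform of the dipole
# autocorrelation function C(t) = <mu(t) mu(0)>, here a single damped
# complex exponential (illustrative frequency and dephasing rate).
dt = 0.01
t = np.arange(0, 50, dt)
omega0, gamma = 2.0, 0.2
corr = np.exp(1j * omega0 * t) * np.exp(-gamma * t)

# Discrete Fourier transform, shifted so frequency runs monotonically.
spectrum = np.fft.fftshift(np.fft.fft(corr)).real * dt
omega = np.fft.fftshift(np.fft.fftfreq(len(t), d=dt)) * 2 * np.pi

peak = omega[np.argmax(spectrum)]
print(peak)   # Lorentzian line centered near omega0
```

Artifacts in the short-time part of C(t) leak into the line wings after the transform, which is the mechanism behind the "exaggerated tails" noted in the abstract.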
P(P bar)P elastic scattering and cosmic ray data
NASA Technical Reports Server (NTRS)
FAZAL-E-ALEEM; Saleem, M.
1985-01-01
It is shown that the total cross section for pp elastic scattering at cosmic ray energies, as well as the total cross section, the slope parameter b(s,t) and the differential cross section for small momentum transfer at ISR and collider energies for p(p̄)p elastic scattering, can be simultaneously fitted by using a simple Regge pole model. The results of this theory are discussed in detail.
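A simple Regge pole parameterization of the kind referred to here writes the total cross section as a sum of pomeron and reggeon powers of s. The sketch below uses Donnachie-Landshoff-style illustrative intercepts and couplings, not the fit of this paper:

```python
def sigma_total(s, is_pp):
    """Regge-pole total cross section in mb, s in GeV^2.

    Pomeron term: intercept slightly above 1 gives slow growth with s.
    Reggeon term: falls as a power of s, with a larger coupling for
    pbar-p than for pp (C-odd exchange). All numbers illustrative,
    Donnachie-Landshoff-style, not the fit of this paper.
    """
    pomeron = 21.7 * s ** 0.0808
    reggeon = (56.1 if is_pp else 98.4) * s ** -0.4525
    return pomeron + reggeon

s_isr = 23.0 ** 2      # an ISR-scale energy, sqrt(s) = 23 GeV
pp = sigma_total(s_isr, True)
ppbar = sigma_total(s_isr, False)
print(pp, ppbar)
```

The pp and pbar-p curves differ only through the reggeon coupling, so they converge as s grows and the pomeron dominates, matching the qualitative picture at collider energies.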
NASA Technical Reports Server (NTRS)
Kraft, R. E.; Yu, J.; Kwan, H. W.
1999-01-01
The primary purpose of this study is to develop improved models for the acoustic impedance of treatment panels at high frequencies, for application to subscale treatment designs. Effects that cause significant deviation of the impedance from simple geometric scaling are examined in detail, an improved high-frequency impedance model is developed, and the improved model is correlated with high-frequency impedance measurements. Only single-degree-of-freedom honeycomb sandwich resonator panels with either perforated sheet or "linear" wiremesh faceplates are considered. The objective is to understand those effects that cause the simple single-degree-of-freedom resonator panels to deviate at the higher scaled frequency from the impedance that would be obtained at the corresponding full-scale frequency. This will allow the subscale panel to be designed to achieve a specified impedance spectrum over at least a limited range of frequencies. An advanced impedance prediction model has been developed that accounts for some of the known effects at high frequency that have previously been ignored as a small source of error for full-scale frequency ranges.
Multibody dynamic analysis using a rotation-free shell element with corotational frame
NASA Astrophysics Data System (ADS)
Shi, Jiabei; Liu, Zhuyong; Hong, Jiazhen
2018-03-01
Rotation-free shell formulation is a simple and effective method to model a shell with large deformation. Moreover, it is compatible with the existing theories of the finite element method. However, rotation-free shells are seldom employed in multibody systems. Using a derivative of rigid body motion, an efficient nonlinear shell model is proposed based on the rotation-free shell element and a corotational frame. The bending and membrane strains of the shell have been simplified by isolating deformational displacements from the detailed description of rigid body motion. The consistent stiffness matrix can be obtained easily in this form of shell model. To model a multibody system consisting of the presented shells, joint kinematic constraints, including translational and rotational constraints, are deduced in the context of the geometric nonlinear rotation-free element. A simple node-to-surface contact discretization and penalty method are adopted for contacts between shells. A series of analyses of multibody system dynamics is presented to validate the proposed formulation. Furthermore, the deployment of a large-scale solar array is presented to verify the comprehensive performance of the nonlinear shell model.
Analysis of intrapulse chirp in CO2 oscillators
NASA Technical Reports Server (NTRS)
Moody, Stephen E.; Berger, Russell G.; Thayer, William J., III
1987-01-01
Pulsed single-frequency CO2 laser oscillators are often used as transmitters for coherent lidar applications. These oscillators suffer from intrapulse chirp, or dynamic frequency shifting. If excessive, such chirp can limit the signal-to-noise ratio of the lidar (by generating excess bandwidth), or limit the velocity resolution if the lidar is of the Doppler type. This paper describes a detailed numerical model that considers all known sources of intrapulse chirp. Some typical predictions of the model are shown, and simple design rules to minimize chirp are proposed.
Simple Fall Criteria for MEMS Sensors: Data Analysis and Sensor Concept
Ibrahim, Alwathiqbellah; Younis, Mohammad I.
2014-01-01
This paper presents a new and simple fall detection concept based on detailed experimental data of human falls and the activities of daily living (ADLs). Establishing appropriate fall algorithms compatible with MEMS sensors requires detailed data on falls and ADLs that clearly indicate the variations of the kinematics at possible sensor node locations on the human body, such as the hip, head, and chest. Currently, there is a lack of data on the exact direction and magnitude of each acceleration component associated with these node locations. Such data are crucial for MEMS structures, which have inertia elements very close to the substrate and are capacitively biased, and hence are very sensitive to whether the motion is toward or away from the substrate. This work presents detailed data on the acceleration components at various locations on the human body during various kinds of falls and ADLs. A two-degree-of-freedom model is used to help interpret the experimental data. An algorithm for fall detection based on MEMS switches is then established, and a new sensing concept based on the algorithm is proposed. The concept employs several inertia sensors as electrical switches connected in series, which are triggered simultaneously upon receiving a true fall signal. In the case of everyday activities, some or no switches will be triggered, resulting in an open-circuit configuration and thereby preventing false positives. A lumped-parameter model of the device is presented, and preliminary simulation results illustrate the new device concept. PMID:25006997
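The series-switch idea above can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: the node names and threshold values are invented, and each switch is reduced to a simple magnitude comparison.

```python
# Hypothetical sketch of the series-switch fall-detection logic: each
# inertial switch closes when the acceleration magnitude at its node exceeds
# its threshold; a fall is flagged only if ALL switches close (series
# circuit), so partial triggering during daily activities leaves it open.

def switch_closed(accel_g, threshold_g):
    """A single MEMS switch closes when |acceleration| exceeds its threshold."""
    return abs(accel_g) >= threshold_g

def fall_detected(node_accels_g, thresholds_g):
    """Series circuit: current flows (fall flagged) only if every switch closes."""
    return all(switch_closed(a, t) for a, t in zip(node_accels_g, thresholds_g))

# Illustrative (invented) thresholds for hip, chest, and head nodes, in g.
thresholds = (3.0, 2.5, 2.0)

fall_event = (4.1, 3.2, 2.8)    # large spikes at every node -> closed circuit
sitting_down = (3.5, 1.1, 0.9)  # only the hip switch closes -> open circuit

print(fall_detected(fall_event, thresholds))    # True
print(fall_detected(sitting_down, thresholds))  # False
```

The AND-of-switches structure is what suppresses false positives: an ADL that spikes one node but not the others never closes the whole circuit.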
Wardlow, Nathan; Polin, Chris; Villagomez-Bernabe, Balder; Currell, Fred
2015-11-01
We present a simple model for a component of the radiolytic production of any chemical species due to electron emission from irradiated nanoparticles (NPs) in a liquid environment, provided the expression for the G value for product formation is known and is reasonably well characterized by a linear dependence on beam energy. This model takes nanoparticle size, composition, density and a number of other readily available parameters (such as X-ray and electron attenuation data) as inputs and therefore allows for the ready determination of this contribution. Several approximations are used, thus this model provides an upper limit to the yield of chemical species due to electron emission, rather than a distinct value, and this upper limit is compared with experimental results. After the general model is developed we provide details of its application to the generation of HO• through irradiation of gold nanoparticles (AuNPs), a potentially important process in nanoparticle-based enhancement of radiotherapy. This model has been constructed with the intention of making it accessible to other researchers who wish to estimate chemical yields through this process, and is shown to be applicable to NPs of single elements and mixtures. The model can be applied without the need to develop additional skills (such as using a Monte Carlo toolkit), providing a fast and straightforward method of estimating chemical yields. A simple framework for determining the HO• yield for different NP sizes at constant NP concentration and initial photon energy is also presented.
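The yield arithmetic behind such a model can be illustrated with a toy calculation. Everything numeric here is invented for illustration: a G value linear in electron energy, G(E) = g0 + g1·E (molecules per 100 eV), and the assumption that each emitted electron deposits its full energy in the liquid, which is what makes the result an upper limit.

```python
# Rough sketch of an upper-limit radiolytic yield, assuming a G value linear
# in electron energy and full local energy deposition. Coefficients and the
# electron energy list are invented placeholders, not values from the paper.

def species_yield_upper_limit(electron_energies_eV, g0, g1):
    """Sum G(E)*E/100 over emitted electrons: molecules produced (upper limit)."""
    total = 0.0
    for E in electron_energies_eV:
        g = g0 + g1 * E          # G value at this energy, molecules per 100 eV
        total += g * E / 100.0   # molecules from one electron depositing E
    return total

electrons = [500.0, 1200.0, 3000.0]   # example emitted-electron energies (eV)
n_molecules = species_yield_upper_limit(electrons, g0=2.5, g1=1e-4)
print(round(n_molecules, 2))
```

In the actual model the electron spectrum would come from the X-ray and electron attenuation data mentioned above rather than a hand-written list.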
Cacao, Eliedonna; Hada, Megumi; Saganti, Premkumar B; George, Kerry A; Cucinotta, Francis A
2016-01-01
The biological effects of high charge and energy (HZE) particle exposures are of interest in space radiation protection of astronauts and cosmonauts, and in estimating secondary cancer risks for patients undergoing hadron therapy for primary cancers. The large number of particle types and energies that make up primary or secondary radiation in HZE particle exposures precludes tumor induction studies in animal models for all but a few particle types and energies, thus leading to the use of surrogate endpoints to investigate the details of the radiation quality dependence of relative biological effectiveness (RBE) factors. In this report we make detailed predictions of the charge number and energy dependence of RBEs using a parametric track structure model to represent experimental results for the low dose response for chromosomal exchanges in normal human lymphocyte and fibroblast cells, with comparison to published data for neoplastic transformation and gene mutation. RBEs are evaluated against acute doses of γ-rays near 1 Gy. Models that assume linear or non-targeted effects at low dose are considered. Modest values of RBE (<10) are found for simple exchanges using a linear dose response model; however, in the non-targeted effects model for fibroblast cells, large RBE values (>10) are predicted at low doses (<0.1 Gy). The radiation quality dependence of RBEs against the effects of acute doses of γ-rays found for neoplastic transformation and gene mutation studies is similar to that found for simple exchanges if a linear response is assumed at low HZE particle doses. Comparisons of the resulting model parameters to those used in the NASA radiation quality factor function are discussed.
Macroscopic Fluctuation Theory for Stationary Non-Equilibrium States
NASA Astrophysics Data System (ADS)
Bertini, L.; de Sole, A.; Gabrielli, D.; Jona-Lasinio, G.; Landim, C.
2002-05-01
We formulate a dynamical fluctuation theory for stationary non-equilibrium states (SNS) which is tested explicitly in stochastic models of interacting particles. In our theory a crucial role is played by the time reversed dynamics. Within this theory we derive the following results: the modification of the Onsager-Machlup theory in the SNS; a general Hamilton-Jacobi equation for the macroscopic entropy; a non-equilibrium, nonlinear fluctuation dissipation relation valid for a wide class of systems; an H theorem for the entropy. We discuss in detail two models of stochastic boundary driven lattice gases: the zero range and the simple exclusion processes. In the first model the invariant measure is explicitly known and we verify the predictions of the general theory. For the one dimensional simple exclusion process, as recently shown by Derrida, Lebowitz, and Speer, it is possible to express the macroscopic entropy in terms of the solution of a nonlinear ordinary differential equation; by using the Hamilton-Jacobi equation, we obtain a logically independent derivation of this result.
Saponification reaction system: a detailed mass transfer coefficient determination.
Pečar, Darja; Goršek, Andreja
2015-01-01
The saponification of an aromatic ester with aqueous sodium hydroxide was studied within a heterogeneous reaction medium in order to determine the overall kinetics of the selected system. An extended thermo-kinetic model was developed to replace the simple one used previously. The reaction rate within a heterogeneous liquid-liquid system incorporates a chemical kinetics term as well as mass transfer between the phases. The chemical rate constant was obtained from experiments within a homogeneous medium, whilst the mass-transfer coefficient was determined separately. The measured thermal profiles were then the basis for determining the overall reaction rate. This study presents the development of an extended kinetic model that accounts for mass transfer during the saponification of ethyl benzoate with sodium hydroxide within a heterogeneous reaction medium. The time-dependences are presented for the mass transfer coefficient and the interfacial areas at different heterogeneous stages and temperatures. The results indicated the important role of a reliable kinetic model, as a significant difference in the k(L)a product was obtained between the extended and simple approaches.
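The coupling of chemical kinetics and mass transfer can be illustrated with the textbook series-resistance simplification, which is not the paper's extended thermo-kinetic model but captures why the k(L)a product matters. All rate values below are invented.

```python
# Hedged sketch: for a reaction limited jointly by intrinsic kinetics and
# liquid-liquid mass transfer, a common first-order simplification treats the
# two steps as resistances in series, 1/k_ov = 1/k_chem + 1/kLa.
# Rate constants here are illustrative, not fitted values from the study.

def overall_rate_constant(k_chem, kLa):
    """Resistances in series: the slower step dominates the overall rate."""
    return 1.0 / (1.0 / k_chem + 1.0 / kLa)

k_chem = 0.8   # homogeneous chemical rate constant (1/s), illustrative
kLa = 0.05     # mass-transfer coefficient x interfacial area (1/s), illustrative

k_ov = overall_rate_constant(k_chem, kLa)
# Mass transfer dominates here: k_ov is close to kLa, far below k_chem.
print(round(k_ov, 4))
```

This is why an error in k(L)a propagates almost one-for-one into the predicted overall rate whenever mass transfer is the slow step.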
Decoding spike timing: the differential reverse correlation method
Tkačik, Gašper; Magnasco, Marcelo O.
2009-01-01
It is widely acknowledged that the detailed timing of action potentials is used to encode information, for example in auditory pathways; however, the computational tools required to analyze encoding through timing are still in their infancy. We present a simple example of encoding, based on a recent model of time-frequency analysis, in which units fire action potentials when a certain condition is met, but the timing of the action potential also depends on other features of the stimulus. We show that, as a result, spike-triggered averages are smoothed so much that they do not represent the true features of the encoding. Inspired by this example, we present a simple method, differential reverse correlation, that can separate the analysis of what causes a neuron to spike from what controls its timing. We analyze the leaky integrate-and-fire neuron with this method and show that it accurately reconstructs the model's kernel. PMID:18597928
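The baseline the paper improves on, the spike-triggered average (STA), is simple to compute. Here is a minimal sketch on a synthetic threshold-crossing encoder (stimulus, encoder, and window length are all invented for illustration):

```python
import numpy as np

# Minimal spike-triggered average: average the stimulus window preceding each
# spike. With a toy threshold encoder the STA peaks at the crossing itself;
# the paper's point is that timing jitter smears this estimate.

rng = np.random.default_rng(0)
stimulus = rng.standard_normal(10_000)

# Toy encoder: a "spike" whenever the stimulus exceeds a threshold.
spike_times = np.where(stimulus > 2.0)[0]
spike_times = spike_times[spike_times >= 50]   # keep a full 50-sample history

window = 50
sta = np.mean([stimulus[t - window:t + 1] for t in spike_times], axis=0)

# The STA peaks at lag 0 (index `window`), where the threshold is crossed.
print(int(np.argmax(sta)) == window)
```

Differential reverse correlation, as described above, would go further by separately regressing what triggers the spike and what shifts its time; the STA alone conflates the two.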
Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity.
Li, Harbin; McNulty, Steven G
2007-10-01
Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL estimates to the national scale could be developed. Specifically, we wanted to quantify CAL uncertainty under natural variability in 17 model parameters, and determine their relative contributions in predicting CAL. Results indicated that uncertainty in CAL came primarily from components of base cation weathering (BC(w); 49%) and acid neutralizing capacity (46%), whereas the most critical parameters were BC(w) base rate (62%), soil depth (20%), and soil temperature (11%). Thus, improvements in estimates of these factors are crucial to reducing uncertainty and successfully scaling up SMBE for national assessments of CAL.
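The style of uncertainty analysis described above can be sketched with a Monte Carlo pass over a simple mass balance of the form CAL = BCdep + BCw - BCu - ANCle. The distributions and parameter values below are invented placeholders, not the paper's 17-parameter setup.

```python
import random

# Hedged sketch: propagate natural variability in a few SMBE terms through
# the critical acid load mass balance by Monte Carlo sampling. All means and
# spreads are illustrative, not values from the study.

random.seed(42)

def sample_cal():
    bc_dep = random.gauss(500, 50)    # base cation deposition (eq/ha/yr)
    bc_w = random.gauss(800, 200)     # base cation weathering (dominant spread)
    bc_u = random.gauss(300, 60)      # base cation uptake
    anc_le = random.gauss(400, 150)   # critical ANC leaching
    return bc_dep + bc_w - bc_u - anc_le

samples = [sample_cal() for _ in range(20_000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / (len(samples) - 1)

print(round(mean))        # near the deterministic value 600
print(var ** 0.5 > 200)   # spread reflects the combined parameter variability
```

Ranking each parameter's share of the output variance (as the paper does for BCw rate, soil depth, and temperature) would follow by varying one input at a time or by computing sensitivity indices on such samples.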
The gravitational self-interaction of the Earth's tidal bulge
NASA Astrophysics Data System (ADS)
Norsen, Travis; Dreese, Mackenzie; West, Christopher
2017-09-01
According to a standard, idealized analysis, the Moon would produce a 54 cm equilibrium tidal bulge in the Earth's oceans. This analysis omits many factors (beyond the scope of the simple idealized model) that dramatically influence the actual height and timing of the tides at different locations, but it is nevertheless an important foundation for more detailed studies. Here, we show that the standard analysis also omits another factor—the gravitational interaction of the tidal bulge with itself—which is entirely compatible with the simple, idealized equilibrium model and which produces a surprisingly non-trivial correction to the predicted size of the tidal bulge. Our analysis uses ideas and techniques that are familiar from electrostatics, and should thus be of interest to teachers and students of undergraduate E&M, Classical Mechanics (and/or other courses that cover the tides), and geophysics courses that cover the closely related topic of Earth's equatorial bulge.
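For context (this is standard background, not the paper's derivation), the 54 cm figure follows from the equilibrium tide formula, and a commonly quoted self-attraction correction for a water layer of density $\rho_w$ on a rigid Earth of mean density $\bar\rho$ amplifies it by roughly 12%:

```latex
% Standard equilibrium tidal bulge height (the 54 cm quoted above):
h_{\mathrm{eq}} \;=\; \frac{3}{2}\,\frac{M_m}{M_E}\left(\frac{R_E}{d}\right)^{3} R_E \;\approx\; 0.54\ \mathrm{m}
% Self-attraction of the water bulge amplifies this by approximately
h \;\approx\; \frac{h_{\mathrm{eq}}}{1 - \dfrac{3\rho_w}{5\bar\rho}} \;\approx\; 1.12\,h_{\mathrm{eq}}
```

With $M_m/M_E \approx 0.0123$, $R_E \approx 6371$ km, and $d \approx 384{,}400$ km, the first expression indeed evaluates to about 0.54 m; the amplification factor uses $\rho_w/\bar\rho \approx 1000/5514$.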
Logic-Based Models for the Analysis of Cell Signaling Networks†
2010-01-01
Computational models are increasingly used to analyze the operation of complex biochemical networks, including those involved in cell signaling networks. Here we review recent advances in applying logic-based modeling to mammalian cell biology. Logic-based models represent biomolecular networks in a simple and intuitive manner without describing the detailed biochemistry of each interaction. A brief description of several logic-based modeling methods is followed by six case studies that demonstrate biological questions recently addressed using logic-based models and point to potential advances in model formalisms and training procedures that promise to enhance the utility of logic-based methods for studying the relationship between environmental inputs and phenotypic or signaling state outputs of complex signaling networks. PMID:20225868
NASA Astrophysics Data System (ADS)
Simmel, Martin; Bühl, Johannes; Ansmann, Albert; Tegen, Ina
2015-04-01
The present work combines remote sensing observations and detailed microphysics cloud modeling to investigate two altocumulus cloud cases observed over Leipzig, Germany. A suite of remote sensing instruments was able to detect primary ice at rather warm temperatures of -6°C. For comparison, a second mixed-phase case at about -25°C is introduced. To further look into the details of cloud microphysical processes, a simple dynamics model of the Asai-Kasahara type is combined with detailed spectral microphysics, forming the model system AK-SPECS. Temperature and humidity profiles are taken either from observations (radiosonde) or from GDAS reanalysis. Vertical velocities are prescribed to force the dynamics as well as the main cloud features to be close to the observations. Subsequently, sensitivity studies with respect to dynamical as well as ice microphysical parameters are carried out with the aim of quantifying the most important sensitivities for the cases investigated. For the cases selected, the liquid phase is mainly determined by the model dynamics (location and strength of vertical velocity), whereas the ice phase is much more sensitive to the microphysical parameters (ice nuclei (IN) number, ice particle shape). The choice of ice particle shape may induce large uncertainties, which are of the same order as those for the temperature-dependent IN number distribution.
Bachis, Giulia; Maruéjouls, Thibaud; Tik, Sovanna; Amerlinck, Youri; Melcer, Henryk; Nopens, Ingmar; Lessard, Paul; Vanrolleghem, Peter A
2015-01-01
Characterization and modelling of primary settlers have largely been neglected to date. However, whole-plant and resource recovery modelling requires primary settler model development, as current models lack detail in describing the dynamics and the diversity of the removal process for different particulate fractions. This paper focuses on the improved modelling and experimental characterization of primary settlers. First, a new modelling concept based on the particle settling velocity distribution is proposed, which is then applied to the development of an improved primary settler model as well as to its characterization under addition of chemicals (chemically enhanced primary treatment, CEPT). This model is compared to two existing simple primary settler models (Otterpohl and Freund; Lessard and Beck) and is shown to be better than the first and statistically comparable to the second, but with easier calibration thanks to the ease with which wastewater characteristics can be translated into model parameters. Second, the changes induced by primary settling in the activated sludge model (ASM)-based chemical oxygen demand fractionation between inlet and outlet are investigated, showing that typical wastewater fractions are modified by primary treatment. As they clearly impact the downstream processes, both model improvements demonstrate the need for more detailed primary settler models in view of whole-plant modelling.
Magnetic Doppler imaging of Ap stars
NASA Astrophysics Data System (ADS)
Silvester, J.; Wade, G. A.; Kochukhov, O.; Landstreet, J. D.; Bagnulo, S.
2008-04-01
Historically, the magnetic field geometries of the chemically peculiar Ap stars were modelled in the context of a simple dipole field. However, with the acquisition of increasingly sophisticated diagnostic data, it has become clear that the large-scale field topologies exhibit important departures from this simple model. Recently, new high-resolution circular and linear polarisation spectroscopy has even hinted at the presence of strong, small-scale field structures, which were completely unexpected based on earlier modelling. This project investigates the detailed structure of these strong fossil magnetic fields, in particular the large-scale field geometry, as well as small-scale magnetic structures, by mapping the magnetic and chemical surface structure of a selected sample of Ap stars. These maps will be used to investigate the relationship between the local field vector and local surface chemistry, looking for the influence the field may have on the various chemical transport mechanisms (i.e., diffusion, convection and mass loss). This will lead to better constraints on the origin and evolution of these fields, as well as a refined magnetic field model for Ap stars. Mapping will be performed using high-resolution, high signal-to-noise ratio time series of spectra in both circular and linear polarisation obtained using the new-generation ESPaDOnS (CFHT, Mauna Kea, Hawaii) and NARVAL spectropolarimeters (Pic du Midi Observatory). With these data we will perform tomographic inversion of Doppler-broadened Stokes IQUV Zeeman profiles of a large variety of spectral lines using the INVERS10 magnetic Doppler imaging code, simultaneously recovering the detailed surface maps of the vector magnetic field and chemical abundances.
Unidirectional random growth with resetting
NASA Astrophysics Data System (ADS)
Biró, T. S.; Néda, Z.
2018-06-01
We review stochastic processes without detailed balance condition and derive their H-theorem. We obtain stationary distributions and investigate their stability in terms of generalized entropic distances beyond the Kullback-Leibler formula. A simple stochastic model with local growth rates and direct resetting to the ground state is investigated and applied to various networks, scientific citations and Facebook popularity, hadronic yields in high energy particle reactions, income and wealth distributions, biodiversity and settlement size distributions.
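A toy version of a unidirectional growth process with direct resetting can be simulated in a few lines. The per-step probabilities below are arbitrary choices for illustration, not parameters from the paper.

```python
import random

# Sketch of growth-with-resetting: the state grows by one step with high
# probability and resets directly to the ground state (0) otherwise. With a
# constant reset rate the sampled stationary occupancy decays geometrically
# with the state index. Probabilities are invented placeholders.

random.seed(7)
GROWTH_P, RESET_P = 0.9, 0.1   # per-step growth vs reset probabilities

def run(steps):
    x, counts = 0, {}
    for _ in range(steps):
        if random.random() < RESET_P:
            x = 0               # direct resetting to the ground state
        else:
            x += 1              # unidirectional growth
        counts[x] = counts.get(x, 0) + 1
    return counts

counts = run(200_000)
# Occupancy should fall off with state index, P(k) ~ GROWTH_P**k.
print(counts[0] > counts[5] > counts[20])
```

State-dependent growth rates (preferential attachment, citation dynamics, etc.) replace the constant GROWTH_P in the applications listed above, changing the stationary law away from geometric.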
Autocatalytic polymerization generates persistent random walk of crawling cells.
Sambeth, R; Baumgaertner, A
2001-05-28
The autocatalytic polymerization kinetics of the cytoskeletal actin network provides the basic mechanism for a persistent random walk of a crawling cell. It is shown that network remodeling by branching processes near the cell membrane is essential for the bimodal spatial stability of the network which induces a spontaneous breaking of isotropic cell motion. Details of the phenomena are analyzed using a simple polymerization model studied by analytical and simulation methods.
Well bore breakouts and in situ stress
Zoback, Mark D.; Moos, Daniel; Mastin, Larry; Anderson, Roger N.
1985-01-01
The detailed cross-sectional shape of stress-induced well bore breakouts has been studied using specially processed ultrasonic borehole televiewer data. Breakout shapes are shown for a variety of rock types, and a simple elastic failure model is introduced which explains many features of the observations. Both the observations and calculations indicate that the breakouts define relatively broad and flat curvilinear surfaces which enlarge the borehole in the direction of minimum horizontal compression.
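The elastic picture behind breakout orientation can be sketched with the classical Kirsch solution for the hoop stress at a circular borehole wall; the stress magnitudes below are invented for illustration, and the paper's failure model is more detailed.

```python
import math

# Kirsch hoop stress at the borehole wall,
#   sigma_theta = SH + Sh - 2*(SH - Sh)*cos(2*theta) - Pw,
# with theta measured from the maximum horizontal stress (SH) azimuth. The
# hoop stress is largest at theta = 90 deg, so compressive failure
# (breakout) localizes along the minimum horizontal stress direction.

def hoop_stress(SH, Sh, Pw, theta_rad):
    """Hoop stress (MPa) at the wall; theta from the SH azimuth."""
    return SH + Sh - 2.0 * (SH - Sh) * math.cos(2.0 * theta_rad) - Pw

SH, Sh, Pw = 60.0, 40.0, 20.0   # far-field stresses and mud pressure, invented

along_SH = hoop_stress(SH, Sh, Pw, 0.0)            # theta = 0
along_Sh = hoop_stress(SH, Sh, Pw, math.pi / 2.0)  # theta = 90 deg

print(along_SH)  # 40.0 MPa
print(along_Sh)  # 120.0 MPa: breakouts grow in the Sh direction
```

Comparing the peak hoop stress against a rock strength criterion then predicts the breakout width, which is the kind of feature the observed cross-sections constrain.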
High-fidelity simulation capability for virtual testing of seismic and acoustic sensors
NASA Astrophysics Data System (ADS)
Wilson, D. Keith; Moran, Mark L.; Ketcham, Stephen A.; Lacombe, James; Anderson, Thomas S.; Symons, Neill P.; Aldridge, David F.; Marlin, David H.; Collier, Sandra L.; Ostashev, Vladimir E.
2005-05-01
This paper describes development and application of a high-fidelity, seismic/acoustic simulation capability for battlefield sensors. The purpose is to provide simulated sensor data so realistic that they cannot be distinguished by experts from actual field data. This emerging capability provides rapid, low-cost trade studies of unattended ground sensor network configurations, data processing and fusion strategies, and signatures emitted by prototype vehicles. There are three essential components to the modeling: (1) detailed mechanical signature models for vehicles and walkers, (2) high-resolution characterization of the subsurface and atmospheric environments, and (3) state-of-the-art seismic/acoustic models for propagating moving-vehicle signatures through realistic, complex environments. With regard to the first of these components, dynamic models of wheeled and tracked vehicles have been developed to generate ground force inputs to seismic propagation models. Vehicle models range from simple, 2D representations to highly detailed, 3D representations of entire linked-track suspension systems. Similarly detailed models of acoustic emissions from vehicle engines are under development. The propagation calculations for both the seismics and acoustics are based on finite-difference, time-domain (FDTD) methodologies capable of handling complex environmental features such as heterogeneous geologies, urban structures, surface vegetation, and dynamic atmospheric turbulence. Any number of dynamic sources and virtual sensors may be incorporated into the FDTD model. The computational demands of 3D FDTD simulation over tactical distances require massively parallel computers. Several example calculations of seismic/acoustic wave propagation through complex atmospheric and terrain environments are shown.
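The FDTD propagation core of such a simulation can be illustrated in one dimension. Grid, time step, and source here are illustrative placeholders, not the paper's 3-D massively parallel configuration.

```python
import numpy as np

# Toy 1-D acoustic FDTD scheme of the kind scaled up to 3-D above: leapfrog
# update of pressure from the second-order wave equation, with an impulsive
# point source at the grid center. Parameters are invented for illustration.

c, dx = 340.0, 1.0        # sound speed (m/s), grid spacing (m)
dt = 0.5 * dx / c         # time step satisfying the CFL limit c*dt/dx <= 1
n, steps = 401, 300
courant2 = (c * dt / dx) ** 2

p_prev = np.zeros(n)      # pressure at time step k-1
p = np.zeros(n)           # pressure at time step k
p[n // 2] = 1.0           # impulsive point source at the center

for _ in range(steps):
    lap = np.zeros(n)
    lap[1:-1] = p[2:] - 2.0 * p[1:-1] + p[:-2]   # discrete Laplacian
    p_next = 2.0 * p - p_prev + courant2 * lap   # leapfrog update
    p_prev, p = p, p_next

# With a centered source the field stays mirror-symmetric as it propagates.
print(np.allclose(p, p[::-1]))
```

The 3-D production codes add heterogeneous material properties, moving sources, and absorbing boundaries, but the update stencil is the same idea.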
A Heuristic Probabilistic Approach to Estimating Size-Dependent Mobility of Nonuniform Sediment
NASA Astrophysics Data System (ADS)
Woldegiorgis, B. T.; Wu, F. C.; van Griensven, A.; Bauwens, W.
2017-12-01
Simulating the mechanism of bed sediment mobility is essential for modelling sediment dynamics. Although many studies have addressed this subject, they use complex mathematical formulations that are computationally expensive and often not easy to implement. To provide a simple and computationally efficient complement to detailed sediment mobility models, we developed a heuristic probabilistic approach to estimating the size-dependent mobilities of nonuniform sediment based on the pre- and post-entrainment particle size distributions (PSDs), assuming that the PSDs are lognormally distributed. The approach fits a lognormal probability density function (PDF) to the pre-entrainment PSD of bed sediment and uses the threshold particle size of incipient motion and the concept of a sediment mixture to estimate the PSDs of the entrained sediment and the post-entrainment bed sediment. The new approach is simple in a physical sense and significantly reduces the complexity, computation time, and resources required by detailed sediment mobility models. It is calibrated and validated with laboratory and field data by comparison to the size-dependent mobilities predicted with the existing empirical lognormal cumulative distribution function (CDF) approach. The novel features of the current approach are: (1) separating the entrained and non-entrained sediments by a threshold particle size, which is a critical particle size of incipient motion modified to account for mixed-size effects, and (2) using the mixture-based pre- and post-entrainment PSDs to provide a continuous estimate of the size-dependent sediment mobility.
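The threshold-split idea can be sketched directly from the lognormal assumption: the mass fraction finer than the threshold size is the lognormal CDF evaluated there. The parameter values below are invented for illustration.

```python
import math

# Hedged sketch: with a lognormal pre-entrainment PSD, the fraction of bed
# material finer than a threshold size d_t is the lognormal CDF at d_t; that
# fraction is treated as entrainable, the coarser remainder stays in the bed.
# The median size, spread, and threshold are illustrative placeholders.

def lognormal_cdf(d, mu, sigma):
    """CDF of a lognormal PSD at particle size d (mu, sigma of ln-size)."""
    return 0.5 * (1.0 + math.erf((math.log(d) - mu) / (sigma * math.sqrt(2.0))))

mu, sigma = math.log(0.5), 0.8   # median size 0.5 mm, invented spread
d_threshold = 0.5                # threshold size of incipient motion (mm)

entrained_fraction = lognormal_cdf(d_threshold, mu, sigma)
print(entrained_fraction)        # 0.5: half the mass is finer than the median
```

The approach described above goes further by adjusting the threshold for mixed-size (hiding/exposure) effects and by using the pre- and post-entrainment PSDs together, rather than a single CDF cut.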
NASA Astrophysics Data System (ADS)
Valentin, M. M.; Hay, L.; Van Beusekom, A. E.; Viger, R. J.; Hogue, T. S.
2016-12-01
Forecasting the hydrologic response to climate change in Alaska's glaciated watersheds remains daunting for hydrologists due to sparse field data and few modeling tools, which frustrates efforts to manage and protect critical aquatic habitat. Approximately 20% of the 64,000 square kilometer Copper River watershed is glaciated, and its glacier-fed tributaries support renowned salmon fisheries that are economically, culturally, and nutritionally invaluable to the local communities. This study adapts a simple, yet powerful, conceptual hydrologic model to simulate changes in the timing and volume of streamflow in the Copper River, Alaska as glaciers change under plausible future climate scenarios. The USGS monthly water balance model (MWBM), a hydrologic tool used for two decades to evaluate a broad range of hydrologic questions in the contiguous U.S., was enhanced to include glacier melt simulations and remotely sensed data. In this presentation we summarize the technical details behind our MWBM adaptation and demonstrate its use in the Copper River Basin to evaluate glacier and streamflow responses to climate change.
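Glacier-melt terms of this kind are often implemented as a monthly degree-day scheme; the sketch below shows that pattern with invented values (the actual MWBM enhancement may differ in its details).

```python
# Hedged sketch of a monthly degree-day glacier-melt term of the kind added
# to water-balance models. Melt factor, temperature, and glacier fraction
# are invented placeholders, not the USGS MWBM calibration.

MELT_FACTOR = 4.5   # mm of melt per degree-day above 0 C, illustrative
DAYS = 30           # days in the model month

def monthly_glacier_melt(mean_temp_c, glacier_fraction):
    """Basin-averaged melt depth (mm) from the glaciated fraction."""
    degree_days = max(mean_temp_c, 0.0) * DAYS
    return MELT_FACTOR * degree_days * glacier_fraction

# Example: a +5 C summer month over a 20%-glaciated basin (Copper River-like).
melt = monthly_glacier_melt(5.0, 0.20)
print(round(melt, 2))   # about 135 mm of basin-averaged melt runoff
```

Coupled to the monthly water balance, such a term shifts streamflow timing and volume as the glacier fraction evolves under the climate scenarios described above.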
Hypercat - Hypercube of AGN tori
NASA Astrophysics Data System (ADS)
Nikutta, Robert; Lopez-Rodriguez, Enrique; Ichikawa, Kohei; Levenson, Nancy A.; Packham, Christopher C.
2018-06-01
AGN unification and observations hold that a dusty torus obscures the central accretion engine along some lines of sight. SEDs of dust tori have been modeled for a long time, but resolved emission morphologies have not been studied in much detail, because resolved observations are only possible recently (VLTI,ALMA) and in the near future (TMT,ELT,GMT). Some observations challenge a simple torus model, because in several objects most of MIR emission appears to emanate from polar regions high above the equatorial plane, i.e. not where the dust supposedly resides.We introduce our software framework and hypercube of AGN tori (Hypercat) made with CLUMPY (www.clumpy.org), a large set of images (6 model parameters + wavelength) to facilitate studies of emission and dust morphologies. We make use of Hypercat to study the morphological properties of the emission and dust distributions as function of model parameters. We find that a simple clumpy torus can indeed produce 10-micron emission patterns extended in polar directions, with extension ratios compatible with those found in observations. We are able to constrain the range of parameters that produce such morphologies.
Mergers of Non-spinning Black-hole Binaries: Gravitational Radiation Characteristics
NASA Technical Reports Server (NTRS)
Baker, John G.; Boggs, William D.; Centrella, Joan; Kelly, Bernard J.; McWilliams, Sean T.; vanMeter, James R.
2008-01-01
We present a detailed descriptive analysis of the gravitational radiation from black-hole binary mergers of non-spinning black holes, based on numerical simulations of systems varying from equal-mass to a 6:1 mass ratio. Our primary goal is to present relatively complete information about the waveforms, including all the leading multipolar components, to interested researchers. In our analysis, we pursue the simplest physical description of the dominant features in the radiation, providing an interpretation of the waveforms in terms of an implicit rotating source. This interpretation applies uniformly to the full wavetrain, from inspiral through ringdown. We emphasize strong relationships among the l = m modes that persist through the full wavetrain. Exploring the structure of the waveforms in more detail, we conduct detailed analytic fitting of the late-time frequency evolution, identifying a key quantitative feature shared by the l = m modes among all mass-ratios. We identify relationships, with a simple interpretation in terms of the implicit rotating source, among the evolution of frequency and amplitude, which hold for the late-time radiation. These detailed relationships provide sufficient information about the late-time radiation to yield a predictive model for the late-time waveforms, an alternative to the common practice of modeling by a sum of quasinormal mode overtones. We demonstrate an application of this in a new effective-one-body-based analytic waveform model.
A mean spherical model for soft potentials: The hard core revealed as a perturbation
NASA Technical Reports Server (NTRS)
Rosenfeld, Y.; Ashcroft, N. W.
1978-01-01
The mean spherical approximation for fluids is extended to treat the case of dense systems interacting via soft potentials. The extension takes the form of a generalized statement concerning the behavior of the direct correlation function c(r) and radial distribution g(r). From a detailed analysis that views the hard core portion of a potential as a perturbation on the whole, a specific model is proposed which possesses analytic solutions for both Coulomb and Yukawa potentials, in addition to certain other remarkable properties. A variational principle for the model leads to a relatively simple method for obtaining numerical solutions.
NASA Technical Reports Server (NTRS)
Ghil, M.
1980-01-01
A unified theoretical approach to both the four-dimensional assimilation of asynoptic data and the initialization problem is attempted. This approach relies on the derivation of certain relationships between geopotential tendencies and tendencies of the horizontal velocity field in primitive-equation models of atmospheric flow. The approach is worked out and analyzed in detail for some simple barotropic models. Certain independent results of numerical experiments for the time-continuous assimilation of real asynoptic meteorological data into a complex, baroclinic weather prediction model are discussed in the context of the present approach. Tentative inferences are drawn for practical assimilation procedures.
Language competition in a population of migrating agents.
Lipowska, Dorota; Lipowski, Adam
2017-05-01
Influencing various aspects of human activity, migration is associated also with language formation. To examine the mutual interaction of these processes, we study a Naming Game with migrating agents. The dynamics of the model leads to formation of low-mobility clusters, which turns out to break the symmetry of the model: although the Naming Game remains symmetric, low-mobility languages are favored. High-mobility languages are gradually eliminated from the system, and the dynamics of language formation considerably slows down. Our model is too simple to explain in detail language competition of migrating human communities, but it certainly shows that languages of settlers are favored over nomadic ones.
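The Naming Game dynamics described above can be illustrated with a minimal, non-spatial sketch; migration and the clustering that the paper studies are omitted, and the agent count and step cap are arbitrary choices, not the authors' setup:

```python
import random

def naming_game(n_agents=10, max_steps=50000, seed=0):
    """Minimal non-spatial Naming Game: repeated speaker/hearer
    interactions until every agent shares exactly one word."""
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]
    next_word = 0
    for step in range(max_steps):
        s, h = rng.sample(range(n_agents), 2)
        if not inventories[s]:                       # speaker invents a word
            inventories[s].add(next_word)
            next_word += 1
        word = rng.choice(sorted(inventories[s]))    # speaker utters a word
        if word in inventories[h]:                   # success: both collapse
            inventories[s] = {word}
            inventories[h] = {word}
        else:                                        # failure: hearer learns it
            inventories[h].add(word)
        if all(inv == inventories[0] and len(inv) == 1 for inv in inventories):
            return step + 1                          # consensus reached
    return None

steps = naming_game()
```

In this baseline version all agents interact with equal probability, so consensus is reached quickly; the paper's point is that adding migration-driven clusters breaks this symmetry.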
On the topology of flux transfer events
NASA Technical Reports Server (NTRS)
Hesse, Michael; Birn, Joachim; Schindler, Karl
1990-01-01
A topological analysis is made of a simple model magnetic field of a perturbation at the magnetopause that shares magnetic properties with flux transfer events. The aim is to clarify a number of topological aspects that arise in the case of fully three-dimensional magnetic fields. It is shown that a localized perturbation at the magnetopause can in principle open a closed magnetosphere by establishing magnetic connections across the magnetopause by the formation of a ropelike magnetic field structure. For this purpose a global topological model of a closed magnetosphere is considered as the unperturbed state. The topological substructure of the model flux rope is discussed in detail.
Evolution of cosmic string networks
NASA Technical Reports Server (NTRS)
Albrecht, Andreas; Turok, Neil
1989-01-01
A discussion of the evolution and observable consequences of a network of cosmic strings is given. A simple model for the evolution of the string network is presented, and related to the statistical mechanics of string networks. The model predicts the long string density throughout the history of the universe from a single parameter, which researchers calculate in radiation era simulations. The statistical mechanics arguments indicate a particular thermal form for the spectrum of loops chopped off the network. Detailed numerical simulations of string networks in expanding backgrounds are performed to test the model. Consequences for large scale structure, the microwave and gravity wave backgrounds, nucleosynthesis and gravitational lensing are calculated.
NASA Astrophysics Data System (ADS)
Massip, Florian; Arndt, Peter F.
2013-04-01
Recently, an enrichment of identical matching sequences has been found in many eukaryotic genomes. Their length distribution exhibits a power law tail raising the question of what evolutionary mechanism or functional constraints would be able to shape this distribution. Here we introduce a simple and evolutionarily neutral model, which involves only point mutations and segmental duplications, and produces the same statistical features as observed for genomic data. Further, we extend a mathematical model for random stick breaking to analytically show that the exponent of the power law tail is -3 and universal as it does not depend on the microscopic details of the model.
A parsimonious dynamic model for river water quality assessment.
Mannina, Giorgio; Viviani, Gaspare
2010-01-01
Water quality modelling is of crucial importance for the assessment of physical, chemical, and biological changes in water bodies. Mathematical approaches to water modelling have become more prevalent over recent years. Different model types ranging from detailed physical models to simplified conceptual models are available. A possible middle ground between detailed and simplified models is the parsimonious model, which represents the simplest approach that fits the application. The appropriate modelling approach depends on the research goal as well as on the data available for correct model application. When data are inadequate, it is mandatory to focus on a simple river water quality model rather than a detailed one. The study presents a parsimonious river water quality model to evaluate the propagation of pollutants in natural rivers. The model is made up of two sub-models: a quantity sub-model and a quality sub-model. The model employs a river schematisation that considers different stretches according to the geometric characteristics and to the gradient of the river bed. Each stretch is represented with a conceptual model of a series of linear channels and reservoirs. The channels determine the delay in the pollution wave and the reservoirs cause its dispersion. To assess the river water quality, the model employs four state variables: DO, BOD, NH4, and NO. The model was applied to the Savena River (Italy), which is the focus of a European-financed project in which quantity and quality data were gathered. A sensitivity analysis of the model output to the model input and parameters was performed using the Generalised Likelihood Uncertainty Estimation methodology. The results demonstrate the suitability of such a model as a tool for river water quality management.
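The series-of-linear-reservoirs schematisation can be sketched as a cascade that delays and disperses a pollution wave; the reservoir count and time constant below are illustrative, not the values calibrated for the Savena River:

```python
def route(inflow, n_reservoirs=3, k=5.0, dt=1.0):
    """Route an inflow series through a cascade of linear reservoirs
    (outflow = storage / k), explicit Euler stepping.  The cascade
    delays the wave and spreads it out, as in the conceptual model."""
    storages = [0.0] * n_reservoirs
    outflow = []
    for q in inflow:
        for i in range(n_reservoirs):
            q_out = storages[i] / k
            storages[i] += dt * (q - q_out)
            q = q_out                 # feeds the next reservoir in series
        outflow.append(q)
    return outflow

pulse = [1.0] + [0.0] * 199           # a unit pulse of pollutant load
out = route(pulse)
```

Because each reservoir conserves mass, the routed pulse sums back to the input; only its timing and shape change.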
Simple heuristics and rules of thumb: where psychologists and behavioural biologists might meet.
Hutchinson, John M C; Gigerenzer, Gerd
2005-05-31
The Centre for Adaptive Behaviour and Cognition (ABC) has hypothesised that much human decision-making can be described by simple algorithmic process models (heuristics). This paper explains this approach and relates it to research in biology on rules of thumb, which we also review. As an example of a simple heuristic, consider the lexicographic strategy of Take The Best for choosing between two alternatives: cues are searched in turn until one discriminates, then search stops and all other cues are ignored. Heuristics consist of building blocks, and building blocks exploit evolved or learned abilities such as recognition memory; it is the complexity of these abilities that allows the heuristics to be simple. Simple heuristics have an advantage in making decisions fast and with little information, and in avoiding overfitting. Furthermore, humans are observed to use simple heuristics. Simulations show that the statistical structures of different environments affect which heuristics perform better, a relationship referred to as ecological rationality. We contrast ecological rationality with the stronger claim of adaptation. Rules of thumb from biology provide clearer examples of adaptation because animals can be studied in the environments in which they evolved. The range of examples is also much more diverse. To investigate them, biologists have sometimes used similar simulation techniques to ABC, but many examples depend on empirically driven approaches. ABC's theoretical framework can be useful in connecting some of these examples, particularly the scattered literature on how information from different cues is integrated. Optimality modelling is usually used to explain less detailed aspects of behaviour but might more often be redirected to investigate rules of thumb.
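The lexicographic Take The Best strategy described above is simple enough to sketch directly; the cue values and validities below are invented for illustration:

```python
def take_the_best(cues_a, cues_b, validities):
    """Lexicographic 'Take The Best': check cues in descending order
    of validity; the first cue that discriminates decides, and all
    remaining cues are ignored.  Cue values are 1/0/None (unknown)."""
    order = sorted(range(len(validities)), key=lambda i: -validities[i])
    for i in order:
        a, b = cues_a[i], cues_b[i]
        if a is not None and b is not None and a != b:
            return 'A' if a > b else 'B'
    return None  # no cue discriminates: fall back to guessing

# Which of two cities is larger?  Hypothetical cues: has-airport,
# is-capital, has-team, with invented validities.
choice = take_the_best([1, 0, 1], [1, 1, 0], validities=[0.9, 0.8, 0.7])
```

Here the first cue ties, so the second (is-capital) decides in favor of B and the third cue is never consulted, which is exactly the "stopped search" that makes the heuristic fast and frugal.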
Non-Boltzmann Modeling for Air Shock-Layer Radiation at Lunar-Return Conditions
NASA Technical Reports Server (NTRS)
Johnston, Christopher O.; Hollis, Brian R.; Sutton, Kenneth
2008-01-01
This paper investigates the non-Boltzmann modeling of the radiating atomic and molecular electronic states present in lunar-return shock-layers. The Master Equation is derived for a general atom or molecule while accounting for a variety of excitation and de-excitation mechanisms. A new set of electronic-impact excitation rates is compiled for N, O, and N2+, which are the main radiating species for most lunar-return shock-layers. Based on these new rates, a novel approach of curve-fitting the non-Boltzmann populations of the radiating atomic and molecular states is developed. This new approach provides a simple and accurate method for calculating the atomic and molecular non-Boltzmann populations while avoiding the matrix inversion procedure required for the detailed solution of the Master Equation. The radiative flux values predicted by the present detailed non-Boltzmann model and the approximate curve-fitting approach are shown to agree within 5% for the Fire 1634 s case.
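The detailed solution the curve fits are meant to avoid, solving the Master Equation for state populations, can be sketched for a toy system; the rate matrix below is invented, not the compiled N, O, or N2+ rates, and the curve-fitting shortcut itself is not reproduced:

```python
import numpy as np

def steady_state(rates):
    """Steady-state populations of a master equation dn/dt = A n = 0.
    rates[i][j] is the transition rate from state j to state i; the
    diagonal of A holds minus the total outflow of each state."""
    rates = np.asarray(rates, dtype=float)
    A = rates.copy()
    np.fill_diagonal(A, 0.0)
    A -= np.diag(A.sum(axis=0))          # outflow of each column state
    # Replace one balance equation by the normalization sum(n) = 1.
    A[-1, :] = 1.0
    b = np.zeros(len(A))
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Illustrative 3-level system (rates are invented, not from the paper).
R = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 4.0],
     [0.5, 1.0, 0.0]]
pops = steady_state(R)
```

For realistic species with many electronic states this linear solve is exactly the matrix inversion the paper's curve-fit approach is designed to sidestep.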
Automated map sharpening by maximization of detail and connectivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.
2018-05-18
An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with a high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the 'adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement parameters; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.
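The two ingredients of the metric, detail as iso-contour surface area at a fixed volume fraction and connectivity as the number of connected regions, can be sketched on a voxel grid. This is a simplification: exposed voxel faces stand in for a true iso-contour surface, and the actual combination rule of the adjusted surface area is not reproduced here:

```python
import numpy as np
from collections import deque

def surface_and_connectivity(density, volume_fraction=0.2):
    """Threshold a 3-D map so that `volume_fraction` of voxels lie
    inside the contour, then return (surface_area, n_regions):
    exposed voxel faces and 6-connected components of the inside set."""
    thresh = np.quantile(density, 1.0 - volume_fraction)
    inside = density > thresh
    # Surface area: faces between an inside voxel and an outside one.
    area = 0
    for axis in range(3):
        a = np.swapaxes(inside, 0, axis)
        area += np.sum(a[1:] != a[:-1])        # internal boundary faces
        area += np.sum(a[0]) + np.sum(a[-1])   # faces on the box boundary
    # Connected regions: breadth-first search over 6-neighbours.
    seen = np.zeros_like(inside, dtype=bool)
    n_regions = 0
    for idx in zip(*np.nonzero(inside)):
        if seen[idx]:
            continue
        n_regions += 1
        queue = deque([idx])
        seen[idx] = True
        while queue:
            x, y, z = queue.popleft()
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (x + dx, y + dy, z + dz)
                if all(0 <= n[i] < inside.shape[i] for i in range(3)) \
                        and inside[n] and not seen[n]:
                    seen[n] = True
                    queue.append(n)
    return area, n_regions
```

A well-sharpened map should score high on the first number and low on the second; an over-sharpened, noisy map fragments into many small regions.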
Caffrey, Emily A; Johansen, Mathew P; Higley, Kathryn A
2015-10-01
Radiological dosimetry for nonhuman biota typically relies on Monte Carlo simulations of simple, ellipsoidal geometries with internal radioactivity distributed homogeneously throughout. In this manner it is quick and easy to estimate whole-body dose rates to biota. Voxel models are detailed anatomical phantoms that were first used for calculating radiation dose to humans, which are now being extended to nonhuman biota dose calculations. However, if simple ellipsoidal models provide conservative dose-rate estimates, then the additional labor involved in creating voxel models may be unnecessary for most scenarios. Here we show that the ellipsoidal method provides conservative estimates of organ dose rates to small mammals. Organ dose rates were calculated for environmental source terms from Maralinga, the Nevada Test Site, Hanford and Fukushima using both the ellipsoidal and voxel techniques, and in all cases the ellipsoidal method yielded more conservative dose rates by factors of 1.2-1.4 for photons and 5.3 for beta particles. Dose rates for alpha-emitting radionuclides are identical for each method as full energy absorption in source tissue is assumed. The voxel procedure includes contributions to dose from organ-to-organ irradiation (shown here to comprise 2-50% of total dose from photons and 0-93% of total dose from beta particles) that is not specifically quantified in the ellipsoidal approach. Overall, the voxel models provide robust dosimetry for the nonhuman mammals considered in this study, and though the level of detail is likely extraneous to demonstrating regulatory compliance today, voxel models may nevertheless be advantageous in resolving ongoing questions regarding the effects of ionizing radiation on wildlife.
Szilágyi, N; Kovács, R; Kenyeres, I; Csikor, Zs
2013-01-01
Biofilm development in a fixed bed biofilm reactor system performing municipal wastewater treatment was monitored aiming at accumulating colonization and maximum biofilm mass data usable in engineering practice for process design purposes. Initially a 6 month experimental period was selected for investigations where the biofilm formation and the performance of the reactors were monitored. The results were analyzed by two methods: for simple, steady-state process design purposes the maximum biofilm mass on carriers versus influent load and a time constant of the biofilm growth were determined, whereas for design approaches using dynamic models a simple biofilm mass prediction model including attachment and detachment mechanisms was selected and fitted to the experimental data. According to a detailed statistical analysis, the collected data have not allowed us to determine both the time constant of biofilm growth and the maximum biofilm mass on carriers at the same time. The observed maximum biofilm mass could be determined with a reasonable error and ranged between 438 gTS/m(2) carrier surface and 843 gTS/m(2), depending on influent load, and hydrodynamic conditions. The parallel analysis of the attachment-detachment model showed that the experimental data set allowed us to determine the attachment rate coefficient which was in the range of 0.05-0.4 m d(-1) depending on influent load and hydrodynamic conditions.
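A first-order attachment-detachment model of the kind fitted in this study can be sketched in closed form and checked by integration; the bulk-concentration and detachment values below are invented, chosen only so that the steady-state mass lands in the reported 438-843 gTS/m2 range:

```python
import math

def biofilm_mass(t, k_att=0.2, c_bulk=30.0, k_det=0.01):
    """Closed-form biofilm mass (gTS/m^2) for the first-order model
    dM/dt = k_att * C - k_det * M with constant bulk concentration C.
    k_att (m/d) is within the paper's fitted 0.05-0.4 range; C and
    k_det are illustrative assumptions."""
    m_max = k_att * c_bulk / k_det            # steady-state biofilm mass
    return m_max * (1.0 - math.exp(-k_det * t))

def simulate(t_end, dt=0.1, k_att=0.2, c_bulk=30.0, k_det=0.01):
    """Euler integration of the same ODE, for checking the formula."""
    m, t = 0.0, 0.0
    while t < t_end - 1e-9:
        m += dt * (k_att * c_bulk - k_det * m)
        t += dt
    return m
```

The structure also shows why the data could pin down the maximum mass but not the growth time constant independently: both enter the single exponential term.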
Current State of the Art Historic Building Information Modelling
NASA Astrophysics Data System (ADS)
Dore, C.; Murphy, M.
2017-08-01
In an extensive review of existing literature a number of observations were made in relation to the current approaches for recording and modelling existing buildings and environments: Data collection and pre-processing techniques are becoming increasingly automated to allow for near real-time data capture and fast processing of this data for later modelling applications. Current BIM software is almost completely focused on new buildings and has very limited tools and pre-defined libraries for modelling existing and historic buildings. The development of reusable parametric library objects for existing and historic buildings supports modelling with high levels of detail while decreasing the modelling time. Mapping these parametric objects to survey data, however, is still a time-consuming task that requires further research. Promising developments have been made towards automatic object recognition and feature extraction from point clouds for as-built BIM. However, results are currently limited to simple and planar features. Further work is required for automatic accurate and reliable reconstruction of complex geometries from point cloud data. Procedural modelling can provide an automated solution for generating 3D geometries but lacks the detail and accuracy required for most as-built applications in AEC and heritage fields.
Mathematical Astronomy in India
NASA Astrophysics Data System (ADS)
Plofker, Kim
Astronomy in South Asia's Sanskrit tradition, apparently originating in simple calendric computations regulating the timing of ancient ritual practices, expanded over the course of two or three millennia to include detailed spherical models, an endless variety of astrological systems, and academic mathematics in general. Assimilating various technical models, methods, and genres from the astronomy of neighboring cultures, Indian astronomers created new forms that were in turn borrowed by their foreign counterparts. Always recognizably related to the main themes of Eurasian geocentric mathematical astronomy, Indian astral science nonetheless maintained its culturally distinct character until Keplerian heliocentrism and Newtonian mechanics replaced it in colonial South Asia's academic mainstream.
NASA Technical Reports Server (NTRS)
Mazuruk, Konstantin; Grugel, Richard N.
2003-01-01
A magnetohydrodynamic model that examines the effect of rotating an electrically conducting cylinder with a uniform external magnetic field applied orthogonal to its axis is presented. Noting a simple geometry, it can be classified as a fundamental dynamo problem. For the case of an infinitely long cylinder, an analytical solution is obtained and analyzed in detail. A semi-analytical model was developed that considers a finite cylinder. Experimental data from a spinning brass wheel in the presence of Earth's magnetic field were compared to the proposed theory and found to fit well.
Remotely sensed soil moisture input to a hydrologic model
NASA Technical Reports Server (NTRS)
Engman, E. T.; Kustas, W. P.; Wang, J. R.
1989-01-01
The possibility of using detailed spatial soil moisture maps as input to a runoff model was investigated. The water balance of a small drainage basin was simulated using a simple storage model. Aircraft microwave measurements of soil moisture were used to construct two-dimensional maps of the spatial distribution of the soil moisture. Data from overflights on different dates provided the temporal changes resulting from soil drainage and evapotranspiration. The study site and data collection are described, and the soil measurement data are given. The model selection is discussed, and the simulation results are summarized. It is concluded that a time series of soil moisture is a valuable new type of data for verifying model performance and for updating and correcting simulated streamflow.
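The simple storage model idea can be sketched as a single-bucket water balance; the structure and parameter values are illustrative, not the model actually used in the study:

```python
def bucket_model(precip, pet, capacity=150.0, k=20.0):
    """One-bucket water balance (storage in mm): precipitation fills
    the store, overflow above capacity leaves as direct runoff,
    actual evapotranspiration is capped by what is stored, and a
    linear term Q = S / k drains slowly.  Parameters are invented."""
    s = 0.0
    flows, ets = [], []
    for p, e in zip(precip, pet):
        s += p
        overflow = max(0.0, s - capacity)
        s -= overflow
        et = min(e, s)               # can't evaporate more than is stored
        s -= et
        q = s / k                    # slow drainage
        s -= q
        flows.append(q + overflow)
        ets.append(et)
    return flows, ets, s

flows, ets, s_final = bucket_model([10.0] * 30, [3.0] * 30)
```

In the study's setting, the remotely sensed soil moisture maps would be used to correct the simulated storage state between overflights rather than letting it drift.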
High-fidelity meshes from tissue samples for diffusion MRI simulations.
Panagiotaki, Eleftheria; Hall, Matt G; Zhang, Hui; Siow, Bernard; Lythgoe, Mark F; Alexander, Daniel C
2010-01-01
This paper presents a method for constructing detailed geometric models of tissue microstructure for synthesizing realistic diffusion MRI data. We construct three-dimensional mesh models from confocal microscopy image stacks using the marching cubes algorithm. Random-walk simulations within the resulting meshes provide synthetic diffusion MRI measurements. Experiments optimise simulation parameters and complexity of the meshes to achieve accuracy and reproducibility while minimizing computation time. Finally we assess the quality of the synthesized data from the mesh models by comparison with scanner data as well as synthetic data from simple geometric models and simplified meshes that vary only in two dimensions. The results support the extra complexity of the three-dimensional mesh compared to simpler models although sensitivity to the mesh resolution is quite robust.
NASA Astrophysics Data System (ADS)
Paiewonsky, Pablo; Elison Timm, Oliver
2018-03-01
In this paper, we present a simple dynamic global vegetation model whose primary intended use is auxiliary to the land-atmosphere coupling scheme of a climate model, particularly one of intermediate complexity. The model simulates not only the essential ecological variables but also some hydrological and surface energy variables that are typically either simulated by land surface schemes or else used as boundary data input for these schemes. The model formulations and their derivations are presented here, in detail. The model includes some realistic and useful features for its level of complexity, including a photosynthetic dependency on light, full coupling of photosynthesis and transpiration through an interactive canopy resistance, and a soil organic carbon dependence for bare-soil albedo. We evaluate the model's performance by running it as part of a simple land surface scheme that is driven by reanalysis data. The evaluation against observational data includes net primary productivity, leaf area index, surface albedo, and diagnosed variables relevant for the closure of the hydrological cycle. In this setup, we find that the model gives an adequate to good simulation of basic large-scale ecological and hydrological variables. Of the variables analyzed in this paper, gross primary productivity is particularly well simulated. The results also reveal the current limitations of the model. The most significant deficiency is the excessive simulation of evapotranspiration in mid- to high northern latitudes during their winter to spring transition. The model has a relative advantage in situations that require some combination of computational efficiency, model transparency and tractability, and the simulation of the large-scale vegetation and land surface characteristics under non-present-day conditions.
CADDIS Volume 2. Sources, Stressors and Responses: Temperature - Simple Conceptual Diagram
Introduction to the temperature module, when to list temperature as a candidate cause, ways to measure temperature, simple and detailed conceptual diagrams for temperature, temperature module references and literature reviews.
CADDIS Volume 2. Sources, Stressors and Responses: Nutrients - Simple Conceptual Diagram
Introduction to the nutrients module, when to list nutrients as a candidate cause, ways to measure nutrients, simple and detailed conceptual diagrams for nutrients, nutrients module references and literature reviews.
CADDIS Volume 2. Sources, Stressors and Responses: Insecticides - Simple Conceptual Diagram
Introduction to the insecticides module, when to list insecticides as a candidate cause, ways to measure insecticides, simple and detailed conceptual diagrams for insecticides, insecticides module references and literature reviews.
CADDIS Volume 2. Sources, Stressors and Responses: Herbicides - Simple Conceptual Diagram
Introduction to the herbicides module, when to list herbicides as a candidate cause, ways to measure herbicides, simple and detailed conceptual diagrams for herbicides, herbicides module references and literature reviews.
CADDIS Volume 2. Sources, Stressors and Responses: Sediments - Simple Conceptual Diagram
Introduction to the Sediments module, when to list Sediments as a candidate cause, ways to measure Sediments, simple and detailed conceptual diagrams for Sediments, Sediments module references and literature reviews.
NASA Astrophysics Data System (ADS)
Klapisch, M.; Bar-Shalom, A.
1997-12-01
Busquet's RADIOM model for the effective ionization temperature Tz is an appealing and simple way to introduce non-LTE effects into hydrocodes. The authors report checking the validity of RADIOM in the optically thin case by comparison with two collisional-radiative models: MICCRON (level-by-level) for C and Al, and SCROLL (superconfiguration-by-superconfiguration) for Lu and Au. MICCRON is described in detail. For C and Al, the agreement between the average ion charge <Z> and the corresponding Tz obtained from RADIOM on the one hand, and from MICCRON on the other, is excellent. The absorption spectra at Tz agree very well with those generated by SCROLL near LTE conditions (small β). Farther from LTE (large β) the agreement is still good, but another effective temperature gives an excellent agreement. It is concluded that the model of Busquet is very good in most cases. There is, however, room for improvement when the departure from LTE is more pronounced, for heavy atoms, and for emissivity. Improvement appears possible because the concept of ionization temperature seems to hold in a broader range of parameters.
Hardware-Based Non-Optimum Factors for Launch Vehicle Structural Design
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey; Cerro, Jeffrey A.
2010-01-01
During aerospace vehicle conceptual and preliminary design, empirical non-optimum factors are typically applied to predicted structural component weights to account for undefined manufacturing and design details. Non-optimum factors are developed here for 32 aluminum-lithium 2195 orthogrid panels comprising the liquid hydrogen tank barrel of the Space Shuttle External Tank using measured panel weights and manufacturing drawings. Minimum values for skin thickness, axial and circumferential blade stiffener thickness and spacing, and overall panel thickness are used to estimate individual panel weights. Panel non-optimum factors computed using a coarse weights model range from 1.21 to 1.77, and a refined weights model (including weld lands and skin and stiffener transition details) yields non-optimum factors of between 1.02 and 1.54. Acreage panels have an average 1.24 non-optimum factor using the coarse model, and 1.03 with the refined version. The observed consistency of these acreage non-optimum factors suggests that relatively simple models can be used to accurately predict large structural component weights for future launch vehicles.
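The non-optimum factor itself is a simple ratio of as-built to idealized weight; the panel weights below are hypothetical, not the measured External Tank data:

```python
def non_optimum_factor(measured_weight, predicted_weight):
    """Empirical non-optimum factor: ratio of the as-built (measured)
    component weight to the idealized weight predicted from minimum
    gauges, capturing undefined manufacturing and design details."""
    return measured_weight / predicted_weight

# Hypothetical (measured, predicted) panel weights in kg; the paper's
# per-panel data are not reproduced here.
panels = [(54.2, 43.8), (61.0, 49.5), (47.7, 46.3)]
factors = [non_optimum_factor(m, p) for m, p in panels]
mean_factor = sum(factors) / len(factors)
```

A mean factor computed this way over a family of similar panels is what would be carried forward to scale predicted weights for a future vehicle, as in the paper's 1.03-1.24 acreage averages.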
Aerodynamic Parameters of a UK City Derived from Morphological Data
NASA Astrophysics Data System (ADS)
Millward-Hopkins, J. T.; Tomlin, A. S.; Ma, L.; Ingham, D. B.; Pourkashanian, M.
2013-03-01
Detailed three-dimensional building data and a morphometric model are used to estimate the aerodynamic roughness length z0 and displacement height d over a major UK city (Leeds). Firstly, using an adaptive grid, the city is divided into neighbourhood regions that are each of a relatively consistent geometry throughout. Secondly, for each neighbourhood, a number of geometric parameters are calculated. Finally, these are used as input into a morphometric model that considers the influence of height variability to predict aerodynamic roughness length and displacement height. Predictions are compared with estimations made using standard tables of aerodynamic parameters. The comparison suggests that the accuracy of plan-area-density based tables is likely to be limited, and that height-based tables of aerodynamic parameters may be more accurate for UK cities. The displacement heights in the standard tables are shown to be lower than the current predictions. The importance of geometric details in determining z0 and d is then explored. Height variability is observed to greatly increase the predicted values. However, building footprint shape only has a significant influence upon the predictions when height variability is not considered. Finally, we develop simple relations to quantify the influence of height variation upon predicted z0 and d via the standard deviation of building heights. The difference in these predictions compared to the more complex approach highlights the importance of considering the specific shape of the building-height distributions. Collectively, these results suggest that to accurately predict aerodynamic parameters of real urban areas, height variability must be considered in detail, but it may be acceptable to make simple assumptions about building layout and footprint shape.
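Simple height-based rules of thumb (d on the order of 0.7 times building height, z0 on the order of 0.1 times) can be sketched for comparison; the mean-plus-standard-deviation "effective height" used here is an illustrative stand-in for a variability correction, not the paper's published morphometric relations:

```python
import statistics

def aero_params(building_heights, with_variability=True):
    """Rule-of-thumb aerodynamic parameters: d ~ 0.7 h, z0 ~ 0.1 h.
    As a crude stand-in for a height-variability correction, the
    effective height is taken as mean + std of building heights
    (an illustrative assumption, not the paper's model)."""
    h_mean = statistics.fmean(building_heights)
    h_std = statistics.pstdev(building_heights)
    h_eff = h_mean + h_std if with_variability else h_mean
    return 0.7 * h_eff, 0.1 * h_eff   # (displacement d, roughness z0)

heights = [8.0, 10.0, 12.0, 30.0]     # low-rise block with one tower (m)
d_var, z0_var = aero_params(heights)
d_flat, z0_flat = aero_params(heights, with_variability=False)
```

Even this crude adjustment reproduces the abstract's qualitative finding: accounting for height variability raises both predicted parameters relative to a uniform-height assumption.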
2014-07-25
composition of simple temporal structures to a speaker diarization task with the goal of segmenting conference audio in the presence of an unknown number of... application domains including neuroimaging, diverse document selection, speaker diarization, stock modeling, and target tracking. We detail each of... recall performance than competing methods in a task of discovering articles preferred by the user; a gold-standard speaker diarization method, as...
Robot Control Based On Spatial-Operator Algebra
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo; Kreutz, Kenneth K.; Jain, Abhinandan
1992-01-01
Method for mathematical modeling and control of robotic manipulators based on spatial-operator algebra providing concise representation and simple, high-level theoretical framework for solution of kinematical and dynamical problems involving complicated temporal and spatial relationships. Recursive algorithms derived immediately from abstract spatial-operator expressions by inspection. Transition from abstract formulation through abstract solution to detailed implementation of specific algorithms to compute solution greatly simplified. Complicated dynamical problems like two cooperating robot arms solved more easily.
The Beneficial Role of Random Strategies in Social and Financial Systems
NASA Astrophysics Data System (ADS)
Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea
2013-05-01
In this paper we focus on the beneficial role of random strategies in social sciences by means of simple mathematical and computational models. We briefly review recent results obtained by two of us in previous contributions for the case of the Peter principle and the efficiency of a Parliament. Then, we develop a new application of random strategies to the case of financial trading and discuss in detail our findings about forecasts of markets dynamics.
Use of system identification techniques for improving airframe finite element models using test data
NASA Technical Reports Server (NTRS)
Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.
1991-01-01
A method for using system identification techniques to improve airframe finite element models was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
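A minimal sketch of the numerical core (the paper's sensitivity-matrix construction and physical constraints are not reproduced): given a linear sensitivity matrix S relating changes in physical parameters to changes in the system matrices or modal residuals, a truncated-SVD pseudo-inverse gives the least-squares parameter update.

```python
import numpy as np

def update_parameters(S, residual, tol=1e-8):
    """Least-squares parameter update dp solving S @ dp ~= residual,
    using a truncated SVD pseudo-inverse to handle ill-conditioned S."""
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    s_inv = np.where(s > tol * s[0], 1.0 / s, 0.0)  # drop tiny singular values
    return Vt.T @ (s_inv * (U.T @ residual))

# synthesized example: 3 physical parameters, 5 measured residuals
rng = np.random.default_rng(0)
S = rng.standard_normal((5, 3))
dp_true = np.array([0.1, -0.05, 0.2])
residual = S @ dp_true                  # pseudo-experimental data, as in the paper
dp = update_parameters(S, residual)     # recovers dp_true for this noise-free case
```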
Thalamic neuron models encode stimulus information by burst-size modulation
Elijah, Daniel H.; Samengo, Inés; Montemurro, Marcelo A.
2015-01-01
Thalamic neurons have been long assumed to fire in tonic mode during perceptive states, and in burst mode during sleep and unconsciousness. However, recent evidence suggests that bursts may also be relevant in the encoding of sensory information. Here, we explore the neural code of such thalamic bursts. In order to assess whether the burst code is generic or whether it depends on the detailed properties of each bursting neuron, we analyzed two neuron models incorporating different levels of biological detail. One of the models contained no information of the biophysical processes entailed in spike generation, and described neuron activity at a phenomenological level. The second model represented the evolution of the individual ionic conductances involved in spiking and bursting, and required a large number of parameters. We analyzed the models' input selectivity using reverse correlation methods and information theory. We found that n-spike bursts from both models transmit information by modulating their spike count in response to changes to instantaneous input features, such as slope, phase, amplitude, etc. The stimulus feature that is most efficiently encoded by bursts, however, need not coincide with one of such classical features. We therefore searched for the optimal feature among all those that could be expressed as a linear transformation of the time-dependent input current. We found that bursting neurons transmitted 6 times more information about such more general features. The relevant events in the stimulus were located in a time window spanning ~100 ms before and ~20 ms after burst onset. Most importantly, the neural code employed by the simple and the biologically realistic models was largely the same, implying that the simple thalamic neuron model contains the essential ingredients that account for the computational properties of the thalamic burst code. Thus, our results suggest the n-spike burst code is a general property of thalamic neurons. 
PMID:26441623
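The information-theoretic part of such an analysis can be illustrated with a minimal sketch (synthetic data, not the neuron models in the paper): treat the n-spike burst size as a discrete symbol and measure its mutual information with a binned stimulus feature; shuffling destroys the relationship.

```python
import numpy as np

def mutual_information(x, y):
    """Discrete mutual information (bits) between two label sequences."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            p_ab = np.mean((x == a) & (y == b))
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (np.mean(x == a) * np.mean(y == b)))
    return mi

rng = np.random.default_rng(42)
slope = rng.integers(0, 4, 2000)                          # binned stimulus feature
burst = np.clip(slope + rng.integers(-1, 2, 2000), 0, 4)  # noisy n-spike count

print(mutual_information(burst, slope))                   # bursts carry feature info
print(mutual_information(rng.permutation(burst), slope))  # near zero after shuffle
```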
Lateral interactions and non-equilibrium in surface kinetics
NASA Astrophysics Data System (ADS)
Menzel, Dietrich
2016-08-01
Work modelling reactions between surface species frequently uses Langmuir kinetics, assuming that the layer is in internal equilibrium and that the chemical potential of adsorbates corresponds to that of an ideal gas. Coverage dependences of reacting species and of site blocking are usually treated with simple power-law forms (linear in the simplest case), neglecting that lateral interactions are strong in adsorbate and co-adsorbate layers and may influence kinetics considerably. My research group has in the past investigated many co-adsorbate systems and simple reactions in them. We have collected a number of examples in which strong deviations from simple coverage dependences exist, in blocking, promoting, and selecting reactions. Interactions can range from those between next neighbors to larger distances, and can be quite complex. In addition, internal equilibrium in the layer, as well as equilibrium distributions over product degrees of freedom, can be violated. The latter effect leads to non-equipartition of energy over molecular degrees of freedom (for products) or non-equal response to those of reactants. While such behavior can usually be described by dynamic or kinetic models, the deeper reasons require detailed theoretical analysis. Here, a selection of such cases is reviewed to exemplify these points.
A Reduced-Order Model For Zero-Mass Synthetic Jet Actuators
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.; Vatsa, Veer S.
2007-01-01
Accurate details of the general performance of fluid actuators are desirable over a range of flow conditions, within some predetermined error tolerance. Designers typically model actuators with different levels of fidelity depending on the acceptable level of error in each circumstance. Crude properties of the actuator (e.g., peak mass rate and frequency) may be sufficient for some designs, while detailed information is needed for other applications (e.g., multiple actuator interactions). This work addresses two primary objectives. The first objective is to develop a systematic methodology for approximating realistic 3-D fluid actuators using quasi-1-D reduced-order models. Near full fidelity can be achieved with this approach at a fraction of the cost of full simulation and only a modest increase in cost relative to most actuator models used today. The second objective, which is a direct consequence of the first, is to determine the approximate magnitude of errors committed by actuator model approximations of various fidelities. This objective attempts to identify which model (ranging from simple orifice exit boundary conditions to full numerical simulations of the actuator) is appropriate for a given error tolerance.
LLSURE: local linear SURE-based edge-preserving image filtering.
Qiu, Tianshuang; Wang, Aiqi; Yu, Nannan; Song, Aimin
2013-01-01
In this paper, we propose a novel approach for performing high-quality edge-preserving image filtering. Based on a local linear model and using the principle of Stein's unbiased risk estimate as an estimator for the mean squared error from the noisy image only, we derive a simple explicit image filter which can filter out noise while preserving edges and fine-scale details. Moreover, this filter has a fast and exact linear-time algorithm whose computational complexity is independent of the filtering kernel size; thus, it can be applied to real time image processing tasks. The experimental results demonstrate the effectiveness of the new filter for various computer vision applications, including noise reduction, detail smoothing and enhancement, high dynamic range compression, and flash/no-flash denoising.
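The LLSURE filter proper chooses its parameters by minimizing Stein's unbiased risk estimate from the noisy image; the sketch below shows only the underlying local linear model in its simplest self-guided, fixed-parameter form (guided-filter style), reduced to 1-D for brevity. The regularization eps is hand-picked here, whereas LLSURE estimates it from the data.

```python
import numpy as np

def box(x, r):
    """Sliding-window mean of radius r (1-D, edge-padded), via cumulative sums."""
    xp = np.pad(x, r, mode='edge')
    c = np.cumsum(np.concatenate(([0.0], xp)))
    return (c[2 * r + 1:] - c[:-(2 * r + 1)]) / (2 * r + 1)

def local_linear_filter(p, r=4, eps=1e-2):
    """Edge-preserving filter from a local linear model q = a*p + b,
    solved per window in closed form (self-guided, guided-filter style)."""
    mean_p = box(p, r)
    var_p = box(p * p, r) - mean_p ** 2
    a = var_p / (var_p + eps)        # a ~ 1 at edges, a ~ 0 in flat regions
    b = (1.0 - a) * mean_p
    return box(a, r) * p + box(b, r)

signal = np.concatenate([np.zeros(50), np.ones(50)])            # step edge
noisy = signal + np.random.default_rng(0).normal(0, 0.05, 100)
out = local_linear_filter(noisy)     # noise suppressed, step preserved
```

Because every step is a box filter, the cost per sample is independent of the window radius, which is the linear-time property the abstract highlights.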
CADDIS Volume 2. Sources, Stressors and Responses: Ammonia - Simple Conceptual Diagram
Introduction to the ammonia module, when to list ammonia as a candidate cause, ways to measure ammonia, simple and detailed conceptual diagrams for ammonia, literature reviews and references for the ammonia module.
Simulated Carbon Cycling in a Model Microbial Mat.
NASA Astrophysics Data System (ADS)
Decker, K. L.; Potter, C. S.
2006-12-01
We present here the novel addition of detailed organic carbon cycling to our model of a hypersaline microbial mat ecosystem. This ecosystem model, MBGC (Microbial BioGeoChemistry), simulates carbon fixation through oxygenic and anoxygenic photosynthesis, and the release of C and electrons for microbial heterotrophs via cyanobacterial exudates and also via a pool of dead cells. Previously in MBGC, the organic portion of the carbon cycle was simplified into a black-box rate of accumulation of simple and complex organic compounds based on photosynthesis and mortality rates. We will discuss the novel inclusion of fermentation as a source of carbon and electrons for use in methanogenesis and sulfate reduction, and the influence of photorespiration on labile carbon exudation rates in cyanobacteria. We will also discuss the modeling of decomposition of dead cells and the ultimate release of inorganic carbon. The detailed modeling of organic carbon cycling is important to the accurate representation of inorganic carbon flux through the mat, as well as to accurate representation of growth models of the heterotrophs under different environmental conditions. Because the model ecosystem is an analog of ancient microbial mats that had huge impacts on the atmosphere of the early Earth, MBGC can be useful as a biological component of either early-Earth models or models of other planets that potentially harbor life.
Rocket exhaust ground cloud/atmospheric interactions
NASA Technical Reports Server (NTRS)
Hwang, B.; Gould, R. K.
1978-01-01
An attempt to identify and minimize the uncertainties and potential inaccuracies of the NASA Multilayer Diffusion Model (MDM) is performed using data from selected Titan 3 launches. The study is based on detailed parametric calculations using the MDM code and a comparative study of several other diffusion models, the NASA measurements, and the MDM. The results are discussed and evaluated. In addition, the physical/chemical processes taking place during the rocket cloud rise are analyzed. The exhaust properties and the deluge water effects are evaluated. A time-dependent model for two aerosol coagulations is developed and documented. Calculations using this model for dry deposition during cloud rise are made. A simple model for calculating physical properties such as temperature and air mass entrainment during cloud rise is also developed and incorporated with the aerosol model.
CP violation in multibody B decays from QCD factorization
NASA Astrophysics Data System (ADS)
Klein, Rebecca; Mannel, Thomas; Virto, Javier; Vos, K. Keri
2017-10-01
We test a data-driven approach based on QCD factorization for charmless three-body B-decays by confronting it to measurements of CP violation in B - → π - π + π -. While some of the needed non-perturbative objects can be directly extracted from data, some others can, so far, only be modelled. Although this approach is currently model dependent, we comment on the perspectives to reduce this model dependence. While our model naturally accommodates the gross features of the Dalitz distribution, it cannot quantitatively explain the details seen in the current experimental data on local CP asymmetries. We comment on possible refinements of our simple model and conclude by briefly discussing a possible extension of the model to large invariant masses, where large local CP asymmetries have been measured.
O'Brien, Susan H; Cook, Aonghais S C P; Robinson, Robert A
2017-10-01
Assessing the potential impact of additional mortality from anthropogenic causes on animal populations requires detailed demographic information. However, these data are frequently lacking, making simple algorithms, which require little data, appealing. Because of their simplicity, these algorithms often rely on implicit assumptions, some of which may be quite restrictive. Potential Biological Removal (PBR) is a simple harvest model that estimates the number of additional mortalities that a population can theoretically sustain without causing population extinction. However, PBR relies on a number of implicit assumptions, particularly around density dependence and population trajectory that limit its applicability in many situations. Among several uses, it has been widely employed in Europe in Environmental Impact Assessments (EIA), to examine the acceptability of potential effects of offshore wind farms on marine bird populations. As a case study, we use PBR to estimate the number of additional mortalities that a population with characteristics typical of a seabird population can theoretically sustain. We incorporated this level of additional mortality within Leslie matrix models to test assumptions within the PBR algorithm about density dependence and current population trajectory. Our analyses suggest that the PBR algorithm identifies levels of mortality which cause population declines for most population trajectories and forms of population regulation. Consequently, we recommend that practitioners do not use PBR in an EIA context for offshore wind energy developments. Rather than using simple algorithms that rely on potentially invalid implicit assumptions, we recommend use of Leslie matrix models for assessing the impact of additional mortality on a population, enabling the user to explicitly define assumptions and test their importance. Copyright © 2017 Elsevier Ltd. All rights reserved.
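For reference, the PBR algorithm itself is essentially one line (Wade 1998), and the paper's test couples it to Leslie matrix projections. The sketch below uses invented illustrative numbers and a generic three-stage matrix, not the seabird parameterization or density-dependence scenarios examined in the paper.

```python
import numpy as np

def pbr(n_min, r_max, f_r):
    """Potential Biological Removal (Wade 1998): additional mortalities a
    population can theoretically sustain, given a minimum population estimate
    n_min, maximum growth rate r_max, and recovery factor f_r."""
    return 0.5 * r_max * f_r * n_min

# toy long-lived-seabird-like Leslie matrix: low fecundity, high adult survival
leslie = np.array([[0.0, 0.0, 0.3],    # fecundity from adults only
                   [0.7, 0.0, 0.0],    # juvenile survival
                   [0.0, 0.8, 0.92]])  # sub-adult and adult survival
growth = np.max(np.abs(np.linalg.eigvals(leslie)))   # asymptotic lambda ~ 1.07
removal = pbr(n_min=10000, r_max=growth - 1.0, f_r=0.5)
```

A full check along the paper's lines would subtract `removal` from the matrix projection each year and compare trajectories under different forms of density dependence.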
CADDIS Volume 2. Sources, Stressors and Responses: Temperature - Detailed Conceptual Diagram
Introduction to the temperature module, when to list temperature as a candidate cause, ways to measure temperature, simple and detailed conceptual diagrams for temperature, temperature module references and literature reviews.
CADDIS Volume 2. Sources, Stressors and Responses: Sediments - Detailed Conceptual Diagram
Introduction to the Sediments module, when to list Sediments as a candidate cause, ways to measure Sediments, simple and detailed conceptual diagrams for Sediments, Sediments module references and literature reviews.
CADDIS Volume 2. Sources, Stressors and Responses: Herbicides - Detailed Conceptual Diagram
Introduction to the herbicides module, when to list herbicides as a candidate cause, ways to measure herbicides, simple and detailed conceptual diagrams for herbicides, herbicides module references and literature reviews.
CADDIS Volume 2. Sources, Stressors and Responses: Insecticides - Detailed Conceptual Diagram
Introduction to the insecticides module, when to list insecticides as a candidate cause, ways to measure insecticides, simple and detailed conceptual diagrams for insecticides, insecticides module references and literature reviews.
NASA Technical Reports Server (NTRS)
Van Dyke, Michael B.
2014-01-01
During random vibration testing of electronic boxes there is often a desire to know the dynamic response of certain internal printed wiring boards (PWBs) for the purpose of monitoring the response of sensitive hardware or for post-test forensic analysis in support of anomaly investigation. Due to restrictions on internally mounted accelerometers for most flight hardware there is usually no means to empirically observe the internal dynamics of the unit, so one must resort to crude and highly uncertain approximations. One common practice is to apply Miles Equation, which does not account for the coupled response of the board in the chassis, resulting in significant over- or under-prediction. This paper explores the application of simple multiple-degree-of-freedom lumped parameter modeling to predict the coupled random vibration response of the PWBs in their fundamental modes of vibration. A simple tool using this approach could be used during or following a random vibration test to interpret vibration test data from a single external chassis measurement to deduce internal board dynamics by means of a rapid correlation analysis. Such a tool might also be useful in early design stages as a supplemental analysis to a more detailed finite element analysis to quickly prototype and analyze the dynamics of various design iterations. After developing the theoretical basis, a lumped parameter modeling approach is applied to an electronic unit for which both external and internal test vibration response measurements are available for direct comparison. Reasonable correlation of the results demonstrates the potential viability of such an approach. Further development of the preliminary approach presented in this paper will involve correlation with detailed finite element models and additional relevant test data.
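As context for the criticism above, Miles' equation reduces the board to an uncoupled single-degree-of-freedom oscillator on a rigid base driven by a flat input spectrum; this sketch shows that baseline calculation (the numbers are invented examples), which is exactly what a coupled chassis-board lumped-parameter model improves on.

```python
import math

def miles_grms(f_n, q, asd):
    """Miles' equation: RMS acceleration response of a 1-DOF system with
    natural frequency f_n (Hz) and amplification Q, driven by a flat input
    acceleration spectral density asd (g^2/Hz). Ignores chassis-board
    coupling, hence the over- or under-prediction noted in the text."""
    return math.sqrt(math.pi / 2.0 * f_n * q * asd)

# e.g. a board mode at 200 Hz with Q = 10 under a 0.04 g^2/Hz input
g_rms = miles_grms(200.0, 10.0, 0.04)   # ~11.2 g RMS
```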
Hypergeometric Equation in Modeling Relativistic Isotropic Sphere
NASA Astrophysics Data System (ADS)
Thirukkanesh, S.; Ragel, F. C.
2014-04-01
We study the Einstein system of equations in static spherically symmetric spacetimes. We obtained classes of exact solutions to the Einstein system by transforming the condition for pressure isotropy to a hypergeometric equation, choosing a rational form for one of the gravitational potentials. The solutions are given in a simple form, a desirable requisite for studying the behavior of relativistic compact objects in detail. A physical analysis indicates that our models satisfy all the fundamental requirements of a realistic star and match smoothly with the exterior Schwarzschild metric. The derived masses and densities are consistent with previously reported experimental and theoretical studies describing strange stars. The models satisfy the standard energy conditions required by normal matter.
An electronic implementation of amoeba anticipation
NASA Astrophysics Data System (ADS)
Ziegler, Martin; Ochs, Karlheinz; Hansen, Mirko; Kohlstedt, Hermann
2014-02-01
In nature, the capability of memorizing environmental changes and recalling past events can be observed in unicellular organisms like amoebas. Pershin and Di Ventra have shown that such learning behavior can be mimicked in a simple memristive circuit model consisting of an LC (inductance capacitance) contour and a memristive device. Here, we implement this model experimentally by using an Ag/TiO2- x /Al memristive device. A theoretical analysis of the circuit is used to gain insight into the functionality of this model and to give advice for the circuit implementation. In this respect, the transfer function, resonant frequency, and damping behavior for a varying resistance of the memristive device are discussed in detail.
Jorge-Botana, Guillermo; Olmos, Ricardo; Luzón, José M
2018-01-01
The aim of this paper is to describe and explain one useful computational methodology to model the semantic development of word representation: word maturity. In particular, the methodology is based on the longitudinal word monitoring created by Kireyev and Landauer, using latent semantic analysis for the representation of lexical units. The paper is divided into two parts. First, the steps required to model the development of the meaning of words are explained in detail. We describe the technical and theoretical aspects of each step. Second, we provide a simple example of application of this methodology with some simple tools that can be used by applied researchers. This paper can serve as a user-friendly guide for researchers interested in modeling changes in the semantic representations of words. Some current aspects of the technique and future directions are also discussed. WIREs Cogn Sci 2018, 9:e1457. doi: 10.1002/wcs.1457 This article is categorized under: Computer Science > Natural Language Processing; Linguistics > Language Acquisition; Psychology > Development and Aging. © 2017 Wiley Periodicals, Inc.
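The final scoring step of word maturity can be summarized as the cosine between a word's vector in an intermediate (e.g. grade-level) semantic space and its vector in the adult space. The sketch below uses toy three-dimensional vectors, not real LSA spaces, and omits the corpus construction and space-alignment steps the paper describes.

```python
import numpy as np

def word_maturity(child_vec, adult_vec):
    """Maturity score as cosine similarity between a word's vector in an
    intermediate semantic space and in the adult space (spaces assumed
    already aligned; the alignment step is omitted here)."""
    c, a = np.asarray(child_vec, float), np.asarray(adult_vec, float)
    return float(c @ a / (np.linalg.norm(c) * np.linalg.norm(a)))

# toy LSA-style vectors for one word at three successive "ages"
adult = np.array([0.9, 0.1, 0.4])
trajectory = [word_maturity(v, adult)
              for v in ([0.1, 0.9, 0.2], [0.5, 0.5, 0.3], [0.88, 0.12, 0.41])]
# trajectory rises toward 1 as the word's meaning converges on the adult one
```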
Quantitative proteomic analysis reveals a simple strategy of global resource allocation in bacteria
Hui, Sheng; Silverman, Josh M; Chen, Stephen S; Erickson, David W; Basan, Markus; Wang, Jilong; Hwa, Terence; Williamson, James R
2015-01-01
A central aim of cell biology is to understand the strategy of gene expression in response to the environment. Here, we study gene expression response to metabolic challenges in exponentially growing Escherichia coli using mass spectrometry. Despite enormous complexity in the details of the underlying regulatory network, we find that the proteome partitions into several coarse-grained sectors, with each sector's total mass abundance exhibiting positive or negative linear relations with the growth rate. The growth rate-dependent components of the proteome fractions comprise about half of the proteome by mass, and their mutual dependencies can be characterized by a simple flux model involving only two effective parameters. The success and apparent generality of this model arise from tight coordination between proteome partition and metabolism, suggesting a principle for resource allocation in the proteome economy of the cell. This strategy of global gene regulation should serve as a basis for future studies on gene expression and constructing synthetic biological circuits. Coarse graining may be an effective approach to derive predictive phenomenological models for other ‘omics’ studies. PMID:25678603
Fish robotics and hydrodynamics
NASA Astrophysics Data System (ADS)
Lauder, George
2010-11-01
Studying the fluid dynamics of locomotion in freely-swimming fishes is challenging due to difficulties in controlling fish behavior. To provide better control over fish-like propulsive systems we have constructed a variety of fish-like robotic test platforms that range from highly biomimetic models of fins, to simple physical models of body movements during aquatic locomotion. First, we have constructed a series of biorobotic models of fish pectoral fins with 5 fin rays that allow detailed study of fin motion, forces, and fluid dynamics associated with fin-based locomotion. We find that by tuning fin ray stiffness and the imposed motion program we can produce thrust both on the fin outstroke and instroke. Second, we are using a robotic flapping foil system to study the self-propulsion of flexible plastic foils of varying stiffness, length, and trailing edge shape as a means of investigating the fluid dynamic effect of simple changes in the properties of undulating bodies moving through water. We find unexpected non-linear stiffness-dependent effects of changing foil length on self-propelled speed, as well as significant effects of trailing edge shape on foil swimming speed.
TLS from fundamentals to practice
Urzhumtsev, Alexandre; Afonine, Pavel V.; Adams, Paul D.
2014-01-01
The Translation-Libration-Screw-rotation (TLS) model of rigid-body harmonic displacements introduced in crystallography by Schomaker & Trueblood (1968) is now a routine tool in macromolecular studies and is a feature of most modern crystallographic structure refinement packages. In this review we consider a number of simple examples that illustrate important features of the TLS model. Based on these examples, simplified formulae are given for several special cases that may occur in structure modeling and refinement. The derivation of general TLS formulae from basic principles is also provided. This manuscript describes the principles of TLS modeling, as well as some select algorithmic details for practical application. An extensive list of references to applications of TLS in macromolecular crystallographic refinement is provided. PMID:25249713
People adopt optimal policies in simple decision-making, after practice and guidance.
Evans, Nathan J; Brown, Scott D
2017-04-01
Organisms making repeated simple decisions are faced with a tradeoff between urgent and cautious strategies. While animals can adopt a statistically optimal policy for this tradeoff, findings about human decision-makers have been mixed. Some studies have shown that people can optimize this "speed-accuracy tradeoff", while others have identified a systematic bias towards excessive caution. These issues have driven theoretical development and spurred debate about the nature of human decision-making. We investigated a potential resolution to the debate, based on two factors that routinely differ between human and animal studies of decision-making: the effects of practice, and of longer-term feedback. Our study replicated the finding that most people, by default, are overly cautious. When given both practice and detailed feedback, people moved rapidly towards the optimal policy, with many participants reaching optimality with less than 1 h of practice. Our findings have theoretical implications for cognitive and neural models of simple decision-making, as well as methodological implications.
The Morphology and Uniformity of Circumstellar OH/H2O Masers around OH/IR Stars
NASA Astrophysics Data System (ADS)
Felli, Derek Sean
Even though low mass stars (< 8 solar masses) are far more numerous, the more massive stars drive the chemical evolution of galaxies from which the next generation of stars and planets can form. Understanding the mass loss of asymptotic giant branch stars contributes to our understanding of the chemical evolution of the galaxy, stellar populations, and star formation history. Stars with mass > 8 solar masses go supernova. In both cases, these stars enrich their environments with elements heavier than hydrogen and helium. While some general information about how stars die and form planetary nebulae is known, specific details are missing due to a lack of high-resolution observations and analysis of the intermediate stages. For example, we know that mass loss in stars creates morphologically diverse planetary nebulae, but we do not know the uniformity of these processes, and therefore lack detailed models to better predict how spherically symmetric stars form asymmetric nebulae. We have selected a specific group of late-stage stars and observed them at different scales to reveal the uniformity of mass loss through different layers close to the star. This includes observing nearby masers that trace the molecular shell structure around these stars. This study revealed detailed structure that was analyzed for uniformity to place constraints on how the mass loss processes behave in models. These results will feed into our ability to create more detailed models to better predict the chemical evolution of the next generation of stars and planets.
Optical systems integrated modeling
NASA Technical Reports Server (NTRS)
Shannon, Robert R.; Laskin, Robert A.; Brewer, SI; Burrows, Chris; Epps, Harlan; Illingworth, Garth; Korsch, Dietrich; Levine, B. Martin; Mahajan, Vini; Rimmer, Chuck
1992-01-01
An integrated modeling capability that provides the tools by which entire optical systems and instruments can be simulated and optimized is a key technology development, applicable to all mission classes, especially astrophysics. Many of the future missions require optical systems that are physically much larger than anything flown before and yet must retain the characteristic sub-micron diffraction limited wavefront accuracy of their smaller precursors. It is no longer feasible to follow the path of 'cut and test' development; the sheer scale of these systems precludes many of the older techniques that rely upon ground evaluation of full size engineering units. The ability to accurately model (by computer) and optimize the entire flight system's integrated structural, thermal, and dynamic characteristics is essential. Two distinct integrated modeling capabilities are required. These are an initial design capability and a detailed design and optimization system. The content of an initial design package is shown. It would be a modular, workstation based code which allows preliminary integrated system analysis and trade studies to be carried out quickly by a single engineer or a small design team. A simple concept for a detailed design and optimization system is shown. This is a linkage of interface architecture that allows efficient interchange of information between existing large specialized optical, control, thermal, and structural design codes. The computing environment would be a network of large mainframe machines and its users would be project level design teams. More advanced concepts for detailed design systems would support interaction between modules and automated optimization of the entire system. Technology assessment and development plans for integrated package for initial design, interface development for detailed optimization, validation, and modeling research are presented.
O’Brien, J. Patrick; Malvankar, Nikhil S.
2017-01-01
Anaerobic microorganisms play a central role in several environmental processes and regulate global biogeochemical cycling of nutrients and minerals. Many anaerobic microorganisms are important for the production of bioenergy and biofuels. However, the major hurdle in studying anaerobic microorganisms in the laboratory is the requirement for sophisticated and expensive gassing stations and glove boxes to create and maintain the anaerobic environment. This appendix presents a simple design for a gassing station that can be used readily by an inexperienced investigator for cultivation of anaerobic microorganisms. In addition, this appendix also details the low-cost assembly of bioelectrochemical systems and outlines a simplified procedure for cultivating and analyzing bacterial cell cultures and biofilms that produce electric current, using Geobacter sulfurreducens as a model organism. PMID:27858972
Born-Oppenheimer approximation for a singular system
NASA Astrophysics Data System (ADS)
Akbas, Haci; Turgut, O. Teoman
2018-01-01
We discuss a simple singular system in one dimension, two heavy particles interacting with a light particle via an attractive contact interaction and not interacting among themselves. It is natural to apply the Born-Oppenheimer approximation to this problem. We present a detailed discussion of this approach; the advantage of this simple model is that one can estimate the error terms self-consistently. Moreover, a Fock space approach to this problem is presented where an expansion can be proposed to get higher order corrections. A slight modification of the same problem in which the light particle is relativistic is discussed in a later section by neglecting pair creation processes. Here, the second quantized description is more challenging, but with some care, one can recover the first order expression exactly.
Lunar exploration for resource utilization
NASA Technical Reports Server (NTRS)
Duke, Michael B.
1992-01-01
The strategy for developing resources on the Moon depends on the stage of space industrialization. A case is made for first developing the resources needed to provide simple materials required in large quantities for space operations. Propellants, shielding, and structural materials fall into this category. As the enterprise grows, it will be feasible to develop additional sources - those more difficult to obtain or required in smaller quantities. Thus, the first materials processing on the Moon will probably take the abundant lunar regolith, extract from it major mineral or glass species, and do relatively simple chemical processing. We need to conduct a lunar remote sensing mission to determine the global distribution of features, geophysical properties, and composition of the Moon, information which will serve as the basis for detailed models of and engineering decisions about a lunar mine.
Design Through Manufacturing: The Solid Model - Finite Element Analysis Interface
NASA Technical Reports Server (NTRS)
Rubin, Carol
2003-01-01
State-of-the-art computer aided design (CAD) presently affords engineers the opportunity to create solid models of machine parts which reflect every detail of the finished product. Ideally, these models should fulfill two very important functions: (1) they must provide numerical control information for automated manufacturing of precision parts, and (2) they must enable analysts to easily evaluate the stress levels (using finite element analysis - FEA) for all structurally significant parts used in space missions. Today's state-of-the-art CAD programs perform function (1) very well, providing an excellent model for precision manufacturing. But they do not provide a straightforward and simple means of automating the translation from CAD to FEA models, especially for aircraft-type structures. The research performed during the fellowship period investigated the transition process from the solid CAD model to the FEA stress analysis model with the final goal of creating an automatic interface between the two. During the period of the fellowship a detailed multi-year program for the development of such an interface was created. The ultimate goal of this program will be the development of a fully parameterized automatic ProE/FEA translator for parts and assemblies, with the incorporation of data base management into the solution, and ultimately including computational fluid dynamics and thermal modeling in the interface.
Oakes, J M; Feldman, H A
2001-02-01
Nonequivalent controlled pretest-posttest designs are central to evaluation science, yet no practical and unified approach for estimating power in the two most widely used analytic approaches to these designs exists. This article fills the gap by presenting and comparing useful, unified power formulas for ANCOVA and change-score analyses, indicating the implications of each on sample-size requirements. The authors close with practical recommendations for evaluators. Mathematical details and a simple spreadsheet approach are included in appendices.
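As a rough illustration of how the two analyses differ in their sample-size implications: under the usual normal approximation, the variance factor is 1 - ρ² for ANCOVA and 2(1 - ρ) for change-score analysis, with ρ the pretest-posttest correlation. The sketch below (Python, not the authors' spreadsheet; the effect size, SD, and defaults are illustrative) computes a per-group n from these factors.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, rho, method, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-arm pretest-posttest design.

    delta: treatment effect on the posttest; sigma: posttest SD;
    rho: pretest-posttest correlation. Variance factors follow the
    standard normal-approximation results: ANCOVA uses (1 - rho**2),
    change-score analysis uses 2*(1 - rho).
    """
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    factor = {"ancova": 1 - rho**2, "change": 2 * (1 - rho)}[method]
    return ceil(2 * (sigma / delta) ** 2 * factor * (za + zb) ** 2)
```

Because 1 - ρ² = (1 - ρ)(1 + ρ) ≤ 2(1 - ρ) for ρ in [0, 1], ANCOVA never requires more subjects than change-score analysis under these approximations.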
Entanglement renormalization and topological order.
Aguado, Miguel; Vidal, Guifré
2008-02-22
The multiscale entanglement renormalization ansatz (MERA) is argued to provide a natural description for topological states of matter. The case of Kitaev's toric code is analyzed in detail and shown to possess a remarkably simple MERA description leading to distillation of the topological degrees of freedom at the top of the tensor network. Kitaev states on an infinite lattice are also shown to be a fixed point of the renormalization group flow associated with entanglement renormalization. All of these results generalize to arbitrary quantum double models.
Detailed analysis of the self-discharge of supercapacitors
NASA Astrophysics Data System (ADS)
Kowal, Julia; Avaroglu, Esin; Chamekh, Fahmi; Šenfelds, Armands; Thien, Tjark; Wijaya, Dhanny; Sauer, Dirk Uwe
Self-discharge is an important performance factor when using supercapacitors. Voltage losses in the range of 5-60% occur over two weeks. Experiments show a dependency of the self-discharge rate on various parameters such as temperature, charge duration and short-term history. In this paper, self-discharge of three commercially available supercapacitors was measured under various conditions. Based on different measurements, the impact of the influence factors is identified. A simple model to explain parts of the voltage decay is presented.
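The decay mechanisms usually invoked in such models are leakage through a parallel resistance (exponential in time) and diffusion-controlled charge redistribution (a voltage drop growing as the square root of time). A minimal sketch combining the two, with entirely hypothetical parameter values rather than the model fitted in the paper:

```python
import math

def self_discharge(v0, t_hours, r_leak_ohm, c_farad, m_diff):
    """Toy open-circuit voltage decay combining the two mechanisms commonly
    invoked for supercapacitor self-discharge (all parameters hypothetical):
      - ohmic leakage through a parallel resistance: exponential decay,
      - charge redistribution / diffusion: a sqrt(t) voltage drop.
    """
    t = t_hours * 3600.0
    v_leak = v0 * math.exp(-t / (r_leak_ohm * c_farad))
    return max(v_leak - m_diff * math.sqrt(t), 0.0)
```

With the illustrative numbers in the test below, the two-week loss lands in the 5-60% range quoted in the abstract; fitting which mechanism dominates under which conditions is exactly what the paper's measurements address.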
Granular convection observed by magnetic resonance imaging.
Ehrichs, E E; Jaeger, H M; Karczmar, G S; Knight, J B; Kuperman, V Y; Nagel, S R
1995-03-17
Vibrations in a granular material can spontaneously produce convection rolls reminiscent of those seen in fluids. Magnetic resonance imaging provides a sensitive and noninvasive probe for the detection of these convection currents, which have otherwise been difficult to observe. A magnetic resonance imaging study of convection in a column of poppy seeds yielded data about the detailed shape of the convection rolls and the depth dependence of the convection velocity. The velocity was found to decrease exponentially with depth; a simple model for this behavior is presented here.
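The reported depth dependence, v(z) = v0 * exp(-z / lambda), is simple enough to sketch: the code below evaluates the profile and recovers its two parameters from sampled speeds by a log-linear least-squares fit (v0 and lambda here are illustrative, not the measured poppy-seed values).

```python
import math

def convection_velocity(depth, v0, lam):
    """Exponential depth profile v(z) = v0 * exp(-z / lam)."""
    return v0 * math.exp(-depth / lam)

def fit_decay(depths, speeds):
    """Recover (v0, lam) by least squares on log(speed) vs depth."""
    n = len(depths)
    ys = [math.log(v) for v in speeds]
    xbar, ybar = sum(depths) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(depths, ys))
             / sum((x - xbar) ** 2 for x in depths))
    return math.exp(ybar - slope * xbar), -1.0 / slope
```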
Low-temperature dependence of the thermomagnetic transport properties of the SrTiO3/LaAlO3 interface
NASA Astrophysics Data System (ADS)
Lerer, S.; Ben Shalom, M.; Deutscher, G.; Dagan, Y.
2011-08-01
Transport measurements are reported, including Hall, Seebeck, and Nernst effects. All of these transport properties exhibit anomalous field and temperature dependencies, with a change of behavior observed at H˜1.5 T and T˜15 K. The low-temperature, low-field behaviors of all transport properties were reconciled using a simple two-band analysis. A more detailed model is required in order to explain the high-magnetic-field regime.
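A standard two-band analysis sums the Drude magneto-conductivity tensors of the carrier populations and inverts the total. A sketch (the carrier densities and mobilities used in any example are illustrative, not the fitted SrTiO3/LaAlO3 values):

```python
def hall_resistivity(b, bands):
    """Multi-band Hall resistivity from summed Drude conductivity tensors.

    b: magnetic field [T]; bands: list of (n [m^-3], mu [m^2/Vs], sign),
    with sign = -1 for electrons and +1 for holes.
    """
    e = 1.602176634e-19  # elementary charge [C]
    sxx = sum(n * e * mu / (1 + (mu * b) ** 2) for n, mu, s in bands)
    sxy = sum(s * n * e * mu ** 2 * b / (1 + (mu * b) ** 2) for n, mu, s in bands)
    return sxy / (sxx ** 2 + sxy ** 2)  # tensor inversion, rho_xy component
```

For a single band this reduces to the textbook rho_xy = -B/(ne); with two bands of different mobilities, rho_xy/B acquires the field dependence that a one-band picture cannot produce, which is the kind of behavior the authors reconcile at low field.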
CADDIS Volume 2. Sources, Stressors and Responses: Ammonia - Detailed Conceptual Diagram
Introduction to the ammonia module, when to list ammonia as a candidate cause, ways to measure ammonia, simple and detailed conceptual diagrams for ammonia, literature reviews and references for the ammonia module.
CADDIS Volume 2. Sources, Stressors and Responses: Nutrients - Detailed Conceptual Diagram (N)
Introduction to the nutrients module, when to list nutrients as a candidate cause, ways to measure nutrients, simple and detailed conceptual diagrams for nutrients, nutrients module references and literature reviews.
CADDIS Volume 2. Sources, Stressors and Responses: Nutrients - Detailed Conceptual Diagram (P)
Introduction to the nutrients module, when to list nutrients as a candidate cause, ways to measure nutrients, simple and detailed conceptual diagrams for nutrients, nutrients module references and literature reviews.
Hetherington, James P J; Warner, Anne; Seymour, Robert M
2006-04-22
Systems Biology requires that biological modelling is scaled up from small components to system level. This can produce exceedingly complex models, which obscure understanding rather than facilitate it. The successful use of highly simplified models would resolve many of the current problems faced in Systems Biology. This paper questions whether the conclusions of simple mathematical models of biological systems are trustworthy. The simplification of a specific model of calcium oscillations in hepatocytes is examined in detail, and the conclusions drawn from this scrutiny generalized. We formalize our choice of simplification approach through the use of functional 'building blocks'. A collection of models is constructed, each a progressively more simplified version of a well-understood model. The limiting model is a piecewise linear model that can be solved analytically. We find that, as expected, in many cases the simpler models produce incorrect results. However, when we perform a sensitivity analysis, examining which aspects of the behaviour of the system are controlled by which parameters, the conclusions of the simple model often agree with those of the richer model. The hypothesis that the simplified model retains no information about the real sensitivities of the unsimplified model can be very strongly ruled out by treating the simplification process as a pseudo-random perturbation on the true sensitivity data. We conclude that sensitivity analysis is, therefore, of great importance to the analysis of simple mathematical models in biology. Our comparisons reveal which results of the sensitivity analysis regarding calcium oscillations in hepatocytes are robust to the simplifications necessarily involved in mathematical modelling.
For example, we find that if a treatment is observed to strongly decrease the period of the oscillations while increasing the proportion of the cycle during which cellular calcium concentrations are rising, without affecting the inter-spike or maximum calcium concentrations, then it is likely that the treatment is acting on the plasma membrane calcium pump.
Anomalous evolution of Ar metastable density with electron density in high density Ar discharge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Min; Chang, Hong-Young; You, Shin-Jae
2011-10-15
Recently, an anomalous evolution of argon metastable density with plasma discharge power (electron density) was reported [A. M. Daltrini, S. A. Moshkalev, T. J. Morgan, R. B. Piejak, and W. G. Graham, Appl. Phys. Lett. 92, 061504 (2008)]. Although the importance of the metastable atom and its density has been reported in much of the literature, the basic physics behind the anomalous evolution of metastable density has not yet been clearly understood. In this study, we investigated a simple global model to elucidate the underlying physics of the anomalous evolution of argon metastable density with the electron density. On the basis of the proposed simple model, we reproduced the anomalous evolution of the metastable density and disclosed the detailed physics behind the anomalous result. Drastic changes of the dominant mechanisms for the population and depopulation processes of Ar metastable atoms with electron density, which take place even in the relatively low electron density regime, are the clue to understanding the result.
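A global model of this kind reduces to a particle balance for the metastable density. The toy steady-state balance below, with invented rate coefficients, shows the simplest ingredient of such an analysis: once electron-driven depopulation (e.g. stepwise ionization) overtakes wall losses, the metastable density stops tracking the electron density and saturates. The full model in the paper includes more processes than this sketch.

```python
def metastable_density(n_e, n_g=1e20, k_exc=1e-17, k_step=1e-13, nu_wall=1e3):
    """Steady-state metastable density from a toy particle balance
    (all rate coefficients hypothetical; k_* in m^3/s, nu_wall in 1/s):
      production:   k_exc * n_e * n_g    (ground-state excitation)
      destruction:  nu_wall * n_m        (diffusion to walls)
                  + k_step * n_e * n_m   (stepwise ionization / quenching)
    Setting production equal to destruction and solving for n_m gives:
    """
    return k_exc * n_e * n_g / (nu_wall + k_step * n_e)
```

At low n_e the density grows linearly; at high n_e it saturates at k_exc*n_g/k_step, illustrating how the dominant depopulation channel changes with electron density.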
Cellular reprogramming dynamics follow a simple 1D reaction coordinate
NASA Astrophysics Data System (ADS)
Teja Pusuluri, Sai; Lang, Alex H.; Mehta, Pankaj; Castillo, Horacio E.
2018-01-01
Cellular reprogramming, the conversion of one cell type to another, induces global changes in gene expression involving thousands of genes, and understanding how cells globally alter their gene expression profile during reprogramming is an ongoing problem. Here we reanalyze time-course data on cellular reprogramming from differentiated cell types to induced pluripotent stem cells (iPSCs) and show that gene expression dynamics during reprogramming follow a simple 1D reaction coordinate. This reaction coordinate is independent of both the time it takes to reach the iPSC state as well as the details of the experimental protocol used. Using Monte-Carlo simulations, we show that such a reaction coordinate emerges from epigenetic landscape models where cellular reprogramming is viewed as a ‘barrier-crossing’ process between cell fates. Overall, our analysis and model suggest that gene expression dynamics during reprogramming follow a canonical trajectory consistent with the idea of an ‘optimal path’ in gene expression space for reprogramming.
A sophisticated CAD tool for the creation of complex models for electromagnetic interaction analysis
NASA Astrophysics Data System (ADS)
Dion, Marc; Kashyap, Satish; Louie, Aloisius
1991-06-01
This report describes the essential features of the MS-DOS version of DIDEC-DREO, an interactive program for creating wire grid, surface patch, and cell models of complex structures for electromagnetic interaction analysis. It uses the device-independent graphics library DIGRAF and the graphics kernel system HALO, and can be executed on systems with various graphics devices. Complicated structures can be created by direct alphanumeric keyboard entry, digitization of blueprints, conversion from existing geometric structure files, and merging of simple geometric shapes. A completed DIDEC geometric file may then be converted to the format required for input to a variety of time domain and frequency domain electromagnetic interaction codes. This report gives a detailed description of the program DIDEC-DREO, its installation, and its theoretical background. Each available interactive command is described. The associated program HEDRON which generates simple geometric shapes, and other programs that extract the current amplitude data from electromagnetic interaction code outputs, are also discussed.
Integrating individual movement behaviour into dispersal functions.
Heinz, Simone K; Wissel, Christian; Conradt, Larissa; Frank, Karin
2007-04-21
Dispersal functions are an important tool for integrating dispersal into complex models of population and metapopulation dynamics. Most approaches in the literature are very simple, with the dispersal functions containing only one or two parameters which summarise all the effects of movement behaviour as for example different movement patterns or different perceptual abilities. The summarising nature of these parameters makes assessing the effect of one particular behavioural aspect difficult. We present a way of integrating movement behavioural parameters into a particular dispersal function in a simple way. Using a spatial individual-based simulation model for simulating different movement behaviours, we derive fitting functions for the functional relationship between the parameters of the dispersal function and several details of movement behaviour. This is done for three different movement patterns (loops, Archimedean spirals, random walk). Additionally, we provide measures which characterise the shape of the dispersal function and are interpretable in terms of landscape connectivity. This allows an ecological interpretation of the relationships found.
Nonequilibrium Langevin dynamics: A demonstration study of shear flow fluctuations in a simple fluid
NASA Astrophysics Data System (ADS)
Belousov, Roman; Cohen, E. G. D.; Rondoni, Lamberto
2017-08-01
The present paper is based on a recent success of the second-order stochastic fluctuation theory in describing time autocorrelations of equilibrium and nonequilibrium physical systems. In particular, it was shown to yield values of the related deterministic parameters of the Langevin equation for a Couette flow in a microscopic molecular dynamics model of a simple fluid. In this paper we find all the remaining constants of the stochastic dynamics, which then is simulated numerically and compared directly with the original physical system. By using these data, we study in detail the accuracy and precision of a second-order Langevin model for nonequilibrium physical systems theoretically and computationally. We find an intriguing relation between an applied external force and cumulants of the resulting flow fluctuations. This is characterized by a linear dependence of an athermal cumulant ratio, an apposite quantity introduced here. In addition, we discuss how the order of a given Langevin dynamics can be raised systematically by introducing colored noise.
Spatial-temporal modeling of malware propagation in networks.
Chen, Zesheng; Ji, Chuanyi
2005-09-01
Network security is an important task of network management. One threat to network security is malware (malicious software) propagation. One type of malware, called topological-scanning malware, spreads based on topology information. The focus of this work is on modeling the spread of topological malwares, which is important for understanding their potential damage and for developing countermeasures to protect the network infrastructure. Our model is motivated by probabilistic graphs, which have been widely investigated in machine learning. We first use a graphical representation to abstract the propagation of malwares that employ different scanning methods. We then use a spatial-temporal random process to describe the statistical dependence of malware propagation in arbitrary topologies. As the spatial dependence is particularly difficult to characterize, the problem becomes how to use simple (i.e., biased) models to approximate the spatially dependent process. In particular, we propose the independent model and the Markov model as simple approximations. We conduct both theoretical analysis and extensive simulations on large networks using both real measurements and synthesized topologies to test the performance of the proposed models. Our results show that the independent model can capture temporal dependence and detailed topology information and, thus, outperforms the previous models, whereas the Markov model incorporates a certain spatial dependence and, thus, achieves a greater accuracy in characterizing both transient and equilibrium behaviors of malware propagation.
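The independent model mentioned here has a compact discrete-time form: a node remains uninfected only if it was uninfected and no infected neighbour transmits. A sketch of one update step (the adjacency list and per-link transmission probability in the example are illustrative):

```python
def independent_model_step(p, adj, beta):
    """One update of the independent approximation for topological
    malware spread: p[i] is the probability node i is infected, adj[i]
    lists i's neighbours, beta is the per-link transmission probability.
    Node i stays uninfected only if it was uninfected AND every infected
    neighbour fails to transmit, treating those events as independent.
    """
    q = []
    for i, nbrs in enumerate(adj):
        not_infected = 1.0 - p[i]
        for j in nbrs:
            not_infected *= 1.0 - beta * p[j]
        q.append(1.0 - not_infected)
    return q
```

Iterating this map over a given topology yields the transient spread curve that such approximations are compared against in simulation.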
Multiscale Modeling of Mesoscale and Interfacial Phenomena
NASA Astrophysics Data System (ADS)
Petsev, Nikolai Dimitrov
With rapidly emerging technologies that feature interfaces modified at the nanoscale, traditional macroscopic models are pushed to their limits to explain phenomena where molecular processes can play a key role. Often, such problems appear to defy explanation when treated with coarse-grained continuum models alone, yet remain prohibitively expensive from a molecular simulation perspective. A prominent example is surface nanobubbles: nanoscopic gaseous domains typically found on hydrophobic surfaces that have puzzled researchers for over two decades due to their unusually long lifetimes. We show how an entirely macroscopic, non-equilibrium model explains many of their anomalous properties, including their stability and abnormally small gas-side contact angles. From this purely transport perspective, we investigate how factors such as temperature and saturation affect nanobubbles, providing numerous experimentally testable predictions. However, recent work also emphasizes the relevance of molecular-scale phenomena that cannot be described in terms of bulk phases or pristine interfaces. This is true for nanobubbles as well, whose nanoscale heights may require molecular detail to capture the relevant physics, in particular near the bubble three-phase contact line. Therefore, there is a clear need for general ways to link molecular granularity and behavior with large-scale continuum models in the treatment of many interfacial problems. In light of this, we have developed a general set of simulation strategies that couple mesoscale particle-based continuum models to molecular regions simulated through conventional molecular dynamics (MD). In addition, we derived a transport model for binary mixtures that opens the possibility for a wide range of applications in biological and drug delivery problems, and is readily reconciled with our hybrid MD-continuum techniques. 
Approaches that couple multiple length scales for fluid mixtures are largely absent in the literature, and we provide a novel and general framework for multiscale modeling of systems featuring one or more dissolved species. This makes it possible to retain molecular detail for parts of the problem that require it while using a simple, continuum description for parts where high detail is unnecessary, reducing the number of degrees of freedom (i.e. number of particles) dramatically. This opens the possibility for modeling ion transport in biological processes and biomolecule assembly in ionic solution, as well as electrokinetic phenomena at interfaces such as corrosion. The number of particles in the system is further reduced through an integrated boundary approach, which we apply to colloidal suspensions. In this thesis, we describe this general framework for multiscale modeling single- and multicomponent systems, provide several simple equilibrium and non-equilibrium case studies, and discuss future applications.
Open EFTs, IR effects & late-time resummations: systematic corrections in stochastic inflation
Burgess, C. P.; Holman, R.; Tasinato, G.
2016-01-26
Though simple inflationary models describe the CMB well, their corrections are often plagued by infrared effects that obstruct a reliable calculation of late-time behaviour. Here we adapt to cosmology tools designed to address similar issues in other physical systems, with the goal of making reliable late-time inflationary predictions. The main such tool is Open EFTs, which reduce in the inflationary case to Stochastic Inflation plus calculable corrections. We apply this to a simple inflationary model that is complicated enough to have dangerous IR behaviour yet simple enough to allow the inference of late-time behaviour. We find corrections to standard Stochastic Inflationary predictions for the noise and drift, and we find these corrections ensure the IR finiteness of both these quantities. The late-time probability distribution, P(Φ), for super-Hubble field fluctuations is obtained as a function of the noise and drift, and so it too is IR finite. We compare our results to other methods (such as large-N models) and find they agree when these models are reliable. In all cases we can explore in detail, we find IR secular effects describe the slow accumulation of small perturbations to give a big effect: a significant distortion of the late-time probability distribution for the field. But the energy density associated with this is only of order H^4 at late times and so does not generate a dramatic gravitational back-reaction.
Can simple rules control development of a pioneer vertebrate neuronal network generating behavior?
Roberts, Alan; Conte, Deborah; Hull, Mike; Merrison-Hort, Robert; al Azad, Abul Kalam; Buhl, Edgar; Borisyuk, Roman; Soffe, Stephen R
2014-01-08
How do the pioneer networks in the axial core of the vertebrate nervous system first develop? Fundamental to understanding any full-scale neuronal network is knowledge of the constituent neurons, their properties, synaptic interconnections, and normal activity. Our novel strategy uses basic developmental rules to generate model networks that retain individual neuron and synapse resolution and are capable of reproducing correct, whole animal responses. We apply our developmental strategy to young Xenopus tadpoles, whose brainstem and spinal cord share a core vertebrate plan, but at a tractable complexity. Following detailed anatomical and physiological measurements to complete a descriptive library of each type of spinal neuron, we build models of their axon growth controlled by simple chemical gradients and physical barriers. By adding dendrites and allowing probabilistic formation of synaptic connections, we reconstruct network connectivity among up to 2000 neurons. When the resulting "network" is populated by model neurons and synapses, with properties based on physiology, it can respond to sensory stimulation by mimicking tadpole swimming behavior. This functioning model represents the most complete reconstruction of a vertebrate neuronal network that can reproduce the complex, rhythmic behavior of a whole animal. The findings validate our novel developmental strategy for generating realistic networks with individual neuron- and synapse-level resolution. We use it to demonstrate how early functional neuronal connectivity and behavior may in life result from simple developmental "rules," which lay out a scaffold for the vertebrate CNS without specific neuron-to-neuron recognition.
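The probabilistic synapse-formation step can be caricatured in a few lines: a synapse may form wherever a model axon's trajectory passes through a dendrite's extent, with some contact probability. Everything below (coordinates, contact probability, helper name) is a hypothetical illustration of that rule, not the paper's fitted anatomy.

```python
import random

def connect(axon_xs, dend_ivals, p_contact, seed=0):
    """Probabilistic synapse formation: axon_xs gives each axon's
    position along one axis, dend_ivals gives each dendrite's extent
    as a (lo, hi) interval. Where an axon crosses a dendrite's extent,
    a synapse forms with probability p_contact."""
    rng = random.Random(seed)
    return [(i, j) for i, x in enumerate(axon_xs)
                   for j, (lo, hi) in enumerate(dend_ivals)
                   if lo <= x <= hi and rng.random() < p_contact]
```

Applying such a rule to thousands of grown axons and dendrites is what reconstructs the network connectivity without any neuron-to-neuron recognition.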
A critical examination of the validity of simplified models for radiant heat transfer analysis.
NASA Technical Reports Server (NTRS)
Toor, J. S.; Viskanta, R.
1972-01-01
The directional effects of the simplified models are examined by comparing experimental data with predictions based on simple and more detailed models for the radiation characteristics of surfaces. Analytical results indicate that the constant-property diffuse and specular models do not yield the upper and lower bounds on local radiant heat flux. In general, the constant-property specular analysis yields higher values of irradiation than the constant-property diffuse analysis. A diffuse surface in the enclosure appears to destroy the effect of specularity of the other surfaces. Semigray and gray analyses predict the irradiation reasonably well provided that the directional properties and the specularity of the surfaces are taken into account. The uniform and nonuniform radiosity diffuse models are in satisfactory agreement with each other.
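The uniform-radiosity diffuse model referred to here is the standard gray-enclosure system J_i = eps_i*sigma*T_i^4 + (1 - eps_i)*sum_j F_ij*J_j. A sketch that solves it by fixed-point iteration (the surfaces, emissivities, and view factors in any example are illustrative):

```python
def radiosities(T, eps, F, sigma=5.670374419e-8):
    """Solve the uniform-radiosity diffuse-gray enclosure equations
        J_i = eps_i*sigma*T_i**4 + (1 - eps_i)*sum_j F_ij*J_j
    by Gauss-Seidel sweeps. T: surface temperatures [K]; eps:
    emissivities; F: view-factor matrix (rows sum to 1 for a closed
    enclosure). Converges because the (1-eps)*F row sums are < 1."""
    n = len(T)
    J = [eps[i] * sigma * T[i] ** 4 for i in range(n)]  # black-body start
    for _ in range(200):
        for i in range(n):
            J[i] = (eps[i] * sigma * T[i] ** 4
                    + (1 - eps[i]) * sum(F[i][j] * J[j] for j in range(n)))
    return J
```

For black surfaces (eps = 1) the reflected term vanishes and each radiosity reduces to sigma*T^4, a convenient sanity check.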
An Empirical Model of the Variations of the Solar Lyman-Alpha Spectral Irradiance
NASA Astrophysics Data System (ADS)
Kretzschmar, M.; Snow, M. A.; Curdt, W.
2017-12-01
We propose a simple model that computes the spectral profile of the solar irradiance in the hydrogen Lyman-alpha line, H Ly-α (121.567 nm), from 1947 to present. Such a model is relevant for the study of many astronomical environments, from planetary atmospheres to the interplanetary medium, and can be used to improve the analysis of data from missions such as MAVEN or GOES-16. This empirical model is based on the SOHO/SUMER observations of the Ly-α irradiance over solar cycle 23, which we analyze in detail, and relies on the Ly-α integrated irradiance composite. The model reproduces the temporal variability of the spectral profile and matches the independent SORCE/SOLSTICE spectral observations from 2003 to 2007 with an accuracy better than 10%.
CADDIS Volume 2. Sources, Stressors and Responses: Ionic Strength - Simple Conceptual Diagram
Introduction to the ionic strength module, when to list ionic strength as a candidate cause, ways to measure ionic strength, simple and detailed conceptual diagrams for ionic strength, ionic strength module references and literature reviews.
CADDIS Volume 2. Sources, Stressors and Responses: Physical Habitat - Simple Conceptual Diagram
Introduction to the Physical Habitat module, when to list Physical Habitat as a candidate cause, ways to measure Physical Habitat, simple and detailed conceptual diagrams for Physical Habitat, Physical Habitat module references and literature reviews.
NASA Astrophysics Data System (ADS)
Wegehenkel, M.
In this paper, long-term effects of different afforestation scenarios on landscape water balance will be analyzed taking into account the results of a regional case study. This analysis is based on a GIS-coupled simulation model for the spatially distributed calculation of water balance. For this purpose, the modelling system THESEUS with a simple GIS-interface will be used. To take into account the special case of change in forest cover proportion, THESEUS was enhanced with a simple forest growth model. In the regional case study, model runs will be performed using a detailed spatial data set from North-East Germany. This data set covers a mesoscale catchment located in the moraine landscape of North-East Germany. Based on this data set, the influence of the actual land use and of different land-use change scenarios on water balance dynamics will be investigated taking into account the spatially distributed modelling results from THESEUS. The model was tested using different experimental data sets from field plots as well as observed catchment discharge. In addition to such conventional validation techniques, remote sensing data were used to check the simulated regional distribution of water balance components such as evapotranspiration in the catchment.
Analyzing inflammatory response as excitable media
NASA Astrophysics Data System (ADS)
Yde, Pernille; Høgh Jensen, Mogens; Trusina, Ala
2011-11-01
The regulatory system of the transcription factor NF-κB plays a major role in many cell functions, including inflammatory response. Interestingly, the NF-κB system is known to up-regulate production of its own triggering signal, namely inflammatory cytokines such as TNF, IL-1, and IL-6. In this paper we investigate a previously presented model of the NF-κB system, which includes both spatial effects and the positive feedback from cytokines. The model exhibits the properties of an excitable medium and has the ability to propagate waves of high cytokine concentration. These waves represent an optimal way of sending an inflammatory signal through the tissue, as they create a chemotactic signal able to recruit neutrophils to the site of infection. The simple model displays three qualitatively different states: low stimulus leads to little or no response; intermediate stimulus leads to recurring waves of high cytokine concentration; and high stimulus leads to a sustained high cytokine concentration, a scenario which is toxic for the tissue cells and corresponds to chronic inflammation. Due to the few variables of the simple model, we are able to perform a phase-space analysis leading to a detailed understanding of the functional form of the model and its limitations. The spatial effects of the model contribute to the robustness of the cytokine wave formation and propagation.
Simple rules govern the patterns of Arctic sea ice melt ponds
NASA Astrophysics Data System (ADS)
Popovic, P.; Cael, B. B.; Abbot, D. S.; Silber, M.
2017-12-01
Climate change, amplified in the far north, has led to a rapid sea ice decline in recent years. Melt ponds that form on the surface of Arctic sea ice in the summer significantly lower the ice albedo, thereby accelerating ice melt. Pond geometry controls the details of this crucial feedback. However, currently it is unclear how to model this intricate geometry. Here we show that an extremely simple model of voids surrounding randomly sized and placed overlapping circles reproduces the essential features of pond patterns. The model has only two parameters, circle scale and the fraction of the surface covered by voids, and we choose them by comparing the model to pond images. Using these parameters the void model robustly reproduces all of the examined pond features such as the ponds' area-perimeter relationship and the area-abundance relationship over nearly 7 orders of magnitude. By analyzing airborne photographs of sea ice, we also find that the typical pond scale is surprisingly constant across different years, regions, and ice types. These results demonstrate that the geometric and abundance patterns of Arctic melt ponds can be simply described, and can guide future models of Arctic melt ponds to improve predictions of how sea ice will respond to Arctic warming.
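The two-parameter void model is easy to reproduce in miniature: drop randomly sized, randomly placed overlapping circles ("ice") and measure the uncovered fraction ("ponds") on a grid. The sketch below uses an arbitrary radius distribution and box size; it is a caricature of the construction, not the calibrated model from the paper.

```python
import random

def void_fraction(n_circles, r_mean, box=100.0, grid=100, seed=0):
    """Monte-Carlo estimate of the uncovered ('void') area fraction for
    randomly sized and placed overlapping circles. Radii are drawn
    uniformly in (0, 2*r_mean); distribution and sizes are illustrative."""
    rng = random.Random(seed)
    circles = [(rng.uniform(0, box), rng.uniform(0, box),
                rng.uniform(0, 2 * r_mean)) for _ in range(n_circles)]
    h = box / grid
    uncovered = 0
    for i in range(grid):
        for j in range(grid):
            x, y = (i + 0.5) * h, (j + 0.5) * h  # cell centre
            if not any((x - cx) ** 2 + (y - cy) ** 2 <= r * r
                       for cx, cy, r in circles):
                uncovered += 1
    return uncovered / grid ** 2
```

Adding circles shrinks the void fraction roughly as exp(-lambda*pi*E[r^2]), the Boolean-model result, which suggests how the model's two parameters map onto observed pond coverage.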
Susong, D.; Marks, D.; Garen, D.
1999-01-01
Topographically distributed energy- and water-balance models can accurately simulate both the development and melting of a seasonal snowcover in the mountain basins. To do this they require time-series climate surfaces of air temperature, humidity, wind speed, precipitation, and solar and thermal radiation. If data are available, these parameters can be adequately estimated at time steps of one to three hours. Unfortunately, climate monitoring in mountain basins is very limited, and the full range of elevations and exposures that affect climate conditions, snow deposition, and melt is seldom sampled. Detailed time-series climate surfaces have been successfully developed using limited data and relatively simple methods. We present a synopsis of the tools and methods used to combine limited data with simple corrections for the topographic controls to generate high temporal resolution time-series images of these climate parameters. Methods used include simulations, elevational gradients, and detrended kriging. The generated climate surfaces are evaluated at points and spatially to determine if they are reasonable approximations of actual conditions. Recommendations are made for the addition of critical parameters and measurement sites into routine monitoring systems in mountain basins.
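The detrending step described above can be sketched as follows: fit a linear elevational gradient (lapse rate) to station temperatures, interpolate the residuals spatially, and add the gradient back at the target elevation. The station values below are hypothetical, and a nearest-neighbor lookup stands in for the detrended-kriging interpolation of the residuals:

```python
import numpy as np

# Hypothetical station data: (x, y) position in km, elevation in m, temperature in deg C.
stations = np.array([[0.0, 0.0], [10.0, 2.0], [3.0, 8.0], [7.0, 5.0]])
elev = np.array([1200.0, 1850.0, 1500.0, 2100.0])
temp = np.array([4.1, 0.2, 2.5, -1.6])

# 1. Detrend: fit temperature against elevation (the elevational gradient).
slope, intercept = np.polyfit(elev, temp, 1)
residual = temp - (slope * elev + intercept)

def predict(xy, z):
    """Trend at target elevation z plus the residual of the nearest station
    (a crude stand-in for kriging the detrended residuals spatially)."""
    nearest = np.argmin(np.sum((stations - xy) ** 2, axis=1))
    return slope * z + intercept + residual[nearest]

# Predicting at a station's own location and elevation returns its observation.
t0 = predict(np.array([0.0, 0.0]), 1200.0)
```

The fitted slope is the lapse rate; applying `predict` over a DEM grid at each cell's elevation yields the distributed climate surface.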
Maui Space Surveillance System Satellite Categorization Laboratory
NASA Astrophysics Data System (ADS)
Deiotte, R.; Guyote, M.; Kelecy, T.; Hall, D.; Africano, J.; Kervin, P.
The MSSS satellite categorization laboratory is a fusion of robotics and digital imaging processes that aims to decompose satellite photometric characteristics and behavior in a controlled setting. By combining a robot, light source and camera to acquire non-resolved images of a model satellite, detailed photometric analyses can be performed to extract relevant information about shape features, elemental makeup, and ultimately attitude and function. Using the laboratory setting, a detailed analysis can be done on any type of material or design, and the results cataloged in a database that will facilitate object identification by "curve-fitting" individual elements in the basis set to observational data that might otherwise be unidentifiable. Currently, an ST-Robotics five-degree-of-freedom robotic arm, a collimated light source and a non-focused Apogee camera have all been integrated into a MATLAB-based software package that facilitates automatic data acquisition and analysis. Efforts to date have been aimed at construction of the lab as well as validation and verification of simple geometric objects. Simple tests on spheres, cubes and simple satellites show promising results that could lead to a much better understanding of non-resolvable space object characteristics. This paper presents a description of the laboratory configuration and validation test results with emphasis on the non-resolved photometric characteristics for a variety of object shapes, spin dynamics and orientations. The future vision, utility and benefits of the laboratory to the SSA community as a whole are also discussed.
Simple Numerical Modelling for Gasdynamic Design of Wave Rotors
NASA Astrophysics Data System (ADS)
Okamoto, Koji; Nagashima, Toshio
The precise estimation of pressure waves generated in the passages is a crucial factor in wave rotor design. However, it is difficult to estimate the pressure wave analytically, e.g. by the method of characteristics, because the mechanism of pressure-wave generation and propagation in the passages is extremely complicated as compared to that in a shock tube. In this study, a simple numerical modelling scheme was developed to facilitate the design procedure. This scheme considers the three dominant factors in the loss mechanism (gradual passage opening, wall friction and leakage) for simulating the pressure waves precisely. The numerical scheme itself is based on the one-dimensional Euler equations with appropriate source terms to reduce the calculation time. The modelling of these factors was verified by comparing the results with those of a two-dimensional numerical simulation, which were validated against experimental data in our previous study. Regarding wave rotor miniaturization, the leakage flow effect, which involves the interaction between adjacent cells, was investigated extensively. A port configuration principle was also examined and analyzed in detail to verify the applicability of the present numerical modelling scheme to the wave rotor design.
Use of system identification techniques for improving airframe finite element models using test data
NASA Technical Reports Server (NTRS)
Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.
1993-01-01
A method for using system identification techniques to improve airframe finite element models using test data was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in the total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all of the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
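The core update described above (a linear sensitivity matrix relating parameter changes to changes in measured quantities, solved by constrained optimization with singular value decomposition) can be sketched as a truncated-SVD least-squares solve; the matrix sizes and values below are illustrative, not drawn from the airframe model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sensitivity matrix S: rows = measured modal quantities,
# columns = physical parameters (e.g. stiffnesses), so d_measurement ~= S @ d_parameter.
S = rng.standard_normal((8, 3))
true_dp = np.array([0.05, -0.02, 0.10])   # "true" parameter corrections
residual = S @ true_dp                     # test-minus-model differences

# Solve for the parameter corrections with a truncated SVD (pseudo-inverse),
# a common way to regularize ill-conditioned sensitivity systems.
U, s, Vt = np.linalg.svd(S, full_matrices=False)
keep = s > 1e-10 * s[0]                    # drop negligible singular values
dp = Vt.T[:, keep] @ ((U[:, keep].T @ residual) / s[keep])
```

With more measurements than parameters and a residual consistent with the linearization, the corrections are recovered exactly; truncating small singular values trades a little bias for stability when the sensitivities are nearly collinear.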
Spectral flow as a map between N = (2,0)-models
NASA Astrophysics Data System (ADS)
Athanasopoulos, P.; Faraggi, A. E.; Gepner, D.
2014-07-01
The space of (2,0) models is of particular interest among all heterotic-string models because it includes the models with the minimal SO(10) unification structure, which is well motivated by the Standard Model of particle physics data. The fermionic Z2 × Z2 heterotic-string models revealed the existence of a new symmetry in the space of string configurations under the exchange of spinors and vectors of the SO(10) GUT group, dubbed spinor-vector duality. In this paper we generalize this idea to arbitrary internal rational conformal field theories (RCFTs). We explain how the spectral flow operator normally acting within a general (2,2) theory can be used as a map between (2,0) models. We describe the details, give an example and propose further simple currents that can be used in a similar way.
Simulation of semi-explicit mechanisms of SOA formation from glyoxal in a 3-D model
NASA Astrophysics Data System (ADS)
Knote, C.; Hodzic, A.; Jimenez, J. L.; Volkamer, R.; Orlando, J. J.; Baidar, S.; Brioude, J.; Fast, J.; Gentner, D. R.; Goldstein, A. H.; Hayes, P. L.; Knighton, W. B.; Oetjen, H.; Setyan, A.; Stark, H.; Thalman, R.; Tyndall, G.; Washenfelder, R.; Waxman, E.; Zhang, Q.
2013-10-01
New pathways to form secondary organic aerosols (SOA) have been postulated recently. Glyoxal, the smallest dicarbonyl, is one of the proposed precursors. It has both anthropogenic and biogenic sources, and readily partitions into the aqueous phase of cloud droplets and deliquesced aerosols, where it undergoes both reversible and irreversible chemistry. In this work we extend the regional scale chemistry transport model WRF-Chem to include a detailed gas-phase chemistry of glyoxal formation as well as a state-of-the-science module describing its partitioning and reactions in the aqueous phase of aerosols. A comparison of several proposed mechanisms is performed to quantify the relative importance of different formation pathways and their regional variability. The CARES/CalNex campaigns over California in summer 2010 are used as case studies to evaluate the model against observations. In all simulations the LA basin was found to be the hotspot for SOA formation from glyoxal, which contributes between 1% and 15% of the model SOA depending on the mechanism used. Our results indicate that a mechanism based only on a simple uptake coefficient, as frequently employed in global modeling studies, leads to higher SOA contributions from glyoxal compared to a more detailed description that considers aerosol phase state and chemical composition. In the more detailed simulations, surface uptake is found to be the main contributor to SOA mass compared to a volume process and reversible formation. We find that the contribution of the latter is limited by the availability of glyoxal in aerosol water, which is in turn controlled by an increase in the Henry's law constant depending on salt concentrations ("salting-in"). A kinetic limitation in this increase prevents substantial partitioning of glyoxal into aerosol water at high salt concentrations. 
If this limitation is removed, volume pathways contribute >20% of glyoxal SOA mass, and the total mass formed (5.8% of total SOA in the LA basin) is about a third of the simple uptake coefficient formulation without consideration of aerosol phase state and composition. All these model formulations are based on very limited and recent field or laboratory data and we conclude that the current uncertainty on glyoxal SOA formation spans a factor of 10 in this domain and time period.
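The "simple uptake coefficient" formulation contrasted above corresponds to the standard first-order heterogeneous-loss rate k = γ · c̄ · A / 4, with c̄ the mean molecular speed and A the aerosol surface area density. A sketch with illustrative values (the γ, temperature, and surface area below are assumptions, not the paper's inputs):

```python
import math

def uptake_rate(gamma, T, molar_mass, surface_area_density):
    """First-order loss rate k = gamma * c_mean * A / 4 for gas uptake onto
    aerosol, with c_mean the mean molecular speed sqrt(8RT / (pi M))."""
    R = 8.314                                                  # J mol^-1 K^-1
    c_mean = math.sqrt(8.0 * R * T / (math.pi * molar_mass))   # m s^-1
    return gamma * c_mean * surface_area_density / 4.0         # s^-1

# Illustrative values: gamma = 2.9e-3, T = 298 K, glyoxal M = 0.058 kg/mol,
# and an aerosol surface area of 1e-4 m^2 per m^3 of air.
k = uptake_rate(2.9e-3, 298.0, 0.058, 1e-4)
lifetime_hours = 1.0 / k / 3600.0
```

With these numbers the implied glyoxal lifetime against uptake comes out on the order of ten hours, which is why the choice between this one-parameter treatment and the detailed phase-state description matters for regional SOA budgets.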
NASA Astrophysics Data System (ADS)
Wang, Wenjing; Qiu, Rui; Ren, Li; Liu, Huan; Wu, Zhen; Li, Chunyan; Li, Junli
2017-09-01
Mean glandular dose (MGD) is determined not only by the compressed breast thickness (CBT) and the glandular content, but also by the distribution of glandular tissues in the breast. Depth dose inside the breast in mammography has received wide attention, as glandular dose decreases rapidly with increasing depth. In this study, an experiment using thermoluminescent dosimeters (TLDs) was carried out to validate Monte Carlo simulations of mammography. Percent depth doses (PDDs) at different depths were measured inside simple breast phantoms of different thicknesses. The experimental values agreed well with the values calculated by Geant4. Then a detailed breast model with a CBT of 4 cm and a glandular content of 50%, constructed in previous work, was used to study the effects of the distribution of glandular tissues in the breast with Geant4. The breast model was reversed in the direction of compression to obtain a reverse model with a different distribution of glandular tissues. Depth dose distributions and glandular tissue dose conversion coefficients were calculated. The conversion coefficients were about 10% larger when the breast model was reversed, because glandular tissues in the reverse model are concentrated in the upper part of the model.
NASA Astrophysics Data System (ADS)
Eldridge, J. J.; Stanway, E. R.; Xiao, L.; McClelland, L. A. S.; Taylor, G.; Ng, M.; Greis, S. M. L.; Bray, J. C.
2017-11-01
The Binary Population and Spectral Synthesis suite of binary stellar evolution models and synthetic stellar populations provides a framework for the physically motivated analysis of both the integrated light from distant stellar populations and the detailed properties of those nearby. We present a new version 2.1 data release of these models, detailing the methodology by which Binary Population and Spectral Synthesis incorporates binary mass transfer and its effect on stellar evolution pathways, as well as the construction of simple stellar populations. We present key tests of the latest Binary Population and Spectral Synthesis model suite, demonstrating its ability to reproduce the colours and derived properties of resolved stellar populations, including well-constrained eclipsing binaries. We consider observational constraints on the ratio of massive star types and the distribution of stellar remnant masses. We describe the identification of supernova progenitors in our models, and demonstrate good agreement with the properties of observed progenitors. We also test our models against photometric and spectroscopic observations of unresolved stellar populations, both in the local and distant Universe, finding that binary models provide a self-consistent explanation for observed galaxy properties across a broad redshift range. Finally, we carefully describe the limitations of our models, and areas where we expect to see significant improvement in future versions.
Economic decision making and the application of nonparametric prediction models
Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.
2007-01-01
Sustained increases in energy prices have focused attention on gas resources in low permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are large. Planning and development decisions for extraction of such resources must be area-wide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm the decision to enter such plays depends on reconnaissance level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional scale cost functions. The context of the worked example is the Devonian Antrim shale gas play, Michigan Basin. One finding relates to selection of the resource prediction model to be used with economic models. Models which can best predict aggregate volume over larger areas (many hundreds of sites) may lose granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined by extraneous factors. The paper also shows that when these simple prediction models are used to strategically order drilling prospects, the gain in gas volume over volumes associated with simple random site selection amounts to 15 to 20 percent. It also discusses why the observed benefit of updating predictions from results of new drilling, as opposed to following static predictions, is somewhat smaller. Copyright 2007, Society of Petroleum Engineers.
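The site-ordering experiment described above can be caricatured with a nearest-neighbor local mean in place of the paper's nonparametric local regression; the synthetic "play" below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic play: recoverable volume varies smoothly across the region, plus noise.
pts = rng.random((400, 2))
volume = np.exp(-8 * ((pts[:, 0] - 0.3) ** 2 + (pts[:, 1] - 0.6) ** 2)) \
         + 0.1 * rng.random(400)

drilled = np.arange(100)            # sites with known outcomes
untested = np.arange(100, 400)

def knn_predict(i, k=5):
    """Local (nearest-neighbor) mean of drilled volumes around site i."""
    d = np.sum((pts[drilled] - pts[i]) ** 2, axis=1)
    return volume[drilled][np.argsort(d)[:k]].mean()

pred = np.array([knn_predict(i) for i in untested])
best = untested[np.argsort(pred)[::-1][:50]]       # drill top 50 predicted sites
random_pick = rng.choice(untested, 50, replace=False)
gain = volume[best].mean() / volume[random_pick].mean() - 1.0
```

Ranking untested sites by the local prediction concentrates drilling near the high-volume region, so the selected sites outperform an equal number of randomly chosen ones; the exact gain here depends entirely on the synthetic field, unlike the 15 to 20 percent reported for the Antrim play.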
Detail, star-pattern balustrade of north span, from northwest, showing row ...
Detail, star-pattern balustrade of north span, from northwest, showing row of four star-pattern railing slabs bracketed by simple molded concrete balusters - Horner Street Bridge, Horner Street over Stonycreek River, Johnstown, Cambria County, PA
Actin-based propulsion of a microswimmer.
Leshansky, A M
2006-07-01
A simple hydrodynamic model of actin-based propulsion of microparticles in dilute cell-free cytoplasmic extracts is presented. Under the basic assumption that actin polymerization at the particle surface acts as a force dipole, pushing apart the load and the free (nonanchored) actin tail, the propulsive velocity of the microparticle is determined as a function of the tail length, porosity, and particle shape. The anticipated velocities of the cargo displacement and the rearward motion of the tail are in good agreement with recently reported results of biomimetic experiments. A more detailed analysis of the particle-tail hydrodynamic interaction is presented and compared to the prediction of the simplified model.
Brenner, M H
1983-01-01
This paper discusses a first-stage analysis of the link of unemployment rates, as well as other economic, social and environmental health risk factors, to mortality rates in postwar Britain. The results presented represent part of an international study of the impact of economic change on mortality patterns in industrialized countries. The mortality patterns examined include total and infant mortality and (by cause) cardiovascular (total), cerebrovascular and heart disease, cirrhosis of the liver, and suicide, homicide and motor vehicle accidents. Among the most prominent factors that beneficially influence postwar mortality patterns in England/Wales and Scotland are economic growth and stability and health service availability. A principal detrimental factor to health is a high rate of unemployment. Additional factors that have an adverse influence on mortality rates are cigarette consumption and heavy alcohol use and unusually cold winter temperatures (especially in Scotland). The model of mortality that includes both economic changes and behavioral and environmental risk factors was successfully applied to infant mortality rates in the interwar period. In addition, the "simple" economic change model of mortality (using only economic indicators) was applied to other industrialized countries. In Canada, the United States, the United Kingdom, and Sweden, the simple version of the economic change model could be successfully applied only if the analysis was begun before World War II; for analysis beginning in the postwar era, the more sophisticated economic change model, including behavioral and environmental risk factors, was required. In France, West Germany, Italy, and Spain, by contrast, some success was achieved using the simple economic change model.
Modeling Long-Term Fluvial Incision : Shall we Care for the Details of Short-Term Fluvial Dynamics?
NASA Astrophysics Data System (ADS)
Lague, D.; Davy, P.
2008-12-01
Fluvial incision laws used in numerical models of coupled climate, erosion and tectonics systems are mainly based on the family of stream power laws, for which the rate of local erosion E is a power function of the topographic slope S and the local mean discharge Q: E = K Q^m S^n. The exponents m and n are generally taken as (0.35, 0.7) or (0.5, 1), and K is chosen such that the predicted topographic elevation given the prevailing rates of precipitation and tectonics stays within realistic values. The resulting topographies are reasonably realistic, and the coupled system behaves somewhat as expected: more precipitation induces increased erosion and localization of the deformation. Yet, if we now focus on smaller-scale fluvial dynamics (the reach scale), recent advances have suggested that discharge variability, channel width dynamics or sediment flux effects may play a significant role in controlling incision rates. These are not factored into the simple stream power law model. In this work, we study how these short-term details propagate into long-term incision dynamics within the framework of surface/tectonics coupled numerical models. To upscale the short-term dynamics to geological timescales, we use a numerical model of a trapezoidal river in which vertical and lateral incision processes are computed from fluid shear stress at a daily timescale; sediment transport and protection effects are factored in, as well as a variable discharge. We show that the stream power law model might still be a valid model, but that as soon as realistic effects are included, such as a threshold for sediment transport, variable discharge and dynamic width, the resulting exponents m and n can be as high as 2 and 4. This high non-linearity has a profound consequence on the sensitivity of fluvial relief to incision rate. We also show that additional complexity does not systematically translate into more non-linear behaviour. 
For instance, considering only a dynamical width without discharge variability does not induce a significant difference in the predicted long-term incision law and scaling of relief with incision rate at steady-state. We conclude that the simple stream power law models currently in use are false, and that details of short-term fluvial dynamics must make their way into long-term evolution models to avoid oversimplifying the coupled dynamics between erosion, tectonics and climate.
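One ingredient of the upscaling issue discussed above is easy to demonstrate: because E = K Q^m S^n is nonlinear in Q, erosion averaged over a variable discharge record differs from erosion evaluated at the mean discharge, and an incision threshold shifts the long-term average further. All constants below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

K, m, n, S = 1e-4, 0.5, 1.0, 0.05       # illustrative stream-power constants
# Daily discharges from a heavy-tailed lognormal distribution with mean ~10:
Q = rng.lognormal(mean=np.log(10.0) - 0.5, sigma=1.0, size=36500)

def erosion(q, threshold=0.0):
    """Stream-power incision E = K Q^m S^n, with an optional incision threshold."""
    return np.maximum(K * q**m * S**n - threshold, 0.0)

E_at_mean_Q = float(erosion(Q.mean()))        # stream power at the mean discharge
E_longterm = float(erosion(Q).mean())         # mean of daily erosion, no threshold
E_threshold = float(erosion(Q, 1e-5).mean())  # mean of daily erosion with threshold
```

This shows only the averaging effect; the abstract's larger point is that thresholds, discharge variability and dynamic width acting together can raise the effective long-term exponents m and n well above the values commonly assumed.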
Is realistic neuronal modeling realistic?
Almog, Mara
2016-01-01
Scientific models are abstractions that aim to explain natural phenomena. A successful model shows how a complex phenomenon arises from relatively simple principles while preserving major physical or biological rules and predicting novel experiments. A model should not be a facsimile of reality; it is an aid for understanding it. Contrary to this basic premise, with the 21st century has come a surge in computational efforts to model biological processes in great detail. Here we discuss the oxymoronic, realistic modeling of single neurons. This rapidly advancing field is driven by the discovery that some neurons don't merely sum their inputs and fire if the sum exceeds some threshold. Thus researchers have asked what are the computational abilities of single neurons and attempted to give answers using realistic models. We briefly review the state of the art of compartmental modeling highlighting recent progress and intrinsic flaws. We then attempt to address two fundamental questions. Practically, can we realistically model single neurons? Philosophically, should we realistically model single neurons? We use layer 5 neocortical pyramidal neurons as a test case to examine these issues. We subject three publicly available models of layer 5 pyramidal neurons to three simple computational challenges. Based on their performance and a partial survey of published models, we conclude that current compartmental models are ad hoc, unrealistic models functioning poorly once they are stretched beyond the specific problems for which they were designed. We then attempt to plot possible paths for generating realistic single neuron models. PMID:27535372
Mathematical models to characterize early epidemic growth: A Review
Chowell, Gerardo; Sattenspiel, Lisa; Bansal, Shweta; Viboud, Cécile
2016-01-01
There is a long tradition of using mathematical models to generate insights into the transmission dynamics of infectious diseases and assess the potential impact of different intervention strategies. The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing reliable models that capture the baseline transmission characteristics of specific pathogens and social contexts. More refined models are needed however, in particular to account for variation in the early growth dynamics of real epidemics and to gain a better understanding of the mechanisms at play. Here, we review recent progress on modeling and characterizing early epidemic growth patterns from infectious disease outbreak data, and survey the types of mathematical formulations that are most useful for capturing a diversity of early epidemic growth profiles, ranging from sub-exponential to exponential growth dynamics. Specifically, we review mathematical models that incorporate spatial details or realistic population mixing structures, including meta-population models, individual-based network models, and simple SIR-type models that incorporate the effects of reactive behavior changes or inhomogeneous mixing. In this process, we also analyze simulation data stemming from detailed large-scale agent-based models previously designed and calibrated to study how realistic social networks and disease transmission characteristics shape early epidemic growth patterns, general transmission dynamics, and control of international disease emergencies such as the 2009 A/H1N1 influenza pandemic and the 2014-15 Ebola epidemic in West Africa. PMID:27451336
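The spectrum from sub-exponential to exponential early growth mentioned above is commonly captured with the generalized-growth model dC/dt = r C^p, where p = 1 gives exponential growth and p < 1 gives polynomial (sub-exponential) growth. A minimal sketch with illustrative parameters:

```python
def generalized_growth(r, p, c0=1.0, t_end=10.0, dt=1e-3):
    """Euler-integrate dC/dt = r * C**p (the generalized-growth model)."""
    c = c0
    for _ in range(int(t_end / dt)):
        c += dt * r * c**p
    return c

c_exp = generalized_growth(r=0.5, p=1.0)    # exponential growth
c_sub = generalized_growth(r=0.5, p=0.5)    # sub-exponential (polynomial) growth
```

For p = 1/2 the closed form is C(t) = (sqrt(C0) + r t / 2)^2, so with r = 0.5 and C0 = 1 the t = 10 value is 12.25, versus e^5 ≈ 148 for the exponential case; fitting p to outbreak data is one way to characterize the early-growth profile of a given epidemic.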
Intro to the unspecified toxic chemicals module, when to list toxic chemicals as a candidate cause, ways to measure toxic chemicals, simple and detailed conceptual diagrams for toxic chemicals, toxic chemicals module references and literature reviews.
Andereggen, Lukas; Neuschmelting, Volker; von Gunten, Michael; Widmer, Hans Rudolf; Takala, Jukka; Jakob, Stephan M; Fandino, Javier; Marbacher, Serge
2014-10-02
Early brain injury and delayed cerebral vasospasm both contribute to unfavorable outcomes after subarachnoid hemorrhage (SAH). Reproducible and controllable animal models that simulate both conditions are presently uncommon. Therefore, new models are needed in order to mimic human pathophysiological conditions resulting from SAH. This report describes the technical nuances of a rabbit blood-shunt SAH model that enables control of intracranial pressure (ICP). An extracorporeal shunt is placed between the arterial system and the subarachnoid space, which enables examiner-independent SAH in a closed cranium. Step-by-step procedural instructions and necessary equipment are described, as well as technical considerations to produce the model with minimal mortality and morbidity. Important details required for successful surgical creation of this robust, simple and consistent ICP-controlled SAH rabbit model are described.
Design Through Manufacturing: The Solid Model-Finite Element Analysis Interface
NASA Technical Reports Server (NTRS)
Rubin, Carol
2002-01-01
State-of-the-art computer aided design (CAD) presently affords engineers the opportunity to create solid models of machine parts reflecting every detail of the finished product. Ideally, in the aerospace industry, these models should fulfill two very important functions: (1) provide numerical control information for automated manufacturing of precision parts, and (2) enable analysts to easily evaluate the stress levels (using finite element analysis - FEA) for all structurally significant parts used in aircraft and space vehicles. Today's state-of-the-art CAD programs perform function (1) very well, providing an excellent model for precision manufacturing. But they do not provide a straightforward and simple means of automating the translation from CAD to FEA models, especially for aircraft-type structures. Presently, the process of preparing CAD models for FEA consumes a great deal of the analyst's time.
A New Model of Jupiter's Magnetic Field From Juno's First Nine Orbits
NASA Astrophysics Data System (ADS)
Connerney, J. E. P.; Kotsiaros, S.; Oliversen, R. J.; Espley, J. R.; Joergensen, J. L.; Joergensen, P. S.; Merayo, J. M. G.; Herceg, M.; Bloxham, J.; Moore, K. M.; Bolton, S. J.; Levin, S. M.
2018-03-01
A spherical harmonic model of the magnetic field of Jupiter is obtained from vector magnetic field observations acquired by the Juno spacecraft during its first nine polar orbits about the planet. Observations acquired during eight of these orbits provide the first truly global coverage of Jupiter's magnetic field with a coarse longitudinal separation of 45° between perijoves. The magnetic field is represented with a degree 20 spherical harmonic model for the planetary ("internal") field, combined with a simple model of the magnetodisc for the field ("external") due to distributed magnetospheric currents. Partial solution of the underdetermined inverse problem using generalized inverse techniques yields a model ("Juno Reference Model through Perijove 9") of the planetary magnetic field with spherical harmonic coefficients well determined through degree and order 10, providing the first detailed view of a planetary dynamo beyond Earth.
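For reference, the degree-1 (dipole) part of a Gauss-coefficient spherical-harmonic model such as the one described evaluates in closed form; the sketch below implements only those degree-1 terms, with an axial coefficient chosen at roughly Jupiter's scale for illustration (not a fitted JRM09 value):

```python
import math

def dipole_field(r_over_a, theta, phi, g10, g11, h11):
    """Degree-1 (dipole) terms of a Gauss-coefficient spherical-harmonic field.
    Angles in radians; returns (B_r, B_theta, B_phi) in the coefficients' units."""
    f = r_over_a ** -3                     # dipole field falls off as (a/r)^3
    c = g11 * math.cos(phi) + h11 * math.sin(phi)
    B_r = 2.0 * f * (g10 * math.cos(theta) + c * math.sin(theta))
    B_t = f * (g10 * math.sin(theta) - c * math.cos(theta))
    B_p = f * (g11 * math.sin(phi) - h11 * math.cos(phi))
    return B_r, B_t, B_p

# Axial dipole only, illustrative Jupiter-scale coefficient (~410,000 nT):
Bpole = dipole_field(1.0, 0.0, 0.0, 410000.0, 0.0, 0.0)
Beq = dipole_field(1.0, math.pi / 2, 0.0, 410000.0, 0.0, 0.0)
```

The full model extends these sums to degree and order 10 and beyond, with each degree n scaling as (a/r)^(n+2), which is why the higher-degree structure resolved by Juno is only visible close to the planet.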
CADDIS Volume 2. Sources, Stressors and Responses: Ionic Strength - Detailed Conceptual Diagram
Introduction to the ionic strength module, when to list ionic strength as a candidate cause, ways to measure ionic strength, simple and detailed conceptual diagrams for ionic strength, ionic strength module references and literature reviews.
CADDIS Volume 2. Sources, Stressors and Responses: Physical Habitat - Detailed Conceptual Diagram
Introduction to the Physical Habitat module, when to list Physical Habitat as a candidate cause, ways to measure Physical Habitat, simple and detailed conceptual diagrams for Physical Habitat, Physical Habitat module references and literature reviews.
Quadtree of TIN: a new algorithm of dynamic LOD
NASA Astrophysics Data System (ADS)
Zhang, Junfeng; Fei, Lifan; Chen, Zhen
2009-10-01
Currently, real-time visualization of large-scale digital elevation models mainly employs the regular GRID structure based on quadtrees and triangle-simplification methods based on triangulated irregular networks (TIN). Compared with GRID, TIN is a refined means of expressing the terrain surface in the computer, but its data structure is complex and it is difficult to realize view-dependent level-of-detail (LOD) representation quickly. GRID is a simple way to realize terrain LOD, but it produces a higher triangle count. A new algorithm, which takes full advantage of the merits of both methods, is presented in this paper. This algorithm combines TIN with a quadtree structure to realize view-dependent LOD control over irregular sampling point sets, and preserves detail according to the distance to the viewpoint and the geometric error of the terrain. Experiments indicate that this approach can generate an efficient quadtree triangulation hierarchy over any irregular sampling point set and achieve dynamic, visual multi-resolution performance of large-scale terrain in real time.
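The view-dependent control described above typically reduces to a per-node test of the form geometric-error / viewer-distance > tolerance. A schematic sketch (the halving of error per level and all thresholds are assumptions for illustration):

```python
import math

def refine(cx, cy, size, error, viewer, tau, depth=0, max_depth=8):
    """Count leaf nodes of a quadtree refined wherever error / distance > tau
    (the usual view-dependent LOD test); error is assumed to halve per level."""
    dist = math.hypot(cx - viewer[0], cy - viewer[1]) + 1e-9
    if depth >= max_depth or error / dist <= tau:
        return 1                                    # coarse enough: keep as leaf
    h = size / 4.0
    return sum(
        refine(cx + dx, cy + dy, size / 2.0, error / 2.0, viewer, tau,
               depth + 1, max_depth)
        for dx in (-h, h) for dy in (-h, h)
    )

far_leaves = refine(0.5, 0.5, 1.0, 0.1, viewer=(5.0, 5.0), tau=0.01)
near_leaves = refine(0.5, 0.5, 1.0, 0.1, viewer=(0.5, 0.5), tau=0.01)
```

Moving the viewer onto the terrain drives refinement near the viewpoint while distant regions stay coarse, which is the behavior the quadtree-of-TIN hierarchy exploits over irregular sample points.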
Tipping Points, Great and Small
NASA Astrophysics Data System (ADS)
Morrison, Foster
2010-12-01
The Forum by Jordan et al. [2010] addressed environmental problems of various scales in great detail, but getting the critical message through to the formulators of public policies requires going back to basics, namely, that exponential growth (of a population, an economy, or most anything else) is not sustainable. When have you heard any politician or economist from anywhere across the ideological spectrum say anything other than that more growth is essential? There is no need for computer models to demonstrate “limits to growth,” as was done in the 1960s. Of course, as one seeks more details, the complexity of modeling will rapidly outstrip the capabilities of both observation and computing. This is common with nonlinear systems, even simple ones. Thus, identifying all possible “tipping points,” as suggested by Jordan et al. [2010], and then stopping just short of them, is impractical if not impossible. The main thing needed to avoid environmental disasters is a bit of common sense.
Simulating the growth of a charge cloud for a microchannel plate detector
NASA Astrophysics Data System (ADS)
Siwal, Davinder; Wiggins, Blake; Desouza, Romualdo
2015-10-01
Position sensitive microchannel plate (MCP) detectors have a variety of applications in the fields of astronomy, medical imaging, neutron imaging, and ion beam tracking. Recently, a novel approach has been implemented to detect the position of an incident particle. The charge cloud produced by the MCP induces a signal on a wire harp placed between the MCP and an anode. On qualitative grounds it is clear that in this detector the induced signal shape depends on the size of the electron cloud. A detailed study has therefore been performed to investigate the size of the charge cloud within the MCP and its growth as it propagates from the MCP to the anode. A simple model has been developed to calculate the impact of charge repulsion on the growth of the electron cloud. Both the details of the model and its predictions will be presented. Supported by the US DOE NNSA under Award No. DE-NA0002012.
NASA Astrophysics Data System (ADS)
Rose, A.; McKee, J.; Weber, E.; Bhaduri, B. L.
2017-12-01
Leveraging decades of expertise in population modeling, and in response to growing demand for higher resolution population data, Oak Ridge National Laboratory is now generating LandScan HD at global scale. LandScan HD is conceived as a 90m resolution population distribution where modeling is tailored to the unique geography and data conditions of individual countries or regions by combining social, cultural, physiographic, and other information with novel geocomputation methods. Similarities among these areas are exploited in order to leverage existing training data and machine learning algorithms to rapidly scale development. Drawing on ORNL's unique set of capabilities, LandScan HD adapts highly mature population modeling methods developed for LandScan Global and LandScan USA, settlement mapping research and production in high-performance computing (HPC) environments, land use and neighborhood mapping through image segmentation, and facility-specific population density models. Adopting a flexible methodology to accommodate different geographic areas, LandScan HD accounts for the availability, completeness, and level of detail of relevant ancillary data. Beyond core population and mapped settlement inputs, these factors determine the model complexity for an area, requiring that for any given area, a data-driven model could support either a simple top-down approach, a more detailed bottom-up approach, or a hybrid approach.
Screening level risk assessment model for chemical fate and effects in the environment.
Arnot, Jon A; Mackay, Don; Webster, Eva; Southwood, Jeanette M
2006-04-01
A screening level risk assessment model is developed and described to assess and prioritize chemicals by estimating environmental fate and transport, bioaccumulation, and exposure to humans and wildlife for a unit emission rate. The most sensitive risk endpoint is identified and a critical emission rate is then calculated as a result of that endpoint being reached. Finally, this estimated critical emission rate is compared with the estimated actual emission rate as a risk assessment factor. This "back-tracking" process avoids the use of highly uncertain emission rate data as model input. The application of the model is demonstrated in detail for three diverse chemicals and in less detail for a group of 70 chemicals drawn from the Canadian Domestic Substances List. The simple Level II and the more complex Level III fate calculations are used to "bin" substances into categories of similar probable risk. The essential role of the model is to synthesize information on chemical and environmental properties within a consistent mass balance framework to yield an overall estimate of screening level risk with respect to the defined endpoint. The approach may be useful to identify and prioritize those chemicals of commerce that are of greatest potential concern and require more comprehensive modeling and monitoring evaluations in actual regional environments and food webs.
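The "back-tracking" step lends itself to a short numerical sketch. Assuming a linear mass-balance model, so that predicted concentration scales proportionally with emission rate, the critical emission rate follows from the unit-emission result by simple proportionality. Endpoint names and all numbers below are illustrative only, not values from the paper.

```python
def critical_emission_rate(unit_emission, conc_at_unit_emission, endpoint_thresholds):
    """Back-calculate the emission rate at which the most sensitive endpoint is just reached.

    Assumes a linear mass-balance model, so predicted concentration scales
    proportionally with the emission rate.
    """
    # Emission scale factor needed for each endpoint to be reached exactly.
    factors = {name: threshold / conc_at_unit_emission
               for name, threshold in endpoint_thresholds.items()}
    most_sensitive = min(factors, key=factors.get)
    return most_sensitive, unit_emission * factors[most_sensitive]

def risk_assessment_factor(actual_emission, critical_emission):
    # A factor above 1 flags the chemical for more comprehensive evaluation.
    return actual_emission / critical_emission

# Illustrative numbers: a unit emission of 1.0 yields a concentration of 2e-6.
endpoint, critical = critical_emission_rate(
    1.0, 2.0e-6, {"fish toxicity": 1.0e-4, "human intake": 5.0e-4})
raf = risk_assessment_factor(100.0, critical)
```

Here "fish toxicity" is the most sensitive endpoint, giving a critical emission rate of 50 and a risk assessment factor of 2 for an actual emission of 100, which would flag the chemical for closer evaluation.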
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.
2014-11-01
Purpose: Monte Carlo (MC) simulation methods have been widely used in patient dosimetry in computed tomography (CT), including estimating patient organ doses. However, most simulation methods have undergone a limited set of validations, often using homogeneous phantoms with simple geometries. As clinical scanning has become more complex and the use of tube current modulation (TCM) has become pervasive in the clinic, MC simulations should include these techniques in their methodologies and therefore should also be validated using a variety of phantoms with different shapes and material compositions to result in a variety of differently modulated tube current profiles. The purpose of this work is to perform the measurements and simulations to validate a Monte Carlo model under a variety of test conditions where fixed tube current (FTC) and TCM were used. Methods: A previously developed MC model for estimating dose from CT scans that models TCM, built using the platform of MCNPX, was used for CT dose quantification. In order to validate the suitability of this model to accurately simulate patient dose from FTC and TCM CT scan, measurements and simulations were compared over a wide range of conditions. Phantoms used for testing range from simple geometries with homogeneous composition (16 and 32 cm computed tomography dose index phantoms) to more complex phantoms including a rectangular homogeneous water equivalent phantom, an elliptical shaped phantom with three sections (where each section was a homogeneous, but different material), and a heterogeneous, complex geometry anthropomorphic phantom. Each phantom requires varying levels of x-, y- and z-modulation. Each phantom was scanned on a multidetector row CT (Sensation 64) scanner under the conditions of both FTC and TCM. Dose measurements were made at various surface and depth positions within each phantom. 
Simulations using each phantom were performed for FTC, detailed x–y–z TCM, and z-axis-only TCM to obtain dose estimates. This allowed direct comparisons between measured and simulated dose values under each condition of phantom, location, and scan to be made. Results: For FTC scans, the percent root mean square (RMS) difference between measurements and simulations was within 5% across all phantoms. For TCM scans, the percent RMS of the difference between measured and simulated values when using detailed TCM and z-axis-only TCM simulations was 4.5% and 13.2%, respectively. For the anthropomorphic phantom, the difference between TCM measurements and detailed TCM and z-axis-only TCM simulations was 1.2% and 8.9%, respectively. For FTC measurements and simulations, the percent RMS of the difference was 5.0%. Conclusions: This work demonstrated that the Monte Carlo model developed provided good agreement between measured and simulated values under both simple and complex geometries including an anthropomorphic phantom. This work also showed the increased dose differences for z-axis-only TCM simulations, where considerable modulation in the x–y plane was present due to the shape of the rectangular water phantom. Results from this investigation highlight details that need to be included in Monte Carlo simulations of TCM CT scans in order to yield accurate, clinically viable assessments of patient dosimetry.
The Phyre2 web portal for protein modelling, prediction and analysis
Kelley, Lawrence A; Mezulis, Stefans; Yates, Christopher M; Wass, Mark N; Sternberg, Michael JE
2017-01-01
Summary: Phyre2 is a suite of tools available on the web to predict and analyse protein structure, function and mutations. The focus of Phyre2 is to provide biologists with a simple and intuitive interface to state-of-the-art protein bioinformatics tools. Phyre2 replaces Phyre, the original version of the server for which we previously published a protocol. In this updated protocol, we describe Phyre2, which uses advanced remote homology detection methods to build 3D models, predict ligand binding sites, and analyse the effect of amino-acid variants (e.g. nsSNPs) for a user's protein sequence. Users are guided through results by a simple interface at a level of detail determined by them. This protocol will guide a user from submitting a protein sequence to interpreting the secondary and tertiary structure of their models, their domain composition and model quality. A range of additional available tools is described to find a protein structure in a genome, to submit a large number of sequences at once and to automatically run weekly searches for proteins that are difficult to model. The server is available at http://www.sbg.bio.ic.ac.uk/phyre2. A typical structure prediction will be returned between 30 minutes and 2 hours after submission. PMID:25950237
Homeopathic potentization based on nanoscale domains.
Czerlinski, George; Ypma, Tjalling
2011-12-01
The objectives of this study were to present a simple descriptive and quantitative model of how high potencies in homeopathy arise. The model begins with the mechanochemical production of hydrogen and hydroxyl radicals from water and the electronic stabilization of the resulting nanodomains of water molecules. The life of these domains is initially limited to a few days, but may extend to years when the electromagnetic characteristic of a homeopathic agent is copied onto the domains. This information is transferred between the original agent and the nanodomains, and also between previously imprinted nanodomains and new ones. The differential equations previously used to describe these processes are replaced here by exponential expressions, corresponding to simplified model mechanisms. Magnetic stabilization is also involved, since these long-lived domains apparently require the presence of the geomagnetic field. Our model incorporates this factor in the formation of the long-lived compound. Numerical simulation and graphs show that the potentization mechanism can be described quantitatively by a very simplified mechanism. The omitted factors affect only the fine structure of the kinetics. Measurements of pH changes upon absorption of different electromagnetic frequencies indicate that about 400 nanodomains polymerize to form one cooperating unit. Singlet excited states of some compounds lead to dramatic changes in their hydrogen ion dissociation constant, explaining this pH effect and suggesting that homeopathic information is imprinted as higher singlet excited states. A simple description is provided of the process of potentization in homeopathic dilutions. With the exception of minor details, this simple model replicates the results previously obtained from a more complex model. While excited states are short lived in isolated molecules, they become long lived in nanodomains that form coherent cooperative aggregates controlled by the geomagnetic field. 
These domains either slowly emit biophotons or perform specific biochemical work at their target.
NASA Astrophysics Data System (ADS)
Ke, Haohao; Ondov, John M.; Rogge, Wolfgang F.
2013-12-01
Composite chemical profiles of motor vehicle emissions were extracted from ambient measurements at a near-road site in Baltimore during a windless traffic episode in November, 2002, using four independent approaches, i.e., simple peak analysis, windless model-based linear regression, PMF, and UNMIX. Although the profiles are in general agreement, the windless-model-based profile treatment more effectively removes interference from non-traffic sources and is deemed to be more accurate for many species. In addition to abundances of routine pollutants (e.g., NOx, CO, PM2.5, EC, OC, sulfate, and nitrate), 11 particle-bound metals and 51 individual traffic-related organic compounds (including n-alkanes, PAHs, oxy-PAHs, hopanes, alkylcyclohexanes, and others) were included in the modeling.
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.
Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M
2016-12-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
Teaching Einsteinian physics at schools: part 1, models and analogies for relativity
NASA Astrophysics Data System (ADS)
Kaur, Tejinder; Blair, David; Moschilla, John; Stannard, Warren; Zadnik, Marjan
2017-11-01
The Einstein-First project aims to change the paradigm of school science teaching through the introduction of modern Einsteinian concepts of space and time, gravity and quanta at an early age. These concepts are rarely taught to school students despite their central importance to modern science and technology. The key to implementing the Einstein-First curriculum is the development of appropriate models and analogies. This paper is the first part of a three-paper series. It presents the conceptual foundation of our approach, based on simple physical models and analogies, followed by a detailed description of the models and analogies used to teach concepts of general and special relativity. Two accompanying papers address the teaching of quantum physics (Part 2) and research outcomes (Part 3).
Value of the distant future: Model-independent results
NASA Astrophysics Data System (ADS)
Katz, Yuri A.
2017-01-01
This paper shows that the model-independent account of correlations in an interest rate process or a log-consumption growth process leads to declining long-term tails of discount curves. Under the assumption of an exponentially decaying memory in fluctuations of risk-free real interest rates, I derive the analytical expression for an apt value of the long run discount factor and provide a detailed comparison of the obtained result with the outcome of the benchmark risk-free interest rate models. Utilizing the standard consumption-based model with an isoelastic power utility of the representative economic agent, I derive the non-Markovian generalization of the Ramsey discounting formula. Obtained analytical results allowing simple calibration, may augment the rigorous cost-benefit and regulatory impact analysis of long-term environmental and infrastructure projects.
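The declining long-term tail described above can be reproduced with a short calculation. The sketch below assumes the real rate is a constant mean plus a stationary Gaussian AR(1) fluctuation, a specific choice consistent with (but not identical to) the paper's exponentially decaying memory assumption; for Gaussian rates the certainty-equivalent rate is r̄ − Var/(2T), which falls with horizon T.

```python
def effective_discount_rate(T, r_bar, sigma, phi):
    """Certainty-equivalent discount rate over an integer horizon of T years.

    The real rate is r_bar plus a stationary Gaussian AR(1) fluctuation with
    standard deviation sigma and one-year autocorrelation phi (exponentially
    decaying memory). For Gaussian rates, D(T) = E[exp(-sum r_t)]
    = exp(-r_bar*T + Var/2), so the effective rate is r_bar - Var/(2T).
    """
    # Variance of the T-year sum of an AR(1) process with lag-k correlation phi**k.
    var = sigma**2 * (T + 2.0 * sum((T - k) * phi**k for k in range(1, T)))
    return r_bar - var / (2.0 * T)

short_rate = effective_discount_rate(5, r_bar=0.03, sigma=0.01, phi=0.9)
long_rate = effective_discount_rate(100, r_bar=0.03, sigma=0.01, phi=0.9)
```

With these illustrative parameters the 100-year effective rate sits below the 5-year rate, and both below the 3% mean, i.e. the discount curve's long tail declines.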
The cosmic gamma-ray background from Type Ia supernovae
NASA Technical Reports Server (NTRS)
The, Lih-Sin; Leising, Mark D.; Clayton, Donald D.
1993-01-01
We present an improved calculation of the cumulative gamma-ray spectrum of Type Ia supernovae during the history of the universe. We follow Clayton & Ward (1975) in using a few Friedmann models and two simple histories of the average galaxian nucleosynthesis rate, but we improve their calculation by modeling the gamma-ray scattering in detailed numerical models of SN Ia's. The results confirm that near 1 MeV the SN Ia background may dominate, and that it is potentially observable, with high scientific importance. A very accurate measurement of the cosmic background spectrum between 0.1 and 1.0 MeV may reveal the turn-on time and the evolution of the rate of Type Ia supernova nucleosynthesis in the universe.
Emergence of running dark energy from polynomial f(R) theory in Palatini formalism
NASA Astrophysics Data System (ADS)
Szydłowski, Marek; Stachowski, Aleksander; Borowiec, Andrzej
2017-09-01
We consider FRW cosmology in the modified framework f(R) = R + γR^2 + δR^3. The Palatini approach reduces its dynamics to a simple generalization of the Friedmann equation, and we study the dynamics in a two-dimensional phase space in some detail. After reformulation in the Einstein frame, the model reduces to an FRW cosmological model with a homogeneous scalar field and vanishing kinetic energy term. The potential of this field determines the running cosmological constant term as a function of the Ricci scalar. As a result we obtain an emergent dark energy parametrization from the covariant theory. We also study singularities of the model and demonstrate that in the Einstein frame some undesirable singularities disappear.
Replication of Cancellation Orders Using First-Passage Time Theory in Foreign Currency Market
NASA Astrophysics Data System (ADS)
Boilard, Jean-François; Kanazawa, Kiyoshi; Takayasu, Hideki; Takayasu, Misako
Our research focuses on the annihilation dynamics of limit orders in a spot foreign currency market for various currency pairs. We analyze the cancellation order distribution conditioned on the normalized distance from the mid-price, where the normalized distance is defined as the final distance divided by the initial distance. To reproduce the real data, we introduce two simple models that assume the market price moves randomly and cancellation occurs either after a fixed time t or following a Poisson process. Our model qualitatively reproduces the basic statistical properties of cancellation orders in the data when limit orders are cancelled according to the Poisson process. We briefly discuss the implications of our findings for the construction of more detailed microscopic models.
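The Poisson-cancellation variant can be sketched as a short Monte Carlo: the mid-price performs a random walk, and each order's lifetime is geometric (the discrete-time analogue of a Poisson clock). The parameter values and discretization below are illustrative assumptions, not the paper's calibration.

```python
import random

def normalized_cancel_distances(n_orders=10000, cancel_rate=0.1, d0=10.0, seed=1):
    """Monte Carlo sketch of the Poisson-cancellation model.

    The mid-price moves +/-1 tick per step, so the distance between the order
    and the mid-price performs a random walk; the order is cancelled after a
    geometric lifetime with per-step probability cancel_rate. Each sample is
    the normalized distance: final distance / initial distance d0.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n_orders):
        distance = d0
        while rng.random() > cancel_rate:            # order survives this step
            distance += 1.0 if rng.random() < 0.5 else -1.0
        samples.append(abs(distance) / d0)           # cancelled at this step
    return samples

samples = normalized_cancel_distances()
mean_nd = sum(samples) / len(samples)
```

Because the walk is symmetric and rarely crosses the mid-price at this initial depth, the normalized distance distribution is centred near 1, with a spread controlled by the ratio of the cancellation rate to the price volatility.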
Gamma-ray bursts from internal shocks in a relativistic wind: a hydrodynamical study
NASA Astrophysics Data System (ADS)
Daigne, F.; Mochkovitch, R.
2000-06-01
The internal shock model for gamma-ray bursts involves shocks taking place in a relativistic wind with a very inhomogeneous initial distribution of the Lorentz factor. We have developed a 1D Lagrangian hydrocode to follow the evolution of such a wind, and the results we have obtained are compared to those of a simpler model presented in a recent paper (Daigne & Mochkovitch) where all pressure waves are suppressed in the wind so that shells with different velocities interact only by direct collisions. The detailed hydrodynamical calculation essentially confirms the conclusion of the simple model: the main temporal and spectral properties of gamma-ray bursts can be reproduced by internal shocks in a relativistic wind.
NONMEMory: a run management tool for NONMEM.
Wilkins, Justin J
2005-06-01
NONMEM is an extremely powerful tool for nonlinear mixed-effect modelling and simulation of pharmacokinetic and pharmacodynamic data. However, it is a console-based application whose output does not lend itself to rapid interpretation or efficient management. NONMEMory has been created to be a comprehensive project manager for NONMEM, providing detailed summary, comparison and overview of the runs comprising a given project, including the display of output data, simple post-run processing, fast diagnostic plots and run output management, complementary to other available modelling aids. Analysis time ought not to be spent on trivial tasks, and NONMEMory's role is to eliminate these as far as possible by increasing the efficiency of the modelling process. NONMEMory is freely available from http://www.uct.ac.za/depts/pha/nonmemory.php.
The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.
White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K
2016-12-01
We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes, it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model.
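Sloppiness is commonly illustrated with a sum of decaying exponentials rather than the EGFR or DNA-repair models of the paper. The sketch below computes the eigenvalues of the Fisher information matrix (J^T J) for such a toy model; the wide eigenvalue spread is what makes some parameter combinations practically unidentifiable. Times and parameter values are illustrative.

```python
import numpy as np

def fim_eigenvalues(theta, t):
    """Eigenvalues (ascending) of the Fisher information J^T J for the toy
    model y(t) = exp(-theta1 * t) + exp(-theta2 * t) observed at times t."""
    th1, th2 = theta
    # Jacobian of the model output with respect to the two decay rates.
    J = np.column_stack([-t * np.exp(-th1 * t), -t * np.exp(-th2 * t)])
    return np.linalg.eigvalsh(J.T @ J)

t = np.linspace(0.1, 5.0, 50)
evals = fim_eigenvalues((1.0, 1.2), t)   # two nearby decay rates: a sloppy pair
spread = evals[-1] / evals[0]            # stiff-to-sloppy eigenvalue ratio
```

The two Jacobian columns are nearly collinear for nearby decay rates, so the stiff direction (the sum of the rates) is well constrained while the sloppy direction (their difference) is orders of magnitude less so.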
Introduction to the unspecified toxic chemicals module, when to list toxic chemicals as a candidate cause, ways to measure toxic chemicals, simple and detailed conceptual diagrams for toxic chemicals, toxic chemicals module references and literature reviews.
NASA Astrophysics Data System (ADS)
Armstrong, Robert A.
2003-11-01
Phytoplankton species interact through competition for light and nutrients; they also interact through grazers they hold in common. Both interactions are expected to be size-dependent: smaller phytoplankton species will be at an advantage when nutrients are scarce due to surface/volume considerations, while species that are similar in size are more likely to be consumed by grazers held in common than are species that differ greatly in size. While phytoplankton competition for nutrients and light has been extensively characterized, size-based interaction through shared grazers has not been represented systematically. The latter situation is particularly unfortunate because small changes in community structure can give rise to large changes in ecosystem dynamics and, in inverse modeling, to large changes in estimated parameter values. A simple, systematic way to represent phytoplankton interaction through shared grazers, one resistant to unintended idiosyncrasy of model construction yet capable of representing scientifically justifiable idiosyncrasy, would aid greatly in the modeling process. Here I develop a model structure that allows systematic representation of plankton interaction. In this model, the zooplankton community is represented as a continuous size spectrum, while phytoplankton species can be represented individually. The mechanistic basis of the model is a shift in the zooplankton community from carnivory to omnivory to herbivory as phytoplankton density increases. I discuss two limiting approximations in some detail, and fit both to data from the IronEx II experiment. The first limiting case represents a community with no grazer-based interaction among phytoplankton species; this approximation illuminates the general structure of the model. In particular, the zooplankton spectrum can be viewed as the analog of a control rod in a nuclear reactor, which prevents (or fails to prevent) an exponential bloom of phytoplankton. 
A second, more complex limiting case allows more general interaction of phytoplankton species along a size axis. This latter case would be suitable for describing competition among species with distinct biogeochemical roles, or between species that cause harmful algal blooms and those that do not. The model structure as a whole is therefore simple enough to guide thinking, yet detailed enough to allow quantitative prediction.
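The "control rod" picture, grazing preventing an exponential phytoplankton bloom, can be caricatured with a minimal growth-minus-grazing system. The functional forms and parameter values below are illustrative assumptions for exposition, not the model structure of the paper.

```python
def simulate_bloom(mu=1.0, g=0.8, eps=0.3, m=0.2, p0=0.01, z0=0.1,
                   dt=0.01, steps=2000):
    """Euler sketch: phytoplankton P grows at rate mu and is grazed at rate
    g*Z by a zooplankton pool Z (the 'control rod'), which grows on what it
    eats with efficiency eps and dies at rate m.
    """
    P, Z = p0, z0
    trajectory = []
    for _ in range(steps):
        dP = mu * P - g * Z * P
        dZ = eps * g * Z * P - m * Z
        P += dt * dP
        Z += dt * dZ
        trajectory.append(P)
    return trajectory

grazed = simulate_bloom()            # grazing response caps the bloom
ungrazed = simulate_bloom(g=0.0)     # no grazing: exponential blow-up
```

With grazing switched off the phytoplankton grows by many orders of magnitude over the same interval; with the zooplankton "control rod" engaged, the bloom is clamped to a bounded excursion.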
NASA Astrophysics Data System (ADS)
Nelson, Jonathan M.; Shimizu, Yasuyuki; Abe, Takaaki; Asahi, Kazutake; Gamou, Mineyuki; Inoue, Takuya; Iwasaki, Toshiki; Kakinuma, Takaharu; Kawamura, Satomi; Kimura, Ichiro; Kyuka, Tomoko; McDonald, Richard R.; Nabi, Mohamed; Nakatsugawa, Makoto; Simões, Francisco R.; Takebayashi, Hiroshi; Watanabe, Yasunori
2016-07-01
This paper describes a new, public-domain interface for modeling flow, sediment transport and morphodynamics in rivers and other geophysical flows. The interface is named after the International River Interface Cooperative (iRIC), the group that constructed the interface and many of the current solvers included in iRIC. The interface is entirely free to any user and currently houses thirteen models ranging from simple one-dimensional models through three-dimensional large-eddy simulation models. Solvers are only loosely coupled to the interface so it is straightforward to modify existing solvers or to introduce other solvers into the system. Six of the most widely-used solvers are described in detail including example calculations to serve as an aid for users choosing what approach might be most appropriate for their own applications. The example calculations range from practical computations of bed evolution in natural rivers to highly detailed predictions of the development of small-scale bedforms on an initially flat bed. The remaining solvers are also briefly described. Although the focus of most solvers is coupled flow and morphodynamics, several of the solvers are also specifically aimed at providing flood inundation predictions over large spatial domains. Potential users can download the application, solvers, manuals, and educational materials including detailed tutorials at www.i-ric.org. The iRIC development group encourages scientists and engineers to use the tool and to consider adding their own methods to the iRIC suite of tools.
Evaluation of the CEAS model for barley yields in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Barnett, T. L. (Principal Investigator)
1981-01-01
The CEAS yield model is based upon multiple regression analysis at the CRD and state levels. For the historical time series, yield is regressed on a set of variables derived from monthly mean temperature and monthly precipitation. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-79) demonstrated that biases are small and that performance, as indicated by the root mean square errors, is acceptable for the intended application; however, model response for individual years, particularly unusual years, is not very reliable and shows some large errors. The model is objective, adequate, timely, simple and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.
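The regression structure described, weather covariates plus a piecewise-linear technology trend, can be sketched with ordinary least squares. The covariate set below is a simplified stand-in for the model's derived monthly terms, and the hinge-term knot is a hypothetical parameter; the data are synthetic.

```python
import numpy as np

def fit_yield_model(year, temp, precip, yield_obs, knot=None):
    """Least-squares fit of yield on weather covariates plus a linear
    technology trend; an optional hinge term at `knot` makes the trend
    piecewise linear.
    """
    trend = year - year.min()
    columns = [np.ones_like(trend), trend, temp, precip]
    if knot is not None:
        columns.append(np.maximum(0.0, year - knot))  # trend break after knot
    X = np.column_stack(columns)
    beta, *_ = np.linalg.lstsq(X, yield_obs, rcond=None)
    return X, beta

# Synthetic, noise-free data so the coefficients are recovered exactly.
rng = np.random.default_rng(0)
year = np.arange(1960, 1980, dtype=float)
temp = rng.normal(20.0, 2.0, year.size)
precip = rng.normal(60.0, 10.0, year.size)
yields = 10.0 + 0.3 * (year - year.min()) + 0.5 * temp - 0.02 * precip
X, beta = fit_yield_model(year, temp, precip, yields)
```

On noise-free synthetic data the fitted coefficients reproduce the generating values, which is a useful sanity check before applying such a model to a real historical series.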
Optimisation of a Generic Ionic Model of Cardiac Myocyte Electrical Activity
Guo, Tianruo; Al Abed, Amr; Lovell, Nigel H.; Dokos, Socrates
2013-01-01
A generic cardiomyocyte ionic model, whose complexity lies between a simple phenomenological formulation and a biophysically detailed ionic membrane current description, is presented. The model provides a user-defined number of ionic currents, employing two-gate Hodgkin-Huxley type kinetics. Its generic nature allows accurate reconstruction of action potential waveforms recorded experimentally from a range of cardiac myocytes. Using a multiobjective optimisation approach, the generic ionic model was optimised to accurately reproduce multiple action potential waveforms recorded from central and peripheral sinoatrial nodes and right atrial and left atrial myocytes from rabbit cardiac tissue preparations, under different electrical stimulus protocols and pharmacological conditions. When fitted simultaneously to multiple datasets, the time course of several physiologically realistic ionic currents could be reconstructed. Model behaviours tend to be well identified when extra experimental information is incorporated into the optimisation. PMID:23710254
Global dynamics in a stoichiometric food chain model with two limiting nutrients.
Chen, Ming; Fan, Meng; Kuang, Yang
2017-07-01
Ecological stoichiometry studies the balance of energy and multiple chemical elements in ecological interactions to establish how nutrient content affects food-web dynamics and nutrient cycling in ecosystems. In this study, we formulate a food chain with two limiting nutrients in the form of a stoichiometric population model. A comprehensive global analysis of the rich dynamics of the targeted model is carried out both analytically and numerically. Chaotic dynamics are observed in this simple stoichiometric food chain model and are compared with those of a traditional model without stoichiometry. The detailed comparison reveals that stoichiometry can reduce the parameter space for chaotic dynamics. Our findings also show that decreasing producer production efficiency may have only a small effect on consumer growth but a more profound impact on top-predator growth. Copyright © 2017 Elsevier Inc. All rights reserved.
Dynamic modeling of wheeled planetary rovers: A model based on the pseudo-coordinates approach
NASA Astrophysics Data System (ADS)
Chen, Feng; Genta, Giancarlo
2012-12-01
The paper deals with the dynamic modeling of wheeled planetary rovers operating on rough terrain. The dedicated model presented here, although kept as simple as possible, includes the effect of nonlinearities and models the suspensions in a realistic, albeit simplified, way. It can be interfaced with a model of the control system so that different control strategies can be studied in detail and, in the case of teleoperated rovers, it can be used as a simulator for training the operators. Different implementations, with different degrees of complexity, are presented and compared with each other so that the user can simulate the dynamics of the rover, making a tradeoff between simulation accuracy and computer time. The model allows the study of the effects of terrain characteristics, ground irregularities, and operating speed on the behavior of the rover. Some examples dealing with rovers of different configurations conclude the paper.
A Coarse-Grained Protein Model in a Water-like Solvent
NASA Astrophysics Data System (ADS)
Sharma, Sumit; Kumar, Sanat K.; Buldyrev, Sergey V.; Debenedetti, Pablo G.; Rossky, Peter J.; Stanley, H. Eugene
2013-05-01
Simulations employing an explicit atom description of proteins in solvent can be computationally expensive. On the other hand, coarse-grained protein models in implicit solvent miss essential features of the hydrophobic effect, especially its temperature dependence, and have limited ability to capture the kinetics of protein folding. We propose a free space two-letter protein (``H-P'') model in a simple, but qualitatively accurate description for water, the Jagla model, which coarse-grains water into an isotropically interacting sphere. Using Monte Carlo simulations, we design protein-like sequences that can undergo a collapse, exposing the ``Jagla-philic'' monomers to the solvent, while maintaining a ``hydrophobic'' core. This protein-like model manifests heat and cold denaturation in a manner that is reminiscent of proteins. While this protein-like model lacks the details that would introduce secondary structure formation, we believe that these ideas represent a first step in developing a useful, but computationally expedient, means of modeling proteins.
Thermal performance modeling of NASA s scientific balloons
NASA Astrophysics Data System (ADS)
Franco, H.; Cathey, H.
The flight performance of a scientific balloon is highly dependent on the interaction between the balloon and its environment. The balloon is a thermal vehicle. Modeling a scientific balloon's thermal performance has proven to be a difficult analytical task. Most previous thermal models have attempted these analyses by using either a bulk thermal model approach, or by simplified representations of the balloon. These approaches to date have provided reasonable, but not very accurate results. Improvements have been made in recent years using thermal analysis tools developed for the thermal modeling of spacecraft and other sophisticated heat transfer problems. These tools, which now allow for accurate modeling of highly transmissive materials, have been applied to the thermal analysis of NASA's scientific balloons. A research effort has been started that utilizes the "Thermal Desktop" add-on to AutoCAD. This paper will discuss the development of thermal models for both conventional and Ultra Long Duration super-pressure balloons. This research effort has focused on incremental analysis stages of development to assess the accuracy of the tool and the required model resolution to produce usable data. The first stage balloon thermal analyses started with simple spherical balloon models with a limited number of nodes, and expanded the number of nodes to determine required model resolution. These models were then modified to include additional details such as load tapes. The second stage analyses looked at natural shaped Zero Pressure balloons. Load tapes were then added to these shapes, again with the goal of determining the required modeling accuracy by varying the number of gores. The third stage, following the same steps as the Zero Pressure balloon efforts, was directed at modeling super-pressure pumpkin shaped balloons.
The results were then used to develop analysis guidelines and an approach for modeling balloons for both simple first order estimates and detailed full models. The development of the radiative environment and program input files, the development of the modeling techniques for balloons, and the development of appropriate data output handling techniques for both the raw data and data plots will be discussed. A general guideline to match predicted balloon performance with known flight data will also be presented. One long-term goal of this effort is to develop simplified approaches and techniques to include results in performance codes being developed.
A Simple and Resource-efficient Setup for the Computer-aided Drug Design Laboratory.
Moretti, Loris; Sartori, Luca
2016-10-01
Undertaking modelling investigations for Computer-Aided Drug Design (CADD) requires a proper environment. In principle, this could be done on a single computer, but the reality of a drug discovery program requires robustness and high-throughput computing (HTC) to efficiently support the research. Therefore, a more capable alternative is needed, but its implementation has no widespread solution. Here, the realization of such a computing facility is discussed; from general layout to technical details, all aspects are covered. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Qiu, T.; Wu, X. L.; Mei, Y. F.; Chu, P. K.; Siu, G. G.
2005-09-01
Unique silver dendritic nanostructures, with stems, branches, and leaves, were synthesized by self-organization via a simple electroless metal deposition method in a conventional autoclave containing aqueous HF and AgNO3 solution. Their growth mechanisms are discussed in detail on the basis of a self-assembled localized microscopic electrochemical cell model. A process of diffusion-limited aggregation is suggested for the formation of the silver dendritic nanostructures. This nanostructured material has great potential as a building block for assembling next-generation miniature functional devices.
Manufacturing of diamond windows for synchrotron radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schildkamp, W.; Nikitina, L.
2012-09-15
A new diamond window construction is presented and explicit manufacturing details are given. This window will increase the power dissipation by about a factor of 4 over present-day state-of-the-art windows, to absorb 600 W of power. This power will be generated by in-vacuum undulators with the storage ring ALBA operating at a design current of 400 mA. Extensive finite element (FE) calculations are included to predict the window's behavior, accompanied by explanations for the chosen boundary conditions. A simple linear model was used to cross-check the FE calculations.
NASA Astrophysics Data System (ADS)
Hanna, James; Chakrabarti, Brato
2015-11-01
Slender structures live in fluid flows across many scales, from towed instruments to plant blades to microfluidic valves. The present work details a simple model of a flexible structure in a uniform flow. We present analytical solutions for the translating, axially flowing equilibria of strings subjected to a uniform body force and linear drag forces. This is an extension of the classical catenaries to a five-parameter family of solutions, represented as trajectories in angle-curvature ``phase space.'' Limiting cases include neutrally buoyant towed cables and freely sedimenting flexible filaments. Now at University of California, San Diego.
G-Jitter Effects in Protein Crystal Growth - A Numerical Study
NASA Technical Reports Server (NTRS)
Ramachandran, N.; Baugher, C. R.
1995-01-01
The impact of spacecraft acceleration environment on Protein Crystal Growth (PCG) is studied. A brief overview of the Space Shuttle acceleration environment is provided followed by a simple scaling procedure used to obtain estimates of the flow and concentration field characteristics in PCG. A detailed two-dimensional numerical model is then used to simulate the PCG system response to different disturbance scenarios; viz. residual g effects, impulse type disturbances and oscillatory inputs. The results show that PCG is susceptible to g-jitter and is a good candidate for vibration isolation.
NASA Astrophysics Data System (ADS)
Filinov, A.; Bonitz, M.; Loffhagen, D.
2018-06-01
A new combination of first principle molecular dynamics (MD) simulations with a rate equation model presented in the preceding paper (paper I) is applied to analyze in detail the scattering of argon atoms from a platinum (111) surface. The combined model is based on a classification of all atom trajectories according to their energies into trapped, quasi-trapped and scattering states. The number of particles in each of the three classes obeys coupled rate equations. The coefficients in the rate equations are the transition probabilities between these states which are obtained from MD simulations. While these rates are generally time-dependent, after a characteristic time scale t E of several tens of picoseconds they become stationary allowing for a rather simple analysis. Here, we investigate this time scale by analyzing in detail the temporal evolution of the energy distribution functions of the adsorbate atoms. We separately study the energy loss distribution function of the atoms and the distribution function of in-plane and perpendicular energy components. Further, we compute the sticking probability of argon atoms as a function of incident energy, angle and lattice temperature. Our model is important for plasma-surface modeling as it allows accurate simulations to be extended to longer time scales.
Automated adaptive inference of phenomenological dynamical models
NASA Astrophysics Data System (ADS)
Daniels, Bryan
Understanding the dynamics of biochemical systems can seem impossibly complicated at the microscopic level: detailed properties of every molecular species, including those that have not yet been discovered, could be important for producing macroscopic behavior. The profusion of data in this area has raised the hope that microscopic dynamics might be recovered in an automated search over possible models, yet the combinatorial growth of this space has limited these techniques to systems that contain only a few interacting species. We take a different approach inspired by coarse-grained, phenomenological models in physics. Akin to a Taylor series producing Hooke's Law, forgoing microscopic accuracy allows us to constrain the search over dynamical models to a single dimension. This makes it feasible to infer dynamics with very limited data, including cases in which important dynamical variables are unobserved. We name our method Sir Isaac after its ability to infer the dynamical structure of the law of gravitation given simulated planetary motion data. Applying the method to output from a microscopically complicated but macroscopically simple biological signaling model, it is able to adapt the level of detail to the amount of available data. Finally, using nematode behavioral time series data, the method discovers an effective switch between behavioral attractors after the application of a painful stimulus.
The Top 10 List of Gravitational Lens Candidates from the HUBBLE SPACE TELESCOPE Medium Deep Survey
NASA Astrophysics Data System (ADS)
Ratnatunga, Kavan U.; Griffiths, Richard E.; Ostrander, Eric J.
1999-05-01
A total of 10 good candidates for gravitational lensing have been discovered in the WFPC2 images from the Hubble Space Telescope (HST) Medium Deep Survey (MDS) and archival primary observations. These candidate lenses are unique HST discoveries, i.e., they are faint systems with subarcsecond separations between the lensing objects and the lensed source images. Most of them are difficult objects for ground-based spectroscopic confirmation or for measurement of the lens and source redshifts. Seven are ``strong lens'' candidates that appear to have multiple images of the source. Three are cases in which the single image of the source galaxy has been significantly distorted into an arc. The first two quadruply lensed candidates were reported by Ratnatunga et al. We report on the subsequent eight candidates and describe them with simple models based on the assumption of singular isothermal potentials. Residuals from the simple models for some of the candidates indicate that a more complex model for the potential will probably be required to explain the full structural detail of the observations once they are confirmed to be lenses. We also discuss the effective survey area that was searched for these candidate lens objects.
A Simple Exploration of Complexity at the Climate-Weather-Social-Conflict Nexus
NASA Astrophysics Data System (ADS)
Shaw, M.
2017-12-01
The conceptualization, exploration, and prediction of the interplay between climate, weather, important resources, and social, economic, and hence political human behavior is cast and analyzed in terms familiar from statistical physics and nonlinear dynamics. A simple threshold toy model, formulated from efforts in the sociophysics literature (specifically, a model akin to spin-glass depictions of human behavior), emulates human tendencies either to actively engage in responses deriving partly from environmental circumstances or to maintain some semblance of the status quo. Threshold switching of individual and collective dynamics is influenced by relatively more detailed weather and land-surface (hydrological) analyses via a land data assimilation system (a custom rendition of the NASA GSFC Land Information System). The sensitivity of human-system parameters (e.g., individual and collective switching) to hydroclimatology is explored to investigate overall system behavior: fixed points/equilibria, oscillations, and bifurcations of systems composed of human interactions and responses to climate and weather through, e.g., agriculture. We discuss implications for the conceivable impacts of climate change and associated natural disasters on socioeconomics, politics, and power transfer, drawing on relatively recent literature concerning human conflict.
A simple phenomenological model for grain clustering in turbulence
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.
2016-01-01
We propose a simple model for density fluctuations of aerodynamic grains embedded in a turbulent, gravitating gas disc. The model combines a calculation for the behaviour of a group of grains encountering a single turbulent eddy with a hierarchical approximation of the eddy statistics. This makes analytic predictions for a range of quantities including: distributions of grain densities, power spectra and correlation functions of fluctuations, and maximum grain densities reached. We predict how these scale as a function of grain drag time t_s, spatial scale, grain-to-gas mass ratio ρ̃, strength of turbulence α, and detailed disc properties. We test these against numerical simulations with various turbulence-driving mechanisms. The simulations agree well with the predictions, spanning t_s Ω ∼ 10^-4-10, ρ̃ ∼ 0-3, α ∼ 10^-10-10^-2. Results from `turbulent concentration' simulations and laboratory experiments are also predicted as a special case. Vortices on a wide range of scales disperse and concentrate grains hierarchically. For small grains this is most efficient in eddies with turnover time comparable to the stopping time, but fluctuations are also damped by local gas-grain drift. For large grains, shear and gravity lead to a much broader range of eddy scales driving fluctuations, with most power on the largest scales. The grain density distribution has a log-Poisson shape, with fluctuations for large grains up to factors ≳1000. We provide simple analytic expressions for the predictions, and discuss implications for planetesimal formation, grain growth, and the structure of turbulence.
The noisy edge of traveling waves
Hallatschek, Oskar
2011-01-01
Traveling waves are ubiquitous in nature and control the speed of many important dynamical processes, including chemical reactions, epidemic outbreaks, and biological evolution. Despite their fundamental role in complex systems, traveling waves remain elusive because they are often dominated by rare fluctuations in the wave tip, which have defied any rigorous analysis so far. Here, we show that by adjusting nonlinear model details, noisy traveling waves can be solved exactly. The moment equations of these tuned models are closed and have a simple analytical structure resembling the deterministic approximation supplemented by a nonlocal cutoff term. The peculiar form of the cutoff shapes the noisy edge of traveling waves and is critical for the correct prediction of the wave speed and its fluctuations. Our approach is illustrated and benchmarked using the example of fitness waves arising in simple models of microbial evolution, which are highly sensitive to number fluctuations. We demonstrate explicitly how these models can be tuned to account for finite population sizes and determine how quickly populations adapt as a function of population size and mutation rates. More generally, our method is shown to apply to a broad class of models, in which number fluctuations are generated by branching processes. Because of this versatility, the method of model tuning may serve as a promising route toward unraveling universal properties of complex discrete particle systems. PMID:21187435
Simple Rules Govern the Patterns of Arctic Sea Ice Melt Ponds
NASA Astrophysics Data System (ADS)
Popović, Predrag; Cael, B. B.; Silber, Mary; Abbot, Dorian S.
2018-04-01
Climate change, amplified in the far north, has led to rapid sea ice decline in recent years. In the summer, melt ponds form on the surface of Arctic sea ice, significantly lowering the ice reflectivity (albedo) and thereby accelerating ice melt. Pond geometry controls the details of this crucial feedback; however, a reliable model of pond geometry does not currently exist. Here we show that a simple model of voids surrounding randomly sized and placed overlapping circles reproduces the essential features of pond patterns. The only two model parameters, characteristic circle radius and coverage fraction, are chosen by comparing, between the model and the aerial photographs of the ponds, two correlation functions which determine the typical pond size and their connectedness. Using these parameters, the void model robustly reproduces the ponds' area-perimeter and area-abundance relationships over more than 6 orders of magnitude. By analyzing the correlation functions of ponds on several dates, we also find that the pond scale and the connectedness are surprisingly constant across different years and ice types. Moreover, we find that ponds resemble percolation clusters near the percolation threshold. These results demonstrate that the geometry and abundance of Arctic melt ponds can be simply described, which can be exploited in future models of Arctic melt ponds that would improve predictions of the response of sea ice to Arctic warming.
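The void construction described above can be sketched numerically. This is a minimal illustration under assumed parameters, not the authors' code: ice is the union of randomly placed, randomly sized overlapping discs, melt ponds are the uncovered voids, and for such a Boolean disc model the expected void fraction is exp(-λπE[R²]).

```python
import numpy as np

rng = np.random.default_rng(1)
L = 1.0                      # periodic domain side, arbitrary units
lam = 60.0                   # disc number density per unit area (assumed)
n = rng.poisson(lam * L * L)
cx, cy = rng.uniform(0, L, n), rng.uniform(0, L, n)
rad = rng.uniform(0.04, 0.08, n)     # assumed disc-radius distribution

# Rasterize the discs on a grid and measure the void (pond) fraction.
g = 512
x, y = np.meshgrid(np.linspace(0, L, g, endpoint=False),
                   np.linspace(0, L, g, endpoint=False))
covered = np.zeros((g, g), dtype=bool)
for xi, yi, ri in zip(cx, cy, rad):
    dx = np.minimum(np.abs(x - xi), L - np.abs(x - xi))   # periodic metric
    dy = np.minimum(np.abs(y - yi), L - np.abs(y - yi))
    covered |= dx * dx + dy * dy <= ri * ri
void_frac = 1.0 - covered.mean()

# Analytic expectation for uniform radii on [a, b]: E[R^2] = (a^2 + ab + b^2)/3.
p_void = np.exp(-lam * np.pi * (0.04**2 + 0.04 * 0.08 + 0.08**2) / 3)
```

Labeling the connected components of the uncovered region would then give the pond area-perimeter and area-abundance statistics the paper compares against aerial photographs.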
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vermaas, Josh V.; Petridis, Loukas; Qi, Xianghong
The conversion of plant biomass to ethanol via enzymatic cellulose hydrolysis offers a potentially sustainable route to biofuel production. However, the inhibition of enzymatic activity in pretreated biomass by lignin severely limits the efficiency of this process. By performing atomic-detail molecular dynamics simulation of a biomass model containing cellulose, lignin, and cellulases (TrCel7A), we elucidate detailed lignin inhibition mechanisms. We find that lignin binds preferentially both to the elements of cellulose to which the cellulases also preferentially bind (the hydrophobic faces) and also to the specific residues on the cellulose-binding module of the cellulase that are critical for cellulose binding of TrCel7A (Y466, Y492, and Y493). In conclusion, lignin thus binds exactly where for industrial purposes it is least desired, providing a simple explanation of why hydrolysis yields increase with lignin removal.
Digit replacement: A generic map for nonlinear dynamical systems.
García-Morales, Vladimir
2016-09-01
A simple discontinuous map is proposed as a generic model for nonlinear dynamical systems. The orbit of the map admits exact solutions for wide regions in parameter space and the method employed (digit manipulation) allows the mathematical design of useful signals, such as regular or aperiodic oscillations with specific waveforms, the construction of complex attractors with nontrivial properties as well as the coexistence of different basins of attraction in phase space with different qualitative properties. A detailed analysis of the dynamical behavior of the map suggests how the latter can be used in the modeling of complex nonlinear dynamics including, e.g., aperiodic nonchaotic attractors and the hierarchical deposition of grains of different sizes on a surface.
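The paper's specific map is not reproduced here; as a hedged illustration of the general idea of dynamics defined by digit manipulation, the textbook example is the base-b shift map x → frac(b·x), which simply deletes the leading digit of the base-b expansion of x.

```python
def to_digits(x, b=10, n=12):
    """First n base-b digits of x in [0, 1)."""
    out = []
    for _ in range(n):
        x *= b
        d = int(x)
        out.append(d)
        x -= d
    return out

def shift_map_digits(digits):
    """One iteration of the base-b shift map: drop the leading digit."""
    return digits[1:]

digits = to_digits(0.125, b=10, n=6)
orbit_step = shift_map_digits(digits)
```

Working directly on the digit string, as in this toy example, is what makes exact orbit solutions and the deliberate design of waveforms possible for maps of this kind.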
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lappi, T.; Schenke, B.; Schlichting, S.
Here we examine the origins of azimuthal correlations observed in high energy proton-nucleus collisions by considering the simple example of the scattering of uncorrelated partons off color fields in a large nucleus. We demonstrate how the physics of fluctuating color fields in the color glass condensate (CGC) effective theory generates these azimuthal multiparticle correlations and compute the corresponding Fourier coefficients v n within different CGC approximation schemes. We discuss in detail the qualitative and quantitative differences between the different schemes. Lastly, we will show how a recently introduced color field domain model that captures key features of the observed azimuthal correlations can be understood in the CGC effective theory as a model of non-Gaussian correlations in the target nucleus.
Molecular clouds and galactic spiral structure
NASA Technical Reports Server (NTRS)
Dame, T. M.
1984-01-01
Galactic CO line emission at 115 GHz was surveyed in order to study the distribution of molecular clouds in the inner galaxy. Comparison of this survey with similar H I data reveals a detailed correlation with the most intense 21 cm features. To each of the classical 21 cm H I spiral arms of the inner galaxy there corresponds a CO molecular arm which is generally more clearly defined and of higher contrast. A simple model is devised for the galactic distribution of molecular clouds. The modeling results suggest that molecular clouds are essentially transient objects, existing for 15 to 40 million years after their formation in a spiral arm, and are largely confined to spiral features about 300 pc wide.
Thermodynamics of Resonant Scalars in AdS/CFT and implications for QCD
NASA Astrophysics Data System (ADS)
Megías, Eugenio; Valle, Manuel
2016-11-01
We explore the thermodynamics of a simple 5D Einstein-dilaton gravity model with a massive scalar field, with asymptotically AdS behavior in the UV. The holographic renormalization is addressed in detail, and analytical results are obtained at high temperatures. We study the power corrections predicted by the model, and compare with lattice data in the deconfined phase of gluodynamics. Finally, we discuss the role played by the conformal anomaly for integer values of the dimension of the condensate dual to the scalar field. Talk given by E. Megías at QCD@Work: International Workshop on QCD, 27-30 June 2016, Martina Franca, Italy.
NASA Technical Reports Server (NTRS)
Henderson, R. A.; Schrag, R. L.
1986-01-01
A summary is presented of the modeling of the electrical system aspects of a coil and metal target configuration resembling a practical electro-impulse deicing (EIDI) installation, together with a simple circuit for providing energy to the coil. The model was developed in sufficient theoretical detail to allow the generation of computer algorithms for the current in the coil, the magnetic induction on both surfaces of the target, the force between the coil and target, and the impulse delivered to the target. These algorithms were applied to a specific prototype EIDI test system for which the current, magnetic fields near the target surfaces, and impulse were previously measured.
NASA Technical Reports Server (NTRS)
Sekanina, Zdenek
1991-01-01
One of the more attractive among the plausible scenarios for the major emission event recently observed on Comet Halley at a heliocentric distance of 14.3 AU is activation of a source of ejecta driven by an icy substance much more volatile than water. As a prerequisite for the forthcoming detailed analysis of the imaging observations of this event, a simple model is proposed that yields the sublimation rate versus time at any location on the surface of a rotating cometary nucleus for two candidate ices: carbon monoxide and carbon dioxide. The model's variable parameters are the comet's heliocentric distance r and the Sun's instantaneous zenith angle z.
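A sublimation model of this kind typically rests on a surface energy balance: absorbed sunlight equals thermal re-radiation plus the latent heat carried off by the sublimation flux. The sketch below solves that balance by bisection for CO ice; the vapor-pressure fit, latent heat, albedo, and emissivity are illustrative assumptions, not the paper's calibrated values.

```python
import math

SOLAR = 1361.0        # solar constant at 1 AU, W/m^2
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/m^2/K^4
KB = 1.381e-23        # Boltzmann constant, J/K
ALBEDO, EMISS = 0.04, 0.9          # assumed surface properties
M_CO = 28.0 * 1.661e-27            # CO molecular mass, kg
L_SUB = 2.7e5                      # latent heat of CO sublimation, J/kg (assumed)

def p_vap(T):
    """Assumed Clausius-Clapeyron fit for CO ice vapor pressure, Pa."""
    return 1.26e9 * math.exp(-764.0 / T)

def z_sub(T):
    """Hertz-Knudsen sublimation flux, molecules per m^2 per s."""
    return p_vap(T) / math.sqrt(2.0 * math.pi * M_CO * KB * T)

def residual(T, r_au, zen):
    """Energy balance: absorbed - radiated - latent; zero at equilibrium."""
    absorbed = (1 - ALBEDO) * SOLAR * max(math.cos(zen), 0.0) / r_au**2
    return absorbed - EMISS * SIGMA * T**4 - L_SUB * M_CO * z_sub(T)

def surface_temp(r_au, zen, lo=5.0, hi=300.0):
    """Bisection on the monotone energy balance residual."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if residual(mid, r_au, zen) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

T = surface_temp(14.3, 0.0)        # subsolar point at 14.3 AU
```

Sweeping the zenith angle z over a rotation period at fixed r then gives the sublimation rate versus time at any surface location, as the abstract describes.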
Coagulation-Fragmentation Model for Animal Group-Size Statistics
NASA Astrophysics Data System (ADS)
Degond, Pierre; Liu, Jian-Guo; Pego, Robert L.
2017-04-01
We study coagulation-fragmentation equations inspired by a simple model proposed in fisheries science to explain data for the size distribution of schools of pelagic fish. Although the equations lack detailed balance and admit no H-theorem, we are able to develop a rather complete description of equilibrium profiles and large-time behavior, based on recent developments in complex function theory for Bernstein and Pick functions. In the large-population continuum limit, a scaling-invariant regime is reached in which all equilibria are determined by a single scaling profile. This universal profile exhibits power-law behavior crossing over from exponent -2/3 for small size to -3/2 for large size, with an exponential cutoff.
Complex Geometric Models of Diffusion and Relaxation in Healthy and Damaged White Matter
Farrell, Jonathan A.D.; Smith, Seth A.; Reich, Daniel S.; Calabresi, Peter A.; van Zijl, Peter C.M.
2010-01-01
Which aspects of tissue microstructure affect diffusion weighted MRI signals? Prior models, many of which use Monte-Carlo simulations, have focused on relatively simple models of the cellular microenvironment and have not considered important anatomic details. With the advent of higher-order analysis models for diffusion imaging, such as high-angular-resolution diffusion imaging (HARDI), more realistic models are necessary. This paper presents and evaluates the reproducibility of simulations of diffusion in complex geometries. Our framework is quantitative, does not require specialized hardware, is easily implemented with little programming experience, and is freely available as open-source software. Models may include compartments with different diffusivities, permeabilities, and T2 time constants using both parametric (e.g., spheres and cylinders) and arbitrary (e.g., mesh-based) geometries. Three-dimensional diffusion displacement-probability functions are mapped with high reproducibility, and thus can be readily used to assess reproducibility of diffusion-derived contrasts. PMID:19739233
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
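The bias mechanism for the simple linear case can be demonstrated in a few lines. This is a hedged sketch with hypothetical numbers, not the study's data: regressing runoff on noisy rainfall measurements attenuates the estimated slope toward zero by the classical errors-in-variables factor var(x) / (var(x) + var(error)).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
true_rain = rng.gamma(shape=2.0, scale=10.0, size=n)      # "true" rainfall
runoff = 0.6 * true_rain + rng.normal(0, 2.0, size=n)     # linear runoff model
measured = true_rain + rng.normal(0, 8.0, size=n)         # noisy gauge data

# OLS slope of runoff on the *measured* (error-contaminated) input.
slope = np.cov(measured, runoff)[0, 1] / np.var(measured)

# Classical attenuation factor predicted by errors-in-variables theory.
attenuation = np.var(true_rain) / (np.var(true_rain) + 8.0**2)
```

The fitted slope lands near 0.6 times the attenuation factor rather than the true 0.6, which is the parameter-estimation bias the abstract describes for least squares fits to observed runoff.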
Dynamic self-assembly of charged colloidal strings and walls in simple fluid flows.
Abe, Yu; Zhang, Bo; Gordillo, Leonardo; Karim, Alireza Mohammad; Francis, Lorraine F; Cheng, Xiang
2017-02-22
Colloidal particles can self-assemble into various ordered structures in fluid flows that have potential applications in biomedicine, materials synthesis and encryption. These dynamic processes are also of fundamental interest for probing the general principles of self-assembly under non-equilibrium conditions. Here, we report a simple microfluidic experiment, where charged colloidal particles self-assemble into flow-aligned 1D strings with regular particle spacing near a solid boundary. Using high-speed confocal microscopy, we systematically investigate the influence of flow rates, electrostatics and particle polydispersity on the observed string structures. By studying the detailed dynamics of stable flow-driven particle pairs, we quantitatively characterize interparticle interactions. Based on the results, we construct a simple model that explains the intriguing non-equilibrium self-assembly process. Our study shows that the colloidal strings arise from a delicate balance between attractive hydrodynamic coupling and repulsive electrostatic interaction between particles. Finally, we demonstrate that, with the assistance of transverse electric fields, a similar mechanism also leads to the formation of 2D colloidal walls.
Chianese, Giuseppina; Persico, Marco; Yang, Fan; Lin, Hou-Wen; Guo, Yue-Wei; Basilico, Nicoletta; Parapini, Silvia; Taramelli, Donatella; Taglialatela-Scafati, Orazio; Fattorusso, Caterina
2014-09-01
Chemical investigation of the organic extract obtained from the sponge Plakortis simplex collected in the South China Sea afforded five new polyketide endoperoxides (2 and 4-7), along with two known analogues (1 and 3). The stereostructures of these metabolites have been deduced on the basis of spectroscopic analysis and chemical conversion. The isolated endoperoxide derivatives have been tested for their in vitro antimalarial activity against Plasmodium falciparum strains, showing IC50 values in the low micromolar range. The structure-activity relationships were analyzed by means of a detailed computational investigation and rationalized in the light of the mechanism of action proposed for this class of simple antimalarials. The relative orientation of the atoms involved in the putative radical generation and transfer reaction was demonstrated to have a great impact on the antimalarial activity. The resulting 3D pharmacophoric model can be a useful guide to design simple and effective antimalarial lead compounds belonging to the class of 1,2-dioxanes. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Miller, Christopher J.
2011-01-01
A model reference nonlinear dynamic inversion control law has been developed to provide a baseline controller for research into simple adaptive elements for advanced flight control laws. This controller has been implemented and tested in a hardware-in-the-loop simulation and in flight. The flight results agree well with the simulation predictions and show good handling qualities throughout the tested flight envelope, with some noteworthy deficiencies highlighted both by handling qualities metrics and by pilot comments. Many design choices and implementation details reflect the requirements placed on the system by the nonlinear flight environment and the desire to keep the system as simple as possible to easily allow the addition of the adaptive elements. The flight-test results and how they compare to the simulation predictions are discussed, along with how each element affected pilot opinions. Additionally, aspects of the design that performed better than expected are presented, as well as some simple improvements suggested for follow-on work.
Drewniak, Elizabeth I.; Jay, Gregory D.; Fleming, Braden C.; Crisco, Joseph J.
2009-01-01
In attempts to better understand the etiology of osteoarthritis, a debilitating joint disease that results in the degeneration of articular cartilage in synovial joints, researchers have focused on joint tribology, the study of joint friction, lubrication, and wear. Several different approaches have been used to investigate the frictional properties of articular cartilage. In this study, we examined two analysis methods for calculating the coefficient of friction (μ) using a simple pendulum system and BL6 murine knee joints (n=10) as the fulcrum. A Stanton linear decay model (Lin μ) and an exponential model that accounts for viscous damping (Exp μ) were fit to the decaying pendulum oscillations. Root mean square error (RMSE), asymptotic standard error (ASE), and coefficient of variation (CV) were calculated to evaluate the fit and measurement precision of each model. This investigation demonstrated that while Lin μ was more repeatable, based on CV (5.0% for Lin μ; 18% for Exp μ), Exp μ provided a better fitting model, based on RMSE (0.165° for Exp μ; 0.391° for Lin μ) and ASE (0.033 for Exp μ; 0.185 for Lin μ), and had a significantly lower coefficient of friction value (0.022±0.007 for Exp μ; 0.042±0.016 for Lin μ) (p=0.001). This study details the use of a simple pendulum for examining cartilage properties in situ that will have applications investigating cartilage mechanics in a variety of species. The Exp μ model provided a more accurate fit to the experimental data for predicting the frictional properties of intact joints in pendulum systems. PMID:19632680
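The two candidate decay models from the abstract above can be compared on synthetic data. This is a hedged sketch, not the authors' fitting code: amplitudes generated by pure viscous (exponential) damping are fit both by a Stanton-style linear decay and by a log-linear exponential fit, and the exponential model yields the smaller RMSE, mirroring the comparison reported in the study.

```python
import math

def fit_linear(ns, amps):
    # Least-squares fit of amp = a - b*n (Stanton-style linear decay).
    n = len(ns)
    mx = sum(ns) / n
    my = sum(amps) / n
    b = -sum((x - mx) * (y - my) for x, y in zip(ns, amps)) / \
        sum((x - mx) ** 2 for x in ns)
    a = my + b * mx
    return a, b

def fit_exponential(ns, amps):
    # Log-linear fit of amp = A*exp(-lam*n) (viscous-damping-style decay).
    logs = [math.log(a) for a in amps]
    log_A, lam = fit_linear(ns, logs)
    return math.exp(log_A), lam

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# Synthetic oscillation amplitudes dominated by viscous damping
# (hypothetical values, degrees).
ns = list(range(20))
amps = [10.0 * math.exp(-0.15 * k) for k in ns]

a, b = fit_linear(ns, amps)
A, lam = fit_exponential(ns, amps)
err_lin = rmse([a - b * k for k in ns], amps)
err_exp = rmse([A * math.exp(-lam * k) for k in ns], amps)
print(err_lin, err_exp)  # the exponential model fits these data better
```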
Design and Construction of Simple, Nitrogen-Laser-Pumped, Tunable Dye Lasers
ERIC Educational Resources Information Center
Hilborn, Robert C.
1978-01-01
The basic physical principles of dye lasers are discussed and used to analyze the design and operation of tunable dye lasers pumped by pulsed nitrogen lasers. Details of the design and construction of these dye lasers are presented. Some simple demonstration experiments are described. (BB)
NASA Astrophysics Data System (ADS)
Obara, Shin'ya
Plant shoot configurations evolve so that maximum sunlight may be obtained. The objective of this study is to develop a compact light-condensing system mimicking a plant shoot configuration that is applicable to a light source over a large area. In this paper, the relationship between the position of a light source (the sun) and the rate at which light is absorbed by each leaf was investigated in detail for plant shoot models of a dogwood (simple leaf) and a ginkgo tree (lobed leaf). The rate of light quanta received by each leaf model is passed to an analysis program that uses cross entropy (CE). The analyses showed that the amount of light received by the plant-shoot light-condensing system peaked during February (vernal equinox) and October (autumnal equinox). Similarly, the rate of light quanta received by each leaf was measured with the CE. The results show that the plant-shoot light-condensing system that maximizes the amount of light received exhibits differences in the light received by each leaf. Furthermore, the light-condensing characteristics of the Ginkgo biloba model are better than those of the dogwood model. The light-condensing characteristics of a leaf are influenced by leaf size, lobing, shape, and branch length.
Microscopic motion of particles flowing through a porous medium
NASA Astrophysics Data System (ADS)
Lee, Jysoo; Koplik, Joel
1999-01-01
Stokesian dynamics simulations are used to study the microscopic motion of particles suspended in fluids passing through porous media. Model porous media with fixed spherical particles are constructed, and mobile particles move through this fixed bed under the action of an ambient velocity field. The pore-scale motion of individual suspended particles at pore junctions is first considered. The relative particle flux into the different possible directions exiting a single pore, for two- and three-dimensional model porous media, is found to approximately equal the corresponding fractional channel width or area. Next, the waiting time distribution for particles that are delayed in a junction due to a stagnation point caused by a flow bifurcation is considered. The waiting times are found to be controlled by two-particle interactions, and the distributions take the same form in model porous media as in two-particle systems. A simple theoretical estimate of the waiting time is consistent with the simulations. It is found that perturbing such a slow-moving particle by another nearby one leads to rather complicated behavior. Finally, the stability of geometrically trapped particles is studied. For simple model traps, it is found that particles passing nearby can "relaunch" the trapped particle through its hydrodynamic interaction, although the conditions for relaunching depend sensitively on the details of the trap and its surroundings.
A systematic description of shocks in gamma-ray bursts - I. Formulation
NASA Astrophysics Data System (ADS)
Ziaeepour, Houri
2009-07-01
Since the suggestion of relativistic shocks as the origin of gamma-ray bursts (GRBs) in the early 1990s, the mathematical formulation of this process has stayed at a phenomenological level. One of the reasons for the slow development of theoretical works has been the simple power-law behaviour of the afterglows hours or days after the prompt gamma-ray emission. It was believed that they could be explained with these formulations. Nowadays, with the launch of the Swift satellite and implementation of robotic ground follow-ups, GRBs and their afterglow can be observed at multi-wavelengths from a few tens of seconds after trigger onwards. These observations have led to the discovery of features unexplainable by the simple formulation of the shocks and emission processes used up to now. Some of these features can be inherent in the nature and activities of the GRBs' central engines which are not yet well understood. On the other hand, the devil is in the detail and others may be explained with a more detailed formulation of these phenomena and without ad hoc addition of new processes. Such a formulation is the goal of this work. We present a consistent formulation of the kinematics and dynamics of the collision between two spherical relativistic shells, their energy dissipation and their coalescence. It can be applied to both internal and external shocks. Notably, we propose two phenomenological models for the evolution of the emitting region during the collision. One of these models is more suitable for the prompt/internal shocks and late external shocks, and the other for the afterglow/external collisions as well as the onset of internal shocks. We calculate a number of observables such as flux, lag between energy bands and hardness ratios. One of our aims has been a formulation complex enough to include the essential processes, but simple enough such that the data can be directly compared with the theory to extract the value and evolution of physical quantities. 
To accomplish this goal, we also suggest a procedure for extracting parameters of the model from data. In a companion paper, we numerically calculate the evolution of some simulated models and compare their features with the properties of the observed GRBs.
NASA Astrophysics Data System (ADS)
Reid, Lucas; Kittlaus, Steffen; Scherer, Ulrike
2015-04-01
For large areas without highly detailed data, the empirical Universal Soil Loss Equation (USLE) is widely used to quantify soil loss. The difficulty, however, usually lies in quantifying the actual sediment influx into the rivers. As the USLE provides long-term mean soil loss rates, it is often combined with spatially lumped models to estimate the sediment delivery ratio (SDR). But spatially lumped approaches become problematic in large catchment areas whose geographical properties vary widely. In this study we developed a simple but spatially distributed approach to quantify the sediment delivery ratio by considering the characteristics of the flow paths in the catchments. The sediment delivery ratio was determined using an empirical approach that considers the slope, morphology and land use properties along the flow path as an estimate of the travel time of the eroded particles. The model was tested against suspended solids measurements in selected sub-basins of the River Inn catchment area in Germany and Austria, ranging from the high alpine south to the Molasse basin in the north.
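The flow-path idea in the abstract above can be sketched in a few lines. All coefficients and the exponential SDR-travel-time form below are assumptions for illustration, not the calibrated model: travel time is accumulated along (length, slope, land use) segments, the delivery ratio decays with it, and so a short, steep cropland path delivers a larger fraction of eroded sediment than a long, flat, forested one.

```python
import math

# Hypothetical velocity coefficients by land use (illustrative values only).
VELOCITY_COEFF = {"forest": 0.3, "pasture": 0.6, "cropland": 1.0}

def travel_time(path):
    # path: list of (length_m, slope, land_use) segments along the flow path.
    t = 0.0
    for length, slope, use in path:
        # Simple v ~ k*sqrt(S) velocity law, an assumed form.
        v = VELOCITY_COEFF[use] * math.sqrt(max(slope, 1e-4))
        t += length / v
    return t

def sdr(path, beta=1e-5):
    # Sediment delivery ratio assumed to decay exponentially with travel time.
    return math.exp(-beta * travel_time(path))

steep_short = [(100.0, 0.2, "cropland")]
flat_long = [(500.0, 0.01, "forest"), (1000.0, 0.005, "pasture")]
print(sdr(steep_short), sdr(flat_long))  # the steep short path delivers more
```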
Deep-down ionization of protoplanetary discs
NASA Astrophysics Data System (ADS)
Glassgold, A. E.; Lizano, S.; Galli, D.
2017-12-01
The possible occurrence of dead zones in protoplanetary discs subject to the magneto-rotational instability highlights the importance of disc ionization. We present a closed-form theory for the deep-down ionization by X-rays at depths below the disc surface dominated by far-ultraviolet radiation. Simple analytic solutions are given for the major ion classes, electrons, atomic ions, molecular ions and negatively charged grains. In addition to the formation of molecular ions by X-ray ionization of H2 and their destruction by dissociative recombination, several key processes that operate in this region are included, e.g. charge exchange of molecular ions and neutral atoms and destruction of ions by grains. Over much of the inner disc, the vertical decrease in ionization with depth into the disc is described by simple power laws, which can easily be included in more detailed modelling of magnetized discs. The new ionization theory is used to illustrate the non-ideal magnetohydrodynamic effects of Ohmic, Hall and Ambipolar diffusion for a magnetic model of a T Tauri star disc using the appropriate Elsasser numbers.
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology
Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.
2016-01-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915
Statefinder diagnostic for modified Chaplygin gas cosmology in f(R,T) gravity with particle creation
NASA Astrophysics Data System (ADS)
Singh, J. K.; Nagpal, Ritika; Pacif, S. K. J.
In this paper, we have studied a flat Friedmann-Lemaître-Robertson-Walker (FLRW) model with modified Chaplygin gas (MCG) having equation of state p_m = Aρ − B/ρ^γ, where 0 ≤ A ≤ 1, 0 ≤ γ ≤ 1 and B is any positive constant, in f(R,T) gravity with particle creation. We have considered a simple parametrization of the Hubble parameter H in order to solve the field equations, and discussed the time evolution of different cosmological parameters for some of the obtained models, which show distinctive behavior of the scale factor. We have also discussed the statefinder diagnostic pair {r,s} that characterizes the evolution of the obtained models, and explored their stability. The physical consequences of the models and their kinematic behaviors have also been scrutinized here in some detail.
Evaluation of the Williams-type model for barley yields in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Barnett, T. L. (Principal Investigator)
1981-01-01
The Williams-type yield model is based on multiple regression analysis of historical time series data at CRD level pooled to regional level (groups of similar CRDs). Basic variables considered in the analysis include USDA yield, monthly mean temperature, monthly precipitation, soil texture and topographic information, and variables derived from these. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-1979) demonstrate that biases are small and performance based on root mean square error appears to be acceptable for the intended AgRISTARS large-area applications. The model is objective, adequate, timely, simple, and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vriens, L.; Smeets, A.H.M.
1980-09-01
For electron-induced ionization, excitation, and de-excitation, mainly from excited atomic states, a detailed analysis is presented of the dependence of the cross sections and rate coefficients on electron energy and temperature, and on atomic parameters. A wide energy range is covered, including sudden as well as adiabatic collisions. By combining the available experimental and theoretical information, a set of simple analytical formulas is constructed for the cross sections and rate coefficients of the processes mentioned, for the total depopulation, and for three-body recombination. The formulas account for large deviations from classical and semiclassical scaling, as found for excitation. They agree with experimental data and with the theories in their respective ranges of validity, but have a wider range of validity than the separate theories. The simple analytical form further facilitates the application in plasma modeling.
Computational assignment of redox states to Coulomb blockade diamonds.
Olsen, Stine T; Arcisauskaite, Vaida; Hansen, Thorsten; Kongsted, Jacob; Mikkelsen, Kurt V
2014-09-07
With the advent of molecular transistors, electrochemistry can now be studied at the single-molecule level. Experimentally, the redox chemistry of the molecule manifests itself as features in the observed Coulomb blockade diamonds. We present a simple theoretical method for explicit construction of the Coulomb blockade diamonds of a molecule. A combined quantum mechanical/molecular mechanical method is invoked to calculate redox energies and polarizabilities of the molecules, including the screening effect of the metal leads. This direct approach circumvents the need for explicit modelling of the gate electrode. From the calculated parameters the Coulomb blockade diamonds are constructed using simple theory. We offer a theoretical tool for assignment of Coulomb blockade diamonds to specific redox states in particular, and a study of chemical details in the diamonds in general. With the ongoing experimental developments in molecular transistor experiments, our tool could find use in molecular electronics, electrochemistry, and electrocatalysis.
The cognitive domain of a glider in the game of life.
Beer, Randall D
2014-01-01
This article examines in some technical detail the application of Maturana and Varela's biology of cognition to a simple concrete model: a glider in the game of Life cellular automaton. By adopting an autopoietic perspective on a glider, the set of possible perturbations to it can be divided into destructive and nondestructive subsets. From a glider's reaction to each nondestructive perturbation, its cognitive domain is then mapped. In addition, the structure of a glider's possible knowledge of its immediate environment, and the way in which that knowledge is grounded in its constitution, are fully described. The notion of structural coupling is then explored by characterizing the paths of mutual perturbation that a glider and its environment can undergo. Finally, a simple example of a communicative interaction between two gliders is given. The article concludes with a discussion of the potential implications of this analysis for the enactive approach to cognition.
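The glider dynamics underlying this analysis are easy to reproduce. Below is a minimal sketch of Conway's Life on an unbounded grid of live cells, verifying the standard fact (which the article's autopoietic analysis builds on) that the glider reproduces itself displaced one cell diagonally every four generations.

```python
from itertools import product

def step(live):
    # One generation of Conway's Life on an unbounded grid:
    # count live neighbors of every cell adjacent to a live cell.
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                counts[key] = counts.get(key, 0) + 1
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# The standard glider, as (col, row) coordinates of live cells.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

# After four generations the glider reappears translated one cell diagonally.
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(state == shifted)
```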
Universality classes of fluctuation dynamics in hierarchical complex systems
NASA Astrophysics Data System (ADS)
Macêdo, A. M. S.; González, Iván R. Roa; Salazar, D. S. P.; Vasconcelos, G. L.
2017-03-01
A unified approach is proposed to describe the statistics of the short-time dynamics of multiscale complex systems. The probability density function of the relevant time series (signal) is represented as a statistical superposition of a large time-scale distribution weighted by the distribution of certain internal variables that characterize the slowly changing background. The dynamics of the background is formulated as a hierarchical stochastic model whose form is derived from simple physical constraints, which in turn restrict the dynamics to only two possible classes. The probability distributions of both the signal and the background have simple representations in terms of Meijer G functions. The two universality classes for the background dynamics manifest themselves in the signal distribution as two types of tails: power law and stretched exponential, respectively. A detailed analysis of empirical data from classical turbulence and financial markets shows excellent agreement with the theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vega-Carrillo, Hector Rene; Manzanares-Acuna, Eduardo; Hernandez-Davila, Victor Martin
131I is widely used in the diagnosis and treatment of patients. If the patient is pregnant, the 131I present in the thyroid becomes a source of constant exposure to other organs and to the fetus. In this study the absorbed dose in the uterus of a 3-months-pregnant woman with 131I in her thyroid gland has been calculated. The dose was determined using Monte Carlo methods, in which a detailed model of the woman was developed. The dose was also calculated using a simple procedure that was refined to include the photons' attenuation in the woman's organs and body. To verify these results, an experiment was carried out using a neck phantom with 131I. Comparing the results, it was found that the simple calculation tends to overestimate the absorbed dose; after correcting for photon attenuation in the body and organs, the dose is 0.14 times the Monte Carlo estimation.
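The effect of the attenuation correction mentioned above can be illustrated with a one-parameter model. Both the attenuation coefficient and the source-target distance below are hypothetical placeholders, not the study's values; the point is only that adding exponential attenuation to an inverse-square estimate reduces the simple (uncorrected) dose, in the direction the abstract reports.

```python
import math

def relative_dose(distance_cm, mu_cm=0.11, attenuate=True):
    # Relative photon fluence from a point source (thyroid) at a target
    # (uterus): inverse-square geometry, optionally with exponential
    # attenuation through overlying tissue. mu_cm is an illustrative
    # soft-tissue value for ~364 keV photons, not a measured one.
    geom = 1.0 / (distance_cm ** 2)
    if attenuate:
        return geom * math.exp(-mu_cm * distance_cm)
    return geom

d = 40.0  # rough thyroid-to-uterus distance in cm, hypothetical
simple = relative_dose(d, attenuate=False)
corrected = relative_dose(d, attenuate=True)
print(corrected / simple)  # attenuation reduces the simple estimate
```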
van Rhee, Henk; Hak, Tony
2017-01-01
We present a new tool for meta‐analysis, Meta‐Essentials, which is free of charge and easy to use. In this paper, we introduce the tool and compare its features to other tools for meta‐analysis. We also provide detailed information on the validation of the tool. Although free of charge and simple, Meta‐Essentials automatically calculates effect sizes from a wide range of statistics and can be used for a wide range of meta‐analysis applications, including subgroup analysis, moderator analysis, and publication bias analyses. The confidence interval of the overall effect is automatically based on the Knapp‐Hartung adjustment of the DerSimonian‐Laird estimator. However, more advanced meta‐analysis methods such as meta‐analytical structural equation modelling and meta‐regression with multiple covariates are not available. In summary, Meta‐Essentials may prove a valuable resource for meta‐analysts, including researchers, teachers, and students. PMID:28801932
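The DerSimonian-Laird estimator with the Knapp-Hartung variance mentioned in the abstract can be sketched directly. The input data below are invented for illustration, and the t quantile on k-1 degrees of freedom needed to turn `kh_var` into a confidence interval is left to a statistics table; the sketch shows only the core computation.

```python
def dersimonian_laird(effects, variances):
    # Fixed-effect weights and Cochran's Q statistic.
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    k = len(effects)
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)      # DL between-study variance
    # Random-effects pooled estimate.
    wr = [1.0 / (v + tau2) for v in variances]
    swr = sum(wr)
    pooled = sum(wi * e for wi, e in zip(wr, effects)) / swr
    # Knapp-Hartung variance of the pooled effect (pair with a t quantile
    # on k-1 degrees of freedom to build the confidence interval).
    kh_var = sum(wi * (e - pooled) ** 2
                 for wi, e in zip(wr, effects)) / ((k - 1) * swr)
    return pooled, tau2, kh_var

# Invented study effects and within-study variances.
effects = [0.30, 0.10, 0.45, 0.20, 0.35]
variances = [0.01, 0.02, 0.015, 0.025, 0.01]
pooled, tau2, kh_var = dersimonian_laird(effects, variances)
print(pooled, tau2, kh_var)
```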
Mobile Modelling for Crowdsourcing Building Interior Data
NASA Astrophysics Data System (ADS)
Rosser, J.; Morley, J.; Jackson, M.
2012-06-01
Indoor spatial data forms an important foundation to many ubiquitous computing applications. It gives context to users operating location-based applications, provides an important source of documentation of buildings and can be of value to computer systems where an understanding of environment is required. Unlike external geographic spaces, no centralised body or agency is charged with collecting or maintaining such information. Widespread deployment of mobile devices provides a potential tool that would allow rapid model capture and update by a building's users. Here we introduce some of the issues involved in volunteering building interior data and outline a simple mobile tool for capture of indoor models. The nature of indoor data is inherently private; however in-depth analysis of this issue and legal considerations are not discussed in detail here.
Plate and butt-weld stresses beyond elastic limit, material and structural modeling
NASA Technical Reports Server (NTRS)
Verderaime, V.
1991-01-01
Ultimate safety factors of high-performance structures depend on stress behavior beyond the elastic limit, a region not too well understood. An analytical modeling approach was developed to gain fundamental insights into the inelastic responses of simple structural elements. Nonlinear material properties were expressed in engineering stress and strain variables and combined with strength-of-materials stress and strain equations, similar to a numerical piecewise-linear method. Integrations are continuous, which allows for more detailed solutions. Results of interest include the classical combined axial tension and bending load model and the conversion of strain-gauge readings to stress beyond the elastic limit. Material discontinuity stress factors in butt-welds were derived. This is a working-type document with analytical methods and results applicable to all industries of high-reliability structures.
A Simple Method to Estimate Photosynthetic Radiation Use Efficiency of Canopies
ROSATI, A.; METCALF, S. G.; LAMPINEN, B. D.
2004-01-01
• Background and Aims Photosynthetic radiation use efficiency (PhRUE) over the course of a day has been shown to be constant for leaves throughout a general canopy where nitrogen content (and thus photosynthetic properties) of leaves is distributed in relation to the light gradient. It has been suggested that this daily PhRUE can be calculated simply from the photosynthetic properties of a leaf at the top of the canopy and from the PAR incident on the canopy, which can be obtained from weather‐station data. The objective of this study was to investigate whether this simple method allows estimation of PhRUE of different crops and with different daily incident PAR, and also during the growing season. • Methods The PhRUE calculated with this simple method was compared with that calculated with a more detailed model, for different days in May, June and July in California, on almond (Prunus dulcis) and walnut (Juglans regia) trees. Daily net photosynthesis of 50 individual leaves was calculated as the daylight integral of the instantaneous photosynthesis. The latter was estimated for each leaf from its photosynthetic response to PAR and from the PAR incident on the leaf during the day. • Key Results Daily photosynthesis of individual leaves of both species was linearly related to the daily PAR incident on the leaves (which implies constant PhRUE throughout the canopy), but the slope (i.e. the PhRUE) differed between the species, over the growing season due to changes in photosynthetic properties of the leaves, and with differences in daily incident PAR. When PhRUE was estimated from the photosynthetic light response curve of a leaf at the top of the canopy and from the incident radiation above the canopy, obtained from weather‐station data, the values were within 5 % of those calculated with the more detailed model, except in five out of 34 cases. 
• Conclusions The simple method of estimating PhRUE is valuable as it simplifies calculation of canopy photosynthesis to a multiplication between the PAR intercepted by the canopy, which can be obtained with remote sensing, and the PhRUE calculated from incident PAR, obtained from standard weather‐station data, and from the photosynthetic properties of leaves at the top of the canopy. The latter properties are the sole crop parameters needed. While being simple, this method describes the differences in PhRUE related to crop, season, nutrient status and daily incident PAR. PMID:15044212
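The core claim above, that distributing photosynthetic capacity in proportion to each leaf's light makes PhRUE constant throughout the canopy, can be checked numerically. This is a hypothetical sketch (rectangular-hyperbola light response, sinusoidal diurnal PAR course, invented parameter values), not the paper's model: when each leaf's Pmax scales with its own peak PAR, the ratio of daily photosynthesis to daily incident PAR comes out identical for every leaf.

```python
import math

def leaf_daily_totals(par_peak, alpha=0.05, daylength=14.0, steps=500):
    # Daylight integrals of PAR and of a rectangular-hyperbola light
    # response whose capacity Pmax is acclimated to the leaf's own peak
    # PAR (a stand-in for optimal nitrogen allocation); all parameter
    # values are hypothetical.
    pmax = 0.02 * par_peak
    total_p, total_par = 0.0, 0.0
    dt = daylength / steps
    for k in range(steps):
        t = (k + 0.5) * dt
        par = par_peak * math.sin(math.pi * t / daylength)  # diurnal course
        p = pmax * alpha * par / (alpha * par + pmax)
        total_p += p * dt
        total_par += par * dt
    return total_par, total_p

# Leaves deeper in the canopy see lower peak PAR (umol m-2 s-1, invented).
pairs = [leaf_daily_totals(p) for p in (200, 600, 1000, 1600, 2000)]
slopes = [p / q for q, p in pairs]  # PhRUE = daily photosynthesis / daily PAR
print(slopes)  # essentially identical for all leaves: constant PhRUE
```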
Comparison of CEAS and Williams-type models for spring wheat yields in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Barnett, T. L. (Principal Investigator)
1982-01-01
The CEAS and Williams-type yield models are both based on multiple regression analysis of historical time series data at CRD level. The CEAS model develops a separate relation for each CRD; the Williams-type model pools CRD data to regional level (groups of similar CRDs). Basic variables considered in the analyses are USDA yield, monthly mean temperature, monthly precipitation, and variables derived from these. The Williams-type model also used soil texture and topographic information. Technological trend is represented in both by piecewise linear functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test of each model (1970-1979) demonstrate that the models are very similar in performance in all respects. Both models are about equally objective, adequate, timely, simple, and inexpensive. Both consider scientific knowledge on a broad scale but not in detail. Neither provides a good current measure of modeled yield reliability. The CEAS model is considered very slightly preferable for AgRISTARS applications.
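The ten-year bootstrap test used to evaluate both yield models amounts to withholding each year in turn, re-fitting, and predicting the withheld year. Below is a minimal sketch with a hypothetical linear trend model and invented yield data, not either model's actual variable set:

```python
def fit_trend(years, yields):
    # Least-squares linear trend: yield = a + b * year.
    n = len(years)
    mx = sum(years) / n
    my = sum(yields) / n
    b = sum((x - mx) * (y - my) for x, y in zip(years, yields)) / \
        sum((x - mx) ** 2 for x in years)
    return my - b * mx, b

def bootstrap_rmse(years, yields):
    # Withhold each test year, re-fit on the rest, predict the withheld
    # year, and summarize the out-of-sample errors as an RMSE.
    errs = []
    for i, (yr, obs) in enumerate(zip(years, yields)):
        train_years = years[:i] + years[i + 1:]
        train_yields = yields[:i] + yields[i + 1:]
        a, b = fit_trend(train_years, train_yields)
        errs.append((a + b * yr - obs) ** 2)
    return (sum(errs) / len(errs)) ** 0.5

years = list(range(1970, 1980))  # the abstracts' 1970-1979 test window
# Invented yields (bu/acre) with an upward technological trend.
yields = [20.1, 21.0, 20.5, 22.3, 21.8, 23.0, 22.5, 23.9, 24.1, 24.8]
rmse_loo = bootstrap_rmse(years, yields)
print(rmse_loo)
```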
An implicit divalent counterion force field for RNA molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henke, Paul S.; Mak, Chi H., E-mail: cmak@usc.edu; Center of Applied Mathematical Sciences, University of Southern California, Los Angeles, California 90089
How to properly account for polyvalent counterions in a molecular dynamics simulation of polyelectrolytes such as nucleic acids remains an open question. Not only do counterions such as Mg2+ screen electrostatic interactions, they also produce attractive intrachain interactions that stabilize secondary and tertiary structures. Here, we show how a simple force field derived from a recently reported implicit counterion model can be integrated into a molecular dynamics simulation for RNAs to realistically reproduce key structural details of both single-stranded and base-paired RNA constructs. This divalent counterion model is computationally efficient. It works with existing atomistic force fields, or coarse-grained models may be tuned to work with it. We provide optimized parameters for a coarse-grained RNA model that takes advantage of this new counterion force field. Using the new model, we illustrate how the structural flexibility of RNA two-way junctions is modified under different salt conditions.
MODEL CORRELATION STUDY OF A RETRACTABLE BOOM FOR A SOLAR SAIL SPACECRAFT
NASA Technical Reports Server (NTRS)
Adetona, O.; Keel, L. H.; Oakley, J. D.; Kappus, K.; Whorton, M. S.; Kim, Y. K.; Rakpczy, J. M.
2005-01-01
To realize design concepts, predict dynamic behavior, and develop appropriate control strategies for high-performance operation of a solar-sail spacecraft, we developed a simple analytical model that represents the dynamic behavior of spacecraft of various sizes. Since the motion of the vehicle is dominated by the retractable booms that support the structure, our study concentrates on developing and validating a dynamic model of a long retractable boom. Extensive tests with various configurations were conducted on the 30-meter, lightweight, retractable lattice boom at NASA MSFC, which is structurally and dynamically similar to those of a solar-sail spacecraft currently under construction. Experimental data were then compared with the corresponding response of the analytical model. Though mixed results were obtained, the analytical model emulates several key characteristics of the boom. The paper concludes with a detailed discussion of issues observed during the study.
On-line Model Structure Selection for Estimation of Plasma Boundary in a Tokamak
NASA Astrophysics Data System (ADS)
Škvára, Vít; Šmídl, Václav; Urban, Jakub
2015-11-01
Control of the plasma field in the tokamak requires reliable estimation of the plasma boundary. The plasma boundary is given by a complex mathematical model and the only available measurements are responses of induction coils around the plasma. For the purpose of boundary estimation the model can be reduced to simple linear regression with potentially infinitely many elements. The number of elements must be selected manually and this choice significantly influences the resulting shape. In this paper, we investigate the use of formal model structure estimation techniques for the problem. Specifically, we formulate a sparse least squares estimator using the automatic relevance principle. The resulting algorithm is a repetitive evaluation of the least squares problem which could be computed in real time. Performance of the resulting algorithm is illustrated on simulated data and evaluated with respect to a more detailed and computationally costly model FREEBIE.
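The sparse least-squares estimator with an automatic relevance update can be sketched in a few lines. The toy implementation below is our own illustration, not the authors' code: each regression coefficient receives its own prior precision, re-estimated from the coefficient itself, so that irrelevant elements are driven to zero (a simplified fixed point, not the full evidence framework).

```python
import numpy as np

def ard_least_squares(A, y, n_iter=50, prune=1e6):
    """Sparse least squares with per-coefficient relevance weights.

    Each coefficient w_j gets its own prior precision alpha_j; the
    fixed point alpha_j <- 1/w_j**2 shrinks irrelevant coefficients
    toward zero. This is a simplified ARD iteration for illustration.
    """
    alpha = np.ones(A.shape[1])            # per-coefficient precisions
    for _ in range(n_iter):
        # Ridge solve with a diagonal, per-coefficient penalty.
        w = np.linalg.solve(A.T @ A + np.diag(alpha), A.T @ y)
        alpha = np.minimum(1.0 / (w**2 + 1e-12), prune)
    w[alpha >= prune] = 0.0                # elements at the cap are pruned
    return w

# Toy regression: only 2 of 10 candidate regressors are active.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[[2, 7]] = [1.5, -2.0]
y = A @ w_true + 0.01 * rng.normal(size=100)
w_hat = ard_least_squares(A, y)
```

Because each sweep is just a repeated least-squares solve, this kind of iteration is cheap enough for the real-time use the abstract describes.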
A model for seasonal phytoplankton blooms.
Huppert, Amit; Blasius, Bernd; Olinky, Ronen; Stone, Lewi
2005-10-07
We analyse a generic bottom-up nutrient-phytoplankton model to help understand the dynamics of seasonally recurring algae blooms. The deterministic model displays a wide spectrum of dynamical behaviours, from simple cyclical blooms that trigger annually, to irregular chaotic blooms in which both the time between outbreaks and their magnitudes are erratic. Unusually, despite the persistent seasonal forcing, it is extremely difficult to generate blooms that are both annually recurring and chaotic or irregular in amplitude, even though this combination characterizes many real time series. Instead the model has a tendency to 'skip', with outbreaks often being suppressed from one year to the next. This behaviour is studied in detail, and we develop analytical expressions for the model's flow in phase space, yielding insights into the mechanism of bloom recurrence. We also discuss how modifications to the equations through the inclusion of appropriate functional forms can generate more realistic dynamics.
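To give a feel for how seasonal forcing enters such models, here is a deliberately generic nutrient-phytoplankton toy system with a sinusoidally forced nutrient inflow, integrated by forward Euler. The equations and parameter values are illustrative assumptions, not the system analysed by Huppert et al.

```python
import numpy as np

def np_bloom(years=20, dt=1e-3, eps=0.6):
    """Forward-Euler run of a generic seasonally forced
    nutrient (N) / phytoplankton (P) toy model,
        dN/dt = I(t) - a*N*P - e*N,    dP/dt = b*N*P - m*P,
    with sinusoidal nutrient inflow I(t). Equations and parameter
    values are illustrative, not those analysed in the paper.
    """
    a, e, b, m = 2.0, 0.5, 2.0, 1.0
    steps = int(years / dt)
    N, P = 1.0, 0.1
    traj = np.empty(steps)
    for i in range(steps):
        inflow = 1.0 + eps * np.sin(2 * np.pi * i * dt)   # annual forcing
        N, P = (N + dt * (inflow - a * N * P - e * N),
                P + dt * (b * N * P - m * P))
        traj[i] = P
    return traj

P_traj = np_bloom()   # phytoplankton biomass, sampled every dt
```

With these parameters the unforced system has a stable equilibrium, so the forcing produces a regular annual cycle; richer behaviour of the kind described above requires forcing that interacts with the system's own oscillatory time scales.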
Mathematical modeling of spinning elastic bodies for modal analysis.
NASA Technical Reports Server (NTRS)
Likins, P. W.; Barbera, F. J.; Baddeley, V.
1973-01-01
The problem of modal analysis of an elastic appendage on a rotating base is examined to establish the relative advantages of various mathematical models of elastic structures and to extract general inferences concerning the magnitude and character of the influence of spin on the natural frequencies and mode shapes of rotating structures. In realization of the first objective, it is concluded that except for a small class of very special cases the elastic continuum model is devoid of useful results, while for constant nominal spin rate the distributed-mass finite-element model is quite generally tractable, since in the latter case the governing equations are always linear, constant-coefficient, ordinary differential equations. Although with both of these alternatives the details of the formulation generally obscure the essence of the problem and permit very little engineering insight to be gained without extensive computation, this difficulty is not encountered when dealing with simple concentrated mass models.
From individual choice to group decision-making
NASA Astrophysics Data System (ADS)
Galam, Serge; Zucker, Jean-Daniel
2000-12-01
Some universal features are independent of both the social nature of the individuals making the decision and the nature of the decision itself. On this basis a simple magnet-like model is built. Pair interactions are introduced to measure the degree of exchange among individuals while discussing. An external uniform field is included to account for possible pressure from outside. Individual biases with respect to the issue at stake are also included using local random fields. A unique postulate of minimum conflict is assumed. The model is then solved with emphasis on its psycho-sociological implications. Counter-intuitive results are obtained. At this stage no new physical technicality is involved. Instead, the full psycho-sociological implications of the model are drawn, and a few cases are detailed to illustrate them. In addition, several numerical experiments based on our model are shown, giving both insight into the dynamics of the model and suggestions for further research directions.
NASA Astrophysics Data System (ADS)
Hvizdoš, Dávid; Váňa, Martin; Houfek, Karel; Greene, Chris H.; Rescigno, Thomas N.; McCurdy, C. William; Čurík, Roman
2018-02-01
We present a simple two-dimensional model of the indirect dissociative recombination process. The model has one electronic and one nuclear degree of freedom and can be solved to high precision, without making any physically motivated approximations, by employing the exterior complex scaling method together with the finite-element method and a discrete variable representation. The approach is applied to solve a model for dissociative recombination of H2+ in the singlet ungerade channels, and the results serve as a benchmark to test the validity of several physical approximations commonly used in the computational modeling of dissociative recombination for real molecular targets. The second, approximate, set of calculations employs a combination of multichannel quantum defect theory and frame transformation into a basis of Siegert pseudostates. The cross sections computed with the two methods are compared in detail for collision energies from 0 to 2 eV.
Some Applications of the Model of the Partition Points on a One-Dimensional Lattice
NASA Astrophysics Data System (ADS)
Mejdani, R.; Huseini, H.
1996-02-01
We have shown that by using a model of a gas of partition points on a one-dimensional lattice, we can recover results for the saturation curves of enzyme kinetics and for the average domain size, which we had previously obtained using correlated-walk theory or a probabilistic (combinatoric) approach. Using the same model and technique, we have studied the denaturation process, i.e., the breaking, under heat treatment, of the hydrogen bonds connecting the two strands. We have also discussed, without entering into details, the problem of the spread of an infectious disease and the stochastic model of partition points. We think that this model, being simple and mathematically transparent, can be advantageous for other theoretical investigations in chemistry or modern biology. PACS Nos.: 05.50.+q; 05.70.Ce; 64.10.+h; 87.10.+e; 87.15.Rn
Numerical Modeling of Ablation Heat Transfer
NASA Technical Reports Server (NTRS)
Ewing, Mark E.; Laker, Travis S.; Walker, David T.
2013-01-01
A unique numerical method has been developed for solving one-dimensional ablation heat transfer problems. This paper provides a comprehensive description of the method, along with detailed derivations of the governing equations. This methodology supports solutions for traditional ablation modeling including such effects as heat transfer, material decomposition, pyrolysis gas permeation and heat exchange, and thermochemical surface erosion. The numerical scheme utilizes a control-volume approach with a variable grid to account for surface movement. This method directly supports implementation of nontraditional models such as material swelling and mechanical erosion, extending capabilities for modeling complex ablation phenomena. Verifications of the numerical implementation are provided using analytical solutions, code comparisons, and the method of manufactured solutions. These verifications are used to demonstrate solution accuracy and proper error convergence rates. A simple demonstration of a mechanical erosion (spallation) model is also provided to illustrate the unique capabilities of the method.
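The control-volume idea is easiest to see on plain fixed-grid conduction. The sketch below omits the paper's moving grid, material decomposition, and pyrolysis-gas terms, and the function and parameter names are our own.

```python
import numpy as np

def heat_cv_step(T, dx, dt, kappa):
    """One explicit control-volume update for 1-D heat conduction.

    Conductive fluxes are evaluated at control-volume faces and each
    interior cell is updated by its net flux. (The ablation solver
    described above additionally moves the grid with the receding
    surface and adds decomposition and pyrolysis-gas terms, which are
    omitted from this sketch.)
    """
    flux = -kappa * np.diff(T) / dx          # fluxes at the n-1 faces
    T_new = T.copy()
    T_new[1:-1] -= dt / dx * np.diff(flux)   # net flux into each interior cell
    return T_new                             # boundary cells held fixed

# Slab with a hot left wall and cold right wall, marched to steady state.
n, kappa = 21, 1.0
dx = 1.0 / (n - 1)
dt = 0.2 * dx**2 / kappa                     # inside the explicit stability limit
T = np.zeros(n)
T[0] = 1.0
for _ in range(2000):
    T = heat_cv_step(T, dx, dt, kappa)
# T now approximates the linear steady-state profile T(x) = 1 - x
```

Because the update is written in terms of face fluxes and cell volumes, letting the cell sizes vary in time (to track a receding surface) is a local change, which is the property the paper exploits.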
The outflow structure of GW170817 from late-time broad-band observations
NASA Astrophysics Data System (ADS)
Troja, E.; Piro, L.; Ryan, G.; van Eerten, H.; Ricci, R.; Wieringa, M. H.; Lotti, S.; Sakamoto, T.; Cenko, S. B.
2018-07-01
We present our broad-band study of GW170817 from radio to hard X-rays, including NuSTAR and Chandra observations up to 165 d after the merger, and a multimessenger analysis including LIGO constraints. The data are compared with predictions from a wide range of models, providing the first detailed comparison between non-trivial cocoon and jet models. Homogeneous and power-law shaped jets, as well as simple cocoon models, are ruled out by the data, while both a Gaussian-shaped jet and a cocoon with energy injection can describe the current data set for a reasonable range of physical parameters, consistent with the typical values derived from short GRB afterglows. We propose that these models can be unambiguously discriminated by future observations measuring the post-peak behaviour, with Fν ∝ t^(-1.0) for the cocoon and Fν ∝ t^(-2.5) for the jet model.
Histology. Notes for Students of Animal Husbandry.
ERIC Educational Resources Information Center
Price, Charles J.; Reed, Josephine E.
This document approaches the subject of Histology by way of simple independent unicellular organisms through the lower levels of cell organization and specialization to a detailed study of the highly complex tissues of vertebrate animals. Emphasis is placed on structure, but function is explained in some detail. The relationships between tissues…
Mullins, Christina Susanne; Schneider, Björn; Stockhammer, Florian; Krohn, Mathias; Classen, Carl Friedrich; Linnebacher, Michael
2013-01-01
Background Development of clinically relevant tumor model systems for glioblastoma multiforme (GBM) is important for the advancement of basic and translational biology. The high molecular heterogeneity of GBM tumors is well recognized, forming the rationale for the molecular tests required before administration of several of the novel therapeutics rapidly entering the clinic. One model that has gained wide acceptance is the primary cell culture model. The laborious and time-consuming process is rewarded with a relatively high success rate (about 60%). Here we describe and evaluate a very simple cryopreservation procedure for GBM tissue prior to model establishment that considerably reduces the logistic complexity. Methods Twenty-seven GBM samples collected ad hoc were prepared for primary cell culture freshly from surgery (#1) and after cryopreservation (#2). Results Take rates after cryopreservation (59%) were as satisfactory as from fresh tissue (63%; p = 1.000). We did not observe any relevant molecular or phenotypic differences between cell lines established from fresh or vitally frozen tissue. Further, sensitivity towards both standard chemotherapeutic agents (Temozolomide, BCNU and Vincristine) and novel agents like the receptor tyrosine kinase inhibitor Imatinib did not differ. Conclusions Our simple cryopreservation procedure facilitates collection, long-term storage and propagation (modeling) of clinical GBM specimens (potentially also from distant centers) for basic research, (pre-)clinical studies of novel therapies and individual response prediction. PMID:23951083
Nishiura, Hiroshi
2011-02-16
Real-time forecasting of epidemics, especially forecasting based on a likelihood approach, is understudied. This study aimed to develop a simple method that can be used for real-time epidemic forecasting. A discrete-time stochastic model, accounting for demographic stochasticity and conditional measurement, was developed and applied as a case study to the weekly incidence of pandemic influenza (H1N1-2009) in Japan. By imposing a branching process approximation and by assuming linear growth of cases within each reporting interval, the epidemic curve is predicted using only two parameters. The uncertainty bounds of the forecasts are computed using chains of conditional offspring distributions. The quality of the forecasts made before the epidemic peak appears largely to depend on obtaining valid parameter estimates. The forecasts of both weekly incidence and final epidemic size greatly improved at and after the epidemic peak, with all the observed data points falling within the uncertainty bounds. Real-time forecasting using the discrete-time stochastic model, with its simple computation of the uncertainty bounds, was successful. Because of the simple model structure, the proposed model has the potential to additionally account for various types of heterogeneity, time-dependent transmission dynamics and epidemiological details. The impact of such complexities on forecasting should be explored when the data become available as part of disease surveillance.
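The branching-process forecast can be sketched with a few lines of simulation. The paper derives its uncertainty bounds analytically from chains of conditional offspring distributions; sampling those chains, as below, is a simpler Monte Carlo stand-in, and the numbers used are purely illustrative.

```python
import numpy as np

def forecast_incidence(current_cases, R, weeks, n_sims=10_000, seed=1):
    """Monte Carlo forecast from a branching-process approximation:
    each case generates Poisson(R) cases in the next reporting week.
    Sampling replaces the paper's analytical chains of conditional
    offspring distributions; illustrative only.
    """
    rng = np.random.default_rng(seed)
    paths = np.empty((n_sims, weeks), dtype=np.int64)
    cases = np.full(n_sims, current_cases)
    for t in range(weeks):
        cases = rng.poisson(R * cases)       # next generation of cases
        paths[:, t] = cases
    lower, upper = np.percentile(paths, [2.5, 97.5], axis=0)
    return paths.mean(axis=0), lower, upper

# Hypothetical situation: 50 cases this week, reproduction number 1.4.
mean, lo, hi = forecast_incidence(current_cases=50, R=1.4, weeks=4)
```

The width of the (lo, hi) band grows with the forecast horizon, which mirrors the abstract's observation that forecasts only become tight once enough data constrain the parameters.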
Ability of matrix models to explain the past and predict the future of plant populations.
McEachern, Kathryn; Crone, Elizabeth E.; Ellis, Martha M.; Morris, William F.; Stanley, Amanda; Bell, Timothy; Bierzychudek, Paulette; Ehrlen, Johan; Kaye, Thomas N.; Knight, Tiffany M.; Lesica, Peter; Oostermeijer, Gerard; Quintana-Ascencio, Pedro F.; Ticktin, Tamara; Valverde, Teresa; Williams, Jennifer I.; Doak, Daniel F.; Ganesan, Rengaian; Thorpe, Andrea S.; Menges, Eric S.
2013-01-01
Uncertainty associated with ecological forecasts has long been recognized, but forecast accuracy is rarely quantified. We evaluated how well data on 82 populations of 20 species of plants spanning 3 continents explained and predicted plant population dynamics. We parameterized stage-based matrix models with demographic data from individually marked plants and determined how well these models forecast population sizes observed at least 5 years into the future. Simple demographic models forecasted population dynamics poorly; only 40% of observed population sizes fell within our forecasts' 95% confidence limits. However, these models explained population dynamics during the years in which data were collected; observed changes in population size during the data-collection period were strongly positively correlated with population growth rate. Thus, these models are at least a sound way to quantify population status. Poor forecasts were not associated with the number of individual plants or years of data. We tested whether vital rates were density dependent and found both positive and negative density dependence. However, density dependence was not associated with forecast error. Forecast error was significantly associated with environmental differences between the data collection and forecast periods. To forecast population fates, more detailed models, such as those that project how environments are likely to change and how these changes will affect population dynamics, may be needed. Such detailed models are not always feasible. Thus, it may be wiser to make risk-averse decisions than to expect precise forecasts from models.
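A stage-based matrix forecast of the kind evaluated here fits in a few lines. The matrix below is a hypothetical three-stage plant life cycle we made up for illustration, not data from the study.

```python
import numpy as np

# Hypothetical 3-stage life cycle (seedling, juvenile, adult); entries
# are illustrative vital rates, not data from the study. Column j gives
# the per-capita contribution of stage j to each stage next year.
A = np.array([[0.0, 0.5, 2.0],    # seeds produced by juveniles/adults
              [0.3, 0.4, 0.0],    # seedling survival and growth
              [0.0, 0.3, 0.85]])  # maturation and adult survival

n = np.array([100.0, 20.0, 10.0])  # current stage abundances
for _ in range(5):                 # 5-year forecast, as in the study
    n = A @ n

# Asymptotic growth rate = dominant eigenvalue of the projection matrix.
lam = np.max(np.real(np.linalg.eigvals(A)))
```

The study's point is that such a projection quantifies current status well, but its 5-year forecasts fail when the environment during the forecast period differs from the data-collection period, since the matrix entries are frozen at their estimated values.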
Kirkilionis, Markus; Janus, Ulrich; Sbano, Luca
2011-09-01
We model in detail a simple synthetic genetic clock that was engineered by Atkinson et al. (Cell 113(5):597-607, 2003) using Escherichia coli as a host organism. The theoretical description of this engineered clock uses the modelling framework presented in Kirkilionis et al. (Theory Biosci. doi: 10.1007/s12064-011-0125-0 , 2011, this volume). The main goal of this accompanying article is to illustrate that parts of the modelling process can be algorithmically automatised once the model framework we called 'average dynamics' is accepted (Sbano and Kirkilionis, WMI Preprint 7/2007, 2008c; Kirkilionis and Sbano, Adv Complex Syst 13(3):293-326, 2010). The advantage of the 'average dynamics' framework is that system components (especially in genetics) can be represented more easily in the model. In particular, once discovered and characterised, specific molecular players together with their functions can be incorporated. This means that the 'gene' concept becomes clearer, for example in the way the genetic component reacts under different regulatory conditions. Using the framework, it has become a realistic aim to link mathematical modelling to novel tools of bioinformatics in the future, at least if the number of regulatory units can be estimated. This should hold in any case in synthetic environments, because the different synthetic genetic components are simply known (Elowitz and Leibler, Nature 403(6767):335-338, 2000; Gardner et al., Nature 403(6767):339-342, 2000; Hasty et al., Nature 420(6912):224-230, 2002). The paper therefore illustrates, as a necessary first step, how detailed modelling of molecular interactions with known molecular components leads to a dynamic mathematical model that can be compared to experimental results on various levels or scales. The different genetic modules or components are represented in different detail by model variants.
We explain how the framework can be used for investigating other more complex genetic systems in terms of regulation and feedback.
The Hydraulic Jump: Finding Complexity in Turbulent Water
ERIC Educational Resources Information Center
Vondracek, Mark
2013-01-01
Students who do not progress to more advanced science disciplines in college generally do not realize that seemingly simple physical systems are--when studied in detail--more complex than one might imagine. This article presents one such phenomenon--the hydraulic jump--as a way to help students see the complexity behind the seemingly simple, and…
Determining Salinity by Simple Means.
ERIC Educational Resources Information Center
Schlenker, Richard M.
This paper describes the construction and use of a simple salinometer. The salinometer is composed, mainly, of a milliammeter and a battery and uses the measurement of current flow to determine the salinity of water. A complete list of materials is given, as are details of construction and operation of the equipment. The use of the salinometer in…
NASA Astrophysics Data System (ADS)
Prasanna, V.
2018-01-01
This study makes use of temperature and precipitation from CMIP5 climate model output for climate change application studies over the Indian region during the summer monsoon season (JJAS). Bias correction of temperature and precipitation from CMIP5 GCM simulation results with respect to observations is discussed in detail. Non-linear statistical bias correction is a suitable method for climate change data because it is simple and does not add artificial uncertainties to the impact assessment of climate change scenarios for climate change application studies (agricultural production changes) in the future. The simple statistical bias correction uses observational constraints on the GCM baseline, and the projected results are scaled with respect to the changing magnitude in future scenarios, varying from one model to the other. Two types of bias correction techniques are shown here: (1) a simple bias correction using a percentile-based quantile-mapping algorithm and (2) a simple but improved bias correction method, a cumulative distribution function (CDF; Weibull distribution function)-based quantile-mapping algorithm. This study shows that the percentile-based quantile mapping method gives results similar to the CDF (Weibull)-based quantile mapping method, and the two methods are comparable. The bias correction is applied to temperature and precipitation for the present climate and for future projected data, for use in a simple statistical model to understand future changes in crop production over the Indian region during the summer monsoon season. In total, 12 CMIP5 models are used for the Historical (1901-2005), RCP4.5 (2005-2100), and RCP8.5 (2005-2100) scenarios. The climate index from each CMIP5 model and the observed agricultural yield index over the Indian region are used in a regression model to project changes in agricultural yield over India under the RCP4.5 and RCP8.5 scenarios.
The results revealed better convergence of model projections in the bias-corrected data compared to the uncorrected data. The study can be extended to localized regional domains aimed at understanding future changes in agricultural productivity with an agro-economic model or a simple statistical model. The statistical model indicated that total food grain yield will increase over the Indian region in the future: by approximately 50 kg/ha under the RCP4.5 scenario from 2001 until the end of 2100, and by approximately 90 kg/ha under the RCP8.5 scenario over the same period. There are many studies using bias correction techniques, but this study applies bias correction to future climate scenario data from CMIP5 models and couples it with crop statistics to estimate future crop yield changes over the Indian region.
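Method (1), percentile-based quantile mapping, can be sketched empirically as follows. The function and the synthetic data are our own illustration; in method (2) a fitted Weibull CDF would take the place of the empirical quantiles.

```python
import numpy as np

def quantile_map(model_hist, obs, model_future):
    """Percentile-based quantile mapping: each future model value is
    replaced by the observed value found at the same percentile of the
    model's historical distribution. Empirical illustration only; a
    fitted Weibull CDF would replace the empirical quantiles in the
    CDF-based variant.
    """
    ranks = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    return np.quantile(obs, np.clip(ranks, 0.0, 1.0))

# Synthetic example: the model runs ~2 degrees warm with inflated spread.
rng = np.random.default_rng(0)
obs = rng.normal(27.0, 1.0, 1000)            # observed JJAS temperature
model_hist = rng.normal(29.0, 1.5, 1000)     # biased historical simulation
model_future = rng.normal(30.0, 1.5, 1000)   # projection carries the bias
corrected = quantile_map(model_hist, obs, model_future)
```

The corrected projection matches the observed baseline and spread while preserving the model's warming signal, which is the scaling property the abstract describes.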
A neuronal model of predictive coding accounting for the mismatch negativity.
Wacongne, Catherine; Changeux, Jean-Pierre; Dehaene, Stanislas
2012-03-14
The mismatch negativity (MMN) is thought to index the activation of specialized neural networks for active prediction and deviance detection. However, a detailed neuronal model of the neurobiological mechanisms underlying the MMN is still lacking, and its computational foundations remain debated. We propose here a detailed neuronal model of auditory cortex, based on predictive coding, that accounts for the critical features of the MMN. The model is entirely composed of spiking excitatory and inhibitory neurons interconnected in a layered cortical architecture with distinct input, predictive, and prediction-error units. A spike-timing dependent learning rule, relying upon NMDA receptor synaptic transmission, allows the network to adjust its internal predictions and use a memory of recent past inputs to anticipate future stimuli based on transition statistics. We demonstrate that this simple architecture can account for the major empirical properties of the MMN. These include a frequency-dependent response to rare deviants, a response to unexpected repeats in alternating sequences (ABABAA…), a lack of consideration of the global sequence context, a response to sound omission, and a sensitivity of the MMN to NMDA receptor antagonists. Novel predictions are derived, and a new magnetoencephalography experiment in healthy human subjects is presented that validates our key hypothesis: the MMN results from active cortical prediction rather than passive synaptic habituation.
Shahaf, Goded; Pratt, Hillel
2013-01-01
In this work we demonstrate the principles of a systematic approach to modeling the neurophysiologic processes underlying a behavioral function. The modeling is based upon a flexible simulation tool, which enables parametric specification of the underlying neurophysiologic characteristics. While the impact of selecting specific parameters is of interest, here we focus on the insights that emerge from widely accepted assumptions regarding neuronal representation. We show that even such simple assumptions enable the derivation of significant insights into the nature of the neurophysiologic processes underlying behavior. We demonstrate our approach in some detail by modeling the behavioral go/no-go task. We further demonstrate the practical significance of this simplified modeling approach for interpreting experimental data: the manifestation of these processes in the EEG and ERP literature of normal and abnormal (ADHD) function, together with a comprehensive analysis of relevant ERP data. In fact, we show that from the model-based spatiotemporal segregation of the processes it is possible to derive simple yet effective, theory-based EEG markers that differentiate normal and ADHD subjects. We conclude by claiming that the neurophysiologic processes modeled for the go/no-go task are part of a limited set of neurophysiologic processes which underlie, in a variety of combinations, any behavioral function with a measurable operational definition. Such neurophysiologic processes could be sampled directly from EEG on the basis of model-based spatiotemporal segregation.
Initial Observations on the Burning of an Ethanol Droplet in Microgravity
NASA Technical Reports Server (NTRS)
Kazakov, Andrei; Urban, Bradley; Conley, Jordan; Dryer, Frederick L.; Ferkul, Paul (Technical Monitor)
1999-01-01
Combustion of liquid ethanol represents an important system from both fundamental and practical points of view. Ethanol is currently used as an additive to gasoline to reduce carbon monoxide and particulate emissions as well as to improve the fuel octane rating. A detailed physical understanding of liquid ethanol combustion is therefore necessary to achieve optimal performance of such fuel blends in practical conditions. Ethanol is also a relatively simple model compound suitable for investigating important combustion characteristics typical of more complex fuels. In particular, ethanol has been proposed for studies of sooting behavior during droplet burning. The sooting nature of ethanol has pressure sensitivities similar to those of n-heptane, but shifted to a higher range of pressures (1-3 atm). Additionally, liquid ethanol is miscible with the water produced during its combustion, forming mixtures with azeotropic behavior, a phenomenon important for understanding multi-component liquid fuel combustion. In this work, we present initial results obtained in a series of recent space-based experiments and develop a detailed model describing the burning of an ethanol droplet in microgravity.
Comparisons of dense-plasma-focus kinetic simulations with experimental measurements.
Schmidt, A; Link, A; Welch, D; Ellsworth, J; Falabella, S; Tang, V
2014-06-01
Dense-plasma-focus (DPF) Z-pinch devices are sources of copious high-energy electrons and ions, x rays, and neutrons. The mechanisms through which these physically simple devices generate such high-energy beams in a relatively short distance are not fully understood and past optimization efforts of these devices have been largely empirical. Previously we reported on fully kinetic simulations of a DPF and compared them with hybrid and fluid simulations of the same device. Here we present detailed comparisons between fully kinetic simulations and experimental data on a 1.2 kJ DPF with two electrode geometries, including neutron yield and ion beam energy distributions. A more intensive third calculation is presented which examines the effects of a fully detailed pulsed power driver model. We also compare simulated electromagnetic fluctuations with direct measurement of radiofrequency electromagnetic fluctuations in a DPF plasma. These comparisons indicate that the fully kinetic model captures the essential physics of these plasmas with high fidelity, and provide further evidence that anomalous resistivity in the plasma arises due to a kinetic instability near the lower hybrid frequency.
Bayes Forest: a data-intensive generator of morphological tree clones
Järvenpää, Marko; Åkerblom, Markku; Raumonen, Pasi; Kaasalainen, Mikko
2017-01-01
Detailed and realistic tree form generators have numerous applications in ecology and forestry. For example, the varying morphology of trees contributes differently to the formation of landscapes, natural habitats of species, and eco-physiological characteristics of the biosphere. Here, we present an algorithm for generating morphological tree “clones” based on detailed reconstruction of laser scanning data, a statistical measure of similarity, and a plant growth model with simple stochastic rules. The algorithm is designed to produce tree forms, i.e., morphological clones, that are similar (but not identical) with respect to tree-level structure while varying in fine-scale structural detail. Although we opted for certain choices in our algorithm, individual parts may vary depending on the application, making it a generally adaptable pipeline. Namely, we showed that a specific multipurpose procedural stochastic growth model can be algorithmically adjusted to produce morphological clones replicated from a target, experimentally measured tree. For this, we developed a statistical measure of similarity (structural distance) between any given pair of trees, which allows comprehensive comparison of tree morphologies by means of empirical distributions describing the geometrical and topological features of a tree. Finally, we developed a programmable interface to manipulate the data required by the algorithm. Our algorithm can be used in a variety of applications for exploring the morphological potential of growth models (both theoretical and experimental) arising in all sectors of plant science research. PMID:29020742
Fu, Min; Wu, Wenming; Hong, Xiafei; Liu, Qiuhua; Jiang, Jialin; Ou, Yaobin; Zhao, Yupei; Gong, Xinqi
2018-04-24
Efficient computational recognition and segmentation of target organs from medical images are foundational to diagnosis and treatment, especially for pancreatic cancer. In practice, the diversity in appearance of the pancreas and other abdominal organs makes detailed texture information important to a segmentation algorithm. According to our observations, however, the structures of previous networks, such as the Richer Feature Convolutional Network (RCF), are too coarse to segment the object (pancreas) accurately, especially its edge. In this paper, we extend the RCF, originally proposed for edge detection, to the challenging task of pancreas segmentation, and put forward a novel pancreas segmentation network. By employing a multi-layer up-sampling structure in place of the simple up-sampling operation at every stage, the proposed network fully exploits multi-scale detailed contexture information about the object (pancreas) to perform per-pixel segmentation. We train our network on CT scans, obtaining an effective pipeline. With this multi-layer up-sampling model, our pipeline achieves better performance than RCF on the task of single-object (pancreas) segmentation. Combined with multi-scale input, it reaches a DSC (Dice Similarity Coefficient) of 76.36% on the test data. Our experiments show that the proposed model outperforms previous networks on our dataset; in other words, it is better at capturing detailed contexture information. Our new single-object segmentation model therefore has practical value for computational automatic diagnosis.
NASA Astrophysics Data System (ADS)
Ghil, M.; Spyratos, V.; Bourgeron, P. S.
2007-12-01
The late summer of 2007 has seen again a large number of catastrophic forest fires in the Western United States and Southern Europe. These fires arose in or spread to human habitats at the so-called wildland-urban interface (WUI). Within the conterminous United States alone, the WUI occupies just under 10 percent of the surface and contains almost 40 percent of all housing units. Recent dry spells associated with climate variability and climate change make the impact of such catastrophic fires a matter of urgency for decision makers, scientists and the general public. In order to explore the qualitative influence of the presence of houses on fire spread, we considered only uniform landscapes and fire spread as a simple percolation process, with given house densities d and vegetation flammabilities p. Wind, topography, fuel heterogeneities, firebrands and weather affect actual fire spread. The present theoretical results would therefore need to be integrated into more detailed fire models before practical, quantitative applications. Our simple fire-spread model, along with housing and vegetation data, shows that fire-size probability distributions can be strongly modified by the density d and flammability of houses. We highlight a sharp transition zone in the parameter space of vegetation flammability p and house density d. The sharpness of this transition is related to the critical thresholds that arise in percolation theory for an infinite domain; it is their translation into our model's finite-area domain, which is a more realistic representation of actual fire landscapes. Many actual fire landscapes in the United States appear to have spreading properties close to this transition zone. Hence, and despite having neglected additional complexities, our idealized model's results indicate that more detailed models used for assessing fire risk in the WUI should integrate the density and flammability of houses in these areas.
Furthermore, our results imply that fireproofing houses and their immediate surroundings within the WUI would not only reduce the houses' flammability and increase the security of the inhabitants, but also reduce fire risk for the entire landscape.
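The percolation picture described above can be sketched in a few lines of Python. This is a minimal, illustrative version only: the grid size, parameter values, and single central ignition point are assumptions for demonstration, not the study's calibrated inputs. Each cell is a house with probability d or vegetation otherwise, and fire spreads stochastically to the four nearest neighbors:

```python
import random
from collections import deque

def burned_fraction(size=50, d=0.1, p_veg=0.55, p_house=0.9, seed=0):
    """Site-percolation fire spread on a uniform square landscape.

    Each cell is a house with probability d (ignites with probability
    p_house) or vegetation (ignites with probability p_veg); fire
    spreads from each burning cell to its four neighbors. Ignition
    starts at the center cell.
    """
    rng = random.Random(seed)
    # assign per-cell flammability according to house density d
    flam = [[p_house if rng.random() < d else p_veg for _ in range(size)]
            for _ in range(size)]
    burning = [[False] * size for _ in range(size)]
    q = deque([(size // 2, size // 2)])
    burning[size // 2][size // 2] = True
    while q:  # breadth-first spread of the fire front
        x, y = q.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < size and 0 <= ny < size and not burning[nx][ny]:
                if rng.random() < flam[nx][ny]:
                    burning[nx][ny] = True
                    q.append((nx, ny))
    return sum(map(sum, burning)) / size**2
```

Sweeping p_veg and d in such a toy model reproduces the qualitative feature emphasized in the abstract: a sharp transition in final fire size as the parameters cross the percolation threshold of the finite grid.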
NASA Astrophysics Data System (ADS)
Baumgart, M.; Druml, N.; Consani, M.
2018-05-01
This paper presents a simulation approach for Time-of-Flight cameras to estimate sensor performance and accuracy, as well as to help understand experimentally discovered effects. The main scope is the detailed simulation of the optical signals. We use a raytracing-based approach with the optical path length as the master parameter for depth calculations. The procedure is described in detail with references to our implementation in Zemax OpticStudio and Python. Our simulation approach supports multiple and extended light sources and accounts for all effects within the geometrical optics model. In particular, multi-object reflection/scattering ray paths, translucent objects, and aberration effects (e.g. distortion caused by the ToF lens) are supported. The optical-path-length approach also enables the implementation of different ToF sensor types and transient imaging evaluations. The main features are demonstrated on a simple 3D test scene.
Aging in complex interdependency networks.
Vural, Dervis C; Morrison, Greg; Mahadevan, L
2014-02-01
Although species longevity is subject to a diverse range of evolutionary forces, the mortality curves of a wide variety of organisms are rather similar. Here we argue that qualitative and quantitative features of aging can be reproduced by a simple model based on the interdependence of fault-prone agents on one another. In addition to fitting our theory to the empirical mortality curves of six very different organisms, we establish the dependence of lifetime and aging rate on initial conditions, damage and repair rates, and system size. We compare the size distributions of disease and death and find that they have qualitatively different properties. We show that aging patterns are largely insensitive to the details of the interdependency network's structure or formation, suggesting that aging is a many-body effect.
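A toy version of such an interdependency model can be written directly. The network construction, damage rate, and failure threshold below are illustrative assumptions, not the parameters fitted in the paper; the sketch only demonstrates the qualitative point that dependency cascades shorten lifetimes relative to independent failure:

```python
import random

def lifetime(n=100, k=3, damage=0.01, threshold=0.5, tmax=5000, seed=0):
    """One organism = n fault-prone nodes, each depending on k others.

    A node fails spontaneously with probability `damage` per step, or
    irreversibly once more than a fraction `threshold` of its
    dependencies have failed; the organism dies when fewer than 10%
    of nodes survive. Returns the death time (or tmax if still alive).
    """
    rng = random.Random(seed)
    deps = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    alive = [True] * n
    for t in range(1, tmax + 1):
        for i in range(n):  # spontaneous damage
            if alive[i] and rng.random() < damage:
                alive[i] = False
        changed = True
        while changed:  # cascade of dependency-driven failures
            changed = False
            for i in range(n):
                if alive[i] and sum(not alive[j] for j in deps[i]) / k > threshold:
                    alive[i] = False
                    changed = True
        if sum(alive) < 0.1 * n:
            return t
    return tmax

# with cascades (threshold=0.5) death comes earlier than with
# independent failures only (threshold=1.0 disables cascades entirely)
coupled = [lifetime(threshold=0.5, seed=s) for s in range(20)]
independent = [lifetime(threshold=1.0, seed=s) for s in range(20)]
```

Collecting death times over many runs of the coupled model yields a mortality rate that rises with age, the Gompertz-like behavior the paper fits to empirical curves.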
NASA Astrophysics Data System (ADS)
Ross, Graham G.; Germán, Gabriel; Vázquez, J. Alberto
2016-05-01
We construct two simple effective field theory versions of Hybrid Natural Inflation (HNI) that illustrate the range of its phenomenological implications. The resulting inflationary sector potential, V = Δ⁴(1 + a cos(ϕ/f)), arises naturally, with the inflaton field a pseudo-Nambu-Goldstone boson. The end of inflation is triggered by a waterfall field, and the conditions for this to happen are determined. Also of interest is the fact that the slow-roll parameter ɛ (and hence the tensor amplitude) is a non-monotonic function of the field, with a maximum at which observables take universal values; this maximum determines the largest possible tensor-to-scalar ratio r. In one of the models the inflationary scale can be as low as the electroweak scale. We explore the associated HNI phenomenology in detail, taking account of the constraints from black hole production, and perform a detailed fit to the Planck 2015 temperature and polarisation data.
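The non-monotonicity of ɛ is easy to verify numerically. In units with the reduced Planck mass set to one, ɛ = (1/2)(V′/V)² for this potential vanishes at the extrema of the cosine and peaks in between; the values of a and f below are illustrative, not the ones constrained by the Planck fit:

```python
import math

def epsilon(phi, a=0.9, f=1.0):
    """Slow-roll parameter for V = Delta^4 (1 + a cos(phi/f)), M_Pl = 1.
    Delta^4 cancels in V'/V, so it is set to one here."""
    V = 1.0 + a * math.cos(phi / f)
    dV = -(a / f) * math.sin(phi / f)
    return 0.5 * (dV / V) ** 2

# scan one half-period of the potential: epsilon -> 0 at both ends
phis = [i * math.pi / 1000.0 for i in range(1, 1000)]
eps = [epsilon(p) for p in phis]
peak = max(eps)  # the maximum of epsilon bounds the tensor ratio r ~ 16*epsilon
```

Because ɛ vanishes at ϕ = 0 and ϕ = πf and is positive in between, it necessarily has an interior maximum, which is the origin of the universal bound on r discussed in the abstract.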
Idealized gas turbine combustor for performance research and validation of large eddy simulations.
Williams, Timothy C; Schefer, Robert W; Oefelein, Joseph C; Shaddix, Christopher R
2007-03-01
This paper details the design of a premixed, swirl-stabilized combustor that was designed and built for the express purpose of obtaining validation-quality data for the development of large eddy simulations (LES) of gas turbine combustors. The combustor features nonambiguous boundary conditions, a geometrically simple design that retains the essential fluid dynamics and thermochemical processes that occur in actual gas turbine combustors, and unrestricted access for laser and optical diagnostic measurements. After discussing the design details, a preliminary investigation of the performance and operating envelope of the combustor is presented. With the combustor operating on premixed methane/air, both the equivalence ratio and the inlet velocity were systematically varied and the flame structure was recorded via digital photography. Interesting lean flame blowout and resonance characteristics were observed. In addition, the combustor exhibited a large region of stable, acoustically clean combustion that is suitable for preliminary validation of LES models.
Mechanism of lignin inhibition of enzymatic biomass deconstruction
Vermaas, Josh V.; Petridis, Loukas; Qi, Xianghong; ...
2015-12-01
The conversion of plant biomass to ethanol via enzymatic cellulose hydrolysis offers a potentially sustainable route to biofuel production. However, the inhibition of enzymatic activity in pretreated biomass by lignin severely limits the efficiency of this process. By performing atomic-detail molecular dynamics simulation of a biomass model containing cellulose, lignin, and cellulases (TrCel7A), we elucidate detailed lignin inhibition mechanisms. We find that lignin binds preferentially both to the elements of cellulose to which the cellulases also preferentially bind (the hydrophobic faces) and to the specific residues on the cellulose-binding module of TrCel7A that are critical for cellulose binding (Y466, Y492, and Y493). Lignin thus binds exactly where, for industrial purposes, it is least desired, providing a simple explanation of why hydrolysis yields increase with lignin removal.
Accretion onto stellar mass black holes
NASA Astrophysics Data System (ADS)
Deegan, Patrick
2009-12-01
I present work on accretion onto stellar mass black holes in several scenarios. Due to dynamical friction, stellar mass black holes are expected to form high density cusps in the inner parsec of our Galaxy. These compact remnants may be accreting the cold dense gas present there, giving rise to potentially observable X-ray emission. I build a simple but detailed time-dependent model of such emission. Future observations of the distribution and orbits of the gas in the inner parsec of Sgr A* will put tighter constraints on the cusp of compact remnants. GRS 1915+105 is an LMXB whose large orbital period implies a very large accretion disc and explains the extraordinary duration of its current outburst. I present smoothed particle hydrodynamic simulations of the accretion disc. The model includes the thermo-viscous instability, irradiation from the central object, and wind loss. I find that the outburst of GRS 1915+105 should last a minimum of 20 years, and up to ~100 years if irradiation plays a significant role in this system. The predicted recurrence times are of the order of 10^4 years, implying a duty cycle for GRS 1915+105 of a few tenths of a percent. I present a simple analytical method to describe the observable behaviour of long period black hole LMXBs similar to GRS 1915+105. Constructing two simple models for the surface density in the disc, outburst and quiescence times are calculated as a function of orbital period. LMXBs are an important constituent of the X-ray luminosity function (XLF) of giant elliptical galaxies. I find that the duty cycle can vary considerably with orbital period, with implications for modelling the XLF.
Das, Payel; Matysiak, Silvina; Clementi, Cecilia
2005-01-01
Coarse-grained models have been extremely valuable in promoting our understanding of protein folding. However, the quantitative accuracy of existing simplified models is strongly hindered either from the complete removal of frustration (as in the widely used Gō-like models) or from the compromise with the minimal frustration principle and/or realistic protein geometry (as in the simple on-lattice models). We present a coarse-grained model that “naturally” incorporates sequence details and energetic frustration into an overall minimally frustrated folding landscape. The model is coupled with an optimization procedure to design the parameters of the protein Hamiltonian to fold into a desired native structure. The application to the study of src-Src homology 3 domain shows that this coarse-grained model contains the main physical-chemical ingredients that are responsible for shaping the folding landscape of this protein. The results illustrate the importance of nonnative interactions and energetic heterogeneity for a quantitative characterization of folding mechanisms. PMID:16006532
Improved Analysis of Earth System Models and Observations using Simple Climate Models
NASA Astrophysics Data System (ADS)
Nadiga, B. T.; Urban, N. M.
2016-12-01
Earth system models (ESM) are the most comprehensive tools we have to study climate change and develop climate projections. However, the computational infrastructure required and the cost incurred in running such ESMs preclude their direct use in conjunction with the wide variety of tools that can further our understanding of climate. Here we are referring to tools that range from dynamical systems tools that give insight into underlying flow structure and topology, to applied mathematical and statistical techniques central to quantifying stability, sensitivity, uncertainty and predictability, to machine learning tools that are now being rapidly developed or improved. Our approach to facilitating the use of such models is to analyze the output of ESM experiments (cf. CMIP) using a range of simpler models that consider integral balances of important quantities such as mass and/or energy in a Bayesian framework. We highlight the use of this approach in the context of the uptake of heat by the world oceans in the ongoing global warming. Indeed, since in excess of 90% of the anomalous radiative forcing due to greenhouse gas emissions is sequestered in the world oceans, the nature of ocean heat uptake crucially determines the surface warming that is realized (cf. climate sensitivity). Nevertheless, ESMs themselves are never run long enough to directly assess climate sensitivity. We therefore consider a range of models based on integral balances--balances that have to be realized in all first-principles based models of the climate system, including the most detailed state-of-the-art climate simulations. The models range from simple energy balance models to those that represent dynamically important ocean processes such as the conveyor-belt circulation (Meridional Overturning Circulation, MOC), North Atlantic Deep Water (NADW) formation, the Antarctic Circumpolar Current (ACC) and eddy mixing.
Results from Bayesian analysis of such models using both ESM experiments and actual observations are presented. One such result points to the importance of direct sequestration of heat below 700 m, a process that is not allowed for in the simple models that have been traditionally used to deduce climate sensitivity.
Procedures for determining MATMOD-4V material constants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowe, T.C.
1993-11-01
The MATMOD-4V constitutive relations were developed from the original MATMOD model to extend the range of nonelastic deformation behaviors represented to include transient phenomena such as strain softening. Improvements in MATMOD-4V increased the number of independent material constants and the difficulty in determining their values. Though the constitutive relations are conceptually simple, their form and procedures for obtaining their constants can be complex. This paper reviews in detail the experiments, numerical procedures, and assumptions that have been used to determine a complete set of MATMOD-4V constants for high purity aluminum.
Improved numerical methods for turbulent viscous flows aerothermal modeling program, phase 2
NASA Technical Reports Server (NTRS)
Karki, K. C.; Patankar, S. V.; Runchal, A. K.; Mongia, H. C.
1988-01-01
The details of a study to develop accurate and efficient numerical schemes to predict complex flows are described. In this program, several discretization schemes were evaluated using simple test cases. This assessment led to the selection of three schemes for an in-depth evaluation based on two-dimensional flows. The scheme with the superior overall performance was incorporated in a computer program for three-dimensional flows. To improve the computational efficiency, the selected discretization scheme was combined with a direct solution approach in which the fluid flow equations are solved simultaneously rather than sequentially.
Radiative contribution to thermal conductance in animal furs and other woolly insulators.
Simonis, Priscilla; Rattal, Mourad; Oualim, El Mostafa; Mouhse, Azeddine; Vigneron, Jean-Pol
2014-01-27
This paper deals with radiation's contribution to thermal insulation. The mechanism by which a stack of absorbers limits radiative heat transfer is examined in detail both for black-body shields and grey-body shields. It shows that radiation energy transfer rates should be much faster than conduction rates. It demonstrates that, for opaque screens, increased reflectivity will dramatically reduce the rate of heat transfer, improving thermal insulation. This simple model is thought to contribute to the understanding of how animal furs, human clothes, rockwool insulators, thermo-protective containers, and many other passive energy-saving devices operate.
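The standard grey-body radiation-shield result the paper builds on is compact enough to state in code. For two surfaces at T_hot and T_cold with common emissivity eps, separated by n identical shields of the same emissivity (a textbook idealization; the temperatures and emissivities below are illustrative, not values from the paper):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def shield_flux(T_hot, T_cold, n_shields, eps=1.0):
    """Steady radiative flux between two grey surfaces separated by
    n identical grey shields, all with emissivity eps:
        q = sigma * (T_hot^4 - T_cold^4) / [(n + 1) * (2/eps - 1)]
    eps = 1 recovers the black-body result q0 / (n + 1)."""
    return SIGMA * (T_hot**4 - T_cold**4) / ((n_shields + 1) * (2.0 / eps - 1.0))

# each added shield divides the flux, and making the shields
# reflective (low emissivity) reduces it dramatically further
q_bare = shield_flux(310.0, 270.0, 0)               # no shield, black surfaces
q_black = shield_flux(310.0, 270.0, 10)             # 10 black-body shields
q_shiny = shield_flux(310.0, 270.0, 10, eps=0.05)   # 10 reflective shields
```

The two mechanisms in the abstract are visible directly: stacking absorbers divides the flux by (n + 1), while increased reflectivity (small eps) multiplies the insulation by a further large factor.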
Minami, Atsushi; Oguri, Hiroki; Watanabe, Kenji; Oikawa, Hideaki
2013-08-01
The diversity of natural polycyclic polyethers originates from a very simple yet versatile strategy consisting of epoxidation of a linear polyene followed by an epoxide-opening cascade. To understand these two-step enzymatic transformations at the molecular level, a flavin-containing monooxygenase (EPX), Lsd18, and an epoxide hydrolase (EH), Lsd19, were selected as model enzymes for extensive investigation of substrate specificity, catalytic mechanism, cofactor requirement, and crystal structure. This pioneering study on the prototypical lasalocid EPX and EH provides insight into the detailed mechanism of the ionophore polyether assembly machinery and clarifies the remaining issues for polyether biosynthesis. Copyright © 2013 Elsevier Ltd. All rights reserved.
Transient upset models in computer systems
NASA Technical Reports Server (NTRS)
Mason, G. M.
1983-01-01
Essential factors for the design of transient upset monitors for computers are discussed. The upset is a system level event that is software dependent. It can occur in the program flow, the opcode set, the opcode address domain, the read address domain, and the write address domain. Most upsets are in the program flow. It is shown that simple, external monitors functioning transparently relative to the system operations can be built if a detailed accounting is made of the characteristics of the faults that can happen. Sample applications are provided for different states of the Z-80 and 8085 based system.
NASA Technical Reports Server (NTRS)
Thacker, B. H.; Mcclung, R. C.; Millwater, H. R.
1990-01-01
An eigenvalue analysis of a typical space propulsion system turbopump blade is presented using an approximate probabilistic analysis methodology. The methodology was developed originally to investigate the feasibility of computing probabilistic structural response using closed-form approximate models. This paper extends the methodology to structures for which simple closed-form solutions do not exist. The finite element method will be used for this demonstration, but the concepts apply to any numerical method. The results agree with detailed analysis results and indicate the usefulness of using a probabilistic approximate analysis in determining efficient solution strategies.
Theoretical Analysis of a Pulse Tube Regenerator
NASA Technical Reports Server (NTRS)
Roach, Pat R.; Kashani, Ali; Lee, J. M.; Cheng, Pearl L. (Technical Monitor)
1995-01-01
A theoretical analysis of the behavior of a typical pulse tube regenerator has been carried out. Assuming simple sinusoidal oscillations, the static and oscillatory pressures, velocities and temperatures have been determined for a model that includes a compressible gas and imperfect thermal contact between the gas and the regenerator matrix. For realistic material parameters, the analysis reveals that the pressure and velocity oscillations are largely independent of the details of the thermal contact between the gas and the solid matrix. Only the temperature oscillations depend on this contact. Suggestions for optimizing the design of a regenerator are given.
Deploying Server-side File System Monitoring at NERSC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uselton, Andrew
2009-05-01
The Franklin Cray XT4 at the NERSC center was equipped with the server-side I/O monitoring infrastructure Cerebro/LMT, which is described here in detail. Insights gained from the data produced include a better understanding of instantaneous data rates during file system testing, file system behavior during regular production time, and long-term average behaviors. Information and insights gleaned from this monitoring support efforts to proactively manage the I/O infrastructure on Franklin. A simple model for I/O transactions is introduced and compared with the 250 million observations sent to the LMT database from August 2008 to February 2009.
Transformations between Jordan and Einstein frames: Bounces, antigravity, and crossing singularities
NASA Astrophysics Data System (ADS)
Kamenshchik, Alexander Yu.; Pozdeeva, Ekaterina O.; Vernov, Sergey Yu.; Tronconi, Alessandro; Venturi, Giovanni
2016-09-01
We study the relation between the Jordan-Einstein frame transition and the possible description of the crossing of singularities in flat Friedmann universes, using the fact that the regular evolution in one frame can correspond to crossing singularities in the other frame. We show that some interesting effects arise in simple models such as one with a massless scalar field or another wherein the potential is constant in the Einstein frame. The dynamics in these models and in their conformally coupled counterparts are described in detail, and a method for the continuation of such cosmological evolutions beyond the singularity is developed. We compare our approach with some other, recently developed, approaches to the problem of the crossing of singularities.
Finite Element Analysis of Doorframe Structure of Single Oblique Pole Type in Container Crane
NASA Astrophysics Data System (ADS)
Cheng, X. F.; Wu, F. Q.; Tang, G.; Hu, X.
2017-07-01
Compared with the composite type, the single oblique pole type has several advantages, such as a simple structure, lower steel consumption, and a high safe overhead clearance. A finite element model of the single oblique pole type is established in ANSYS, with several details considered in the simplification, such as the cross-sections of the Girder and Boom, the torque in the Girder and Boom induced by the Machinery house and Trolley, and densities adjusted to the manner of simplification. The stress and deformation at ten observation points are compared and analyzed with the trolley in nine dangerous positions. Based on the results of the analysis, six dangerous points are selected to provide a reference for the inspection and evaluation of container cranes.
Tracing the origin of azimuthal gluon correlations in the color glass condensate
NASA Astrophysics Data System (ADS)
Lappi, T.; Schenke, B.; Schlichting, S.; Venugopalan, R.
2016-01-01
We examine the origins of azimuthal correlations observed in high energy proton-nucleus collisions by considering the simple example of the scattering of uncorrelated partons off color fields in a large nucleus. We demonstrate how the physics of fluctuating color fields in the color glass condensate (CGC) effective theory generates these azimuthal multiparticle correlations and compute the corresponding Fourier coefficients v_n within different CGC approximation schemes. We discuss in detail the qualitative and quantitative differences between the different schemes. We show how a recently introduced color field domain model that captures key features of the observed azimuthal correlations can be understood in the CGC effective theory as a model of non-Gaussian correlations in the target nucleus.
NASA Technical Reports Server (NTRS)
Shia, Run-Lie; Ha, Yuk Lung; Wen, Jun-Shan; Yung, Yuk L.
1990-01-01
Extensive testing of the advective scheme proposed by Prather (1986) has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. The original scheme is generalized to include higher-order moments. In addition, it is shown how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions, it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.
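One motivation for moment-preserving advection schemes like Prather's is visible in a few lines of code: a simple first-order upwind scheme conserves mass exactly but smears a sharp profile through numerical diffusion. This illustrative sketch shows the deficiency that the moments scheme avoids; it is not Prather's algorithm itself:

```python
def upwind_advect(c, courant, steps):
    """Advect concentrations c on a periodic 1-D grid with the
    first-order upwind scheme (uniform flow in the +x direction)."""
    n = len(c)
    for _ in range(steps):
        c = [c[i] - courant * (c[i] - c[i - 1]) for i in range(n)]
    return c

n = 100
init = [1.0 if 40 <= i < 60 else 0.0 for i in range(n)]
# Courant number 0.5 for 200 steps: the pulse travels exactly one
# full revolution, so the exact solution equals the initial profile
final = upwind_advect(init, courant=0.5, steps=2 * n)
```

The exact solution is a pure translation back to the initial square pulse, yet the upwind result is strongly smoothed; a moments-based scheme instead carries the subgrid distribution along and preserves the profile essentially without numerical diffusion.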
An NPARC Turbulence Module with Wall Functions
NASA Technical Reports Server (NTRS)
Zhu, J.; Shih, T.-H.
1997-01-01
The turbulence module recently developed for the NPARC code has been extended to include wall functions. The Van Driest transformation is used so that the wall functions can be applied to both incompressible and compressible flows. The module is equipped with three two-equation K-epsilon turbulence models: the Chien, Shih-Lumley and CMOTR models. Details of the wall functions as well as their numerical implementation are reported. It is shown that inappropriate artificial viscosity in the near-wall region strongly influences the solution obtained with the wall-function approach. A simple way to eliminate this influence is proposed, which gives satisfactory results in the code validation. The module can be easily linked to the NPARC code for practical applications.
From Cannibalism to Active Motion of Groups
NASA Astrophysics Data System (ADS)
Romanczuk, Pawel; Schimansky-Geier, Lutz
2008-03-01
The detailed mechanisms leading to collective dynamics in groups of animals and insects are still poorly understood. A recent study by Simpson et al. suggests cannibalism as a driving mechanism for the coordinated migration of Mormon crickets [1]. Based on this result, we propose a simple generic model of Brownian particles interacting via asymmetric, non-conservative collisions that account for cannibalistic behavior and the corresponding avoidance strategy. We discuss our model in one and two dimensions and show that a certain type of collision drives the system out of equilibrium and leads to coordinated active motion of groups. [1] Stephen J. Simpson, Gregory A. Sword, Patrick D. Lorch and Iain D. Couzin: Cannibal crickets on a forced march for protein and salt, PNAS, 103:4152-4156, 2006
Numerical implementation of the S-matrix algorithm for modeling of relief diffraction gratings
NASA Astrophysics Data System (ADS)
Yaremchuk, Iryna; Tamulevičius, Tomas; Fitio, Volodymyr; Gražulevičiūte, Ieva; Bobitski, Yaroslav; Tamulevičius, Sigitas
2013-11-01
A new numerical implementation is developed to calculate the diffraction efficiency of relief diffraction gratings. In the new formulation, vectors containing the expansion coefficients of electric and magnetic fields on boundaries of the grating layer are expressed by additional constants. An S-matrix algorithm has been systematically described in detail and adapted to a simple matrix form. This implementation is suitable for the study of optical characteristics of periodic structures by using modern object-oriented programming languages and different standard mathematical software. The modeling program has been developed on the basis of this numerical implementation and tested by comparison with other commercially available programs and experimental data. Numerical examples are given to show the usefulness of the new implementation.
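The core of any S-matrix algorithm is the rule for cascading two layers' scattering matrices, the Redheffer star product. The scalar sketch below is an assumption-laden illustration: the full grating case replaces each scalar with a block matrix of field expansion coefficients, and sign/block conventions vary between papers:

```python
import math

def star(A, B):
    """Redheffer star product of two 2x2 scalar scattering matrices
    S = [[S11, S12], [S21, S22]] (S11, S22 reflections; S12, S21
    transmissions), cascading layer A (left) with layer B (right)."""
    a11, a12, a21, a22 = A[0][0], A[0][1], A[1][0], A[1][1]
    b11, b12, b21, b22 = B[0][0], B[0][1], B[1][0], B[1][1]
    d = 1.0 / (1.0 - b11 * a22)  # resums the multiple reflections
    return [[a11 + a12 * d * b11 * a21, a12 * d * b12],
            [b21 * d * a21, b22 + b21 * d * a22 * b12]]

def interface(r):
    """Lossless scalar interface: reflection r, transmission sqrt(1-r^2)."""
    t = math.sqrt(1.0 - r * r)
    return [[r, t], [t, -r]]

S = star(interface(0.3), interface(0.5))  # two interfaces cascaded
```

Unlike naive transfer-matrix multiplication, this combination rule involves no growing exponentials, which is why S-matrix formulations stay numerically stable for thick or strongly evanescent grating layers; for the lossless scalar case above the cascade conserves energy exactly.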
Availability of health data: requirements and solutions.
Espinosa, A L
1998-03-01
There is an increasing recognition of the importance of the health data available for the corporate healthcare system model with the electronic patient record as the central unit of the healthcare information systems. There is also increasing recognition of the importance of developing simple international standards for record components, including clinical and administrative requirements. Aspects of security and confidentiality have to be reviewed in detail. The advantages of having health data available when and where it is required will modify healthcare delivery and support cost control with economies of scale and sharing of resources. The infrastructure necessary to make this model a reality is being developed through different international initiatives, which have to be integrated and co-ordinated to have common disaster planning strategies and better funding alternatives.
NASA Astrophysics Data System (ADS)
Gerhard, Christoph; Adams, Geoff
2015-10-01
Geometric optics is at the heart of optics teaching. Some of us may remember using pins and string to test the simple lens equation at school. Matters get more complex at undergraduate/postgraduate levels as we are introduced to paraxial rays, real rays, wavefronts, aberration theory and much more. Software is essential for the later stages, and the right software can profitably be used even at school. We present two free PC programs, which have been widely used in optics teaching, and have been further developed in close cooperation with lecturers/professors in order to address the current content of the curricula for optics, photonics and lasers in higher education. PreDesigner is a single thin lens modeller. It illustrates the simple lens law with construction rays and then allows the user to include field size and aperture. Sliders can be used to adjust key values with instant graphical feedback. This tool thus represents a helpful teaching medium for the visualization of basic interrelations in optics. WinLens3DBasic can model multiple thin or thick lenses with real glasses. It shows the system foci, principal planes, and nodal points, gives paraxial ray trace values, details the Seidel aberrations, and offers real ray tracing and many forms of analysis. It is simple to reverse lenses and model tilts and decenters. This tool therefore provides a good base for learning lens design fundamentals. Much work has been put into offering these features in ways that are easy to use and that offer opportunities to enhance the student's background understanding.
Mass and Environment as Drivers of Galaxy Evolution: Simplicity and its Consequences
NASA Astrophysics Data System (ADS)
Peng, Yingjie
2012-01-01
The galaxy population appears at first sight to be composed of endlessly varied types and properties; however, when large samples of galaxies are studied, it emerges that the vast majority of galaxies follow simple scaling relations and similar evolutionary modes, with the outliers a minority. The underlying simplicities of the interrelationships among stellar mass, star formation rate and environment are seen in SDSS and zCOSMOS. We demonstrate that the differential effects of mass and environment are completely separable to z ~ 1, indicating that two distinct physical processes are operating, namely "mass quenching" and "environment quenching". These two simple quenching processes, plus some additional quenching due to merging, then naturally produce the Schechter form of the galaxy stellar mass functions and make quantitative predictions for the inter-relationships between the Schechter parameters of star-forming and passive galaxies in different environments. All of these detailed quantitative relationships are indeed seen, to very high precision, in SDSS, lending strong support to our simple empirically based model. The model also offers qualitative explanations for the "anti-hierarchical" age-mass relation and the alpha-enrichment patterns of passive galaxies, and makes other testable predictions, such as the mass function of the population of transitory objects that are in the process of being quenched, the galaxy major- and minor-merger rates, the galaxy stellar mass assembly history, the star formation history, etc. Although still purely phenomenological, the model makes clear what the evolutionary characteristics of the relevant physical processes must in fact be.
How Much Detail Needs to Be Elucidated in Self-Harm Research?
ERIC Educational Resources Information Center
Stanford, Sarah; Jones, Michael P.
2010-01-01
Assessing self-harm through brief multiple choice items is simple and less invasive than more detailed methods of assessment. However, there is currently little validation for brief methods of self-harm assessment. This study evaluates the extent to which adolescents' perceptions of self-harm agree with definitions in the literature, and what…
Zhao, Jingbo; McMahon, Barry; Fox, Mark; Gregersen, Hans
2018-06-10
Esophageal diseases are highly prevalent and carry significant socioeconomic burden. Despite the apparently simple function of the esophagus, we still struggle to better understand its physiology and pathophysiology. The assessment of large data sets and application of multiscale mathematical organ models have gained attention as part of the Physiome Project. This has long been recognized in cardiology but has only recently gained attention for the gastrointestinal(GI) tract. The term "esophagiome" implies a holistic assessment of esophageal function, from cellular and muscle physiology to the mechanical responses that transport and mix fluid contents. These anatomical, mechanical, and physiological models underlie the development of a "virtual esophagus" modeling framework to characterize and analyze function and disease. Functional models incorporate anatomical details with sensory-motor responses, especially related to biomechanical functions such as bolus transport. Our review builds on previous reviews and focuses on assessment of detailed anatomical and geometric data using advanced imaging technology for evaluation of gastro-esophageal reflux disease (GERD), and on esophageal mechanophysiology assessed using technologies that distend the esophagus. Integration of mechanics- and physiology-based analysis is a useful characteristic of the esophagiome. Experimental data on pressures and geometric characteristics are useful for the validation of mathematical and computer models of the esophagus that may provide predictions of novel endoscopic, surgical, and pharmaceutical treatment options. © 2018 New York Academy of Sciences.
A Simple and Affordable TTL Processor for the Classroom
ERIC Educational Resources Information Center
Feinberg, Dave
2007-01-01
This paper presents a simple 4 bit computer processor design that may be built using TTL chips for less than $65. In addition to describing the processor itself in detail, we discuss our experience using the laboratory kit and its associated machine instruction set to teach computer architecture to high school students. (Contains 3 figures and 5…
Sound propagation from a simple source in a wind tunnel
NASA Technical Reports Server (NTRS)
Cole, J. E., III
1975-01-01
The nature of the acoustic field of a simple source in a wind tunnel under flow conditions was examined theoretically and experimentally. The motivation of the study was to establish aspects of the theoretical framework for interpreting acoustic data taken in wind tunnels using in-flow microphones. Three distinct investigations were performed and are described in detail.
Layer-Based Approach for Image Pair Fusion.
Son, Chang-Hwan; Zhang, Xiao-Ping
2016-04-20
Recently, image pairs, such as noisy and blurred images or infrared and noisy images, have been considered as a solution to provide high-quality photographs under low lighting conditions. In this paper, a new method for decomposing the image pairs into two layers, i.e., the base layer and the detail layer, is proposed for image pair fusion. In the case of infrared and noisy images, simple naive fusion leads to unsatisfactory results due to the discrepancies in brightness and image structures between the image pair. To address this problem, a local contrast-preserving conversion method is first proposed to create a new base layer of the infrared image, which can have visual appearance similar to another base layer such as the denoised noisy image. Then, a new way of designing three types of detail layers from the given noisy and infrared images is presented. To estimate the noise-free and unknown detail layer from the three designed detail layers, the optimization framework is modeled with residual-based sparsity and patch redundancy priors. To better suppress the noise, an iterative approach that updates the detail layer of the noisy image is adopted via a feedback loop. This proposed layer-based method can also be applied to fuse another noisy and blurred image pair. The experimental results show that the proposed method is effective for solving the image pair fusion problem.
Scavenging and recombination kinetics in a radiation spur: The successive ordered scavenging events
NASA Astrophysics Data System (ADS)
Al-Samra, Eyad H.; Green, Nicholas J. B.
2018-03-01
This study describes stochastic models to investigate the successive ordered scavenging events in a spur of four radicals, a model system based on a radiation spur. Three simulation models have been developed to obtain the probabilities of the ordered scavenging events: (i) a Monte Carlo random flight (RF) model, (ii) hybrid simulations in which the reaction rate coefficient is used to generate scavenging times for the radicals and (iii) the independent reaction times (IRT) method. The results of these simulations are found to be in agreement with one another. In addition, a detailed master equation treatment is also presented, and used to extract simulated rate coefficients of the ordered scavenging reactions from the RF simulations. These rate coefficients are transient; those obtained for the subsequent reactions are effectively equal, and in reasonable agreement with the simple correction for competition effects that has recently been proposed.
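The independent-reaction-times idea mentioned above can be illustrated with a toy version: draw an independent exponential scavenging time for each radical and sort them to obtain the ordered events. The identical rate for all four radicals and all parameter values are our own illustrative choices, not those of the paper.

```python
import random

def ordered_scavenging_times(n_radicals=4, rate=1.0, trials=5000, seed=7):
    """IRT-style sketch: sample an independent exponential scavenging time for
    each radical, sort them, and average over trials to estimate the mean time
    of the k-th ordered scavenging event."""
    rng = random.Random(seed)
    means = [0.0] * n_radicals
    for _ in range(trials):
        times = sorted(rng.expovariate(rate) for _ in range(n_radicals))
        for k, t in enumerate(times):
            means[k] += t / trials
    return means
```

For unit rate and four radicals, the exact order-statistic means are 1/4, 1/4 + 1/3, 1/4 + 1/3 + 1/2 and 1/4 + 1/3 + 1/2 + 1, which the Monte Carlo estimates approach.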
2017-01-01
We study the G-strand equations that are extensions of the classical chiral model of particle physics in the particular setting of broken symmetries described by symmetric spaces. These equations are simple field theory models whose configuration space is a Lie group, or in this case a symmetric space. In this class of systems, we derive several models that are completely integrable on a finite-dimensional Lie group G, and we treat in more detail examples with symmetric space SU(2)/S1 and SO(4)/SO(3). The latter model simplifies to an apparently new integrable nine-dimensional system. We also study the G-strands on the infinite-dimensional group of diffeomorphisms, which gives, together with the Sobolev norm, systems of 1+2 Camassa–Holm equations. The solutions of these equations on the complementary space related to the Witt algebra decomposition are the odd-function solutions.
Queuing theory models for computer networks
NASA Technical Reports Server (NTRS)
Galant, David C.
1989-01-01
A set of simple queuing theory models that can predict the average response of a network of computers to a given traffic load has been implemented using a spreadsheet. Because the models omit fine detail about the network traffic rates, traffic patterns, and the hardware used to implement the networks, the impact of variations in traffic patterns and intensities, channel capacities, and message protocols can be assessed quickly. A sample use of the models applied to a realistic problem is included in appendix A, and appendix B provides a glossary of terms used in this paper. The Ames Research Center computer communication network is an evolving network of local area networks (LANs) connected via gateways and high-speed backbone communication channels. Intelligent planning of expansion and improvement requires understanding the behavior of the individual LANs as well as the collection of networks as a whole.
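A representative building block for such spreadsheet-level network models is the M/M/1 queue; the functions below are a generic sketch of that standard result, not the report's actual formulas.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time (queueing delay plus service) of an M/M/1 queue:
    T = 1 / (mu - lambda), valid only while utilization lambda/mu < 1."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: utilization must be below 1")
    return 1.0 / (service_rate - arrival_rate)

def mm1_mean_in_system(arrival_rate, service_rate):
    """Little's law: mean number of messages in the system, N = lambda * T."""
    return arrival_rate * mm1_response_time(arrival_rate, service_rate)

# Example: 50 messages/s offered to a channel serving 100 messages/s
# gives a 20 ms mean response time and one message in the system on average.
t = mm1_response_time(50.0, 100.0)
```

Sweeping `arrival_rate` in a spreadsheet column against these closed forms is precisely the kind of what-if analysis the paper describes.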
Metallurgical Plant Optimization Through the use of Flowsheet Simulation Modelling
NASA Astrophysics Data System (ADS)
Kennedy, Mark William
Modern metallurgical plants typically have complex flowsheets and operate on a continuous basis. Real time interactions within such processes can be complex, and the impacts of streams such as recycles on process efficiency and stability can be highly unexpected prior to actual operation. Current desktop computing power, combined with state-of-the-art flowsheet simulation software like Metsim, allows for thorough analysis of designs to explore the interaction between operating rate, heat and mass balances, and in particular the potential negative impact of recycles. Using plant information systems, it is possible to combine real plant data with simple steady state models, using dynamic data exchange links to allow for near real time de-bottlenecking of operations. Accurate analytical results can also be combined with detailed unit operations models to allow for feed-forward model-based control. This paper explores some examples of the application of Metsim to real world engineering and plant operational issues.
NASA Technical Reports Server (NTRS)
Dash, S. M.; Pergament, H. S.
1978-01-01
The development of a computational model (BOAT) for calculating nearfield jet entrainment, and its incorporation in an existing methodology for the prediction of nozzle boattail pressures, is discussed. The model accounts for the detailed turbulence and thermochemical processes occurring in the mixing layer formed between a jet exhaust and surrounding external stream while interfacing with the inviscid exhaust and external flowfield regions in an overlaid, interactive manner. The ability of the BOAT model to analyze simple free shear flows is assessed by comparisons with fundamental laboratory data. The overlaid procedure for incorporating variable pressures into BOAT and the entrainment correction employed to yield an effective plume boundary for the inviscid external flow are demonstrated. This is accomplished via application of BOAT in conjunction with the codes comprising the NASA/LRC patched viscous/inviscid methodology for determining nozzle boattail drag for subsonic/transonic external flows.
NASA Astrophysics Data System (ADS)
Betta, R. M.; Peres, G.; Reale, F.; Serio, S.
2001-12-01
We revisit a well-studied solar flare whose X-ray emission originating from a simple loop structure was observed by most of the instruments on board SMM on November 12, 1980. The X-ray emission of this flare, as observed with the XRP, was successfully modeled previously. Here we include a detailed modeling of the transition region and we compare the hydrodynamic results with the UVSP observations in two EUV lines, measured in areas smaller than the XRP rasters, covering only some portions of the flaring loop (the top and the foot-points). The single loop hydrodynamic model, which fits well the evolution of coronal lines (those observed with the XRP and the Fe XXI 1354.1 Å line observed with the UVSP), fails to model the flux level and evolution of the O V 1371.3 Å line.
A scalable multi-process model of root nitrogen uptake
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Anthony P.
2018-02-28
This article is a Commentary on McMurtrie & Näsholm et al., 218: 119–130. Roots are represented in Terrestrial Ecosystem Models (TEMs) in much less detail than their equivalent above-ground resource acquisition organs – leaves. Often roots in TEMs are simply resource sinks, and below-ground resource acquisition is commonly simulated without any relationship to root dynamics at all, though there are exceptions (e.g. Zaehle & Friend, 2010). The representation of roots as carbon (C) and nitrogen (N) sinks without complementary source functions can lead to strange sensitivities in a model. For example, reducing root lifespans in the Community Land Model (version 4.5) increases plant production as N cycles more rapidly through the ecosystem without loss of plant function (D. M. Ricciuto, unpublished). The primary reasons for the poorer representation of roots compared with leaves in TEMs are three-fold: (1) data are much harder won, especially in the field; (2) no simple mechanistic models of root function are available; and (3) scaling root function from an individual root to a root system lags behind methods of scaling leaf function to a canopy. Here in this issue of New Phytologist, McMurtrie & Näsholm (pp. 119–130) develop a relatively simple model for root N uptake that mechanistically accounts for processes of N supply (mineralization and transport by diffusion and mass flow) and N demand (root uptake and microbial immobilization).
Approaches to the structural modelling of insect wings.
Wootton, R J; Herbert, R C; Young, P G; Evans, K E
2003-01-01
Insect wings lack internal muscles, and the orderly, necessary deformations which they undergo in flight and folding are in part remotely controlled, in part encoded in their structure. This factor is crucial in understanding their complex, extremely varied morphology. Models have proved particularly useful in clarifying the facilitation and control of wing deformation. Their development has followed a logical sequence from conceptual models through physical and simple analytical to numerical models. All have value provided their limitations are realized and constant comparisons made with the properties and mechanical behaviour of real wings. Numerical modelling by the finite element method is by far the most time-consuming approach, but has real potential in analysing the adaptive significance of structural details and interpreting evolutionary trends. Published examples are used to review the strengths and weaknesses of each category of model, and a summary is given of new work using finite element modelling to investigate the vibration properties and response to impact of hawkmoth wings.
GCSS/WGNE Pacific Cross-section Intercomparison: Tropical and Subtropical Cloud Transitions
NASA Astrophysics Data System (ADS)
Teixeira, J.
2008-12-01
In this presentation I will discuss the role of the GEWEX Cloud Systems Study (GCSS) working groups in paving the way for substantial improvements in cloud parameterization in weather and climate models. The GCSS/WGNE Pacific Cross-section Intercomparison (GPCI) is an extension of GCSS and is a different type of model evaluation where climate models are analyzed along a Pacific Ocean transect from California to the equator. This approach aims at complementing the more traditional efforts in GCSS by providing a simple framework for the evaluation of models that encompasses several fundamental cloud regimes such as stratocumulus, shallow cumulus and deep cumulus, as well as the transitions between them. Currently twenty-four climate and weather prediction models are participating in GPCI. We will present results of the comparison between models and recent satellite data. In particular, we will explore in detail the potential of the Atmospheric Infrared Sounder (AIRS) and CloudSat data for the evaluation of the representation of clouds and convection in climate models.
Transferable atomistic model to describe the energetics of zirconia
NASA Astrophysics Data System (ADS)
Wilson, Mark; Schönberger, Uwe; Finnis, Michael W.
1996-10-01
We have investigated the energies of a number of phases of ZrO2 using models of an increasing degree of sophistication: the simple ionic model, the polarizable ion model, the compressible ion model, and finally a model including quadrupole polarizability of the oxygen ions. The three structures which are observed with increasing temperatures are monoclinic, tetragonal, and cubic (fluorite). Besides these we have studied some hypothetical structures which certain potentials erroneously predict or which occur in other oxides with this stoichiometry, e.g., the α-PbO2 structure and rutile. We have also performed ab initio density functional calculations with the full-potential linear combination of muffin-tin orbitals method to investigate the cubic-tetragonal distortion. A detailed comparison is made between the results using classical potentials, the experimental data, and our own and other ab initio results. The factors which stabilize the various structures are analyzed. We find the only genuinely transferable model is the one including compressible ions and anion polarizability to the quadrupole level.
Hass, Joachim; Hertäg, Loreen; Durstewitz, Daniel
2016-01-01
The prefrontal cortex is centrally involved in a wide range of cognitive functions and their impairment in psychiatric disorders. Yet, the computational principles that govern the dynamics of prefrontal neural networks, and link their physiological, biochemical and anatomical properties to cognitive functions, are not well understood. Computational models can help to bridge the gap between these different levels of description, provided they are sufficiently constrained by experimental data and capable of predicting key properties of the intact cortex. Here, we present a detailed network model of the prefrontal cortex, based on a simple computationally efficient single neuron model (simpAdEx), with all parameters derived from in vitro electrophysiological and anatomical data. Without additional tuning, this model could be shown to quantitatively reproduce a wide range of measures from in vivo electrophysiological recordings, to a degree where simulated and experimentally observed activities were statistically indistinguishable. These measures include spike train statistics, membrane potential fluctuations, local field potentials, and the transmission of transient stimulus information across layers. We further demonstrate that model predictions are robust against moderate changes in key parameters, and that synaptic heterogeneity is a crucial ingredient to the quantitative reproduction of in vivo-like electrophysiological behavior. Thus, we have produced a physiologically highly valid, in a quantitative sense, yet computationally efficient PFC network model, which helped to identify key properties underlying spike time dynamics as observed in vivo, and can be harvested for in-depth investigation of the links between physiology and cognition.
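The adaptive exponential integrate-and-fire (AdEx) family on which simpAdEx is based can be integrated with a simple forward-Euler scheme. The sketch below uses generic textbook-scale parameter values, not the simpAdEx parameters fitted from in vitro data.

```python
import math

def simulate_adex(I=0.8e-9, T=0.3, dt=1e-5):
    """Forward-Euler sketch of an AdEx neuron driven by a constant current I (A).
    Membrane: C dV/dt = -gL(V-EL) + gL*DT*exp((V-VT)/DT) - w + I
    Adaptation: tau_w dw/dt = a(V-EL) - w; on a spike: V -> Vreset, w -> w + b.
    All parameter values below are illustrative defaults."""
    C, gL, EL = 200e-12, 10e-9, -70e-3       # capacitance, leak, rest
    VT, DT = -50e-3, 2e-3                    # threshold, slope factor
    a, b, tau_w = 2e-9, 60e-12, 120e-3       # adaptation parameters
    Vreset, Vspike = -58e-3, 0.0             # reset and detection voltages
    V, w, t = EL, 0.0, 0.0
    spikes = []
    while t < T:
        dV = (-gL * (V - EL) + gL * DT * math.exp((V - VT) / DT) - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vspike:          # spike detected: reset and increment adaptation
            V = Vreset
            w += b
            spikes.append(t)
        t += dt
    return spikes
```

With a supra-rheobase current the model fires repetitively, with inter-spike intervals lengthening as the adaptation current `w` builds up.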
From 6D superconformal field theories to dynamic gauged linear sigma models
NASA Astrophysics Data System (ADS)
Apruzzi, Fabio; Hassler, Falk; Heckman, Jonathan J.; Melnikov, Ilarion V.
2017-09-01
Compactifications of six-dimensional (6D) superconformal field theories (SCFTs) on four-manifolds generate a large class of novel two-dimensional (2D) quantum field theories. We consider in detail the case of the rank-one simple non-Higgsable cluster 6D SCFTs. On the tensor branch of these theories, the gauge group is simple and there are no matter fields. For compactifications on suitably chosen Kähler surfaces, we present evidence that this provides a method to realize 2D SCFTs with N = (0, 2) supersymmetry. In particular, we find that reduction on the tensor branch of the 6D SCFT yields a description of the same 2D fixed point that is described in the UV by a gauged linear sigma model (GLSM) in which the parameters are promoted to dynamical fields, that is, a "dynamic GLSM" (DGLSM). Consistency of the model requires the DGLSM to be coupled to additional non-Lagrangian sectors obtained from reduction of the antichiral two-form of the 6D theory. These extra sectors include both chiral and antichiral currents, as well as spacetime filling noncritical strings of the 6D theory. For each candidate 2D SCFT, we also extract the left- and right-moving central charges in terms of data of the 6D SCFT and the compactification manifold.
Modelling of ‘sub-atomic’ contrast resulting from back-bonding on Si(111)-7×7
Jarvis, Samuel P; Rashid, Mohammad A
2016-01-01
It has recently been shown that ‘sub-atomic’ contrast can be observed during NC-AFM imaging of the Si(111)-7×7 substrate with a passivated tip, resulting in triangular shaped atoms [Sweetman et al. Nano Lett. 2014, 14, 2265]. The symmetry of the features, and the well-established nature of the dangling bond structure of the silicon adatom means that in this instance the contrast cannot arise from the orbital structure of the atoms, and it was suggested by simple symmetry arguments that the contrast could only arise from the backbonding symmetry of the surface adatoms. However, no modelling of the system has been performed in order to understand the precise origin of the contrast. In this paper we provide a detailed explanation for ‘sub-atomic’ contrast observed on Si(111)-7×7 using a simple model based on Lennard-Jones potentials, coupled with a flexible tip, as proposed by Hapala et al. [Phys. Rev. B 2014, 90, 085421] in the context of interpreting sub-molecular contrast. Our results show a striking similarity to experimental results, and demonstrate how ‘sub-atomic’ contrast can arise from a flexible tip exploring an asymmetric potential created due to the positioning of the surrounding surface atoms.
Forecasting Chikungunya spread in the Americas via data-driven empirical approaches.
Escobar, Luis E; Qiao, Huijie; Peterson, A Townsend
2016-02-29
Chikungunya virus (CHIKV) is endemic to Africa and Asia, but the Asian genotype invaded the Americas in 2013. The fast increase of human infections in the American epidemic emphasized the urgency of developing detailed predictions of case numbers and the potential geographic spread of this disease. We developed a simple model incorporating cases generated locally and cases imported from other countries, and forecasted transmission hotspots at the level of countries and at finer scales, in terms of ecological features. By late January 2015, >1.2 M CHIKV cases were reported from the Americas, with country-level prevalences between nil and more than 20 %. In the early stages of the epidemic, exponential growth in case numbers was common; later, however, poor and uneven reporting became more common, in a phenomenon we term "surveillance fatigue." Economic activity of countries was not associated with prevalence, but diverse social factors may be linked to surveillance effort and reporting. Our model predictions were initially quite inaccurate, but improved markedly as more data accumulated within the Americas. The data-driven methodology explored in this study provides an opportunity to generate descriptive and predictive information on spread of emerging diseases in the short-term under simple models based on open-access tools and data that can inform early-warning systems and public health intelligence.
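The idea of combining locally generated and imported cases can be captured by a toy linear recursion; this is a hedged illustration of that general structure, not the authors' actual forecasting model.

```python
def forecast_cases(c0, r_local, imports, weeks):
    """Toy case-count recursion in the spirit of 'local plus imported' models:
    c_{t+1} = r_local * c_t + imports_t, where r_local is the local
    reproduction factor per time step and imports_t the externally
    introduced cases. Purely illustrative."""
    cases = [float(c0)]
    for t in range(weeks):
        cases.append(r_local * cases[-1] + imports[t])
    return cases

# With r_local = 2 and no importation, cases double each step.
trajectory = forecast_cases(10, 2.0, [0.0, 0.0, 0.0], 3)
```

Fitting `r_local` as reports accumulate mirrors how the paper's forecasts improved once more data from the Americas became available.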
NASA Technical Reports Server (NTRS)
Mosher, Marianne
1990-01-01
The principal objective is to assess the adequacy of linear acoustic theory with an impedance wall boundary condition to model the detailed sound field of an acoustic source in a duct. Measured and calculated sound fields of a simple acoustic source in a rectangular concrete duct, lined with foam on the walls and with anechoic end terminations, are compared. Measurement of acoustic pressure at twelve wave numbers provides variation in frequency and in the absorption characteristics of the duct walls. Close to the source, where the interference of wall reflections is minimal, correlation is very good. Away from the source, correlation degrades, especially for the lower frequencies. Sensitivity studies show little effect on the predicted results for changes in impedance boundary condition values, source location, measurement location, temperature, and source model for variations spanning the expected measurement error.
Simulation model for wind energy storage systems. Volume II. Operation manual. [SIMWEST code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren, A.W.; Edsinger, R.W.; Burroughs, J.D.
1977-08-01
The effort developed a comprehensive computer program for the modeling of wind energy/storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel and pneumatic). An acronym for the program is SIMWEST (Simulation Model for Wind Energy Storage). The level of detail of SIMWEST is consistent with a role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. Volume II, the SIMWEST operation manual, describes the usage of the SIMWEST program, the design of the library components, and a number of simple example simulations intended to familiarize the user with the program's operation. Volume II also contains a listing of each SIMWEST library subroutine.
Electron impact ionization of atomic targets at relativistic energies
NASA Astrophysics Data System (ADS)
Uddin, M. A.; Basak, A. K.; Saha, B. C.
2009-05-01
The huge demand for, and scarcity of, electron impact ionization cross sections (EIICS), which are essential not only in modeling but also in basic research, can best be met by simple-to-use analytical models [1] that are sufficiently accurate and provide fast generation of EIICS data over a wide domain. We report a few such models and compare their predictive powers in terms of a few adjustable parameters. Details of our results will be presented at the conference. [1] A. K. F. Haque, M. A. Uddin, A. K. Basak, K. R. Karim, B. C. Saha, and F. B. Malik, Phys. Scr. 74, 377 (2006); Phys. Rev. A 73, 052703; M. A. R. Patoary, M. A. Uddin, A. K. F. Haque, M. Shahjahan, A. K. Basak, M. R. Talukdar and B. C. Saha, Int. J. Quan. Chem. (in press). Supported by NSF CREST.
NASA Astrophysics Data System (ADS)
Skorobogatiy, Maksim; Sadasivan, Jayesh; Guerboukha, Hichem
2018-05-01
In this paper, we first discuss the main types of noise in a typical pump-probe system, and then focus specifically on terahertz time domain spectroscopy (THz-TDS) setups. We then introduce four statistical models for the noisy pulses obtained in such systems, and detail rigorous mathematical algorithms to de-noise such traces, find the proper averages and characterise various types of experimental noise. Finally, we perform a comparative analysis of the performance, advantages and limitations of the algorithms by testing them on the experimental data collected using a particular THz-TDS system available in our laboratories. We conclude that using advanced statistical models for trace averaging results in the fitting errors that are significantly smaller than those obtained when only a simple statistical average is used.
NASA Astrophysics Data System (ADS)
Almbladh, C.-O.; Morales, A. L.
1989-02-01
Auger CVV spectra of simple metals are generally believed to be well described by one-electron-like theories in the bulk which account for matrix elements and, in some cases, also static core-hole screening effects. We present here detailed calculations on Li, Be, Na, Mg, and Al using self-consistent bulk wave functions and proper matrix elements. The resulting spectra differ markedly from experiment and peak at too low energies. To explain this discrepancy we investigate effects of the surface and dynamical effects of the sudden disappearance of the core hole in the final state. To study core-hole effects we solve the Mahan-Nozières-De Dominicis (MND) model numerically over the entire band. The core-hole potential and other parameters in the MND model are determined by self-consistent calculations of the core-hole impurity. The results are compared with simpler approximations based on the final-state rule due to von Barth and Grossmann. To study surface and mean-free-path effects we perform slab calculations for Al but use a simpler infinite-barrier model in the remaining cases. The model reproduces the slab spectra for Al with very good accuracy. In all cases investigated either the effects of the surface or the effects of the core hole give important modifications and a much improved agreement with experiment.
NASA Astrophysics Data System (ADS)
Adeline, K.; Ustin, S.; Roth, K. L.; Huesca Martinez, M.; Schaaf, C.; Baldocchi, D. D.; Gastellu-Etchegorry, J. P.
2015-12-01
The assessment of canopy biochemical diversity is critical for monitoring ecological and physiological functioning and for mapping vegetation change dynamics in relation to environmental resources. In oak woodland savannas, for example, these dynamics are mainly driven by water constraints. Inversion using radiative transfer theory is one method for estimating canopy biochemistry. However, this approach generally only considers relatively simple scenarios to model the canopy, due to the difficulty of encompassing stand heterogeneity with spatial and temporal consistency. In this research, we compared three modeling strategies for estimating canopy biochemistry variables (i.e. chlorophyll, carotenoids, water, dry matter) by coupling the PROSPECT (leaf level) and DART (canopy level) models: (i) a simple forest representation made of ellipsoid trees, and two representations taking into account the tree species and structural composition and the landscape spatial pattern, using (ii) geometric tree crown shapes and (iii) detailed tree crown and wood structure retrieved from terrestrial lidar acquisitions. AVIRIS 18 m remote sensing data are up-scaled to simulate HyspIRI 30 m images. Both spatial resolutions are validated by measurements acquired during 2013-2014 field campaigns (cover/tree inventory, LAI, leaf sampling, optical measures). The results outline the trade-off between accurate and abstract canopy modeling for inversion purposes and may provide perspectives to assess the impact of the California drought with multi-temporal monitoring of canopy biochemistry traits.
Fagan, William F; Lutscher, Frithjof
2006-04-01
Spatially explicit models for populations are often difficult to tackle mathematically and, in addition, require detailed data on individual movement behavior that are not easily obtained. An approximation known as the "average dispersal success" provides a tool for converting complex models, which may include stage structure and a mechanistic description of dispersal, into a simple matrix model. This simpler matrix model has two key advantages. First, it is easier to parameterize from the types of empirical data typically available to conservation biologists, such as survivorship, fecundity, and the fraction of juveniles produced in a study area that also recruit within the study area. Second, it is more amenable to theoretical investigation. Here, we use the average dispersal success approximation to develop estimates of the critical reserve size for systems comprising single patches or simple metapopulations. The quantitative approach can be used for both plants and animals; however, to provide a concrete example of the technique's utility, we focus on a special case pertinent to animals. Specifically, for territorial animals, we can characterize such an estimate of minimum viable habitat area in terms of the number of home ranges that the reserve contains. Consequently, the average dispersal success approximation provides a means through which home range size, natal dispersal distances, and metapopulation dynamics can be linked to reserve design. We briefly illustrate the approach using empirical data for the swift fox (Vulpes velox).
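A minimal version of the resulting matrix model can be sketched as a two-stage projection matrix in which fecundity is discounted by an average dispersal success fraction S; the structure and parameter names are a generic illustration under that assumption, not the paper's exact model.

```python
def stage_matrix_growth(s_j, s_a, f, dispersal_success, iters=200):
    """Dominant eigenvalue (population growth rate) of a 2-stage matrix model
        A = [[0, S*f], [s_j, s_a]]
    where s_j, s_a are juvenile/adult survival, f is fecundity, and S is the
    average dispersal success discounting recruitment. Computed by power
    iteration with max-norm normalization (valid for this nonnegative,
    primitive matrix)."""
    a12 = dispersal_success * f
    x = [1.0, 1.0]
    lam = 0.0
    for _ in range(iters):
        y = [a12 * x[1], s_j * x[0] + s_a * x[1]]
        lam = max(abs(y[0]), abs(y[1]))
        x = [y[0] / lam, y[1] / lam]
    return lam
```

Growth falls below replacement (eigenvalue below 1) once the dispersal success fraction, and hence effective recruitment, drops far enough, which is the mechanism linking reserve size to viability.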
Dynamics of non-Markovian exclusion processes
NASA Astrophysics Data System (ADS)
Khoromskaia, Diana; Harris, Rosemary J.; Grosskinsky, Stefan
2014-12-01
Driven diffusive systems are often used as simple discrete models of collective transport phenomena in physics, biology or social sciences. Restricting attention to one-dimensional geometries, the asymmetric simple exclusion process (ASEP) plays a paradigmatic role to describe noise-activated driven motion of entities subject to an excluded volume interaction and many variants have been studied in recent years. While in the standard ASEP the noise is Poissonian and the process is therefore Markovian, in many applications the statistics of the activating noise has a non-standard distribution with possible memory effects resulting from internal degrees of freedom or external sources. This leads to temporal correlations and can significantly affect the shape of the current-density relation as has been studied recently for a number of scenarios. In this paper we report a general framework to derive the fundamental diagram of ASEPs driven by non-Poissonian noise by using effectively only two simple quantities, viz., the mean residual lifetime of the jump distribution and a suitably defined temporal correlation length. We corroborate our results by detailed numerical studies for various noise statistics under periodic boundary conditions and discuss how our approach can be applied to more general driven diffusive systems.
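For reference, in the standard Markovian TASEP on a ring the stationary current-density relation is J(ρ) = ρ(1 − ρ); the non-Poissonian framework discussed here deforms this curve via the mean residual lifetime and a temporal correlation length. A quick seeded simulation (a sketch of the Markovian baseline only, not of the paper's generalization) recovers that diagram:

```python
import random

def tasep_current(n_sites, n_particles, n_steps, seed=0):
    # Random-sequential-update TASEP on a ring: returns the measured
    # current, i.e. accepted hops per attempted move.
    rng = random.Random(seed)
    occ = [True] * n_particles + [False] * (n_sites - n_particles)
    rng.shuffle(occ)  # uniform start; on a ring this is already stationary
    hops = 0
    for _ in range(n_steps):
        i = rng.randrange(n_sites)
        j = (i + 1) % n_sites
        if occ[i] and not occ[j]:
            occ[i], occ[j] = False, True
            hops += 1
    return hops / n_steps

# At density 1/2 the stationary current is rho*(1-rho) = 0.25,
# up to O(1/N) finite-size corrections.
```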
Probing Gamma-ray Emission of Geminga and Vela with Non-stationary Models
NASA Astrophysics Data System (ADS)
Chai, Yating; Cheng, Kwong-Sang; Takata, Jumpei
2016-06-01
It is generally believed that the high energy emissions from isolated pulsars are produced by relativistic electrons/positrons accelerated in outer magnetospheric accelerators (outergaps) via a curvature radiation mechanism, which yields a simple exponential cut-off spectrum. However, many gamma-ray pulsars detected by the Fermi LAT (Large Area Telescope) cannot be fitted by a simple exponential cut-off spectrum; a sub-exponential cut-off is more appropriate. It has been proposed that realistic outergaps are non-stationary, and that the observed spectrum is a superposition of different stationary states controlled by the currents injected from the inner and outer boundaries. The Vela and Geminga pulsars have the largest fluxes among all observed targets, which allows us to carry out very detailed phase-resolved spectral analysis. We have divided the Vela and Geminga pulsars into 19 (the off pulse of Vela was not included) and 33 phase bins, respectively. We find that most phase-resolved spectra still cannot be fitted by a simple exponential spectrum: in fact, a sub-exponential spectrum is necessary. We conclude that non-stationary states exist even down to the very fine phase bins.
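The two spectral shapes being compared can be written as dN/dE ∝ E^(−Γ) exp[−(E/Ec)^b], with b = 1 for a simple exponential cut-off and b < 1 for a sub-exponential one. A small sketch (parameter values below are illustrative, not fitted to Fermi LAT data):

```python
import math

def cutoff_spectrum(E, gamma, E_c, b):
    # dN/dE (arbitrary normalization): power law with a (sub-)exponential
    # cut-off; b = 1 is the simple exponential case, b < 1 sub-exponential.
    return E ** (-gamma) * math.exp(-((E / E_c) ** b))
```

Above the cut-off energy the sub-exponential (b < 1) curve lies above the simple exponential one, which is why it better matches the slow spectral roll-off these pulsars show.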
NASA Astrophysics Data System (ADS)
Muthukrishnan, S.; Harbor, J.
2001-12-01
Hydrological studies are a significant part of every engineering and development project, and of geological studies, undertaken to assess and understand the interactions between hydrology and the environment. Such studies are generally conducted both before a project begins and after it is completed, so that a comprehensive analysis can be made of the project's impact on the local and regional hydrology of the area. A good understanding of the chain of relationships that form the hydro-eco-biological and environmental cycle can be of immense help in maintaining the natural balance as we work towards exploration and exploitation of natural resources as well as urbanization of undeveloped land. Rainfall-runoff modeling techniques have been of great use here for decades, since they provide fast and efficient means of analyzing the vast amount of data that is gathered. Though process-based, detailed models are better than simple models, the latter are used more often due to their simplicity, ease of use, and the easy availability of the data needed to run them. The Curve Number (CN) method developed by the United States Department of Agriculture (USDA) is one of the most widely used hydrologic modeling tools in the US, and has earned worldwide acceptance as a practical method for evaluating the effects of land use changes on the hydrology of an area. The Long-Term Hydrological Impact Assessment (L-THIA) model is a basic, CN-based, user-oriented model that has gained popularity amongst watershed planners because of its reliance on readily available data, because the model is easy to use (http://www.ecn.purdue.edu/runoff), and because it produces results geared to the general information needs of planners. The L-THIA model was initially developed to study the relative long-term hydrologic impacts of different land use (past/current/future) scenarios, and it has been successful in meeting this goal.
However, one weakness of L-THIA, as of other models that focus strictly on surface runoff, is that many users are interested in runoff predictions that match observed flow in streams and rivers. To make L-THIA more useful to planners and engineers alike, a simple, long-term calibration method based on linear regression of L-THIA-predicted against observed surface runoff has been developed and tested here. The results from Little Eagle Creek (LEC) in Indiana show that such calibrations are successful and valuable. The method can also be used to calibrate other simple rainfall-runoff models.
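The CN method referenced above converts a rainfall depth into a runoff depth using a single tabulated parameter. A minimal sketch of the standard SCS equations (depths in inches; the 0.2·S initial-abstraction ratio is the conventional default, not necessarily what L-THIA exposes to the user):

```python
def scs_runoff(P, CN):
    # SCS Curve Number method: runoff depth Q from rainfall depth P (inches).
    S = 1000.0 / CN - 10.0   # potential maximum retention after runoff begins
    Ia = 0.2 * S             # initial abstraction (conventional ratio)
    if P <= Ia:
        return 0.0           # all rainfall abstracted: no runoff
    return (P - Ia) ** 2 / (P - Ia + S)
```

For example, 3 inches of rain on land with CN = 80 gives S = 2.5, Ia = 0.5, and a runoff depth of 1.25 inches; a long-term model such as L-THIA applies this event relation over decades of daily rainfall records.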
Corresponding-states behavior of an ionic model fluid with variable dispersion interactions
NASA Astrophysics Data System (ADS)
Weiss, Volker C.
2016-06-01
Guggenheim's corresponding-states approach for simple fluids leads to a remarkably universal representation of their thermophysical properties. For more complex fluids, such as polar or ionic ones, deviations from this type of behavior are to be expected, thereby supplying us with valuable information about the thermodynamic consequences of the interaction details in fluids. Here, the gradual transition of a simple fluid to an ionic one is studied by varying the relative strength of the dispersion interactions compared to the electrostatic interactions among the charged particles. In addition to the effects on the reduced surface tension that were reported earlier [F. Leroy and V. C. Weiss, J. Chem. Phys. 134, 094703 (2011)], we address the shape of the coexistence curve and focus on properties that are related to and derived from the vapor pressure. These quantities include the enthalpy and entropy of vaporization, the boiling point, and the critical compressibility factor Zc. For all of these properties, the crossover from simple to characteristically ionic fluid is seen once the dispersive attraction drops below 20%-40% of the electrostatic attraction (as measured for two particles at contact). Below this threshold, ionic fluids display characteristically low values of Zc as well as large Guggenheim and Guldberg ratios for the reduced enthalpy of vaporization and the reduced boiling point, respectively. The coexistence curves are wider and more skewed than those for simple fluids. The results for the ionic model fluid with variable dispersion interactions improve our understanding of the behavior of real ionic fluids, such as inorganic molten salts and room temperature ionic liquids, by gauging the importance of different types of interactions for thermodynamic properties.
Colloidal membranes: The rich confluence of geometry and liquid crystals
NASA Astrophysics Data System (ADS)
Kaplan, Cihan Nadir
A simple and experimentally realizable model system of chiral symmetry breaking is liquid-crystalline monolayers of aligned, identical hard rods. In these materials, tuning the chirality at the molecular level affects the geometry at the system level, thereby inducing a myriad of morphological transitions. This thesis presents theoretical studies motivated by the rich phenomenology of these colloidal monolayers. High molecular chirality leads to assemblages of rods exhibiting macroscopic handedness. In the first part we consider one such geometry, twisted ribbons, which are minimal surfaces bounded by a double helix. By employing a theoretical approach that combines liquid-crystalline order with the preferred shape, we focus on the phase transition from simple flat monolayers to these twisted structures. In these monolayers, regions of broken chiral symmetry nucleate at the interfaces, as in a chiral smectic A sample. The second part focuses on the detailed structure and thermodynamic stability of two types of observed interfaces: the monolayer edge and domain walls in simple flat monolayers. Both the edge and "twist-walls" are quasi-one-dimensional bands of molecular twist deformations dictated by local chiral interactions and surface energy considerations. We develop a unified theory of these interfaces by utilizing the de Gennes framework accompanied by appropriate surface energy terms. The last part turns to colloidal "cookies", which form in mixtures of rods with opposite handedness. These elegant structures are essentially flat monolayers surrounded by an array of local, three-dimensional cusp defects. We reveal the thermodynamic and structural characteristics of cookies. Furthermore, cookies provide us with a simple relation for determining the intrinsic curvature modulus of our model system, an important constant associated with topological properties of membranes. Our results may have an impact on a broader class of soft thin films.
Corresponding-states behavior of an ionic model fluid with variable dispersion interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weiss, Volker C., E-mail: volker.weiss@bccms.uni-bremen.de
2016-06-21
Guggenheim's corresponding-states approach for simple fluids leads to a remarkably universal representation of their thermophysical properties. For more complex fluids, such as polar or ionic ones, deviations from this type of behavior are to be expected, thereby supplying us with valuable information about the thermodynamic consequences of the interaction details in fluids. Here, the gradual transition of a simple fluid to an ionic one is studied by varying the relative strength of the dispersion interactions compared to the electrostatic interactions among the charged particles. In addition to the effects on the reduced surface tension that were reported earlier [F. Leroy and V. C. Weiss, J. Chem. Phys. 134, 094703 (2011)], we address the shape of the coexistence curve and focus on properties that are related to and derived from the vapor pressure. These quantities include the enthalpy and entropy of vaporization, the boiling point, and the critical compressibility factor Zc. For all of these properties, the crossover from simple to characteristically ionic fluid is seen once the dispersive attraction drops below 20%-40% of the electrostatic attraction (as measured for two particles at contact). Below this threshold, ionic fluids display characteristically low values of Zc as well as large Guggenheim and Guldberg ratios for the reduced enthalpy of vaporization and the reduced boiling point, respectively. The coexistence curves are wider and more skewed than those for simple fluids. The results for the ionic model fluid with variable dispersion interactions improve our understanding of the behavior of real ionic fluids, such as inorganic molten salts and room temperature ionic liquids, by gauging the importance of different types of interactions for thermodynamic properties.
Simulation of semi-explicit mechanisms of SOA formation from glyoxal in a 3D model
NASA Astrophysics Data System (ADS)
Knote, C. J.; Hodzic, A.; Jimenez, J. L.; Volkamer, R.; Orlando, J. J.; Baidar, S.; Brioude, J. F.; Fast, J. D.; Gentner, D. R.; Goldstein, A. H.; Hayes, P. L.; Knighton, W. B.; Oetjen, H.; Setyan, A.; Stark, H.; Thalman, R. M.; Tyndall, G. S.; Washenfelder, R. A.; Waxman, E.; Zhang, Q.
2013-12-01
Formation of secondary organic aerosols (SOA) through multi-phase processing of glyoxal has been proposed recently as a relevant contributor to SOA mass. Glyoxal has both anthropogenic and biogenic sources, and readily partitions into the aqueous phase of cloud droplets and aerosols. Both reversible and irreversible chemistry in the liquid phase has been observed. A recent laboratory study indicates that the presence of salts in the liquid phase strongly enhances the Henry's law constant of glyoxal, allowing for much more effective multi-phase processing. In our work we investigate the contribution of glyoxal to SOA formation on the regional scale. We employ the regional chemistry transport model WRF-Chem with MOZART gas-phase chemistry and MOSAIC aerosols, both of which we extended to improve the description of glyoxal formation in the gas phase and its interactions with aerosols. The detailed description of aerosols in our setup allows us to compare very simple (uptake coefficient) parameterizations of SOA formation from glyoxal, as used in previous modeling studies, with much more detailed descriptions of the various pathways postulated from laboratory studies. Measurements taken during the CARES and CalNex campaigns in California in summer 2010 allowed us to constrain the model, including the major direct precursors of glyoxal. Simulations at convection-permitting resolution over a 2-week period in June 2010 have been conducted to assess the effect of the different ways to parameterize SOA formation from glyoxal and to investigate its regional variability. We find that, depending on the parameterization used, the contribution of glyoxal to SOA is between 1 and 15% in the LA basin during this period, and that simple parameterizations based on uptake coefficients derived from box model studies lead to higher contributions (15%) than parameterizations based on lab experiments (1%).
A kinetic limitation found in experiments hinders substantial contribution of volume-based pathways to total SOA formation from glyoxal. Once removed, 5% of total SOA can be formed from glyoxal through these channels. Results from a year-long simulation over the continental US will give a broader picture of the contribution of glyoxal to SOA formation.
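An uptake-coefficient parameterization of the kind compared here treats glyoxal loss to the aerosol phase as a first-order process, k = γ·c̄·A/4, where c̄ is the mean molecular speed and A the aerosol surface area density. A sketch of that relation (the γ and A values used in the test are purely illustrative, not the study's values):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def uptake_rate(gamma, T, M, A):
    # First-order loss rate (s^-1) of a gas to aerosol surface:
    # k = gamma * c_mean * A / 4, with A in m^2 of surface per m^3 of air
    # and M the molar mass in kg/mol.
    c_mean = math.sqrt(8.0 * R * T / (math.pi * M))  # mean speed, m s^-1
    return gamma * c_mean * A / 4.0
```

For glyoxal, M = 0.058 kg/mol; the rate is linear in the uptake coefficient γ, which is why box-model-derived γ values directly scale the simulated SOA contribution.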
Advances in Scientific Balloon Thermal Modeling
NASA Technical Reports Server (NTRS)
Bohaboj, T.; Cathey, H. M., Jr.
2004-01-01
The National Aeronautics and Space Administration's Balloon Program Office has long acknowledged that the accurate modeling of balloon performance and flight prediction is dependent on how well the balloon is thermally modeled. This ongoing effort is focused on developing accurate balloon thermal models that can be used to quickly predict balloon temperatures and balloon performance. The ability to model parametric changes is also a driver for this effort. This paper will present the most recent advances made in this area. This research effort continues to utilize the "Thermal Desktop" addition to AutoCAD for the modeling. Recent advances have been made by using this analytical tool. A number of analyses have been completed to test the applicability of this tool to the problem, with very positive results. Progressively detailed models have been developed to explore the capabilities of the tool as well as to provide guidance in model formulation. A number of parametric studies have been completed. These studies have varied the shape of the structure, material properties, environmental inputs, and model geometry. They have concentrated on spherical "proxy models" in the initial development stages, with a transition to the natural-shaped zero-pressure and super-pressure balloons. An assessment of required model resolution has also been made. Model solutions have been cross-checked against known solutions via hand calculations, and the comparison of these cases will also be presented. One goal is to develop analysis guidelines and an approach for modeling balloons both for simple first-order estimates and for detailed full models. This paper presents the step-by-step advances made as part of this effort, the capabilities and limitations, and the lessons learned. Also presented are the plans for further thermal modeling work.
Combining 3d Volume and Mesh Models for Representing Complicated Heritage Buildings
NASA Astrophysics Data System (ADS)
Tsai, F.; Chang, H.; Lin, Y.-W.
2017-08-01
This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that can take advantage of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, these 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove points enclosed by the bare-bones model from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries, integrating the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and to point clouds of a local historical structure. Preliminary results indicated that hybrid models reconstructed using the proposed method can retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets at different levels of detail according to user and system requirements in different applications.
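The masking step in the third phase removes every point enclosed by the bare-bones model before the residual surface is meshed. A toy sketch of that idea (using axis-aligned bounding boxes as stand-ins for the actual volume elements, which the paper builds from fitted planes and surfaces):

```python
def mask_points(points, boxes):
    # Keep only the points lying outside every volume element.
    # Each box is (xmin, ymin, zmin, xmax, ymax, zmax); each point (x, y, z).
    def inside(p, b):
        return all(b[k] <= p[k] <= b[k + 3] for k in range(3))
    return [p for p in points if not any(inside(p, b) for b in boxes)]
```

The surviving points are exactly those that carry detail the volume model cannot represent, so only they need to be meshed.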
GLACE: The Global Land-Atmosphere Coupling Experiment. Part 1; Overview
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Guo, Zhi-Chang; Dirmeyer, Paul A.; Bonan, Gordon; Chan, Edmond; Cox, Peter; Davies, Harvey; Gordon, C. T.; Kanae, Shinjiro; Kowalczyk, Eva
2005-01-01
GLACE is a model intercomparison study focusing on a typically neglected yet critical element of numerical weather and climate modeling: land-atmosphere coupling strength, or the degree to which anomalies in land surface state (e.g., soil moisture) can affect rainfall generation and other atmospheric processes. The twelve AGCM groups participating in GLACE performed a series of simple numerical experiments that allow the objective quantification of this element. The derived coupling strengths vary widely. Some similarity, however, is found in the spatial patterns generated by the models, enough similarity to pinpoint multi-model "hot spots" of land-atmosphere coupling. For boreal summer, such hot spots for precipitation and temperature are found over large regions of Africa, central North America and India; a hot spot for temperature is also found over eastern China. The design of the GLACE simulations is described in full detail so that any interested modeling group can repeat them easily and thereby place their model's coupling strength within the broad range of those documented here.
Ji, C.; Helmberger, D.V.; Wald, D.J.
2004-01-01
Slip histories for the 2002 M7.9 Denali fault, Alaska, earthquake are derived rapidly from global teleseismic waveform data. Three models, developed in successive phases, progressively improve the match to waveform data and the recovery of rupture details. In the first model (Phase I), analogous to an automated solution, a simple fault plane is fixed based on the preliminary Harvard Centroid Moment Tensor mechanism and the epicenter provided by the Preliminary Determination of Epicenters. This model is then updated (Phase II) by implementing a more realistic fault geometry inferred from Digital Elevation Model topography, and further (Phase III) by using the calibrated P-wave and SH-wave arrival times derived from modeling of the nearby 2002 M6.7 Nenana Mountain earthquake. These models are used to predict the peak ground velocity and the shaking intensity field in the fault vicinity. The procedure to estimate local strong motion could be automated and used for global real-time earthquake shaking and damage assessment. © 2004, Earthquake Engineering Research Institute.
Let's Go Off the Grid: Subsurface Flow Modeling With Analytic Elements
NASA Astrophysics Data System (ADS)
Bakker, M.
2017-12-01
Subsurface flow modeling with analytic elements has the major advantage that no grid or time stepping is needed. Analytic element formulations exist for steady-state and transient flow in layered aquifers and for unsaturated flow in the vadose zone. Analytic element models are vector-based and consist of points, lines and curves that represent specific features in the subsurface. Recent advances allow for the simulation of partially penetrating wells and multi-aquifer wells, including skin effect and wellbore storage; horizontal wells of poly-line shape including skin effect; sharp changes in subsurface properties; and surface water features with leaky beds. Input files for analytic element models are simple, short and readable, and can easily be generated from, for example, GIS databases. Future plans include the incorporation of analytic elements in parts of grid-based models where additional detail is needed. This presentation will give an overview of advanced flow features that can be modeled, many of which are implemented in free and open-source software.
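The grid-free character of analytic elements comes from superposing closed-form solutions for individual features. As a minimal sketch (steady confined flow, uniform transmissivity T, and a common reference distance R are simplifying assumptions), the head field of several wells is just a sum of Thiem solutions:

```python
import math

def head(x, y, wells, T, h0, R):
    # Superpose Thiem solutions: each well (xw, yw, Q) lowers the head by
    # Q / (2 pi T) * ln(R / r); h0 is the head at reference distance R.
    h = h0
    for xw, yw, Q in wells:
        r = math.hypot(x - xw, y - yw)
        h -= Q / (2.0 * math.pi * T) * math.log(R / r)
    return h
```

Because the solution is evaluated analytically at any (x, y), there is no mesh to build; adding a feature means adding one more term to the sum.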
23. DETAIL PHOTO OF A TYPICAL PIER BELT COURSE AT ...
23. DETAIL PHOTO OF A TYPICAL PIER BELT COURSE AT THE SPRING LINE OF ONE OF THE ARCHES. IT IS BEVELLED AND SUPPORTED BY A SIMPLE CAVETTO MOLDING. THE PILE OF AGGREGATE ON THE COPING HAS FALLEN FROM THE ERODING ARRISES ABOVE. - Main Street Bridge, Spanning East Fork Whitewater River, Richmond, Wayne County, IN
Single organic microtwist with tunable pitch.
Chen, Hai-Bo; Zhou, Yan; Yin, Jie; Yan, Jing; Ma, Yuguo; Wang, Lei; Cao, Yong; Wang, Jian; Pei, Jian
2009-05-19
A facile synthesis of previously unknown, well-separated, uniform chiral microstructures from achiral pi-conjugated organic molecules was developed using a simple solution process. A detailed characterization and the formation mechanism are presented. By simple structure modification or a change of temperature, the pitch of the chiral structure can be finely tuned. Our result opens new possibilities for novel materials in which structural chirality is coupled to device performance.
NASA Astrophysics Data System (ADS)
Aligholi, Saeed; Lashkaripour, Gholam Reza; Ghafoori, Mohammad
2017-01-01
This paper sheds further light on the fundamental relationships between simple methods, rock strength, and brittleness of igneous rocks. In particular, the relationship between mechanical (point load strength index Is(50) and brittleness value S20), basic physical (dry density and porosity), and dynamic properties (P-wave velocity and Schmidt rebound values) for a wide range of Iranian igneous rocks is investigated. First, 30 statistical models (including simple and multiple linear regression analyses) were built to identify the relationships between mechanical properties and simple methods. The results imply that rocks with different Schmidt hardness (SH) rebound values have different physicomechanical properties or relations. Second, using these results, it was shown that dry density, P-wave velocity, and SH rebound value provide a fine complement to the mechanical classification of rock materials. Further, a detailed investigation was conducted on the relationships between mechanical and simple tests established within limited ranges of P-wave velocity and dry density. The results show that strength values decrease with the SH rebound value. In addition, there is a systematic trend between dry density, P-wave velocity, rebound hardness, and brittleness value of the studied rocks, and rocks with medium hardness have a higher brittleness value. Finally, a strength classification chart and a brittleness classification table are presented, providing reliable and low-cost methods for the classification of igneous rocks.
The development of a 3D immunocompetent model of human skin.
Chau, David Y S; Johnson, Claire; MacNeil, Sheila; Haycock, John W; Ghaemmaghami, Amir M
2013-09-01
As the first line of defence, skin is regularly exposed to a variety of biological, physical and chemical insults. Therefore, determining the skin sensitization potential of new chemicals is of paramount importance from the safety assessment and regulatory point of view. Given the questionable biological relevance of animal models to humans, as well as ethical and regulatory pressure to limit or stop the use of animal models for safety testing, there is a need to develop simple yet physiologically relevant models of human skin. Herein, we describe the construction of a novel immunocompetent 3D human skin model comprising dendritic cells co-cultured with keratinocytes and fibroblasts. This model culture system is simple to assemble with readily available components and, importantly, can be separated into its constituent individual layers to allow further insight into cell-cell interactions and detailed studies of the mechanisms of skin sensitization. In this study, using non-degradable microfibre scaffolds and a cell-laden gel, we have engineered a multilayer 3D immunocompetent model of keratinocytes and fibroblasts interspersed with dendritic cells. We have characterized this model using a combination of confocal microscopy, immunohistochemistry and scanning electron microscopy, and have shown differentiation of the epidermal layer and formation of an epidermal barrier. Crucially, the immune cells in the model are able to migrate and remain responsive to stimulation with skin sensitizers even at low concentrations. We therefore suggest this new biologically relevant skin model will prove valuable in investigating the mechanisms of allergic contact dermatitis and other skin pathologies in humans. Once fully optimized, this model can also be used as a platform for testing the allergenic potential of new chemicals and drug leads.
Biological neural networks as model systems for designing future parallel processing computers
NASA Technical Reports Server (NTRS)
Ross, Muriel D.
1991-01-01
One of the more interesting debates of the present day centers on whether human intelligence can be simulated by computer. The author works under the premise that neurons individually are not smart at all. Rather, they are physical units which are impinged upon continuously by other matter that influences the direction of voltage shifts across the units' membranes. It is only through the action of a great many neurons, billions in the case of the human nervous system, that intelligent behavior emerges. What is required to understand even the simplest neural system is painstaking analysis, bit by bit, of the architecture and the physiological functioning of its various parts. The biological neural networks studied, the vestibular utricular and saccular maculas of the inner ear, are among the simplest of the mammalian neural networks to understand and model. While there is still a long way to go to understand even this simplest of neural networks in sufficient detail for extrapolation to computers and robots, a start has been made. Moreover, the insights obtained and the technologies developed help advance the understanding of the more complex neural networks that underlie human intelligence.
Probing the prodigious strain fringes from Lourdes
NASA Astrophysics Data System (ADS)
Aerden, Domingo G. A. M.; Sayab, Mohammad
2017-12-01
We investigate the kinematics of classic sigmoidal strain fringes from Lourdes (France) and review previous genetic models, strain methods and strain rates for these microstructures. Displacement-controlled quartz and calcite fibers within the fringes yield an average strain of 195% with the technique of Ramsay and Huber (1983). This agrees well with strains measured from boudinaged pyrite layers and calcite veins in the same rocks, but conflicts with the ca. 675% strain in previous analogue models for the studied strain fringes produced by progressive simple shear. We show that the detailed geometry and orientation of fiber patterns are insufficiently explained by simple shear but imply two successive, differently oriented strain fields. Although all strain fringes have the same overall asymmetry, considerable morphological variation resulted from different amounts of rotation of pyrite grains and fringes. Minor rotation led to sharply kinked fibers that record a ca. 70° rotation of the kinematic frame. Larger (up to 145°) rotations, accommodated by antithetic sliding on pyrite-fringe contacts, produced more strongly and smoothly curved fibers. Combined with published Rb-Sr ages for the studied microstructures, our new strain data indicate an average strain rate of 1.41 × 10^-15 s^-1 during ca. 37 Myr of continuous growth.
Point-source helicity injection for ST plasma startup in Pegasus
NASA Astrophysics Data System (ADS)
Redd, A. J.; Battaglia, D. J.; Bongard, M. W.; Fonck, R. J.; Schlossberg, D. J.
2009-11-01
Plasma current guns are used as point-source DC helicity injectors for forming non-solenoidal tokamak plasmas in the Pegasus Toroidal Experiment. Discharges driven by this injection scheme have achieved Ip >= 100 kA using Iinj <= 4 kA. They form at the outboard midplane, transition to a tokamak-like equilibrium, and continue to grow inward as Ip increases due to helicity injection and outer-PF induction. The maximum Ip is determined by helicity balance (injection rate vs. resistive dissipation) and a Taylor relaxation limit, in which Ip ∝ √(ITF Iinj/w), where w is the radial thickness of the gun-driven edge. Preliminary experiments tentatively confirm these scalings with ITF, Iinj, and w, increasing confidence in this simple relaxation model. Adding solenoidal inductive drive during helicity injection can push Ip up to, but not beyond, the predicted relaxation limit, demonstrating that this is a hard performance limit. Present experiments are focused on increasing the injection voltage (i.e., helicity injection rate) and reducing w. Near-term goals are to further test scalings predicted by the simple relaxation model and to study in detail the observed bursty n=1 activity correlated with rapid increases in Ip.
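The Taylor relaxation limit quoted above scales as Ip ∝ √(ITF·Iinj/w). A one-line sketch of that scaling (the proportionality constant c is a placeholder, to be fixed by fitting experiment; it is not given in the abstract):

```python
import math

def taylor_limit_ip(i_tf, i_inj, w, c=1.0):
    # Taylor relaxation limit: Ip proportional to sqrt(I_TF * I_inj / w);
    # c is a device-dependent constant (placeholder value here).
    return c * math.sqrt(i_tf * i_inj / w)
```

The square-root dependence means doubling the injected current raises the achievable Ip by only a factor of √2, while thinning the gun-driven edge layer w raises it as 1/√w, consistent with the experimental push to reduce w.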
Experimental Estimating Deflection of a Simple Beam Bridge Model Using Grating Eddy Current Sensors
Lü, Chunfeng; Liu, Weiwen; Zhang, Yongjie; Zhao, Hui
2012-01-01
A novel three-point method using a grating eddy current absolute position sensor (GECS) for bridge deflection estimation is proposed in this paper. Real spatial positions of the measuring points along the span axis are directly used as relative reference points of each other rather than using any other auxiliary static reference points for measuring devices in a conventional method. Every three adjacent measuring points are defined as a measuring unit and a straight connecting bar with a GECS fixed on the center section of it links the two endpoints. In each measuring unit, the displacement of the mid-measuring point relative to the connecting bar measured by the GECS is defined as the relative deflection. Absolute deflections of each measuring point can be calculated from the relative deflections of all the measuring units directly without any correcting approaches. Principles of the three-point method and displacement measurement of the GECS are introduced in detail. Both static and dynamic experiments have been carried out on a simple beam bridge model, which demonstrate that the three-point deflection estimation method using the GECS is effective and offers a reliable way for bridge deflection estimation, especially for long-term monitoring. PMID:23112583
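A hedged sketch of how absolute deflections might be recovered from the relative deflections of overlapping three-point units. The geometry assumed here (each unit reports y_i − (y_{i−1} + y_{i+1})/2, with zero deflection at the supports) is our reading of the abstract, not the authors' exact formulation; it leads to a tridiagonal linear system solved below with the Thomas algorithm.

```python
def absolute_from_relative(rel):
    """Solve  -y[i-1]/2 + y[i] - y[i+1]/2 = rel[i]  for interior points
    y[0..n-1], with zero deflection assumed at both supports."""
    n = len(rel)
    a, b, c = -0.5, 1.0, -0.5  # constant tridiagonal coefficients
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c / b, rel[0] / b
    for i in range(1, n):        # forward sweep
        denom = b - a * cp[i - 1]
        cp[i] = c / denom
        dp[i] = (rel[i] - a * dp[i - 1]) / denom
    y = [0.0] * n
    y[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        y[i] = dp[i] - cp[i] * y[i + 1]
    return y

# Synthetic check: a symmetric deflection profile (hypothetical mm values).
true_y = [-4.0, -6.0, -7.0, -6.0, -4.0]
padded = [0.0] + true_y + [0.0]
rel = [padded[i] - (padded[i - 1] + padded[i + 1]) / 2 for i in range(1, 6)]
recovered = absolute_from_relative(rel)
print(recovered)  # ≈ true_y
```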
Caballero-Lima, David; Kaneva, Iliyana N.; Watton, Simon P.
2013-01-01
In the hyphal tip of Candida albicans we have made detailed quantitative measurements of (i) exocyst components, (ii) Rho1, the regulatory subunit of (1,3)-β-glucan synthase, (iii) Rom2, the specialized guanine-nucleotide exchange factor (GEF) of Rho1, and (iv) actin cortical patches, the sites of endocytosis. We use the resulting data to construct and test a quantitative 3-dimensional model of fungal hyphal growth based on the proposition that vesicles fuse with the hyphal tip at a rate determined by the local density of exocyst components. Enzymes such as (1,3)-β-glucan synthase thus embedded in the plasma membrane continue to synthesize the cell wall until they are removed by endocytosis. The model successfully predicts the shape and dimensions of the hyphae, provided that endocytosis acts to remove cell wall-synthesizing enzymes at the subapical bands of actin patches. Moreover, a key prediction of the model is that the distribution of the synthase is substantially broader than the area occupied by the exocyst. This prediction is borne out by our quantitative measurements. Thus, although the model highlights detailed issues that require further investigation, in general terms the pattern of tip growth of fungal hyphae can be satisfactorily explained by a simple but quantitative model rooted within the known molecular processes of polarized growth. Moreover, the methodology can be readily adapted to model other forms of polarized growth, such as that which occurs in plant pollen tubes. PMID:23666623
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaskey, Alex; Billings, Jay Jay; de Almeida, Valmor F
2011-08-01
This report details the progress made in the development of the Reprocessing Plant Toolkit (RPTk) for the DOE Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. RPTk is an ongoing development effort intended to provide users with an extensible, integrated, and scalable software framework for the modeling and simulation of spent nuclear fuel reprocessing plants by enabling the insertion and coupling of user-developed physicochemical modules of variable fidelity. The NEAMS Safeguards and Separations IPSC (SafeSeps) and the Enabling Computational Technologies (ECT) supporting program element have partnered to release an initial version of the RPTk with a focus on software usability and utility. RPTk implements a data flow architecture that is the source of the system's extensibility and scalability. Data flows through physicochemical modules sequentially, with each module importing data, evolving it, and exporting the updated data to the next downstream module. This is accomplished through various architectural abstractions designed to give RPTk true plug-and-play capabilities. A simple application of this architecture, as well as RPTk data flow and evolution, is demonstrated in Section 6 with an application consisting of two coupled physicochemical modules. The remaining sections describe this ongoing work in full, from system vision and design inception to full implementation. Section 3 describes the relevant software development processes used by the RPTk development team. These processes allow the team to manage system complexity and ensure stakeholder satisfaction. This section also details the work done on the RPTk "black box" and "white box" models, with a special focus on the separation of concerns between the RPTk user interface and application runtime. Sections 4 and 5 discuss the application runtime component in more detail, and describe the dependencies, behavior, and rigorous testing of its constituent components.
2011-01-01
Background Real-time forecasting of epidemics, especially those based on a likelihood-based approach, is understudied. This study aimed to develop a simple method that can be used for the real-time epidemic forecasting. Methods A discrete time stochastic model, accounting for demographic stochasticity and conditional measurement, was developed and applied as a case study to the weekly incidence of pandemic influenza (H1N1-2009) in Japan. By imposing a branching process approximation and by assuming the linear growth of cases within each reporting interval, the epidemic curve is predicted using only two parameters. The uncertainty bounds of the forecasts are computed using chains of conditional offspring distributions. Results The quality of the forecasts made before the epidemic peak appears largely to depend on obtaining valid parameter estimates. The forecasts of both weekly incidence and final epidemic size greatly improved at and after the epidemic peak with all the observed data points falling within the uncertainty bounds. Conclusions Real-time forecasting using the discrete time stochastic model with its simple computation of the uncertainty bounds was successful. Because of the simplistic model structure, the proposed model has the potential to additionally account for various types of heterogeneity, time-dependent transmission dynamics and epidemiological details. The impact of such complexities on forecasting should be explored when the data become available as part of the disease surveillance. PMID:21324153
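The branching-process idea in the abstract can be sketched with a toy simulation: each case produces a Poisson(R) number of new cases per reporting interval, and uncertainty bounds come from repeating the chain many times. This is a minimal illustration only; R, the seed count, and the two-parameter structure are placeholders, and the published model's conditional-measurement step is omitted.

```python
import random

def poisson(rng, lam):
    """Knuth's Poisson sampler; adequate for small lambda."""
    import math
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def forecast(current_cases, R, weeks, n_sims=2000, seed=1):
    """Simulate chains of Poisson offspring; return the mean forecast
    and an approximate 95% uncertainty interval."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_sims):
        cases = current_cases
        for _ in range(weeks):
            cases = sum(poisson(rng, R) for _ in range(cases))
        finals.append(cases)
    finals.sort()
    lo = finals[int(0.025 * n_sims)]
    hi = finals[int(0.975 * n_sims)]
    return sum(finals) / n_sims, (lo, hi)

# Hypothetical example: 50 cases this week, R = 1.2, two-week horizon.
mean, (lo, hi) = forecast(current_cases=50, R=1.2, weeks=2)
```

The expected two-week-ahead mean is roughly 50 × 1.2² = 72; the simulated interval widens with the horizon, mirroring the abstract's observation that forecasts sharpen once the peak constrains the parameters.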
Nuclear reactor descriptions for space power systems analysis
NASA Technical Reports Server (NTRS)
Mccauley, E. W.; Brown, N. J.
1972-01-01
For the small, high-performance reactors required for space electric applications, adequate neutronic analysis is of crucial importance, but in terms of computational time consumed, nuclear calculations probably yield the least amount of detail for a mission analysis study. It has been found possible, after generating only a few designs of a reactor family in elaborate thermomechanical and nuclear detail, to use simple curve-fitting techniques to assure the desired neutronic performance while still performing the thermomechanical analysis in explicit detail. The resulting speed-up in computation time permits a broad, detailed examination of constraints by the mission analyst.
Modeling fibrous biological tissues with a general invariant that excludes compressed fibers
NASA Astrophysics Data System (ADS)
Li, Kewei; Ogden, Ray W.; Holzapfel, Gerhard A.
2018-01-01
Dispersed collagen fibers in fibrous soft biological tissues have a significant effect on the overall mechanical behavior of the tissues. Constitutive modeling of the detailed structure obtained by using advanced imaging modalities has been investigated extensively in the last decade. In particular, our group has previously proposed a fiber dispersion model based on a generalized structure tensor. However, the fiber tension-compression switch described in that study is unable to exclude compressed fibers within a dispersion and the model requires modification so as to avoid some unphysical effects. In a recent paper we have proposed a method which avoids such problems, but in this present study we introduce an alternative approach by using a new general invariant that only depends on the fibers under tension so that compressed fibers within a dispersion do not contribute to the strain-energy function. We then provide expressions for the associated Cauchy stress and elasticity tensors in a decoupled form. We have also implemented the proposed model in a finite element analysis program and illustrated the implementation with three representative examples: simple tension and compression, simple shear, and unconfined compression on articular cartilage. We have obtained very good agreement with the analytical solutions that are available for the first two examples. The third example shows the efficacy of the fibrous tissue model in a larger scale simulation. For comparison we also provide results for the three examples with the compressed fibers included, and the results are completely different. If the distribution of collagen fibers is such that it is appropriate to exclude compressed fibers then such a model should be adopted.
Economic decision making and the application of nonparametric prediction models
Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.
2008-01-01
Sustained increases in energy prices have focused attention on gas resources in low-permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are often large. Planning and development decisions for extraction of such resources must be areawide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm, the decision to enter such plays depends on reconnaissance-level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional-scale cost functions. The context of the worked example is the Devonian Antrim-shale gas play in the Michigan basin. One finding relates to selection of the resource prediction model to be used with economic models. Models chosen because they can best predict aggregate volume over larger areas (many hundreds of sites) smooth out granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined arbitrarily by extraneous factors. The analysis shows a 15-20% gain in gas volume when these simple models are applied to order drilling prospects strategically rather than to choose drilling locations randomly. Copyright © 2008 Society of Petroleum Engineers.
Applying a Particle-only Model to the HL Tau Disk
NASA Astrophysics Data System (ADS)
Tabeshian, Maryam; Wiegert, Paul A.
2018-04-01
Observations have revealed rich structures in protoplanetary disks, offering clues about their embedded planets. Due to the complexities introduced by the abundance of gas in these disks, modeling their structure in detail is computationally intensive, requiring complex hydrodynamic codes and substantial computing power. It would be advantageous if computationally simpler models could provide some preliminary information on these disks. Here we apply a particle-only model (that we developed for gas-poor debris disks) to the gas-rich disk, HL Tauri, to address the question of whether such simple models can inform the study of these systems. Assuming three potentially embedded planets, we match HL Tau's radial profile fairly well and derive best-fit planetary masses and orbital radii (0.40, 0.02, 0.21 Jupiter masses for the planets orbiting a 0.55 M☉ star at 11.22, 29.67, 64.23 au). Our derived parameters are comparable to those estimated by others, except for the mass of the second planet. Our simulations also reproduce some narrower gaps seen in the ALMA image away from the orbits of the planets. The nature of these gaps is debated but, based on our simulations, we argue they could result from planet–disk interactions via mean-motion resonances, and need not contain planets. Our results suggest that a simple particle-only model can be used as a first step to understanding dynamical structures in gas disks, particularly those formed by planets, and determine some parameters of their hidden planets, serving as useful initial inputs to hydrodynamic models which are needed to investigate disk and planet properties more thoroughly.
Selective sweeps in growing microbial colonies
NASA Astrophysics Data System (ADS)
Korolev, Kirill S.; Müller, Melanie J. I.; Karahan, Nilay; Murray, Andrew W.; Hallatschek, Oskar; Nelson, David R.
2012-04-01
Evolutionary experiments with microbes are a powerful tool to study mutations and natural selection. These experiments, however, are often limited to the well-mixed environments of a test tube or a chemostat. Since spatial organization can significantly affect evolutionary dynamics, the need is growing for evolutionary experiments in spatially structured environments. The surface of a Petri dish provides such an environment, but a more detailed understanding of microbial growth on Petri dishes is necessary to interpret such experiments. We formulate a simple deterministic reaction-diffusion model, which successfully predicts the spatial patterns created by two competing species during colony expansion. We also derive the shape of these patterns analytically without relying on microscopic details of the model. In particular, we find that the relative fitness of two microbial strains can be estimated from the logarithmic spirals created by selective sweeps. The theory is tested with strains of the budding yeast Saccharomyces cerevisiae for spatial competitions with different initial conditions and for a range of relative fitnesses. The reaction-diffusion model also connects the microscopic parameters like growth rates and diffusion constants with macroscopic spatial patterns and predicts the relationship between fitness in liquid cultures and on Petri dishes, which we confirmed experimentally. Spatial sector patterns therefore provide an alternative fitness assay to the commonly used liquid culture fitness assays.
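The abstract states that selective sweeps trace logarithmic spirals whose geometry encodes relative fitness. A minimal sketch of the data-analysis step, under our own assumptions: a log spiral satisfies φ(r) = φ₀ + k·ln(r/r₀), so the slope k can be extracted by ordinary least squares in (ln r, φ). The mapping from k to relative fitness is model-specific and is given in the paper, so the sketch stops at the slope itself.

```python
import math

def spiral_slope(rs, phis):
    """Least-squares slope k of phi = phi0 + k * ln(r),
    fitted to boundary points (r_i, phi_i) of a sector."""
    xs = [math.log(r) for r in rs]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(phis) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, phis))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# Synthetic sector boundary generated with slope k = 0.3 rad per e-fold
# of colony radius (hypothetical values, for illustration only):
rs = [1.0, 1.5, 2.25, 3.4, 5.1]
phis = [0.3 * math.log(r) for r in rs]
print(spiral_slope(rs, phis))  # ≈ 0.3
```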
In vivo neuronal calcium imaging in C. elegans.
Chung, Samuel H; Sun, Lin; Gabel, Christopher V
2013-04-10
The nematode worm C. elegans is an ideal model organism for relatively simple, low cost neuronal imaging in vivo. Its small transparent body and simple, well-characterized nervous system allows identification and fluorescence imaging of any neuron within the intact animal. Simple immobilization techniques with minimal impact on the animal's physiology allow extended time-lapse imaging. The development of genetically-encoded calcium sensitive fluorophores such as cameleon and GCaMP allow in vivo imaging of neuronal calcium relating both cell physiology and neuronal activity. Numerous transgenic strains expressing these fluorophores in specific neurons are readily available or can be constructed using well-established techniques. Here, we describe detailed procedures for measuring calcium dynamics within a single neuron in vivo using both GCaMP and cameleon. We discuss advantages and disadvantages of both as well as various methods of sample preparation (animal immobilization) and image analysis. Finally, we present results from two experiments: 1) Using GCaMP to measure the sensory response of a specific neuron to an external electrical field and 2) Using cameleon to measure the physiological calcium response of a neuron to traumatic laser damage. Calcium imaging techniques such as these are used extensively in C. elegans and have been extended to measurements in freely moving animals, multiple neurons simultaneously and comparison across genetic backgrounds. C. elegans presents a robust and flexible system for in vivo neuronal imaging with advantages over other model systems in technical simplicity and cost.
RL10A-3-3A Rocket Engine Modeling Project
NASA Technical Reports Server (NTRS)
Binder, Michael; Tomsik, Thomas; Veres, Joseph P.
1997-01-01
Two RL10A-3-3A rocket engines comprise the main propulsion system for the Centaur upper stage vehicle. Centaur is used with both Titan and Atlas launch vehicles, carrying military and civilian payloads from high altitudes into orbit and beyond. The RL10 has delivered highly reliable service for the past 30 years. Recently, however, there have been two in-flight failures which have refocused attention on the RL10. This heightened interest has sparked a desire for an independent RL10 modeling capability within NASA and the Air Force. Pratt & Whitney, which presently has the most detailed model of the RL10, also sees merit in having an independent model which could be used as a cross-check with their own simulations. The Space Propulsion Technology Division (SPTD) at the NASA Lewis Research Center has developed a computer model of the RL10A-3-3A. A project team was formed, consisting of experts in the areas of turbomachinery, combustion, and heat transfer. The overall goal of the project was to provide a model of the entire RL10 rocket engine for government use. In the course of the project, the major engine components have been modeled using a combination of simple correlations and detailed component analysis tools (computer codes). The results of these component analyses were verified with data provided by Pratt & Whitney. Select modeling results and test data curves were then integrated to form the RL10 engine system model. The purpose of this report is to introduce the reader to the RL10 rocket engine and to describe the engine system model. The RL10 engine and its application to U.S. launch vehicles are described first, followed by a summary of the SPTD project organization, goals, and accomplishments. Simulated output from the system model is shown in comparison with test and flight data for start transient, steady state, and shut-down transient operations.
Detailed descriptions of all component analyses, including those not selected for integration with the system model, are included as appendices.
Elemans, Coen P H; Muller, Mees; Larsen, Ole Naesbye; van Leeuwen, Johan L
2009-04-01
Birdsong has developed into one of the important models for motor control of learned behaviour and shows many parallels with speech acquisition in humans. However, there are several experimental limitations to studying the vocal organ - the syrinx - in vivo. The multidisciplinary approach of combining experimental data and mathematical modelling has greatly improved the understanding of neural control and peripheral motor dynamics of sound generation in birds. Here, we present a simple mechanical model of the syrinx that facilitates detailed study of vibrations and sound production. Our model resembles the 'starling resistor', a collapsible tube model, and consists of a tube with a single membrane in its casing, suspended in an external pressure chamber and driven by various pressure patterns. With this design, we can separately control 'bronchial' pressure and tension in the oscillating membrane and generate a wide variety of 'syllables' with simple sweeps of the control parameters. We show that the membrane exhibits high frequency, self-sustained oscillations in the audio range (>600 Hz fundamental frequency) using laser Doppler vibrometry, and systematically explore the conditions for sound production of the model in its control space. The fundamental frequency of the sound increases with tension in three membranes with different stiffness and mass. The lower-bound fundamental frequency increases with membrane mass. The membrane vibrations are strongly coupled to the resonance properties of the distal tube, most likely because of its reflective properties to sound waves. Our model is a gross simplification of the complex morphology found in birds, and more closely resembles mathematical models of the syrinx. Our results confirm several assumptions underlying existing mathematical models in a complex geometry.
COBRA ATD multispectral camera response model
NASA Astrophysics Data System (ADS)
Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.
2000-08-01
A new multispectral camera response model has been developed in support of the US Marine Corps (USMC) Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) Program. This analytical model accurately estimates the response of the five Xybion intensified IMC 201 multispectral cameras used for COBRA ATD airborne minefield detection. The camera model design is based on a series of camera response curves which were generated through optical laboratory tests performed by the Naval Surface Warfare Center, Dahlgren Division, Coastal Systems Station (CSS). Data fitting techniques were applied to these measured response curves to obtain nonlinear expressions that estimate digitized camera output as a function of irradiance, intensifier gain, and exposure. This COBRA Camera Response Model was proven to be very accurate, stable over a wide range of parameters, analytically invertible, and relatively simple. This practical camera model was subsequently incorporated into the COBRA sensor performance evaluation and computational tools for research analysis modeling toolbox in order to enhance COBRA modeling and simulation capabilities. Details of the camera model design and comparisons of modeled response to measured experimental data are presented.
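The fit-then-invert workflow described above can be illustrated with a toy response model. The power-law form DN = a·(E·G·t)^g used here is an assumption chosen because it is nonlinear yet analytically invertible, like the model the abstract describes; the actual COBRA expressions come from the CSS laboratory curves. Taking logs makes the fit a linear least-squares problem.

```python
import math

def fit_power_response(xs, dns):
    """Fit DN = a * x**g (x = irradiance * gain * exposure) by linear
    regression of log(DN) on log(x); returns (a, g)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(d) for d in dns]
    n = len(lx)
    xbar, ybar = sum(lx) / n, sum(ly) / n
    g = (sum((x - xbar) * (y - ybar) for x, y in zip(lx, ly))
         / sum((x - xbar) ** 2 for x in lx))
    a = math.exp(ybar - g * xbar)
    return a, g

def invert_response(dn, a, g):
    """Analytic inverse: recover x = E*G*t from a digitized output DN."""
    return (dn / a) ** (1.0 / g)

# Synthetic calibration data generated with a = 120, g = 0.7 (hypothetical):
xs = [0.01, 0.05, 0.2, 1.0, 4.0]
dns = [120 * x ** 0.7 for x in xs]
a, g = fit_power_response(xs, dns)
```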
Simulations of polarization from accretion disks
NASA Astrophysics Data System (ADS)
Schultz, J.
2000-12-01
The Monte Carlo Method was used to estimate the level of polarization from axisymmetric accretion disks similar to those in low-mass X-ray binaries and some classes of cataclysmic variables. In low-mass X-ray binaries electron scattering is supposed to be the dominant opacity source in the inner disk, and most of the optical light is produced in the disk. Thomson scattering occurring in the disk corona produces linear polarization. Detailed theoretical models of accretion disks are numerous, but simple mathematical disk models were used, as the accuracy of polarization measurements does not allow distinction of the fine details of disk models. Stokes parameters were used for the radiative transfer. The simulations indicate that the vertical distribution of emissivity has the greatest effect on polarization, and variations of radial emissivity distribution have no detectable effect on polarization. Irregularities in the disk may reduce the degree of polarization. The polarization levels produced by simulations are detectable with modern instruments. Polarization measurements could be used to get rough constraints on the vertical emissivity distribution of an accretion disk, provided that a reasonably accurate disk model can be constructed from photometric or spectroscopic observations in optical and/or X-ray wavelengths. Mainly based on observations taken at the Observatoire de Haute-Provence, France, and on some observations obtained at the European Southern Observatory, Chile (ESO Prog. IDs: 57.C-0492, 59.C-0293, 61.C-0512).
Strong neutron-γ competition above the neutron threshold in the decay of 70Co
Spyrou, A.; Liddick, S. N.; Naqvi, F.; ...
2016-09-29
The β-decay intensity of 70Co was measured for the first time using the technique of total absorption spectroscopy. The large β-decay Q value [12.3(3) MeV] offers a rare opportunity to study β-decay properties in a broad energy range. Two surprising features were observed in the experimental results, namely, the large fragmentation of the β intensity at high energies, as well as the strong competition between γ rays and neutrons, up to more than 2 MeV above the neutron-separation energy. The data are compared to two theoretical calculations: the shell model and the quasiparticle random phase approximation (QRPA). Both models seem to be missing a significant strength at high excitation energies. Possible interpretations of this discrepancy are discussed. The shell model is used for a detailed nuclear structure interpretation and helps to explain the observed γ-neutron competition. The comparison to the QRPA calculations is done as a means to test a model that provides global β-decay properties for astrophysical calculations. Our work demonstrates the importance of performing detailed comparisons to experimental results, beyond the simple half-life comparisons. Finally, a realistic and robust description of the β-decay intensity is crucial for our understanding of nuclear structure as well as of r-process nucleosynthesis.
Predicting Cost/Performance Trade-Offs for Whitney: A Commodity Computing Cluster
NASA Technical Reports Server (NTRS)
Becker, Jeffrey C.; Nitzberg, Bill; VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)
1997-01-01
Recent advances in low-end processor and network technology have made it possible to build a "supercomputer" out of commodity components. We develop simple models of the NAS Parallel Benchmarks version 2 (NPB 2) to explore the cost/performance trade-offs involved in building a balanced parallel computer supporting a scientific workload. We develop closed form expressions detailing the number and size of messages sent by each benchmark. Coupling these with measured single processor performance, network latency, and network bandwidth, our models predict benchmark performance to within 30%. A comparison based on total system cost reveals that current commodity technology (200 MHz Pentium Pros with 100baseT Ethernet) is well balanced for the NPBs up to a total system cost of around $1,000,000.
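A toy version of the closed-form performance model the abstract describes: per-process time is computation time plus message count times latency plus bytes over bandwidth. The message counts, sizes, and hardware numbers below are hypothetical placeholders for illustration, not the paper's NPB 2 expressions.

```python
def predicted_time(flops, flop_rate, n_msgs, msg_bytes, latency, bandwidth):
    """Simple linear cost model:
    T = flops/flop_rate + n_msgs*latency + total_bytes/bandwidth."""
    compute = flops / flop_rate
    comm = n_msgs * latency + (n_msgs * msg_bytes) / bandwidth
    return compute + comm

# Illustrative numbers in the spirit of the paper's hardware (a 200 MHz
# Pentium Pro class node on a 100baseT-like network); all values hypothetical:
t = predicted_time(flops=2e9, flop_rate=50e6,   # 2 GFLOP at 50 MFLOP/s
                   n_msgs=1000, msg_bytes=8192,
                   latency=100e-6, bandwidth=12.5e6)
```

For these numbers the run is compute-bound (40 s of computation against under a second of communication), which is the kind of balance question the paper's models are built to answer.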
Fiber Composite Sandwich Thermostructural Behavior: Computational Simulation
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Aiello, R. A.; Murthy, P. L. N.
1986-01-01
Several computational levels of progressive sophistication/simplification are described to computationally simulate composite sandwich hygral, thermal, and structural behavior. The computational levels of sophistication include: (1) three-dimensional detailed finite element modeling of the honeycomb, the adhesive and the composite faces; (2) three-dimensional finite element modeling of the honeycomb assumed to be an equivalent continuous, homogeneous medium, the adhesive and the composite faces; (3) laminate theory simulation where the honeycomb (metal or composite) is assumed to consist of plies with equivalent properties; and (4) derivations of approximate, simplified equations for thermal and mechanical properties by simulating the honeycomb as an equivalent homogeneous medium. The approximate equations are combined with composite hygrothermomechanical and laminate theories to provide a simple and effective computational procedure for simulating the thermomechanical/thermostructural behavior of fiber composite sandwich structures.
An efficient solid modeling system based on a hand-held 3D laser scan device
NASA Astrophysics Data System (ADS)
Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming
2014-12-01
The hand-held 3D laser scanners sold on the market are appealing for their portability and convenience of use, but they are expensive. Developing such a system from cheap devices using the same principles as the commercial systems is impossible. In this paper, a simple hand-held 3D laser scanner based on a volume reconstruction method is developed using cheap devices. Unlike conventional laser scanners, which collect a point cloud of the object surface, the proposed method scans only a few key profile curves on the surface. A planar section curve network can be generated from these profile curves to construct a volume model of the object. The details of the design are presented and illustrated with the example of a complex-shaped object.
Simple neural substrate predicts complex rhythmic structure in duetting birds
NASA Astrophysics Data System (ADS)
Amador, Ana; Trevisan, M. A.; Mindlin, G. B.
2005-09-01
Horneros (Furnarius rufus) are South American birds well known for their oven-like nests and their ability to sing in duets. Previous work has analyzed the rhythmic organization of the duets, unveiling a mathematical structure behind the songs. In this work we analyze in detail an extended database of duets. The rhythms of the songs are compatible with the dynamics presented by a wide class of dynamical systems: forced excitable systems. Compatible with this nonlinear rule, we build a biologically inspired model for how the neural and the anatomical elements may interact to produce the observed rhythmic patterns. This model allows us to synthesize songs presenting the acoustic and rhythmic features observed in real songs. We also make testable predictions in order to support our hypothesis.
Use of an UROV to develop 3-D optical models of submarine environments
NASA Astrophysics Data System (ADS)
Null, W. D.; Landry, B. J.
2017-12-01
The ability to rapidly obtain high-fidelity bathymetry is crucial for a broad range of engineering, scientific, and defense applications ranging from bridge scour, bedform morphodynamics, and coral reef health to unexploded ordnance detection and monitoring. The present work introduces the use of an Underwater Remotely Operated Vehicle (UROV) to develop 3-D optical models of submarine environments. The UROV used a Raspberry Pi camera mounted to a small servo which allowed for pitch control. Prior to video data collection, in situ camera calibration was conducted with the system. Multiple image frames were extracted from the underwater video for 3D reconstruction using Structure from Motion (SFM). This system provides a simple and cost effective solution to obtaining detailed bathymetry in optically clear submarine environments.
Atomistic Modeling of Quaternary Alloys: Ti and Cu in NiAl
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Mosca, Hugo O.; Wilson, Allen W.; Noebe, Ronald D.; Garces, Jorge E.
2002-01-01
The change in site preference in NiAl(Ti,Cu) alloys with concentration is examined experimentally via ALCHEMI and theoretically using the Bozzolo-Ferrante-Smith (BFS) method for alloys. Results for the site occupancy of Ti and Cu additions as a function of concentration are determined experimentally for five alloys. These results are reproduced with large-scale BFS-based Monte Carlo atomistic simulations. The original set of five alloys is extended to 25 concentrations, which are modeled by means of the BFS method for alloys, showing in more detail the compositional range over which major changes in behavior occur. A simple but powerful approach based on the definition of atomic local environments also is introduced to describe energetically the interactions between the various elements and therefore to explain the observed behavior.