NASA Astrophysics Data System (ADS)
Lemaître, J.-F.; Dubray, N.; Hilaire, S.; Panebianco, S.; Sida, J.-L.
2013-12-01
Our purpose is to determine fission-fragment characteristics within the framework of a scission-point model named SPY (Scission Point Yields). This approach can be considered a theoretical laboratory for studying the fission mechanism, since it gives access to the correlation between fragment properties and their nuclear structure, such as shell corrections, pairing, collective degrees of freedom, and odd-even effects. Which ones dominate the final state? What is the impact of the compound-nucleus structure? The SPY model consists of a statistical description of the fission process at the scission point, where the fragments are completely formed and well separated, with fixed properties. The most important feature of the model is that the nuclear structure of the fragments is derived from fully quantum-mechanical microscopic calculations. This approach allows the fission final state to be computed for extremely exotic nuclei that are inaccessible to most available fission models.
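The microcanonical statistical description can be illustrated with a toy calculation. This is a sketch only: the Fermi-gas state density and all numerical values below are illustrative assumptions, not the SPY inputs, which come from microscopic calculations.

```python
import math

def rho(E, a=10.0):
    """Toy Fermi-gas state density rho(E) ~ exp(2*sqrt(a*E)), a in 1/MeV (assumed)."""
    return math.exp(2.0 * math.sqrt(a * max(E, 0.0)))

def weight(E_avail, n=200):
    """Microcanonical weight of a fragmentation: convolution of the two fragment
    state densities over all ways of sharing the available energy E_avail (MeV)."""
    if E_avail <= 0.0:
        return 0.0
    dE = E_avail / n
    return sum(rho(i * dE) * rho(E_avail - i * dE) for i in range(n + 1)) * dE

# Illustrative available energies (MeV) for three fragmentations of one system.
E_avail = {(100, 140): 25.0, (110, 130): 30.0, (120, 120): 15.0}
w = {split: weight(E) for split, E in E_avail.items()}
total = sum(w.values())
yields = {split: wi / total for split, wi in w.items()}  # relative yields
```

The split with the largest available energy acquires the largest statistical weight; in SPY the energies come from microscopic calculations rather than from assumed numbers.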
Scission-point configuration within the two-center shell model shape parameterization
NASA Astrophysics Data System (ADS)
Ivanyuk, F. A.; Chiba, S.; Aritomo, Y.
2014-11-01
Within the two-center shell model parameterization we defined the optimal shape that fissioning nuclei attain just before scission and calculated the total deformation energy (the liquid-drop part plus the shell correction) as a function of mass asymmetry and elongation at the scission point. Three minima, corresponding to one mass-symmetric and two mass-asymmetric peaks in the mass distribution of fission fragments, are found in the deformation energy at the scission point. The calculated deformation energy is used in a quasistatic approximation to estimate the total kinetic and excitation energies of the fission fragments and the total number of emitted prompt neutrons. The calculated results reproduce rather well the experimental data on the positions of the peaks in the mass distribution of fission fragments and on the total kinetic and excitation energies of the fragments. The calculated neutron multiplicity is somewhat larger than the experimental results.
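The order of magnitude of such quasistatic total-kinetic-energy estimates can be sketched with the point-charge Coulomb repulsion between the two fragments at scission. The radius constant and tip separation below are assumptions, and the spherical point-charge form is only a caricature of the authors' calculation.

```python
E2 = 1.44  # e^2 in MeV*fm

def radius(A, r0=1.2):
    """Spherical nuclear radius in fm for mass number A (r0 assumed)."""
    return r0 * A ** (1.0 / 3.0)

def tke_coulomb(Z1, A1, Z2, A2, d_tip=2.0):
    """TKE estimate: Coulomb energy of two point charges separated by the sum
    of the fragment radii plus an assumed tip distance d_tip (fm)."""
    D = radius(A1) + radius(A2) + d_tip  # center-to-center distance in fm
    return Z1 * Z2 * E2 / D

tke = tke_coulomb(38, 96, 54, 140)  # a 96Sr/140Xe-like split of 236U
```

For this split the point-charge value comes out above the measured average TKE (about 170 MeV in uranium fission), which is one way to see that realistic scission configurations are strongly elongated and deformed.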
SPY: a new scission-point model based on microscopic inputs to predict fission fragment properties
NASA Astrophysics Data System (ADS)
Panebianco, Stefano; Dubray, Noël; Goriely, Stéphane; Hilaire, Stéphane; Lemaître, Jean-François; Sida, Jean-Luc
2014-04-01
Despite the difficulty of describing the full fission dynamics, the main fragment characteristics can be determined in a static approach based on a so-called scission-point model. Within this framework, a new Scission-Point model for the calculation of fission fragment Yields (SPY) has been developed. This model, initially based on the approach developed by Wilkins in the late seventies, consists of performing a static energy balance at scission, where the two fragments are assumed to be completely separated so that their macroscopic properties (mass and charge) can be considered fixed. Given the system's state density, averaged quantities such as mass and charge yields and mean kinetic and excitation energies can then be extracted in the framework of a microcanonical statistical description. The main advantage of the SPY model is the introduction of one of the most up-to-date microscopic descriptions of the nucleus for the individual energy of each fragment and, in the future, for its state density. These quantities are obtained in the framework of HFB calculations using the Gogny nucleon-nucleon interaction, ensuring the overall coherence of the model. Starting from a description of the SPY model and its main features, a comparison between SPY predictions and experimental data will be discussed for some specific cases, from light nuclei around mercury to the major actinides. Moreover, extensive predictions over the whole chart of nuclides will be discussed, with particular attention to their implications for stellar nucleosynthesis. Finally, future developments, mainly concerning the introduction of microscopic state densities, will be briefly discussed.
Characterization of the scission point from fission-fragment velocities
M. Caamaño; F. Farget; O. Delaune; K. -H. Schmidt; C. Schmitt; L. Audouin; C. -O. Bacri; J. Benlliure; E. Casarejos; X. Derkx; B. Fernández-Domínguez; L. Gaudefroy; C. Golabek; B. Jurado; A. Lemasson; D. Ramos; C. Rodríguez-Tajes; T. Roger; A. Shrivastava
2015-07-15
The isotopic-yield distributions and kinematic properties of fragments produced in transfer-induced fission of 240Pu and fusion-induced fission of 250Cf, with 9 MeV and 45 MeV of excitation energy respectively, were measured in inverse kinematics with the VAMOS spectrometer. The kinematic properties of the identified fission fragments make it possible to derive properties of the scission configuration such as the distance between fragments, the total kinetic energy, the neutron multiplicity, the total excitation energy, and, for the first time, the proton- and neutron-number sharing during the emergence of the fragments. These properties of the scission point are studied as functions of the fragment atomic number. The correlation between these observables, gathered in a single experiment and for two different fissioning systems at different excitation energies, gives valuable information for the understanding and modeling of the fission process.
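The kind of kinematic reconstruction described above can be sketched with non-relativistic two-body kinematics in the fissioning-system frame. The velocities and mass split below are illustrative numbers, and neutron evaporation is ignored.

```python
AMU_MEV = 931.494  # atomic mass unit in MeV/c^2

def fragment_masses(A_cn, v1, v2):
    """Momentum conservation A1*v1 = A2*v2 with A1 + A2 = A_cn gives the split."""
    A1 = A_cn * v2 / (v1 + v2)
    return A1, A_cn - A1

def total_kinetic_energy(A1, v1, A2, v2):
    """Non-relativistic TKE in MeV for velocities expressed in units of c."""
    return 0.5 * AMU_MEV * (A1 * v1 ** 2 + A2 * v2 ** 2)

# Illustrative velocities (units of c) for a 100/140-like split of 240Pu.
A1, A2 = fragment_masses(240, 0.04722, 0.03373)
tke = total_kinetic_energy(A1, 0.04722, A2, 0.03373)
```

Measuring both fragment velocities thus fixes the mass split and the TKE, from which scission-configuration quantities such as the inter-fragment distance can be inferred.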
Kadmensky, S. G.; Bunakov, V. E.; Kadmensky, S. S.
2012-11-15
It is shown that the emergence of anisotropies in the angular distributions of fragments originating from the spontaneous and induced fission of oriented actinide nuclei is possible only if nonuniformities in the population of the projections M (K) of the fissile-nucleus spin onto the z axis of the laboratory frame (the fissile-nucleus symmetry axis) appear simultaneously in the vicinity of the scission point but not in the vicinity of the outer saddle point of the deformation potential. The possibilities for creating the orientation of fissile nuclei in spontaneous and induced fission, and the effect of these orientations on the anisotropies under analysis, are considered. The role of the Coriolis interaction as a unique source of the mixing of different-K fissile-nucleus states at all stages of the fission process is studied with allowance for the dynamical enhancement of this interaction for excited thermalized states of the nucleus, which are characterized by a high energy density. It is shown that the absence of thermalization of the excited states of the fissile nucleus, which appear because of the nonadiabaticity of its collective deformation motion in the vicinity of the scission point, is a condition for conserving the influence that transition fission states formed at the inner and outer fission barriers exert on the distribution of the spin projections K in low-energy spontaneous nuclear fission. It is confirmed that the anisotropies observed in the angular distributions of fragments originating from fission induced by fast light particles (multiply charged ions) are due to the appearance of strongly excited equilibrium (nonequilibrium) states of the fissile nucleus in the vicinity of its scission point that have a Gibbs (non-Gibbs) distribution of the projections K.
Ryabov, E. G.; Nadtochy, P. N.; Adeev, G. D.; Karpov, A. V.
2008-10-15
A stochastic approach to fission dynamics based on three-dimensional Langevin equations was applied to the calculation of the mass-energy and angular distributions of fission fragments. The dependence of the mass-energy distribution parameters on angular momentum, and of the anisotropy of the fission-fragment angular distribution on excitation energy, has been studied over a wide range of the fissility parameter. A temperature-dependent finite-range liquid-drop model was used in a consistent way to calculate the Helmholtz free-energy functional and the level-density parameter. The modified one-body mechanism of nuclear dissipation (the so-called surface-plus-window dissipation) was used to determine the dissipative forces in the Langevin equations. The evaporation of light prescission particles was taken into account on the basis of a statistical model combined with Langevin dynamics. The calculated parameters of the mass-energy distribution and their angular dependences are in good quantitative agreement with the available experimental data when the reduction coefficient of the wall-formula contribution is set to 0.25. Analysis of the anisotropy of the fission-fragment angular distribution performed with the saddle-point and scission-point transition-state models indicates that it is necessary to take into account the dynamical aspects of the formation of the fission-fragment angular distribution.
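A one-dimensional toy version of the Langevin approach is sketched below, using an overdamped equation and an assumed double-well potential; the paper's model is three-dimensional with shape-dependent transport coefficients, so this shows only the mechanics of a stochastic trajectory.

```python
import math
import random

def fission_time(Eb=2.0, T=1.0, gamma=1.0, dt=0.01, q_scission=1.2,
                 max_steps=200_000, rng=None):
    """One overdamped Langevin trajectory from the ground state to scission.

    Toy potential V(q) = 16*Eb*q^2*(1-q)^2: minima at q=0 (ground state) and
    q=1, barrier of height Eb at q=0.5.  Returns the first-passage time to
    q_scission, or None if the trajectory never gets there (all units toy).
    """
    rng = rng or random.Random(0)
    q = 0.0
    for step in range(max_steps):
        dVdq = 32.0 * Eb * q * (1.0 - q) * (1.0 - 2.0 * q)
        q += -dVdq / gamma * dt + math.sqrt(2.0 * T * dt / gamma) * rng.gauss(0.0, 1.0)
        if q > q_scission:
            return step * dt
    return None

times = [fission_time(rng=random.Random(seed)) for seed in range(20)]
mean_time = sum(times) / len(times)  # crude mean first-passage ("fission") time
```

Averaging many such trajectories gives the flux of probability over the barrier, which is how mass-energy distributions and fission times are accumulated in full stochastic calculations.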
NASA Astrophysics Data System (ADS)
Schmidt, K.-H.; Jurado, B.
2011-10-01
Structural effects in fission-product yields and neutron data for a large number of fissioning nuclei between 220Th and 256Fm from spontaneous fission to 14-MeV-neutron-induced fission have been used to deduce information on the properties of the fissioning systems. Macroscopic properties are attributed to the compound nucleus, while fission channels are ascribed to shells in the nascent fragments. Using a recent general empirical description of the nuclear level density and assuming different characteristic time scales for the collective degrees of freedom of the fissioning system, a new fission model has been developed. The model combines the statistical concept of the scission-point model of Wilkins et al. with empirically determined properties of the potential-energy surface and some characteristic dynamical freeze-out times. Although no fine tuning of the parameters has yet been performed, the model reproduces all measured fission yields and neutron data rather well with a unique set and a relatively small number of free parameters. Since the parameters of the model are closely related to physical properties of the systems, some interesting conclusions on the fission process can be deduced. Prospects for the predictive power of this semi-empirical approach for hitherto unknown fissioning systems are discussed.
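The statistical concept (macroscopic properties of the compound nucleus plus shells in the nascent fragments) can be caricatured with a Boltzmann weight over a toy potential. All parameters below, the parabola stiffness, shell depth and position, and the temperature, are invented for illustration and are not the model's parameters.

```python
import math

def toy_potential(A_heavy, A_cn=236, shell_A=134.0, shell_depth=3.0,
                  shell_width=4.0, stiffness=0.005):
    """Toy potential (MeV) vs heavy-fragment mass: a macroscopic parabola about
    the symmetric split plus a Gaussian shell 'pocket' near A = 134 (assumed)."""
    macro = stiffness * (A_heavy - A_cn / 2.0) ** 2
    shell = -shell_depth * math.exp(-((A_heavy - shell_A) / shell_width) ** 2)
    return macro + shell

def mass_yields(T=0.8, A_cn=236):
    """Boltzmann-weighted yields over heavy-fragment masses at temperature T (MeV)."""
    masses = range(A_cn // 2, A_cn // 2 + 40)
    w = {A: math.exp(-toy_potential(A, A_cn) / T) for A in masses}
    norm = sum(w.values())
    return {A: wi / norm for A, wi in w.items()}

y = mass_yields()
peak = max(y, key=y.get)  # the shell pocket beats the symmetric split here
```

Even this caricature reproduces the qualitative competition the model describes: a shallow fragment shell can pull the yield peak away from the macroscopically favored symmetric split.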
Fission fragment mass distribution for nuclei in the r-process region
Tatsuda, S.; Hashizume, K.; Wada, T.; Ohta, M. [Department of Physics, Konan University, 8-9-1 Okamoto, Kobe 658-8501 (Japan); Sumiyoshi, K. [Numazu College of Technology, NAO (Japan); Otsuki, K. [Univ. of Chicago (United States); Kajino, T. [NAO, GUAS, Univ. of Tokyo (Japan); Koura, H.; Chiba, S. [JAEA (Japan); Aritomo, Y. [FLNR (JINR) (Russia)
2007-02-26
The fission-fragment mass distribution is estimated theoretically for about 2000 nuclides (Z>85) that might play a critical role in r-process nucleosynthesis through fission. The mass distribution of the fission fragments is derived by considering the location and depth of the valleys of the potential-energy surface near the scission point, calculated by means of the liquid-drop model with the shell-energy correction from the two-center shell model. The guiding principle for determining the fission mass asymmetry is the behavior of the fission paths from the saddle to the scission point given by Langevin calculations.
Spädtke, P
2013-01-01
Modeling of technical machines has become a standard technique since computers became powerful enough to handle the amount of data relevant to the specific system. Simulation of an existing physical device requires knowledge of all relevant quantities. Electric fields given by the surrounding boundary as well as magnetic fields caused by coils or permanent magnets have to be known. Internal sources of both fields are sometimes taken into account, such as space-charge forces or the internal magnetic field of a moving bunch of charged particles. The solver routines used are briefly described, and some benchmarking is shown to estimate the necessary computing times for different problems. Different types of charged-particle sources are presented together with suitable models to describe their physics. Electron guns are covered as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron cyclotron resonance ion sources, and H$^-$ sources), together with some remarks on beam transport.
Mass distributions for induced fission of different Hg isotopes
A. V. Andreev; G. G. Adamian; N. V. Antonenko
2011-12-13
With the improved scission-point model the mass distributions are calculated for induced fission of different Hg isotopes with the masses 180-196. The drastic change in the shape of the mass distribution from asymmetric to symmetric is revealed with increasing mass number of the fissioning Hg isotope, and the reactions are proposed to verify this prediction experimentally. The asymmetric mass distribution of fission fragments observed in the recent experiment on the fission of 180Hg is explained. The calculated mass distribution and mean total kinetic energy of fission fragments are in a good agreement with the available experimental data.
On Microscopic Energy Corrections Around Scission Configuration
NASA Astrophysics Data System (ADS)
Pomorski, Krzysztof; Nerlo-Pomorska, Bożena
The spontaneous fission of actinides is analysed by the macroscopic-microscopic method based on the Lublin-Strasbourg Drop model and two deformed Nilsson wells or Yukawa-folded single-particle potentials. The microscopic corrections are obtained within the Strutinsky prescription and the BCS theory. At the scission point the shell correction possesses the additive property, but this is not true for the pairing correlations during asymmetric fission. Here a shape-dependent pairing force of the δ or Gogny type should be used, and the pairing strength should grow with increasing deformation of the fissioning nucleus. Examples of spontaneous-fission results are presented. The possibility of α decay accompanying spontaneous fission is noted.
Loth, E.; Tryggvason, G.; Tsuji, Y.; Elghobashi, S. E.; Crowe, Clayton T.; Berlemont, A.; Reeks, M.; Simonin, O.; Frank, Th; Onishi, Yasuo; Van Wachem, B.
2005-09-01
Slurry flows occur in many circumstances, including chemical manufacturing processes; pipeline transfer of coal, sand, and minerals; mud flows; and disposal of dredged materials. In this section we discuss slurry-flow applications related to radioactive waste management. The Hanford tank waste solids and interstitial liquids will be mixed to form a slurry so it can be pumped out for retrieval and treatment. The waste is very complex chemically and physically. The ARIEL code is used to model the chemical interactions and fluid dynamics of the waste.
Effect of transients in nuclear fission on multiplicity of prescission neutrons
Gargi Chaudhuri; Santanu Pal
2002-04-19
Transients in the fission of highly excited nuclei are studied in the framework of the Langevin equation. Time-dependent fission widths are calculated, which show that after the initial transients a steady flow towards the scission point is established not only for nuclei that have fission barriers but also for nuclei with no fission barrier. A comparison of the transient time and the fission lifetime shows that fission changes from a diffusive to a transient-dominated process over a certain transition region as a function of the spin of the fissioning nucleus. Multiplicities of prescission neutrons are calculated in a statistical model with and without a single-swoop description of fission, and they are found to differ in the transition region. We find, however, that the difference is marginal, and hence a single-swoop picture of fission, though not strictly valid in the transition region, can still be used in statistical-model calculations.
Fission of 238U induced by inelastic scattering of 120 MeV α particles
NASA Astrophysics Data System (ADS)
Back, B. B.; Shotter, A. C.; Symons, T. J. M.; Bice, A.; Gelbke, C. K.; Awes, T. C.; Scott, D. K.
1981-03-01
The fission decay of 238U has been measured as a function of excitation energy in inelastic scattering of 120 MeV α particles. Total kinetic energies and masses of the fission fragments were measured by the double-energy method. It is observed that the total kinetic energy EK decreases and that the valley in the mass distribution is reduced when the excitation energy of the system is increased. No indication of anomalous total-kinetic-energy release in the region of the giant quadrupole resonance has been found. A qualitative interpretation of the data is given on the basis of a static scission-point model. NUCLEAR REACTIONS, FISSION 238U(α,α′f), E=120 MeV, θ=16°; measured total kinetic energy and fission-fragment mass distributions as a function of excitation energy.
Visualizing Nuclear Scission through a Multifield Extension of Topological Analysis.
Duke, D; Carr, H; Knoll, A; Schunck, N; Hai Ah Nam; Staszczak, A
2012-12-01
In nuclear science, density functional theory (DFT) is a powerful tool to model the complex interactions within the atomic nucleus, and is the primary theoretical approach used by physicists seeking a better understanding of fission. However, DFT simulations result in complex multivariate datasets in which it is difficult to locate the crucial 'scission' point at which one nucleus fragments into two, and to identify the precursors to scission. The Joint Contour Net (JCN) has recently been proposed as a new data structure for the topological analysis of multivariate scalar fields, analogous to the contour tree for univariate fields. This paper reports the analysis of DFT simulations using the JCN, the first application of the JCN technique to real data. It makes three contributions to visualization: (i) a set of practical methods for visualizing the JCN, (ii) new insight into the detection of nuclear scission, and (iii) an analysis of aesthetic criteria to drive further work on representing the JCN. PMID:26357109
NASA Astrophysics Data System (ADS)
Piessens, M.; Jacobs, E.; Pommé, S.; De Frenne, D.
1993-05-01
Post- and preneutron-emission mass and kinetic-energy distributions of the fragments emitted in the photofission of 232Th with bremsstrahlung endpoint energies of 6.44, 7.33, 8.35, 9.31, 11.13 and 13.15 MeV have been studied. Energy-correlation and γ-spectrometric measurements were performed. Sb, Ru and Cd were separated chemically to determine postneutron yields in the symmetric mass region. The 232Th system predominantly splits in an asymmetric way, with a maximum yield for heavy fragments in the region of mass 140. An enhanced yield around heavy mass 134 is observed, becoming increasingly important with increasing compound-nucleus excitation energy. For 6.44 and 7.33 MeV bremsstrahlung-induced fission no symmetric component in the mass distribution could be observed. For the higher endpoint energies symmetric fission becomes more and more evident. From the symmetric fission yields at different excitation energies, using barrier-penetration calculations, the height of the symmetric fission barrier is estimated to be of the order of 7.5 to 7.7 MeV. The total fragment kinetic energy shows a minimum for symmetric splits and a maximum for splits with the heavy mass in the vicinity of mass 132. It increases with increasing excitation energy of the 232Th compound nucleus. This effect is especially pronounced in the energy region just above the barrier. It is observed for all masses, but mass splits with the heavy mass in the vicinity of mass 132 show the strongest effect. The fragment mass distributions for 232Th(γ, f) show a clear difference when compared with those for α-particle-accompanied fission of 235U. Our results are interpreted in the framework of the Brosa fission-channels model and the scission-point model. They also provide information concerning the dissipation of collective energy into the intrinsic degrees of freedom during the transition from saddle to scission point.
Analytic Modeling Deterministic Model
Shihada, Basem
Lecture slides on analytic (deterministic) modeling, covering the infinite population model, timing diagrams, and analytic results for the utilization factor U and the response time T, with performance examples where exact analytic results are available.
ERIC Educational Resources Information Center
Lesh, Richard; Carmona, Guadalupe; Post, Thomas
In this workshop, we will continue to reflect on a models and modeling perspective to understand how students and teachers learn and reason about real life situations encountered in a mathematics and science classroom. We will discuss the idea of a model as a conceptual system that is expressed by using external representational media, and that is…
Models, Fiction, and Fictional Models
NASA Astrophysics Data System (ADS)
Liu, Chuang
2014-03-01
The following sections are included: * Introduction * Why Most Models in Science Are Not Fictional * Typically Fictional Models in Science * Modeling the Unobservable * Fictional Models for the Unobservable? * References
ten Cate, Jacob M
2015-01-01
Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine which lead to dental caries. It aimed to deepen our understanding of the fluoride mode of action and was also utilized for the formulation of oral care products. In addition, we made use of intra-oral (in situ) models to study other features of the oral environment that drive the de/remineralization balance in individual patients. This model addressed basic questions, such as how enamel and dentine are affected by challenges in the oral cavity, as well as practical issues related to fluoride toothpaste efficacy. The observation that perhaps fluoride is not sufficiently potent to reduce dental caries in present-day society triggered us to expand our knowledge of the bacterial aetiology of dental caries. For this we developed the Amsterdam Active Attachment biofilm model. Different from studies on planktonic ('single') bacteria, this biofilm model captures bacteria in a habitat similar to dental plaque. With data from the combination of these models, it should be possible to study the separate processes which together may lead to dental caries. Products and novel agents that interfere with either of the processes could also be evaluated. Having these separate models in place, we suggest designing computer models to encompass the available information. Models, but also role models, are of the utmost importance in guiding research and researchers. PMID:25871413
Niche Modeling: Model Evaluation
Peterson, A. Townsend
2012-08-29
Ecological niche modeling has become a very popular tool in ecological and biogeographic studies across broad extents. The tool is used in hundreds of publications each year now, but some fundamental aspects of the approach have seen a fair amount...
Quantum Circuit Model Topological Model
Rowell, Eric C.
Slides for a talk on topological quantum computation (Eric Rowell, Texas A&M University, October 2010), comparing the quantum circuit model and the topological model; the outline begins with the quantum circuit model (gates, circuits).
MODEL DEVELOPMENT - DOSE MODELS
Model Development
Humans are exposed to mixtures of chemicals from multiple pathways and routes. These exposures may result from a single event or may accumulate over time if multiple exposure events occur. The traditional approach of assessing risk from a single chemica...
NASA Astrophysics Data System (ADS)
Li, Qin; Zhao, Yongxin; Wu, Xiaofeng; Liu, Si
There can be many models specifying different aspects of the same system, each with a bias towards one aspect. These models often overlap in specific aspects even though they have different expressions. A specification written in one model can be refined by introducing additional information from other models. This paper proposes the concept of promoting models, a methodology for obtaining refinements with support from cooperating models. It refines a primary model by integrating information from a secondary model. The promotion principle is not merely an academic point but a reliable and robust engineering technique that can be used to develop software and hardware systems. It can also check the consistency between two specifications from different models. A case study modeling a simple online shopping system through the cooperation of the guarded-design model and the CSP model illustrates the practicability of the promotion principle.
NSDL National Science Digital Library
Betty Blecha
The Fair model web site includes a freely available United States macroeconomic econometric model and a multicountry econometric model. The models run on the Windows OS. Instructors can use the models to teach forecasting, run policy experiments, and evaluate historical episodes of macroeconomic behavior. The web site includes extensive documentation for both models. The simulation is intended for upper-division economics courses in macroeconomics or econometrics. The principal developer is Ray Fair at Yale University.
ERIC Educational Resources Information Center
Levenson, Harold E.; Hurni, Andre
1978-01-01
Suggests building models as a way to reinforce and enhance related subjects such as architectural drafting, structural carpentry, etc., and discusses time, materials, scales, tools or equipment needed, how to achieve realistic special effects, and the types of projects that can be built (model of complete building, a panoramic model, and model…
NSDL National Science Digital Library
2012-06-26
In this activity, PVC pipe, plastic water bottles and vinyl tubing are used to make a simple working toilet model. The model shows the role of a siphon in the flushing of a toilet. Educators can pre-assemble this model and use it for demonstration purposes or engage learners in the model building process.
NSDL National Science Digital Library
Shirley Watt Ireton
2003-01-01
Chapter 1 defines and discusses models in a broad, and perhaps unusual, way. In particular, the chapter stresses the framework of personal models that underlie science and learning across fields. Subsequent chapters will deal more with particular kinds of expressed models that are important in science and science teaching: physical models, analog models and plans, mathematical models, and computer simulations. Throughout, the book examines how all models are important to science, how they are used, and how to use them effectively. They can and should be used not only to teach science, but also to teach students something about the process of learning and about the nature of knowledge itself.
NASA Astrophysics Data System (ADS)
Aritomo, Y.; Hagino, K.; Nishio, K.; Chiba, S.
2012-04-01
In order to describe heavy-ion fusion reactions around the Coulomb barrier with an actinide target nucleus, we propose a model which combines the coupled-channels approach and a fluctuation-dissipation model for dynamical calculations. This model takes into account couplings to the collective states of the interacting nuclei in the penetration of the Coulomb barrier and the subsequent dynamical evolution of a nuclear shape from the contact configuration. In the fluctuation-dissipation model with a Langevin equation, the effect of nuclear orientation at the initial impact on the prolately deformed target nucleus is considered. Fusion-fission, quasifission, and deep quasifission are separated as different Langevin trajectories on the potential energy surface. Using this model, we analyze the experimental data for the mass distribution of fission fragments (MDFF) in the reactions of 34,36S + 238U and 30Si + 238U at several incident energies around the Coulomb barrier. We find that the time scale in the quasifission as well as the deformation of fission fragments at the scission point are different between the 30Si + 238U and 36S + 238U systems, causing different mass asymmetries of the quasifission.
Models for Modeling* Michael Weisberg
Weisberg, Michael
Models for Modeling* Michael Weisberg, University of Pennsylvania. Draft of March, 2008. (5a) Under acre-feet of fresh water. The plan's critics worried that it would destroy commercial fisheries, render scale model of the San Francisco Bay. This wasn't any ordinary scale model. It was the "San Francisco Bay
Models-3 is a third generation air quality modeling system that contains a variety of tools to perform research and analysis of critical environmental questions and problems. These tools provide regulatory analysts and scientists with quicker results, greater scientific accuracy ...
This presentation presented information on entrainment models. Entrainment models use entrainment hypotheses to express the continuity equation. The advantage is that plume boundaries are known. A major disadvantage is that the problems that can be solved are rather simple. The ...
NASA Technical Reports Server (NTRS)
Rubesin, Morris W.
1987-01-01
Recent developments at several levels of statistical turbulence modeling applicable to aerodynamics are briefly surveyed. Emphasis is on examples of model improvements for transonic, two-dimensional flows. Experience with the development of these improved models is cited to suggest methods of accelerating the modeling process necessary to keep abreast of the rapid movement of computational fluid dynamics into the computation of complex three-dimensional flows.
NSDL National Science Digital Library
Each model organism has its own advantages and disadvantages. Choosing an appropriate model depends on the question being asked. Many laboratories find it useful to perform parallel experiments in two or more model systems to understand different aspects of a biochemical process. This animation from Cold Spring Harbor Laboratory's Dolan DNA Learning Center presents Model Organisms through a series of illustrations of the processes involved.
Phoenix (formerly referred to as the Second Generation Model or SGM) is a global general equilibrium model designed to analyze energy-economy-climate related questions and policy implications in the medium- to long-term. This model disaggregates the global economy into 26 industr...
ERIC Educational Resources Information Center
James, W. G. G.
1970-01-01
Discusses the historical development of both the wave and the corpuscular photon model of light. Suggests that students should be informed that the two models are complementary and that each model successfully describes a wide range of radiation phenomena. Cites 19 references which might be of interest to physics teachers and students. (LC)
Gary Hall; Ranabir Gupta
1991-01-01
It is argued that the dynamics of an application domain is best modeled as patterns of change in the entities that make up the domain. An abstraction mechanism for semantic data models is described which represents the transition of domain entities among entity classes. The model of transitions is related to a general computational formalism with well-understood properties. ...
Sadar, A.J.
1993-03-01
Mathematical modeling of air pollution dispersion has been performed for many years to estimate the impact of source emissions on air quality. EPA provides guidance on choosing appropriate computer models, such as COMPLEX I for regulatory applications. The agency says several models are suitable for predicting air quality impacts in most situations.
Hydrological models are mediating models
NASA Astrophysics Data System (ADS)
Babel, L. V.; Karssenberg, D.
2013-08-01
Despite the increasing role of models in hydrological research and decision-making processes, only a few accounts of the nature and function of models exist in hydrology. Earlier considerations have traditionally been conducted while making a clear distinction between physically-based and conceptual models. A new philosophical account, primarily based on the fields of physics and economics, transcends classes of models and scientific disciplines by considering models as "mediators" between theory and observations. The core of this approach lies in identifying models as (1) being only partially dependent on theory and observations, (2) integrating non-deductive elements in their construction, and (3) carrying the role of instruments of scientific enquiry about both theory and the world. The applicability of this approach to hydrology is evaluated in the present article. Three widely used hydrological models, each showing a different degree of apparent physicality, are confronted with the main characteristics of the "mediating models" concept. We argue that, irrespective of their kind, hydrological models depend on both theory and observations, rather than merely on one of these two domains. Their construction additionally involves a large number of miscellaneous, external ingredients, such as past experiences, model objectives, the knowledge and preferences of the modeller, and hardware and software resources. We show that hydrological models fulfil the role of instruments in scientific practice by mediating between theory and the world. It follows from these considerations that the traditional distinction between physically-based and conceptual models is too simplistic and refers at best to the stage at which theory and observations steer model construction. The large variety of ingredients involved in model construction deserves closer attention, as it is rarely explicitly presented in the peer-reviewed literature.
We believe that devoting more attention to identifying and communicating the many factors involved in model development might increase the transparency of model building.
Model Experiments and Model Descriptions
NASA Technical Reports Server (NTRS)
Jackman, Charles H.; Ko, Malcolm K. W.; Weisenstein, Debra; Scott, Courtney J.; Shia, Run-Lie; Rodriguez, Jose; Sze, N. D.; Vohralik, Peter; Randeniya, Lakshman; Plumb, Ian
1999-01-01
The Second Workshop on Stratospheric Models and Measurements (M&M II) is the continuation of the effort started in the first workshop (M&M I, Prather and Remsberg [1993]) held in 1992. As originally stated, the aim of M&M is to provide a foundation for establishing the credibility of stratospheric models used in environmental assessments of the ozone response to chlorofluorocarbons, aircraft emissions, and other climate-chemistry interactions. To accomplish this, a set of measurements of the present-day atmosphere was selected. The intent was that successful simulation of this set of measurements should become the prerequisite for accepting these models as reliable predictors of future ozone behavior. This section is divided into two parts: model experiments and model descriptions. For the model experiments, participants were charged with designing a number of experiments that would use observations to test whether models use the correct mechanisms to simulate the distributions of ozone and other trace gases in the atmosphere. The purpose is closely tied to the need to reduce uncertainties in the model-predicted responses of stratospheric ozone to perturbations. The specifications for the experiments were sent to the modeling community in June 1997. Twenty-eight modeling groups responded to the request for input. The first part of this section discusses the different modeling groups, along with the experiments performed. The second part gives brief descriptions of each model as provided by the individual modeling groups.
NSDL National Science Digital Library
Modeling & Simulation is a journal published by The Society for Modeling and Simulation International. The Society has made its 2004 Modeling and Simulation Resource Guide available as a free download. The directory provides descriptions and contact information for the many modeling and simulation software packages currently available, as well as listings for various modeling and simulation organizations worldwide. Two guest articles describe techniques for applying real-time simulation to complex simulations. Previously published articles are also posted in the online archive.
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing
1991-01-01
The performance of existing two-equation eddy viscosity models was examined. An effort was made to develop better models for near-wall turbulence using direct numerical simulations of plane channel and boundary layer flows. The asymptotic near-wall behavior of turbulence was used to examine the problems of current second-order closure models and to develop new models with the correct near-wall behavior. Rapid Distortion Theory was used to analytically study the effects of mean deformation on turbulence and to obtain analytical solutions for the spectrum tensor, Reynolds stress tensor, anisotropy tensor, and its invariants, which can be used in turbulence model development. The potential of renormalization group theory in turbulence modeling was studied, along with compressible turbulent flows and the modeling of bypass transition.
Microarchitectural and Application Modeling
Lee, Benjamin C.
Slide outline (Lawrence Livermore National Laboratory): Motivation & Background; Parameter Space Exploration; Microarchitectural Modeling; Regression Modeling; Application Modeling; Conclusion.
Functions and Models: Mathematical Models
NSDL National Science Digital Library
Michael Freeze
Describe the process of mathematical modeling; name and describe some methods of modeling; classify a symbolically represented function as one of the elementary algebraic or transcendental functions; appraise the suitability of different models for interpreting a given set of data.
New Fission Fragment Distributions and r-Process Origin of the Rare-Earth Elements
NASA Astrophysics Data System (ADS)
Goriely, S.; Sida, J.-L.; Lemaître, J.-F.; Panebianco, S.; Dubray, N.; Hilaire, S.; Bauswein, A.; Janka, H.-T.
2013-12-01
Neutron star (NS) merger ejecta offer a viable site for the production of heavy r-process elements with nuclear mass numbers A ≳ 140. The crucial role of fission recycling is responsible for the robustness of this site against many astrophysical uncertainties, but calculations sensitively depend on nuclear physics. In particular, the fission fragment yields determine the creation of 110 ≲ A ≲ 170 nuclei. Here, we apply a new scission-point model, called SPY, to derive the fission fragment distribution (FFD) of all relevant neutron-rich, fissioning nuclei. The model predicts a doubly asymmetric FFD in the abundant A ≈ 278 mass region that is responsible for the final recycling of the fissioning material. Using ejecta conditions based on relativistic NS merger calculations, we show that this specific FFD leads to a production of the A ≈ 165 rare-earth peak that is nicely compatible with the abundance patterns in the Sun and metal-poor stars. This new finding further strengthens the case of NS mergers as a possible dominant origin of r nuclei with A ≳ 140.
Braby, L.A.
1990-09-01
The biological effects of ionizing radiation exposure are the result of a complex sequence of physical, chemical, biochemical, and physiological interactions. One way to begin a search for an understanding of health effects of radiation is through the development of phenomenological models of the response. Many models have been presented and tested in the slowly evolving process of characterizing cellular response. A range of models covering different endpoints and phenomena has developed in parallel. Many of these models employ similar assumptions about some underlying processes while differing about the nature of others. An attempt is made to organize many of the models into groups with similar features and to compare the consequences of those features with the actual experimental observations. It is assumed that by showing that some assumptions are inconsistent with experimental observations, the job of devising and testing mechanistic models can be simplified. 43 refs., 13 figs.
Burr, M.T.
1995-04-01
As developers make progress on independent power projects around the world, models for success are beginning to emerge. Different models are evolving to create ownership structures that accommodate a complex system of regulatory requirements. Other frameworks make use of previously untapped fuel resources or establish new sources of financing; however, not all models may be applied to a given project. This article explores how developers are finding new alternatives for overcoming development challenges that are common to projects in many countries.
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Moncrieff, Mitchell; Einaud, Franco (Technical Monitor)
2001-01-01
Numerical cloud models have been developed and applied extensively to study cloud-scale and mesoscale processes during the past four decades. The distinctive aspect of these cloud models is their ability to treat explicitly (or resolve) cloud-scale dynamics. This requires the cloud models to be formulated from the non-hydrostatic equations of motion that explicitly include the vertical acceleration terms, since the vertical and horizontal scales of convection are similar. Such models are also necessary in order to allow gravity waves, such as those triggered by clouds, to be resolved explicitly. In contrast, the hydrostatic approximation, usually applied in global or regional models, does not allow the presence of such gravity waves. In addition, the availability of exponentially increasing computer capabilities has resulted in time integrations increasing from hours to days, domain grid boxes (points) increasing from fewer than 2,000 to more than 2,500,000 at 500 to 1000 m resolution, and 3-D models becoming increasingly prevalent. The cloud-resolving model is now at a stage where it can provide reasonably accurate statistical information on the sub-grid, cloud-resolving processes poorly parameterized in climate models and numerical prediction models.
NSDL National Science Digital Library
Dr. Janet M Dubinsky (University of Minnesota Neuroscience)
2011-07-20
This lesson allows students to apply engineering principles in the science classroom. Students learn how neurons convey information through designing and building a physical model of neurotransmission.
NSDL National Science Digital Library
In this lesson, students will explore volcanoes by constructing models and reflect upon their learning by drawing sketches of their models. Once they have finished making their models, they will experiment with making their volcanoes erupt. They will observe how eruption changes the original form of their volcano models. In this way, students see firsthand how this type of phenomenon creates physical change. While students at this level may struggle to understand larger and more abstract geographical concepts, they will work directly with material that will help them build a foundation for understanding the phenomena that sculpt the earth.
V. Chipman
2002-10-05
The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the "Multiscale Thermohydrologic Model" (BSC 2001), use the wall heat fractions output by the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. The purposes of Revision 01 of the Ventilation Model are: (1) To validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII-10Q, Models (Section 7.0). (2) To satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a), specifically to demonstrate, with respect to the ANSYS ventilation model, the adequacy of the discretization (Section 6.2.3.1) and the downstream applicability of the model results (i.e., wall heat fractions) to initialize post-closure thermal models (Section 6.6). (3) To satisfy the remainder of KTI agreement TEF 2.07 (Reamer and Williams 2001b), specifically to provide the results of post-test ANSYS modeling of the Atlas Facility forced convection tests (Section 7.1.2). This portion of the model report also serves as a validation exercise per AP-SIII.10Q, Models, for the ANSYS ventilation model. (4) To further satisfy KTI agreements RDTME 3.01 and 3.14 (Reamer and Williams 2001a) by providing the source documentation referred to in the KTI Letter Report, "Effect of Forced Ventilation on Thermal-Hydrologic Conditions in the Engineered Barrier System and Near Field Environment" (Williams 2002), specifically to provide the results of the MULTIFLUX model, which simulates the coupled processes of heat and mass transfer in and around waste emplacement drifts during periods of forced ventilation. This portion of the model report is presented as an Alternative Conceptual Model with a numerical application, and also provides corroborative results used for model validation purposes (Sections 6.3 and 6.4).
Model Selection for Geostatistical Models
Hoeting, Jennifer A.; Davis, Richard A.; Merton, Andrew A.; Thompson, Sandra E.
2006-02-01
We consider the problem of model selection for geospatial data. Spatial correlation is typically ignored in the selection of explanatory variables and this can influence model selection results. For example, the inclusion or exclusion of particular explanatory variables may not be apparent when spatial correlation is ignored. To address this problem, we consider the Akaike Information Criterion (AIC) as applied to a geostatistical model. We offer a heuristic derivation of the AIC in this context and provide simulation results that show that using AIC for a geostatistical model is superior to the often used approach of ignoring spatial correlation in the selection of explanatory variables. These ideas are further demonstrated via a model for lizard abundance. We also employ the principle of minimum description length (MDL) to variable selection for the geostatistical model. The effect of sampling design on the selection of explanatory covariates is also explored.
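The abstract's central point, that ignoring spatial correlation can distort AIC-based variable selection, can be sketched numerically. The following is a minimal illustration rather than the authors' setup: the synthetic data, the exponential covariance, and treating the range parameter as known are all assumptions for demonstration. It compares the Gaussian AIC of a regression fit assuming independent errors against a generalized-least-squares fit that accounts for the spatial covariance.

```python
import numpy as np

def gls_aic(y, X, Sigma, n_cov_params):
    """Gaussian AIC of a generalized-least-squares fit with error covariance
    proportional to Sigma; the variance scale is profiled out analytically."""
    n = len(y)
    Si = np.linalg.inv(Sigma)
    beta = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
    r = y - X @ beta
    s2 = (r @ Si @ r) / n                      # profiled variance scale
    _, logdet = np.linalg.slogdet(Sigma)
    loglik = -0.5 * (n * np.log(2.0 * np.pi * s2) + logdet + n)
    k = X.shape[1] + 1 + n_cov_params          # betas + variance + cov. params
    return 2.0 * k - 2.0 * loglik

rng = np.random.default_rng(1)
n = 100
coords = rng.uniform(0.0, 10.0, size=(n, 2))   # random sampling locations
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
Sigma_true = np.exp(-dist / 2.0)               # exponential spatial correlation
L = np.linalg.cholesky(Sigma_true + 1e-8 * np.eye(n))
x1 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + L @ rng.normal(size=n)    # spatially correlated errors

X = np.column_stack([np.ones(n), x1])
aic_iid = gls_aic(y, X, np.eye(n), n_cov_params=0)       # correlation ignored
aic_spatial = gls_aic(y, X, Sigma_true, n_cov_params=1)  # range counted as fitted
```

On data with strong spatial correlation, the spatial model attains a much higher likelihood, so its AIC is lower despite the extra covariance parameter; in a full analysis the range parameter would itself be estimated by maximum likelihood.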
Forbus, Kenneth D
2011-07-01
Qualitative modeling concerns the representations and reasoning that people use to understand continuous aspects of the world. Qualitative models formalize everyday notions of causality and provide accounts of how to ground symbolic, relational representations in perceptual processes. This article surveys the basic ideas of qualitative modeling and their applications from a cognitive science perspective. It describes the basic principles of qualitative modeling, and a variety of qualitative representations that have been developed for quantities and for relationships between them, providing a kind of qualitative mathematics. Three ontological frameworks for organizing modeling knowledge (processes, components, and fields) are summarized, along with research on automatically assembling models for particular tasks from such knowledge. Qualitative simulation and how it carves up time into meaningful units is discussed. We discuss several accounts of causal reasoning about dynamical systems, based on different choices of qualitative mathematics and ontology. Qualitative spatial reasoning is explored, both in terms of relational systems and visual reasoning. Applications of qualitative models of particular interest to cognitive scientists are described, including how they have been used to capture the expertise of scientists and engineers and how they have been used in education. Open questions and frontiers are also discussed, focusing on relationships between ideas developed in the qualitative modeling community and other areas of cognitive science. WIREs Cogn Sci 2011 2 374-391 DOI: 10.1002/wcs.115 For further resources related to this article, please visit the WIREs website. PMID:26302198
NSDL National Science Digital Library
Lacey, Michelle
This site, presented by the Department of Statistics at Yale University, gives an explanation, a definition and an example of probability models. Topics include components of probability models and the basic rules of probability. Overall, this is a great resource for any mathematics classroom studying statistics.
ERIC Educational Resources Information Center
Oh, Phil Seok; Oh, Sung Jin
2013-01-01
Modeling in science has been studied by education researchers for decades and is now being applied broadly in school. It is among the scientific practices featured in the "Next Generation Science Standards" ("NGSS") (Achieve Inc. 2013). This article describes modeling activities in an extracurricular science club in a high…
Climate models and model evaluation
Gates, W.L. [Lawrence Livermore National Lab., CA (United States)
1994-12-31
This brief overview addresses aspects of the nature, uses, evaluation and limitations of climate models. A comprehensive global modeling capability has been achieved only for the physical climate system, which is characterized by processes that serve to transport and exchange momentum, heat and moisture within and between the atmosphere, ocean and land surface. The fundamental aim of climate modeling, and the justification for the use of climate models, is the need to achieve a quantitative understanding of the operation of the climate system and to exploit any potential predictability that may exist.
NSDL National Science Digital Library
2012-06-26
In this activity, learners explore the relative sizes and distances of objects in the solar system. Without being informed of the expected product, learners will make a Play-doh model of the Earth-Moon system, scaled to size and distance. The facilitator reveals the true identity of the system at the conclusion of the activity. During the construction phase, learners try to guess what members of the solar system their model represents. Each group receives different amounts of Play-doh, with each group assigned a color (red, blue, yellow, white). At the end, groups set up their models and inspect the models of other groups. They report patterns of scale that they notice; as the amount of Play-doh increases, for example, so do the size and distance of the model. This resource guide includes background information about the Earth to Moon ratio and solar eclipses.
Veronica J. Rutledge
2013-01-01
The absence of industrial scale nuclear fuel reprocessing in the U.S. has precluded the necessary driver for developing the advanced simulation capability now prevalent in so many other countries. Thus, it is essential to model complex series of unit operations to simulate, understand, and predict inherent transient behavior and feedback loops. A capability of accurately simulating the dynamic behavior of advanced fuel cycle separation processes will provide substantial cost savings and many technical benefits. The specific fuel cycle separation process discussed in this report is the off-gas treatment system. The off-gas separation consists of a series of scrubbers and adsorption beds to capture constituents of interest. Dynamic models are being developed to simulate each unit operation involved so each unit operation can be used as a stand-alone model and in series with multiple others. Currently, an adsorption model has been developed within Multi-physics Object Oriented Simulation Environment (MOOSE) developed at the Idaho National Laboratory (INL). Off-gas Separation and REcoverY (OSPREY) models the adsorption of off-gas constituents for dispersed plug flow in a packed bed under non-isothermal and non-isobaric conditions. Inputs to the model include gas, sorbent, and column properties, equilibrium and kinetic data, and inlet conditions. The simulation outputs component concentrations along the column length as a function of time from which breakthrough data is obtained. The breakthrough data can be used to determine bed capacity, which in turn can be used to size columns. It also outputs temperature along the column length as a function of time and pressure drop along the column length. Experimental data and parameters were input into the adsorption model to develop models specific for krypton adsorption. The same can be done for iodine, xenon, and tritium. The model will be validated with experimental breakthrough curves. 
Customers will be given access to OSPREY to use and evaluate the model.
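The breakthrough post-processing step described above, turning an outlet-concentration history into a bed capacity, is simple enough to sketch. The code below is an illustrative assumption, not OSPREY itself: the sigmoidal synthetic breakthrough curve, the flow rate, and the units are invented for demonstration of the integration.

```python
import numpy as np

def bed_capacity(t, c_out, c_in, flow_rate):
    """Amount adsorbed up to t[-1]: Q * integral of (c_in - c_out(t)) dt,
    evaluated with the trapezoidal rule.
    flow_rate [m^3/s] * concentration [mol/m^3] * time [s] -> mol."""
    deficit = c_in - c_out
    return flow_rate * float(np.sum((deficit[:-1] + deficit[1:]) * np.diff(t)) / 2.0)

# synthetic breakthrough curve: outlet concentration rises sigmoidally
t = np.linspace(0.0, 3600.0, 361)                     # s
c_in = 1.0                                            # mol/m^3 at the inlet
c_out = c_in / (1.0 + np.exp(-(t - 1800.0) / 200.0))  # mol/m^3 at the outlet
q = bed_capacity(t, c_out, c_in, flow_rate=1e-3)      # mol adsorbed
```

Dividing the adsorbed amount by the sorbent mass gives the capacity used for column sizing; with a real breakthrough curve the integral is simply evaluated over the measured (t, c_out) samples.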
Fission of actinides through quasimolecular shapes
NASA Astrophysics Data System (ADS)
Royer, Guy; Zhang, Hongfei; Eudes, Philippe; Moustabchir, Rachid; Moreau, Damien; Jaffré, Muriel; Morabit, Youssef; Particelli, Benjamin
2013-12-01
The potential energy of heavy nuclei has been calculated along the quasimolecular shape path from a generalized liquid drop model including the proximity energy, the charge and mass asymmetries, and the microscopic corrections. The potential barriers are multiple-humped. The second maximum is the saddle point; it corresponds to the transition from compact one-body shapes with a deep neck to two touching ellipsoids. The scission point lies at the end of an energy plateau well below the saddle point, where the effects of the nuclear attractive forces between the two separated fragments vanish. The energy on this plateau is the sum of the kinetic and excitation energies of the fragments. The shell and pairing corrections play an essential role in selecting the most probable fission path. The potential barrier heights agree with the experimental data, and the theoretical half-lives follow the trend of the experimental values. A third peak and a shallow third minimum appear in asymmetric decay paths when one fragment is close to a doubly magic quasi-spherical nucleus, while the smaller one changes from oblate to prolate shapes.
Modeling Quality Information within Business Process Models
Paech, Barbara
Modeling Quality Information within Business Process Models. Robert Heinrich, Alexander Kappe. Business process models are a useful means to document information about structure and behavior ... a literature and tool survey on modeling quality information within business process models. Keywords: Business ...
Woosley, S.E.; Weaver, T.A.
1981-12-29
Recent progress in understanding the observed properties of type I supernovae as a consequence of the thermonuclear detonation of white dwarf stars and the ensuing decay of the Ni-56 produced therein is reviewed. The expected nucleosynthesis and gamma-line spectra for this model of type I explosions and a model for type II explosions are presented. Finally, a qualitatively new approach to the problem of massive star death and type II supernovae based upon a combination of rotation and thermonuclear burning is discussed. While the theoretical results of existing models are predicated upon the assumption of a successful core bounce calculation and the neglect of such two-dimensional effects as rotation and magnetic fields the new model suggests an entirely different scenario in which a considerable portion of the energy carried by an equatorially ejected blob is deposited in the red giant envelope overlying the mantle of the star.
Energy models characterize the energy system, its evolution, and its interactions with the broader economy. The energy system consists of primary resources, including both fossil fuels and renewables; power plants, refineries, and other technologies to process and convert these r...
ERIC Educational Resources Information Center
Taylor, Emma
1991-01-01
A cheap and simple model that can be made and used by pupils to study the human breathing mechanism is presented. A list of needed materials, procedures for construction, possible refinements, and method of use are included. (KR)
NASA Astrophysics Data System (ADS)
Insepov, Z.; Norem, J.; Vetizer, S.; Mahalingam, S.
2011-12-01
Although vacuum arcs were first identified over 110 years ago, they are not yet well understood. We have since developed a model of breakdown and gradient limits that tries to explain, in a self-consistent way: arc triggering, plasma initiation, plasma evolution, surface damage, and gradient limits. We use simple PIC codes for modeling plasmas; molecular dynamics for modeling surface breakdown and surface damage; and mesoscale surface thermodynamics and finite element electrostatic codes to evaluate surface properties. Since any given experiment seems to have more variables than data points, we have tried to consider a wide variety of arcing (rf structures, e-beam welding, laser ablation, etc.) to help constrain the problem, and to concentrate on common mechanisms. While the mechanisms can be comparatively simple, modeling can be challenging.
NASA Technical Reports Server (NTRS)
Guillet, J. E.
1984-01-01
A reaction-kinetics-based model of the photodegradation process, which measures all important rate constants, and a computerized model capable of predicting the photodegradation rate and failure modes over a 30-year period were developed. It is shown that the computerized photodegradation model for polyethylene correctly predicts the failure of ELVAX 15 and cross-linked ELVAX 150 on outdoor exposure. It is indicated that cross-linking ethylene vinyl acetate (EVA) does not significantly change its degradation rate. It is shown that the effect of the stabilizer package is approximately equivalent on both polymers. The computerized model indicates that peroxide decomposers and UV absorbers are the most effective stabilizers. It is found that a combination of UV absorbers and a hindered amine light stabilizer (HALS) is the most effective stabilizer system.
Daniel, David J; Mc Pherson, Allen; Thorp, John R; Barrett, Richard; Clay, Robert; De Supinski, Bronis; Dube, Evi; Heroux, Mike; Janssen, Curtis; Langer, Steve; Laros, Jim
2011-01-14
A programming model is a set of software technologies that support the expression of algorithms and provide applications with an abstract representation of the capabilities of the underlying hardware architecture. The primary goals are productivity, portability and performance.
Curtis, S.B.
1990-09-01
Several models and theories are reviewed that incorporate the idea of radiation-induced lesions (repairable and/or irreparable) that can be related to molecular lesions in the DNA molecule. Usually the DNA double-strand or chromatin break is suggested as the critical lesion. In the models, the shoulder on the low-LET survival curve is hypothesized as being due to one (or more) of the following three mechanisms: (1) "interaction" of lesions produced by statistically independent particle tracks; (2) nonlinear (i.e., linear-quadratic) increase in the yield of initial lesions, and (3) saturation of repair processes at high dose. Comparisons are made between the various approaches. Several significant advances in model development are discussed; in particular, a description of the matrix formulation of the Markov versions of the RMR and LPL models is given. The more advanced theories have incorporated statistical fluctuations in various aspects of the energy-loss and lesion-formation process. An important direction is the inclusion of physical and chemical processes into the formulations by incorporating relevant track structure theory (Monte Carlo track simulations) and chemical reactions of radiation-induced radicals. At the biological end, identification of repair genes and how they operate as well as a better understanding of how DNA misjoinings lead to lethal chromosome aberrations are needed for appropriate inclusion into the theories. More effort is necessary to model the complex end point of radiation-induced carcinogenesis. 43 refs., 13 figs.
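The linear-quadratic shoulder mechanism mentioned in (2) is compact enough to state explicitly. A minimal sketch follows; the parameter values are illustrative, not taken from any of the reviewed models. The surviving fraction is S(D) = exp[-(αD + βD²)], where the βD² term, representing interaction of lesions from independent tracks, bends the low-LET survival curve downward at high dose.

```python
import numpy as np

def surviving_fraction(dose, alpha=0.2, beta=0.05):
    """Linear-quadratic survival model S(D) = exp(-(alpha*D + beta*D^2)).
    alpha*D: lesions from single tracks; beta*D^2: interaction of lesions
    from statistically independent tracks, which produces the shoulder."""
    dose = np.asarray(dose, dtype=float)
    return np.exp(-(alpha * dose + beta * dose ** 2))

doses = np.array([0.0, 1.0, 2.0, 4.0, 8.0])  # Gy (illustrative values)
S = surviving_fraction(doses)
# The effective slope -ln(S)/D grows with dose: 0.25 per Gy at 1 Gy
# versus 0.6 per Gy at 8 Gy, i.e. the curve bends downward (the shoulder).
```

On a semilog plot, ln S(D) = -(αD + βD²) is a downward-opening parabola: nearly linear with slope -α at low dose, steepening as the quadratic term takes over.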
NASA Astrophysics Data System (ADS)
Fryer, M. O.
The temperature measurements provided by thermocouples (TCs) are important for the operation of pressurized water reactors. During severe inadequate-core-cooling incidents, extreme temperatures may cause the type K TCs used for core exit temperature monitoring to perform poorly. A model of TC electrical behavior has been developed to determine how TCs react under extreme temperatures. The model predicts the voltage output of the TC and its impedance. A series of experiments was conducted on a length of type K thermocouple to validate the model. Impedance was measured at several temperatures between 22 °C and 1100 °C and at frequencies between dc and 10 MHz. The model was able to accurately predict impedance over this wide range of conditions. The average percentage difference between experimental data and the model was less than 6.5%. Experimental accuracy was ±2.5%. There is a striking difference between impedance versus frequency plots at 300 °C and at higher temperatures. This may be useful in validating TC data during accident conditions.
Slide outline: Introduction; Improved Model; Alternative Statistical Model: Weighted Least Squares and Generalized Least Squares; Current Model: Underlying Statistical Models. These properties hold only when the model is the right model; to be more specific, the standard statistical objective is the residual sum of squares, Σ_i (y_i − y(t_i; C, K))².
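The residual sum of squares above extends to weighted least squares when observations carry unequal variances. A minimal sketch, assuming a hypothetical linear model y(t; C, K) = C + K·t and inverse-variance weights (both invented here for illustration):

```python
# Weighted least squares for a two-parameter model y(t; C, K) = C + K*t.
# The weights w_i (taken here as inverse variances) generalize the plain
# objective sum_i (y_i - y(t_i; C, K))^2 quoted in the outline.

def wls_fit(t, y, w):
    """Minimize sum_i w_i*(y_i - (C + K*t_i))^2 via the 2x2 normal equations."""
    Sw = sum(w)
    St = sum(wi * ti for wi, ti in zip(w, t))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Stt = sum(wi * ti * ti for wi, ti in zip(w, t))
    Sty = sum(wi * ti * yi for wi, ti, yi in zip(w, t, y))
    det = Sw * Stt - St * St
    C = (Stt * Sy - St * Sty) / det
    K = (Sw * Sty - St * Sy) / det
    return C, K

# Noise-free data on the line y = 1 + 2t: any positive weights recover (1, 2).
t = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]
C, K = wls_fit(t, y, [1.0, 1.0, 4.0, 4.0])
```

With noisy data, the weights determine which observations dominate the fit; with exact data, as here, the estimates are weight-independent.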
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1990-01-01
Although powerful computers have allowed complex physical and manmade hardware systems to be modeled successfully, we have encountered persistent problems with the reliability of computer models for systems involving human learning, human action, and human organizations. This is not a misfortune; unlike physical and manmade systems, human systems do not operate under a fixed set of laws. The rules governing the actions allowable in the system can be changed without warning at any moment, and can evolve over time. That the governing laws are inherently unpredictable raises serious questions about the reliability of models when applied to human situations. In these domains, computers are better used, not for prediction and planning, but for aiding humans. Examples are systems that help humans speculate about possible futures, systems that offer advice about possible actions in a domain, systems that gather information from the networks, and systems that track and support work flows in organizations.
2012-01-01
Background This work focuses on the computational modelling of osteomyelitis, a bone pathology caused by bacterial infection (mostly Staphylococcus aureus). The infection alters the RANK/RANKL/OPG signalling dynamics that regulates osteoblast and osteoclast behaviour in bone remodelling, i.e. the resorption and mineralization activity. The infection rapidly leads to severe bone loss, necrosis of the affected portion, and it may even spread to other parts of the body. On the other hand, osteoporosis is not a bacterial infection but similarly is a defective bone pathology arising due to imbalances in the RANK/RANKL/OPG molecular pathway, and due to the progressive weakening of bone structure. Results Since both osteoporosis and osteomyelitis cause loss of bone mass, we focused on comparing the dynamics of these diseases by means of computational models. Firstly, we performed meta-analysis on gene expression data of normal, osteoporotic and osteomyelitis bone conditions. We mainly focused on RANKL/OPG signalling, the TNF and TNF receptor superfamilies and the NF-kB pathway. Using information from the gene expression data we estimated parameters for a novel model of osteoporosis and of osteomyelitis. Our models could be seen as a hybrid ODE and probabilistic verification modelling framework which aims at investigating the dynamics of the effects of the infection in bone remodelling. Finally we discuss different diagnostic estimators defined by formal verification techniques, in order to assess different bone pathologies (osteopenia, osteoporosis and osteomyelitis) in an effective way. Conclusions We present a modeling framework able to reproduce aspects of the different defective bone remodelling dynamics of osteomyelitis and osteoporosis. We report that the verification-based estimators are meaningful in the light of a feed-forward between computational medicine and clinical bioinformatics. PMID:23095605
Woosley, S.E.; Weaver, T.A.
1980-01-01
Recent progress in understanding the observed properties of Type I supernovae as a consequence of the thermonuclear detonation of white dwarf stars and the ensuing decay of the 56Ni produced therein is reviewed. Within the context of this model for Type I explosions and the 1978 model for Type II explosions, the expected nucleosynthesis and gamma-line spectra from both kinds of supernovae are presented. Finally, a qualitatively new approach to the problem of massive star death and Type II supernovae based upon a combination of rotation and thermonuclear burning is discussed.
NSDL National Science Digital Library
2009-04-14
A human is a complicated organism, and it is considered unethical to do many kinds of experiments on human subjects. For these reasons, biologists often use simpler "model" organisms that are easy to keep and manipulate in the laboratory. Despite ob
Although air quality models have been applied historically to address issues specific to ambient air quality standards (i.e., one criteria pollutant at a time) or welfare (e.g., acid deposition or visibility impairment), they are inherently multipollutant based. Therefore, in pri...
NSDL National Science Digital Library
Kirsten Menking
The Daisyworld model created by Andrew Watson and James Lovelock (1983, Tellus, v. 35B, p. 284-289) is a wonderful example of a self-regulating system incorporating positive and negative feedbacks. The model consists of a planet on which black and white daisies are growing. The growth of these daisies is governed by a parabola-shaped growth function regulated by planetary temperature; growth is set to zero for temperatures less than 5 °C or greater than 40 °C and is optimized at 22.5 °C. The model explores the effect of a steadily increasing solar luminosity on the growth of daisies and the resulting planetary temperature. The growth function for the daisies allows them to modulate the planet's temperature for many years, warming it early on as black daisies grow, and cooling it later as white daisies grow. Eventually, the solar luminosity increases beyond the daisies' capability to modulate the temperature and they die out, leading to a rapid rise in the planetary temperature. Students read Watson and Lovelock's original paper, and then use STELLA to create their own Daisyworld model with which they can experiment. Experiments include changing the albedos of the daisies, changing their death rates, and changing the rate at which energy is conducted from one part of the planet to another. In all cases, students keep track of daisy populations and of planetary temperature over time.
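The growth law just described can be written down directly. A minimal sketch, assuming the parabola is normalized to a peak growth rate of 1 (the STELLA model's actual scaling is not reproduced here):

```python
# Daisyworld growth function: parabolic in temperature T (deg C), zero
# below 5 C and above 40 C, maximal at 22.5 C. The peak value of 1 is an
# assumption of this sketch.

def daisy_growth(T):
    if T < 5.0 or T > 40.0:
        return 0.0
    return 1.0 - ((T - 22.5) / 17.5) ** 2
```

The half-width 17.5 °C is fixed by the stated zeros at 5 °C and 40 °C, so the only free choice here is the normalization.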
Ensemble forecasting has been used for operational numerical weather prediction in the United States and Europe since the early 1990s. An ensemble of weather or climate forecasts is used to characterize the two main sources of uncertainty in computer models of physical systems: ...
ERIC Educational Resources Information Center
Ebert, James R.; Elliott, Nancy A.; Hurteau, Laura; Schulz, Amanda
2004-01-01
Students must understand the fundamental process of convection before they can grasp a wide variety of Earth processes, many of which may seem abstract because of the scales on which they operate. Presentation of a very visual, concrete model prior to instruction on these topics may facilitate students' understanding of processes that are largely…
Hajj-Boutros, J.
1989-04-01
An LRS Bianchi type II cosmological model is built with a state equation that is a function of the cosmic time t. The ratio p/μ is 1/3 when t → 0 and is insignificant when t → ∞. Thus, the matter content behaves like radiation for small t and like dust for large t.
ERIC Educational Resources Information Center
Goodwyn, Lauren; Salm, Sarah
2007-01-01
Teaching the anatomy of the muscle system to high school students can be challenging. Students often learn about muscle anatomy by memorizing information from textbooks or by observing plastic, inflexible models. Although these mediums help students learn about muscle placement, the mediums do not facilitate understanding regarding integration of…
A. Alsaed
2004-09-14
The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models as given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural, rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential for various in-package and external configurations and to calculate lower-bound tolerance limit (LBTL) values and determine range of applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of this analysis is to document the criticality computational method.
The criticality computational method will be used for evaluating the criticality potential of configurations of fissionable materials (in-package and external to the waste package) within the repository at Yucca Mountain, Nevada for all waste packages/waste forms. The criticality computational method is also applicable to preclosure configurations. The criticality computational method is a component of the methodology presented in ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003). How the criticality computational method fits in the overall disposal criticality analysis methodology is illustrated in Figure 1 (YMP 2003, Figure 3). This calculation will not provide direct input to the total system performance assessment for license application. It is to be used as necessary to determine the criticality potential of configuration classes as determined by the configuration probability analysis of the configuration generator model (BSC 2003a).
Models in algebraic statistics Toric models
Little, John B.
Slide outline: 1. Models in algebraic statistics; 2. Toric models; 3. Maximum likelihood estimation and inference.
ATMOSPHERIC MODELING: MODEL AND ACCURACY
The development of models to assess the emission control requirements of primary precursor pollutants in the production of photochemical oxidants has been underway for approximately 20 years. Over the period there has been a considerable increase in our understanding of the basic...
NASA Astrophysics Data System (ADS)
Kleimann, Jens
Most of the existing models for the solar wind's interaction with the ambient interstellar medium focus on the upwind direction, where (magneto-)hydrodynamic pressure compresses the different shock surfaces into a relatively small radial interval, and for which the Voyager spacecraft allowed predictions to be verified through comparison with in-situ data. The opposing downwind region, which traditionally had not received quite as much attention, has recently seen a surge in interest triggered by new ground-based observations in the context of cosmic ray anisotropies. Furthermore, the heliotail might also lead to a signature in the all-sky maps of energetic neutral atom fluxes observed with IBEX. In this overview talk, I shall present and discuss several models of varying complexity aiming at the downwind region, and compare their predictions for the physical properties of the heliotail, in particular its shape and spatial extent.
NSDL National Science Digital Library
2009-09-10
This interactive simulation gives students practice in the operation and the physical parts of a real micrometer, a measuring device that employs a screw to amplify distances that are too small to measure easily. The accuracy of a micrometer derives from the accuracy of the thread that is at its heart. The basic operating principle of a micrometer is that the rotation of an accurately made screw can be directly and precisely correlated to a certain amount of axial movement (and vice-versa), through the constant known as the screw's lead. The Micrometer model was created using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double click the ejs_ntnu_Micrometer.jar file to run the program (Java must be installed).
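The operating principle reduces to one relation: axial travel equals rotations times the screw's lead. A tiny sketch, assuming a typical metric lead of 0.5 mm per turn (the lead used in the simulation is not stated):

```python
# Micrometer screw relation: axial travel = rotations * lead.
LEAD_MM = 0.5  # assumed lead: axial advance per full rotation, in mm

def axial_travel_mm(rotations):
    return rotations * LEAD_MM

def rotations_for_travel(travel_mm):
    return travel_mm / LEAD_MM
```

Reading a micrometer inverts this relation: a thimble graduated into 50 divisions on a 0.5 mm lead resolves 0.01 mm per division.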
Fossion, Ruben
2010-09-10
The atomic nucleus is a typical example of a many-body problem. On the one hand, the number of nucleons (protons and neutrons) that constitute the nucleus is too large to allow for exact calculations. On the other hand, the number of constituent particles is too small for the individual nuclear excitation states to be explained by statistical methods. Another problem, particular to the atomic nucleus, is that the nucleon-nucleon (n-n) interaction is not one of the fundamental forces of Nature, and is hard to put in a single closed equation. The nucleon-nucleon interaction also behaves differently between two free nucleons (bare interaction) and between two nucleons in the nuclear medium (dressed interaction). For these reasons, specific nuclear many-body models have been devised, each of which sheds light on some selected aspects of nuclear structure. Only by combining the viewpoints of different models can a global insight into the atomic nucleus be gained. In this chapter, we review the Nuclear Shell Model as an example of the microscopic approach, and the Collective Model as an example of the geometric approach. Finally, we study the statistical properties of nuclear spectra, based on symmetry principles, to find out whether there is quantum chaos in the atomic nucleus. All three major approaches have been rewarded with the Nobel Prize in Physics. In the text, we will stress how each approach introduces its own series of approximations to reduce the prohibitively large number of degrees of freedom of the full many-body problem to a smaller, manageable number of effective degrees of freedom.
NSDL National Science Digital Library
Frank Wattenberg
1997-01-01
This site uses linear models to demonstrate the change in bird populations on a barren island over time, supply and demand, and the natural cleaning of a polluted lake by fresh water over time. The problems are laid out and turned into both graphic and equation form in order to understand the rate of change happening in each scenario. There are also links to previously covered materials that can help student review material from past math lessons.
NSDL National Science Digital Library
2012-07-12
In this quick activity about pollutants and groundwater (page 2 of PDF), learners build a model well with a toilet paper tube. Learners use food coloring to simulate pollutants and observe how they can be carried by groundwater and eventually enter water sources such as wells, rivers, and streams. This activity is associated with nanotechnology and relates to linked video, DragonflyTV Nano: Water Clean-up.
10. MOVABLE BED SEDIMENTATION MODELS. DOGTOOTH BEND MODEL (MODEL SCALE: 1' = 400' HORIZONTAL, 1' = 100' VERTICAL), AND GREENVILLE BRIDGE MODEL (MODEL SCALE: 1' = 360' HORIZONTAL, 1' = 100' VERTICAL). - Waterways Experiment Station, Hydraulics Laboratory, Halls Ferry Road, 2 miles south of I-20, Vicksburg, Warren County, MS
Students' Models of Curve Fitting: A Models and Modeling Perspective
ERIC Educational Resources Information Center
Gupta, Shweta
2010-01-01
The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…
Plimpton, Steven James; Heffernan, Julieanne; Sasaki, Darryl Yoshio; Frischknecht, Amalie Lucile; Stevens, Mark Jackson; Frink, Laura J. Douglas
2005-11-01
Understanding the properties and behavior of biomembranes is fundamental to many biological processes and technologies. Microdomains in biomembranes or ''lipid rafts'' are now known to be an integral part of cell signaling, vesicle formation, fusion processes, protein trafficking, and viral and toxin infection processes. Understanding how microdomains form, how they depend on membrane constituents, and how they act not only has biological implications, but also will impact Sandia's effort in development of membranes that structurally adapt to their environment in a controlled manner. To provide such understanding, we created physically-based models of biomembranes. Molecular dynamics (MD) simulations and classical density functional theory (DFT) calculations using these models were applied to phenomena such as microdomain formation, membrane fusion, pattern formation, and protein insertion. Because lipid dynamics and self-organization in membranes occur on length and time scales beyond atomistic MD, we used coarse-grained models of double tail lipid molecules that spontaneously self-assemble into bilayers. DFT provided equilibrium information on membrane structure. Experimental work was performed to further help elucidate the fundamental membrane organization principles.
Introduction Bayes Model Selection
Kaski, Samuel
Slide outline: Introduction; Bayes model selection; path sampling for factor models. From "Bayes Model Selection with Path Sampling: Factor Models" (Purdue University, USA).
NASA Astrophysics Data System (ADS)
Holmes, Jon L.
1999-06-01
Molecular modeling has trickled down from the realm of pharmaceutical and research laboratories into the realm of undergraduate chemistry instruction. It has opened avenues for the visualization of chemical concepts that previously were difficult or impossible to convey. I am sure that many of you have developed exercises using the various molecular modeling tools. It is the desire of this Journal to become an avenue for you to share these exercises among your colleagues. It is to this end that Ron Starkey has agreed to edit such a column and to publish not only the description of such exercises, but also the software documents they use. The WWW is the obvious medium to distribute this combination and so accepted submissions will appear online as a feature of JCE Internet. Typical molecular modeling exercise: finding conformation energies. Molecular Modeling Exercises and Experiments is the latest feature column of JCE Internet, joining Conceptual Questions and Challenge Problems, Hal's Picks, and Mathcad in the Chemistry Curriculum. JCE Internet continues to seek submissions in these areas of interest and submissions of general interest. If you have developed materials and would like to submit them, please see our Guide to Submissions for more information. The Chemical Education Resource Shelf, Equipment Buyers Guide, and WWW Site Review would also like to hear about chemistry textbooks and software, equipment, and WWW sites, respectively. Please consult JCE Internet Features to learn more about these resources at JCE Online. Email Announcements Would you like to be informed by email when the latest issue of the Journal is available online? when a new JCE Software title is shipping? when a new JCE Internet article has been published or is available for Open Review? when your subscription is about to expire? A new feature of JCE Online makes this possible. Visit our Guestbook to learn how. 
When you submit the form on this page, which includes your email address, you may choose to receive an email notice about a Journal event that interests you. Currently such events include availability of the latest issue of the Journal at JCE Online, expiration of your Journal subscription, shipment of a new JCE Software issue, publication of a new JCE Internet article or its availability for Open Review, and other announcements from the Journal. You may choose any number of these options independently. JCE Online Guestbook. Your Privacy JCE Online promises to you that we will not use the information that you provide in our Guestbook for anything other than our own internal information. We will not provide this information to third parties. We will use the information you provide only in our effort to help make the JCE serve you better. You only need to provide your email address to take advantage of this service; the other information you provide is optional. Molecular Modeling Exercises and Experiments: Mission Statement We are seeking in this JCE Internet feature column to publish molecular modeling exercises and experiments that have been used successfully in undergraduate instruction. The exercises will be published here on JCE Internet. An abstract of published submissions will appear in print in the Journal of Chemical Education. Acceptable exercises could be used in either a chemistry laboratory or a chemistry computer laboratory. The exercise could cover any area of chemistry, but should be limited to undergraduate instructional applications. We envision that most of the exercises/experiments will utilize one of the popular instructional molecular modeling software programs (e.g. HyperChem, Spartan, CAChe, PC Model). Exercises that are specific to a particular modeling program are acceptable, but those usable with any modeling program are preferred. Ideally the exercises/experiments will be of the type where the "correct" answer is not obvious so
Atomic Models, Nagaoka's Saturnian Model
Klaus Hentschel
In late 1903, Hantaro Nagaoka (1865–1950) developed the earliest published quasi-planetary model of the atom. This graduate of the University of Tokyo from 1887 spent his postdoctoral period in Vienna, Berlin and Munich before obtaining a professorship in Tokyo to become Japan's foremost modern physicist. Nagaoka assumed that the atom is a large, massive, positively charged sphere, encircled by very
Phyloclimatic Modelling Workshop
Yesson, Christopher
2012-11-13
Phyloclimatic modelling is a term describing the integration of ecological niche models and phylogenetic reconstruction. Phyloclimatic models are essentially niche models for ancestral lineages; these models can be used ...
Modeling uncertainty: quicksand for water temperature modeling
Bartholow, John M.
2003-01-01
Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper will address important components of uncertainty in modeling water temperatures, and discuss several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the references, are meant to supplement the presentation given at this conference.
Analysis Meets Modeling
Peterson, James K
Slide outline: modeling and computer code issues; "Simple Brain Based Compositional Models" (www.ces.clemson.edu/petersj/books/FullBrain) and Beyond... (with T. Lemmonds, Lawrence Livermore Labs); small brain model for decision making; neural and painting compositional models; text extraction models; small brain model training algorithms.
Constructing NARMAX models using ARMAX models
TOR A. JOHANSEN; BJARNE A. FOSS
1993-01-01
This paper outlines how it is possible to decompose a complex non-linear modelling problem into a set of simpler linear modelling problems. Local ARMAX models valid within certain operating regimes are interpolated to construct a global NARMAX (non-linear ARMAX) model. Knowledge of the system behaviour in terms of operating regimes is the primary basis for building such models, hence it
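The interpolation scheme can be sketched as a blend of local linear one-step predictors weighted by normalized validity functions. The two local models, their operating points, and the Gaussian validity width below are all invented for illustration:

```python
import math

# Two local linear (ARMAX-like) predictors y_next = a*y + b*u, each valid
# near its own operating point, interpolated into one global non-linear
# predictor by normalized Gaussian validity functions.

LOCAL_MODELS = [
    # (operating point in y, a, b)
    (0.0, 0.9, 0.1),
    (5.0, 0.5, 0.5),
]

def validity(y, center, width=2.0):
    return math.exp(-((y - center) / width) ** 2)

def global_predict(y, u):
    weights = [validity(y, c) for c, _, _ in LOCAL_MODELS]
    total = sum(weights)
    return sum(w * (a * y + b * u)
               for w, (_, a, b) in zip(weights, LOCAL_MODELS)) / total
```

Near y = 0 the first local model dominates the prediction; halfway between the operating points the two local predictions are blended equally, which is what makes the global map non-linear even though each local model is linear.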
NASA Astrophysics Data System (ADS)
Bahcall, John N.
I will summarize in four slides the 40 years of development of the standard solar model that is used to predict solar neutrino fluxes and then describe the current uncertainties in the predictions. I will dispel the misconception that the p-p neutrino flux is determined by the solar luminosity and present a related formula that gives, in terms of the p-p and 7Be neutrino fluxes, the ratio of the rates of the two primary ways of terminating the p-p fusion chain. I will also attempt to explain why it took so long, about three and a half decades, to reach a consensus view that new physics is being learned from solar neutrino experiments. Finally, I close with a personal confession and some personal remarks.
Chemistry Old Models New Models The Mayo-Lewis Copolymerization Model
Ponomarenko, Vadim
Slide outline: The Mayo-Lewis Copolymerization Model, Vadim Ponomarenko (slides at ://www-rohan.sdsu.edu/vadim/mayolewis.pdf); Chemistry; Old Models; New Models; Basics.
Analytic Modeling Birth-Death Model
Shihada, Basem
Slide outline: a review of random variables; the exponential distribution; the birth-death model; a queueing system with a single service facility; state-dependent arrival rate; state-dependent service rate; definition of the birth-death process.
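For a birth-death chain like the one outlined here, detailed balance gives the steady state: p_{n+1} = p_n · λ_n/μ_{n+1}, followed by normalization. A short sketch with illustrative constant rates (λ = 1, μ = 2):

```python
# Steady-state probabilities of a finite birth-death chain via detailed
# balance: p_{n+1} = p_n * lam[n] / mu[n], then normalize to sum to 1.

def birth_death_steady_state(lam, mu):
    # lam[k]: birth rate out of state k; mu[k]: death rate out of state k+1
    p = [1.0]
    for k in range(len(lam)):
        p.append(p[-1] * lam[k] / mu[k])
    total = sum(p)
    return [x / total for x in p]

# Three states, constant rates: unnormalized weights 1, 1/2, 1/4.
probs = birth_death_steady_state([1.0, 1.0], [2.0, 2.0])
```

State-dependent rates, as in the outline, just mean lam and mu vary with the state index; the same product formula applies.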
John A. Schroeder
2012-06-01
The Standardized Plant Analysis Risk (SPAR) models for the U.S. commercial nuclear power plants currently have very limited instrumentation and control (I&C) modeling [1]. Most of the I&C components in the operating-plant SPAR models are related to the reactor protection system. This was identified as a finding during the industry peer review of SPAR models. While the Emergency Safeguard Features (ESF) actuation and control system was incorporated into the Peach Bottom Unit 2 SPAR model in a recent effort [2], this work investigates various approaches for extending detailed I&C modeling to the other SPAR models.
CISNET: Standardized Model Documents
Modeling is a complex endeavor, and often it is very difficult to reconcile results from different models. To aid in this process of model description and comparison, CISNET has developed and implemented standardized model documentation. Model profiles are standardized descriptions that facilitate the comparison of models and their results. Users can read documentation about a single model or read side-by-side descriptions that contrast how models address different components of the process.
Forward model nonlinearity versus inverse model nonlinearity
Mehl, S.
2007-01-01
The issue of concern is the impact of forward model nonlinearity on the nonlinearity of the inverse model. The question posed is, "Does increased nonlinearity in the head solution (forward model) always result in increased nonlinearity in the inverse solution (estimation of hydraulic conductivity)?" It is shown that the two nonlinearities are separate, and it is not universally true that increased forward model nonlinearity increases inverse model nonlinearity. © 2007 National Ground Water Association.
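The separation can be seen in a toy example: a forward model that is strongly nonlinear in its parameter can still give a linear estimation problem. This sketch (data invented, unrelated to the groundwater case) fits k in y = exp(k·t) by linear least squares on log y:

```python
import math

# Forward model y = exp(k*t): strongly nonlinear in the parameter k. Yet
# with noise-free data, log(y) = k*t, so estimating k is one-parameter
# *linear* least squares -- forward and inverse nonlinearity can differ.

def forward(k, t):
    return math.exp(k * t)

def estimate_k(ts, ys):
    num = sum(t * math.log(y) for t, y in zip(ts, ys))
    den = sum(t * t for t in ts)
    return num / den

ts = [1.0, 2.0, 3.0]
ys = [forward(0.7, t) for t in ts]
k_hat = estimate_k(ts, ys)
```

With additive noise on y the transformed problem is no longer exactly equivalent, which is precisely the kind of subtlety the paper's question targets.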
Neuro-rough models for modelling HIV
Tshilidzi Marwala; Bodie Crossingham
2008-01-01
This paper proposes a neuro-rough model based on multi-layered perceptron (MLP) and rough set theory. The neuro-rough model is then tested on modeling the risk of HIV (human immunodeficiency virus) from demographic data. The model is formulated using Bayesian framework and trained using Monte Carlo method and Metropolis criterion. When the model was tested to estimate the risk of HIV
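The Metropolis criterion mentioned here can be illustrated on a one-dimensional target. The sketch below samples a standard normal density; this is a stand-in target, not the paper's HIV-risk posterior:

```python
import math
import random

# Minimal Metropolis sampler: propose a Gaussian step, accept with
# probability min(1, p(new)/p(old)) -- the Metropolis criterion.

def metropolis(log_target, n_steps, step=0.5, seed=0):
    rng = random.Random(seed)
    w = 0.0
    samples = []
    for _ in range(n_steps):
        proposal = w + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_target(proposal) - log_target(w):
            w = proposal  # accept; otherwise keep the current state
        samples.append(w)
    return samples

# Target: standard normal, log p(w) = -w^2/2 (up to an additive constant).
samples = metropolis(lambda w: -0.5 * w * w, 20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

In a Bayesian network-training setting, w would be a weight vector and log_target the log posterior; the acceptance rule is unchanged.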
Endogenous optimization fisheries models
Ragnar Arnason
2000-01-01
This paper deals with a special class of fisheries models referred to as endogenous optimization models. The distinctive feature of these models is that behaviour of the agents in the model is not predetermined by exogenous behavioural rules. In endogenous optimization models, the model agents are merely furnished with objectives such as profit or utility maximization. Given these objectives and
NASA Astrophysics Data System (ADS)
Burger, M. H.; Killen, R. M.; M, N.; Sarantos, M.; Crider, D. H.; Vervak, R. J.
2009-04-01
Mercury has a tenuous exosphere created by the combined effects of solar radiation and micrometeoroid bombardment on the surface and the interaction of the solar wind with Mercury's magnetic field and surface. Observations of this exosphere provide essential data necessary for understanding the composition and evolution of Mercury's surface, as well as the interaction between Mercury's magnetosphere with the solar wind. The sodium component of the exosphere has been well observed from the ground (see review by Killen et al., 2007). These observations have revealed a highly variable and inhomogeneous exosphere with emission often peaking in the polar regions. Radiation acceleration drives exospheric escape producing a sodium tail pointing away from the sun which has been detected up to 1400 Mercury radii from the planet (Potter et al. 2002; Baumgardner et al. 2008). Calcium has also been observed in Mercury's exosphere showing a distribution distinct from sodium, although also variable (Killen et al. 2005). During the first two encounters with Mercury by MESSENGER, observations of the exosphere were made by the UltraViolet and Visible Spectrometer (UVVS) channel of the Mercury Atmospheric and Surface Composition Spectrometer (MASCS). Sodium and calcium emission were detected during both flybys, and magnesium was detected for the first time in Mercury's exosphere during the second flyby. The spatial distributions of these species showed significant, unexpected differences which suggest differences in the mechanisms responsible for releasing them from the surface. We present a Monte-Carlo model of sodium, magnesium, and calcium in Mercury's exosphere. The important source mechanisms for ejecting these species from the surface are sputtering by solar wind ions, photon-stimulated desorption, and micrometeoroid impact vaporization. 
Thermal desorption on the dayside does not supply enough energy to significantly populate the exosphere, although it does play a role in redistributing volatiles over the surface. In addition, atomic calcium can be produced from the dissociation of Ca-bearing molecules, such as CaO, which can be formed in impact vapors. The primary loss processes are the escape of neutrals ejected with sufficient energy and photoionization. The former process is supplemented by radiation pressure, which accelerates neutrals anti-sunward such that escaping neutrals form a tail pointing away from the sun. Because Mercury's heliocentric distance and radial velocity vary during its orbit, both loss processes are functions of Mercury's true anomaly. We also consider the spatial distribution of the surface source. Impact vaporization is roughly isotropic over the surface, although there may be a leading/trailing asymmetry in the impact rate due to Mercury's orbital motion. Sputtering is confined to regions where the solar wind can impact the surface, which is shielded somewhat by the internal magnetic field. Which surface regions are vulnerable depends on the solar wind conditions. References: Baumgardner et al., GRL, 35, L03201, 2008; Killen, R. M. et al., Space Sci. Rev., 132, 433-509, 2007; Killen, R. M. et al., Icarus, 173, 300-311, 2005; Potter et al., Meteoritics & Planetary Sci., 37, 1165, 2002.
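The fate accounting in such a Monte Carlo exosphere model can be illustrated with a deliberately simplified sketch. This is not the authors' model: the escape speed is Mercury's, but the source speed distribution, flight time, and photoionization lifetime below are placeholder values, and radiation pressure and particle trajectories are ignored.

```python
import math
import random

# Illustrative constants only; a real model would use measured values.
V_ESC = 4250.0      # Mercury surface escape speed, m/s
TAU_ION = 5.0e4     # assumed photoionization lifetime, s
T_FLIGHT = 2.0e4    # assumed ballistic flight time for bound atoms, s

def sample_speed(u_char, rng):
    """Speed drawn from a Maxwellian with characteristic 1-D speed u_char."""
    return math.sqrt(sum(rng.gauss(0.0, u_char) ** 2 for _ in range(3)))

def simulate(n, u_char, seed=1):
    """Classify n test atoms as escaped, photoionized in flight, or returned."""
    rng = random.Random(seed)
    fates = {"escaped": 0, "ionized": 0, "returned": 0}
    for _ in range(n):
        if sample_speed(u_char, rng) > V_ESC:
            fates["escaped"] += 1          # gravitational escape
        elif rng.random() > math.exp(-T_FLIGHT / TAU_ION):
            fates["ionized"] += 1          # lost to photoionization in flight
        else:
            fates["returned"] += 1         # re-impacts the surface
    return fates

print(simulate(100_000, u_char=1000.0))    # e.g. a thermal-ish sodium source
```

With a low-energy source, most atoms return to the surface or are ionized in flight; only the high-speed tail escapes, which is why energetic processes such as sputtering matter for the escaping component.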
Generalised Bayesian matrix factorisation models
Mohamed, Shakir
2011-03-15
[Figure 1.2: graphical model showing the form of a general latent variable model. Table 1.1: models obtained for each latent-variable distribution: Gaussian (factor analysis/PCA), multinomial (Gaussian mixture model), Dirichlet (partial membership model), Laplace (sparse latent feature model).]
Fragment characteristics for the photofission of 238U with 6.1-13.1 MeV bremsstrahlung
NASA Astrophysics Data System (ADS)
Pommé, S.; Jacobs, E.; Piessens, M.; De Frenne, D.; Persyn, K.; Govaert, K.; Yoneama, M.-L.
1994-05-01
The photofission of 238U has been studied experimentally for seven bremsstrahlung endpoint energies between 6.1 and 13.1 MeV. The postneutron yields of the fission products were obtained with γ-spectrometry techniques. Provisional mass and kinetic energy distributions were measured using a double energy detection technique. From the combination of both methods, average neutron emission curves and preneutron mass-energy distributions were deduced. The systematic trends of the fragment characteristics have been studied as a function of the compound nucleus excitation energy. The results are compared with expectations from the energy partition model of Ruben et al. They are also discussed in the framework of the scission point model of Wilkins et al., and of the multimode fission with random neck rupture model of Brosa et al. The mass and kinetic energy data can be represented well by the superposition of two dominant mass-asymmetric fission modes (standard I and standard II), and one relatively weak mass-symmetric fission mode (superlong). The standard I mode yield diminishes slightly with increasing compound-nucleus excitation energy. Up to an excitation energy of 7.9 MeV, i.e. the fission barrier plus the pairing gap, the average fission-fragment total kinetic energy increases, and the average total 'asymptotic' excitation energy and the charge odd-even effect in the element yields are about constant (δp = (29 ± 2)%). These results hint at low energy dissipation. At higher compound nucleus excitation energies the fragment kinetic energy decreases, their excitation energy increases, and the proton odd-even effect decreases exponentially.
Soil-Landscape Modeling (Integrative Ecosystem Modeling)
Ma, Lena
SWS 6722 Soil-Landscape Modeling (Integrative Ecosystem Modeling). INSTRUCTOR: Dr. Sabine Grunwald. Soil properties in various landscape settings, considering their biological and chemical composition and physical properties; soil security; and natural vs. managed ecosystems. Soil-landscapes view the totality of ecosystems.
Uncertainty Modeling Via Frequency Domain Model Validation
NASA Technical Reports Server (NTRS)
Waszak, Martin R.; Andrisani, Dominick, II
1999-01-01
The majority of literature on robust control assumes that a design model is available and that the uncertainty model bounds the actual variations about the nominal model. However, methods for generating accurate design models have not received as much attention in the literature. The influence of the level of accuracy of the uncertainty model on closed-loop performance has received even less attention. The research reported herein is an initial step in applying and extending the concept of model validation to the problem of obtaining practical uncertainty models for robust control analysis and design applications. An extension of model validation called 'sequential validation' is presented and applied to a simple spring-mass-damper system to establish the feasibility of the approach and demonstrate the benefits of the new developments.
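The underlying frequency-domain test (is the measured response consistent with the nominal model plus an uncertainty bound?) can be sketched for a spring-mass-damper. This is a generic additive-uncertainty check with invented parameter values, not the paper's 'sequential validation' procedure:

```python
import math

def smd_freqresp(m, c, k, w):
    """G(jw) = 1 / (k - m w^2 + j c w) for a spring-mass-damper."""
    return 1.0 / complex(k - m * w * w, c * w)

def validates(nominal, measured, bound, freqs):
    """Additive-uncertainty check: the nominal model is not invalidated if
    the measured response stays within bound(w) of it at every frequency."""
    return all(abs(measured(w) - nominal(w)) <= bound(w) for w in freqs)

def nom(w):
    return smd_freqresp(m=1.0, c=0.4, k=4.0, w=w)   # nominal design model

def meas(w):
    return smd_freqresp(m=1.0, c=0.4, k=4.4, w=w)   # "measured" plant, 10% stiffer

def bnd(w):
    return 0.05 + 0.02 * abs(nom(w))                 # assumed uncertainty weight

freqs = [0.1 * i for i in range(1, 100)]
print(validates(nom, nom, bnd, freqs))    # → True: a model always matches itself
print(validates(nom, meas, bnd, freqs))   # → False: invalidated near resonance
```

A check like this tells the designer whether the chosen uncertainty bound is large enough to cover the observed plant variation; tightening the bound until validation just succeeds yields a less conservative robust-control design.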
MODELS AND HISTORY OF MODELING
Schichl, Hermann
this knowledge calculated the distances Earth-Sun and Earth-Moon and, best known, the circumference of the Earth ... a mathematical model of the solar system with circles and epicircles to predict the movement of the moon and planets. The model was accurate ...
Geoscientific Model Development - a journal about models, for modellers
NASA Astrophysics Data System (ADS)
Lunt, Daniel; Annan, James; Hargreaves, Julia; Rutt, Ian; Sander, Rolf
2010-05-01
The journal Geoscientific Model Development arose from the observation that despite modelling being central to climate/earth system science, the models themselves are not generally subject to the same level of scrutiny and peer review as the results they generate. Model descriptions are generally (with some exceptions) difficult to publish independent from scientific results, and so are necessarily space-limited when they do appear. Consequently, it is not uncommon that the description of a given model is spread across several papers, and crucial aspects of the formulation may not be published at all. Issues of reproducibility, platform-dependence, version proliferation and the various fudges and corrections often needed in modelling, are rarely addressed in the literature. GMD aims to change this by providing a place to publish detailed, peer-reviewed descriptions of numerical models, including verification and validation. Model developers can publish an initial description of a numbered version of their model, and address subsequent changes with a sequence of update papers. Thus, a body of citable literature can be developed which provides an authoritative reference for a given version of the model, greatly improving traceability and giving confidence in the provenance of the code. An additional benefit is that the citations generated will at last recognise the important contribution which model developers make to science. The publication process is typical for an open access EGU journal: papers are initially published in an on-line discussion journal (Geoscientific Model Development Discussions), for a period of eight weeks. Anonymous reviews are solicited as normal, but are also published in the discussion journal. Anyone else may contribute to the discussions, if they wish. After the discussion period, the revision/review process operates as normal, until the paper is finally accepted or rejected by the handling topical editor. 
In this paper we describe the journal, and present statistics of submissions, papers accepted etc. since its first issue in 2008. For more details, see http://www.geoscientific-model-development.net
Model selection for logistic regression models
NASA Astrophysics Data System (ADS)
Duller, Christine
2012-09-01
Model selection for logistic regression models decides which of some given potential regressors have an effect and hence should be included in the final model. A second interesting question is whether a certain factor is heterogeneous among some subsets, i.e. whether the model should include a random intercept or not. In this paper these questions are answered with classical as well as with Bayesian methods. The applications show some results of recent research projects in medicine and business administration.
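The first question, which regressors to include, is classically answered by comparing an information criterion across candidate models. The sketch below is illustrative only, not the paper's method: synthetic data, plain gradient-ascent fitting, and BIC as the selection criterion are all assumed choices.

```python
import math
import random

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Fit logistic regression by plain gradient ascent on the log-likelihood;
    returns (weights, maximized log-likelihood)."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(steps):
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            mu = 1.0 / (1.0 + math.exp(-sum(a * b for a, b in zip(w, xi))))
            for j in range(p):
                grad[j] += (yi - mu) * xi[j]
        w = [wj + lr * g / n for wj, g in zip(w, grad)]
    ll = 0.0
    for xi, yi in zip(X, y):
        mu = 1.0 / (1.0 + math.exp(-sum(a * b for a, b in zip(w, xi))))
        ll += yi * math.log(mu) + (1 - yi) * math.log(1.0 - mu)
    return w, ll

def bic(ll, n_params, n_obs):
    """Bayesian information criterion; lower is better."""
    return -2.0 * ll + n_params * math.log(n_obs)

# Synthetic data: the response depends on x1 but not on the noise regressor x2.
rng = random.Random(0)
X, y = [], []
for _ in range(200):
    x1, x2 = rng.gauss(0, 1), rng.gauss(0, 1)
    p_true = 1.0 / (1.0 + math.exp(-(0.5 + 2.0 * x1)))
    X.append([1.0, x1, x2])
    y.append(1 if rng.random() < p_true else 0)

n = len(y)
_, ll_full = fit_logistic(X, y)                    # intercept + x1 + x2
_, ll_x1 = fit_logistic([r[:2] for r in X], y)     # intercept + x1
_, ll_null = fit_logistic([r[:1] for r in X], y)   # intercept only
scores = {"x1+x2": bic(ll_full, 3, n),
          "x1": bic(ll_x1, 2, n),
          "none": bic(ll_null, 1, n)}
print(min(scores, key=scores.get))   # BIC typically selects the "x1" model
```

The Bayesian analogue of this comparison replaces BIC with posterior model probabilities or Bayes factors; the random-intercept question is handled the same way, by comparing the models with and without the extra variance component.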
ERIC Educational Resources Information Center
Willden, Jeff
2001-01-01
"Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…
PRECIPITATION SIMULATION MODELS 1425
Technology Transfer Automated Retrieval System (TEKTRAN)
Precipitation simulation models generate synthesized sequences of precipitation at a range of spatial and temporal scales. Three broad categories are general circulation models, stochastic spatial-temporal rainstorm models and daily precipitation models. Model selection and use should be justified b...
NSDL National Science Digital Library
Denis Noble (Oxford University Department of Physiology)
2004-08-01
Models of the heart have been developed since 1960, starting with the discovery and modeling of potassium channels. The first models of calcium balance were made in the 1980s and have now reached a high degree of physiological detail. During the 1990s, these cell models were incorporated into anatomically detailed tissue and organ models.
Marker, David
Model Theory of Fields. David Marker, Margit Messmer, Anand Pillay. Contents: I. Introduction to the Model Theory of Fields, David Marker; II. Model Theory of Differential Fields, David Marker; III. ... the number of countable models is 2^ℵ0, which avoids the Zariski geometry machinery. The final paper, "Some model theory ...
Rasch's Logistic Model Vs. the Guttman Model
ERIC Educational Resources Information Center
Brink, Nicholas E.
1972-01-01
Study compares the Rasch and the Guttman models of measurement and thus adds to the description of the characteristics of Rasch's logistic model. Such knowledge is of importance in making decisions as to which model and which statistics should be used in evaluations of tests. (Author/CB)
Modeling Guru: Knowledge Base for NASA Modelers
M. S. Seablom; G. S. Wojcik; B. H. van Aartsen
2009-01-01
Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by the NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the
Modeling transient rootzone salinity (SWS Model)
Technology Transfer Automated Retrieval System (TEKTRAN)
The combination of water quality criteria for irrigation, water and ion processes in soils, and plant and soil response is sufficiently complex that adequate analysis requires computer models. Models for management are also needed, but these models must consider that the input requirements must be reasona...
3 Human vs. model 2 Salience model
Peters, Rob
Poster outline: 1 Human eyetracking; 2 Salience model; 3 Human vs. model; 5 Fitted contour model (klab.caltech.edu/rjpeters/pubs/2003_SFN_Poster.pdf). Stimuli: fractals, aerial imagery, outdoor photos, Gabor "snakes", Gabor arrays. Human eyetracking; winner-take-all; inhibition of return; attended location. Adapted from Itti & Koch (2001), Nat. Rev. Neurosci.
Cosmological Models Generalising Robertson-Walker Models
Abdussattar
2003-08-07
Considering the physical 3-space t = constant of the spacetime metrics as spheroidal and pseudo-spheroidal, cosmological models which are generalizations of Robertson-Walker models are obtained. Specific forms of these general models as solutions of Einstein's field equations are also discussed in the radiation- and the matter-dominated eras of the universe.
Cognitive Modelling - The nature of
Bremen, Universität
· inhibits or enhances (never both) · networks need teaching · neurotransmitters are ignored by PDP · Injecting neurotransmitter can produce effects like: ... (November 17, 2014)
Supplementary Material: Hydraulic Model Development
Clark, James S.
Half-hourly observations for the calibration period are given in Table S2. The number of sensors in each treatment for each year during the evaluation study period is given in Table S3. Models were compared with the Bayesian ... Model Development: Consider a mass of water in the height
Bohr model as an algebraic collective model
Rowe, D. J.; Welsh, T. A.; Caprio, M. A.
2009-05-15
Developments and applications are presented of an algebraic version of Bohr's collective model. Illustrative examples show that fully converged calculations can be performed quickly and easily for a large range of Hamiltonians. As a result, the Bohr model becomes an effective tool in the analysis of experimental data. The examples are chosen both to confirm the reliability of the algebraic collective model and to show the diversity of results that can be obtained by its use. The focus of the paper is to facilitate identification of the limitations of the Bohr model with a view to developing more realistic, computationally tractable models.
Transport Model with Quasipions
Xiong, L.; Ko, Che Ming; Koch, V.
1993-01-01
We extend the normal transport model to include the medium effect on pions by treating them as quasiparticles. The property of the quasipion is determined using the delta-hole model. Modelling heavy-ion collisions at intermediate energies...
Introduction & Scope Model & Calibration
Kuhn, Matthew R.
Granular Fabric and Stress (papers/ASME07.pdf). Outline: 1 Introduction & Scope; 2 Principles; 3 Model & Calibration; 4 Applications. Kuhn, November 12, 2007.
NASA Astrophysics Data System (ADS)
Brimicombe, N. W.
1991-07-01
Hot air balloons can be modelled in a number of different ways. The most satisfactory, but least useful model is at a microscopic level. Macroscopic models are easier to use but can be very misleading.
NSDL National Science Digital Library
Weather Forecast Models, from NOAA, provides links to sites posting output from many of their numerical models. These models attempt to simulate the state of the atmosphere at various times in the future.
Editor's Roundtable: Model behavior
NSDL National Science Digital Library
Inez Liftig
2010-11-01
Models are manageable representations of objects, concepts, and phenomena, and are everywhere in science. Models are "thinking tools" for scientists and have always played a key role in the development of scientific knowledge. Models of the solar system,
Analog models General Relativity
Visser, Matt
Analog Models of General Relativity: Introduction and Survey. Matt Visser, Physics Department, Washington University, Saint Louis, USA. Presented at the workshop: Analog Models for General Relativity. ... "analog models" for general relativity. Why? Laboratory experiments with general relativity black holes ...
Modeling Multiallelic Selection Using a Moran Model
Muirhead, Christina A.; Wakeley, John
2009-01-01
We present a Moran-model approach to modeling general multiallelic selection in a finite population and show how it may be used to develop theoretical models of biological systems of balancing selection such as plant gametophytic self-incompatibility loci. We propose new expressions for the stationary distribution of allele frequencies under selection and use them to show that the continuous-time Markov chain describing allele frequency change with exchangeable selection and Moran-model reproduction is reversible. We then use the reversibility property to derive the expected allele frequency spectrum in a finite population for several general models of multiallelic selection. Using simulations, we show that our approach is valid over a broader range of parameters than previous analyses of balancing selection based on diffusion approximations to the Wright–Fisher model of reproduction. Our results can be applied to any model of multiallelic selection in which fitness is solely a function of allele frequency. PMID:19474205
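The reproduction scheme the abstract describes can be sketched in a few lines: one Moran event replaces a uniformly chosen individual with the offspring of a parent chosen with probability proportional to allele count times frequency-dependent fitness. The symmetric balancing-selection fitness function below is an illustrative assumption, not one of the paper's fitted models.

```python
import random

def moran_step(counts, fitness, rng):
    """One Moran event: birth chosen with probability proportional to
    count * fitness(frequency); death chosen uniformly among individuals.
    Population size stays constant."""
    n = sum(counts)
    weights = [c * fitness(c / n) for c in counts]
    birth = rng.choices(range(len(counts)), weights=weights)[0]
    death = rng.choices(range(len(counts)), weights=counts)[0]
    counts[birth] += 1
    counts[death] -= 1

def run(n_alleles=4, pop=100, steps=20_000, seed=2):
    """Symmetric balancing selection: an allele is fitter when rarer."""
    rng = random.Random(seed)
    counts = [pop // n_alleles] * n_alleles

    def fitness(freq):
        return 1.0 + 0.5 * (1.0 / n_alleles - freq)   # advantage below 1/k

    for _ in range(steps):
        moran_step(counts, fitness, rng)
    return counts

print(run())   # allele counts after 20,000 events; they still sum to pop
```

Because fitness here depends only on allele frequency, this is exactly the exchangeable-selection setting in which the abstract's reversibility argument applies; long-run histograms of such runs approximate the stationary allele frequency spectrum.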
Induction Machine Modeling Concepts
Rik De Doncker; Duco W. J. Pulle; André Veltman
In this chapter, induction machine models are developed. As a platform for introducing field-oriented models, first models without leakage inductances are derived, showing the essence of torque production of the machine. Central to this chapter is the introduction of a universal flux linkage model which allows a three-to-two inductance transformation leading to a simplified IRTF machine model. This universal model
Modeling of geothermal systems
Bodvarsson, G.S.; Pruess, K.; Lippmann, M.J.
1985-03-01
During the last decade the use of numerical modeling for geothermal resource evaluation has grown significantly, and new modeling approaches have been developed. In this paper we present a summary of the present status in numerical modeling of geothermal systems, emphasizing recent developments. Different modeling approaches are described and their applicability discussed. The various modeling tasks, including natural-state, exploitation, injection, multi-component and subsidence modeling, are illustrated with geothermal field examples. 99 refs., 14 figs.
NASA Technical Reports Server (NTRS)
Liou, J. C.
2012-01-01
Presentation outline: (1) The NASA Orbital Debris (OD) Engineering Model -- A mathematical model capable of predicting OD impact risks for the ISS and other critical space assets; (2) The NASA OD Evolutionary Model -- A physical model capable of predicting the future debris environment based on user-specified scenarios; (3) The NASA Standard Satellite Breakup Model -- A model describing the outcome of a satellite breakup (explosion or collision).
NSDL National Science Digital Library
Hosted by the European Bioinformatics Institute, the BioModels Database is a collaborative, "new effort to develop a data resource that will allow biologist to store, search and retrieve published mathematical models of biological interests. The models in the BioModels Database are annotated and linked to relevant data resources, such as publications, databases of compounds and pathways, controlled vocabularies, etc." The website allows visitors to browse and search the Database for models. The site also provides information about submitting models for the Database. It should be noted that submitted models must undergo tests conducted by BioModels Database curators before they are incorporated. [NL
NASA Technical Reports Server (NTRS)
Cellier, Francois E.
1991-01-01
A comprehensive and systematic introduction is presented for the concepts associated with 'modeling', involving the transition from a physical system down to an abstract description of that system in the form of a set of differential and/or difference equations, and basing its treatment of modeling on the mathematics of dynamical systems. Attention is given to the principles of passive electrical circuit modeling, planar mechanical systems modeling, hierarchical modular modeling of continuous systems, and bond-graph modeling. Also discussed are modeling in equilibrium thermodynamics, population dynamics, and system dynamics, inductive reasoning, artificial neural networks, and automated model synthesis.
Introduction Mathematical models and reality
Hennig, Christian
Introduction. Mathematical models and reality. What frequentist model assumptions mean. The Bayesian assumptions.
ROCK PROPERTIES MODEL ANALYSIS MODEL REPORT
Clinton Lum
2002-02-04
The purpose of this Analysis and Model Report (AMR) is to document Rock Properties Model (RPM) 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties models are intended principally for use as input to numerical physical-process modeling, such as of ground-water flow and/or radionuclide transport. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. This work was conducted in accordance with the following planning documents: WA-0344, ''3-D Rock Properties Modeling for FY 1998'' (SNL 1997, WA-0358), ''3-D Rock Properties Modeling for FY 1999'' (SNL 1999), and the technical development plan, Rock Properties Model Version 3.1 (CRWMS M&O 1999c). The Interim Change Notices (ICNs), ICN 02 and ICN 03, of this AMR were prepared as part of activities being conducted under the Technical Work Plan, TWP-NBS-GS-000003, ''Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01'' (CRWMS M&O 2000b). The purpose of ICN 03 is to record changes in data input status due to data qualification and verification activities. These work plans describe the scope, objectives, tasks, methodology, and implementing procedures for model construction.
The work scope for this activity consists of the following: (1) Conversion of the input data (laboratory measured porosity data, x-ray diffraction mineralogy, petrophysical calculations of bound water, and petrophysical calculations of porosity) for each borehole into stratigraphic coordinates; (2) Re-sampling and merging of data sets; (3) Development of geostatistical simulations of porosity; (4) Generation of derivative property models via linear coregionalization with porosity; (5) Post-processing of the simulated models to impart desired secondary geologic attributes and to create summary and uncertainty models; and (6) Conversion of the models into real-world coordinates. The conversion to real world coordinates is performed as part of the integration of the RPM into the Integrated Site Model (ISM) 3.1; this activity is not part of the current analysis. The ISM provides a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site and consists of three components: (1) Geologic Framework Model (GFM); (2) RPM, which is the subject of this AMR; and (3) Mineralogic Model. The interrelationship of the three components of the ISM and their interface with downstream uses are illustrated in Figure 1. Figure 2 shows the geographic boundaries of the RPM and other component models of the ISM.
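Step (4), generating a derivative property by linear coregionalization with porosity, amounts to a linear transform of the porosity realization plus an independent residual. The sketch below is illustrative only: the stand-in "porosity realization" ignores spatial correlation entirely, and the slope, intercept, and residual spread are invented values, not the report's calibrated coefficients.

```python
import random

def simulate_porosity(n, mean=0.12, sd=0.03, seed=7):
    """Stand-in for a geostatistical porosity realization (step 3): i.i.d.
    Gaussian values; a real simulation would honor a variogram and data."""
    rng = random.Random(seed)
    return [max(0.0, rng.gauss(mean, sd)) for _ in range(n)]

def coregionalize(phi, slope, intercept, resid_sd, seed=8):
    """Step 4 sketch: a derivative property as a linear transform of
    porosity plus an independent residual (invented coefficients)."""
    rng = random.Random(seed)
    return [intercept + slope * p + rng.gauss(0.0, resid_sd) for p in phi]

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

phi = simulate_porosity(5000)
density_like = coregionalize(phi, slope=-4.0, intercept=2.6, resid_sd=0.05)
print(round(pearson(phi, density_like), 2))   # strongly negative by construction
```

The residual standard deviation controls how tightly the derived property tracks porosity; in practice it would be chosen so the simulated cross-correlation matches the one observed in the borehole data.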
M. A. Wasiolek
2003-10-27
The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).
D. W. Wu
2003-07-16
Mons-Hainaut, Université de
Model Theory and Quantum Groups. Sonia L'Innocente (University of Mons). ... the quantum plane, submitted. This work is inspired by Ivo Herzog's paper: The pseudo-finite dimensional ...
Multimodeling and Model Abstraction
Technology Transfer Automated Retrieval System (TEKTRAN)
The multiplicity of models of the same process or phenomenon is commonplace in environmental modeling. The last 10 years have brought marked interest in making use of the variety of conceptual approaches instead of attempting to find the best model or using a single preferred model. Two systematic approa...
JAMES H. COWIE; DAVID M. NICOL; A. T. Ogielski
1999-01-01
A new scalable modeling framework and scalable parallel simulations make it possible to analyze the detailed behaviour of large, multidomain multiprotocol Internet models. The article focuses on simulation research. It describes the software designs that let us construct and run appropriately large models. After several years of research, we have developed a scalable network modeling framework, a scalable simulation framework
NASA Astrophysics Data System (ADS)
Rahmani, Fouad Lazhar
2010-11-01
The aim of this paper is to present mathematical modelling of the spread of infection in the context of the transmission of the human immunodeficiency virus (HIV) and the acquired immune deficiency syndrome (AIDS). These models are based in part on the models suggested in the field of AIDS mathematical modelling, as reported by Isham [6].
The Standard Cosmological Model
P. J. E. Peebles
1998-06-15
We have a well-established standard model for cosmology and prospects for considerable additions from work in progress. I offer a list of elements of the standard model, comments on controversies in the interpretation of the evidence in support of this model, and assessments of the directions extensions of the standard model seem to be taking.
ERIC Educational Resources Information Center
Sheehan, Bernard S.; Gulko, Warren W.
1976-01-01
A review of the elementary ideas that are fundamental to most higher education resource models and necessary for understanding model-supported cost analyses. Analytic constructs providing information for policy decisions include the elementary model, the fundamental cost model, and the instructional cost index. (JT)
ERIC Educational Resources Information Center
Clancey, William J.
The concept of a qualitative model is used as the focus of this review of qualitative student models in order to compare alternative computational models and to contrast domain requirements. The report is divided into eight sections: (1) Origins and Goals (adaptive instruction, qualitative models of processes, components of an artificial…
ERIC Educational Resources Information Center
Klopfer, Eric; Colella, Vanessa
This paper focuses on one method used to introduce model design and creation using StarLogo to a group of high school teachers. Teachers with model-building skills can easily customize modeling environments for their classes. More importantly, model building can enable teachers to approach their curricula from a more holistic perspective, as well…
Modelling Holocene climate trends: A model intercomparison
NASA Astrophysics Data System (ADS)
Lohmann, Gerrit
2013-04-01
For the paleomodel intercomparison, we compared the results from scenarios with identical forcing for the mid-to-late Holocene period: varying Earth's orbital parameters, a fixed level of greenhouse gas concentrations, and a fixed land-sea mask and orography. 18 paleoclimate modelling groups are involved in this initiative, working on transient Holocene simulations. One major issue on both the modelling and reconstruction sides was the quantification of uncertainties, and the evaluation of trend and variability patterns beyond a single proxy and beyond a single model simulation. The goal is to obtain robust results for trend patterns, seasonality changes, and transitions on a regional scale. The major objective is to investigate the spatio-temporal pattern of temperature and precipitation changes during the Holocene as derived from integrations with a set of comprehensive global climate models (GCMs), Earth system models of intermediate complexity (EMICs), as well as conceptual-statistical models. In the conceptual-statistical model of Laepple and Lohmann (2009) a rigorous simple concept is proposed: the temperature response on astronomical timescales has the same functional form as the response to seasonal insolation variations. The general pattern of surface temperatures in the models shows a high-latitude cooling and a low-latitude warming. Our analysis shows common patterns of temperature changes, especially for the respective summer seasons. This is a common feature for all models considered. Due to strong differences in atmospheric dynamics and sea ice, we find significant differences in the winter patterns. The precipitation trends show a clear difference between GCMs and EMICs, mainly because of the treatment of the hydrological cycle in the tropics. Most models show a southward movement of the ITCZ.
Using statistical analysis of the model variability modes and their amplitude during the Holocene, we reveal a strong heterogeneity in the temperature and precipitation patterns and no common response in trend and variability, although a tendency towards negative NAO and negative SOI (El Niño-like) states is detected. Our approach is to obtain, through ensemble runs of climate model output, a range of solutions that can then be compared and evaluated for their consistency with the range of uncertainty given by the palaeoclimate proxies. This approach allows a much more congruent comparison between proxy data and model results, because both investigations provide a range of possible climate changes in which the errors in the estimates are accounted for. We compare the ocean temperature evolution of the Holocene as simulated by climate models and reconstructed from marine temperature proxies. Independently of the choice of the climate model, we observe significant mismatches between modelled and reconstructed amplitudes in the trends for the last 6000 years.
New 3D model for dynamics modeling
NASA Astrophysics Data System (ADS)
Perez, Alain
1994-05-01
The wrist articulation is one of the most complex mechanical systems of the human body. It is composed of eight bones rolling and sliding along one another and along the faces of the five metacarpals of the hand and the two bones of the forearm. Wrist dynamics are fundamental for hand movement, yet the joint is so complex that it remains incompletely explored. This work is part of a new concept of computer-assisted surgery, which consists in developing computer models to refine surgical acts by predicting their consequences. The modelling of wrist dynamics is based first on a static three-dimensional model of its bones. This 3D model must optimize the collision-detection procedure, the necessary step for estimating the physical contact constraints. As many other candidate computer-vision models do not fit this problem with enough precision, a new 3D model has been developed based on the medial axis of the digital distance map of the reconstructed bone volume. The collision-detection procedure is then simplified, since contacts are detected between spheres. Experiments with this original 3D dynamic model produce realistic computer-animation images of solids in contact. It now remains to detect ligaments on digital medical images and to model them in order to complete the wrist model.
Pierce, B.; Hill, D.; Howe, S.O.
1981-01-01
The purpose of this paper is to describe and illustrate how process models developed at BNL are used to analyze industrial energy use. A model of the US pulp and paper industry is described and discussed as a specific application of process modeling methodology. Case study results from the pulp and paper model illustrate how process models can be used to analyze a variety of issues. Applications discussed include projections of energy demand, conservation technology assessment, energy-related tax policies, and sensitivity analysis. A subsequent discussion of these results supports the conclusion that industry process models are versatile and powerful tools for energy end-use modeling and conservation analysis.
Model Comparison
Penny, Will
Model Comparison: Bayes rule for models; Bayes factors; spike rates; linear models; model evidence; complexity; AIC and BIC; example; fMRI example; Bayes versus classical inference; model evidence in data space
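Since the outline above lists Bayes factors, model evidence, and AIC/BIC together, a minimal sketch may help: the BIC gives a standard large-sample approximation to the log model evidence, and the Bayes factor follows from the evidence difference. The data and both design matrices below are invented for illustration.

```python
import numpy as np

def log_evidence_bic(y, X):
    """Approximate log model evidence via -BIC/2 for a Gaussian linear model.

    BIC = n*log(RSS/n) + k*log(n); the Schwarz approximation gives
    log p(y | model) ~= -BIC/2 up to a model-independent constant.
    """
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    bic = n * np.log(rss / n) + k * np.log(n)
    return -0.5 * bic

def bayes_factor(y, X1, X2):
    """Bayes factor B12 for model 1 vs model 2 under the BIC approximation."""
    return np.exp(log_evidence_bic(y, X1) - log_evidence_bic(y, X2))

# Invented data: y really is linear in x, so the linear model should win.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(0.0, 0.1, 50)
X_lin = np.column_stack([np.ones(50), x])  # intercept + slope
X_const = np.ones((50, 1))                 # intercept only
B = bayes_factor(y, X_lin, X_const)
```

With this approximation, B12 > 1 favors model 1; here the linear model wins decisively because it explains most of the variance that the intercept-only model leaves as residual.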
C.F. Ahlers, H.H. Liu
2001-12-18
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M&O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as models used by Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and to predict flow and transport behavior under a variety of climatic and thermal-loading conditions.
C. Ahlers; H. Liu
2000-03-12
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as models used by Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and to predict flow and transport behavior under a variety of climatic and thermal-loading conditions.
Honeycomb lattice solvable models
Kyung-Hoon Kwon; Doochul Kim
1993-03-10
We construct solvable models on the honeycomb lattice by combining three faces of the square lattice solvable models into a hexagon face. These models contain two independent, anisotropy controlling, spectral parameters and their transfer matrices with different spectral parameters commute with each other. At the critical point, the finite-size scaling of the transfer matrix spectrum for the honeycomb lattice models is written in terms of the quantities obtained from the finite-size scaling of the square lattice solvable models. We study in detail the phase transition properties of two models based on the interacting hard square model and the magnetic hard square model, respectively. The models, in general, can be extended to the IRF version of the Z-invariant models of Baxter.
Independent modeling efforts often yield disparate results that are difficult to reconcile. A comparative modeling approach explores differences between models in a systematic way. In joint collaborations, a set of common population inputs is shared across all models (e.g., dissemination patterns of screening and treatment, mortality from non-cancer causes), and common sets of intermediate and final outputs are developed. Results are then compared across models.
The CISNET lung group was initiated in the second round of CISNET I and consists of five modeling teams and two affiliate members. The groups' interests include areas such as tobacco control policies, screening, and genetic susceptibility. The models incorporate the association between smoking and lung cancer in various ways, from epidemiologic models to more mechanistic models, including various versions of the two-stage clonal expansion model of carcinogenesis.
Infrared thermal prediction modeling
NASA Astrophysics Data System (ADS)
Winterberger, Walter J.; Baird, Alfred M.; Taylor, William E.
1993-04-01
An enhanced thermal prediction model has been developed for weapon system performance analyses. Model development and validation was conducted as part of an effort to quantify the adverse weather capability of autonomous precision guided munitions. The emphasis was on modeling the thermal signatures of fixed, high value targets such as bridges and power plants. The techniques developed are directly applicable to modeling target and background features in IR wavebands that are used by model-based target recognition algorithms.
Graphical Event Models Causal Event Models
Spirtes, Peter
Graphical Models · Explaining-away type reasoning: What is the probability of Burglary given AlarmSound? What is the probability of Burglary given AlarmSound and a NewsReport of an earthquake? Burglary? Earthquake? NewsReport? AlarmSound?
14. Quark Model
Krusche, Bernd
14. Quark Model. Revised December 2005 by C. Amsler (University of Zürich), T. DeGrand (University of Colorado, Boulder) and B. Krusche (University of Basel). 14.1. Quantum numbers of the quarks: Quarks are strongly interacting fermions with spin 1/2 and, by convention, positive parity
WASP TRANSPORT MODELING AND WASP ECOLOGICAL MODELING
A combination of lectures, demonstrations, and hands-on exercises will be used to introduce pollutant transport modeling with the U.S. EPA's general water quality model, WASP (Water Quality Analysis Simulation Program). WASP features include a user-friendly Windows-based interfa...
Prencipe, Giuseppe
and from a few parameters tied to the particular architecture that is used. The BSP model: a computational model in which processors execute computational steps using locally stored data and can also send and receive messages. Communication; barrier synchronization; variable grain; loosely synchronous; nonzero overhead; message passing or shared memory
Geochemistry Model Validation Report: External Accumulation Model
K. Zarrabi
2001-09-27
The purpose of this Analysis and Modeling Report (AMR) is to validate the External Accumulation Model that predicts accumulation of fissile materials in fractures and lithophysae in the rock beneath a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. (Lithophysae are voids in the rock having concentric shells of finely crystalline alkali feldspar, quartz, and other materials that were formed due to entrapped gas that later escaped; DOE 1998, p. A-25.) The intended use of this model is to estimate the quantities of external accumulation of fissile material for use in external criticality risk assessments for different types of degrading WPs: U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The scope of the model validation is to (1) describe the model and the parameters used to develop the model, (2) provide rationale for selection of the parameters by comparisons with measured values, and (3) demonstrate that the parameters chosen are the most conservative selection for external criticality risk calculations. To demonstrate the applicability of the model, a Pu-ceramic WP is used as an example. The model begins with a source term from separately documented EQ6 calculations, where the source term is defined as the composition versus time of the water flowing out of a breached WP. Next, PHREEQC is used to simulate the transport and interaction of the source term with the resident water and fractured tuff below the repository. In these simulations the primary mechanism for accumulation is mixing of the high-pH, actinide-laden source term with resident water, thus lowering the pH values sufficiently for fissile minerals to become insoluble and precipitate.
In the final section of the model, the outputs from PHREEQC are processed to produce the mass of accumulation, the density of accumulation, and the geometry of the accumulation zone. The density of accumulation and the geometry of the accumulation zone are calculated using a characterization of the fracture system based on field measurements made in the proposed repository (BSC 2001k). The model predicts that accumulation would spread out in a conical accumulation volume. The accumulation volume is represented with layers as shown in Figure 1. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance.
Effects of covert modeling and model reinforcement on assertive behavior
Alan E. Kazdin
1974-01-01
Investigated the effect of covert modeling in developing assertive behavior in 45 17-52 yr olds. Nonassertive Ss received covert modeling (imagined scenes in which a model performed assertively), covert modeling plus reinforcement (imagined scenes in which a model performed assertively and favorable consequences followed model performance), no modeling (imagined scenes with neither an assertive model nor favorable consequences), or delayed
Model Validation Status Review
E.L. Hardin
2001-11-28
The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. 
The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and engineered barriers, plus the TSPA model itself. Description of the model areas is provided in Section 3, and the documents reviewed are described in Section 4. The responsible manager for the Model Validation Status Review was the Chief Science Officer (CSO) for Bechtel-SAIC Co. (BSC). The team lead was assigned by the CSO. A total of 32 technical specialists were engaged to evaluate model validation status in the 21 model areas. The technical specialists were generally independent of the work reviewed, meeting technical qualifications as discussed in Section 5.
Hickerson, Michael J
2014-06-01
As the field of phylogeography has continued to move in the model-based direction, researchers continue struggling to construct useful models for inference. These models must be both simple enough to be tractable yet contain enough of the complexity of the natural world to make meaningful inference. Beyond constructing such models for inference, researchers explore model space and test competing models with the data on hand, with the goal of improving the understanding of the natural world and the processes underlying natural biological communities. Approximate Bayesian computation (ABC) has increased in recent popularity as a tool for evaluating alternative historical demographic models given population genetic samples. As a thorough demonstration, Pelletier & Carstens (2014) use ABC to test 143 phylogeographic submodels given geographically widespread genetic samples from the salamander species Plethodon idahoensis (Carstens et al. 2014) and, in so doing, demonstrate how the results of the ABC model choice procedure are dependent on the model set one chooses to evaluate. PMID:24931159
Practical Marginalized Multilevel Models
Griswold, Michael E.; Swihart, Bruce J.; Caffo, Brian S.; Zeger, Scott L.
2013-01-01
Clustered data analysis is characterized by the need to describe both systematic variation in a mean model and cluster-dependent random variation in an association model. Marginalized multilevel models (MMMs) embrace the robustness and interpretations of a marginal mean model, while retaining the likelihood inference capabilities and flexible dependence structures of a conditional association model. Although there has been increasing recognition of the attractiveness of marginalized multilevel models, there has been a gap in their practical application arising from a lack of readily available estimation procedures. We extend the marginalized multilevel model to allow for nonlinear functions in both the mean and association aspects. We then formulate marginal models through conditional specifications to facilitate estimation with mixed-model computational solutions already in place. We illustrate the MMM and approximate MMM approaches on a cerebrovascular deficiency crossover trial using SAS and an epidemiological study on race and visual impairment using R. Datasets and SAS and R code are included as supplemental materials. PMID:24357884
Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
2000-01-01
The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir space station. This report gives the details of the model-data comparisons -- summary results in terms of empirical model uncertainty factors that can be applied for spacecraft design applications are given in a companion report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian trapped radiation models.
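A sketch of how the empirical model uncertainty factors mentioned above could be summarized, assuming each flight comparison reduces to a single measured/predicted flux ratio; the numbers below are hypothetical, not actual AP8/AE8 comparison results.

```python
def uncertainty_factor(measured, predicted):
    """One model-data comparison: ratio of measured to predicted flux.
    A factor above 1 means the model underpredicts the environment."""
    return measured / predicted

def design_margin(comparisons):
    """Conservative design margin over many comparisons: the largest
    measured/predicted ratio seen in the flight data set."""
    return max(uncertainty_factor(m, p) for m, p in comparisons)

# Hypothetical (measured, predicted) flux pairs; not actual flight results.
margin = design_margin([(2.0e4, 1.0e4), (1.2e4, 1.0e4), (0.8e4, 1.0e4)])
```

Taking the maximum ratio is one conservative convention; an actual design application would weight comparisons by orbit, instrument, and confidence in the measurement.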
Modeling nonstationary longitudinal data.
Núñez-Antón, V; Zimmerman, D L
2000-09-01
An important theme of longitudinal data analysis in the past two decades has been the development and use of explicit parametric models for the data's variance-covariance structure. A variety of these models have been proposed, of which most are second-order stationary. A few are flexible enough to accommodate nonstationarity, i.e., nonconstant variances and/or correlations that are not a function solely of elapsed time between measurements. We review five nonstationary models that we regard as most useful: (1) the unstructured covariance model, (2) unstructured antedependence models, (3) structured antedependence models, (4) autoregressive integrated moving average and similar models, and (5) random coefficients models. We evaluate the relative strengths and limitations of each model, emphasizing when it is inappropriate or unlikely to be useful. We present three examples to illustrate the fitting and comparison of the models and to demonstrate that nonstationary longitudinal data can be modeled effectively and, in some cases, quite parsimoniously. In these examples, the antedependence models generally prove to be superior and the random coefficients models prove to be inferior. We conclude that antedependence models should be given much greater consideration than they have historically received. PMID:10985205
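As a sketch of one of the reviewed structures, a first-order antedependence (AD(1)) covariance lets both the variances and the adjacent-time correlations change over time, making it nonstationary: the correlation between two occasions is the product of the intervening adjacent correlations. The example values below are invented.

```python
import numpy as np

def ad1_covariance(sd, rho):
    """Covariance matrix of a first-order antedependence (AD(1)) process.

    sd[i]  : standard deviation at occasion i (may vary over time)
    rho[i] : correlation between occasions i and i+1 (may vary over time)
    corr(i, j) is the product of the adjacent correlations between i and j,
    so neither the variances nor the correlations need be stationary.
    """
    n = len(sd)
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            lo, hi = sorted((i, j))
            C[i, j] = float(np.prod(rho[lo:hi])) * sd[i] * sd[j]
    return C

sd = [1.0, 1.5, 2.0, 2.5]  # variances grow with time
rho = [0.8, 0.6, 0.4]      # serial correlation weakens with time
C = ad1_covariance(sd, rho)
```

Setting all sd equal and all rho equal recovers the stationary AR(1) covariance, which shows how AD(1) generalizes it.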
Modeling worldwide highway networks
NASA Astrophysics Data System (ADS)
Villas Boas, Paulino R.; Rodrigues, Francisco A.; da F. Costa, Luciano
2009-12-01
This Letter addresses the problem of modeling the highway systems of different countries by using complex networks formalism. More specifically, we compare two traditional geographical models with a modified geometrical network model where paths, rather than edges, are incorporated at each step between the origin and the destination vertices. Optimal configurations of parameters are obtained for each model and used for the comparison. The highway networks of Australia, Brazil, India, and Romania are considered and shown to be properly modeled by the modified geographical model.
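For context, the traditional geographical models that studies like this take as baselines can be sketched with a Waxman-style construction: nodes get random planar positions, and each pair is linked with a probability that decays with distance. This is a generic sketch under that assumption, not the authors' modified path-based model.

```python
import math
import random

def geographical_graph(n, alpha=1.0, seed=0):
    """Toy geographical network: n nodes at random positions in the unit
    square; each pair (i, j) is linked with probability exp(-alpha * d_ij),
    so long-distance links are exponentially suppressed."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(pos[i], pos[j])
            if rng.random() < math.exp(-alpha * d):
                edges.add((i, j))
    return pos, edges

pos, edges = geographical_graph(30, alpha=1.0, seed=1)
```

Larger alpha penalizes distance more strongly, producing sparser, more spatially local networks; the path-based modification described in the abstract would add whole origin-destination paths per step rather than single edges.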
Reiter, E.R.
1980-01-01
A highly sophisticated and accurate approach is described to compute, on an hourly or daily basis, the energy consumption for space heating by individual buildings, urban sectors, and whole cities. The need for models, specifically weather-sensitive models, composite models, and space-heating models, is discussed. Development of the Colorado State University Model, based on heat-transfer equations and on a heuristic, adaptive, self-organizing computational learning approach, is described. Results of modeling energy consumption by the cities of Minneapolis and Cheyenne are given. Some data on energy consumption in individual buildings are included.
NSDL National Science Digital Library
National Centers for Environmental Prediction, National Oceanic and Atmospheric Administration
The Marine Modeling and Analysis Branch (MMAB) of the Environmental Modeling Center is responsible for the development of improved numerical weather and marine prediction modeling systems. These models provide analysis and real-time forecast guidance on marine meteorological, oceanographic, and cryospheric parameters over the global oceans and coastal areas of the US. This site provides access to MMAB modeling tools for ocean waves (including an interactive presentation), sea ice, marine meteorology, sea surface temperature and more. The site also features a mailing list, a bibliography of publications, and information about modeling products still in the experimental and development phases.
Geller, Michael; Telem, Ofri
2015-05-15
We present the first realization of a "twin Higgs" model as a holographic composite Higgs model. Uniquely among composite Higgs models, the Higgs potential is protected by a new standard model (SM) singlet elementary "mirror" sector at the sigma model scale f and not by the composite states at m_{KK}, naturally allowing for m_{KK} beyond the LHC reach. As a result, naturalness in our model cannot be constrained by the LHC, but may be probed by precision Higgs measurements at future lepton colliders, and by direct searches for Kaluza-Klein excitations at a 100 TeV collider. PMID:26024160
NASA Technical Reports Server (NTRS)
Savage, M.; Caldwell, R. J.; Wisor, G. D.; Lewicki, D. G.
1987-01-01
A computer model has been constructed to simulate the compliance and load sharing in a spur gear mesh. The model adds the effect of rim deflections to previously developed state-of-the-art gear tooth deflection models. The effects of deflections on mesh compliance and load sharing are examined. The model can treat gear meshes composed of two external gears or an external gear driving an internal gear. The model includes deflection contributions from the bending and shear in the teeth, the Hertzian contact deformations, and primary and secondary rotations of the gear rims. The model shows that rimmed gears increase mesh compliance and, in some cases, improve load sharing.
NASA Technical Reports Server (NTRS)
Savage, M.; Caldwell, R. J.; Wisor, G. D.; Lewicki, D. G.
1986-01-01
A computer model has been constructed to simulate the compliance and load sharing in a spur gear mesh. The model adds the effect of rim deflections to previously developed state-of-the-art gear tooth deflection models. The effects of deflections on mesh compliance and load sharing are examined. The model can treat gear meshes composed of two external gears or an external gear driving an internal gear. The model includes deflection contributions from the bending and shear in the teeth, the Hertzian contact deformations, and primary and secondary rotations of the gear rims. The model shows that rimmed gears increase mesh compliance and, in some cases, improve load sharing.
NASA Technical Reports Server (NTRS)
McMann, Catherine M. (Inventor); Cohen, Gerald C. (Inventor)
1991-01-01
An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
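The aggregation of low-level reliability models according to a system architecture description can be illustrated with the classical series/parallel composition rules; the two-redundant-sensors-plus-processor architecture and the reliability values below are hypothetical, not taken from the patent.

```python
def series(*reliabilities):
    """All components must work (series): system reliability is the product."""
    r = 1.0
    for x in reliabilities:
        r *= x
    return r

def parallel(*reliabilities):
    """Redundant components (parallel): the system fails only if all fail."""
    q = 1.0
    for x in reliabilities:
        q *= 1.0 - x
    return 1.0 - q

# Hypothetical architecture description: two redundant sensors (0.9 each)
# feeding a single processor (0.99), aggregated bottom-up.
system_r = series(parallel(0.9, 0.9), 0.99)
```

Real reliability evaluation tools operate on richer low-level models (failure rates, repair, coverage), but the bottom-up aggregation driven by an architecture description follows this same compositional pattern.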
Thatcher, R.M.
1984-05-01
The Surface-To-Air Missile (SAM) Electro-Magnetic-Pulse (EMP) (SEMP) model simulates the illumination of an entire SAM brigade with an EMP weapon. It computes probability distributions of SAM brigade performance levels after an EMP attack has occurred. Brigade performance is determined by the combination of components that survive the EMP. Accordingly, the SEMP model is separated into the component failure model and the condition model. The component failure model computes the failure probability of each component in the brigade from data supplied by two input data files. The condition model converts component failure probabilities into brigade performance in the form of missile availability probability tables.
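The SEMP model's two-stage structure, a component-failure model feeding a condition model, can be sketched for a toy two-component system; the components, failure probabilities, and performance mapping below are invented for illustration and assume independent component failures.

```python
from itertools import product

def performance_distribution(fail_p, performance):
    """Probability distribution over system performance levels.

    fail_p      : per-component failure probability (component-failure model)
    performance : maps a tuple of survival flags to a level (condition model)
    Enumerates every survive/fail combination, weighting each by its
    probability under independent component failures.
    """
    dist = {}
    for state in product([0, 1], repeat=len(fail_p)):
        p = 1.0
        for survived, f in zip(state, fail_p):
            p *= (1.0 - f) if survived else f
        level = performance(state)
        dist[level] = dist.get(level, 0.0) + p
    return dist

# Toy system: a radar (fails with p=0.2) and a launcher (p=0.1);
# both must survive for full capability.
dist = performance_distribution(
    [0.2, 0.1],
    lambda s: "full" if all(s) else ("none" if not any(s) else "degraded"),
)
```

Exhaustive enumeration is exponential in the number of components, so a brigade-scale model would need sampling or factored computation, but the two-stage decomposition is the same.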
Inflation models and observation
Laila Alabidi; David Lyth
2005-12-01
We consider small-field models which invoke the usual framework of effective field theory, and large-field models which go beyond it. Present and future possibilities for discriminating between the models are assessed, on the assumption that the primordial curvature perturbation is generated during inflation. With PLANCK data, the theoretical and observational uncertainties on the spectral index will be comparable, providing useful discrimination between small-field models. Further discrimination between models may come later through the tensor fraction, the running of the spectral index, and non-Gaussianity. The prediction for the trispectrum in a generic multi-field inflation model is given for the first time.
NSDL National Science Digital Library
COMET
2006-05-16
The Marine Wave Model Matrix provides information on the formulation of wave models developed by the National Centers for Environmental Prediction (NCEP) and other modeling centers, including how these models forecast the generation, propagation, and dissipation of ocean waves using NWP model forecasts for winds and near-surface temperature and stability. Additionally, information is provided on data assimilation, post-processing of data, and verification of wave models currently in operation. Within the post-processing pages are links to forecast output in both graphical and raw form, including links for data downloads. Links to COMET training on wave processes are also provided.
Antibody modeling assessment II. Structures and models.
Teplyakov, Alexey; Luo, Jinquan; Obmolova, Galina; Malia, Thomas J; Sweet, Raymond; Stanfield, Robyn L; Kodangattil, Sreekumar; Almagro, Juan Carlos; Gilliland, Gary L
2014-08-01
To assess the state-of-the-art in antibody structure modeling, a blinded study was conducted. Eleven unpublished Fab crystal structures were used as a benchmark to compare Fv models generated by seven structure prediction methodologies. In the first round, each participant submitted three non-ranked complete Fv models for each target. In the second round, CDR-H3 modeling was performed in the context of the correct environment provided by the crystal structures with CDR-H3 removed. In this report we describe the reference structures and present our assessment of the models. Some of the essential sources of errors in the predictions were traced to the selection of the structure template, both in terms of the CDR canonical structures and VL/VH packing. On top of this, the errors present in the Protein Data Bank structures were sometimes propagated in the current models, which emphasized the need for the curated structural database devoid of errors. Modeling non-canonical structures, including CDR-H3, remains the biggest challenge for antibody structure prediction. PMID:24633955
D.W. Wu; A.J. Smith
2004-11-08
The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), TSPA-LA. The ERMYN provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs) (Section 6.2), the reference biosphere (Section 6.1.1), the human receptor (Section 6.1.2), and approximations (Sections 6.3.1.4 and 6.3.2.4); (3) Building a mathematical model using the biosphere conceptual model (Section 6.3) and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); (8) Validating the ERMYN by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).
NASA Technical Reports Server (NTRS)
Ensey, Tyler S.
2013-01-01
During my internship at NASA, I was a model developer for Ground Support Equipment (GSE). The purpose of a model developer is to develop and unit test model component libraries (fluid, electrical, gas, etc.). The models are designed to simulate software for GSE (Ground Special Power, Crew Access Arm, Cryo, Fire and Leak Detection System, Environmental Control System (ECS), etc.) before they are implemented into hardware. These models support verifying local control and remote software for End-Item Software Under Test (SUT). The model simulates the physical behavior (function, state, limits, and I/O) of each end-item and its dependencies as defined in the Subsystem Interface Table, Software Requirements & Design Specification (SRDS), Ground Integrated Schematic (GIS), and System Mechanical Schematic (SMS). The software of each specific model component is simulated through MATLAB's Simulink program. The model development life cycle is as follows: identify source documents; identify model scope; update schedule; preliminary design review; develop model requirements; update model scope; update schedule; detailed design review; create/modify library components; implement library component references; implement subsystem components; develop a test script; run the test script; develop a user's guide; send the model out for peer review; the model is sent out for verification/validation; if there is empirical data, a validation data package is generated; if there is no empirical data, a verification package is generated; the test results are then reviewed; and finally, the user requests accreditation, and a statement of accreditation is prepared. Once each component model is reviewed and approved, the components are integrated into one model. This integrated model is then itself tested, through a test script and autotest, to confirm that all component models work together for a single purpose.
The component I was assigned was a fluid component: a discrete pressure switch. The switch takes a fluid pressure input, and if the pressure is greater than a designated cutoff pressure, the switch stops fluid flow.
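The cutoff logic of such a component is simple enough to sketch. The class name, units, and cutoff value below are illustrative, and Python stands in for the actual Simulink library component:

```python
# Hypothetical sketch of a discrete pressure switch model component.
# Names and the cutoff value are illustrative, not the actual GSE library.

class DiscretePressureSwitch:
    def __init__(self, cutoff_pressure):
        self.cutoff_pressure = cutoff_pressure

    def flow_allowed(self, inlet_pressure):
        # The switch stops flow once inlet pressure exceeds the cutoff.
        return inlet_pressure <= self.cutoff_pressure

switch = DiscretePressureSwitch(cutoff_pressure=150.0)  # psi, illustrative
print(switch.flow_allowed(100.0))  # True: below cutoff, flow continues
print(switch.flow_allowed(200.0))  # False: above cutoff, flow is stopped
```

A unit test for such a component would exercise pressures on both sides of the cutoff, matching the unit-testing role described above.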
Wunsch, G
1994-01-01
The distinction between theories and models is blurred. Since models often have a theoretical content and theories are often expressed by models, the terms are sometimes used synonymously. Patterns of data may suggest theories and/or models, with the latter necessitating validation against facts. This form of scientific explanation therefore moves from data to theory, from theory to model, and from model back again to data. Theories, models, and data are considered. Theories are plausible explanatory propositions devised to link possible causes to their effects. Generally, models are schematic representations of reality or of one's view of a possible world, constructed to improve one's understanding about the world and/or to make predictions. Models are therefore intermediate between theory and data. Data are used to confirm or falsify theories and models. The author points out to the reader, however, that data do not impose themselves upon the scientist, other forms of scientific explanation exist, and some theories or models may have no link with data. Models without theory are common in applied research and they can be useful for smoothing, interpolating, and extrapolating. Theories and models should, however, be used in combination if one wishes to understand how the world works. PMID:12346076
Modeling Guru: Knowledge Base for NASA Modelers
NASA Astrophysics Data System (ADS)
Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.
2009-05-01
Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by the NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others. 
The default "wiki" document lets users edit within the browser so others can easily collaborate on the same document, even allowing the author to select those who may edit and approve the document. To maintain knowledge integrity, all documents are moderated before they are visible to the public. Modeling Guru, running on Clearspace by Jive Software, has been an active resource to the NASA modeling and HEC communities for more than a year and currently has more than 100 active users. SIVO will soon install live instant messaging support, as well as a user-customizable homepage with social-networking features. In addition, SIVO plans to implement a large dataset/file storage capability so that users can quickly and easily exchange datasets and files with one another. Continued active community participation combined with periodic software updates and improved features will ensure that Modeling Guru remains a vibrant, effective, easy-to-use tool for the NASA scientific community.
ERIC Educational Resources Information Center
Pruneau, Diane; Chouinard, Omer; Arsenault, Charline
1998-01-01
Reports on a model of environmental education that aims to encourage greater attachment to the bioregion of Arcadia. The model results from cooperation within a village community and addresses the environmental education of people of all ages. (DDR)
NASA Technical Reports Server (NTRS)
1998-01-01
Model support system and instumentation cabling of the 1% scale X-33 reaction control system model. Installed in the Unitary Plan Wind Tunnel for supersonic testing. In building 1251, test section #2.
METEOROLOGICAL AND TRANSPORT MODELING
Advanced air quality simulation models, such as CMAQ, as well as other transport and dispersion models, require accurate and detailed meteorology fields. These meteorology fields include primary 3-dimensional dynamical and thermodynamical variables (e.g., winds, temperature, mo...
Statistical Language Modelling
Gotoh, Yoshihiko; Renals, Steve
2003-01-01
of sentences. An alternative approach originates from the work of Shannon over half a century ago [41], [42]. This approach assigns probabilities to linguistic events, where mathematical models are used to represent statistical knowledge. Once models are built...
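The classic instance of assigning probabilities to linguistic events in this Shannon tradition is the n-gram model. A minimal bigram sketch (toy corpus, maximum-likelihood estimates, no smoothing; all names are illustrative):

```python
# Minimal bigram language model sketch: maximum-likelihood estimates
# P(w2 | w1) = count(w1, w2) / count(w1). Toy corpus, no smoothing.
from collections import Counter

corpus = "the cat sat on the mat".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])  # counts of words in the w1 position

def bigram_prob(w1, w2):
    # Returns 0.0 for an unseen history rather than dividing by zero.
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

print(bigram_prob("the", "cat"))  # 0.5: "the" is followed by "cat" once out of twice
```

A real system would add smoothing (e.g. backoff or interpolation) so that unseen events receive nonzero probability, which is where the statistical-knowledge modeling the abstract mentions becomes substantive.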
NSDL National Science Digital Library
David Joiner
Monte Carlo modeling refers to the solution of mathematical problems with the use of random numbers. This can include both function integration and the modeling of stochastic phenomena using random processes.
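A standard illustration of "function integration with random numbers" is estimating pi from the area of a quarter circle; this sketch is for orientation only and is not part of the library resource itself:

```python
# Monte Carlo integration sketch: estimate pi as 4 * (fraction of random
# points in the unit square that fall inside the quarter circle x^2+y^2<=1).
import random

random.seed(0)
n = 100_000
hits = sum(1 for _ in range(n)
           if random.random() ** 2 + random.random() ** 2 <= 1.0)
pi_estimate = 4.0 * hits / n
print(pi_estimate)  # close to 3.14; error shrinks like 1/sqrt(n)
```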
Dimer models for parallelograms
Kazushi Ueda; Masahito Yamazaki
2010-01-25
We discuss the relation between dimer models and coamoebas associated with lattice parallelograms. We also discuss homological mirror symmetry for the product of two projective lines, emphasizing the role of a non-isoradial dimer model.
NSDL National Science Digital Library
2014-09-14
The goal of this training module is to help you increase your understanding of how mesoscale models work. Such understanding, in turn, can help you more efficiently and accurately evaluate model-generated forecast products.
Until recently, sediment geochemical models (diagenetic models) have been only able to explain sedimentary flux and concentration profiles for a few simplified geochemical cycles (e.g., nitrogen, carbon and sulfur). However with advances in numerical methods, increased accuracy ...
Melanoma Risk Prediction Models
The following risk prediction models are intended primarily for research use and have been peer-reviewed, meaning the methodology and results of these models have been evaluated by qualified scientists and clinicians and published in scientific and medical journals.
Christoudias, Chris Mario
2003-04-18
Statistical shape and texture appearance models are powerful image representations, but previously had been restricted to 2D or simple 3D shapes. In this paper we present a novel 3D morphable model based on image-based ...
Exposure Analysis Modeling System
The Exposure Analysis Modeling System (EXAMS) is an interactive software application for formulating aquatic ecosystem models and evaluating the fate, transport, and exposure concentrations of synthetic organic chemicals including pesticides, industrial materials, and leachates f...
Bounding Species Distribution Models
NASA Technical Reports Server (NTRS)
Stohlgren, Thomas J.; Jarnevich, Catherine S.; Morisette, Jeffrey T.; Esaias, Wayne E.
2011-01-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].
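The bounding idea described, clamping each predictor to the extremes of the training data before extrapolating, can be sketched as follows. The arrays and predictor meanings are invented for illustration; the study's actual predictors and CART/Maxent models are not reproduced here:

```python
# Sketch of "clamping" model extrapolations: bound each environmental
# predictor to the min/max observed in the training data before it is
# fed to the fitted species distribution model. Data are illustrative.
import numpy as np

train = np.array([[10.0, 200.0],   # e.g. temperature, precipitation
                  [25.0, 800.0],
                  [18.0, 500.0]])
lo, hi = train.min(axis=0), train.max(axis=0)

def clamp(predictors):
    # Values outside the training envelope are pulled back to its bounds.
    return np.clip(predictors, lo, hi)

new_site = np.array([30.0, 100.0])   # beyond the training bounds
print(clamp(new_site))               # [ 25. 200.]
```

The "most conservative bounding" the authors favor would go further, masking out-of-envelope sites entirely rather than merely clamping them.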
Bounding species distribution models
Stohlgren, T.J.; Jarnevich, C.S.; Esaias, W.E.; Morisette, J.T.
2011-01-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used. © 2011 Current Zoology.
Chou, Danielle, 1981-
2004-01-01
The drive behind improved friction models has been better prediction and control of dynamic systems. The earliest model was of classical Coulomb friction; however, the discontinuity during force reversal of the Coulomb ...
NSDL National Science Digital Library
Center for Engineering Educational Outreach, Tufts University
2012-04-21
Students create models of objects of their choice, teaching them skills and giving them practice in techniques used by professionals. They use sketches as they build their objects. This activity facilitates a discussion on models and their usefulness.
Agena, S M; Pusey, M L; Bogle, I D
1999-07-20
A thermodynamic framework (UNIQUAC model with temperature dependent parameters) is applied to model the salt-induced protein crystallization equilibrium, i.e., protein solubility. The framework introduces a term for the solubility product describing protein transfer between the liquid and solid phase and a term for the solution behavior describing deviation from ideal solution. Protein solubility is modeled as a function of salt concentration and temperature for a four-component system consisting of a protein, pseudo solvent (water and buffer), cation, and anion (salt). Two different systems, lysozyme with sodium chloride and concanavalin A with ammonium sulfate, are investigated. Comparison of the modeled and experimental protein solubility data results in an average root mean square deviation of 5.8%, demonstrating that the model closely follows the experimental behavior. Model calculations and model parameters are reviewed to examine the model and protein crystallization process. PMID:10397850
Miller, Daniel Evan
2014-12-10
skeletal models from 3D data sets. Unfortunately, there is usually not a single correct definition for what makes a good skeleton, and different methods will produce different skeletal models from a given input. Furthermore, for many scanned data sets...
Quantile Models with Endogeneity
Chernozhukov, Victor V.
In this article, we review quantile models with endogeneity. We focus on models that achieve identification through the use of instrumental variables and discuss conditions under which partial and point identification are ...
NASA Technical Reports Server (NTRS)
Butler, Thomas G.
1987-01-01
Methods of modeling mass for bars are surveyed. A method for extending John Archer's concept of consistent mass beyond just translational inertia effects is included. Recommendations are given for various types of modeling situations.
ERIC Educational Resources Information Center
Brinner, Bonnie
1992-01-01
Presents an activity in which models help students visualize both the DNA process and transcription. After constructing DNA, RNA messenger, and RNA transfer molecules; students model cells, protein synthesis, codons, and RNA movement. (MDH)
NASA Technical Reports Server (NTRS)
Agena, S. M.; Pusey, M. L.; Bogle, I. D.
1999-01-01
A thermodynamic framework (UNIQUAC model with temperature dependent parameters) is applied to model the salt-induced protein crystallization equilibrium, i.e., protein solubility. The framework introduces a term for the solubility product describing protein transfer between the liquid and solid phase and a term for the solution behavior describing deviation from ideal solution. Protein solubility is modeled as a function of salt concentration and temperature for a four-component system consisting of a protein, pseudo solvent (water and buffer), cation, and anion (salt). Two different systems, lysozyme with sodium chloride and concanavalin A with ammonium sulfate, are investigated. Comparison of the modeled and experimental protein solubility data results in an average root mean square deviation of 5.8%, demonstrating that the model closely follows the experimental behavior. Model calculations and model parameters are reviewed to examine the model and protein crystallization process. Copyright 1999 John Wiley & Sons, Inc.
We developed a simplified spreadsheet modeling approach for characterizing and prioritizing sources of sediment loadings from watersheds in the United States. A simplified modeling approach was developed to evaluate sediment loadings from watersheds and selected land segments. ...
Sauper, Christina Joan
We present a probabilistic topic model for jointly identifying properties and attributes of social media review snippets. Our model simultaneously learns a set of properties of a product and captures aggregate user sentiments ...
NASA Technical Reports Server (NTRS)
Holland, L. D.; Walsh, J. R., Jr.; Wetherington, R. D.
1971-01-01
This report presents the results of work on communications systems modeling and covers three different areas of modeling. The first of these deals with the modeling of signals in communication systems in the frequency domain and the calculation of spectra for various modulations. These techniques are applied in determining the frequency spectra produced by a unified carrier system, the down-link portion of the Command and Communications System (CCS). The second modeling area covers the modeling of portions of a communication system on a block basis. A detailed analysis and modeling effort based on control theory is presented along with its application to modeling of the automatic frequency control system of an FM transmitter. A third topic discussed is a method for approximate modeling of stiff systems using state variable techniques.
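The first modeling area, calculating the frequency spectrum produced by a modulation scheme, can be illustrated numerically with an FFT. The FM parameters below are invented for the sketch; this is not the CCS down-link analysis itself:

```python
# Frequency-domain view of a modulated signal: generate an FM waveform
# and inspect its spectrum with an FFT. All parameters are illustrative.
import numpy as np

fs = 1000.0                       # sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)   # one second of samples
fc, fm = 100.0, 5.0               # carrier and modulating frequencies, Hz
beta = 2.0                        # FM modulation index
signal = np.cos(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))

spectrum = np.abs(np.fft.rfft(signal)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)  # energy concentrates at the carrier and its fm-spaced sidebands
```

For this modulation index the strongest lines are the first sidebands at fc ± fm, a Bessel-function effect that a purely time-domain view would not reveal.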
ERIC Educational Resources Information Center
Bennett, Joan
1998-01-01
Recommends the use of a model of DNA made out of Velcro to help students visualize the steps of DNA replication. Includes a materials list, construction directions, and details of the demonstration using the model parts. (DDR)
NASA Astrophysics Data System (ADS)
Tashiro, Tohru
2014-03-01
We propose a new model of product diffusion that includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters are people who do (not) possess the product. This effect is absent from the Bass model. As an application, we fit the model to iPod sales data and obtain better agreement than the Bass model provides.
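For reference, the classic Bass model the authors extend evolves cumulative adopters N by dN/dt = (p + qN/m)(m - N), with innovation rate p, imitation rate q, and market size m. A simple Euler-integration sketch (illustrative parameters, not the authors' fit to the iPod data):

```python
# Euler integration of the classic Bass diffusion model:
#   dN/dt = (p + q*N/m) * (m - N)
# p: innovation (external influence), q: imitation (word of mouth),
# m: market potential. Parameter values are illustrative.

def bass_trajectory(p, q, m, steps, dt=0.1):
    n, out = 0.0, []
    for _ in range(steps):
        n += (p + q * n / m) * (m - n) * dt
        out.append(n)
    return out

traj = bass_trajectory(p=0.03, q=0.4, m=1000.0, steps=200)
print(traj[-1])  # adoption rises along an S-curve toward the market size m
```

The memory effect proposed in the abstract would make the adoption rate depend on a non-adopter's accumulated past contacts, not just the current adopter fraction as here.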
NASA Technical Reports Server (NTRS)
Figueroa-Feliciano, Enectali
2004-01-01
We have developed a software suite that models complex calorimeters in the time and frequency domain. These models can reproduce all measurements that we currently do in a lab setting, like IV curves, impedance measurements, noise measurements, and pulse generation. Since all these measurements are modeled from one set of parameters, we can fully describe a detector and characterize its behavior. This leads to a model that can be used effectively for engineering and design of detectors for particular applications.
Paleoclimate Model Output Visualization
NSDL National Science Digital Library
National Climatic Data Center
This site from the National Climatic Data Center features a tool that displays map images for paleoclimate simulations from various agencies, including COHMAP (CCMO and CCM1), the NASA Goddard Institute for Space Studies (GISS Model II and II') and NCAR. There is also a link to a page containing visualization tools for models from the Paleoclimate Modelling Intercomparison Project(PMIP) database. Each model is extensively documented, and a database is also included.
CISNET: Esophageal Cancer Modeling
The CISNET esophageal cancer group was formed in 2010 in the third round of CISNET funding with three distinct modeling teams focused on collaboratively modeling the incidence and mortality of esophageal adenocarcinoma (EAC) in the US population. The group’s work will include performing collaborative modeling of the natural history models of esophageal adenocarcinoma which will include precursor states such as Barrett’s esophagus and dysplasia that are calibrated to US SEER data.
J. Wang
2003-06-24
The purpose of this Model Report is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Office of Repository Development (ORD). The UZ contains the unsaturated rock layers overlying the repository and host unit, which constitute a natural barrier to flow, and the unsaturated rock layers below the repository which constitute a natural barrier to flow and transport. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Performance Assessment Unsaturated Zone'' (BSC 2002 [160819], Section 1.10.8 [under Work Package (WP) AUZM06, Climate Infiltration and Flow], and Section I-1-1 [in Attachment I, Model Validation Plans]). In Section 4.2, four acceptance criteria (ACs) are identified for acceptance of this Model Report; only one of these (Section 4.2.1.3.6.3, AC 3) was identified in the TWP (BSC 2002 [160819], Table 3-1). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, and drift-scale and mountain-scale coupled-process models from the UZ Flow, Transport and Coupled Processes Department in the Natural Systems Subproject of the Performance Assessment (PA) Project. The Calibrated Properties Model output will also be used by the Engineered Barrier System Department in the Engineering Systems Subproject. The Calibrated Properties Model provides input through the UZ Model and other process models of natural and engineered systems to the Total System Performance Assessment (TSPA) models, in accord with the PA Strategy and Scope in the PA Project of the Bechtel SAIC Company, LLC (BSC). The UZ process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions. UZ flow is a TSPA model component.
Future of groundwater modeling
Langevin, Christian D.; Panday, Sorab
2012-01-01
With an increasing need to better manage water resources, the future of groundwater modeling is bright and exciting. However, while the past can be described and the present is known, the future of groundwater modeling, just like a groundwater model result, is highly uncertain and any prediction is probably not going to be entirely representative. Thus we acknowledge this as we present our vision of where groundwater modeling may be headed.
Nonlinear Modeling by Assembling Piecewise Linear Models
NASA Technical Reports Server (NTRS)
Yao, Weigang; Liou, Meng-Sing
2013-01-01
To preserve nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains robust and accurate for inputs considerably different in form and magnitude from the base trajectory. This modeling preserves nonlinearity of the problems considered in a rather simple and accurate manner.
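The assembly idea, local first-order Taylor models blended by radial basis function weights, can be shown on a one-dimensional toy function. This is only a scalar stand-in; the paper's actual aerodynamic reduced-order model is far richer:

```python
# Toy sketch: blend piecewise linear (first-order Taylor) local models
# with normalized Gaussian RBF weights. sin(x) stands in for the full
# order system; all sampling states and widths are illustrative.
import numpy as np

f = np.sin                               # "full order" response
centers = np.linspace(0.0, np.pi, 5)     # sampling states
slopes = np.cos(centers)                 # derivatives at the samples

def blended(x, width=0.5):
    # Gaussian RBF weights, normalized to sum to one.
    w = np.exp(-((x - centers) / width) ** 2)
    w /= w.sum()
    # Local linearizations f(c) + f'(c) * (x - c), one per sampling state.
    local = f(centers) + slopes * (x - centers)
    return float(np.dot(w, local))

x = 1.1
print(abs(blended(x) - np.sin(x)))  # the blend stays close to the curve
```

Denser sampling states or narrower widths tighten the fit; the trade-off between sample count and accuracy is the central design choice in this kind of assembly.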
Aerosol Modeling for the Global Model Initiative
NASA Technical Reports Server (NTRS)
Weisenstein, Debra K.; Ko, Malcolm K. W.
2001-01-01
The goal of this project is to develop an aerosol module to be used within the framework of the Global Modeling Initiative (GMI). The model development work will be performed jointly by the University of Michigan and AER, using existing aerosol models at the two institutions as starting points. The GMI aerosol model will be tested, evaluated against observations, and then applied to assessment of the effects of aircraft sulfur emissions as needed by the NASA Subsonic Assessment in 2001. The work includes the following tasks: 1. Implementation of the sulfur cycle within GMI, including sources, sinks, and aqueous conversion of sulfur. Aerosol modules will be added as they are developed and the GMI schedule permits. 2. Addition of aerosol types other than sulfate particles, including dust, soot, organic carbon, and black carbon. 3. Development of new and more efficient parameterizations for treating sulfate aerosol nucleation, condensation, and coagulation among different particle sizes and types.
Aggregation in ecosystem models and model stability
NASA Astrophysics Data System (ADS)
Giricheva, Evgeniya
2015-05-01
Using a multimodel approach to ecosystem research improves the use of available information about the object of study. This study presents several models of the Bering Sea ecosystem. The ecosystem is first considered as a closed object, that is, the influence of the environment is not included. We then add links with the external medium to the models. The models differ in the degree and method of grouping components. Our method is based on differences in the habitat and food sources of groups, which allows us to determine which grouping of species has the greater effect on system dynamics. In particular, we determine whether benthic fish aggregation or pelagic fish aggregation can change the consumption structure of some groups of species, and consequently, the behavior of the entire model system.
ERIC Educational Resources Information Center
Masin, Sergio Cesare; Busetto, Martina
2010-01-01
The study reports empirical tests of Anderson's, Haubensak's, Helson's, and Parducci's rating models when two end anchors are used for rating. The results show that these models cannot predict the judgment effect called here the Dai Pra effect. It is shown that an extension of Anderson's model is consistent with this effect. The results confirm…
NSDL National Science Digital Library
Betty J. Blecha
This site contains 21 modular, easy-to-use economic models that are appropriate for class assignments or in-class demonstrations. Students can simulate all the standard models taught in most economics courses. EconModel uses the Windows OS. The simulations were developed by William R. Parke of the University of North Carolina at Chapel Hill.
ERIC Educational Resources Information Center
Goodman, Richard E.
1970-01-01
Describes types of molecular models (ball-and-stick, framework, and space-filling) and evaluates commercially available kits. Gives instructions for constructing models from polystyrene balls and pipe cleaners. Models are useful for class demonstrations although not sufficiently accurate for research use. Illustrations show biologically important…
PRZM3 is a modeling system that links two subordinate models - PRZM and VADOFT to predict pesticide transport and transformation down through the crop root and unsaturated zone. PRZM3 includes modeling capabilities for such phenomena as soil temperature simulation, vo...
Carlos Iván Chesñevar; Ana Gabriela Maguitman; Ronald Prescott Loui
2000-01-01
Logical models of argument formalize commonsense reasoning while taking process and computation seriously. This survey discusses the main ideas that characterize different logical models of argument. It presents the formal features of a few main approaches to the modeling of argumentation. We trace the evolution of argumentation from the mid-1980s, when argument systems emerged as an
GTM is an economic model capable of examining global forestry land-use, management, and trade responses to policies. In responding to a policy, the model captures afforestation, forest management, and avoided deforestation behavior. The model estimates harvests in industrial fore...
Damping models in elastography
Matthew D. J. McGarry; Hans-Uwe Berger; Elijah E. W. Van Houten
2007-01-01
Current optimization based Elastography reconstruction algorithms encounter difficulties when the motion approaches resonant conditions, where the model does a poor job of approximating the real behavior of the material. Model accuracy can be improved through the addition of damping effects. These effects occur in-vivo due to the complex interaction between microstructural elements of the tissue; however reconstruction models are typically
Penny, Will
Lecture slides on empirical Bayes by Will Penny (3rd March), covering: linear models for fMRI analysis; gradient ascent and online learning (the delta rule, Newton's method); Bayesian linear models and MAP learning; MEG source reconstruction; maximum likelihood and the augmented form; and the ReML objective function, with references.
ERIC Educational Resources Information Center
Harris, Mary B.
To investigate the effect of modeling on altruism, 156 third and fifth grade children were exposed to a model who either shared with them, gave to a charity, or refused to share. The test apparatus, identified as a game, consisted of a box with signal lights and a chute through which marbles were dispensed. Subjects and the model played the game…
ERIC Educational Resources Information Center
Fedorov, Alexander
2011-01-01
The author supposed that media education models can be divided into the following groups: (1) educational-information models (the study of the theory, history, language of media culture, etc.), based on the cultural, aesthetic, semiotic, socio-cultural theories of media education; (2) educational-ethical models (the study of moral, religions,…
ERIC Educational Resources Information Center
Walsh, Jim; McGehee, Richard
2013-01-01
A dynamical systems approach to energy balance models of climate is presented, focusing on low order, or conceptual, models. Included are global average and latitude-dependent, surface temperature models. The development and analysis of the differential equations and corresponding bifurcation diagrams provides a host of appropriate material for…
Elementary Teacher Training Models.
ERIC Educational Resources Information Center
Blewett, Evelyn J., Ed.
This collection of articles contains descriptions of nine elementary teacher training program models conducted at universities throughout the United States. The articles include the following: (a) "The University of Toledo Model Program," by George E. Dickson; (b) "The Florida State University Model Program," by G. Wesley Sowards; (c) "The…
Two Cognitive Modeling Frontiers
NASA Astrophysics Data System (ADS)
Ritter, Frank E.
This paper reviews three hybrid cognitive architectures (Soar, ACT-R, and CoJACK) and how they can support the inclusion of models of emotions. Creating models in these architectures remains difficult, which is both a research and an engineering problem. Thus, the term cognitive science engineering is introduced for an area that would support making models easier to create, understand, and re-use.
Modeling and Remodeling Writing
ERIC Educational Resources Information Center
Hayes, John R.
2012-01-01
In Section 1 of this article, the author discusses the succession of models of adult writing that he and his colleagues have proposed from 1980 to the present. He notes the most important changes that differentiate earlier and later models and discusses reasons for the changes. In Section 2, he describes his recent efforts to model young…
K. Bardakci; M. B. Halpern
1971-01-01
On the basis of new representations of the projective group, we construct some new dual quark models whose spin and internal symmetry are not multiplicative. One model is a factorized theory of exotic states with broken exchange degeneracy, ninth mesons being optional. The exotic states are suppressed three units below the Pomeranchon. In another model, with spin-orbit coupling and curved
Mathew Hahn; David Rogers
A number of methods, called receptor mapping techniques, attempt to provide insight about the putative active site and to characterize receptor binding requirements. Often, receptor mapping techniques are used to generate a hypothetical model of the actual receptor site. This is known as a receptor site model. In this chapter, we describe a specific type of receptor site model called
Introduction & Scope Model & Calibration
Kuhn, Matthew R.
Slide outline: 1. Introduction & Scope; 2. Principles; 3. Model & Calibration; 4. Applications. Kuhn, June 4, 2007 (from EMD2007.pdf).
Pascal Fua; Yvan G. Leclerc
1989-01-01
Standard edge detectors fail to find most relevant edges, finding either too many or too few, because they lack a geometric model to guide their search. We present a technique that integrates both photometric and geometric models with an initial estimate of the boundary. The strength of this approach lies in the ability of the geometric model to overcome various
QUALITATIVE ECOLOGICAL MODELING
Technology Transfer Automated Retrieval System (TEKTRAN)
Students construct qualitative models of an ecosystem and use the models to evaluate the direct and indirect effects that may result from perturbations to the ecosystem. Qualitative modeling is described for use in two procedures, each with different educational goals and student backgrounds in min...
NSDL National Science Digital Library
Shodor
This model can be used to create a virtual population to observe how different factors might affect the spread of a disease. Scientists often use computer models to study complicated phenomena like epidemics. This model is a simplified simulation of any disease that is spread through human contact.
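The kind of simulation described above, a population in which disease spreads through human contact, can be sketched as a minimal discrete-time SIR (susceptible-infectious-recovered) model. All parameter values below are illustrative assumptions, not values taken from the Shodor tool:

```python
def simulate_sir(population=1000, initial_infected=1,
                 contact_rate=0.3, recovery_rate=0.1, days=100):
    """Discrete-time SIR epidemic sketch; parameters are illustrative."""
    s = float(population - initial_infected)  # susceptible
    i = float(initial_infected)               # infectious
    r = 0.0                                   # recovered
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = contact_rate * s * i / population
        new_recoveries = recovery_rate * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = simulate_sir()
peak_infected = max(i for _, i, _ in history)
```

Changing the contact rate (e.g., to model social distancing) shifts the height and timing of the epidemic peak, which is the kind of factor the virtual population is meant to let students explore.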
ERIC Educational Resources Information Center
Fitzsimmons, Charles P.
1986-01-01
Points out the instructional applications and program possibilities of a unit on model rocketry. Describes the ways that microcomputers can assist in model rocket design and in problem calculations. Provides a descriptive listing of model rocket software for the Apple II microcomputer. (ML)
Timothy F. Cootes; Gareth J. Edwards; Christopher J. Taylor
1998-01-01
We describe a new method of matching statistical models of appearance to images. A set of model parameters control modes of shape and gray-level variation learned from a training set. We construct an efficient iterative matching algorithm by learning the relationship between perturbations in the model parameters and the induced image errors.
Crushed Salt Constitutive Model
Callahan, G.D.
1999-02-01
The constitutive model used to describe the deformation of crushed salt is presented in this report. Two mechanisms -- dislocation creep and grain boundary diffusional pressure solution -- are combined to form the basis for the constitutive model governing the deformation of crushed salt. The constitutive model is generalized to represent three-dimensional states of stress. Upon complete consolidation, the crushed-salt model reproduces the Multimechanism Deformation (M-D) model typically used for the Waste Isolation Pilot Plant (WIPP) host geological formation salt. New shear consolidation tests are combined with an existing database that includes hydrostatic consolidation and shear consolidation tests conducted on WIPP and southeastern New Mexico salt. Nonlinear least-squares model fitting to the database produced two sets of material parameter values for the model -- one for the shear consolidation tests and one for a combination of the shear and hydrostatic consolidation tests. Using the parameter values determined from the fitted database, the constitutive model is validated against constant strain-rate tests. Shaft seal problems are analyzed to demonstrate model-predicted consolidation of the shaft seal crushed-salt component. Based on the fitting statistics, the ability of the model to predict the test data, and the ability of the model to predict load paths and test data outside of the fitted database, the model appears to capture the creep consolidation behavior of crushed salt reasonably well.
Modeling rapidly rotating stars
M. Rieutord
2007-02-14
We review the quest of modeling rapidly rotating stars during the past 40 years and detail the challenges to be taken up by models facing new data from interferometry, seismology, spectroscopy... We then present the progress of the ESTER project aimed at giving a physically self-consistent model for the structure and evolution of rapidly rotating stars.
Kuhn, Matthew R.
Slides: Simulating undrained loading of sand with the discrete element method (DEM). University of Portland, EMI 2012 Conference, South Bend, Indiana, June 18-20, 2012. Section headers: Model definition; DEM summary; Simple-shear; Nevada Sand particles and contacts.
Model Breaking Points Conceptualized
ERIC Educational Resources Information Center
Vig, Rozy; Murray, Eileen; Star, Jon R.
2014-01-01
Current curriculum initiatives (e.g., National Governors Association Center for Best Practices and Council of Chief State School Officers 2010) advocate that models be used in the mathematics classroom. However, despite their apparent promise, there comes a point when models break, a point in the mathematical problem space where the model cannot,…
Modeling EERE Deployment Programs
Cort, K.A.; Hostick, D.J.; Belzer, D.B.; Livingston, O.V.
2007-11-01
This report compiles information and conclusions gathered as part of the “Modeling EERE Deployment Programs” project. The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address possible improvements to the modeling process, and note gaps in knowledge in which future research is needed.
General Graded Response Model.
ERIC Educational Resources Information Center
Samejima, Fumiko
This paper describes the graded response model. The graded response model represents a family of mathematical models that deal with ordered polytomous categories, such as: (1) letter grading; (2) an attitude survey with "strongly disagree, disagree, agree, and strongly agree" choices; (3) partial credit given in accord with an individual's degree…
ERIC Educational Resources Information Center
Bogiages, Christopher A.; Lotter, Christine
2011-01-01
In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…
Jean Paelinck
1970-01-01
Urban growth can be approached from three viewpoints: empirical analysis of observed data (either through time, or cross-sectionally), theoretical analysis of mathematical models, and simulation by means of empirical models. A final synthesis would yield a full-fledged econometric analysis. In this paper the second approach is applied in order to discover certain properties of moderately disaggregated models of urban growth.
Reasoning and Formal Modelling
Löwe, Benedikt
Reasoning and Formal Modelling for Forensic Science, Lecture 10. Prof. Dr. Benedikt Löwe, 2nd Semester 2010/11.
Jeremy Manson; William Pugh; Sarita V. Adve
2005-01-01
This paper describes the new Java memory model, which has been revised as part of Java 5.0. The model specifies the legal behaviors for a multithreaded program; it defines the semantics of multithreaded Java programs and partially determines legal implementations of Java virtual machines and compilers. The new Java model provides a simple interface for correctly synchronized programs -- it guarantees
Harrison, A.K.
1997-03-14
We have identified the Cranfill multifluid turbulence model (Cranfill, 1992) as a starting point for development of subgrid models of instability, turbulent and mixing processes. We have differenced the closed system of equations in conservation form, and coded them in the object-oriented hydrodynamics code FLAG, which is to be used as a testbed for such models.
Modeling the calcite Lysocline
David Archer
1991-01-01
A numerical model of calcite dissolution in contact with sediment pore water is used to predict the depth and shape of the calcite lysocline in the deep sea. Model results are compared with lysocline data from 13 regions in the Atlantic, Pacific, and Indian Oceans. The model lysocline shape is sensitive to the calcite dissolution rate constant, the calcite, organic
ERIC Educational Resources Information Center
Thornton, Bradley D.; Smalley, Robert A.
2008-01-01
Building information modeling (BIM) uses three-dimensional modeling concepts, information technology and interoperable software to design, construct and operate a facility. However, BIM can be more than a tool for virtual modeling--it can provide schools with a 3-D walkthrough of a project while it still is on the electronic drawing board. BIM can…
NSDL National Science Digital Library
NCTM Illuminations
2000-01-01
This tool allows the individual or the classroom to explore several representations of fractions. After selecting numerator and denominator, any number from 1 to 100, learners see the fraction itself, a visual model, as well as decimal and percent equivalents. They can choose the model to be a circle, a rectangle, or a set model.
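The conversions the tool displays can be sketched in a few lines of Python; the function and dictionary keys below are illustrative, not part of the Illuminations tool:

```python
from fractions import Fraction

def fraction_views(numerator, denominator):
    """Return the representations the tool shows for a fraction:
    as entered, in simplified form, as a decimal, and as a percent."""
    f = Fraction(numerator, denominator)
    return {
        "fraction": f"{numerator}/{denominator}",
        "simplified": f"{f.numerator}/{f.denominator}",
        "decimal": numerator / denominator,
        "percent": 100 * numerator / denominator,
    }

views = fraction_views(50, 100)  # simplified: "1/2", decimal: 0.5, percent: 50.0
```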
ERIC Educational Resources Information Center
Summerlin, Lee; Borgford, Christie
1989-01-01
Described is an activity which uses a 96-well reaction plate and soda straws to construct a model of the periodic table of the elements. The model illustrates the ionization energies of the various elements. Construction of the model and related concepts are discussed. (CW)
Reasoning and Formal Modelling
Löwe, Benedikt
Reasoning and Formal Modelling for Forensic Science: a quick overview of the development of logic. Prof. Dr. Benedikt Löwe, 2nd Semester 2010/11.
Reasoning and Formal Modelling
Löwe, Benedikt
Reasoning and Formal Modelling for Forensic Science, Lecture 7. Prof. Dr. Benedikt Löwe, 2nd Semester 2010/11.
NSDL National Science Digital Library
John Nielsen-Gammon
1996-01-01
This undergraduate meteorology tutorial from Texas A&M University describes the common sources of weather forecasting computer model error, ways to identify model error, and how to correct a forecast for some simple types of error. Model sensitivity to parameterization and topography are covered.
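One of the simplest error types such a tutorial can cover is systematic bias: a model that runs consistently too warm or too cold can be corrected by subtracting its mean signed error over a verification period. A minimal sketch, assuming a constant additive bias; all numbers are made up:

```python
def mean_bias(forecasts, observations):
    """Average signed forecast error over past cases."""
    errors = [f - o for f, o in zip(forecasts, observations)]
    return sum(errors) / len(errors)

def bias_corrected(new_forecast, forecasts, observations):
    """Remove the systematic component of model error from a new forecast."""
    return new_forecast - mean_bias(forecasts, observations)

# A model that has run consistently 2 degrees too warm:
past_forecasts = [12.0, 15.0, 9.0, 20.0]
past_observations = [10.0, 13.0, 7.0, 18.0]
corrected = bias_corrected(25.0, past_forecasts, past_observations)  # 23.0
```

This only handles the systematic part; sensitivity to parameterization and topography, as the tutorial notes, produces situation-dependent errors that a constant correction cannot fix.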
Lightning Return Stroke Models
Y. T. Lin; M. A. Uman; R. B. Standler
1980-01-01
We test the two most commonly used lightning return stroke models, Bruce-Golde and transmission line, against subsequent stroke electric and magnetic field wave forms measured simultaneously at near and distant stations and show that these models are inadequate to describe the experimental data. We then propose a new return stroke model that is physically plausible and that yields good approximations
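The two models being tested differ only in how the channel-base current is assumed to propagate up the channel: Bruce-Golde takes the current uniform below the upward-moving front, while the transmission-line model propagates the base waveform upward unchanged at speed v. A sketch of both, with an illustrative double-exponential base current (the parameter values are assumptions, not the paper's):

```python
import math

def base_current(t, i_peak=10e3, a=2e4, b=2e5):
    """Double-exponential channel-base current (illustrative parameters)."""
    return i_peak * (math.exp(-a * t) - math.exp(-b * t)) if t >= 0 else 0.0

def current_bg(z, t, v=1.5e8):
    """Bruce-Golde: below the front the current equals the base current
    everywhere at once; above the front it is zero."""
    return base_current(t) if z <= v * t else 0.0

def current_tl(z, t, v=1.5e8):
    """Transmission line: the base current propagates up unchanged at speed v."""
    return base_current(t - z / v)
```

Given the current distribution along the channel, the remote electric and magnetic fields follow from standard dipole integrals over the channel, which is what the measured wave forms are compared against.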
C. Lum
2004-09-16
The purpose of this model report is to document the Rock Properties Model version 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties model provides mean matrix and lithophysae porosity, and the cross-correlated mean bulk density as direct input to the ''Saturated Zone Flow and Transport Model Abstraction'', MDL-NBS-HS-000021, REV 02 (BSC 2004 [DIRS 170042]). The constraints, caveats, and limitations associated with this model are discussed in Sections 6.6 and 8.2. Model validation accomplished by corroboration with data not cited as direct input is discussed in Section 7. The revision of this model report was performed as part of activities being conducted under the ''Technical Work Plan for: The Integrated Site Model, Revision 05'' (BSC 2004 [DIRS 169635]). The purpose of this revision is to bring the report up to current procedural requirements and address the Regulatory Integration Team evaluation comments. The work plan describes the scope, objectives, tasks, methodology, and procedures for this process.
NSDL National Science Digital Library
Tony Weisstein (Truman State University; Biology)
2005-12-16
This worksheet compares user-input growth data with predictions under linear, exponential, and logistic models of growth. Students can input parameters for each model; the program graphs the results and computes a crude goodness-of-fit measure. Introduces concepts of modeling and statistical analysis that can be more thoroughly explored using standard statistics software (JMP, SAS, etc.)
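The worksheet's comparison can be sketched by fitting two of the growth laws by least squares and comparing a crude sum-of-squared-errors measure, as the worksheet's goodness-of-fit does. The logistic case is omitted here, and the function names are illustrative:

```python
import math

def linear_fit(xs, ys):
    """Ordinary least squares for y = m*x + c via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return m, c

def sse(ys, preds):
    """Crude goodness-of-fit: sum of squared errors."""
    return sum((y - p) ** 2 for y, p in zip(ys, preds))

# Data that actually grows exponentially:
ts = list(range(10))
ys = [2.0 * 1.3 ** t for t in ts]

# Linear model fitted directly; exponential model fitted on log(y).
m, c = linear_fit(ts, ys)
sse_linear = sse(ys, [m * t + c for t in ts])

slope, intercept = linear_fit(ts, [math.log(y) for y in ys])
sse_exponential = sse(ys, [math.exp(intercept + slope * t) for t in ts])
```

For exponentially growing data the exponential model's error is essentially zero while the straight line leaves systematic residuals, which is the contrast the worksheet is built to show.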
Technology Transfer Automated Retrieval System (TEKTRAN)
Adsorption-desorption reactions are important processes that affect the transport of contaminants in the environment. Surface complexation models are chemical models that can account for the effects of variable chemical conditions, such as pH, on adsorption reactions. These models define specific ...
NASA Technical Reports Server (NTRS)
Jongeward, Gary
1992-01-01
The topics covered are presented in viewgraph form and include the following: the effect of the Mars environment on systems; the design of models and modeling tools; space environment interaction information dissemination; tools used in the development and validation of models; Space Exploration Initiative planning; and systems integration.
ERIC Educational Resources Information Center
Speiser, Bob; Walter, Chuck
2011-01-01
This paper explores how models can support productive thinking. For us a model is a "thing", a tool to help make sense of something. We restrict attention to specific models for whole-number multiplication, hence the wording of the title. They support evolving thinking in large measure through the ways their users redesign them. They assume new…
Technology Transfer Automated Retrieval System (TEKTRAN)
Agricultural and ecosystem simulation models valuable for technology transfer require a realistic, process-oriented plant model that can be easily applied to different crops, grasses, and woody species. The objective of this chapter was to describe a general plant model that can be easily applied i...
Micromechanical modelling of weldments using GTN model
R. Chhibber; P. Biswas; N. Arora; S. R. Gupta; B. K. Dutta
2011-01-01
Non-transferability of fracture data from the specimen level to the component level is a serious limitation of conventional fracture mechanics, as the fracture resistance data obtained are largely geometry dependent. The difficulty is largely overcome by the GTN model, which models the drop in load-carrying capacity of a material with the increase in plastic strain, considering nucleation, growth and coalescence of micro-voids in
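The GTN (Gurson-Tvergaard-Needleman) model referred to above rests on a standard yield potential that couples the void volume fraction f to the stress state; a sketch with the commonly quoted Tvergaard parameters as defaults:

```python
import math

def gtn_yield(sigma_eq, sigma_m, sigma_y, f, q1=1.5, q2=1.0, q3=2.25):
    """Gurson-Tvergaard-Needleman yield potential.

    sigma_eq: von Mises equivalent stress; sigma_m: mean (hydrostatic) stress;
    sigma_y: matrix yield stress; f: void volume fraction.
    Phi < 0 is elastic, Phi = 0 is the yield surface.
    """
    return ((sigma_eq / sigma_y) ** 2
            + 2.0 * q1 * f * math.cosh(1.5 * q2 * sigma_m / sigma_y)
            - 1.0 - q3 * f ** 2)
```

With f = 0 the potential reduces to von Mises yielding; growing porosity shrinks the yield surface, especially under hydrostatic tension, which is how the model captures the drop in load-carrying capacity.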
NSDL National Science Digital Library
Ocean-Modeling.org
The purpose of this web site is to facilitate the development and testing of the Terrain-following Ocean Modeling System (TOMS) and to provide a forum to the ocean community at large. The site provides an explanation of three-dimensional modeling, as well as an overview of the four primary types of ocean modeling methods currently in use and links to labs around the country using these modeling techniques. A collection of links to freely downloadable ocean modeling tools is provided. The site also includes links to data sources, publications, bulletin boards, chat rooms and other relevant sites.
NASA Astrophysics Data System (ADS)
Marion, Giles M.; Kargel, Jeffrey S.
Implementation of the Pitzer approach is through the FREZCHEM (FREEZING CHEMISTRY) model, which is at the core of this work. This model was originally designed to simulate salt chemistries and freezing processes at low temperatures (-54 to 25°C) and 1 atm pressure. Over the years, this model has been broadened to include more chemistries (from 16 to 58 solid phases), a broader temperature range for some chemistries (to 113°C), and incorporation of a pressure dependence (1 to 1000 bars) into the model. Implementation, parameterization, validation, and limitations of the FREZCHEM model are extensively discussed in Chapter 3.
Brown, T. W. [DESY, Hamburg, Theory Group, Notkestrasse, 85, D-22603 Hamburg (Germany)
2011-04-15
The same complex matrix model calculates both tachyon scattering for the c=1 noncritical string at the self-dual radius and certain correlation functions of operators which preserve half the supersymmetry in N=4 super-Yang-Mills theory. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich-Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces.
Pilot model hypothesis testing
NASA Technical Reports Server (NTRS)
Broussard, J. R.; Berry, P. W.
1982-01-01
The aircraft control time history predicted by the optimal control pilot model and actual pilot tracking data obtained from NASA Langley's differential maneuvering simulator (DMS) are analyzed. The analysis is performed using a hypothesis testing scheme modified to allow for changes in the true hypothesis. A finite number of pilot models, each with different hypothesized internal model representations of the aircraft dynamics, are constructed. The hypothesis testing scheme determines the relative probability that each pilot model best matches the DMS data. By observing the changes in probabilities, it is possible to determine when the pilot changes control strategy and which hypothesized pilot model best represents the pilot's control behavior.
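The core step of such a scheme, re-weighting the relative probability of each candidate pilot model as tracking samples arrive, can be sketched as a recursive Bayes update. The Gaussian residual likelihood and all numbers below are illustrative assumptions, not the paper's method:

```python
import math

def gaussian_likelihood(residual, sigma=1.0):
    """Likelihood of a tracking residual under one hypothesized pilot model."""
    return math.exp(-0.5 * (residual / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def update_probabilities(probs, residuals):
    """One Bayes step: re-weight each hypothesis by how well it predicted
    the latest tracking sample, then renormalize."""
    weighted = [p * gaussian_likelihood(r) for p, r in zip(probs, residuals)]
    total = sum(weighted)
    return [w / total for w in weighted]

# Three candidate internal models, initially equally probable; model 0
# consistently produces the smallest prediction residuals.
probs = [1 / 3, 1 / 3, 1 / 3]
for residuals in [(0.1, 0.8, 1.5), (0.2, 0.9, 1.2), (0.0, 1.1, 1.4)]:
    probs = update_probabilities(probs, residuals)
```

A change in control strategy shows up as the probability mass migrating from one hypothesis to another over time.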
Seo, Bommie F.; Lee, Jun Yong; Jung, Sung-No
2013-01-01
Keloids and hypertrophic scars are thick, raised dermal scars, caused by derailing of the normal scarring process. Extensive research on such abnormal scarring has been done; however, these being refractory disorders specific to humans, it has been difficult to establish a universal animal model. A wide variety of animal models have been used. These include the athymic mouse, rats, rabbits, and pigs. Although these models have provided valuable insight into abnormal scarring, there is currently still no ideal model. This paper reviews the models that have been developed. PMID:24078916
M. McGraw
2000-04-13
The UZ Colloid Transport model development plan states that the objective of this Analysis/Model Report (AMR) is to document the development of a model for simulating unsaturated colloid transport. This objective includes the following: (1) use of a process-level model to evaluate the potential mechanisms for colloid transport at Yucca Mountain; (2) provision of ranges of parameters for significant colloid transport processes to Performance Assessment (PA) for the unsaturated zone (UZ); (3) provision of a basis for development of an abstracted model for use in PA calculations.
Ritchie, L.T.; Alpert, D.J.; Burke, R.P.; Johnson, J.D.; Ostmeyer, R.M.; Aldrich, D.C.; Blond, R.M.
1984-03-01
The CRAC2 computer code is a revised version of CRAC (Calculation of Reactor Accident Consequences) which was developed for the Reactor Safety Study. This document provides an overview of the CRAC2 code and a description of each of the models used. Significant improvements incorporated into CRAC2 include an improved weather sequence sampling technique, a new evacuation model, and new output capabilities. In addition, refinements have been made to the atmospheric transport and deposition model. Details of the modeling differences between CRAC2 and CRAC are emphasized in the model descriptions.
NSDL National Science Digital Library
This lesson instructs students on how to read station models, the symbols used on weather maps to show data (temperature, wind speed and direction, barometric pressure, etc.) for a given reporting station. It includes a diagram of a station model, an explanation of the data conveyed by the numbers and symbols, and a table of definitions for the graphic symbols used with models. There is also a set of interactive station models students can use for practice at interpretation, and an interactive exercise in which students use real-time weather data to interpret models.
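Decoding the coded fields is mechanical. For example, the three-digit sea-level pressure on a station model is conventionally read as tenths of millibars with the leading "9" or "10" dropped; the commonly taught rule is to choose whichever prefix puts the value closer to 1000 mb. A sketch of that rule (assumed convention, hedged accordingly):

```python
def decode_pressure(coded):
    """Decode the 3-digit coded sea-level pressure from a station model.

    The code gives tenths of millibars with the leading '9' or '10'
    dropped; pick the candidate closer to 1000 mb (common teaching rule).
    """
    value = int(coded)
    candidates = (900 + value / 10.0, 1000 + value / 10.0)
    return min(candidates, key=lambda p: abs(p - 1000.0))

decode_pressure("014")  # reads as 1001.4 mb
decode_pressure("988")  # reads as 998.8 mb
```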
Reconstruction of Inflation Models
Ratbay Myrzakulov; Lorenzo Sebastiani; Sergio Zerbini
2015-02-16
In this paper, we reconstruct viable inflationary models by starting from the spectral index and tensor-to-scalar ratio from Planck observations. We analyze three different kinds of models: scalar field theories, fluid cosmology and f(R)-modified gravity. We recover the well-known R^2-inflation in the Jordan frame and Einstein frame representations, the massive scalar inflaton models and two models of inhomogeneous fluid. A model of R^2-correction to Einstein's gravity plus a "cosmological constant" with an exact solution for early-time acceleration is reconstructed.
Distributed fuzzy system modeling
Pedrycz, W.; Chi Fung Lam, P.; Rocha, A.F. [Univ. of Manitoba, Winnipeg, Manitoba (Canada)] [Univ. of Manitoba, Winnipeg, Manitoba (Canada)
1995-05-01
The paper introduces and studies the idea of distributed modeling, treating it as a new paradigm of fuzzy system modeling and analysis. This form of modeling is oriented towards developing individual (local) fuzzy models for specific modeling landmarks (expressed as fuzzy sets) and determining the essential logical relationships between these local models. The models themselves are implemented in the form of logic processors, regarded as specialized fuzzy neural networks. The interaction between the processors is developed in either an inhibitory or an excitatory way. In a more descriptive way, the distributed model can be viewed as a collection of fuzzy finite state machines with their individual local first- or higher-order memories. It is also clarified how the concept of distributed modeling narrows down the gap between purely numerical (quantitative) models and the qualitative ones originated within the realm of Artificial Intelligence. The overall architecture of distributed modeling is discussed along with the detailed learning schemes. The results of extensive simulation experiments are provided as well. 17 refs.
Collins, Lisa M.; Part, Chérie E.
2013-01-01
Simple Summary In this review paper we discuss the different modeling techniques that have been used in animal welfare research to date. We look at what questions they have been used to answer, the advantages and pitfalls of the methods, and how future research can best use these approaches to answer some of the most important upcoming questions in farm animal welfare. Abstract The use of models in the life sciences has greatly expanded in scope and advanced in technique in recent decades. However, the range, type and complexity of models used in farm animal welfare is comparatively poor, despite the great scope for use of modeling in this field of research. In this paper, we review the different modeling approaches used in farm animal welfare science to date, discussing the types of questions they have been used to answer, the merits and problems associated with the method, and possible future applications of each technique. We find that the most frequently published types of model used in farm animal welfare are conceptual and assessment models; two types of model that are frequently (though not exclusively) based on expert opinion. Simulation, optimization, scenario, and systems modeling approaches are rarer in animal welfare, despite being commonly used in other related fields. Finally, common issues such as a lack of quantitative data to parameterize models, and model selection and validation are discussed throughout the review, with possible solutions and alternative approaches suggested.
Benjamin Bahr; Frank Hellmann; Wojciech Kamiński; Marcin Kisielowski; Jerzy Lewandowski
2010-10-22
The goal of this paper is to introduce a systematic approach to spin foams. We define operator spin foams, that is, foams labelled by group representations and operators, as the main tool. An equivalence relation we impose on the set of operator spin foams allows one to split the faces and the edges of the foams. Consistency with that relation requires the introduction of the (familiar from BF theory) face amplitude. The operator spin foam models are defined quite generally. Imposing a maximal symmetry leads to a family we call natural operator spin foam models. This symmetry, combined with demanding consistency with splitting the edges, determines a complete characterization of a general natural model. It can be obtained by applying arbitrary (quantum) constraints on an arbitrary BF spin foam model. In particular, imposing suitable constraints on the Spin(4) BF spin foam model is exactly the way we tend to view 4d quantum gravity, starting with the BC model and continuing with the EPRL or FK models. That makes our framework directly applicable to those models. Specifically, our operator spin foam framework can be translated into the language of spin foams and partition functions. We discuss the examples: the BF spin foam model, the BC model, and the model obtained by application of our framework to the EPRL intertwiners.
Veksler, Vladislav D; Myers, Christopher W; Gluck, Kevin A
2015-10-01
A good fit of model predictions to empirical data is often used as an argument for model validity. However, if the model is flexible enough to fit a large proportion of potential empirical outcomes, finding a good fit becomes less meaningful. We propose a method for estimating the proportion of potential empirical outcomes that the model can fit: Model Flexibility Analysis (MFA). MFA aids model evaluation by providing a metric for gauging the persuasiveness of a given fit. We demonstrate that MFA can be more informative than merely discounting the fit by the number of free parameters in the model, and show how the number of free parameters does not necessarily correlate with the flexibility of the model. Additionally, we contrast MFA with other flexibility assessment techniques, including Parameter Space Partitioning, Model Mimicry, Minimum Description Length, and Prior Predictive Evaluation. Finally, we provide examples of how MFA can help to inform modeling results and discuss a variety of issues relating to the use of MFA in model validation. PMID:26322547
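The core idea, estimating what proportion of the potential outcome space a model can reach, can be sketched with a Monte Carlo sweep over parameters. The toy models, grid, and sample count below are illustrative, not the authors' procedure:

```python
import random

def model_flexibility(predict, sample_params, grid=10, lo=0.0, hi=10.0, n=20000):
    """Monte Carlo sketch of MFA: fraction of a discretized outcome space
    (two observations, each in [lo, hi)) that the model can reach."""
    width = (hi - lo) / grid
    reachable = set()
    for _ in range(n):
        y1, y2 = predict(sample_params())
        if lo <= y1 < hi and lo <= y2 < hi:
            reachable.add((int((y1 - lo) // width), int((y2 - lo) // width)))
    return len(reachable) / grid ** 2

random.seed(0)
# A constrained model: one parameter, predictions forced onto a line.
flex_line = model_flexibility(lambda a: (a, 2 * a), lambda: random.uniform(0, 5))
# A saturated model: one free parameter per observation, can fit anything.
flex_free = model_flexibility(lambda p: p,
                              lambda: (random.uniform(0, 10), random.uniform(0, 10)))
```

Note that both toy models here have few parameters, yet their flexibility differs by an order of magnitude, echoing the paper's point that parameter counts do not determine flexibility.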
Animal models of atherosclerosis
Kapourchali, Fatemeh Ramezani; Surendiran, Gangadaran; Chen, Li; Uitz, Elisabeth; Bahadori, Babak; Moghadasian, Mohammed H
2014-01-01
In this mini-review several commonly used animal models of atherosclerosis are discussed. Among them, emphasis is placed on mice, rabbits, pigs and non-human primates. Although these animal models have played a significant role in our understanding of the induction of atherosclerotic lesions, we still lack a reliable animal model for regression of the disease. Researchers have reported several genetically modified and transgenic animal models that replicate human atherosclerosis; however, each of the current animal models has some limitations. Among these animal models, the apolipoprotein (apo) E-knockout (KO) mice have been used extensively because they develop spontaneous atherosclerosis. Furthermore, atherosclerotic lesions developed in this model, depending on experimental design, may resemble humans' stable and unstable atherosclerotic lesions. This mouse model of hypercholesterolemia and atherosclerosis has also been used to investigate the impact of oxidative stress and inflammation on atherogenesis. Low density lipoprotein (LDL)-r-KO mice are a model of human familial hypercholesterolemia. However, unlike apo E-KO mice, the LDL-r-KO mice do not develop spontaneous atherosclerosis. Both apo E-KO and LDL-r-KO mice have been employed to generate other relevant mouse models of cardiovascular disease through breeding strategies. In addition to mice, rabbits have been used extensively, particularly to understand the mechanisms of cholesterol-induced atherosclerosis. The present review paper details the characteristics of animal models that are used in atherosclerosis research. PMID:24868511
Johnson, Douglas H.; Cook, R.D.
2013-01-01
In her AAAS News & Notes piece "Can the Southwest manage its thirst?" (26 July, p. 362), K. Wren quotes Ajay Kalra, who advocates a particular method for predicting Colorado River streamflow "because it eschews complex physical climate models for a statistical data-driven modeling approach." A preference for data-driven models may be appropriate in this individual situation, but it is not so generally. Data-driven models often come with a warning against extrapolating beyond the range of the data used to develop the models. When the future is like the past, data-driven models can work well for prediction, but it is easy to over-model local or transient phenomena, often leading to predictive inaccuracy (1). Mechanistic models are built on established knowledge of the process that connects the response variables with the predictors, using information obtained outside of an extant data set. One may shy away from a mechanistic approach when the underlying process is judged to be too complicated, but good predictive models can be constructed with statistical components that account for ingredients missing in the mechanistic analysis. Models with sound mechanistic components are more generally applicable and robust than data-driven models.
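The warning about extrapolation can be made concrete with a toy contrast: a purely data-driven predictor is trapped inside its training range, while a mechanistic law extends beyond it. The nearest-neighbour predictor and all numbers below are illustrative:

```python
def nearest_neighbor_predict(xs, ys, x):
    """A purely data-driven predictor: return the y of the closest training x."""
    return min(zip(xs, ys), key=lambda p: abs(p[0] - x))[1]

xs = [i * 0.5 for i in range(11)]   # training range: x in [0, 5]
ys = [2.0 * x for x in xs]          # true mechanism: y = 2x

inside = nearest_neighbor_predict(xs, ys, 3.1)    # interpolation: close to 6.2
outside = nearest_neighbor_predict(xs, ys, 10.0)  # extrapolation: stuck at 10.0
mechanistic = 2.0 * 10.0                          # the known law gives 20.0
```

Inside the data range the two approaches nearly agree; outside it, the data-driven prediction is off by a factor of two because nothing in the data constrains it there.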
T. Ghezzehej
2004-10-04
The purpose of this model report is to document the calibrated properties model that provides calibrated property sets for unsaturated zone (UZ) flow and transport process models (UZ models). The calibration of the property sets is performed through inverse modeling. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654], Sections 1.2.6 and 2.1.1.6). Direct inputs to this model report were derived from the following upstream analysis and model reports: ''Analysis of Hydrologic Properties Data'' (BSC 2004 [DIRS 170038]); ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004 [DIRS 169855]); ''Simulation of Net Infiltration for Present-Day and Potential Future Climates'' (BSC 2004 [DIRS 170007]); ''Geologic Framework Model'' (GFM2000) (BSC 2004 [DIRS 170029]). Additionally, this model report incorporates errata of the previous version and closure of the Key Technical Issue agreement TSPAI 3.26 (Section 6.2.2 and Appendix B), and it is revised for improved transparency.
Tobin, R.L.
1982-01-01
Many different types of transportation models are used to model coal transportation by rail. To obtain realistic results, it is usually necessary to consider other modes in addition to rail and other commodities in addition to coal. For example, to know the potential bottlenecks on the rail system it is necessary to predict the total level of freight movement on the rail system. This requires modeling the movements of other commodities in addition to coal. To predict the levels of flows of both coal and non-coal commodities on the rail system, it is necessary to predict the share of total flows carried by rail. This requires accurate modeling of competing modes. To develop accurate rate models it is also necessary to have information on competing modes. This paper presents a collection of transportation models used to model the various aspects of coal transportation by rail and shows how they interact.
Physical modelling in biomechanics.
Koehl, M A R
2003-01-01
Physical models, like mathematical models, are useful tools in biomechanical research. Physical models enable investigators to explore parameter space in a way that is not possible using a comparative approach with living organisms: parameters can be varied one at a time to measure the performance consequences of each, while values and combinations not found in nature can be tested. Experiments using physical models in the laboratory or field can circumvent problems posed by uncooperative or endangered organisms. Physical models also permit some aspects of the biomechanical performance of extinct organisms to be measured. Use of properly scaled physical models allows detailed physical measurements to be made for organisms that are too small or fast to be easily studied directly. The process of physical modelling and the advantages and limitations of this approach are illustrated using examples from our research on hydrodynamic forces on sessile organisms, mechanics of hydraulic skeletons, food capture by zooplankton and odour interception by olfactory antennules. PMID:14561350
Moffat, Harry K.; Noble, David R.; Baer, Thomas A. (Procter & Gamble Co., West Chester, OH); Adolf, Douglas Brian; Rao, Rekha Ranjana; Mondy, Lisa Ann
2008-09-01
In this report, we summarize our work on developing a production level foam processing computational model suitable for predicting the self-expansion of foam in complex geometries. The model is based on a finite element representation of the equations of motion, with the movement of the free surface represented using the level set method, and has been implemented in SIERRA/ARIA. An empirically based time- and temperature-dependent density model is used to encapsulate the complex physics of foam nucleation and growth in a numerically tractable model. The change in density with time is at the heart of the foam self-expansion as it creates the motion of the foam. This continuum-level model uses an homogenized description of foam, which does not include the gas explicitly. Results from the model are compared to temperature-instrumented flow visualization experiments giving the location of the foam front as a function of time for our EFAR model system.
Toward Scientific Numerical Modeling
NASA Technical Reports Server (NTRS)
Kleb, Bil
2007-01-01
Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.
Modeling agriculture in the Community Land Model
NASA Astrophysics Data System (ADS)
Drewniak, B.; Song, J.; Prell, J.; Kotamarthi, V. R.; Jacob, R.
2013-04-01
The potential impact of climate change on agriculture is uncertain. In addition, agriculture could influence above- and below-ground carbon storage. Development of models that represent agriculture is necessary to address these impacts. We have developed an approach to integrate agriculture representations for three crop types - maize, soybean, and spring wheat - into the coupled carbon-nitrogen version of the Community Land Model (CLM), to help address these questions. Here we present the new model, CLM-Crop, validated against observations from two AmeriFlux sites in the United States, planted with maize and soybean. Seasonal carbon fluxes compared well with field measurements for soybean, but not as well for maize. CLM-Crop yields were comparable with observations in countries such as the United States, Argentina, and China, although the generality of the crop model and its lack of technology and irrigation made direct comparison difficult. CLM-Crop was compared against the standard CLM3.5, which simulates crops as grass. The comparison showed improvement in gross primary productivity in regions where crops are the dominant vegetation cover. Crop yields and productivity were negatively correlated with temperature and positively correlated with precipitation, in agreement with other modeling studies. In case studies with the new crop model looking at impacts of residue management and planting date on crop yield, we found that increased residue returned to the litter pool increased crop yield, while reduced residue returns resulted in yield decreases. Using climate controls to signal planting date caused different responses in different crops. Maize and soybean had opposite reactions: when a low temperature threshold resulted in early planting, maize responded with a loss of yield, but soybean yields increased.
Our improvements in CLM demonstrate a new capability in the model - simulating agriculture in a realistic way, complete with fertilizer and residue management practices. Results are encouraging, with improved representation of human influences on the land surface and the potentially resulting climate impacts.
V. Chipman; J. Case
2002-12-20
The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions as outputted from the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. Revision 01 ICN 01 included the results of the unqualified software code MULTIFLUX to assess the influence of moisture on the ventilation efficiency. The purposes of Revision 02 of the Ventilation Model are: (1) To validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII-10Q, Models (Section 7.0). (2) To satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a). 
Specifically to demonstrate, with respect to the ANSYS ventilation model, the adequacy of the discretization (Section 6.2.3.1), and the downstream applicability of the model results (i.e. wall heat fractions) to initialize post-closure thermal models (Section 6.6). (3) To satisfy the remainder of KTI agreement TEF 2.07 (Reamer and Williams 2001b). Specifically to provide the results of post-test ANSYS modeling of the Atlas Facility forced convection tests (Section 7.1.2). This portion of the model report also serves as a validation exercise per AP-SIII.10Q, Models, for the ANSYS ventilation model. (4) To assess the impacts of moisture on the ventilation efficiency.
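The heat-removal bookkeeping described above reduces to a simple ratio; a minimal sketch with illustrative numbers (not values from this report):

```python
# Minimal sketch of the wall-heat-fraction bookkeeping described above.
# All numbers are illustrative, not data from the report.

def ventilation_efficiency(q_removed_w: float, q_decay_w: float) -> float:
    """Fraction of decay heat carried away by the ventilation air."""
    if q_decay_w <= 0.0:
        raise ValueError("decay heat must be positive")
    return q_removed_w / q_decay_w

def wall_heat_fraction(q_removed_w: float, q_decay_w: float) -> float:
    """One minus the ventilation efficiency: heat conducted into the rock."""
    return 1.0 - ventilation_efficiency(q_removed_w, q_decay_w)

# Heat picked up by the air stream from its enthalpy rise:
#   q_removed = m_dot * c_p * (T_out - T_in)
m_dot = 15.0               # air mass flow, kg/s (illustrative)
c_p = 1005.0               # specific heat of air, J/(kg K)
t_in, t_out = 25.0, 35.0   # inlet/outlet temperatures, deg C
q_removed = m_dot * c_p * (t_out - t_in)   # 150.75 kW

q_decay = 2.0e5            # decay heat generated in the drift segment, W
print(round(wall_heat_fraction(q_removed, q_decay), 3))
```

Both quantities are time- and position-dependent in the actual model; the sketch only shows the definition at one instant and location.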
Penny, Will
Bayesian Model Comparison. Will Penny, June 2nd 2011. Slide topics: Bayes rule for models; Bayes factors; nonlinear models; variational Laplace; free energy; complexity decompositions; AIC and BIC.
Generalized Multilevel Structural Equation Modeling
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew
2004-01-01
A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…
Inhomogeneous Universe Models: the Szekeres Model
NASA Astrophysics Data System (ADS)
Buckley, Robert; Schlegel, E. M.
2012-01-01
Observations of the luminosity distance and redshift of type Ia supernovae lead to the conclusion that the expansion of the universe is accelerating, but only when the observations are interpreted under the assumptions of homogeneity and isotropy on large scales. It has been shown that relaxing these assumptions allows for alternative universe models which match the observed distance-redshift relation without any mysterious ``dark energy''. The simplest such models, the spherically symmetric Lemaître-Tolman-Bondi (LTB) class of models, have been much studied, and have been found by many to be problematic. The Szekeres class of models is a generalization of LTB which possesses no symmetries in general. It is able to better represent the lumpiness of the universe's matter distribution, and so it should serve as a more complete and accurate framework in which to interpret our observations. We present an overview of the Szekeres model as it applies to cosmology. We show how its shape gives it advantages over LTB, such as a potential for faster structure formation, better compliance with the Copernican principle, and possibly even explaining the anomalous alignment of several cosmological observations. We briefly mention our progress on studying the CMB dipole seen by observers within a Szekeres universe. This research was supported by the Texas Space Grant Consortium.
Hammerand, Daniel Carl; Scherzinger, William Mark
2007-09-01
The Library of Advanced Materials for Engineering (LAME) provides a common repository for constitutive models that can be used in computational solid mechanics codes. A number of models including both hypoelastic (rate) and hyperelastic (total strain) constitutive forms have been implemented in LAME. The structure and testing of LAME are described in Scherzinger and Hammerand ([3] and [4]). The purpose of the present report is to describe the material models which have already been implemented into LAME. The descriptions are designed to give useful information to both analysts and code developers. Thus far, 33 non-ITAR/non-CRADA protected material models have been incorporated. These include everything from the simple isotropic linear elastic models to a number of elastic-plastic models for metals to models for honeycomb, foams, potting epoxies and rubber. A complete description of each model is outside the scope of the current report. Rather, the aim here is to delineate the properties, state variables, functions, and methods for each model. However, a brief description of some of the constitutive details is provided for a number of the material models. Where appropriate, the SAND reports available for each model have been cited. Many models have state variable aliases for some or all of their state variables. These alias names can be used for outputting desired quantities. The state variable aliases available for results output have been listed in this report. However, not all models use these aliases. For those models, no state variable names are listed. Nevertheless, the number of state variables employed by each model is always given. Currently, there are four possible functions for a material model. This report lists which of these four methods are employed in each material model. As far as analysts are concerned, this information is included only for awareness purposes. 
The analyst can take confidence in the fact that the model has been properly implemented and the methods necessary for achieving accurate and efficient solutions have been incorporated. The most important method is the getStress function where the actual material model evaluation takes place. Obviously, all material models incorporate this function. The initialize function is included in most material models. The initialize function is called once at the beginning of an analysis and its primary purpose is to initialize the material state variables associated with the model. Many times, there is some information which can be set once per load step. For instance, we may have temperature dependent material properties in an analysis where temperature is prescribed. Instead of setting those parameters at each iteration in a time step, it is much more efficient to set them once per time step at the beginning of the step. These types of load step initializations are performed in the loadStepInit method. The final function used by many models is the pcElasticModuli method which changes the moduli that are to be used by the elastic preconditioner in Adagio. The moduli for the elastic preconditioner are set during the initialization of Adagio. Sometimes, better convergence can be achieved by changing these moduli for the elastic preconditioner. For instance, it typically helps to modify the preconditioner when the material model has temperature dependent moduli. For many material models, it is not necessary to change the values of the moduli that are set initially in the code. Hence, those models do not have pcElasticModuli functions. All four of these methods receive information from the matParams structure as described by Scherzinger and Hammerand.
Introduction to Climate Models
NSDL National Science Digital Library
2014-09-14
This module explains how climate models work. Because the modeling of weather and climate shares many similarities, the content throughout this module draws frequent comparisons and highlights the differences. We explain not only how, but why, climate models differ from weather models. To do so, we explore the difference between weather and climate, then show how models are built to simulate climate and generate the statistics that describe it. We conclude with a discussion of how models are tuned and tested. Understanding how climate responds to changes in atmospheric composition and other factors drives climate research. Climate models provide a tool to understand how processes work and interact with each other. Our intended audience is the weather forecasting community: those who are already familiar with NWP models. Non-forecasters with an interest in weather and climate should also find the module useful. The content is not overly technical; the goal of this module is not to train people to develop climate models but to highlight the similarities and differences between weather and climate models.
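As the simplest illustration of the balance a climate model solves in every grid cell, a zero-dimensional energy-balance model can be written directly (textbook constants; not part of the module itself):

```python
# A zero-dimensional energy-balance model: the simplest possible "climate
# model", included only to illustrate the absorbed-vs-emitted balance that
# full models solve grid cell by grid cell. Parameter values are standard
# textbook numbers; the emissivity is an effective value tuned to mimic
# the greenhouse effect.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(solar_const=1361.0, albedo=0.30, emissivity=0.612):
    """Global-mean surface temperature from the balance
       S/4 * (1 - albedo) = emissivity * sigma * T**4
    """
    absorbed = solar_const / 4.0 * (1.0 - albedo)
    return (absorbed / (emissivity * SIGMA)) ** 0.25

print(round(equilibrium_temperature(), 1))   # ~288 K, near the observed mean
```

Raising the albedo (more reflected sunlight) lowers the equilibrium temperature, which is the kind of sensitivity experiment the module describes at full-model scale.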
Geochemical modeling: a review
Jenne, E.A.
1981-06-01
Two general families of geochemical models presently exist. The ion speciation-solubility group of geochemical models contains submodels that first calculate a distribution of aqueous species and then test the hypothesis that the water is near equilibrium with particular solid phases. These models may or may not calculate the adsorption of dissolved constituents and simulate the dissolution and precipitation (mass transfer) of solid phases. Another family of geochemical models, the reaction path models, simulates the stepwise precipitation of solid phases as a result of reacting specified amounts of water and rock. Reaction path models first perform an aqueous speciation of the dissolved constituents of the water, test solubility hypotheses, then perform the reaction path modeling. Certain improvements in the present versions of these models would enhance their value and usefulness to applications in nuclear-waste isolation, etc. Mass-transfer calculations of limited extent are certainly within the capabilities of state-of-the-art models. However, the reaction path models require an expansion of their thermodynamic data bases and systematic validation before they are generally accepted.
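The solubility test at the heart of the speciation-solubility family reduces to comparing an ion activity product against a solubility product. A hedged sketch with illustrative activities:

```python
import math

# Hedged sketch of the core solubility test in an ion-speciation model:
# compare the ion activity product (IAP) of a candidate mineral with its
# solubility product (Ksp). SI > 0 suggests supersaturation, SI < 0
# undersaturation. The activities below are illustrative round numbers.

def saturation_index(activities: dict, stoichiometry: dict, log_ksp: float) -> float:
    """SI = log10(IAP) - log10(Ksp) for one candidate solid phase."""
    log_iap = sum(coeff * math.log10(activities[ion])
                  for ion, coeff in stoichiometry.items())
    return log_iap - log_ksp

# Gypsum, CaSO4.2H2O: IAP = a(Ca+2) * a(SO4-2), water activity taken as 1.
acts = {"Ca+2": 1.0e-3, "SO4-2": 1.0e-3}
stoich = {"Ca+2": 1, "SO4-2": 1}
si = saturation_index(acts, stoich, log_ksp=-4.58)   # log Ksp of gypsum ~ -4.58
print(round(si, 2))   # negative: this water is undersaturated with gypsum
```

A full speciation code iterates this test over hundreds of phases after solving the speciation problem; the sketch shows only the final comparison.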
NASA Technical Reports Server (NTRS)
Schoeberl, Mark; Rood, Richard B.; Hildebrand, Peter; Raymond, Carol
2003-01-01
The Earth System Model is the natural evolution of current climate models and will be the ultimate embodiment of our geophysical understanding of the planet. These models are constructed from component models - atmosphere, ocean, ice, land, chemistry, solid earth, etc. - merged together through a coupling program which is responsible for the exchange of data among the components. Climate models and future earth system models will have standardized modules, and these standards are now being developed by the ESMF project funded by NASA. The Earth System Model will have a variety of uses beyond climate prediction. The model can be used to build climate data records, making it the core of an assimilation system, and it can be used in observing system simulation experiments (OSSEs) to evaluate proposed observing systems. The computing and storage requirements for the ESM appear to be daunting. However, the Japanese Earth Simulator's theoretical computing capability is already within 20% of the minimum requirements needed for some 2010 climate model applications. Thus it seems very possible that a focused effort to build an Earth System Model will achieve success.
Chen, Changyou; Buntine, Wray; Ding, Nan; Xie, Lexing; Du, Lan
2015-02-01
In applications we may want to compare different document collections: they could have shared content but also aspects unique to particular collections. This task has been called comparative text mining or cross-collection modeling. We present a differential topic model for this application that models both topic differences and similarities. For this we use hierarchical Bayesian nonparametric models. Moreover, we found it was important to properly model power-law phenomena in topic-word distributions, and thus we used the full Pitman-Yor process rather than just a Dirichlet process. Furthermore, we propose the transformed Pitman-Yor process (TPYP) to incorporate prior knowledge, such as vocabulary variations in different collections, into the model. To deal with the non-conjugacy between the model prior and likelihood in the TPYP, we propose an efficient sampling algorithm using a data augmentation technique based on the multinomial theorem. Experimental results show the model discovers interesting aspects of different collections. We also show the proposed MCMC-based algorithm achieves a dramatically reduced test perplexity compared to some existing topic models. Finally, we show our model outperforms the state-of-the-art for document classification/ideology prediction on a number of text collections. PMID:26353238
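The power-law motivation for preferring the Pitman-Yor process can be seen in a few lines with a Chinese-restaurant-style simulation (an illustration of the process itself, not the authors' sampler):

```python
import random

# Illustrative sketch (not the paper's inference code): a Chinese-restaurant
# simulation of the Pitman-Yor process. With discount d > 0 the number of
# clusters grows like n**d (a power law); with d = 0 it reduces to the
# Dirichlet process and grows only logarithmically. This is the power-law
# behaviour in topic-word distributions that the abstract refers to.

def pitman_yor_crp(n: int, discount: float, concentration: float, seed: int = 0):
    """Return cluster sizes after seating n customers."""
    rng = random.Random(seed)
    sizes = []
    for i in range(n):
        # Probability of opening a new cluster:
        p_new = (concentration + discount * len(sizes)) / (concentration + i)
        if i == 0 or rng.random() < p_new:
            sizes.append(1)
        else:
            # Join cluster k with probability (size_k - discount) / (conc + i)
            r = rng.random() * (i - discount * len(sizes))
            acc = 0.0
            for k, s in enumerate(sizes):
                acc += s - discount
                if r < acc:
                    sizes[k] += 1
                    break
    return sizes

dp_clusters = len(pitman_yor_crp(5000, discount=0.0, concentration=1.0))
pyp_clusters = len(pitman_yor_crp(5000, discount=0.5, concentration=1.0))
print(dp_clusters, pyp_clusters)   # the PYP opens far more clusters
```

The heavier tail of cluster (word-type) counts under a positive discount is what lets the PYP match Zipf-like vocabulary statistics that a Dirichlet process underfits.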
Risk modelling: which models to choose?
Csicsaky, M J; Roller, M; Pott, F
1989-01-01
Using as examples excess lung cancer mortality in coke oven workers and lung tumor induction in rats by inhalation of diesel engine emissions or cadmium chloride aerosol, the maximum likelihood estimate and the upper limit of risk were determined using a set of conventional risk models. The additional safety offered by going to the upper limit of the 95% confidence interval when deriving a unit risk value was found to be less than a factor of 5 in all but one case, and usually much less than 2. It is concluded that the selection of an adequate model is the most critical step in risk assessment, and that an additional safety factor may be required to allow for a better protection of the public in case models other than the most conservative ones come into use. PMID:2637154
Grain Scale Modeling - Impact of Constitutive Models
NASA Astrophysics Data System (ADS)
Yarrington, Cole; Thompson, Aidan P.; Shan, Tzu-Ray; Wixom, Ryan
2015-06-01
There are many model considerations that are unique to the grain-scale continuum approach. Most of these considerations revolve around the treatment of continuum model parameters, now applied to the fully dense matrix of material with dispersed discrete heterogeneous features. An example of this is how the equation of state (EOS) for a grain scale material must be the fully dense EOS, as opposed to the bulk EOS measured at lower densities. This poses unique validation and parameterization challenges, as most experimental data is gathered for bulk materials. We show how different theoretical tools for smaller length scales (MD, DFT-MD) can be used to calibrate the necessary models to achieve accurate simulation results.
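One common choice for a fully dense reference curve is a shock-Hugoniot-based form; the sketch below is a generic illustration with invented parameters, not necessarily the EOS used in this work:

```python
# Hedged sketch of a fully-dense reference EOS of the kind discussed above:
# shock-Hugoniot pressure from the linear Us-up relation Us = c0 + s*up,
# which gives P_H = rho0 * c0**2 * eta / (1 - s*eta)**2 with eta = 1 - rho0/rho.
# The parameters below are illustrative, not a calibrated material.

def hugoniot_pressure(rho: float, rho0: float, c0: float, s: float) -> float:
    """Shock pressure (Pa) at compressed density rho (kg/m^3)."""
    eta = 1.0 - rho0 / rho          # volumetric compression
    return rho0 * c0**2 * eta / (1.0 - s * eta) ** 2

rho0 = 1900.0    # fully dense initial density, kg/m^3 (illustrative)
c0 = 2500.0      # bulk sound speed, m/s
s = 1.8          # Hugoniot slope
p = hugoniot_pressure(1.1 * rho0, rho0, c0, s)
print(f"{p/1e9:.2f} GPa")
```

The calibration point made in the abstract is that c0 and s here must describe the fully dense matrix; bulk (porous) shock data would give systematically different parameters.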
Modeling agriculture in the Community Land Model
NASA Astrophysics Data System (ADS)
Drewniak, B.; Song, J.; Prell, J.; Kotamarthi, V. R.; Jacob, R.
2012-12-01
The potential impact of climate change on agriculture is uncertain. In addition, agriculture could influence above- and below-ground carbon storage. Development of models that represent agriculture is necessary to address these impacts. We have developed an approach to integrate agriculture representations for three crop types - maize, soybean, and spring wheat - into the coupled carbon-nitrogen version of the Community Land Model (CLM), to help address these questions. Here we present the new model, CLM-Crop, validated against observations from two AmeriFlux sites in the United States, planted with maize and soybean. Seasonal carbon fluxes compared well with field measurements. CLM-Crop yields were comparable with observations in some regions, although the generality of the crop model and its lack of technology and irrigation made direct comparison difficult. CLM-Crop was compared against the standard CLM3.5, which simulates crops as grass. The comparison showed improvement in gross primary productivity in regions where crops are the dominant vegetation cover. Crop yields and productivity were negatively correlated with temperature and positively correlated with precipitation. In case studies with the new crop model looking at impacts of residue management and planting date on crop yield, we found that increased residue returned to the litter pool increased crop yield, while reduced residue returns resulted in yield decreases. Using climate controls to signal planting date caused different responses in different crops. Maize and soybean had opposite reactions: when a low temperature threshold resulted in early planting, maize responded with a loss of yield, but soybean yields increased. Our improvements in CLM demonstrate a new capability in the model - simulating agriculture in a realistic way, complete with fertilizer and residue management practices. 
Results are encouraging, with improved representation of human influences on the land surface and the potentially resulting climate impacts.
Radiation Environment Modeling for Spacecraft Design: New Model Developments
NASA Technical Reports Server (NTRS)
Barth, Janet; Xapsos, Mike; Lauenstein, Jean-Marie; Ladbury, Ray
2006-01-01
A viewgraph presentation on various new space radiation environment models for spacecraft design is described. The topics include: 1) The Space Radiation Environment; 2) Effects of Space Environments on Systems; 3) Space Radiation Environment Model Use During Space Mission Development and Operations; 4) Space Radiation Hazards for Humans; 5) "Standard" Space Radiation Environment Models; 6) Concerns about Standard Models; 7) Inadequacies of Current Models; 8) Development of New Models; 9) New Model Developments: Proton Belt Models; 10) Coverage of New Proton Models; 11) Comparison of TPM-1, PSB97, AP-8; 12) New Model Developments: Electron Belt Models; 13) Coverage of New Electron Models; 14) Comparison of "Worst Case" POLE, CRESELE, and FLUMIC Models with the AE-8 Model; 15) New Model Developments: Galactic Cosmic Ray Model; 16) Comparison of NASA, MSU, CIT Models with ACE Instrument Data; 17) New Model Developments: Solar Proton Model; 18) Comparison of ESP, JPL91, King/Stassinopoulos, and PSYCHIC Models; 19) New Model Developments: Solar Heavy Ion Model; 20) Comparison of CREME96 to CREDO Measurements During 2000 and 2002; 21) PSYCHIC Heavy Ion Model; 22) Model Standardization; 23) Working Group Meeting on New Standard Radiation Belt and Space Plasma Models; and 24) Summary.
Teaching macromolecular modeling.
Harvey, S C; Tan, R K
1992-01-01
Training newcomers to the field of macromolecular modeling is as difficult as is training beginners in x-ray crystallography, nuclear magnetic resonance, or other methods in structural biology. In one or two lectures, the most that can be conveyed is a general sense of the relationship between modeling and other structural methods. If a full semester is available, then students can be taught how molecular structures are built, manipulated, refined, and analyzed on a computer. Here we describe a one-semester modeling course that combines lectures, discussions, and a laboratory using a commercial modeling package. In the laboratory, students carry out prescribed exercises that are coordinated to the lectures, and they complete a term project on a modeling problem of their choice. The goal is to give students an understanding of what kinds of problems can be attacked by molecular modeling methods and which problems are beyond the current capabilities of those methods. PMID:1489919
Modeling a multiprocessor architecture
Menand, J.R.; Becker, M.
1981-01-01
Instead of using a very expensive and powerful central processing unit, the hypercube F8 uses independent parallel microprocessors that are slow and inexpensive. This architecture is modeled in order to determine the proper number of microprocessors and to validate the system. The model is hierarchical, and different levels of the model are considered; each level corresponds to a subsystem of the hypercube F8. Most of the common analytical methods are used and direct calculations are made. For each level several methods are compared and the approximations validated. The level I direct method is a general solution of polling models. The global model gives the throughput of the system, the utilizations of the processors and the service times. The authors discuss the values of the parameters of the system. 16 references.
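A flavor of the closed-form queueing calculations such hierarchical models rest on (a generic M/M/1 treatment with invented rates, not the paper's actual polling solution):

```python
# Hedged sketch of a back-of-envelope queueing calculation of the kind used
# in hierarchical performance models: N slow parallel processors fed by a
# shared request stream, each treated as an independent M/M/1 queue.
# All rates below are invented for illustration.

def mm1_metrics(arrival_rate: float, service_rate: float):
    """Utilization and mean response time of one M/M/1 server."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        raise ValueError("queue is unstable (rho >= 1)")
    mean_response = 1.0 / (service_rate - arrival_rate)
    return rho, mean_response

total_arrivals = 80.0   # requests/s offered to the whole system (illustrative)
n_processors = 10
mu = 10.0               # requests/s one cheap processor can serve

rho, resp = mm1_metrics(total_arrivals / n_processors, mu)
throughput = total_arrivals   # all offered work is served while stable
print(rho, round(resp, 3))
```

Varying `n_processors` and re-checking utilization and response time is the "proper number of microprocessors" question in miniature; the actual paper solves the more realistic polling-model version.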
Probabilistic Mesomechanical Fatigue Model
NASA Technical Reports Server (NTRS)
Tryon, Robert G.
1997-01-01
A probabilistic mesomechanical fatigue life model is proposed to link the microstructural material heterogeneities to the statistical scatter in the macrostructural response. The macrostructure is modeled as an ensemble of microelements. Cracks nucleate within the microelements and grow from the microelements to final fracture. Variations of the microelement properties are defined using statistical parameters. A micromechanical slip band decohesion model is used to determine the crack nucleation life and size. A crack tip opening displacement model is used to determine the small crack growth life and size. The Paris law is used to determine the long crack growth life. The models are combined in a Monte Carlo simulation to determine the statistical distribution of total fatigue life for the macrostructure. The modeled response is compared to trends in experimental observations from the literature.
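The long-crack stage of such a simulation can be sketched with the Paris law alone (the full model also includes nucleation and small-crack stages; all parameter values below are invented):

```python
import math, random

# Illustrative Monte Carlo over the long-crack-growth stage only: Paris-law
# life from a scattered initial crack size to a fixed final size, using
# delta_K = Y * d_sigma * sqrt(pi * a). Units: a in m, d_sigma in MPa,
# C in m/cycle with delta_K in MPa*sqrt(m). All values are invented.

def paris_life(a_i, a_f, d_sigma, C, m, Y=1.0):
    """Cycles to grow from a_i to a_f under da/dN = C * delta_K**m."""
    k = C * (Y * d_sigma * math.sqrt(math.pi)) ** m
    p = 1.0 - m / 2.0
    return (a_f ** p - a_i ** p) / (k * p)

def simulate_lives(n, seed=1):
    rng = random.Random(seed)
    lives = []
    for _ in range(n):
        a_i = rng.lognormvariate(math.log(50e-6), 0.3)  # scattered initial size, m
        lives.append(paris_life(a_i, a_f=5e-3, d_sigma=100.0, C=1e-12, m=3.0))
    return lives

lives = simulate_lives(2000)
mean_life = sum(lives) / len(lives)
print(f"mean long-crack life ~ {mean_life:.3g} cycles")
```

Scatter in `a_i` alone already produces a life distribution; the paper's model draws nucleation life, small-crack life, and microelement properties from distributions as well before summing the stages.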
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
The bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the cloud model's strengths in representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept “bats approach their prey.” Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
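For reference, the standard bat algorithm that CBA modifies can be sketched minimally (Yang's formulation on a toy sphere function; the cloud-model resampling and Lévy flights of CBA are not reproduced here):

```python
import random

# Minimal sketch of the standard bat algorithm (not the cloud-model variant
# proposed in the paper), minimizing the sphere function. Loudness and pulse
# rate are held fixed for brevity; the full algorithm adapts them over time.

def bat_algorithm(obj, dim=2, n_bats=20, iters=200, seed=42):
    rng = random.Random(seed)
    f_min, f_max = 0.0, 2.0
    loudness, pulse_rate = 0.5, 0.5
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
    vs = [[0.0] * dim for _ in range(n_bats)]
    best = min(xs, key=obj)[:]
    for _ in range(iters):
        for i in range(n_bats):
            f = f_min + (f_max - f_min) * rng.random()   # echolocation frequency
            cand = xs[i][:]
            for d in range(dim):
                vs[i][d] += (xs[i][d] - best[d]) * f     # velocity toward best
                cand[d] = xs[i][d] + vs[i][d]
            if rng.random() > pulse_rate:                # local walk around best
                cand = [best[d] + 0.01 * rng.gauss(0, 1) for d in range(dim)]
            if obj(cand) < obj(xs[i]) and rng.random() < loudness:
                xs[i] = cand
            if obj(cand) < obj(best):
                best = cand[:]
    return best

sphere = lambda x: sum(v * v for v in x)
best = bat_algorithm(sphere)
print(sphere(best))
```

The paper's contribution is to replace the frequency/velocity update with sampling driven by a cloud model of the concept "bats approach their prey", which this sketch deliberately does not attempt.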
NASA Technical Reports Server (NTRS)
1985-01-01
The outside users payload model, which is a continuation of earlier documents and replaces and supersedes the July 1984 edition, is presented. The time period covered by this model is 1985 through 2000. The following sections are included: (1) definition of the scope of the model; (2) discussion of the methodology used; (3) overview of total demand; (4) summary of the estimated market segmentation by launch vehicle; (5) summary of the estimated market segmentation by user type; (6) details of the STS market forecast; (7) summary of transponder trends; (8) model overview by mission category; and (9) detailed mission models. All known non-NASA, non-DOD reimbursable payloads forecast to be flown by non-Soviet-block countries are included in this model, with the exception of Spacelab payloads and small self-contained payloads. Certain DOD-sponsored or cosponsored payloads are included if they are reimbursable launches.
2013-01-01
Animal models of disease states are valuable tools for developing new treatments and investigating underlying mechanisms. They should mimic the symptoms and pathology of the disease and importantly be predictive of effective treatments. Fibromyalgia is characterized by chronic widespread pain with associated co-morbid symptoms that include fatigue, depression, anxiety and sleep dysfunction. In this review, we present different animal models that mimic the signs and symptoms of fibromyalgia. These models are induced by a wide variety of methods that include repeated muscle insults, depletion of biogenic amines, and stress. All potential models produce widespread and long-lasting hyperalgesia without overt peripheral tissue damage and thus mimic the clinical presentation of fibromyalgia. We describe the methods for induction of the model, pathophysiological mechanisms for each model, and treatment profiles. PMID:24314231
Macklin, Paul; Cristini, Vittorio
2013-01-01
Simulating cancer behavior across multiple biological scales in space and time, i.e., multiscale cancer modeling, is increasingly being recognized as a powerful tool to refine hypotheses, focus experiments, and enable more accurate predictions. A growing number of examples illustrate the value of this approach in providing quantitative insight on the initiation, progression, and treatment of cancer. In this review, we introduce the most recent and important multiscale cancer modeling works that have successfully established a mechanistic link between different biological scales. Biophysical, biochemical, and biomechanical factors are considered in these models. We also discuss innovative, cutting-edge modeling methods that are moving predictive multiscale cancer modeling toward clinical application. Furthermore, because the development of multiscale cancer models requires a new level of collaboration among scientists from a variety of fields such as biology, medicine, physics, mathematics, engineering, and computer science, an innovative Web-based infrastructure is needed to support this growing community. PMID:21529163
NSDL National Science Digital Library
Students create a physical model illustrating soil water balance using drinking glasses to represent the soil column, and explain how the model can be used to interpret data and form predictions. Using data from the GLOBE Data Server, they calculate the potential evapotranspiration, average monthly temperatures and precipitation for their model. This is a learning activity associated with the GLOBE hydrology investigations and is supported by the Hydrology chapter of the GLOBE Teacher's Guide.
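The PET step in this activity can be computed with the Thornthwaite method, which needs only monthly mean temperatures. The sketch below assumes a standard 30-day month with 12 hours of daylight (the published tables apply a latitude correction), and the temperatures are invented for illustration:

```python
# Thornthwaite (1948) potential evapotranspiration from monthly mean
# temperatures, as a hedged sketch of the PET calculation mentioned above.
# Assumes 30-day months with 12 h of daylight (no latitude correction).

def thornthwaite_pet(monthly_temps_c):
    """Monthly potential evapotranspiration (mm) for 12 mean temperatures."""
    heat_index = sum((t / 5.0) ** 1.514 for t in monthly_temps_c if t > 0)
    a = (6.75e-7 * heat_index**3 - 7.71e-5 * heat_index**2
         + 1.792e-2 * heat_index + 0.49239)
    return [16.0 * (10.0 * max(t, 0.0) / heat_index) ** a
            for t in monthly_temps_c]

# Invented mid-latitude monthly means, deg C, January through December:
temps = [-2, 0, 4, 10, 16, 21, 24, 23, 18, 11, 5, 0]
pet = thornthwaite_pet(temps)
print([round(p, 1) for p in pet])
```

Subtracting monthly PET from precipitation, month by month, gives the bookkeeping that the drinking-glass soil column makes tangible.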
HOMER® Micropower Optimization Model
Lilienthal, P.
2005-01-01
NREL has developed the HOMER micropower optimization model. The model can analyze all of the available small power technologies individually and in hybrid configurations to identify least-cost solutions to energy requirements. This capability is valuable to a diverse set of energy professionals and applications. NREL has actively supported its growing user base and developed training programs around the model. These activities are helping to grow the global market for solar technologies.
Multistage models for carcinogenesis.
Freedman, D A; Navidi, W C
1989-01-01
The multistage model is tested on several human and animal data sets. It fits in some cases but not in others. With human lung cancer data, there is a drop in risk for ex-smokers quite different from the predictions of the model. The results are not conclusive but are compatible with the view that the multistage model provides a family of curves that often fit cancer incidence data, but may not capture the underlying biological reality. PMID:2667978
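The multistage (Armitage-Doll) model referred to above predicts an incidence rate rising as a power of age; a minimal sketch, with an arbitrary illustrative rate constant `c`:

```python
import math

def multistage_incidence(age, k, c=1e-12):
    """Armitage-Doll multistage approximation: cancer incidence rises
    as age**(k-1) for a k-stage process; c lumps the per-stage
    transition rates (its value here is an arbitrary illustration)."""
    return c * age ** (k - 1)

# On a log-log plot of incidence vs. age the slope recovers k - 1,
# the classic check applied to incidence data.
slope = (math.log(multistage_incidence(70, 6) / multistage_incidence(35, 6))
         / math.log(70 / 35))
```

The drop in risk for ex-smokers noted in the abstract is precisely the kind of feature this simple power law cannot produce, since incidence here depends only on current age.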
NASA Technical Reports Server (NTRS)
Kerstman, Eric; Minard, Charles; Saile, Lynn; Freiere deCarvalho, Mary; Myers, Jerry; Walton, Marlei; Butler, Douglas; Iyengar, Sriram; Johnson-Throop, Kathy; Baumann, David
2010-01-01
The goals of the Integrated Medical Model (IMM) are to develop an integrated, quantified, evidence-based decision support tool useful to crew health and mission planners and to help align science, technology, and operational activities intended to optimize crew health, safety, and mission success. Presentation slides address scope and approach, beneficiaries of IMM capabilities, history, risk components, conceptual models, development steps, and the evidence base. Space adaptation syndrome is used to demonstrate the model's capabilities.
Generating Gowdy cosmological models
A. Sanchez; A. Macias; H. Quevedo
2003-12-18
Using the analogy with stationary axisymmetric solutions, we present a method to generate new analytic cosmological solutions of Einstein's equation belonging to the class of $T^3$ Gowdy cosmological models. We show that the solutions can be generated from their data at the initial singularity and present the formal general solution for arbitrary initial data. We exemplify the method by constructing the Kantowski-Sachs cosmological model and a generalization of it that corresponds to an unpolarized $T^3$ Gowdy model.
Multidimensional reactor kinetics modeling
Diamond, D.J.
1996-11-01
There is general agreement that for many light water reactor transient calculations, it is necessary to use a multidimensional neutron kinetics model coupled to a thermal-hydraulics model for satisfactory results. These calculations are needed for a variety of applications: licensing safety analysis, probabilistic risk assessment (PRA), operational support, and training. The latter three applications have always required best-estimate models, but in the past applications for licensing could be satisfied with relatively simple models. By using more sophisticated best-estimate models, the consequences of these calculations are better understood, and the potential for gaining relief from restrictive operating limits increases. Hence, for all of the aforementioned applications, it is important to have the ability to do best-estimate calculations with multidimensional neutron kinetics models coupled to sophisticated thermal-hydraulic models. Specifically, this paper reviews the status of multidimensional neutron kinetics modeling which would be used in conjunction with thermal-hydraulic models to do core dynamics calculations, either coupled to a complete NSSS representation or in isolation. In addition, the paper makes recommendations as to what should be the state of the art for the next ten years. The review is an update to a previous review of the status as of ten years ago. The general requirements for a core dynamics code and the modeling available for such a code, discussed in that review, are still applicable. The emphasis in the current review is on the neutron kinetics, assuming that the necessary thermal-hydraulic capability exists. In addition to discussing the basic neutron kinetics, discussion is given of related modeling (other than thermal-hydraulics). The capabilities and limitations of current computer codes are presented to understand the state of the art and to help clarify the future direction of model development in this area.
Area Models: Multiplying Fractions
NSDL National Science Digital Library
2012-11-05
In this lesson students will investigate the relationship between area models and the concept of multiplying fractions. Students will use area models to develop understanding of the concept of multiplying fractions, as well as to find the product of two common fractions. The teacher will use the free application GeoGebra (see download link under Suggested Technology) to provide students with a visual representation of how area models can be used when multiplying fractions.
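The area model the lesson describes amounts to counting double-shaded cells in a grid; a minimal sketch (the function name and grid interpretation are illustrative, not from the lesson):

```python
from fractions import Fraction

def area_model_product(a, b, c, d):
    """Multiply a/b * c/d with an area model: shade a of b columns and
    c of d rows of a unit square; the double-shaded region covers
    a*c of the b*d equal cells."""
    shaded = a * c          # cells shaded in both directions
    total = b * d           # cells in the whole unit square
    return Fraction(shaded, total)

product = area_model_product(2, 3, 3, 4)  # 2/3 * 3/4 -> 6 of 12 cells -> 1/2
```

`Fraction` reduces 6/12 to lowest terms automatically, mirroring the simplification step students do by hand.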
NASA Astrophysics Data System (ADS)
Aglietti, U. G.; Santini, P. M.
2015-06-01
By constructing the Riemann surface controlling the resonance structure of Winter model, we determine the limitations of perturbation theory. We then derive explicit non-perturbative results for various observables in the weak-coupling regime, in which the model has an infinite tower of long-lived resonant states. The problem of constructing proper initial wavefunctions coupled to single excitations of the model is also treated within perturbative and non-perturbative methods.
Thatcher
1984-01-01
The Surface-To-Air Missile (SAM) Electro-Magnetic-Pulse (EMP) (SEMP) model simulates the illumination of an entire SAM brigade with an EMP weapon. It computes probability distributions of SAM brigade performance levels after an EMP attack has occurred. Brigade performance is determined by the combination of components that survive the EMP. Accordingly, the SEMP model is separated into the component failure model and
NSDL National Science Digital Library
Robert MacKay
In this JAVA-based interactive modeling activity, students are introduced to the concept of mass balance, flow rates, and equilibrium using a simple water bucket model. Students can vary flow rate into the bucket, initial water level in the bucket, and residence time of water in the bucket. After running the model, the bucket's water level as a function of time is presented graphically and in tabular form.
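The bucket model above is a standard mass balance, dV/dt = inflow - V/tau; a simple Euler-integration sketch (all parameter values are illustrative):

```python
def simulate_bucket(inflow, level0, residence_time, dt=0.1, steps=1000):
    """Euler integration of the mass balance dV/dt = inflow - V/tau:
    outflow is proportional to the stored volume, so the level relaxes
    toward the equilibrium value inflow * residence_time."""
    level = level0
    history = [level]
    for _ in range(steps):
        level += dt * (inflow - level / residence_time)
        history.append(level)
    return history

# Empty bucket, inflow 2 units/s, residence time 5 s -> settles near 10 units.
levels = simulate_bucket(inflow=2.0, level0=0.0, residence_time=5.0)
```

Plotting `history` against time reproduces the exponential approach to equilibrium that the JAVA activity displays graphically.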
Modeling Carbon Dioxide Levels
NSDL National Science Digital Library
2009-01-01
In this activity students will explore levels of carbon dioxide (CO2) in the atmosphere over time. There is concern that levels of CO2 are rising, and finding a good mathematical model for CO2 levels is an important part of determining whether this is attributable to human technology. Students draw a scatter plot, choose two points to create a linear model for the data, then use the model to make predictions.
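The two-point linear modeling step can be sketched as follows; the (year, ppm) readings below are placeholder values for illustration, not GLOBE or observatory data:

```python
def two_point_model(p1, p2):
    """Linear model y = m*x + b through two chosen (year, ppm) points."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)
    b = y1 - m * x1
    return lambda x: m * x + b

# Two illustrative (year, CO2 ppm) readings -- placeholder values only.
co2 = two_point_model((1960, 317.0), (2000, 369.0))
prediction = co2(2020)  # extrapolate the trend to 2020
```

With these placeholder points the slope is 1.3 ppm per year, so the model extrapolates to 395 ppm in 2020; comparing such predictions against later data is how students judge the model's quality.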
Engel, D.W.; McGrail, B.P.
1993-11-01
The Office of Civilian Radioactive Waste Management and the Power Reactor and Nuclear Fuel Development Corporation of Japan (PNC) have supported the development of the Analytical Repository Source-Term (AREST) code at Pacific Northwest Laboratory. AREST is a computer model developed to evaluate radionuclide release from an underground geologic repository. The AREST code can be used to calculate/estimate the amount and rate of each radionuclide that is released from the engineered barrier system (EBS) of the repository. The EBS is the man-made or disrupted area of the repository. AREST was designed as a system-level model to simulate the behavior of the total repository by combining process-level models for the release from an individual waste package or container. AREST contains primarily analytical models for calculating the release/transport of radionuclides to the host rock that surrounds each waste package. Analytical models were used because of the small computational overhead that allows all the input parameters to be derived from a statistical distribution. Recently, a one-dimensional numerical model was also incorporated into AREST, to allow for more detailed modeling of the transport process with arbitrary-length decay chains. The next step in modeling the EBS is to develop a model that couples the probabilistic capabilities of AREST with a more detailed process model. This model will need to look at the reactive coupling of the processes that are involved in the release process. Such coupling would include: (1) the dissolution of the waste form, (2) the geochemical modeling of the groundwater, (3) the corrosion of the container overpacking, and (4) the backfill material, to name a few. Several of these coupled processes are already incorporated in the current version of AREST.
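Release calculations with arbitrary-length decay chains, as mentioned above, rest on the classical Bateman solution for a linear chain; a minimal sketch (an illustration of the standard formula, not AREST's actual implementation):

```python
import math

def bateman_chain(n0, lambdas, t):
    """Bateman solution for a linear decay chain: number of atoms of
    the last chain member at time t, starting from n0 atoms of the
    first nuclide. Decay constants must be pairwise distinct."""
    total = 0.0
    for i, li in enumerate(lambdas):
        denom = 1.0
        for j, lj in enumerate(lambdas):
            if j != i:
                denom *= lj - li
        total += math.exp(-li * t) / denom
    rate_product = n0
    for lam in lambdas[:-1]:
        rate_product *= lam
    return rate_product * total

# Two-member chain, decay constants 0.1 and 0.2 per unit time (illustrative).
n2 = bateman_chain(1000.0, [0.1, 0.2], 5.0)
```

For a two-member chain this reduces to the familiar n0 * lambda1/(lambda2 - lambda1) * (exp(-lambda1 t) - exp(-lambda2 t)) form.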
U. G. Aglietti; P. M. Santini
2015-03-09
By constructing the Riemann surface controlling the resonance structure of Winter model, we determine the limitations of perturbation theory. We then derive explicit non-perturbative results for various observables in the weak-coupling regime, in which the model has an infinite tower of long-lived resonant states. The problem of constructing proper initial wavefunctions coupled to single excitations of the model is also treated within perturbative and non-perturbative methods.
Denis R. Hirschfeldt; Richard A. Shore; Theodore A. Slaman
2007-01-01
We investigate the complexity of several classical model theoretic theorems about prime and atomic models and omitting types. Some are provable in RCA0, others are equivalent to ACA0. One, that every atomic theory has an atomic model, is not provable in RCA0 but is incomparable with WKL0, more than Pi^1_1 conservative over RCA0 and strictly weaker than all the
NSDL National Science Digital Library
2012-08-06
In this activity, learners simulate the behavior of the atmosphere. Learners working in groups of four will represent "cells" of the model (Earth, lower atmosphere, upper atmosphere, and space) and exchange energy with each other. Learners will observe how temperature fluctuates in the model. Use this activity to introduce learners to the inner-workings of the atmosphere as well as how scientists use models to understand abstract phenomena.
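The energy-exchange idea in this activity can be mimicked numerically: a toy two-cell model in which a surface and a single opaque atmospheric layer trade radiation until temperatures settle. All heat capacities, starting temperatures, and step sizes below are arbitrary illustrative choices:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrate(solar_in=240.0, dt=10.0, steps=50000):
    """Toy two-cell energy-balance model: a surface and one opaque
    atmospheric layer exchange radiation each time step. The layer
    absorbs everything from below and emits both up and down, so at
    equilibrium the surface is warmer than it would be without it."""
    c = 1e5                       # heat capacity per cell, J m^-2 K^-1 (assumed)
    t_surf, t_atm = 250.0, 230.0  # arbitrary starting temperatures, K
    for _ in range(steps):
        surf_emit = SIGMA * t_surf ** 4   # absorbed by the layer
        atm_emit = SIGMA * t_atm ** 4     # emitted up to space and down
        t_surf += dt * (solar_in + atm_emit - surf_emit) / c
        t_atm += dt * (surf_emit - 2.0 * atm_emit) / c
    return t_surf, t_atm

t_surf, t_atm = equilibrate()
# equilibrium: SIGMA * t_atm**4 = solar_in, and t_surf = 2**0.25 * t_atm
```

Watching the two temperatures fluctuate toward equilibrium is the numerical analogue of the energy passing between the learner "cells".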
Making Mendel's Model Manageable
NSDL National Science Digital Library
Karen Mesmer
2006-01-01
Genetics is often a fascinating but difficult subject for middle level students. This engaging activity presents an approach that helps students understand how genotypes can translate into phenotypes using Gummi Bears and Gummi Dolphins to solve problems using Mendel's model, and then revising the model as necessary. Developing a model gives students a sense of how science works and how data translate into scientific ideas.
Prompt Fission Neutrons as Probes to Nuclear Configurations at Scission
Talou, P.; Kawano, T. [Nuclear Physics Group, Theoretical Division, Los Alamos National Laboratory, New Mexico 87545 (United States); Bonneau, L. [CENBG, Le Haut Vigneau, 33175 Gradignan (France)
2008-04-17
Prompt fission neutrons and gamma rays emitted by excited primary fission fragments are indirect probes of the nuclear configurations present near the scission point. By studying detailed characteristics of these quantities, it is shown that one can discriminate between various assumptions regarding the sharing of the free energy at scission between the two fragments. The case of low-energy neutron-induced fission of {sup 235}U is studied and interpreted in terms of fission modes.
Using Hidden Markov Models to Model Multiple Transcription Factor Binding
Arvestad, Lars
Using Hidden Markov Models to Model Multiple Transcription Factor Binding. Master of Science Thesis, Stockholm, Sweden, 2006. This report presents a hidden Markov model for modeling the co-regulation of motifs and the distance between them.
Using Markov Models and Hidden Markov Models to Find Repetitive Extragenic Palindromic Sequences
Karplus, Kevin
Describes methods for using simple Markov models and hidden Markov models (HMMs) to search for interesting sequences, and methods for automatically constructing simple Markov models and hidden Markov models from small training sets.
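Both records above score sequences with hidden Markov models; the core computation is the forward algorithm, which sums the probability of an observation sequence over all hidden state paths. The toy two-state motif/background model below is entirely illustrative:

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: probability of the observation sequence under
    a discrete HMM, summed over all hidden state paths."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for symbol in obs[1:]:
        alpha = {s: emit_p[s][symbol] *
                    sum(alpha[r] * trans_p[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())

# Toy motif/background model over DNA letters (all numbers illustrative).
states = ("background", "motif")
start = {"background": 0.9, "motif": 0.1}
trans = {"background": {"background": 0.9, "motif": 0.1},
         "motif": {"background": 0.2, "motif": 0.8}}
emit = {"background": dict.fromkeys("ACGT", 0.25),
        "motif": {"A": 0.4, "C": 0.1, "G": 0.1, "T": 0.4}}
likelihood = forward("ATTA", states, start, trans, emit)
```

Sequences resembling the motif's emission profile score higher than those that do not, which is how an HMM flags candidate binding sites or repetitive elements.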
A component object model strategy for reusing ecosystem models
Jinxun Liu; Changhui Peng; Qinglai Dang; Mike Apps; Hong Jiang
2002-01-01
Most ecosystem simulation models are large monolithic simulation programs that are machine dependent and difficult to reuse by other modelers. One way to effectively reuse existing ecosystem models is to break the models into smaller functional parts. These parts are then reconstructed into standardized model components which can be pieced together to form a new model with the desired characteristics.
Human Motor Computational Model Through Iterative Model Reference Adaptive Control
Melbourne, University of
Shou-Han Zhou. Abstract: A computational model using mechanical impedance control in combination with an iterative model reference adaptive control scheme is presented, reproducing behavior observed in human motor adaptation. Keywords: Human Motor Computational Model; Iterative Model Reference Adaptive Control.
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
2008-01-01
An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high-performance space systems.
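A common error-budget convention for rolling allocations up the hierarchy is the root-sum-square combination of independent contributors; a minimal sketch (the convention is standard practice, not something this particular paper prescribes):

```python
import math

def rss_rollup(allocations):
    """Combine lower-level error allocations into a system-level error,
    assuming independent zero-mean contributors add in quadrature
    (root-sum-square), a common error-budget convention."""
    return math.sqrt(sum(a * a for a in allocations))

system_error = rss_rollup([3.0, 4.0])  # -> 5.0
```

Flowing requirements down is the inverse problem: choosing allocations whose RSS stays within the system-level allowable error.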
Fisher, G.H. (Univ. of Hawaii (United States)); Hawley, S.L. (California Univ., Riverside, CA (United States) Lawrence Livermore National Laboratory, CA (United States))
1992-01-01
The authors present 3 sets of solar flare atmospheric models, computed in 3 different limits. In all of the models, energy balance is assumed, with radiative losses from the optically thick transitions of HI, CaII, and MgII balancing flare heating from nonthermal electrons and X-rays from the flaring corona. In the 'Hydrostatic' models, they have assumed that flare heating by Coulomb collisions from a flux of nonthermal electrons has been occurring for an infinitely long time, and the corona and chromosphere have achieved both energetic and hydrostatic equilibrium. In the 'Impulsive' models, they have assumed that the atmospheric density remains frozen in its preflare state, but that the atmosphere rapidly achieves a temperature structure consistent with energy balance. In the 'Evolving' models, they have assumed a temporal variation of the nonthermal electron heating rate consistent with flare heating on time scales of 5-10 minutes, corresponding to a long-lived and intense flare, continually undergoing chromospheric evaporation. In this case, the chromospheric model is in hydrostatic equilibrium, but the flare transition region is at depths that are much less than those in the 'Hydrostatic' models. They present temperature and density structures in these model atmospheres, line and continuum fluxes from each model, and a few selected line profiles.
Mathematical models for carboxyhemoglobin
NASA Astrophysics Data System (ADS)
Marcus, Allan H.
This paper describes a non-linear model for blood COHb fraction as a function of a subject's activity level, individual physiological parameters, and individual time-variable exposure to carbon monoxide. The computer solution of the model fits data on smokers and non-smokers. The simpler models of Ott and Mage (1978) and Venkatram and Louch (1979) are shown to have the correct form in low-stress environments, but the simpler models do not allow assessment of health effects for sensitive or high-risk subpopulations. Their postulated constant time-scale of 2.49 h may lead to substantial underestimation of COHb buildup.
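The simpler constant-time-scale models mentioned (Ott and Mage 1978; Venkatram and Louch 1979) amount to first-order kinetics; a sketch using the 2.49 h time constant cited above (the starting and equilibrium COHb values are illustrative assumptions):

```python
import math

def cohb_fraction(c0, c_eq, t_hours, tau=2.49):
    """First-order COHb kinetics: the blood COHb fraction relaxes
    exponentially toward the equilibrium value set by the ambient CO
    level, with the fixed time constant tau (hours) noted above."""
    return c_eq + (c0 - c_eq) * math.exp(-t_hours / tau)

# Starting at 1% COHb in air that would equilibrate at 5% (values assumed):
level = cohb_fraction(0.01, 0.05, t_hours=2.49)
# after one time constant about 63% of the gap to equilibrium is closed
```

As the abstract notes, holding tau fixed is exactly what breaks down for sensitive subpopulations, where the non-linear model's activity- and physiology-dependent parameters matter.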
Adaptive chemical model reduction
NASA Astrophysics Data System (ADS)
Najm, H. N.; Lee, J. C.; Valorani, M.; Goussis, D. A.; Frenklach, M.
2005-01-01
We briefly review various chemical model reduction strategies with application in reacting flow computations. We focus on systematic methods that enable automated model reduction. We highlight the specific advantages of computational singular perturbation (CSP) analysis. We outline a novel implementation of CSP, with adaptive tabulation of the basis vectors, that enables fast identification of the reduced chemical model at any point in the chemical phase space, and efficient integration of the chemical system. We describe this implementation in the context of a particular model problem that exhibits stiffness typical of chemical kinetic systems.
Graziani, F.R.
1983-01-01
A study of the large N behavior of both the O(N) linear and nonlinear sigma models is presented. The purpose is to investigate the relationship between the disordered (ordered) phases of the linear and nonlinear sigma models, utilizing operator product expansions and stability analysis. The dimensionless renormalized quartic coupling and the limit of the linear sigma model which yields the nonlinear sigma model are investigated. It is also shown that stable large N linear sigma models with lambda < 0 (lambda is the bare quartic coupling) can exist (at least in the context of no tachyonic states being present). A criterion, valid for all dimensionalities d less than four, is derived which determines when lambda < 0 models are tachyon free. Arguments are given showing that the d = 4 large N linear (for lambda > 0) and nonlinear models are trivial. This result (i.e., triviality) is well known but only for one and two component models. Interestingly enough, the lambda < 0, d = 4 linear sigma model remains nontrivial and tachyon free. The initial steps toward relating the effective gluon mass and the gluon condensate are taken. The contribution of the self-energy insertion diagram (two-loop order) to the electromagnetic polarization tensor is given. Assuming that the vertex correction gives a comparable contribution, a value for the effective gluon mass of 400-700 MeV is found.
Shkilev, V. P. [National Academy of Sciences of Ukraine, Institute of Surface Chemistry (Ukraine)], E-mail: shkilevv@ukr.net
2008-11-15
A locally nonequilibrium model of superdiffusion is proposed that is based on the partition of the set of diffusing particles into groups according to the flight length of these particles. The process of diffusion is described in terms of partial concentrations of particles belonging to different groups. As special limit cases, the model yields equations with fractional time derivative and the so-called porous medium equation. The basic equations of the model are Markov equations; therefore, they easily include reaction terms. The model can be applied to describing the types of diffusion in which the diffusing particles are in free flight most of the time.
Mathematical model of sarcoidosis.
Hao, Wenrui; Crouser, Elliott D; Friedman, Avner
2014-11-11
Sarcoidosis is a disease involving abnormal collection of inflammatory cells forming nodules, called granulomas. Such granulomas occur in the lung and the mediastinal lymph nodes, in the heart, and in other vital and nonvital organs. The origin of the disease is unknown, and there are only limited clinical data on lung tissue of patients. No current model of sarcoidosis exists. In this paper we develop a mathematical model on the dynamics of the disease in the lung and use patients' lung tissue data to validate the model. The model is used to explore potential treatments. PMID:25349384
Lightning return stroke models
NASA Technical Reports Server (NTRS)
Lin, Y. T.; Uman, M. A.; Standler, R. B.
1980-01-01
We test the two most commonly used lightning return stroke models, Bruce-Golde and transmission line, against subsequent stroke electric and magnetic field wave forms measured simultaneously at near and distant stations and show that these models are inadequate to describe the experimental data. We then propose a new return stroke model that is physically plausible and that yields good approximations to the measured two-station fields. Using the new model, we derive return stroke charge and current statistics for about 100 subsequent strokes.
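The transmission-line model tested above assumes the channel-base current waveform travels up the channel unchanged at speed v, so i(z, t) = i(0, t - z/v); a sketch (the double-exponential base current and all parameter values are assumed for illustration):

```python
import math

def tl_model_current(base_current, z, t, v=1.5e8):
    """Transmission-line return stroke model: the channel-base current
    waveform propagates up the channel unchanged at speed v (m/s), so
    i(z, t) = i(0, t - z/v), and is zero before the front arrives."""
    delay = z / v
    return base_current(t - delay) if t >= delay else 0.0

def base(t):
    # Illustrative double-exponential channel-base current (values assumed).
    return 30e3 * (math.exp(-t / 50e-6) - math.exp(-t / 0.5e-6))

i_1km = tl_model_current(base, z=1000.0, t=10e-6)  # current 1 km up, at 10 us
```

The Bruce-Golde model differs in assuming the current is uniform over the whole channel below the front; the paper's proposed model modifies these idealizations to match the two-station field measurements.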
NASA Technical Reports Server (NTRS)
Levy, R.; Mcginness, H.
1976-01-01
Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
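An uncorrelated hourly sampler like the interim model can be sketched by drawing from a fitted distribution; wind speeds are commonly modeled as Weibull-distributed, and the shape and scale below are illustrative, not fitted Goldstone values:

```python
import random

def hourly_wind_speeds(hours, shape=2.0, scale=7.0, seed=42):
    """Draw uncorrelated hourly wind-speed samples (m/s) from a Weibull
    distribution, mimicking the interim simulation model; shape and
    scale here are illustrative, not fitted Goldstone parameters."""
    rng = random.Random(seed)
    return [rng.weibullvariate(scale, shape) for _ in range(hours)]

speeds = hourly_wind_speeds(24 * 365)  # one year of hourly samples
```

The stochastic model discussed in the abstract goes further, adding hour-to-hour correlation that an independent sampler like this one cannot reproduce.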
Modelling approaches in biomechanics.
Alexander, R McN
2003-01-01
Conceptual, physical and mathematical models have all proved useful in biomechanics. Conceptual models, which have been used only occasionally, clarify a point without having to be constructed physically or analysed mathematically. Some physical models are designed to demonstrate a proposed mechanism, for example the folding mechanisms of insect wings. Others have been used to check the conclusions of mathematical modelling. However, others facilitate observations that would be difficult to make on real organisms, for example on the flow of air around the wings of small insects. Mathematical models have been used more often than physical ones. Some of them are predictive, designed for example to calculate the effects of anatomical changes on jumping performance, or the pattern of flow in a 3D assembly of semicircular canals. Others seek an optimum, for example the best possible technique for a high jump. A few have been used in inverse optimization studies, which search for variables that are optimized by observed patterns of behaviour. Mathematical models range from the extreme simplicity of some models of walking and running, to the complexity of models that represent numerous body segments and muscles, or elaborate bone shapes. The simpler the model, the clearer it is which of its features is essential to the calculated effect. PMID:14561333
Brown-VanHoozer, S. A.
1999-06-02
Conscious awareness of our environment is based on a feedback loop composed of sensory input transmitted to the central nervous system, leading to construction of our "model of the world" (Lewis et al., 1982). We then assimilate the neurological model at the unconscious level into information we can later consciously consider useful in identifying belief systems and behaviors for designing diverse systems. Thus, we can avoid potential problems based on our open-to-error perceived reality of the world. By understanding how our model of reality is organized, we allow ourselves to transcend content and develop insight into how effective choices and belief systems are generated through sensory-derived processes. These are the processes which provide the designer the ability to meta model (build a model of a model) the user; consequently, matching the mental model of the user with the designer's and, coincidentally, forming rapport between the two participants. The information shared between the participants is neither assumed nor generalized; it is closer to equivocal, thus minimizing error through a sharing of each other's model of reality. How to identify individual mental mechanisms or processes, how to organize the individual strategies of these mechanisms into useful patterns, and how to formulate these into models for success and knowledge-based outcomes is the subject of the discussion that follows.
Andrea Ruff
2008-09-16
A quasar wind model is proposed to describe the spatial and velocity structure of the broad line region. This model requires detailed photoionization and magnetohydrodynamic simulation, as the broad line region is too small for direct spatial resolution. The emission lines are Doppler broadened, since the gas is moving at high velocity. The high velocity is attained by the gas through a combination of radiative and magnetic driving forces. Once this model is complete, its predictions will be tested against recent microlensing data in conjunction with diverse existing observations.
Daniele S. M. Alves; Jamison Galloway; Matthew McCullough; Neal Weiner
2015-02-17
Models with Dirac gauginos provide appealing scenarios for physics beyond the standard model. They have smaller radiative corrections to the Higgs mass, a suppression of certain SUSY production processes, and ameliorated flavor constraints. Unfortunately, they also generally have tachyons, the solutions to which typically spoil these positive features. The recently proposed "Goldstone Gaugino" mechanism provides a simple solution that eliminates these tachyonic states. We provide details on this mechanism and explore models for its origin. In particular, we find SUSY QCD models that realize this idea simply, and discuss scenarios for unification.
Alves, Daniele S M; McCullough, Matthew; Weiner, Neal
2015-01-01
Models with Dirac gauginos provide appealing scenarios for physics beyond the standard model. They have smaller radiative corrections to the Higgs mass, a suppression of certain SUSY production processes, and ameliorated flavor constraints. Unfortunately, they also generally have tachyons, the solutions to which typically spoil these positive features. The recently proposed "Goldstone Gaugino" mechanism provides a simple solution that eliminates these tachyonic states. We provide details on this mechanism and explore models for its origin. In particular, we find SUSY QCD models that realize this idea simply, and discuss scenarios for unification.
NASA Technical Reports Server (NTRS)
Kalkofen, W.
1985-01-01
The assumptions of Ayres' model of the upper solar atmosphere are examined. It is found that the bistable character of his model is postulated through the assumptions concerning the opacity sources and the effect of mechanical waves, which are allowed to destroy the CO molecules but not to heat the gas. The neglect of cooling by metal lines is based on their reduced local cooling rate, but it ignores the increased depth over which this cooling occurs. Thus, the bifurcated model of the upper solar atmosphere consists of two models, one cold at the temperature minimum, with a kinetic temperature of 2900 K, and the other hot, with a temperature of 4900 K.
Atmospheric Science Data Center
2014-04-25
Station Instrument: Chemiluminescence UV Ozone Detector. Location: Northeastern United States. Files: NE Model Readme; Hourly Surface Air Quality Ozone & Nitrogen Measurement Sites.
NSDL National Science Digital Library
University of Nebraska State Museum
2001-01-01
In this detailed activity (on pages 9-18), learners investigate the body parts of a parasitic ascaris worm by making and dissecting clay models. Each learner creates a model of either a male or female worm, then they swap models, predict the sex of their new worm, and dissect it to check their prediction. Labels are provided for marking the parts of the dissected models. The activity introduces learners to the structure and function of digestive and reproductive organs of organisms that can live in the human body and cause disease.
Modeling correlated healthcare costs.
Daggy, Joanne K; Thomas, Joseph; Craig, Bruce A
2011-02-01
Accurate estimation and prediction of healthcare costs play crucial roles in decisions made by healthcare agencies on policy and resource allocation. Development of a cost model allows these decision-makers the opportunity to investigate the impact of different policies and/or allocations of resources. With increased subject-specific information, longitudinal studies and the breakdown of total costs into categories comes the need for healthcare cost models to account for correlation. In this article, we review the statistical models used to fit joint costs, emphasizing the use of copulas as a flexible and relatively straightforward approach to move from marginal to joint modeling. PMID:21351862
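The copula approach highlighted in the review can be sketched with a Gaussian copula joining two exponential cost marginals (all distributions and parameter values here are illustrative choices, not from the article):

```python
import math
import random

def gaussian_copula_costs(rho, mean1, mean2, n, seed=0):
    """Sample (cost1, cost2) pairs whose dependence is a Gaussian copula
    with correlation rho, joined to exponential marginals; all
    distributional choices here are illustrative."""
    rng = random.Random(seed)

    def norm_cdf(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        u1, u2 = norm_cdf(z1), norm_cdf(z2)
        # exponential inverse CDF maps the correlated uniforms to costs
        pairs.append((-mean1 * math.log(1.0 - u1),
                      -mean2 * math.log(1.0 - u2)))
    return pairs

pairs = gaussian_copula_costs(0.8, 1000.0, 500.0, 5000)
```

This is what makes copulas attractive for cost modeling: the skewed marginals and the dependence structure are specified independently, so either can be swapped without refitting the other.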
NASA Astrophysics Data System (ADS)
Wiegelmann, Thomas; Petrie, Gordon J. D.; Riley, Pete
2015-07-01
Coronal magnetic field models use photospheric field measurements as a boundary condition to model the solar corona. In this paper we review the most common model assumptions, starting from MHD models, then magnetohydrostatics, force-free and finally potential field models. Each model in this list is somewhat less complex than the previous one and makes more restrictive assumptions by neglecting physical effects. The magnetohydrostatic approach neglects time-dependent phenomena and plasma flows; the force-free approach additionally neglects the gradient of the plasma pressure and the gravity force. This leads to the assumption of a vanishing Lorentz force, and electric currents are parallel (or anti-parallel) to the magnetic field lines. Finally, the potential field approach neglects these currents as well. We outline the main assumptions, benefits and limitations of these models both from a theoretical viewpoint (how realistic are the models?) and a practical one (which computer resources do we need?). Finally we address the important problem of noisy and inconsistent photospheric boundary conditions and the possibility of using chromospheric and coronal observations to improve the models.
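A potential (current-free) field illustrates the simplest member of this hierarchy: each horizontal Fourier harmonic of the photospheric field extends upward with exponential decay at its own wavenumber. A one-harmonic sketch:

```python
import math

def potential_field_bz(b0, wavelength, x, z):
    """Vertical field above the photosphere for a single harmonic of a
    current-free (potential) field: a boundary pattern b0*cos(k*x)
    continues upward as b0*cos(k*x)*exp(-k*z), with k = 2*pi/wavelength."""
    k = 2.0 * math.pi / wavelength
    return b0 * math.cos(k * x) * math.exp(-k * z)
```

Each such harmonic satisfies Laplace's equation, which is why short-wavelength photospheric structure vanishes fastest with height in potential-field extrapolations.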
NSDL National Science Digital Library
Andrew Jackson
This page from the Swiss Federal Institute of Technology explains current research modeling geomagnetic processes, including magnetic reversals and secular variation. Includes three supporting figures.
NASA Technical Reports Server (NTRS)
Merkowitz, Stephen M.
2002-01-01
The Laser Interferometer Space Antenna (LISA) space mission has unique needs that argue for an aggressive modeling effort. These models ultimately need to forecast and interrelate the behavior of the science input, structure, optics, control systems, and many other factors that affect the performance of the flight hardware. In addition, many components of these integrated models will also be used separately for the evaluation and investigation of design choices, technology development and integration and test. This article presents an overview of the LISA integrated modeling effort.
NASA Technical Reports Server (NTRS)
Sapyta, Joe; Reid, Hank; Walton, Lew
1993-01-01
The topics are presented in viewgraph form and include the following: particle bed reactor (PBR) core cross section; PBR bleed cycle; fuel and moderator flow paths; PBR modeling requirements; characteristics of PBR and nuclear thermal propulsion (NTP) modeling; challenges for PBR and NTP modeling; thermal hydraulic computer codes; capabilities for PBR/reactor application; thermal/hydraulic codes; limitations; physical correlations; comparison of predicted friction factor and experimental data; frit pressure drop testing; cold frit mask factor; decay heat flow rate; startup transient simulation; and philosophy of systems modeling.
A Revised Communication Model.
ERIC Educational Resources Information Center
Emmons, John H.
1988-01-01
Presents a communication process model that distinguishes between active and passive communication. Discusses the various stages in communication, including pretransmission, transmission, and posttransmission. (CH)
Mathematical model of sarcoidosis
Hao, Wenrui; Crouser, Elliott D.; Friedman, Avner
2014-01-01
Sarcoidosis is a disease involving abnormal collection of inflammatory cells forming nodules, called granulomas. Such granulomas occur in the lung and the mediastinal lymph nodes, in the heart, and in other vital and nonvital organs. The origin of the disease is unknown, and there are only limited clinical data on lung tissue of patients. No current model of sarcoidosis exists. In this paper we develop a mathematical model on the dynamics of the disease in the lung and use patients’ lung tissue data to validate the model. The model is used to explore potential treatments. PMID:25349384
MODELS AND HISTORY OF MODELING Hermann Schichl
Schichl, Hermann
of Cyrene, one of the first "applied mathematicians", used this knowledge to calculate the Earth-Sun and Earth-Moon distances and, best known, the circumference of the Earth by a mathematical/geometric model, and epicycles to predict the movement of the Sun,
Modeling Imports in a Keynesian Expenditure Model
ERIC Educational Resources Information Center
Findlay, David W.
2010-01-01
The author discusses several issues that instructors of introductory macroeconomics courses should consider when introducing imports in the Keynesian expenditure model. The analysis suggests that the specification of the import function should partially, if not completely, be the result of a simple discussion about the spending and import…
Performance Modeling Of Interactive Gaming
Bhulai, Sandjai
Performance Modeling of Interactive Gaming. A.F. Wattimena, May 2006.
Environmental Modeling: Coping with Uncertainty
Politècnica de Catalunya, Universitat
Mathematical Modeling and Analysis Group, Los Alamos National Laboratory; C. Larry Winter (NCAR). Topics: 1. Quantitative modeling of environmental processes; 2. Parametric uncertainty; 3. Current approaches to uncertainty.
Model selection in compositional spaces
Grosse, Roger Baker
2014-01-01
We often build complex probabilistic models by composing simpler models: using one model to generate parameters or latent variables for another model. This allows us to express complex distributions over the observed data ...
MONOIDAL MODEL CATEGORIES MARK HOVEY
Hovey, Mark
Abstract: A monoidal model category is a model category with a closed monoidal structure; two assumptions guarantee the existence of a model structure on the category of monoids.
General background on modeling and specifics of modeling vapor intrusion are given. Three classical model applications are described and related to the problem of petroleum vapor intrusion. These indicate the need for model calibration and uncertainty analysis. Evaluation of Bi...
Turbulence Modeling: A NASA Perspective
NASA Technical Reports Server (NTRS)
Gatski, T. B.
2001-01-01
This paper presents turbulence modeling from NASA's perspective. The topics include: 1) Hierarchy of Solution Methods; 2) Turbulence Modeling Focus; 3) Linear Eddy Viscosity Models; and 4) Nonlinear Eddy Viscosity Algebraic Stress Models.
Ahmed E. Hassan
2006-01-24
Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). A hierarchical approach is proposed to make this determination. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures, with results indicating that they are appropriate measures for evaluating model realizations.
The use of validation data to constrain model input parameters is shown for the second case study using Markov chain Monte Carlo sampling within a Bayesian approach. The approach shows great potential to be helpful in the validation process and in incorporating prior knowledge with new field data to derive posterior distributions for both model input and output.
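As a toy illustration of the Markov chain Monte Carlo idea mentioned above, here is a generic random-walk Metropolis sketch for a single model parameter; the synthetic data, prior, and tuning constants are invented for illustration and are not the study's actual model:

```python
import numpy as np

# Random-walk Metropolis: sample a posterior for one parameter theta,
# combining a broad Gaussian prior with a unit-variance Gaussian likelihood.

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=50)             # synthetic "field data"

def log_post(theta):
    log_prior = -0.5 * theta**2 / 10.0           # N(0, 10) prior
    log_lik = -0.5 * np.sum((data - theta)**2)   # N(theta, 1) likelihood
    return log_prior + log_lik

theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.5)            # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                             # accept; otherwise keep theta
    chain.append(theta)

posterior = np.array(chain[1000:])               # drop burn-in
print(posterior.mean())                          # near the data mean (~2)
```

The chain's post-burn-in draws approximate the posterior; summaries such as the mean or credible intervals then constrain the input parameter.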
Biosphere Process Model Report
J. Schmitt
2000-05-25
To evaluate the postclosure performance of a potential monitored geologic repository at Yucca Mountain, a Total System Performance Assessment (TSPA) will be conducted. Nine Process Model Reports (PMRs), including this document, are being developed to summarize the technical basis for each of the process models supporting the TSPA model. These reports cover the following areas: (1) Integrated Site Model; (2) Unsaturated Zone Flow and Transport; (3) Near Field Environment; (4) Engineered Barrier System Degradation, Flow, and Transport; (5) Waste Package Degradation; (6) Waste Form Degradation; (7) Saturated Zone Flow and Transport; (8) Biosphere; and (9) Disruptive Events. Analysis/Model Reports (AMRs) contain the more detailed technical information used to support TSPA and the PMRs. The AMRs consist of data, analyses, models, software, and supporting documentation that will be used to defend the applicability of each process model for evaluating the postclosure performance of the potential Yucca Mountain repository system. This documentation will ensure the traceability of information from its source through its ultimate use in the TSPA-Site Recommendation (SR) and in the National Environmental Policy Act (NEPA) analysis processes. The objective of the Biosphere PMR is to summarize (1) the development of the biosphere model, and (2) the Biosphere Dose Conversion Factors (BDCFs) developed for use in TSPA. The Biosphere PMR does not present or summarize estimates of potential radiation doses to human receptors. Dose calculations are performed as part of TSPA and will be presented in the TSPA documentation. The biosphere model is a component of the process to evaluate postclosure repository performance and regulatory compliance for a potential monitored geologic repository at Yucca Mountain, Nevada. The biosphere model describes those exposure pathways in the biosphere by which radionuclides released from a potential repository could reach a human receptor.
Collectively, the potential human receptor and exposure pathways form the biosphere model. More detailed technical information and data about potential human receptor groups and the characteristics of exposure pathways have been developed in a series of AMRs and Calculation Reports.
Magretta, Joan
2002-05-01
"Business model" was one of the great buzz-words of the Internet boom. A company didn't need a strategy, a special competence, or even any customers--all it needed was a Web-based business model that promised wild profits in some distant, ill-defined future. Many people--investors, entrepreneurs, and executives alike--fell for the fantasy and got burned. And as the inevitable counterreaction played out, the concept of the business model fell out of fashion nearly as quickly as the .com appendage itself. That's a shame. As Joan Magretta explains, a good business model remains essential to every successful organization, whether it's a new venture or an established player. To help managers apply the concept successfully, she defines what a business model is and how it complements a smart competitive strategy. Business models are, at heart, stories that explain how enterprises work. Like a good story, a robust business model contains precisely delineated characters, plausible motivations, and a plot that turns on an insight about value. It answers certain questions: Who is the customer? How do we make money? What underlying economic logic explains how we can deliver value to customers at an appropriate cost? Every viable organization is built on a sound business model, but a business model isn't a strategy, even though many people use the terms interchangeably. Business models describe, as a system, how the pieces of a business fit together. But they don't factor in one critical dimension of performance: competition. That's the job of strategy. Illustrated with examples from companies like American Express, EuroDisney, WalMart, and Dell Computer, this article clarifies the concepts of business models and strategy, which are fundamental to every company's performance. PMID:12024761
Buried Markov Model Hidden Markov
Takiguchi, Tetsuya
A Study on Dysarthric Speech Recognition using Buried Markov Models, by Chikoto Miyamoto, Yuto Komai. The Buried Markov Model (BMM), introduced by J. Bilmes, extends the Hidden Markov Model (HMM).
Acoustic models and sonar systems
Michael B. Porter
1993-01-01
The basic types of acoustic models are reviewed. These include ray models, spectral integral models, normal mode models, parabolic equation modeling, and 3-D acoustic modeling. Their application to conventional sonar simulation problems is demonstrated. Examples of their use in more advanced signal processing applications are presented.
Active Shape Models - 'Smart Snakes
T. F. Cootes; C. J. Taylor
1992-01-01
We describe 'Active Shape Models' which iteratively adapt to refine estimates of the pose, scale and shape of models of image objects. The method uses flexible models derived from sets of training examples. These models, known as Point Distribution Models, represent objects as sets of labelled points. An initial estimate of the location of the model points in an
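The Point Distribution Model idea can be sketched as PCA on stacked landmark vectors, generating shapes as mean + P @ b; the training shapes, thresholds, and variable names below are invented for illustration and simplified (no alignment step):

```python
import numpy as np

# Point Distribution Model sketch: landmark shapes as vectors, PCA gives
# the mean shape and modes of variation; a shape is approximated by its
# projection onto the leading modes.

rng = np.random.default_rng(1)
n_shapes, n_points = 40, 8
angles = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
base = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # unit circle

# Training set: circles with one random "elongation" mode plus small noise.
shapes = np.array([(base * [1 + 0.3 * rng.normal(), 1]).ravel()
                   + 0.01 * rng.normal(size=2 * n_points)
                   for _ in range(n_shapes)])

mean = shapes.mean(axis=0)
_, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
P = vt[:2].T                        # keep the two strongest modes
b = P.T @ (shapes[0] - mean)        # model parameters for one training shape
recon = mean + P @ b                # reconstruction from 2 modes only
print(np.abs(recon - shapes[0]).max() < 0.1)
```

Constraining b to plausible ranges (e.g. a few standard deviations of each mode) is what keeps the iterative fit to a new image restricted to legal shapes.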
Janet Siar; Melanie Schaffner; Karla L. Hahn
2005-01-01
Librarians and publishers alike are struggling with a plethora of new options for pricing journals. To build a shared understanding of the advantages and concerns of various common pricing models, three categories of pricing models are evaluated in this paper. Variations on traditional subscriptions, tiered pricing, and consortial complexities are defined and then considered from the viewpoint of both librarians and
Dependence Modelling, Model Risk and Model Calibration in Models of Portfolio Credit Risk
Frey, Rüdiger
also have an important impact on the performance of derivative securities, whose payoff is linked to the loss of a whole portfolio of underlying bonds or loans, such as collateralized debt obligations (CDOs). In this part of the paper we build on recent research on dependence modelling in risk management
Nuovo Indirizzo: Modelling and Simulation for Biomedical Applications
Pugliese, Andrea
Nuovo Indirizzo (New Programme): Modelling and Simulation for Biomedical Applications. Eleuterio Toro, Alberto Valli. Second semester: Scientific computing (6 credits, Prof. Dumbser); Biomedical imaging (6 credits); linear models in data analysis, with the use of specialized software on test data; Physiology.
NASA Astrophysics Data System (ADS)
Taniguchi, Tadahiro; Sawaragi, Tetsuo
In this paper, a new machine-learning method, called the Dual-Schemata model, is presented. The Dual-Schemata model is a self-organizing machine learning method for an autonomous robot interacting with an unknown dynamical environment. It is based on Piaget's Schema model, a classical psychological model explaining memory and cognitive development in human beings. Our Dual-Schemata model is developed as a computational model of Piaget's Schema model, focusing especially on the sensorimotor developmental period. This developmental process is characterized by a couple of mutually interacting dynamics: one formed by assimilation and accommodation, and the other formed by equilibration and differentiation. Through these dynamics, the schema system enables an agent to act well in the real world. The schema differentiation process corresponds to a symbol formation process occurring within an autonomous agent when it interacts with an unknown, dynamically changing environment. Experimental results obtained from an autonomous facial robot in which our model is embedded are presented; the robot becomes able to chase a ball moving in various ways without any rewards or teaching signals from outside. Moreover, the emergence of concepts about the target movements within the robot is shown and discussed in terms of fuzzy logic on set-subset inclusion relationships.
This task provides credible state of the art air quality models and guidance for use in implementation of National Ambient Air Quality Standards for ozone and PM. This research effort is to develop and improve air quality models, such as the Community Multiscale Air Quality (CMA...
Dasymetric Modeling and Uncertainty
Nagle, Nicholas N.; Buttenfield, Barbara P.; Leyk, Stefan; Speilman, Seth
2014-01-01
Dasymetric models increase the spatial resolution of population data by incorporating related ancillary data layers. The role of uncertainty in dasymetric modeling has not yet been fully addressed. Uncertainty is usually present because most population data are themselves uncertain, and/or the geographic processes that connect population and the ancillary data layers are not precisely known. A new dasymetric methodology - the Penalized Maximum Entropy Dasymetric Model (P-MEDM) - is presented that enables these sources of uncertainty to be represented and modeled. The P-MEDM propagates uncertainty through the model and yields fine-resolution population estimates with associated measures of uncertainty. This methodology contains a number of other benefits of theoretical and practical interest. In dasymetric modeling, researchers often struggle with identifying a relationship between population and ancillary data layers. The P-MEDM simplifies this step by unifying how ancillary data are included. The P-MEDM also allows a rich array of data to be included, with disparate spatial resolutions, attribute resolutions, and uncertainties. While the P-MEDM does not necessarily produce more precise estimates than do existing approaches, it does help to unify how data enter the dasymetric model, it increases the types of data that may be used, and it allows geographers to characterize the quality of their dasymetric estimates. We present an application of the P-MEDM that includes household-level survey data combined with higher spatial resolution data such as from census tracts, block groups, and land cover classifications. PMID:25067846
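For readers unfamiliar with the basic dasymetric idea the P-MEDM builds on, a minimal sketch follows. It shows plain proportional reallocation by ancillary weights, not the P-MEDM's penalized maximum-entropy estimator; all names and numbers are illustrative:

```python
import numpy as np

# Basic dasymetric reallocation: a coarse zone's population total is
# distributed over fine-resolution cells in proportion to ancillary
# weights (e.g. land-cover classes scored by expected density).

def dasymetric(pop_by_zone, zone_of_cell, weight_of_cell):
    """Redistribute zone totals to cells proportionally to weights."""
    pop = np.zeros_like(weight_of_cell, dtype=float)
    for z, total in enumerate(pop_by_zone):
        mask = zone_of_cell == z
        w = weight_of_cell[mask]
        pop[mask] = total * w / w.sum()
    return pop

# Two zones, five cells; weight 0 marks an uninhabitable (water) cell.
zones = np.array([0, 0, 0, 1, 1])
weights = np.array([3.0, 1.0, 0.0, 1.0, 1.0])   # cell 2 is water
cells = dasymetric([400, 100], zones, weights)
print(cells)   # splits as 300, 100, 0, 50 and 50; zone totals preserved
```

The P-MEDM replaces the fixed proportional rule with an entropy objective penalized by the uncertainty of each input constraint, which is how measurement error propagates into the fine-scale estimates.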
This lecture will present AQUATOX, an aquatic ecosystem simulation model developed by Dr. Dick Park and supported by the U.S. EPA. The AQUATOX model predicts the fate of various pollutants, such as nutrients and organic chemicals, and their effects on the ecosystem, including fi...
ERIC Educational Resources Information Center
Dworkin, Nancy; Dworkin, Yehoash
The 1978 Summer Reading Institute, which served 58 Washington, D.C., elementary school children, is described in this paper. Major characteristics of the program model are first identified, along with elements that were added to the model in the preplanning stage. Numerous aspects of the program are then described, including the make-up of the…
NASA Astrophysics Data System (ADS)
Machtoub, G.
2010-12-01
This work introduces an overview of the framework and design of a 3D visualization software tool intended to model small solar system bodies (planetoids) in a novel synthetic environment. The developed tool facilitates shape and terrain modeling, and rendering under arbitrary viewpoints and lighting conditions. A hybrid algorithm is implemented on a class of available images provided by recent space missions.
The scope of modelling the behavior of pollutants in the aquatic environment is now immense. In many practical applications, there are effectively no computational constraints on what is possible. There is accordingly an increasing need for a set of principles of modelling that in ...
NSDL National Science Digital Library
David Joiner
The Space Ship Pilot model is a model of motion under Newton's laws with and without resistive forces. The first environment puts the user in control of docking a space shuttle, and the second puts the user in control of docking a boat.
NASA Astrophysics Data System (ADS)
Erpylev, N. P.; Smirnov, M. A.; Bagrov, A. V.
A night sky model is proposed. It includes different components of light pollution, such as solar twilight, moon-scattered light, zodiacal light, the Milky Way, airglow and artificial light pollution. The model is designed for calculating the efficiency of astronomical installations.
AGRICULTURAL SIMULATION MODEL (AGSIM)
AGSIM is a large-scale econometric simulation model of regional crop and national livestock production in the United States. The model was initially developed to analyze the aggregate economic impacts of a wide variety of issues facing agriculture, such as technological change, pest...
Technology Transfer Automated Retrieval System (TEKTRAN)
Models of wind erosion are used to investigate fundamental processes and guide resource management. Many models are similar in that - temporal variables control soil wind erodibility; erosion begins when friction velocity exceeds a threshold; and transport capacity for saltation/creep is proportion...
CISNET: Breast Cancer Modeling
The Breast Group is in its third round of funding. Six groups and a coordinating center are funded to model modern developments in breast cancer prevention, early detection and treatment. A unique aspect of the current round of funding is that the groups will model breast cancer as four separate sub-types (based on molecular subtypes).
CISNET's flexible broad-based disease models incorporate a central cancer model, which is modified by the full range of cancer control interventions (i.e., changing risk factor profiles of the population, new screening modalities, and treatment regimens). Outputs can include the full range of the benefits and costs of the interventions.
NSDL National Science Digital Library
This pdf from the South Carolina Advanced Technological Education National Resource Center discusses a model for guided research to improve education for engineering and industrial technicians. The SC ATE Workplace Research Model provides a guide for interdisciplinary faculty, teacher and student teams to conduct workplace research and gain a better understanding of the technician's role in the workplace through industry site visits.
Penny, Will
Empirical Bayes. Will Penny, Bayesian Inference Course, WTCN, UCL, March 2013. Topics: linear models, empirical Bayes, isotropic covariances, the EM algorithm, MAP learning, self-inhibition, receptive fields.
LANGUAGE MODELS Djoerd Hiemstra
Hiemstra, Djoerd
Language models are estimated from training data; for example, a language model based on a big English newspaper archive is expected to assign high probability to newspaper-style English. Language models were first used in speech recognition in the early 1980s [9], where they complement the results of the acoustic model, and were applied to information retrieval by a number of research groups in the late 1990s [4, 7, 14, 15
LONGPRO Stream Modeling Exercise
NSDL National Science Digital Library
Bill Locke
The purpose of this exercise is to integrate modeling with field data. The activity includes links to a "virtual field trip" of maps and photographs. Data from a creek is included in the field trip and students use an Excel spreadsheet model to analyze the data.
Postinstability models in elasticity
NASA Technical Reports Server (NTRS)
Zak, M.
1984-01-01
It is demonstrated that the instability caused by the failure of hyperbolicity in elasticity and associated with the problem of unpredictability in classical mechanics expresses the incompleteness of the original model of an elastic medium. The instability as well as the ill-posedness of the Cauchy problem are eliminated by reformulating the original model.
ERIC Educational Resources Information Center
Baker, William P.; Moore, Cathy Ronstadt
1998-01-01
Understanding antibody structure and function is difficult for many students. The rearrangement of constant and variable regions during antibody differentiation can be effectively simulated using a paper model. Describes a hands-on laboratory exercise which allows students to model antibody diversity using readily available resources. (PVD)
Despite the value and widespread use of the Ames test, little attention has been focused on standardizing quantitative methods of analyzing these data. In this paper, a realistic and statistically tractable model is developed for the evaluation of Ames-type data. The model assume...
ERIC Educational Resources Information Center
Ivie, Stanley D.
2007-01-01
Humanity delights in spinning conceptual models of the world. These models, in turn, mirror their respective root metaphors. Three root metaphors--spiritual, organic, and mechanical--have dominated western thought. The spiritual metaphor runs from Plato, through Hegel, and connects with Montessori. The organic metaphor extends from Aristotle,…
Reliability model generator specification
NASA Technical Reports Server (NTRS)
Cohen, Gerald C.; Mccann, Catherine
1990-01-01
The Reliability Model Generator (RMG), a program which produces reliability models from block diagrams for ASSIST, the interface for the reliability evaluation tool SURE, is described. An account is given of the motivation for RMG, and the implemented algorithms are discussed. The appendices contain the algorithms and two detailed traces of examples.
ERIC Educational Resources Information Center
Whitaker, H. A.
Noting that linguistics and the neurological sciences have developed independently, this paper presents a coordinated approach to man's understanding of language, cognition, and mind. A neurological model is developed following a discussion of the rationale of such an approach. Chapters include: (1) the relation of neurological evidence to models…
ERIC Educational Resources Information Center
Morehouse, Jim; And Others
1983-01-01
The Data Based Gymnasium Model, a systematic approach to physical education for the severely handicapped, is described in this newsletter issue. The model was developed through a cooperative effort between Oregon State University and the Teaching Research Infant and Child Center. The "Game, Exercise, and Leisure Sport Curriculum for the Severely…
ERIC Educational Resources Information Center
Buggey, Tom; Ogle, Lindsey
2012-01-01
Video self-modeling (VSM) first appeared on the psychology and education stage in the early 1970s. The practical applications of VSM were limited by lack of access to tools for editing video, which is necessary for almost all self-modeling videos. Thus, VSM remained in the research domain until the advent of camcorders and VCR/DVD players and,…
Technology Transfer Automated Retrieval System (TEKTRAN)
Water quality models are based on some representation of hydrology and may include movement of surface water, ground water, and mixing of water in lakes and water bodies. Water quality models simulate some combination of sediment, nutrients, heavy metals, xenobiotics, and aquatic biology. Althoug...
Peter Hasenfratz; Julius Kuti
1978-01-01
The quark bag model is reviewed here with particular emphasis on spectroscopic applications and the discussion of exotic objects such as baryonium, gluonium, and the quark phase of matter. The physical vacuum is pictured in the model as a two-phase medium. In the normal phase of the vacuum, outside hadrons, the propagation of quark and gluon fields is forbidden. When small bubbles
Structural Equation Model Trees
ERIC Educational Resources Information Center
Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman
2013-01-01
In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree…
ERIC Educational Resources Information Center
Flannery, Maura C.
1997-01-01
Addresses the most popular models currently being chosen for biological research and the reasons behind those choices. Among the current favorites are zebra fish, fruit flies, mice, monkeys, and yeast. Concludes with a brief examination of the ethical issues involved, and why some animals may need to be replaced in research with model systems.…
Stereolithography models. Final report
Smith, R.E.
1995-03-01
This report describes the first stereolithographic models made, which proved in a new release of ProEngineer software (Parametric Technologies, or PTC) and 3D Systems (Valencia, California) software for the SLA 250 machine. They are a model of benzene and the {alpha}-carbon backbone of the variable region of an antibody.
Modeling Mathematical Cognitive Development.
ERIC Educational Resources Information Center
Wagner, Sigrid, Ed.; And Others
The papers contained in this document were originally presented at the May 1978 conference on Modeling Mathematical Cognitive Development sponsored by the Models of Learning Mathematics Working Group of the Georgia Center for the Study of Learning and Teaching Mathematics. Most have been revised to reflect comments and suggestions made at the…
ERIC Educational Resources Information Center
Lybarger, Scott; Smith, Craig R.
1996-01-01
Reconstructs Lloyd Bitzer's situational model to serve as a guide for the generation of multiperspectival critical assessments of rhetorical discourse. Uses two of President Bush's speeches on the drug crisis to illustrate how the reconstructed model can account for such modern problems as multiple audiences, perceptions, and exigencies. (PA)
Models of technology diffusion
P. A. Geroski
2000-01-01
The literature on new technology diffusion is vast, and it spills over many conventional disciplinary boundaries. This paper surveys the literature by focusing on alternative explanations of the dominant stylized fact: that the usage of new technologies over time typically follows an S-curve. The most commonly found model used to account for this S-curve is the so-called epidemic model.
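The epidemic model's S-curve has a standard textbook form: adoption grows in proportion both to the current number of users and to the remaining non-users, giving logistic growth. A minimal sketch with invented parameters:

```python
import numpy as np

# Logistic (epidemic) diffusion: dy/dt = b * y * (N - y) / N, whose closed
# form is y(t) = N / (1 + ((N - y0)/y0) * exp(-b t)) -- an S-curve with a
# slow start, rapid middle, and saturation at the population size N.

def logistic_adoption(N=100.0, b=1.0, y0=1.0, t=np.linspace(0, 12, 121)):
    return N / (1 + ((N - y0) / y0) * np.exp(-b * t))

y = logistic_adoption()
increments = np.diff(y)            # per-step adoption; peaks when y = N/2
print(round(y[-1], 1))             # -> 99.9, near the saturation level N = 100
```

The "epidemic" label reflects the mechanism usually attached to this curve: non-users adopt at a rate proportional to their contact with existing users.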
Microreview Modelling malaria pathogenesis
Day, Troy
Since the development of models of malaria pathogenesis began, we have moved beyond the 'proof-of-concept' phase. Recent research has begun to iterate theory and data in a much more comprehensive way
motivations modeling social structure
Spirtes, Peter
Outline: 1. Motivations: peptic ulcer disease, bandit problems, social structure. 2. Modeling social structure and transient diversity. Two nineteenth-century theories about peptic ulcer
K. I. Calvert; M. B. Doar; E. W. Zegura
1997-01-01
The topology of a network, or a group of networks such as the Internet, has a strong bearing on many management and performance issues. Good models of the topological structure of a network are essential for developing and analyzing internetworking technology. This article discusses how graph-based models can be used to represent the topology of large networks, particularly aspects of
MCMC Estimation Multilevel Models
Browne, William J.
Estimation is via Gibbs sampling for all parameters, with the deviance statistic computed for use with the DIC diagnostic. The DIC diagnostic for this model suggests that the introduction of random slopes greatly improves the model. The Bayesian Deviance Information Criterion is DIC = Dbar + pD, where Dbar is the posterior mean deviance, D(thetabar) is the deviance at the posterior mean, and pD = Dbar - D(thetabar).
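The DIC quantities named here (Dbar, D(thetabar), pD, DIC) can be computed directly from MCMC output; a sketch with a one-parameter Gaussian mean model and invented synthetic data standing in for a Gibbs sampler's draws:

```python
import numpy as np

# DIC from posterior draws: Dbar is the posterior mean deviance,
# D(thetabar) the deviance at the posterior mean, pD = Dbar - D(thetabar)
# the effective number of parameters, and DIC = Dbar + pD.

rng = np.random.default_rng(2)
data = rng.normal(5.0, 1.0, size=30)

def deviance(theta):
    # -2 * log-likelihood of a N(theta, 1) model, constant terms dropped
    return np.sum((data - theta) ** 2)

# Stand-in for MCMC output: draws from the known posterior N(xbar, 1/n).
draws = rng.normal(data.mean(), 1 / np.sqrt(data.size), size=4000)

dbar = np.mean([deviance(t) for t in draws])   # posterior mean deviance
dhat = deviance(draws.mean())                  # deviance at posterior mean
pd = dbar - dhat                               # effective parameter count
dic = dbar + pd
print(pd)   # typically close to 1.0 for this one-parameter model
```

When comparing two fitted models (e.g. random intercepts vs. random slopes), the one with the smaller DIC is preferred, with pD penalizing the extra effective parameters.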
Slicing MATLAB Simulink models
Robert Reicherdt; Sabine Glesner
2012-01-01
MATLAB Simulink is the most widely used industrial tool for developing complex embedded systems in the automotive sector. The resulting Simulink models often consist of more than ten thousand blocks and a large number of hierarchy levels. To ensure the quality of such models, automated static analyses and slicing are necessary to cope with this complexity. In particular, static analyses
Multilevel Mixture Factor Models
ERIC Educational Resources Information Center
Varriale, Roberta; Vermunt, Jeroen K.
2012-01-01
Factor analysis is a statistical method for describing the associations among sets of observed variables in terms of a small number of underlying continuous latent variables. Various authors have proposed multilevel extensions of the factor model for the analysis of data sets with a hierarchical structure. These Multilevel Factor Models (MFMs)…
Raby, Stuart [Physics Department, Ohio State University, 191 W. Woodruff Ave., Columbus, OH 43210 (United States)
2010-02-10
In this talk I review some recent progress in heterotic and F theory model building. I then consider work in progress attempting to find the F theory dual to a class of heterotic orbifold models which come quite close to the MSSM.
Raby, Stuart
2008-11-23
In this talk I discuss the evolution of SUSY GUT model building as I see it. Starting with 4-dimensional model building, I then consider orbifold GUTs in 5 dimensions and finally orbifold GUTs embedded into the E8 x E8 heterotic string.
Jacob J. Jacobson; Gretchen Matthern
2007-04-01
System Dynamics is a computer-aided approach to evaluating the interrelationships of different components and activities within complex systems. Recently, System Dynamics models have been developed in areas such as policy design, biological and medical modeling, energy and environmental analysis, and in various other areas in the natural and social sciences. The real power of System Dynamics modeling is gaining insight into total system behavior as time and system parameters are adjusted and the effects are visualized in real time. System Dynamics models allow decision makers and stakeholders to explore the long-term behavior and performance of complex systems, especially in the context of dynamic processes and changing scenarios, without having to wait decades to obtain field data or risk failure if a poor management or design approach is used. The Idaho National Laboratory has recently been developing a System Dynamics model of the US nuclear fuel cycle. The model is intended to be used to identify and understand interactions throughout the entire nuclear fuel cycle and suggest sustainable development strategies. This paper describes the basic framework of the current model and presents examples of useful insights gained from the model thus far with respect to sustainable development of nuclear power.
Clemens Lange; for the ATLAS; CMS collaborations
2014-11-26
Recent LHC highlights of searches for Higgs bosons beyond the Standard Model are presented. The results by the ATLAS and CMS collaborations are based on 2011 and 2012 proton-proton collision data at centre-of-mass energies of 7 and 8 TeV, respectively. They test a wide range of theoretical models.
ERIC Educational Resources Information Center
Gabel, Dorothy; And Others
1992-01-01
Chemistry can be described on three levels: sensory, molecular, and symbolic. Proposes a particle approach to teaching chemistry that uses magnets to help students construct molecular models and solve particle problems. Includes examples of Johnstone's model of chemistry phenomena, a problem worksheet, and a student concept mastery sheet. (MDH)
Smith, Noah
Modeling Topics. Kevin Gimpel, December 11, 2006. Abstract: Many applications in machine learning involve text documents. In this review, we discuss techniques that use latent, topical information in text documents, including retrieval, topic tracking, novel event detection, document classification, and language modeling…
Animal models for osteoporosis.
Turner, R T; Maran, A; Lotinun, S; Hefferan, T; Evans, G L; Zhang, M; Sibonga, J D
2001-01-01
Animal models will continue to be important tools in the quest to understand the contribution of specific genes to establishment of peak bone mass and optimal bone architecture, as well as the genetic basis for a predisposition toward accelerated bone loss in the presence of co-morbidity factors such as estrogen deficiency. Existing animal models will continue to be useful for modeling changes in bone metabolism and architecture induced by well-defined local and systemic factors. However, there is a critical unfulfilled need to develop and validate better animal models to allow fruitful investigation of the interaction of the multitude of factors which precipitate senile osteoporosis. Well characterized and validated animal models that can be recommended for investigation of the etiology, prevention and treatment of several forms of osteoporosis have been listed in Table 1. Also listed are models which are provisionally recommended. These latter models have potential but are inadequately characterized, deviate significantly from the human response, require careful choice of strain or age, or are not practical for most investigators to adopt. It cannot be stressed strongly enough that the enormous potential of laboratory animals as models for osteoporosis can only be realized if great care is taken in the choice of an appropriate species, age, experimental design, and measurements. Poor choices will result in misinterpretation of results which ultimately can bring harm to patients who suffer from osteoporosis by delaying advancement of knowledge. PMID:11704974
QUAL2K (or Q2K) is a river and stream water quality model that is intended to represent a modernized version of the QUAL2E (or Q2E) model (Brown and Barnwell 1987). Q2K is similar to Q2E in the following respects: One dimensional. The channel is well-mixed vertically a...
Neal Cardwell; Stefan Savage; Thomas E. Anderson
2000-01-01
Several analytic models describe the steady-state throughput of bulk-transfer TCP flows as a function of round-trip time and packet loss rate. These models describe flows based on the assumption that they are long enough to sustain many packet losses. However, most TCP transfers across today's Internet are short enough to see few, if any, losses and consequently…
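The steady-state models the abstract refers to are typified by the well-known square-root throughput law, throughput ≈ MSS / (RTT · sqrt(2p/3)). A minimal sketch, with illustrative parameter values rather than anything from the paper:

```python
import math

def steady_state_throughput(mss_bytes, rtt_s, loss_rate):
    """Approximate long-flow TCP throughput in bytes/s from the
    square-root formula: MSS / (RTT * sqrt(2p/3))."""
    return mss_bytes / (rtt_s * math.sqrt(2.0 * loss_rate / 3.0))

# Illustrative: a 1460-byte MSS flow with 100 ms RTT and 1% loss.
bw = steady_state_throughput(1460, 0.1, 0.01)
```

As the abstract notes, this kind of formula assumes many losses per flow, which is exactly what short transfers violate.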
COMPUTER PROCESSING AND MODELING -
Nehorai, Arye
Quantitative Analysis of Tumor Burden in Mouse Lung via MRI. Vanessa K. Tidwell, Joel R. Garbow, Alexander S… Measuring lung-tumor burden and monitoring the time-course progression of individual tumors in mouse models includes development of a new, two-dimensional parametric model of the mouse lungs and a multi-faceted cost…
ERIC Educational Resources Information Center
Weinburgh, Molly; Silva, Cecilia
2011-01-01
For the past five summers, the authors have taught summer school to recent immigrants and refugees. Their experiences with these fourth-grade English language learners (ELL) have taught them the value of using models to build scientific and mathematical concepts. In this article, they describe the use of different forms of 2- and 3-D models to…
Preliminary semiempirical transport models
Singer, C.E.
1983-11-01
A class of semiempirical transport models is proposed for testing against confinement data from tokamaks and for use in operations planning and machine design. A reference model is proposed to be compatible with published confinement data. Theoretical considerations are used to express the anomalous transport coefficients in terms of appropriate dimensionless parameters.
ERIC Educational Resources Information Center
Germann, Paul J.
1992-01-01
Describes a seventh-grade science process skills test entitled, "In Hot Water." Students respond in writing to a hypothetical problem and then are provided with a model response. Students apply this model response in their performance of subsequent skills. The assessment can be used for individual or group work. (PR)
NSDL National Science Digital Library
Twin Cities Public Television, Inc.
2007-01-01
In this quick activity (page 2 of PDF), learners will model how large depressions near the top of a volcano are formed by using an inflating and deflating balloon submerged in flour. The model illustrates how volcanic ground swells and collapses as pressure builds and drains from a magma reservoir. Relates to the linked video, DragonflyTV GPS: Lava Flow.
This magnetic tape contains the FORTRAN source code, sample input data, and sample output data for the Photochemical Box Model (PBM). The PBM is a simple stationary single-cell model with a variable height lid designed to provide volume-integrated hour averages of O3 and other ph...
G. Arthur Mihram
1972-01-01
Considerable interest currently exists in the application of the systems approach to the solution of societal, political, and environmental problems. The essence of this systems approach is modeling, the capability to describe large-scale complicated interactive systems by symbolic representations so that inferences regarding the effects of alternative system configurations can be easily and rapidly structured. The modeling process is itself
Unitary Response Regression Models
ERIC Educational Resources Information Center
Lipovetsky, S.
2007-01-01
The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…
N. P. Erpylev; M. A. Smirnov; A. V. Bagrov
1987-01-01
A night sky model is proposed. It includes different components of light pollution, such as solar twilight, moon-scattered light, zodiacal light, the Milky Way, airglow and artificial light pollution. The model is designed for calculating the efficiency of astronomical installations.
The spatiotemporal epidemiological modeler
Stefan B. Edlund; Matthew A. Davis; James H. Kaufman
2010-01-01
In this paper, we give an overview of the Spatiotemporal Epidemiological Model (STEM), an open source disease modeling application available for free under the Eclipse Public License. We explain why applications such as STEM can benefit from being open and available to the general research community, and describe the design and architecture of STEM, highlighting some of STEM's more important
ERIC Educational Resources Information Center
Ifenthaler, Dirk; Seel, Norbert M.
2013-01-01
In this paper, there will be a particular focus on mental models and their application to inductive reasoning within the realm of instruction. A basic assumption of this study is the observation that the construction of mental models and related reasoning is a slowly developing capability of cognitive systems that emerges effectively with proper…
ERIC Educational Resources Information Center
Parks, Melissa
2014-01-01
Model-eliciting activities (MEAs) are not new to those in engineering or mathematics, but they were new to Melissa Parks. Model-eliciting activities are simulated real-world problems that integrate engineering, mathematical, and scientific thinking as students find solutions for specific scenarios. During this process, students generate solutions…
Edward L. Wright
2001-06-22
Models of the zodiacal light are necessary to convert measured data taken from low Earth orbit into the radiation field outside the solar system. The uncertainty in these models dominates the overall uncertainty in determining the extragalactic background light for wavelengths < 100 microns.
Model Cities Training Program.
ERIC Educational Resources Information Center
Tennessee Univ., Chattanooga.
The Model Cities Training Program, the first in the country, is a 10-session course to be conducted in seminar form under the direction of the University of Tennessee at Chattanooga. The objective is to enable the 50 members of the Community Development Administration Board of Directors to: acquire knowledge of the structure of the Model Cities…
Technology Transfer Automated Retrieval System (TEKTRAN)
Pigeonpea (Cajanus cajan (L.) Millsp.) is a widely grown legume in tropical and subtropical areas. A crop simulation model that can assist in farmer decision-making was developed. The phenological module is one of the major elements of the crop model because accurate prediction of the timing of gr...
Animal models for osteoporosis
NASA Technical Reports Server (NTRS)
Turner, R. T.; Maran, A.; Lotinun, S.; Hefferan, T.; Evans, G. L.; Zhang, M.; Sibonga, J. D.
2001-01-01
Animal models will continue to be important tools in the quest to understand the contribution of specific genes to establishment of peak bone mass and optimal bone architecture, as well as the genetic basis for a predisposition toward accelerated bone loss in the presence of co-morbidity factors such as estrogen deficiency. Existing animal models will continue to be useful for modeling changes in bone metabolism and architecture induced by well-defined local and systemic factors. However, there is a critical unfulfilled need to develop and validate better animal models to allow fruitful investigation of the interaction of the multitude of factors which precipitate senile osteoporosis. Well characterized and validated animal models that can be recommended for investigation of the etiology, prevention and treatment of several forms of osteoporosis have been listed in Table 1. Also listed are models which are provisionally recommended. These latter models have potential but are inadequately characterized, deviate significantly from the human response, require careful choice of strain or age, or are not practical for most investigators to adopt. It cannot be stressed strongly enough that the enormous potential of laboratory animals as models for osteoporosis can only be realized if great care is taken in the choice of an appropriate species, age, experimental design, and measurements. Poor choices will result in misinterpretation of results which ultimately can bring harm to patients who suffer from osteoporosis by delaying advancement of knowledge.
ERIC Educational Resources Information Center
Eichinger, John
2005-01-01
Models are crucial to science teaching and learning, yet they can create unforeseen and overlooked challenges for students and teachers. For example, consider the time-tested clay volcano that relies on a vinegar and-baking-soda mixture for its "eruption." Based on a classroom demonstration of that geologic model, elementary students may interpret…
Storm Water Management Model (SWMM) is a comprehensive model for analysis of quantity and quality problems associated with urban runoff. Both single-event and continuous simulation may be performed on catchments having storm sewers, combined sewers, and natural drainage, for pred...
ERIC Educational Resources Information Center
Samberg, Laura, Comp.; Sheeran, Melyssa, Comp.
This booklet contains profiles of 26 community-school models. Although communities and states approach the development of community schools in various ways, all the models presented here reflect the shared vision of the Coalition for Community Schools, that is, a set of partnerships to establish a place where services, support, and opportunity…
Composite Load Model Evaluation
Lu, Ning; Qiao, Hong (Amy)
2007-09-30
The WECC load modeling task force has dedicated its effort in the past few years to developing a composite load model that can represent the behaviors of different end-user components. The modeling structure of the composite load model is recommended by the WECC load modeling task force. GE Energy has implemented this composite load model with a new function, CMPLDW, in its power system simulation software package, PSLF. For the last several years, Bonneville Power Administration (BPA) has taken the lead and collaborated with GE Energy to develop the new composite load model. Pacific Northwest National Laboratory (PNNL) and BPA joined forces to conduct the evaluation of the CMPLDW and test its parameter settings to make sure that: • the model initializes properly, • all the parameter settings are functioning, and • the simulation results are as expected. The PNNL effort focused on testing the CMPLDW in a 4-bus system. Exhaustive testing of each parameter setting was performed to guarantee that each setting works. This report is a summary of the PNNL testing results and conclusions.
COMMUTER EXPOSURE MODELING METHODOLOGIES
Two methodologies for modeling commuter exposures are proposed: a computer-oriented approach and a manual approach. Both modeling methodologies require that major commuter routes, or pathways, be identified and that the traffic on the remainder of the roadway network be treated as ...
ERIC Educational Resources Information Center
Journal of Science and Mathematics Education in Southeast Asia, 1981
1981-01-01
Instructions (with diagrams and parts list) are provided for constructing an eye model with a pliable lens made from a plastic bottle which can vary its convexity to accommodate changing positions of an object being viewed. Also discusses concepts which the model can assist in developing. (Author/SK)
Vector Electric Geometry Model
Wang Ju-feng; LiuYun; LiBin
2008-01-01
The classical electric geometry model relates the lightning flashover characteristics to the structural dimensions of the line and explains the shielding-failure phenomenon well. However, the classical electric geometry model does not consider the influence of the electric field intensity of the lightning leader; at the moment the striking arc forms, the line may still experience shielding failure…
Pathological Gambling: Psychiatric Models
ERIC Educational Resources Information Center
Westphal, James R.
2008-01-01
Three psychiatric conceptual models: addictive, obsessive-compulsive spectrum and mood spectrum disorder have been proposed for pathological gambling. The objectives of this paper are to (1) evaluate the evidence base from the most recent reviews of each model, (2) update the evidence through 2007 and (3) summarize the status of the evidence for…
Computational Modeling of Tires
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (compiler); Tanner, John A. (compiler)
1995-01-01
This document contains presentations and discussions from the joint UVA/NASA Workshop on Computational Modeling of Tires. The workshop attendees represented NASA, the Army and Air Force, tire companies, commercial software developers, and academia. The workshop objectives were to assess the state of technology in the computational modeling of tires and to provide guidelines for future research.
Mathematical models of hysteresis
NONE
1998-08-01
The ongoing research has largely been focused on the development of mathematical models of hysteretic nonlinearities with nonlocal memories. The distinct feature of these nonlinearities is that their current states depend on past histories of input variations. It turns out that memories of hysteretic nonlinearities are quite selective. Indeed, experiments show that only some past input extrema (not the entire input variations) leave their marks upon future states of hysteretic nonlinearities. Thus special mathematical tools are needed in order to describe nonlocal selective memories of hysteretic nonlinearities. The origin of such tools can be traced back to the landmark paper of Preisach. The research has been primarily concerned with Preisach-type models of hysteresis. All these models have a common generic feature; they are constructed as superpositions of the simplest hysteretic nonlinearities: rectangular loops. During the past four years, the study has been by and large centered around the following topics: (1) further development of scalar and vector Preisach-type models of hysteresis; (2) experimental testing of Preisach-type models of hysteresis; (3) development of new models for viscosity (aftereffect) in hysteretic systems; (4) development of mathematical models for superconducting hysteresis in the case of gradual resistive transitions; (5) software implementation of Preisach-type models of hysteresis; and (6) development of new ideas which have emerged in the course of the research work. The author briefly describes the main scientific results obtained in the areas outlined above.
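The generic construction mentioned above, a weighted superposition of rectangular-loop (relay) operators, can be sketched directly. The uniform relay weights below are illustrative; a real Preisach model identifies the weight function from measured loops:

```python
import numpy as np

class PreisachModel:
    """Minimal scalar Preisach sketch: the output is a weighted sum of
    rectangular-loop (relay) operators, each with a switch-up threshold
    `up` and a switch-down threshold `down` (up >= down). Weights are
    uniform here, purely for illustration."""

    def __init__(self, n=50, lo=-1.0, hi=1.0):
        a, b = np.meshgrid(np.linspace(lo, hi, n), np.linspace(lo, hi, n))
        mask = a >= b                          # admissible half-plane of relays
        self.up, self.down = a[mask], b[mask]
        self.state = -np.ones(self.up.size)    # all relays start 'off'

    def apply(self, u):
        self.state[u >= self.up] = 1.0         # input exceeds switch-up threshold
        self.state[u <= self.down] = -1.0      # input below switch-down threshold
        return self.state.mean()               # uniform-weight superposition

m = PreisachModel()
out_up = [m.apply(u) for u in np.linspace(-1, 1, 21)]    # ascending branch
out_down = [m.apply(u) for u in np.linspace(1, -1, 21)]  # descending branch
```

Sweeping the input up and then down traces two distinct branches: at the same intermediate input, the output depends on the input's past extrema, which is exactly the nonlocal selective memory the abstract describes.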
ERIC Educational Resources Information Center
New Mexico Univ., Albuquerque. American Indian Law Center.
The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…
Complexity regularized hydrological model selection
NASA Astrophysics Data System (ADS)
Arkesteijn, Liselot; Pande, Saket; Savenije, Hubert
2014-05-01
Ill-posed hydrological model selection problems (that may be unstable or have non-unique solutions) are regularized with hydrological model complexity as the stabilizer. We propose and apply a notion of model complexity, based on Vapnik-Chervonenkis generalization theory, to complexity regularized hydrologic model selection. Better hydrologic models (better performance on future unseen data) on small sample sizes are identified using complexity regularized model selection than when using traditional model selection (without regularization) while both converge in performance for large samples (i.e. regularized model selection is 'consistent'). Case studies using SAC-SMA, SIXPAR and flexible model structures are used to 1) compute and compare model complexities of different model structures, 2) demonstrate the 'consistency' of complexity regularized model selection and 3) demonstrate that regularized model selection identifies the best model structure (out of a set of competing structures) on small sample sizes better than un-regularized model selection.
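The regularized-selection idea can be illustrated with a toy criterion that penalizes training error with a complexity term. Here the parameter count stands in for the Vapnik-Chervonenkis-based complexity measure used in the paper, and the penalty weight is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small-sample toy data: a noisy sinusoid, standing in for hydrologic
# calibration data. All names and constants here are illustrative.
x = np.linspace(0, 1, 15)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.size)

def fit_rss(degree):
    """Training residual sum of squares for a polynomial 'model structure'."""
    coef = np.polyfit(x, y, degree)
    return np.sum((np.polyval(coef, x) - y) ** 2)

lam = 0.5  # strength of the complexity penalty (arbitrary)
scores = {d: fit_rss(d) + lam * (d + 1) for d in range(1, 10)}
best = min(scores, key=scores.get)  # complexity-regularized selection
```

Unregularized selection on training error alone would always pick the most complex structure; the penalty steers the choice toward a simpler model that generalizes better on small samples, mirroring the paper's argument.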
Pupo, Amaury; Baez-Nieto, David; Martínez, Agustín; Latorre, Ramón; González, Carlos
2014-01-01
Voltage-gated proton channels are integral membrane proteins with the capacity to permeate elementary particles in a voltage and pH dependent manner. These proteins have been found in several species and are involved in various physiological processes. Although their primary topology is known, lack of details regarding their structures in the open conformation has limited analyses toward a deeper understanding of the molecular determinants of their function and regulation. Consequently, the function-structure relationships have been inferred based on homology models. In the present work, we review the existing proton channel models, their assumptions, predictions and the experimental facts that support them. Modeling proton channels is not a trivial task due to the lack of a close homolog template. Hence, there are important differences between published models. This work attempts to critically review existing proton channel models toward the aim of contributing to a better understanding of the structural features of these proteins. PMID:24755912
NASA Astrophysics Data System (ADS)
Charpentier, Arthur; Durand, Marilou
2015-07-01
In this paper, we investigate questions arising in Parsons and Geist (Bull Seismol Soc Am 102:1-11, 2012). Pseudo-causal models connecting magnitudes and waiting times are considered, through generalized regression. We use conditional models (magnitude given previous waiting time, and conversely) as an extension to the joint distribution model described in Nikoloulopoulos and Karlis (Environmetrics 19:251-269, 2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time following the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, where parameters are functions of the magnitude of the previous earthquake. We use those two models, alternately, to generate the dynamics of earthquake occurrence, and to estimate the probability of occurrence of several earthquakes within a year or a decade.
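The alternating conditional scheme can be sketched as follows. The link functions and constants are illustrative stand-ins, not the fitted Pareto/Gamma parameterizations of the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(n_events, m0=3.0):
    """Alternate the two conditional models: draw a Pareto magnitude whose
    tail index depends on the preceding waiting time, then a Gamma waiting
    time whose scale depends on the preceding magnitude. Functional forms
    and constants are illustrative, not fitted."""
    mags, waits = [], [1.0]                        # seed waiting time (arbitrary)
    for _ in range(n_events):
        alpha = 1.5 + 0.1 * np.log1p(waits[-1])    # quieter periods -> lighter tail
        mags.append(m0 * (1 - rng.random()) ** (-1 / alpha))  # Pareto(m0, alpha)
        scale = np.exp(0.5 * (mags[-1] - m0))      # larger shocks -> longer waits
        waits.append(rng.gamma(shape=2.0, scale=scale))
    return np.array(mags), np.array(waits[1:])

mags, waits = simulate(1000)
```

Repeating such simulations many times is how the paper-style estimate of the probability of several events within a year or decade would be formed.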
LDEF environment modeling updates
Gordon, T.; Rantanen, R.; Whitaker, A.F.
1995-02-01
An updated gas dynamics model for gas interactions around the LDEF is presented that includes improved scattering algorithms. The primary improvement is more accurate predictions of surface fluxes in the wake region. The code used is the Integrated Spacecraft Environments Model (ISEM). Additionally, initial results of a detailed ISEM prediction model of the Solar Array Passive LDEF Experiment (SAMPLE), A0171, are presented. This model includes details of the A0171 geometry and outgassing characteristics of the many surfaces on the experiment. The detailed model includes the multiple scattering that exists between the ambient atmosphere, LDEF outgassing, and atomic oxygen erosion products. Predictions are made for gas densities, surface fluxes and deposition at three different time periods of the LDEF mission.
Qiong-Tao Xie; Shuai Cui; Jun-Peng Cao; Luigi Amico; Heng Fan
2014-05-20
We define the anisotropic Rabi model as the generalization of the spin-boson Rabi model: The Hamiltonian system breaks the parity symmetry; the rotating and counter-rotating interactions are governed by two different coupling constants; a further parameter introduces a phase factor in the counter-rotating terms. The exact energy spectrum and eigenstates of the generalized model are worked out. The solution is obtained as an elaboration of a recently proposed method for the isotropic limit of the model. In this way, we provide a long-sought solution of a cascade of models with immediate relevance in different physical fields, including i) quantum optics: two-level atom in single-mode cross electric and magnetic fields; ii) solid state physics: electrons in semiconductors with Rashba and Dresselhaus spin-orbit coupling; iii) mesoscopic physics: Josephson junctions flux-qubit quantum circuits.
NASA Astrophysics Data System (ADS)
Xie, Qiong-Tao; Cui, Shuai; Cao, Jun-Peng; Amico, Luigi; Fan, Heng
2014-04-01
We define the anisotropic Rabi model as the generalization of the spin-boson Rabi model: The Hamiltonian system breaks the parity symmetry; the rotating and counterrotating interactions are governed by two different coupling constants; a further parameter introduces a phase factor in the counterrotating terms. The exact energy spectrum and eigenstates of the generalized model are worked out. The solution is obtained as an elaboration of a recently proposed method for the isotropic limit of the model. In this way, we provide a long-sought solution of a cascade of models with immediate relevance in different physical fields, including (i) quantum optics, a two-level atom in single-mode cross-electric and magnetic fields; (ii) solid-state physics, electrons in semiconductors with Rashba and Dresselhaus spin-orbit coupling; and (iii) mesoscopic physics, Josephson-junction flux-qubit quantum circuits.
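The anisotropic Rabi Hamiltonian described above is also easy to diagonalize numerically in a truncated Fock basis, a useful cross-check on the exact solution. The coupling values below are illustrative, and the phase factor of the counterrotating term is set to zero:

```python
import numpy as np

def anisotropic_rabi_spectrum(omega=1.0, delta=0.8, g_rot=0.2, g_cr=0.1, ncut=40):
    """Eigenvalues of H = omega*a†a + (delta/2)*sigma_z
    + g_rot*(a sigma_+ + a† sigma_-) + g_cr*(a† sigma_+ + a sigma_-)
    in a Fock basis truncated at ncut photons. g_rot = g_cr recovers the
    isotropic Rabi model; g_cr = 0 is the Jaynes-Cummings limit."""
    a = np.diag(np.sqrt(np.arange(1.0, ncut)), 1)    # boson annihilation operator
    sz = np.diag([1.0, -1.0])
    sp = np.array([[0.0, 1.0], [0.0, 0.0]])          # sigma_+ raises |down> to |up>
    sm = sp.T
    H = (omega * np.kron(a.T @ a, np.eye(2))
         + 0.5 * delta * np.kron(np.eye(ncut), sz)
         + g_rot * (np.kron(a, sp) + np.kron(a.T, sm))    # rotating terms
         + g_cr * (np.kron(a.T, sp) + np.kron(a, sm)))    # counterrotating terms
    return np.linalg.eigvalsh(H)                     # ascending eigenvalues

levels = anisotropic_rabi_spectrum()
```

With both couplings switched off the spectrum reduces to n·omega ± delta/2, and turning on the counterrotating coupling pushes the ground level below -delta/2, as second-order perturbation theory predicts.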
NASA Astrophysics Data System (ADS)
Boynton, R. J.; Balikhin, M. A.; Billings, S. A.; Wei, H.
2010-12-01
A mathematical model for the dynamics of the Dst index has been identified using the NARMAX OLS-ERR methodology. The NARMAX OLS-ERR algorithm, which is widely used in the field of system identification, is able to identify a mathematical model of any system using input and output data sets. The model uses a solar wind-magnetosphere coupling function, obtained using the error reduction ratio (ERR) in the NARMAX OLS-ERR procedure, as the input and the Dst index as the output. The frequency response was then calculated from the model and analyzed to determine the details about the nonlinearities involved in the evolution of the Dst index. The data deduced model was also used to forecast the Dst index.
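A minimal NARX flavor of this idea can be sketched with ordinary least squares on a fixed set of lagged polynomial terms. The full NARMAX OLS-ERR procedure additionally selects terms by their error reduction ratio, which is omitted here, and the synthetic input/output below merely stands in for the solar-wind coupling function and the Dst index:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic system standing in for solar-wind input u and Dst output y.
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 0.7 * y[k-1] - 0.2 * y[k-2] + 0.5 * u[k-1] + 0.1 * u[k-1] ** 2

# Regression matrix of lagged and polynomial terms (a fixed, hand-picked
# term set; OLS-ERR would rank and select these terms automatically).
X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[1:-1] ** 2])
theta, *_ = np.linalg.lstsq(X, y[2:], rcond=None)   # OLS parameter estimates
```

On this noiseless toy system the least-squares estimates recover the true coefficients exactly, which is the sense in which input/output data alone identify the model.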
J.M. Scaglione
2003-03-12
The purpose of the ''Criticality Model Report'' is to validate the MCNP (CRWMS M&O 1998h) code's ability to accurately predict the effective neutron multiplication factor (k{sub eff}) for a range of conditions spanned by various critical configurations representative of the potential configurations commercial reactor assemblies stored in a waste package may take. Results of this work are an indication of the accuracy of MCNP for calculating eigenvalues, which will be used as input for criticality analyses for spent nuclear fuel (SNF) storage at the proposed Monitored Geologic Repository. The scope of this report is to document the development and validation of the criticality model. The scope of the criticality model is only applicable to commercial pressurized water reactor fuel. Valid ranges are established as part of the validation of the criticality model. This model activity follows the description in BSC (2002a).
Use dispersion modeling update
Freiman, J.P.; Hill, J.
1992-08-01
This paper discusses EPA's long-awaited update to the Industrial Source Complex (ISC) dispersion models, which provides computer software to comply with National Ambient Air Quality Standards. Moreover, the ISC2 models' Fortran codes are available from EPA at no cost, in a form compatible with desktop computers. This is a plus for hydrocarbon processing industry (HPI) environmental control professionals. ISC2 will be used for all future regulatory applications where dispersion modeling is required for facilities in simple terrain. Process engineers sometimes use ISC models and are often called upon to assist in developing the emissions estimates that the program uses to calculate air quality impacts. The model challenges users because it can represent a variety of configurations for emissions sources. Title III of the Clean Air Act Amendments is an entirely new section dealing with air toxics such as those in the HPI. EPA is required to develop a list of maximum achievable control technologies (MACT) for these compounds.
Stratiform chromite deposit model
Schulte, Ruth F.; Taylor, Ryan D.; Piatak, Nadine M.; Seal, Robert R., II
2010-01-01
Stratiform chromite deposits are of great economic importance, yet their origin and evolution remain highly debated. Layered igneous intrusions such as the Bushveld, Great Dyke, Kemi, and Stillwater Complexes, provide opportunities for studying magmatic differentiation processes and assimilation within the crust, as well as related ore-deposit formation. Chromite-rich seams within layered intrusions host the majority of the world's chromium reserves and may contain significant platinum-group-element (PGE) mineralization. This model of stratiform chromite deposits is part of an effort by the U.S. Geological Survey's Mineral Resources Program to update existing models and develop new descriptive mineral deposit models to supplement previously published models for use in mineral-resource and mineral-environmental assessments. The model focuses on features that may be common to all stratiform chromite deposits as a way to gain insight into the processes that gave rise to their emplacement and to the significant economic resources contained in them.
NASA Technical Reports Server (NTRS)
North, G. R.; Crowley, T. J.
1984-01-01
Mathematical climate modelling has matured as a discipline to the point that it is useful in paleoclimatology. As an example a new two dimensional energy balance model is described and applied to several problems of current interest. The model includes the seasonal cycle and the detailed land-sea geographical distribution. By examining the changes in the seasonal cycle when external perturbations are forced upon the climate system it is possible to construct hypotheses about the origin of midlatitude ice sheets and polar ice caps. In particular the model predicts a rather sudden potential for glaciation over large areas when the Earth's orbital elements are only slightly altered. Similarly, the drift of continents or the change of atmospheric carbon dioxide over geological time induces radical changes in continental ice cover. With the advance of computer technology and improved understanding of the individual components of the climate system, these ideas will be tested in far more realistic models in the near future.
Whole powder pattern modelling.
Scardi, P; Leoni, M
2002-03-01
A new approach for the modelling of diffraction patterns without using analytical profile functions is described and tested on ball-milled f.c.c. Ni powder samples. The proposed whole powder pattern modelling (WPPM) procedure allows a one-step refinement of microstructure parameters by a direct modelling of the experimental pattern. Lattice parameter and defect content (expressed as dislocation density, outer cut-off radius, contrast factor, and twin and deformation fault probabilities) can be refined together with the parameters (mean and variance) of a grain-size distribution. Different models for lattice distortions and domain size and shape can be tested to simulate or model diffraction data for systems as different as plastically deformed metals or finely dispersed crystalline powders. TEM pictures support the conclusions obtained by WPPM and confirm the validity of the proposed procedure. PMID:11832590
V. Chipman
2002-10-31
The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions as outputted from the Ventilation Model to initialize their postclosure analyses.
NASA Technical Reports Server (NTRS)
Badler, N. I.; Lee, P.; Wong, S.
1985-01-01
Strength modeling is a complex and multi-dimensional issue. There are numerous parameters to the problem of characterizing human strength, most notably: (1) position and orientation of body joints; (2) isometric versus dynamic strength; (3) effector force versus joint torque; (4) instantaneous versus steady force; (5) active force versus reactive force; (6) presence or absence of gravity; (7) body somatotype and composition; (8) body (segment) masses; (9) muscle group involvement; (10) muscle size; (11) fatigue; and (12) practice (training) or familiarity. In surveying the available literature on strength measurement and modeling, an attempt was made to examine as many of these parameters as possible. The conclusions reached point toward the feasibility of implementing computationally reasonable human strength models. The assessment of accuracy of any model against a specific individual, however, will probably not be possible on any realistic scale. Taken statistically, strength modeling may be an effective tool for general questions of task feasibility and strength requirements.
NASA Astrophysics Data System (ADS)
Sukhanovskii, Yu. P.
2010-09-01
A model describing rainfall erosion over the course of a long time period is proposed. The model includes: (1) a new equation for the detachment of soil particles by water flows based on the Mirtskhulava equation; (2) a new equation for the transport capacity of the flow based on a modified Bagnold equation, which is used in the AGNPS model; (3) a modified SCS runoff equation; (4) probability distributions for rainfall. The proposed equations agree satisfactorily with the data of on-site observations of the Moldova and Nizhnedevitsk water-balance stations. The Monte Carlo method is used for numerical modeling of random variables. The results of modeling agree satisfactorily with empirical equations developed for conditions in Russia and the United States. The effect of climatic conditions on the dependence of long-term average annual soil loss on various factors is analyzed. A minimum of information is needed to assign the initial data.
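The Monte Carlo structure of such a model can be sketched as below: rainfall is drawn from an assumed probability distribution and per-event soil loss is the lesser of detachment and transport capacity. The power-law coefficients and the exponential rainfall distribution are placeholders for illustration, not the Mirtskhulava or Bagnold parameters.

```python
import random

def event_soil_loss(rain_mm, k_detach=0.05, k_transport=0.8):
    detachment = k_detach * rain_mm ** 1.5      # supply of detached particles
    transport_capacity = k_transport * rain_mm  # what the runoff can carry
    return min(detachment, transport_capacity)  # supply- or transport-limited

def mean_annual_loss(n_years=10_000, events_per_year=20, mean_rain_mm=12.0, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_years * events_per_year):
        total += event_soil_loss(rng.expovariate(1.0 / mean_rain_mm))
    return total / n_years

loss = mean_annual_loss()
```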
NSDL National Science Digital Library
Noelle Selin
The problem set asks students to complete and discuss interactive, policy-relevant models from two case studies of models and policy that have been previously discussed in class: fisheries and chemicals. The first question asks the students to play the online Fishbanks game, which uses a model interface to illustrate management strategies for fisheries. The students are then asked to think about how the model might be used in decision-making contexts. The second question asks students to consider the case of a chemical under review in the context of the Stockholm Convention on Persistent Organic Pollutants, and to assess its overall persistence and long-range transport using an Excel-based modeling tool.
Jones, Katherine A.; Finley, Patrick D.; Moore, Thomas W.; Nozick, Linda Karen; Martin, Nathaniel; Bandlow, Alisa; Detry, Richard Joseph; Evans, Leland B.; Berger, Taylor Eugen
2013-09-01
Infectious diseases can spread rapidly through healthcare facilities, resulting in widespread illness among vulnerable patients. Computational models of disease spread are useful for evaluating mitigation strategies under different scenarios. This report describes two infectious disease models built for the US Department of Veterans Affairs (VA), motivated by a varicella outbreak in a VA facility. The first model simulates disease spread within a notional contact network representing staff and patients. Several interventions, along with initial infection counts and intervention delay, were evaluated for effectiveness at preventing disease spread. The second model adds staff categories, location, scheduling, and variable contact rates to improve resolution. This model achieved more accurate infection counts and enabled a more rigorous evaluation of the comparative effectiveness of interventions.
Newville, Matthew
The XAFS Model Compound Library contains XAFS data on model compounds. The term "model compounds" refers to compounds of homogeneous and well-known crystallographic or molecular structure. Each data file in this library has an associated atoms.inp file that can be converted to a feff.inp file using the program ATOMS. (See the related Searchable Atoms.inp Archive at http://cars9.uchicago.edu/~newville/adb/) This Library exists because XAFS data on model compounds is useful for several reasons, including comparing to unknown data for "fingerprinting" and testing calculations and analysis methods. The collection here is currently limited, but is growing. The focus to date has been on inorganic compounds and minerals of interest to the geochemical community. [Copied, with editing, from http://cars9.uchicago.edu/~newville/ModelLib/]
Earth General Circulation Models
NASA Astrophysics Data System (ADS)
Dowling, T. E.
The development of Earth general circulation models (GCMs) is rapidly evolving on all fronts, with today's nonhydrostatic and global cloud resolving models (GCRMs) making an impact on par with the original introduction of GCMs in the 1960s. Here we take a look at the organizational structure of these models, including the dynamical core and physics layers, the latest horizontal and vertical grids, standardized frameworks for software components, data and metadata, and the manner in which the international scientific community systematically compares climate models. Data assimilation and the quantification of forecast skill are two well-developed Earth-GCM concepts that are beginning to see utility in planetary science. Also discussed are the philosophical questions that arise when an analysis, an optimal blend of data and model, is used in place of a pure dataset for scientific work. Fully unstructured, dynamically adapting grids optimized for stratified fluids are predicted to be the preferred framework for GCMs in 25 years.
Peskin, M.E.
1997-05-01
These lectures constitute a short course in "Beyond the Standard Model" for students of experimental particle physics. The author discusses the general ideas which guide the construction of models of physics beyond the Standard Model. The central principle, the one which most directly motivates the search for new physics, is the search for the mechanism of the spontaneous symmetry breaking observed in the theory of weak interactions. To illustrate models of weak-interaction symmetry breaking, the author gives a detailed discussion of the idea of supersymmetry and that of new strong interactions at the TeV energy scale. He discusses experiments that will probe the details of these models at future pp and e+e- colliders.
Maximally Expressive Task Modeling
NASA Technical Reports Server (NTRS)
Japp, John; Davis, Elizabeth; Maxwell, Theresa G. (Technical Monitor)
2002-01-01
Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiment activities for the Space Station. The equipment used in these experiments is some of the most complex hardware ever developed by mankind, the information sought by these experiments is at the cutting edge of scientific endeavor, and the procedures for executing the experiments are intricate and exacting. Scheduling is made more difficult by a scarcity of space station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling space station experiment operations calls for a "maximally expressive" modeling schema. Modeling even the simplest of activities cannot be automated; no sensor can be attached to a piece of equipment that can discern how to use that piece of equipment; no camera can quantify how to operate a piece of equipment. Modeling is a human enterprise, both an art and a science. The modeling schema should allow the models to flow from the keyboard of the user as easily as works of literature flowed from the pen of Shakespeare. The Ground Systems Department at the Marshall Space Flight Center has embarked on an effort to develop a new scheduling engine that is highlighted by a maximally expressive modeling schema. This schema, presented in this paper, is a synergy of technological advances and domain-specific innovations.
Spiral model pilot project information model
NASA Technical Reports Server (NTRS)
1991-01-01
The objective was an evaluation of the Spiral Model (SM) development approach to allow NASA Marshall to develop an experience base of that software management methodology. A discussion is presented of the Information Model (IM) that was used as part of the SM methodology. A key concept of the SM is the establishment of an IM to be used by management to track the progress of a project. The IM is the set of metrics that is to be measured and reported throughout the life of the project. These metrics measure both the product and the process to ensure the quality of the final delivery item and to ensure the project met programmatic guidelines. The beauty of the SM, along with the IM, is the ability to measure not only the correctness of the specification and implementation of the requirements but to also obtain a measure of customer satisfaction.
Meshfree magnetotelluric modelling
NASA Astrophysics Data System (ADS)
Wittke, J.; Tezkan, B.
2014-08-01
We present a new approach for 2-D magnetotelluric forward numerical modelling in contrast to traditional numerical methods like finite elements or finite differences. The method used for solving the partial differential equations is based on a mesh-free technique which does not need an underlying mesh or grid. We use the Meshless Local Petrov-Galerkin (MLPG) method in combination with radial basis functions to simulate the response of a given conductivity model to a plane-wave source. We compare the mesh-free solution with known simulation programs and simple analytical solutions. Furthermore, we discuss the new magnetotelluric modelling method in terms of implementation and stability. First, we study the convergence and discretization errors of the new method with a simple half-space conductivity model. Then we compare our mesh-free simulation results for simple 2-D conductivity models with the results of a well-known finite element program. Finally, we present a smooth conductivity model calculated with the mesh-free approach. The modelling results, even with randomly distributed nodes, are in good agreement with those obtained by the finite element method.
Biophysical models in hadrontherapy
NASA Astrophysics Data System (ADS)
Scholz, M.; Elsaesser, T.
One major rationale for the application of ion beams in tumor therapy is their increased relative biological effectiveness (RBE) in the Bragg peak region. For dose prescription, the increased effectiveness has to be taken into account in treatment planning. Hence, the complex dependencies of RBE on the dose level, biological endpoint, position in the field, etc. require biophysical models, which have to fulfill two important criteria: simplicity and quantitative precision. Simplicity means that the number of free parameters should be kept at a minimum. Due to the lack of precise quantitative data, at least at present, this requirement is incompatible with approaches aiming at the molecular modeling of the whole chain of production, processing, and repair of biological damage. Quantitative precision is required since steep gradients in the dose-response curves are observed for most tumor and normal tissues; thus even small uncertainties in the estimation of the biologically effective dose can transform into large uncertainties in the clinical outcome. The paper will give a general introduction into the field, followed by a brief description of a specific model, the so-called Local Effect Model (LEM). This model has been successfully applied within treatment planning in the GSI pilot project for carbon ion tumor therapy for almost 10 years now. The model is based on the knowledge of charged particle track structure in combination with the response of the biological objects to conventional photon radiation. The model will be critically discussed with respect to other approaches.
NASA Astrophysics Data System (ADS)
Smirnova, Olga
Biologically motivated mathematical models, which describe the dynamics of the major hematopoietic lineages (the thrombocytopoietic, lymphocytopoietic, granulocytopoietic, and erythropoietic systems) in acutely/chronically irradiated humans, are developed. These models are implemented as systems of nonlinear differential equations, whose variables and constant parameters have clear biological meaning. It is shown that the developed models are capable of reproducing clinical data on the dynamics of these systems in humans exposed to acute radiation as a result of incidents and accidents, as well as in humans exposed to low-level chronic radiation. Moreover, the averaged value of the "lethal" dose rates of chronic irradiation evaluated within the models of these four major hematopoietic lineages coincides with the real minimal dose rate of lethal chronic irradiation. The demonstrated ability of the models of the human thrombocytopoietic, lymphocytopoietic, granulocytopoietic, and erythropoietic systems to predict the dynamical response of these systems to acute/chronic irradiation over wide ranges of doses and dose rates implies that these mathematical models form a universal tool for the investigation and prediction of the dynamics of the major human hematopoietic lineages for a wide variety of irradiation scenarios. In particular, these models could be applied to radiation risk assessment for the health of astronauts exposed to space radiation during long-term space missions, such as voyages to Mars or lunar colonies, as well as for the health of people exposed to acute/chronic irradiation due to environmental radiological events.
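A minimal one-lineage sketch of the kind of nonlinear ODE dynamics described above is given below; the logistic-regeneration form and the radiosensitivity coefficient are assumptions for illustration, not the paper's equations.

```python
def simulate(dose_rate, days, n0=1.0, a=0.15, k=1.0, d=0.5, dt=0.01):
    """Euler integration of dN/dt = a*N*(1 - N/K) - d*R*N, where R is the
    chronic dose rate and N a normalized cell count (1.0 = normal level)."""
    n = n0
    for _ in range(int(days / dt)):
        n += dt * (a * n * (1.0 - n / k) - d * dose_rate * n)
    return n

healthy = simulate(dose_rate=0.0, days=30)    # stays at the normal level
irradiated = simulate(dose_rate=0.2, days=30) # settles at a depressed count
```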
NASA Astrophysics Data System (ADS)
Jensen, Kristoffer
2002-11-01
A timbre model is proposed for use in multiple applications. This model, which encompasses all voiced isolated musical instruments, has an intuitive parameter set and fixed size, and separates the sounds along dimensions akin to the timbre dimensions proposed in timbre research. The analysis of the model parameters is fully documented, and a method is proposed, in particular, for estimating the difficult decay/release split-point. The main parameters of the model are the spectral envelope, the attack/release durations and relative amplitudes, the inharmonicity, and the shimmer and jitter (which provide both for the slow random variations of the frequencies and amplitudes, and also for additive noises). Some of the applications include synthesis, where a real-time application is being developed with an intuitive GUI; classification and search of sounds based on their content; and a further understanding of acoustic musical instrument behavior. In order to present the background of the model, this presentation will start with sinusoidal analysis/synthesis and some timbre perception research, then present the timbre model, show its validity for individual music instrument sounds, and finally introduce some expression additions to the model.
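One ingredient of such a model, inharmonicity, can be sketched with the standard stiff-string stretched-partial formula f_n = n*f0*sqrt(1 + B*n^2); this expression is an assumed stand-in for illustration, not necessarily the model's own parameterization.

```python
import math

def partial_frequency(n, f0, b_coeff):
    """Frequency of partial n for fundamental f0 and inharmonicity coefficient B."""
    return n * f0 * math.sqrt(1.0 + b_coeff * n * n)

f0 = 220.0
harmonic = partial_frequency(4, f0, 0.0)    # exactly 4*f0 = 880 Hz
stretched = partial_frequency(4, f0, 1e-4)  # slightly sharp of 880 Hz
```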
SPAR Model Structural Efficiencies
John Schroeder; Dan Henry
2013-04-01
The Nuclear Regulatory Commission (NRC) and the Electric Power Research Institute (EPRI) are supporting initiatives aimed at improving the quality of probabilistic risk assessments (PRAs). Included in these initiatives are the resolution of key technical issues that are have been judged to have the most significant influence on the baseline core damage frequency of the NRC’s Standardized Plant Analysis Risk (SPAR) models and licensee PRA models. Previous work addressed issues associated with support system initiating event analysis and loss of off-site power/station blackout analysis. The key technical issues were: • Development of a standard methodology and implementation of support system initiating events • Treatment of loss of offsite power • Development of standard approach for emergency core cooling following containment failure Some of the related issues were not fully resolved. This project continues the effort to resolve outstanding issues. The work scope was intended to include substantial collaboration with EPRI; however, EPRI has had other higher priority initiatives to support. Therefore this project has addressed SPAR modeling issues. The issues addressed are • SPAR model transparency • Common cause failure modeling deficiencies and approaches • Ac and dc modeling deficiencies and approaches • Instrumentation and control system modeling deficiencies and approaches
Damping models in elastography
NASA Astrophysics Data System (ADS)
McGarry, Matthew D. J.; Berger, Hans-Uwe; Van Houten, Elijah E. W.
2007-03-01
Current optimization-based elastography reconstruction algorithms encounter difficulties when the motion approaches resonant conditions, where the model does a poor job of approximating the real behavior of the material. Model accuracy can be improved through the addition of damping effects. These effects occur in vivo due to the complex interaction between microstructural elements of the tissue; however, reconstruction models are typically formulated at larger scales, where the structure can be treated as a continuum. Attenuation behavior in an elastic continuum can be described as a mixture of inertial and viscoelastic damping effects. In order to develop a continuum damping model appropriate for human tissue, the behavior of each aspect of this proportional, or Rayleigh, damping needs to be characterized. In this paper we investigate the nature of these various damping representations with a goal of best describing the in-vivo behavior of actual tissue in order to improve the accuracy and performance of optimization-based elastographic reconstruction. Inertial damping effects are modelled using a complex density, where the imaginary part is equivalent to a damping coefficient, and the effects of viscoelasticity are modelled through the use of complex shear moduli, where the real and imaginary parts represent the storage and loss moduli, respectively. The investigation is carried out through a combination of theoretical analysis, numerical experiment, investigation of gelatine phantoms, and comparison with other continua such as porous media models.
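The viscoelastic half of this description can be illustrated with a complex shear modulus G = G' + iG'' (storage plus loss). Under the e^{i(omega*t - k*x)} convention, the shear wavenumber k = omega*sqrt(rho/G) acquires a negative imaginary part, i.e. spatial attenuation. The numerical values below are arbitrary, soft-tissue-like placeholders.

```python
import cmath
import math

rho = 1000.0               # kg/m^3, tissue-like density
g_storage = 3.0e3          # Pa, storage modulus G'
g_loss = 1.0e3             # Pa, loss modulus G''
omega = 2 * math.pi * 50.0 # rad/s, 50 Hz mechanical excitation

g_complex = complex(g_storage, g_loss)
loss_tangent = g_loss / g_storage        # tan(delta), a common damping measure
k = omega * cmath.sqrt(rho / g_complex)  # complex shear wavenumber
attenuation = -k.imag                    # Np/m; positive means spatial decay
```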
Atmospheric Models for Aerocapture
NASA Technical Reports Server (NTRS)
Justus, C. G.; Duvall, Aleta L.; Keller, Vernon W.
2004-01-01
There are eight destinations in the Solar System with sufficient atmosphere for aerocapture to be a viable aeroassist option: Venus, Earth, Mars, Jupiter, Saturn and its moon Titan, Uranus, and Neptune. Engineering-level atmospheric models for four of these targets (Earth, Mars, Titan, and Neptune) have been developed for NASA to support systems analysis studies of potential future aerocapture missions. Development of a similar atmospheric model for Venus has recently commenced. An important capability of all of these models is their ability to simulate quasi-random density perturbations for Monte Carlo analyses used in developing guidance, navigation, and control algorithms, and for thermal systems design. Similarities and differences among these atmospheric models are presented, with emphasis on the recently developed Neptune model and on planned characteristics of the Venus model. Example applications for aerocapture are also presented and illustrated. Recent updates to the Titan atmospheric model are discussed, in anticipation of applications for trajectory and atmospheric reconstruction of the Huygens probe entry at Titan.
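The quasi-random density perturbation capability can be sketched as a first-order autoregressive (correlated) perturbation about a mean density profile; the AR(1) form and the 5% standard deviation are assumptions for illustration, as the engineering models use more elaborate perturbation schemes.

```python
import math
import random

def perturbed_profile(mean_density, n_steps=1000, sigma=0.05, corr=0.9, seed=7):
    """Mean density scaled by a correlated, zero-mean Gaussian perturbation."""
    rng = random.Random(seed)
    pert = 0.0
    profile = []
    for _ in range(n_steps):
        # AR(1) update keeps the stationary standard deviation equal to sigma
        pert = corr * pert + math.sqrt(1.0 - corr ** 2) * rng.gauss(0.0, sigma)
        profile.append(mean_density * (1.0 + pert))
    return profile

profile = perturbed_profile(1.2e-4)  # kg/m^3, upper-atmosphere-like value
```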
Computationally modeling interpersonal trust
Lee, Jin Joo; Knox, W. Bradley; Wormwood, Jolie B.; Breazeal, Cynthia; DeSteno, David
2013-01-01
We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind's readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to investigate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust. PMID:24363649
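The hidden-Markov-model machinery behind the temporal analysis above can be sketched with a minimal forward pass, which scores a cue sequence under a model (a sequence fits best the trust-level model assigning it the highest likelihood). The two-state, two-cue numbers are invented for illustration.

```python
def forward_likelihood(obs, start, trans, emit):
    """P(obs) for an HMM given start[i], trans[i][j], and emit[i][o]."""
    alpha = [start[i] * emit[i][obs[0]] for i in range(len(start))]
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * trans[i][j] for i in range(len(alpha))) * emit[j][o]
            for j in range(len(start))
        ]
    return sum(alpha)

# Hypothetical states: 0 = "open posture", 1 = "closed posture";
# hypothetical cues: 0 = lean-in, 1 = face-touch.
start = [0.6, 0.4]
trans = [[0.8, 0.2], [0.3, 0.7]]
emit = [[0.9, 0.1], [0.2, 0.8]]

p = forward_likelihood([0, 0, 1], start, trans, emit)
```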
Functional Generalized Additive Models.
McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David
2014-01-01
We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t} where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position, t, along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online. PMID:24729671
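The FGAM linear predictor is the integral over t of F{X(t), t}. As a numerical sketch, that integral can be approximated on a grid, here with a known toy surface F (in the actual method F is an estimated penalized tensor-product B-spline surface, not a closed form).

```python
def fgam_linear_predictor(x_vals, t_vals, F):
    """Trapezoid-rule approximation of the integral of F(X(t), t) dt."""
    total = 0.0
    for i in range(len(t_vals) - 1):
        dt = t_vals[i + 1] - t_vals[i]
        f0 = F(x_vals[i], t_vals[i])
        f1 = F(x_vals[i + 1], t_vals[i + 1])
        total += 0.5 * (f0 + f1) * dt
    return total

# Toy surface F(x, t) = x*t with X(t) = t on [0, 1]: integral of t^2 dt = 1/3.
n = 1000
t = [i / n for i in range(n + 1)]
eta = fgam_linear_predictor(t, t, lambda x, tt: x * tt)
```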
NASA Technical Reports Server (NTRS)
Mccutcheon, Kimble D.; Gordon, Stephen S.; Thompson, Paul A.
1992-01-01
NASA uses the Variable Polarity Plasma Arc Welding (VPPAW) process extensively for fabrication of Space Shuttle External Tanks. This welding process has been in use at NASA since the late 1970's but the physics of the process have never been satisfactorily modeled and understood. In an attempt to advance the level of understanding of VPPAW, Dr. Arthur C. Nunes, Jr., (NASA) has developed a mathematical model of the process. The work described in this report evaluated and used two versions (level-0 and level-1) of Dr. Nunes' model, and a model derived by the University of Alabama at Huntsville (UAH) from Dr. Nunes' level-1 model. Two series of VPPAW experiments were done, using over 400 different combinations of welding parameters. Observations were made of VPPAW process behavior as a function of specific welding parameter changes. Data from these weld experiments was used to evaluate and suggest improvements to Dr. Nunes' model. Experimental data and correlations with the model were used to develop a multi-variable control algorithm for use with a future VPPAW controller. This algorithm is designed to control weld widths (both on the crown and root of the weld) based upon the weld parameters, base metal properties, and real-time observation of the crown width. The algorithm exhibited accuracy comparable to that of the weld width measurements for both aluminum and mild steel welds.
Animal Models of Atherosclerosis
Getz, Godfrey S.; Reardon, Catherine A.
2012-01-01
Atherosclerosis is a chronic inflammatory disorder that is the underlying cause of most cardiovascular disease. Both cells of the vessel wall and cells of the immune system participate in atherogenesis. This process is heavily influenced by plasma lipoproteins, genetics and the hemodynamics of the blood flow in the artery. A variety of small and large animal models have been used to study the atherogenic process. No model is ideal as each has its own advantages and limitations with respect to manipulation of the atherogenic process and modeling human atherosclerosis or lipoprotein profile. Useful large animal models include pigs, rabbits and non-human primates. Due in large part to the relative ease of genetic manipulation and the relatively short time frame for the development of atherosclerosis, murine models are currently the most extensively used. While not all aspects of murine atherosclerosis are identical to humans, studies using murine models have suggested potential biological processes and interactions that underlie this process. As it becomes clear that different factors may influence different stages of lesion development, the use of mouse models with the ability to turn on or delete proteins or cells in tissue specific and temporal manner will be very valuable. PMID:22383700
Combining thermal comfort models
Yigit, A.
1999-07-01
Two models commonly used in thermal comfort studies were combined to develop a two-dimensional computer model that estimates the resistance to dry and evaporative heat transfer for a clothing system from fabric resistance data, fabric thickness data, and information concerning the amount of body surface area covered by different fabric layers and the amount of air trapped between fabric layers. Five different clothing ensembles with different total thermal insulation and very different distributions of the insulation on the body were simulated with 16 sedentary subjects. This paper first evaluates total thermal insulation predictions from the Fanger steady-state model and then uses these data in the Gagge two-compartment (or two-node) model. The combined model uses the transient heat balance of each segment and the whole body. It estimates total insulation value and then uses this value to calculate transient temperature and wettedness. By application of the combined model, predictions of human responses to a wide range of thermal conditions are compared with the responses of human subjects as described in reports of laboratory experiments. Possible reasons for discrepancies between the observed data and predictions of the model are briefly discussed.
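The series-resistance bookkeeping behind such clothing-insulation estimates can be sketched as below; the layer values and the simple dry-heat-loss form are illustrative assumptions, not the combined model's equations.

```python
CLO_TO_SI = 0.155  # m^2*K/W per clo (standard clothing insulation unit)

def dry_heat_loss(t_skin_c, t_air_c, layer_clo):
    """Dry heat flux (W/m^2) through clothing and air layers in series."""
    r_total = sum(layer_clo) * CLO_TO_SI
    return (t_skin_c - t_air_c) / r_total

# Shirt (0.3 clo) + trapped air (0.2 clo) + jacket (0.5 clo),
# skin at 33 degC, air at 20 degC:
q = dry_heat_loss(33.0, 20.0, [0.3, 0.2, 0.5])
```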
NASA Astrophysics Data System (ADS)
Hill, Mary; Ye, Ming; Foglia, Laura; Lu, Dan
2015-04-01
Modeling frameworks include many ideas about, for example, how to parameterize models, conduct sensitivity analysis (including identifying observations and parameters important to calibration and prediction), quantify uncertainty, and so on. Of concern in this talk is meaningful testing of how ideas proposed for any modeling framework perform. The design of meaningful tests depends on the aspect of the framework being tested and the timing of system dynamics. Consider a situation in which the aspect being tested is prediction accuracy and the quantities of concern are readily measured and change quickly, such as for precipitation, floods, or hurricanes. In such cases meaningful tests involve comparing simulated and measured values, and tests can be conducted daily, hourly, or even more frequently. Though often challenged by measurement difficulties, this remains the simplest circumstance for conducting meaningful tests of modeling frameworks. If measurements are not readily available and/or the system responds to changes over decades or centuries, as generally occurs for climate change, saltwater intrusion of groundwater systems, and dewatering of aquifers, prediction accuracy needs to be evaluated in other ways. Often these require high performance computing. For example, complex and simple models can be compared, or cross-validation experiments can be conducted. Both can require massive computational resources for any but the simplest of problems. Testing other aspects of a modeling framework can require different types of tests. For example, testing methods of identifying observations or parameters important to model calibration or predictions might entail evaluation of many circumstances for methods that are themselves commonly computationally demanding. Again, high performance computing is needed even when the goal is to include computationally frugal methods in the modeling framework.
In this talk we discuss the importance of such testing, stress the need to design and implement tests when any modeling framework is developed, and provide examples of tests from several recent publications.
NASA Astrophysics Data System (ADS)
Fisher, G. H.; Hawley, S. L.
1992-05-01
We present 3 sets of solar flare atmospheric models, computed in 3 different limits. In all of the models, energy balance is assumed, with radiative losses from the optically thick transitions of HI, CaII, and MgII balancing flare heating from nonthermal electrons and X-rays from the flaring corona. In the "Hydrostatic" models, we have assumed that flare heating by Coulomb collisions from a flux of nonthermal electrons has been occurring for an infinitely long time, and the corona and chromosphere have achieved both energetic and hydrostatic equilibrium. In the "Impulsive" models, we have assumed that the atmospheric density remains frozen in its preflare state, but that the atmosphere rapidly achieves a temperature structure consistent with energy balance. In the "Evolving" models, we have assumed a temporal variation of the nonthermal electron heating rate consistent with flare heating for timescales of 5-10 minutes, corresponding to a long lived and intense flare, continually undergoing chromospheric evaporation. In this case, the chromospheric model is in hydrostatic equilibrium, but the flare transition region is at depths that are much less than those in the "Hydrostatic" models. We present temperature and density structures in these model atmospheres, line and continuum fluxes from each model, and a few selected line profiles. G.H.F. is supported by AFOSR grant AFOSR-91-116, NASA grant NAGW-2969, and NSF grant ATM91-06052. S.L.H. is supported in part by a Hubble Fellowship from STScI, and in part by Lawrence Livermore National Laboratory. Lawrence Livermore is supported by the US Department of Energy under contract number W-7405-ENG-48.
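The hydrostatic-equilibrium assumption amounts to integrating dP/dz = -rho*g with the ideal gas law rho = P*mu*m_H/(k_B*T); for constant temperature this reproduces the exponential P(z) = P0*exp(-z/H) with scale height H = k_B*T/(mu*m_H*g). The sketch below uses illustrative, chromosphere-like values, not the paper's model parameters.

```python
import math

K_B = 1.380649e-23   # J/K, Boltzmann constant
M_H = 1.6735575e-27  # kg, hydrogen atom mass

def pressure_profile(p0, t_k, mu, g, z_max, n=100_000):
    """Forward-Euler integration of dP/dz = -rho*g for an isothermal layer."""
    dz = z_max / n
    p = p0
    for _ in range(n):
        rho = p * mu * M_H / (K_B * t_k)
        p -= dz * rho * g
    return p

p0, t_k, mu, g = 10.0, 6000.0, 1.3, 274.0     # Pa, K, mean molecular weight, m/s^2
h_scale = K_B * t_k / (mu * M_H * g)           # isothermal scale height, ~139 km
p_top = pressure_profile(p0, t_k, mu, g, h_scale)  # should be close to p0/e
```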
NASA Technical Reports Server (NTRS)
Rubinstein, R. (Editor); Rumsey, C. L. (Editor); Salas, M. D. (Editor); Thomas, J. L. (Editor); Bushnell, Dennis M. (Technical Monitor)
2001-01-01
Advances in turbulence modeling are needed in order to calculate high Reynolds number flows near the onset of separation and beyond. To this end, the participants in this workshop made the following recommendations. (1) A national/international database and standards for turbulence modeling assessment should be established. Existing experimental data sets should be reviewed and categorized. Advantage should be taken of other efforts already underway, such as that of the European Research Community on Flow, Turbulence, and Combustion (ERCOFTAC) consortium. Carefully selected "unit" experiments will be needed, as well as advances in instrumentation, to fill the gaps in existing datasets. A high priority should be given to documenting existing turbulence model capabilities in a standard form, including numerical implementation issues such as grid quality and resolution. (2) NASA should support long-term research on Algebraic Stress Models and Reynolds Stress Models. The emphasis should be placed on improving the length-scale equation, since it is the least understood and is a key component of two-equation and higher models. Second priority should be given to the development of improved near-wall models. Direct Numerical Simulations (DNS) and Large Eddy Simulations (LES) would provide valuable guidance in developing and validating new Reynolds-averaged Navier-Stokes (RANS) models. Although not the focus of this workshop, DNS, LES, and hybrid methods currently represent viable approaches for analysis on a limited basis. Therefore, although computer limitations require the use of RANS methods for realistic configurations at high Reynolds number in the foreseeable future, a balanced effort in turbulence modeling development, validation, and implementation should include these approaches as well.
Wang, Panqu; Cottrell, Garrison
2015-09-01
"The Model" (a.k.a. "TM", Dailey and Cottrell, 1999) is a biologically plausible neurocomputational model designed for face and object recognition. Developed over the last 25 years, TM has been successfully used to model many cognitive phenomena, such as facial expression perception (Dailey et al., 2002), recruitment of the FFA for other categories of expertise (Tong et al., 2008), and the experience moderation effect on the correlation between face and object recognition (Wang et al., 2014). However, as TM is a "shallow" model, it cannot develop the rich feature representations needed for challenging computer vision tasks. Meanwhile, recent deep convolutional neural network techniques produce state-of-the-art results on many computer vision benchmarks, but they have not been used in cognitive modeling. The deep architecture allows the network to develop rich high-level features, which generalize well to novel visual tasks. However, deep learning models use a fully supervised training approach, which seems implausible for the early visual system. Here, "The Deep Model" (TDM) bridges TM and deep learning models to create a "gradually" supervised deep architecture that is both biologically plausible and performs well on computer vision tasks. We show that, by using the sparse PCA and RICA algorithms on natural image datasets, we can obtain center-surround color-opponent receptive fields that represent LGN cells, and Gabor-like filters that represent V1 simple cells. This suggests that an unsupervised learning approach is what is used in the development of the early visual system. We employ this insight to develop a gradually supervised deep neural network and test it on standard computer vision and cognitive modeling tasks. Meeting abstract presented at VSS 2015. PMID:26326779
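The unsupervised feature-learning step can be sketched with plain PCA on image patches: sample patches, center them, and take the eigenvectors of the patch covariance as linear filters. This is a simplified stand-in for the sparse PCA/RICA pipeline in the abstract; with real natural images the leading components resemble the center-surround and oriented filters mentioned there, which a synthetic test image cannot be expected to reproduce.

```python
import numpy as np

def patch_principal_components(image, patch=8, n_samples=2000, seed=0):
    """PCA on randomly sampled image patches; returns eigenvalues in
    descending order and the corresponding filters as rows."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    rows = rng.integers(0, h - patch, n_samples)
    cols = rng.integers(0, w - patch, n_samples)
    X = np.stack([image[r:r + patch, c:c + patch].ravel()
                  for r, c in zip(rows, cols)])
    X = X - X.mean(axis=0)                 # center the patch ensemble
    cov = X.T @ X / len(X)                 # patch covariance matrix
    evals, evecs = np.linalg.eigh(cov)     # ascending eigenvalues
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order].T # rows are component filters
```

The returned filters form an orthonormal basis ordered by explained variance, which is the property the downstream "gradually supervised" training would build on.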
Spin models as microfoundation of macroscopic market models
NASA Astrophysics Data System (ADS)
Krause, Sebastian M.; Bornholdt, Stefan
2013-09-01
Macroscopic price evolution models are commonly used for investment strategies. There have also been promising first achievements in defining microscopic agent-based models for the same purpose. Microscopic models allow a deeper understanding of market mechanisms than purely phenomenological macroscopic models, and thus offer the chance of better models for market regulation. However, microscopic and macroscopic models are commonly studied separately. Here, we exemplify a unified view of a microscopic and a macroscopic market model in a case study, deducing a macroscopic Langevin equation from a microscopic spin market model closely related to the Ising model. The interplay of the microscopic and macroscopic views also allows for a better understanding and adjustment of the microscopic model, and may guide the construction of agent-based market models as the basis of macroscopic models.
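A minimal sketch of an Ising-like spin market of the kind described above: each spin is a trader's buy/sell decision, ferromagnetic coupling to neighbors represents imitation, and a frustrating coupling to the global magnetization |M| represents market feedback. This simplified variant (using the spin itself rather than a separate strategy spin in the global term) and all parameter values are illustrative assumptions, not the paper's exact model.

```python
import math
import random

def spin_market(n=32, steps=200, j=1.0, alpha=4.0, beta=1.5, seed=1):
    """Heat-bath dynamics for a toy spin market on an n x n lattice.
    Returns the magnetization (aggregate buy/sell imbalance) time series."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
    series = []
    for _ in range(steps):
        m = sum(map(sum, spins)) / (n * n)   # magnetization, held fixed per sweep
        for _ in range(n * n):               # one sweep of random single-spin updates
            i, k = rng.randrange(n), rng.randrange(n)
            nb = (spins[(i - 1) % n][k] + spins[(i + 1) % n][k]
                  + spins[i][(k - 1) % n] + spins[i][(k + 1) % n])
            # Local imitation minus frustration by the global state:
            h = j * nb - alpha * spins[i][k] * abs(m)
            p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * h))
            spins[i][k] = 1 if rng.random() < p_up else -1
        series.append(sum(map(sum, spins)) / (n * n))
    return series
```

A magnetization series like this is the microscopic quantity whose coarse-grained dynamics the paper matches to a macroscopic Langevin equation.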
Model compilation: An approach to automated model derivation
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo
1990-01-01
An approach to automated model derivation for knowledge-based systems is introduced. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge-based system. With an implemented example, we illustrate how this approach can be used to derive models of differing precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task-specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
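The compilation idea, applying a sequence of model-to-model transformations to a base model, can be sketched as function composition over a model data structure. The dictionary schema and the two example transformations below are hypothetical illustrations; the actual compilers operate on structure/behavior models of the Reaction Wheel Assembly.

```python
def compile_model(base_model, transformations):
    """Model compilation sketch: each transformation takes a model and
    returns a more specialized model; apply them in sequence."""
    model = dict(base_model)
    for transform in transformations:
        model = transform(model)
    return model

# Hypothetical transformation: keep only components the task can observe.
def keep_observable_components(model):
    out = dict(model)
    out["components"] = {name: comp for name, comp in model["components"].items()
                         if name in model["observables"]}
    return out

# Hypothetical transformation: abstract away behavioral detail,
# retaining only what a troubleshooting task needs (failure modes).
def drop_behavior_details(model):
    out = dict(model)
    out["components"] = {name: {"failure_modes": comp["failure_modes"]}
                         for name, comp in model["components"].items()}
    return out
```

Because each step is a plain function, regenerating a task-specific model after the base model changes is just rerunning the same pipeline, which is the maintenance benefit the abstract highlights.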
Computer Modeling Of Atomization
NASA Technical Reports Server (NTRS)
Giridharan, M.; Ibrahim, E.; Przekwas, A.; Cheuch, S.; Krishnan, A.; Yang, H.; Lee, J.
1994-01-01
Improved mathematical models based on fundamental principles of conservation of mass, energy, and momentum were developed for use in computer simulation of the atomization of jets of liquid fuel in rocket engines. The models are also used to study atomization in terrestrial applications, and prove especially useful in designing improved industrial sprays - humidifier water sprays, chemical process sprays, and sprays of molten metal. Because the present improved mathematical models are based on first principles, they are minimally dependent on empirical correlations and better able to represent the hot-flow conditions that prevail in rocket engines and are too severe to be accessible for detailed experimentation.
Aviation Safety Simulation Model
NASA Technical Reports Server (NTRS)
Houser, Scott; Yackovetsky, Robert (Technical Monitor)
2001-01-01
The Aviation Safety Simulation Model is a software tool that enables users to configure a terrain, a flight path, and an aircraft and simulate the aircraft's flight along the path. The simulation monitors the aircraft's proximity to terrain obstructions, and reports when the aircraft violates accepted minimum distances from an obstruction. This model design facilitates future enhancements to address other flight safety issues, particularly air and runway traffic scenarios. This report shows the user how to build a simulation scenario and run it. It also explains the model's output.
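The core safety check described above, monitoring an aircraft's proximity to terrain along a flight path, can be sketched in a few lines. The function and data layout here are illustrative assumptions, not the tool's actual API.

```python
def clearance_violations(path, terrain, min_clearance):
    """Flag waypoints whose altitude comes within min_clearance of the
    terrain below them. `path` is a list of (x, y, altitude) waypoints;
    `terrain(x, y)` returns ground elevation at that position.
    Returns (waypoint_index, actual_clearance) pairs for violations."""
    violations = []
    for step, (x, y, alt) in enumerate(path):
        ground = terrain(x, y)
        if alt - ground < min_clearance:
            violations.append((step, alt - ground))
    return violations
```

A real simulation would interpolate between waypoints and use a gridded terrain model, but the per-sample clearance test is the same.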
Modeling microtubule oscillations
Jobs, Elmar [Hoechstleistungsrechenzentrum, Forschungszentrum Juelich GmbH, D-52425 Juelich (Germany); Wolf, Dietrich E. [Theoretical Physics FB10, Gerhard-Mercator-University, D-47048 Duisburg (Germany); Flyvbjerg, Henrik [Condensed Matter Physics and Chemistry Department, Risoe National Laboratory, DK-4000 Roskilde (Denmark); The Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen Oe (Denmark)
1999-10-05
Synchronization of molecular reactions in a macroscopic volume may cause the volume's physical properties to change dynamically and thus reveal much about the reactions. As an example, experimental time series for so-called microtubule oscillations are analyzed in terms of a minimal model for this complex polymerization-depolymerization cycle. The model reproduces well the qualitatively different time series that result from different experimental conditions, and illuminates the role and importance of individual processes in the cycle. Simple experiments are suggested that can further test and define the model and the polymer's reaction cycle.
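A minimal polymerization-depolymerization cycle of the kind the abstract describes can be sketched as three coupled pools: active (GTP) tubulin assembles into polymer, polymer depolymerizes into inactive (GDP) tubulin, and inactive tubulin is regenerated back to the active form. The rate laws and constants below are illustrative assumptions, not the paper's fitted model; the point is the closed cycle, which conserves total tubulin by construction.

```python
def microtubule_cycle(t0=1.0, steps=2000, dt=0.01, a=1.0, c=0.5, r=0.2):
    """Euler-integrate a toy 3-pool assembly cycle.
    t: active (GTP) tubulin, p: polymerized tubulin, d: inactive (GDP)
    tubulin. Returns final pools and the polymer-mass time series."""
    t, p, d = t0, 0.0, 0.0
    history = []
    for _ in range(steps):
        assembly = a * t * (p + 0.01)   # growth onto existing polymer (+ weak nucleation)
        catastrophe = c * p             # depolymerization releases GDP-tubulin
        regen = r * d                   # nucleotide exchange reactivates tubulin
        t += dt * (regen - assembly)
        p += dt * (assembly - catastrophe)
        d += dt * (catastrophe - regen)
        history.append(p)
    return t, p, d, history
```

The polymer-mass trace plays the role of the macroscopic observable (e.g. turbidity) whose time series the paper compares against experiment.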
NASA Astrophysics Data System (ADS)
Rokni Lamooki, Gholam Reza; Shirazi, Amir H.; Mani, Ali R.
2015-05-01
The thyroid's main chemical reactions are employed to develop a mathematical model. The presented model is based on differential equations whose dynamics reflect many aspects of the thyroid's behavior. Our main focus here is the well-known, but not well-understood, phenomenon called the Wolff-Chaikoff effect. It is shown that the inhibitory effect of iodide intake on the rate of a single enzyme produces an effect similar to the Wolff-Chaikoff effect. Beyond this, the presented model is capable of revealing other complex phenomena of thyroid hormone homeostasis.
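A single-enzyme inhibition of the sort invoked above can be illustrated with textbook substrate-inhibition kinetics, where the reaction rate rises with iodide at low concentrations but falls again at high concentrations, qualitatively mimicking Wolff-Chaikoff-like suppression. The rate law is a generic Michaelis-Menten variant and the parameter values are illustrative, not taken from the paper.

```python
def organification_rate(iodide, v_max=1.0, k_m=0.5, k_i=2.0):
    """Substrate-inhibition kinetics: v = Vmax*[I] / (Km + [I] + [I]^2/Ki).
    The quadratic term in the denominator makes the rate non-monotonic
    in iodide, peaking at [I] = sqrt(Km*Ki)."""
    return v_max * iodide / (k_m + iodide + iodide ** 2 / k_i)
```

With these parameters the rate peaks near iodide = 1.0 and is suppressed on both sides, which is the qualitative signature the model attributes to a single enzyme.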
Stochastic ontogenetic growth model
NASA Astrophysics Data System (ADS)
West, B. J.; West, D.
2012-02-01
An ontogenetic growth model (OGM) for a thermodynamically closed system is generalized to satisfy both the first and second laws of thermodynamics. The hypothesized stochastic ontogenetic growth model (SOGM) is shown to entail the interspecies allometry relation by explicitly averaging the basal metabolic rate and the total body mass over the steady-state probability density for the total body mass (TBM). This is the first derivation of the interspecies metabolic allometric relation from a dynamical model; the asymptotic steady-state distribution of the TBM is fit to data and shown to be an inverse power law.
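The deterministic OGM takes the standard form dm/dt = a*m^(3/4) - b*m, whose fixed point is m* = (a/b)^4; a stochastic generalization can be sketched with Euler-Maruyama integration. The multiplicative-noise form and all parameter values below are illustrative assumptions, not the SOGM's exact specification.

```python
import math
import random

def stochastic_growth(m0=0.01, a=1.0, b=0.5, sigma=0.05,
                      dt=0.001, steps=20000, seed=0):
    """Euler-Maruyama sketch of stochastic ontogenetic growth:
    drift dm/dt = a*m**0.75 - b*m (the standard OGM form) plus
    multiplicative Gaussian noise of strength sigma."""
    rng = random.Random(seed)
    m = m0
    for _ in range(steps):
        drift = a * m ** 0.75 - b * m
        m += drift * dt + sigma * m * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        m = max(m, 1e-12)               # keep body mass positive
    return m
```

With sigma = 0 the trajectory relaxes toward m* = (a/b)^4 = 16; with noise it fluctuates about that value, and it is the stationary distribution of such fluctuations that the SOGM averages over.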
Modeling regional power transfers
Kavicky, J.A.; Veselka, T.D.
1994-03-01
The Spot Market Network (SMN) model was used to estimate spot market transactions and prices between various North American Electric Reliability Council (NERC) regions for summer on-peak situations. A preliminary analysis of new or proposed additions to the transmission network was performed. The effects of alternative exempt wholesale generator (EWG) options on spot market transactions and the transmission system were also studied. This paper presents the SMN regional modeling approach and summarizes simulation results. Although the paper focuses on a regional network representation, a discussion of how the SMN model was used to represent a detailed utility-level network is also presented.