NASA Astrophysics Data System (ADS)
Lemaître, J.-F.; Dubray, N.; Hilaire, S.; Panebianco, S.; Sida, J.-L.
2013-12-01
Our purpose is to determine fission fragment characteristics within the framework of a scission-point model named SPY (Scission Point Yields). This approach can be considered a theoretical laboratory for studying the fission mechanism, since it gives access to the correlation between fragment properties and their nuclear structure: shell corrections, pairing, collective degrees of freedom, and odd-even effects. Which of these are dominant in the final state? What is the impact of the compound-nucleus structure? The SPY model provides a statistical description of the fission process at the scission point, where the fragments are completely formed and well separated, with fixed properties. The key feature of the model is that the nuclear structure of the fragments is derived from full quantum microscopic calculations. This approach allows computing the fission final state of extremely exotic nuclei that are inaccessible to most fission models currently available.
New statistical scission-point model to predict fission fragment observables
NASA Astrophysics Data System (ADS)
Lemaître, Jean-François; Panebianco, Stefano; Sida, Jean-Luc; Hilaire, Stéphane; Heinrich, Sophie
2015-09-01
The development of high-performance computing facilities makes possible the massive production of nuclear data in a fully microscopic framework. Taking advantage of individual potential calculations for more than 7000 nuclei, a new statistical scission-point model, called SPY, has been developed. It gives access to the absolute available energy at the scission point, which allows the use of a parameter-free microcanonical statistical description to calculate the distributions and mean values of all fission observables. SPY exploits the richness of microscopic inputs within a rather simple theoretical framework, with no parameter other than the scission-point definition, to draw clear conclusions based on full knowledge of the ingredients involved in the model, at very limited computing cost.
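The microcanonical idea above, that each split is weighted by the phase space its available energy opens, can be sketched schematically. In the sketch below a simple Fermi-gas state density with level-density parameter a = A/8 MeV⁻¹ stands in for SPY's microscopic state densities, and the available energies are invented for illustration; none of this reproduces SPY's actual inputs.

```python
import math

def fermi_gas_density(E, a):
    """Schematic Fermi-gas state density, rho(E) ~ exp(2*sqrt(a*E))."""
    return math.exp(2.0 * math.sqrt(a * E)) if E > 0.0 else 0.0

def microcanonical_yields(splits):
    """Turn available energies at scission into relative yields.

    splits: {light_fragment_mass: E_available_MeV}. Each split is
    weighted by the phase space open to it; a = A/8 is a textbook
    stand-in, and the energies below are illustrative placeholders,
    not microscopically computed values.
    """
    w = {A: fermi_gas_density(E, A / 8.0) for A, E in splits.items()}
    total = sum(w.values())
    return {A: wi / total for A, wi in w.items()}

# A split with more available energy gets exponentially more weight.
y = microcanonical_yields({100: 25.0, 118: 10.0, 132: 30.0})
```

Because the density grows exponentially with the available energy, even a few MeV of extra energy at scission strongly favors one fragmentation over another, which is why the energy balance dominates the predicted yields.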
SPY: a new scission-point model based on microscopic inputs to predict fission fragment properties
NASA Astrophysics Data System (ADS)
Panebianco, Stefano; Dubray, Noël; Goriely, Stéphane; Hilaire, Stéphane; Lemaître, Jean-François; Sida, Jean-Luc
2014-04-01
Despite the difficulty of describing the whole fission dynamics, the main fragment characteristics can be determined in a static approach based on a so-called scission-point model. Within this framework, a new Scission-Point model for the calculation of fission fragment Yields (SPY) has been developed. This model, initially based on the approach developed by Wilkins in the late seventies, consists of performing a static energy balance at scission, where the two fragments are assumed to be completely separated so that their macroscopic properties (mass and charge) can be considered fixed. Given the state density of the system, averaged quantities such as mass and charge yields and mean kinetic and excitation energies can then be extracted in the framework of a microcanonical statistical description. The main advantage of the SPY model is the introduction of one of the most up-to-date microscopic descriptions of the nucleus for the individual energy of each fragment and, in the future, for their state densities. These quantities are obtained in the framework of HFB calculations using the Gogny nucleon-nucleon interaction, ensuring the overall coherence of the model. Starting from a description of the SPY model and its main features, a comparison between SPY predictions and experimental data will be discussed for some specific cases, from light nuclei around mercury to the major actinides. Moreover, extensive predictions over the whole chart of nuclides will be discussed, with particular attention to their implications for stellar nucleosynthesis. Finally, future developments, mainly concerning the introduction of microscopic state densities, will be briefly discussed.
Characterization of the scission point from fission-fragment velocities
M. Caamaño; F. Farget; O. Delaune; K. -H. Schmidt; C. Schmitt; L. Audouin; C. -O. Bacri; J. Benlliure; E. Casarejos; X. Derkx; B. Fernández-Domínguez; L. Gaudefroy; C. Golabek; B. Jurado; A. Lemasson; D. Ramos; C. Rodríguez-Tajes; T. Roger; A. Shrivastava
2015-07-15
The isotopic-yield distributions and kinematic properties of fragments produced in transfer-induced fission of 240Pu and fusion-induced fission of 250Cf, with 9 MeV and 45 MeV of excitation energy respectively, were measured in inverse kinematics with the VAMOS spectrometer. The kinematic properties of the identified fission fragments make it possible to derive properties of the scission configuration such as the distance between fragments, the total kinetic energy, the neutron multiplicity, the total excitation energy, and, for the first time, the proton- and neutron-number sharing during the emergence of the fragments. These properties of the scission point are studied as functions of the fragment atomic number. The correlation between these observables, gathered in a single experiment and for two different fissioning systems at different excitation energies, gives valuable information for the understanding and modeling of the fission process.
Compound nucleus decay: Comparison between saddle point and scission point barriers
Santos, T. J.; Carlson, B. V.
2014-11-11
One of the principal characteristics of nuclear multifragmentation is the emission of complex fragments of intermediate mass. An extension of the statistical multifragmentation model has been developed in which the process can be interpreted as the near-simultaneous limit of a series of sequential binary decays. In this extension, intermediate-mass fragment emission is described by expressions almost identical to those for light-particle emission. At lower temperatures, similar expressions have been shown to furnish a good description of the emission of very light intermediate-mass fragments, but not of heavier fragments, whose emission seems to be determined by the transition density at the saddle point rather than at the scission point. Here, we compare these different formulations of intermediate-mass fragment emission and analyze the extent to which they remain distinguishable at high excitation energy.
Kadmensky, S. G.; Bunakov, V. E.; Kadmensky, S. S.
2012-11-15
It is shown that the emergence of anisotropies in the angular distributions of fragments originating from the spontaneous and induced fission of oriented actinide nuclei is possible only if nonuniformities in the population of the projections M (K) of the fissile-nucleus spin onto the z axis of the laboratory frame (fissile-nucleus symmetry axis) appear in the vicinity of the scission point, and not in the vicinity of the outer saddle point of the deformation potential. The possibilities for creating the orientation of fissile nuclei for spontaneous and induced fission, and the effect of these orientations on the anisotropies under analysis, are considered. The role of the Coriolis interaction as a unique source of the mixing of different-K fissile-nucleus states at all stages of the fission process is studied with allowance for the dynamical enhancement of this interaction for excited thermalized states of the nucleus, which are characterized by a high energy density. It is shown that the absence of thermalization of excited states of the fissile nucleus, which appear because of the nonadiabaticity of its collective deformation motion in the vicinity of the scission point, is a condition for conservation of the influence that transition fission states formed at the inner and outer fission barriers exert on the distribution of the spin projections K in low-energy spontaneous nuclear fission. It is confirmed that the anisotropies observed in the angular distributions of fragments originating from fission induced by fast light particles (multiply charged ions) are due to the appearance of strongly excited equilibrium (nonequilibrium) states of the fissile nucleus in the vicinity of its scission point, with a Gibbs (non-Gibbs) distribution of the projections K.
Fission yield calculation using toy model based on Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Jubaidah, Kurniadi, Rizal
2015-09-01
The toy model is a new approximation for predicting fission yield distributions. It treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nuclear properties. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments constitute the fission yield. Energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other, described by five parameters: the scission point of the two curves (Rc), the means of the left and right curves (μL, μR), and the standard deviations of the left and right curves (σL, σR). The fission yield distribution is analyzed by Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission-yield probability distribution, whereas variation in the iteration coefficient only changes the frequency of the fission yields. The Monte Carlo simulation for fission yield calculation using the toy model successfully reproduces the same tendency as experimental results, where the average light fission yield is in the range of 90
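The two-Gaussian Monte Carlo picture described above can be sketched in a few lines. The compound mass A = 236 and the values of μL and σL below are illustrative placeholders, not the paper's fitted parameters; the heavy partner simply takes the remaining nucleons, so the right curve is the mirror image with μR = A − μL.

```python
import random

def toy_fission_yields(A=236, mu_L=96, sigma_L=6.0, n_events=100_000, seed=42):
    """Sample fission events from a two-Gaussian toy picture.

    The light-fragment mass is drawn from N(mu_L, sigma_L); the heavy
    fragment takes the remaining A - m_L nucleons. Parameter values
    are illustrative assumptions only.
    """
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_events):
        m_light = round(rng.gauss(mu_L, sigma_L))
        m_light = max(1, min(A - 1, m_light))      # keep masses physical
        for m in (m_light, A - m_light):           # both fragments counted
            counts[m] = counts.get(m, 0) + 1
    # Normalize to the conventional 200% total yield (two fragments/event).
    return {m: 200.0 * c / (2 * n_events) for m, c in sorted(counts.items())}

ydist = toy_fission_yields()
```

By construction the distribution is symmetric about A/2, with peaks near μL and A − μL and a deep valley at symmetric splits, which is the qualitative shape the Monte Carlo analysis in the abstract explores.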
Fission fragment mass distribution studies in the 30Si + 180Hf reaction
NASA Astrophysics Data System (ADS)
Shamlath, A.; Shareef, M.; Prasad, E.; Sugathan, P.; Thomas, R. G.; Jhingan, A.; Appannababu, S.; Nasirov, A. K.; Vinodkumar, A. M.; Varier, K. M.; Yadav, C.; Babu, B. R. S.; Nath, S.; Mohanto, G.; Mukul, Ish; Singh, D.; Kailas, S.
2016-01-01
Fission fragment mass-angle and mass-ratio distributions have been measured for the 30Si + 180Hf reaction in the beam energy range 128-148 MeV. A quasifission signature is observed in this reaction, which forms the compound system 210Rn. The results are compared with the very asymmetric reaction 16O + 194Pt, forming the same compound nucleus. Calculations assuming saddle-point, scission-point and DNS models have been performed to interpret the experimental results. The results strongly suggest an entrance-channel dependence of quasifission in heavy-ion collisions.
Killeen, P R
1999-01-01
Models are tools; they need to fit both the hand and the task. Presence or absence of a feature such as a pacemaker or a cascade is not in itself good. Or bad. Criteria for model evaluation involve benefit-cost ratios, with the numerator a function of the range of phenomena explained, goodness of fit, consistency with other nearby models, and intangibles such as beauty. The denominator is a function of complexity, the number of phenomena that must be ignored, and the effort necessary to incorporate the model into one's parlance. Neither part of the ratio can yet be evaluated for MTS, whose authors provide some cogent challenges to SET. PMID:10220934
Study of Fission Barrier Heights of Uranium Isotopes by the Macroscopic-Microscopic Method
NASA Astrophysics Data System (ADS)
Zhong, Chun-Lai; Fan, Tie-Shuan
2014-09-01
Potential energy surfaces of uranium nuclei with mass numbers 229 through 244 are investigated in the framework of the macroscopic-microscopic model, and the heights of the static fission barriers are obtained in terms of a double-humped structure. The macroscopic part of the nuclear energy is calculated according to the Lublin-Strasbourg-drop (LSD) model. Shell and pairing corrections, as the microscopic part, are calculated with a folded-Yukawa single-particle potential. The calculation is carried out in a five-dimensional parameter space of generalized Lawrence shapes. In order to extract saddle points on the potential energy surface, a new algorithm is developed that can effectively find an optimal fission path leading from the ground state to the scission point. The comparison of our results with available experimental data and other theoretical results confirms the reliability of our calculations.
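The saddle-point extraction described above is, in essence, a minimax path problem: among all paths from the ground state to scission, find the one whose highest energy is lowest. The paper does not specify its algorithm, so the sketch below is a generic Dijkstra-style stand-in on a toy 2D grid (the real calculation uses a five-dimensional shape space), with an invented double-humped landscape.

```python
import heapq

def barrier_energy(pes, start, goal):
    """Minimax path search on a discretized potential-energy surface.

    pes: 2D list of energies over deformation coordinates; start/goal:
    (i, j) cells for the ground state and scission point. Returns the
    lowest achievable 'maximum energy along the path', i.e. the saddle
    energy; subtracting the ground-state energy gives the barrier height.
    """
    n, m = len(pes), len(pes[0])
    best = [[float("inf")] * m for _ in range(n)]
    si, sj = start
    best[si][sj] = pes[si][sj]
    heap = [(pes[si][sj], start)]
    while heap:
        e, (i, j) = heapq.heappop(heap)
        if (i, j) == goal:
            return e
        if e > best[i][j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < m:
                ne = max(e, pes[ni][nj])        # path cost = highest point
                if ne < best[ni][nj]:
                    best[ni][nj] = ne
                    heapq.heappush(heap, (ne, (ni, nj)))
    return float("inf")

# Toy double-humped landscape along row 0: inner barrier 6.0, isomer
# well 2.0, outer barrier 5.5; row 1 is an expensive detour.
pes = [[0.0, 6.0, 2.0, 5.5, 1.0],
       [9.0, 9.0, 9.0, 9.0, 9.0]]
saddle = barrier_energy(pes, (0, 0), (0, 4))
```

On this toy surface the optimal path stays in row 0 and its highest point is the 6.0 MeV inner barrier, illustrating how the search picks out the controlling saddle of a double-humped barrier.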
Models, Fiction, and Fictional Models
NASA Astrophysics Data System (ADS)
Liu, Chuang
2014-03-01
The following sections are included: * Introduction * Why Most Models in Science Are Not Fictional * Typically Fictional Models in Science * Modeling the Unobservable * Fictional Models for the Unobservable? * References
Niche Modeling: Model Evaluation
Peterson, A. Townsend
2012-08-29
Ecological niche modeling has become a very popular tool in ecological and biogeographic studies across broad extents. The tool is used in hundreds of publications each year now, but some fundamental aspects of the approach ...
Mental Models, Conceptual Models, and Modelling.
ERIC Educational Resources Information Center
Greca, Ileana Maria; Moreira, Marco Antonio
2000-01-01
Reviews science education research into representations constructed by students in their interactions with the world, its phenomena, and artefacts. Features discussions of mental models, conceptual models, and the activity of modeling. (Contains 30 references.) (Author/WRM)
Quantum Circuit Model Topological Model
Rowell, Eric C.
Comparison of the quantum circuit model and the topological model of quantum computation. Eric Rowell, Texas A&M University, October 2010. Outline: 1. Quantum Circuit Model (gates, circuits)
MODEL DEVELOPMENT - DOSE MODELS
Model Development
Humans are exposed to mixtures of chemicals from multiple pathways and routes. These exposures may result from a single event or may accumulate over time if multiple exposure events occur. The traditional approach of assessing risk from a single chemica...
NASA Technical Reports Server (NTRS)
Pyle, J. A.; Butler, D. M.; Cariolle, D.; Garcia, R. R.; Grose, W. L.; Guthrie, P. D.; Ko, M.; Owens, A. J.; Plumb, R. A.; Prather, M. J.
1985-01-01
The types of models used in assessments of possible chemical perturbations to the stratosphere are reviewed. The status of one- and two-dimensional models is discussed. The problem of model validation is covered before the status of photochemical modeling efforts is discussed. A hierarchy of tests for photochemical models is presented.
ERIC Educational Resources Information Center
Levenson, Harold E.; Hurni, Andre
1978-01-01
Suggests building models as a way to reinforce and enhance related subjects such as architectural drafting, structural carpentry, etc., and discusses time, materials, scales, tools or equipment needed, how to achieve realistic special effects, and the types of projects that can be built (model of complete building, a panoramic model, and model…
Models-3 is a third generation air quality modeling system that contains a variety of tools to perform research and analysis of critical environmental questions and problems. These tools provide regulatory analysts and scientists with quicker results, greater scientific accuracy ...
This presentation presented information on entrainment models. Entrainment models use entrainment hypotheses to express the continuity equation. The advantage is that plume boundaries are known. A major disadvantage is that the problems that can be solved are rather simple. The ...
NASA Technical Reports Server (NTRS)
Rubesin, Morris W.
1987-01-01
Recent developments at several levels of statistical turbulence modeling applicable to aerodynamics are briefly surveyed. Emphasis is on examples of model improvements for transonic, two-dimensional flows. Experience with the development of these improved models is cited to suggest methods of accelerating the modeling process necessary to keep abreast of the rapid movement of computational fluid dynamics into the computation of complex three-dimensional flows.
Phoenix (formerly referred to as the Second Generation Model or SGM) is a global general equilibrium model designed to analyze energy-economy-climate related questions and policy implications in the medium- to long-term. This model disaggregates the global economy into 26 industr...
Sadar, A.J.
1993-03-01
Mathematical modeling of air pollution dispersion has been performed for many years to estimate the impact of source emissions on air quality. EPA provides guidance on choosing appropriate computer models, such as COMPLEX I, for regulatory applications. The agency says several models are suitable for predicting air-quality impacts in most situations.
ERIC Educational Resources Information Center
James, W. G. G.
1970-01-01
Discusses the historical development of both the wave and the corpuscular photon model of light. Suggests that students should be informed that the two models are complementary and that each model successfully describes a wide range of radiation phenomena. Cites 19 references which might be of interest to physics teachers and students. (LC)
Hydrological models are mediating models
NASA Astrophysics Data System (ADS)
Babel, L. V.; Karssenberg, D.
2013-08-01
Despite the increasing role of models in hydrological research and decision-making processes, only a few accounts of the nature and function of models exist in hydrology. Earlier considerations have traditionally been conducted while making a clear distinction between physically-based and conceptual models. A new philosophical account, primarily based on the fields of physics and economics, transcends classes of models and scientific disciplines by considering models as "mediators" between theory and observations. The core of this approach lies in identifying models as (1) being only partially dependent on theory and observations, (2) integrating non-deductive elements in their construction, and (3) carrying the role of instruments of scientific enquiry about both theory and the world. The applicability of this approach to hydrology is evaluated in the present article. Three widely used hydrological models, each showing a different degree of apparent physicality, are confronted with the main characteristics of the "mediating models" concept. We argue that irrespective of their kind, hydrological models depend on both theory and observations, rather than merely on one of these two domains. Their construction additionally involves a large number of miscellaneous, external ingredients, such as past experiences, model objectives, knowledge and preferences of the modeller, as well as hardware and software resources. We show that hydrological models convey the role of instruments in scientific practice by mediating between theory and the world. It follows from these considerations that the traditional distinction between physically-based and conceptual models is too simplistic and refers at best to the stage at which theory and observations steer model construction. The large variety of ingredients involved in model construction deserves closer attention, as it is rarely presented explicitly in the peer-reviewed literature. We believe that devoting more attention to identifying and communicating the many factors involved in model development might increase the transparency of model building.
Model Experiments and Model Descriptions
NASA Technical Reports Server (NTRS)
Jackman, Charles H.; Ko, Malcolm K. W.; Weisenstein, Debra; Scott, Courtney J.; Shia, Run-Lie; Rodriguez, Jose; Sze, N. D.; Vohralik, Peter; Randeniya, Lakshman; Plumb, Ian
1999-01-01
The Second Workshop on Stratospheric Models and Measurements (M&M II) is the continuation of the effort started in the first workshop (M&M I, Prather and Remsberg [1993]) held in 1992. As originally stated, the aim of M&M is to provide a foundation for establishing the credibility of stratospheric models used in environmental assessments of the ozone response to chlorofluorocarbons, aircraft emissions, and other climate-chemistry interactions. To accomplish this, a set of measurements of the present-day atmosphere was selected. The intent was that successful simulation of this set of measurements should become the prerequisite for accepting these models as having reliable predictions of future ozone behavior. This section is divided into two parts: model experiments and model descriptions. In the model experiments part, participants were charged with designing a number of experiments that would use observations to test whether models are using the correct mechanisms to simulate the distributions of ozone and other trace gases in the atmosphere. The purpose is closely tied to the need to reduce the uncertainties in the model-predicted responses of stratospheric ozone to perturbations. The specifications for the experiments were sent to the modeling community in June 1997, and twenty-eight modeling groups responded to the request for input. The first part of this section discusses the different modeling groups, along with the experiments performed. The second part gives brief descriptions of each model, as provided by the individual modeling groups.
Model Reduction in Groundwater Modeling
NASA Astrophysics Data System (ADS)
Yeh, W. W. G.
2014-12-01
Model reduction has been shown to be a very effective method for reducing the computational burden of large-scale simulations. Model reduction techniques preserve much of the physical knowledge of the system and primarily seek to remove components from the model that do not provide significant information of interest. Proper Orthogonal Decomposition (POD) is a model reduction technique by which a system of ordinary differential equations is projected onto a much smaller subspace in such a way that the span of the subspace approximates the span of the original full model space. Basically, the POD technique selects a small number of orthonormal basis functions (principal components) that span the spatial variability of the solutions. In this way the state variable (head) is approximated by a linear combination of these basis functions and, using a Galerkin projection, the dimension of the problem is significantly reduced. It has been shown that for a highly discretized model, the reduced model can be two to three orders of magnitude smaller than the original model and runs 1,000 times faster. More importantly, the reduced model captures the dominant characteristics of the full model and produces sufficiently accurate solutions. One of the major tasks in the development of the reduced model is the selection of the snapshots used to determine the dominant eigenvectors. This paper discusses ways to optimize the snapshot selection. Additionally, the paper discusses applications of the reduced model to parameter estimation, Monte Carlo simulation, and experimental design in groundwater modeling.
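The POD recipe described above (snapshots, orthonormal basis, Galerkin projection) can be sketched with a plain SVD of the snapshot matrix. The smooth field below is a made-up illustrative example, not a groundwater model; the 0.999 energy threshold is likewise an arbitrary choice.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Compute a POD basis from a snapshot matrix.

    snapshots: (n_nodes, n_snapshots) array, each column one model
    state (e.g. heads at all grid nodes at one time). Returns the
    orthonormal modes capturing the requested fraction of the
    snapshot energy (sum of squared singular values).
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(frac, energy)) + 1
    return U[:, :r]                       # (n_nodes, r), orthonormal columns

# Illustrative snapshots: a smooth 1-D field at 500 nodes, 40 "times".
x = np.linspace(0.0, 1.0, 500)
t = np.linspace(0.0, 1.0, 40)
S = np.array([np.exp(-x / (0.2 + ti)) for ti in t]).T   # (500, 40)

Phi = pod_basis(S)                        # reduced basis
S_hat = Phi @ (Phi.T @ S)                 # project onto subspace and lift back
err = np.linalg.norm(S - S_hat) / np.linalg.norm(S)
```

Because the snapshot family here is smooth, a handful of modes out of 40 suffice for sub-percent reconstruction error, which is exactly the order-of-magnitude compression the abstract reports for highly discretized groundwater models.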
Phillips, C.K.
1985-12-01
This lecture provides a survey of the methods used to model fast magnetosonic wave coupling, propagation, and absorption in tokamaks. Three distinct types of modelling codes, whose validity and limitations will be contrasted, are discussed: discrete models which utilize ray tracing techniques, approximate continuous field models based on a parabolic approximation of the wave equation, and full field models derived using finite difference techniques. The inclusion of mode conversion effects in these models and the modification of the minority distribution function will also be discussed. The lecture concludes with a presentation of time-dependent global transport simulations of ICRF-heated tokamak discharges obtained in conjunction with the ICRF modelling codes. 52 refs., 15 figs.
Towards an improved evaluation of neutron-induced fission cross sections on actinides
Goriely, S.; Hilaire, S.; Koning, A. J.; Capote, R.
2011-03-15
Mean-field calculations can now provide all the nuclear ingredients required to describe the fission path from the equilibrium deformation up to the nuclear scission point. The information obtained from microscopic mean-field models has been included in the TALYS reaction code to improve the predictions of neutron-induced fission cross sections. The nuclear inputs concern not only the details of the energy surface along the fission path, but also a coherent estimate of the nuclear level density derived within the combinatorial approach on the basis of the same single-particle properties, in particular at the fission saddle points. The predictive power of such a microscopic approach is tested on the experimental data available for the uranium isotopic chain. It is also shown that the various inputs can be tuned to best reproduce experimental data within one coherent framework, so that in the near future it should become possible to make, on the basis of such models, accurate fission-cross-section calculations and corresponding estimates for nuclei, energy ranges, or reaction channels for which no data exist. Such model uncertainties are usually not taken into account in data evaluations.
H. Yang
1999-11-04
The purpose of this analysis and model report (AMR) for the Ventilation Model is to analyze the effects of pre-closure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts and provide heat removal data to support EBS design. It will also provide input data (initial conditions, and time varying boundary conditions) for the EBS post-closure performance assessment and the EBS Water Distribution and Removal Process Model. The objective of the analysis is to develop, describe, and apply calculation methods and models that can be used to predict thermal conditions within emplacement drifts under forced ventilation during the pre-closure period. The scope of this analysis includes: (1) Provide a general description of effects and heat transfer process of emplacement drift ventilation. (2) Develop a modeling approach to simulate the impacts of pre-closure ventilation on the thermal conditions in emplacement drifts. (3) Identify and document inputs to be used for modeling emplacement ventilation. (4) Perform calculations of temperatures and heat removal in the emplacement drift. (5) Address general considerations of the effect of water/moisture removal by ventilation on the repository thermal conditions. The numerical modeling in this document will be limited to heat-only modeling and calculations. Only a preliminary assessment of the heat/moisture ventilation effects and modeling method will be performed in this revision. Modeling of moisture effects on heat removal and emplacement drift temperature may be performed in the future.
NASA Technical Reports Server (NTRS)
Druyan, Leonard M.
2012-01-01
Climate modeling is a very broad topic, so a single volume can only offer a small sampling of relevant research activities. This volume of 14 chapters includes descriptions of a variety of modeling studies for a variety of geographic regions by an international roster of authors. The climate research community generally uses the rubric "climate models" to refer to organized sets of computer instructions that produce simulations of climate evolution. The code is based on physical relationships that describe the shared variability of meteorological parameters such as temperature, humidity, precipitation rate, circulation, radiation fluxes, etc. Three-dimensional climate models are integrated over time in order to compute the temporal and spatial variations of these parameters. Model domains can be global or regional, and the horizontal and vertical resolutions of the computational grid vary from model to model. Considering the entire climate system requires accounting for interactions between solar insolation and atmospheric, oceanic and continental processes, the latter including land hydrology and vegetation. Model simulations may concentrate on one or more of these components, but the most sophisticated models estimate the mutual interactions of all of these environments. Advances in computer technology have prompted investments in more complex model configurations that consider more interactions among phenomena than were possible with yesterday's computers. However, not every attempt to add computational layers is rewarded by better model performance. Extensive research is required to test and document any advantages gained by greater sophistication in model formulation. One purpose of publishing climate model research results is to present purported advances for evaluation by the scientific community.
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Moncrieff, Mitchell; Einaud, Franco (Technical Monitor)
2001-01-01
Numerical cloud models have been developed and applied extensively to study cloud-scale and mesoscale processes during the past four decades. The distinctive aspect of these cloud models is their ability to treat explicitly (or resolve) cloud-scale dynamics. This requires the cloud models to be formulated from the non-hydrostatic equations of motion, which explicitly include the vertical acceleration terms, since the vertical and horizontal scales of convection are similar. Such models are also necessary to allow gravity waves, such as those triggered by clouds, to be resolved explicitly; in contrast, the hydrostatic approximation, usually applied in global or regional models, does not allow such waves to be resolved. In addition, exponentially increasing computer capabilities have allowed time integrations to increase from hours to days, domain grids to grow from fewer than 2,000 to more than 2,500,000 grid points with 500 to 1000 m resolution, and 3-D models to become increasingly prevalent. The cloud-resolving model is now at a stage where it can provide reasonably accurate statistical information about the sub-grid, cloud-resolving processes that are poorly parameterized in climate models and numerical prediction models.
V. Chipman
2002-10-05
The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions as outputted from the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. The purposes of Revision 01 of the Ventilation Model are: (1) To validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII-10Q, Models (Section 7.0). (2) To satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a). Specifically to demonstrate, with respect to the ANSYS ventilation model, the adequacy of the discretization (Section 6.2.3.1), and the downstream applicability of the model results (i.e. 
wall heat fractions) to initialize post-closure thermal models (Section 6.6). (3) To satisfy the remainder of KTI agreement TEF 2.07 (Reamer and Williams 2001b). Specifically to provide the results of post-test ANSYS modeling of the Atlas Facility forced convection tests (Section 7.1.2). This portion of the model report also serves as a validation exercise per AP-SIII.10Q, Models, for the ANSYS ventilation model. (4) To further satisfy KTI agreements RDTME 3.01 and 3.14 (Reamer and Williams 2001a) by providing the source documentation referred to in the KTI Letter Report, ''Effect of Forced Ventilation on Thermal-Hydrologic Conditions in the Engineered Barrier System and Near Field Environment'' (Williams 2002). Specifically to provide the results of the MULTIFLUX model which simulates the coupled processes of heat and mass transfer in and around waste emplacement drifts during periods of forced ventilation. This portion of the model report is presented as an Alternative Conceptual Model with a numerical application, and also provides corroborative results used for model validation purposes (Section 6.3 and 6.4).
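The wall heat fraction defined in the report is simply the complement of the ventilation heat-removal fraction. A trivial sketch of that relationship (Python for illustration; the efficiency values below are hypothetical, not data from the Ventilation Model):

```python
def wall_heat_fraction(removal_fraction):
    # Fraction of decay heat conducted into the surrounding rock mass:
    # the complement of the fraction carried away by the ventilation air.
    if not 0.0 <= removal_fraction <= 1.0:
        raise ValueError("removal fraction must lie in [0, 1]")
    return 1.0 - removal_fraction

# Hypothetical heat-removal fractions at a few emplacement times (years)
removal_by_year = {1: 0.90, 10: 0.85, 50: 0.80}
wall_by_year = {yr: wall_heat_fraction(f) for yr, f in removal_by_year.items()}
```

Downstream thermal models would consume values like `wall_by_year` as boundary-condition scalings on the decay-heat source.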
New Fission Fragment Distributions and r-Process Origin of the Rare-Earth Elements
NASA Astrophysics Data System (ADS)
Goriely, S.; Sida, J.-L.; Lemaître, J.-F.; Panebianco, S.; Dubray, N.; Hilaire, S.; Bauswein, A.; Janka, H.-T.
2013-12-01
Neutron star (NS) merger ejecta offer a viable site for the production of heavy r-process elements with nuclear mass numbers A ≳ 140. The crucial role of fission recycling is responsible for the robustness of this site against many astrophysical uncertainties, but calculations sensitively depend on nuclear physics. In particular, the fission fragment yields determine the creation of 110 ≲ A ≲ 170 nuclei. Here, we apply a new scission-point model, called SPY, to derive the fission fragment distribution (FFD) of all relevant neutron-rich, fissioning nuclei. The model predicts a doubly asymmetric FFD in the abundant A ≈ 278 mass region that is responsible for the final recycling of the fissioning material. Using ejecta conditions based on relativistic NS merger calculations, we show that this specific FFD leads to a production of the A ≈ 165 rare-earth peak that is nicely compatible with the abundance patterns in the Sun and metal-poor stars. This new finding further strengthens the case of NS mergers as a possible dominant origin of r nuclei with A ≳ 140.
Model Selection for Geostatistical Models
Hoeting, Jennifer A.; Davis, Richard A.; Merton, Andrew A.; Thompson, Sandra E.
2006-02-01
We consider the problem of model selection for geospatial data. Spatial correlation is typically ignored in the selection of explanatory variables and this can influence model selection results. For example, the inclusion or exclusion of particular explanatory variables may not be apparent when spatial correlation is ignored. To address this problem, we consider the Akaike Information Criterion (AIC) as applied to a geostatistical model. We offer a heuristic derivation of the AIC in this context and provide simulation results that show that using AIC for a geostatistical model is superior to the often used approach of ignoring spatial correlation in the selection of explanatory variables. These ideas are further demonstrated via a model for lizard abundance. We also employ the principle of minimum description length (MDL) to variable selection for the geostatistical model. The effect of sampling design on the selection of explanatory covariates is also explored.
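The AIC comparison described above can be sketched for a Gaussian geostatistical model with an exponential covariance. This is an illustrative reconstruction, not the authors' code: the covariance form, the parameter names, and the parameter count k (regression coefficients plus two covariance parameters) are assumptions.

```python
import numpy as np

def exp_cov(coords, sigma2, phi, nugget=1e-6):
    # Exponential spatial covariance: C(h) = sigma2 * exp(-h / phi),
    # with a small nugget for numerical stability.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sigma2 * np.exp(-d / phi) + nugget * np.eye(len(coords))

def aic(y, X, coords, sigma2, phi):
    # AIC = -2 log L + 2k for a Gaussian model with spatially
    # correlated errors; beta is estimated by generalized least squares.
    C = exp_cov(coords, sigma2, phi)
    Ci = np.linalg.inv(C)
    beta = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)
    r = y - X @ beta
    _, logdet = np.linalg.slogdet(C)
    n = len(y)
    logL = -0.5 * (n * np.log(2 * np.pi) + logdet + r @ Ci @ r)
    k = X.shape[1] + 2  # regression coefficients + (sigma2, phi)
    return -2 * logL + 2 * k
```

Comparing this AIC across candidate sets of explanatory variables (columns of X) is the selection procedure the abstract contrasts with ignoring spatial correlation.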
ERIC Educational Resources Information Center
Oh, Phil Seok; Oh, Sung Jin
2013-01-01
Modeling in science has been studied by education researchers for decades and is now being applied broadly in school. It is among the scientific practices featured in the "Next Generation Science Standards" ("NGSS") (Achieve Inc. 2013). This article describes modeling activities in an extracurricular science club in a high…
ERIC Educational Resources Information Center
Ballard, W.L.
1968-01-01
The article discusses models of synchronic and diachronic phonology and suggests changes in them. The basic generative model of phonology is outlined with the author's reinterpretations. The systematic phonemic level is questioned in terms of its unreality with respect to linguistic performance and its lack of validity with respect to historical…
NASA Technical Reports Server (NTRS)
Bardina, Jorge E.
1995-01-01
The objective of this work is to develop, verify, and incorporate the baseline two-equation turbulence models which account for the effects of compressibility into the three-dimensional Reynolds averaged Navier-Stokes (RANS) code and to provide documented descriptions of the models and their numerical procedures so that they can be implemented into 3-D CFD codes for engineering applications.
ERIC Educational Resources Information Center
Budiansky, Stephen
1980-01-01
This article discusses the need for more accurate and complete input data and field verification of the various models of air pollutant dispension. Consideration should be given to changing the form of air quality standards based on enhanced dispersion modeling techniques. (Author/RE)
Perumalla, Kalyan S
2007-01-01
A computer software-based model is typically designed to produce a trace of system evolution over time. The actual process of computing the model state and producing the state values as the simulation time is advanced is called model execution. Models could be designed with a specific execution technique in mind, or could be generally amenable to multiple different execution techniques. Two popular methods that are used to execute models are: time-stepped method and discrete-event method. Each of these methods could in turn be executed either sequentially (on a single processor), or in parallel (using multiple processors concurrently). In this chapter, we describe the time-stepped and discrete event execution methods and outline some of the common approaches to their sequential and parallel execution. Execution concepts common to the methods are described followed by implementation details of the methods.
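The two execution methods described above can be contrasted in a few lines. Python is used for illustration; the function names and the (timestamp, kind, data) event representation are hypothetical, not from the chapter.

```python
import heapq

def time_stepped(model_step, state, t_end, dt):
    # Time-stepped execution: advance the model state by a fixed
    # increment dt until the end time is reached.
    t = 0.0
    while t < t_end:
        state = model_step(state, t, dt)
        t += dt
    return state

def discrete_event(handlers, initial_events, t_end):
    # Discrete-event execution: jump directly from one scheduled event
    # to the next; each handler may schedule further events.
    queue = list(initial_events)
    heapq.heapify(queue)
    log = []
    while queue:
        t, kind, data = heapq.heappop(queue)
        if t > t_end:
            break
        log.append((t, kind))
        for ev in handlers[kind](t, data):
            heapq.heappush(queue, ev)
    return log
```

The discrete-event loop does no work between events, which is the usual argument for it when state changes are sparse in simulation time; parallel variants of both loops must additionally synchronize timestamps across processors.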
Veronica J. Rutledge
2013-01-01
The absence of industrial scale nuclear fuel reprocessing in the U.S. has precluded the necessary driver for developing the advanced simulation capability now prevalent in so many other countries. Thus, it is essential to model complex series of unit operations to simulate, understand, and predict inherent transient behavior and feedback loops. A capability of accurately simulating the dynamic behavior of advanced fuel cycle separation processes will provide substantial cost savings and many technical benefits. The specific fuel cycle separation process discussed in this report is the off-gas treatment system. The off-gas separation consists of a series of scrubbers and adsorption beds to capture constituents of interest. Dynamic models are being developed to simulate each unit operation involved so each unit operation can be used as a stand-alone model and in series with multiple others. Currently, an adsorption model has been developed within Multi-physics Object Oriented Simulation Environment (MOOSE) developed at the Idaho National Laboratory (INL). Off-gas Separation and REcoverY (OSPREY) models the adsorption of off-gas constituents for dispersed plug flow in a packed bed under non-isothermal and non-isobaric conditions. Inputs to the model include gas, sorbent, and column properties, equilibrium and kinetic data, and inlet conditions. The simulation outputs component concentrations along the column length as a function of time from which breakthrough data is obtained. The breakthrough data can be used to determine bed capacity, which in turn can be used to size columns. It also outputs temperature along the column length as a function of time and pressure drop along the column length. Experimental data and parameters were input into the adsorption model to develop models specific for krypton adsorption. The same can be done for iodine, xenon, and tritium. The model will be validated with experimental breakthrough curves. 
Customers will be given access to OSPREY to use and evaluate the model.
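The breakthrough behavior described above can be illustrated with a greatly simplified 1-D adsorption column. This is not OSPREY: the real model treats dispersed plug flow under non-isothermal, non-isobaric conditions, while the sketch below uses isothermal tanks-in-series transport with a linear-driving-force uptake toward a linear isotherm q* = K·c, and all parameter values are arbitrary.

```python
import numpy as np

def breakthrough_curve(n_cells=10, tau=1.0, k=0.5, K=5.0, c_in=1.0,
                       dt=0.01, t_end=400.0):
    # Column split into well-mixed cells in series (crude stand-in for
    # dispersed plug flow). Explicit Euler stepping of:
    #   dc/dt = (c_upstream - c)/tau - k*(K*c - q)
    #   dq/dt = k*(K*c - q)
    c = np.zeros(n_cells)   # gas-phase concentration per cell
    q = np.zeros(n_cells)   # adsorbed loading per cell
    outlet = []
    for _ in range(int(t_end / dt)):
        upstream = np.concatenate(([c_in], c[:-1]))
        rate = k * (K * c - q)
        c = c + dt * ((upstream - c) / tau - rate)
        q = q + dt * rate
        outlet.append(c[-1])
    return np.array(outlet)
```

The outlet concentration stays near zero while the bed adsorbs, then rises toward the inlet value: the S-shaped breakthrough curve from which bed capacity is sized.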
Braby, L A
1991-01-01
The biological effects of ionizing radiation exposure are the result of a complex sequence of physical, chemical, biochemical, and physiological interactions which are modified by characteristics of the radiation, the timing of its administration, the chemical and physical environment, and the nature of the biological system. However, it is generally agreed that the health effects in animals originate from changes in individual cells, or possibly small groups of cells, and that these cellular changes are initiated by ionizations and excitations produced by the passage of charged particles through the cells. One way to begin a search for an understanding of health effects of radiation is through the development of phenomenological models of the response. Many models have been presented and tested in the slowly evolving process of characterizing cellular response. Different phenomena (LET dependence, dose rate effect, oxygen effect etc.) and different end points (cell survival, aberration formation, transformation, etc.) have been observed, and no single model has been developed to cover all of them. Instead, a range of models covering different end points and phenomena have developed in parallel. Many of these models employ similar assumptions about some underlying processes while differing about the nature of others. An attempt is made to organize many of the models into groups with similar features and to compare the consequences of those features with the actual experimental observations. It is assumed that by showing that some assumptions are inconsistent with experimental observations, the job of devising and testing mechanistic models can be simplified. PMID:1811477
Modular Modeling System Model Builder
McKim, C.S.; Matthews, M.T.
1996-12-31
The latest release of the Modular Modeling System (MMS) Model Builder adds still more time-saving features to an already powerful MMS dynamic-simulation tool set. The Model Builder takes advantage of 32-bit architecture within the Microsoft Windows 95/NT™ operating systems to better integrate a mature library of power-plant components. In addition, the MMS library of components can now be modified and extended with a new tool named MMS CompGen™. The MMS Model Builder allows the user to quickly build a graphical schematic representation of a plant by selecting from a library of predefined power-plant components to dynamically simulate their operation. In addition, each component has a calculation subroutine stored in a dynamic-link library (DLL), which facilitates the determination of a steady-state condition and performance of routine calculations for the component. These calculations, termed auto-parameterization, help avoid repetitive and often tedious hand calculations for model initialization. In striving to meet the needs for large models and increase user productivity, the MMS Model Builder has been completely revamped to make power plant model creation and maintainability easier and more efficient.
Energy models characterize the energy system, its evolution, and its interactions with the broader economy. The energy system consists of primary resources, including both fossil fuels and renewables; power plants, refineries, and other technologies to process and convert these r...
Daniel, David J; Mc Pherson, Allen; Thorp, John R; Barrett, Richard; Clay, Robert; De Supinski, Bronis; Dube, Evi; Heroux, Mike; Janssen, Curtis; Langer, Steve; Laros, Jim
2011-01-14
A programming model is a set of software technologies that support the expression of algorithms and provide applications with an abstract representation of the capabilities of the underlying hardware architecture. The primary goals are productivity, portability and performance.
Curtis, S.B.
1990-09-01
Several models and theories are reviewed that incorporate the idea of radiation-induced lesions (repairable and/or irreparable) that can be related to molecular lesions in the DNA molecule. Usually the DNA double-strand or chromatin break is suggested as the critical lesion. In the models, the shoulder on the low-LET survival curve is hypothesized as being due to one (or more) of the following three mechanisms: (1) "interaction" of lesions produced by statistically independent particle tracks; (2) nonlinear (i.e., linear-quadratic) increase in the yield of initial lesions, and (3) saturation of repair processes at high dose. Comparisons are made between the various approaches. Several significant advances in model development are discussed; in particular, a description of the matrix formulation of the Markov versions of the RMR and LPL models is given. The more advanced theories have incorporated statistical fluctuations in various aspects of the energy-loss and lesion-formation process. An important direction is the inclusion of physical and chemical processes into the formulations by incorporating relevant track structure theory (Monte Carlo track simulations) and chemical reactions of radiation-induced radicals. At the biological end, identification of repair genes and how they operate as well as a better understanding of how DNA misjoinings lead to lethal chromosome aberrations are needed for appropriate inclusion into the theories. More effort is necessary to model the complex end point of radiation-induced carcinogenesis.
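Mechanism (2) above, the linear-quadratic yield of initial lesions, corresponds to the standard LQ survival expression S(D) = exp(-(αD + βD²)), where α and β are free parameters fitted to data. A minimal sketch (Python for illustration; the parameter values used below are arbitrary):

```python
import math

def lq_survival(dose, alpha, beta):
    # Linear-quadratic cell-survival model: S(D) = exp(-(alpha*D + beta*D^2)).
    # The quadratic term produces the "shoulder" on the low-LET survival
    # curve that the reviewed models seek to explain mechanistically.
    return math.exp(-(alpha * dose + beta * dose ** 2))
```

Plotting log S against dose shows the characteristic downward-curving (shouldered) shape whenever beta > 0; beta = 0 recovers a purely exponential survival curve.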
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1990-01-01
Although powerful computers have allowed complex physical and manmade hardware systems to be modeled successfully, we have encountered persistent problems with the reliability of computer models for systems involving human learning, human action, and human organizations. This is not a misfortune; unlike physical and manmade systems, human systems do not operate under a fixed set of laws. The rules governing the actions allowable in the system can be changed without warning at any moment, and can evolve over time. That the governing laws are inherently unpredictable raises serious questions about the reliability of models when applied to human situations. In these domains, computers are better used, not for prediction and planning, but for aiding humans. Examples are systems that help humans speculate about possible futures, offer advice about possible actions in a domain, systems that gather information from the networks, and systems that track and support work flows in organizations.
Woosley, S.E.; Weaver, T.A.
1980-01-01
Recent progress in understanding the observed properties of Type I supernovae as a consequence of the thermonuclear detonation of white dwarf stars and the ensuing decay of the ⁵⁶Ni produced therein is reviewed. Within the context of this model for Type I explosions and the 1978 model for Type II explosions, the expected nucleosynthesis and gamma-line spectra from both kinds of supernovae are presented. Finally, a qualitatively new approach to the problem of massive star death and Type II supernovae based upon a combination of rotation and thermonuclear burning is discussed.
ERIC Educational Resources Information Center
Ebert, James R.; Elliott, Nancy A.; Hurteau, Laura; Schulz, Amanda
2004-01-01
Students must understand the fundamental process of convection before they can grasp a wide variety of Earth processes, many of which may seem abstract because of the scales on which they operate. Presentation of a very visual, concrete model prior to instruction on these topics may facilitate students' understanding of processes that are largely…
Although air quality models have been applied historically to address issues specific to ambient air quality standards (i.e., one criteria pollutant at a time) or welfare (e.g.. acid deposition or visibility impairment). they are inherently multipollutant based. Therefore. in pri...
NASA Astrophysics Data System (ADS)
Brdiczka, Oliver; Crowley, James L.; Cuřín, Jan; Kleindienst, Jan
CHIL services are intended to anticipate the needs of their users. An important step toward this is to model and understand human behavior. Human activity can be sensed and recognized (as described in Chapter 11). However, a higher-level representation of human actions and human relationships (social context) is necessary to effectively describe human behavior and detect human needs.
ERIC Educational Resources Information Center
Goodwyn, Lauren; Salm, Sarah
2007-01-01
Teaching the anatomy of the muscle system to high school students can be challenging. Students often learn about muscle anatomy by memorizing information from textbooks or by observing plastic, inflexible models. Although these mediums help students learn about muscle placement, the mediums do not facilitate understanding regarding integration of…
Ensemble forecasting has been used for operational numerical weather prediction in the United States and Europe since the early 1990s. An ensemble of weather or climate forecasts is used to characterize the two main sources of uncertainty in computer models of physical systems: ...
A. Alsaed
2004-09-14
The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models as given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural, rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential for various in-package and external configurations, to calculate lower-bound tolerance limit (LBTL) values, and to determine range-of-applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of this analysis is to document the criticality computational method. 
The criticality computational method will be used for evaluating the criticality potential of configurations of fissionable materials (in-package and external to the waste package) within the repository at Yucca Mountain, Nevada for all waste packages/waste forms. The criticality computational method is also applicable to preclosure configurations. The criticality computational method is a component of the methodology presented in ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003). How the criticality computational method fits in the overall disposal criticality analysis methodology is illustrated in Figure 1 (YMP 2003, Figure 3). This calculation will not provide direct input to the total system performance assessment for license application. It is to be used as necessary to determine the criticality potential of configuration classes as determined by the configuration probability analysis of the configuration generator model (BSC 2003a).
Models, Part V: Composition Models.
ERIC Educational Resources Information Center
Callison, Daniel
2003-01-01
Describes four models: The Authoring Cycle, a whole language approach that reflects the inquiry process; I-Search, an approach to research that uses the power of student interests; Cultural Celebration, using local heritage topics; and Science Lab Report, for the composition of a lab report. (LRW)
NASA Astrophysics Data System (ADS)
Herrmann, H. J.; Kun, F.
2007-12-01
Fibre models have been introduced as simple models to describe failure. They are based on the probability distribution of broken fibres. The load redistribution after a fibre yields can be global or local, and the first case can often be solved analytically. We will present an interpolation between the local and the global case and apply it to experimental situations like the compression of granular packings. Introducing viscoelastic fibres makes it possible to describe the creep of wood. It is even possible to deal analytically with a gradual degradation of fibres and to consider damage as well as healing. In this way Basquin's law of fatigue can be reproduced, and new universalities concerning the histograms of bursts and waiting times can be uncovered.
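The global (equal-load-sharing) limit mentioned above is the analytically tractable case. A short numerical sketch, assuming uniformly distributed fibre strength thresholds on [0, 1] purely for illustration:

```python
import numpy as np

def critical_stress(thresholds):
    # Equal-load-sharing fibre bundle: sort thresholds ascending. After
    # the k weakest fibres break, a total load F is shared by N-k fibres,
    # so the bundle survives as long as F <= thresholds[k] * (N - k) for
    # some k; the maximum over k gives the critical total load.
    th = np.sort(np.asarray(thresholds))
    n = th.size
    return (th * (n - np.arange(n))).max() / n  # critical stress per fibre

rng = np.random.default_rng(42)
sigma_c = critical_stress(rng.uniform(0.0, 1.0, 200_000))
```

For uniform thresholds on [0, 1] the sustained stress at strain x is x(1 - x), so the analytic critical stress is 1/4, which the sample estimate approaches for large N.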
NASA Technical Reports Server (NTRS)
Dill, David L.
1995-01-01
Automatic formal verification methods for finite-state systems, also known as model-checking, successfully reduce labor costs since they are mostly automatic. Model checkers explicitly or implicitly enumerate the reachable state space of a system whose behavior is described implicitly, perhaps by a program or a collection of finite automata. Simple properties, such as mutual exclusion or absence of deadlock, can be checked by inspecting individual states. More complex properties, such as lack of starvation, require search for cycles in the state graph with particular properties. Specifications to be checked may consist of built-in properties (such as deadlock or 'unspecified receptions' of messages), another program or implicit description to be compared using a simulation, bisimulation, or language-inclusion relation, or an assertion in one of several temporal logics. Finite-state verification tools are beginning to have a significant impact in commercial designs. There are many success stories of verification tools finding bugs in protocols or hardware controllers. In some cases, these tools have been incorporated into design methodology. Research in finite-state verification has been advancing rapidly, and is showing no signs of slowing down. Recent results include probabilistic algorithms for verification, exploitation of symmetry and independent events, and the use of symbolic representations for Boolean functions and systems of linear inequalities. One of the most exciting areas for further research is the combination of model-checking with theorem-proving methods.
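Explicit enumeration of the reachable state space, as described above for checking simple per-state properties like absence of deadlock, reduces to a graph search. A minimal sketch (Python for illustration; production model checkers add compact state encodings, hashing, and symbolic representations):

```python
from collections import deque

def explore(initial, successors):
    # Explicit-state reachability: enumerate every state reachable from
    # `initial` via the `successors` function, and flag any reachable
    # state with no outgoing transitions (a deadlock).
    seen = {initial}
    frontier = deque([initial])
    deadlocks = []
    while frontier:
        s = frontier.popleft()
        nxt = successors(s)
        if not nxt:
            deadlocks.append(s)
        for t in nxt:
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen, deadlocks
```

Cycle-based properties such as lack of starvation need a stronger search (e.g. detecting accepting cycles) on top of this same reachable-state enumeration.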
Plimpton, Steven James; Heffernan, Julieanne; Sasaki, Darryl Yoshio; Frischknecht, Amalie Lucile; Stevens, Mark Jackson; Frink, Laura J. Douglas
2005-11-01
Understanding the properties and behavior of biomembranes is fundamental to many biological processes and technologies. Microdomains in biomembranes or ''lipid rafts'' are now known to be an integral part of cell signaling, vesicle formation, fusion processes, protein trafficking, and viral and toxin infection processes. Understanding how microdomains form, how they depend on membrane constituents, and how they act not only has biological implications, but also will impact Sandia's effort in development of membranes that structurally adapt to their environment in a controlled manner. To provide such understanding, we created physically-based models of biomembranes. Molecular dynamics (MD) simulations and classical density functional theory (DFT) calculations using these models were applied to phenomena such as microdomain formation, membrane fusion, pattern formation, and protein insertion. Because lipid dynamics and self-organization in membranes occur on length and time scales beyond atomistic MD, we used coarse-grained models of double tail lipid molecules that spontaneously self-assemble into bilayers. DFT provided equilibrium information on membrane structure. Experimental work was performed to further help elucidate the fundamental membrane organization principles.
NASA Astrophysics Data System (ADS)
Holmes, Jon L.
1999-06-01
Molecular modeling has trickled down from the realm of pharmaceutical and research laboratories into the realm of undergraduate chemistry instruction. It has opened avenues for the visualization of chemical concepts that previously were difficult or impossible to convey. I am sure that many of you have developed exercises using the various molecular modeling tools. It is the desire of this Journal to become an avenue for you to share these exercises among your colleagues. It is to this end that Ron Starkey has agreed to edit such a column and to publish not only the description of such exercises, but also the software documents they use. The WWW is the obvious medium to distribute this combination and so accepted submissions will appear online as a feature of JCE Internet. Typical molecular modeling exercise: finding conformation energies. Molecular Modeling Exercises and Experiments is the latest feature column of JCE Internet, joining Conceptual Questions and Challenge Problems, Hal's Picks, and Mathcad in the Chemistry Curriculum. JCE Internet continues to seek submissions in these areas of interest and submissions of general interest. If you have developed materials and would like to submit them, please see our Guide to Submissions for more information. The Chemical Education Resource Shelf, Equipment Buyers Guide, and WWW Site Review would also like to hear about chemistry textbooks and software, equipment, and WWW sites, respectively. Please consult JCE Internet Features to learn more about these resources at JCE Online. Email Announcements Would you like to be informed by email when the latest issue of the Journal is available online? when a new JCE Software title is shipping? when a new JCE Internet article has been published or is available for Open Review? when your subscription is about to expire? A new feature of JCE Online makes this possible. Visit our Guestbook to learn how. 
When you submit the form on this page, which includes your email address, you may choose to receive an email notice about a Journal event that interests you. Currently such events include availability of the latest issue of the Journal at JCE Online, expiration of your Journal subscription, shipment of a new JCE Software issue, publication of a new JCE Internet article or its availability for Open Review, and other announcements from the Journal. You may choose any number of these options independently. JCE Online Guestbook. Your Privacy JCE Online promises to you that we will not use the information that you provide in our Guestbook for anything other than our own internal information. We will not provide this information to third parties. We will use the information you provide only in our effort to help make the JCE serve you better. You only need to provide your email address to take advantage of this service; the other information you provide is optional. Molecular Modeling Exercises and Experiments: Mission Statement We are seeking in this JCE Internet feature column to publish molecular modeling exercises and experiments that have been used successfully in undergraduate instruction. The exercises will be published here on JCE Internet. An abstract of published submissions will appear in print in the Journal of Chemical Education. Acceptable exercises could be used in either a chemistry laboratory or a chemistry computer laboratory. The exercise could cover any area of chemistry, but should be limited to undergraduate instructional applications. We envision that most of the exercises/experiments will utilize one of the popular instructional molecular modeling software programs (e.g. HyperChem, Spartan, CAChe, PC Model). Exercises that are specific to a particular modeling program are acceptable, but those usable with any modeling program are preferred. Ideally the exercises/experiments will be of the type where the "correct"answer is not obvious so
Vincent, Julian F V
2003-01-01
Biomimetics is seen as a path from biology to engineering. The only path from engineering to biology in current use is the application of engineering concepts and models to biological systems. However, there is another pathway: the verification of biological mechanisms by manufacture, leading to an iterative process between biology and engineering in which the new understanding that the engineering implementation of a biological system can bring is fed back into biology, allowing a more complete and certain understanding and the possibility of further revelations for application in engineering. This is a pathway as yet unformalized, and one that offers the possibility that engineers can also be scientists. PMID:14561351
Self-consistent calculations of fission barriers in the Fm region
Pomorski, Krzysztof
The fission process is described in a microscopic way up to the scission point. The analysis is based on the constrained Hartree-Fock approach. Structure in the spontaneous fission half-lives of heavy nuclei was found experimentally (see, e.g., the review articles cited in [1]).
Modeling uncertainty: quicksand for water temperature modeling
Bartholow, John M.
2003-01-01
Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper will address important components of uncertainty in modeling water temperatures, and discuss several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the references, are meant to supplement the presentation given at this conference.
Analysis Meets Modeling
Peterson, James K
Lecture notes on analysis and modeling over directed graphs of computational objects (DGs): simple sigmoidal neurons, integrate-and-fire neurons, Hodgkin-Huxley neurons, implementation of abstract neuron circuits, and West Nile virus models (model setup and dynamics). Subtitle: The Intersection of Mathematics, Computing…
Pre-Modeling Ensures Accurate Solid Models
ERIC Educational Resources Information Center
Gow, George
2010-01-01
Successful solid modeling requires a well-organized design tree. The design tree is a list of all the object's features and the sequential order in which they are modeled. The solid-modeling process is faster and less prone to modeling errors when the design tree is a simple and geometrically logical definition of the modeled object. Few high…
Modelling intonational structure using hidden markov models.
Wright, Helen; Taylor, Paul A
1997-01-01
A method is introduced for using hidden Markov models (HMMs) to model intonational structure. HMMs are probabilistic and can capture the variability in structure which previous finite state network models lack. We show ...
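The mechanics behind such a model can be sketched with a toy example. This is not the authors' system: the two intonational states, the rise/fall observations, and all probabilities below are invented for illustration; the sketch simply shows Viterbi decoding of the most likely state sequence under an HMM.

```python
import numpy as np

# Toy HMM over intonational event labels (all values hypothetical).
states = ["accent", "boundary"]
obs_symbols = ["rise", "fall"]

pi = np.array([0.6, 0.4])          # initial state probabilities
A = np.array([[0.7, 0.3],          # transition probabilities
              [0.4, 0.6]])
B = np.array([[0.8, 0.2],          # emission probabilities (state x symbol)
              [0.1, 0.9]])

def viterbi(obs):
    """Most likely state-label sequence for a sequence of observation indices."""
    T, N = len(obs), len(states)
    delta = np.zeros((T, N))       # best path probability ending in each state
    psi = np.zeros((T, N), dtype=int)  # backpointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        for j in range(N):
            scores = delta[t - 1] * A[:, j]
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] * B[j, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):  # follow backpointers to recover the path
        path.append(int(psi[t, path[-1]]))
    return [states[s] for s in reversed(path)]
```

In the paper's setting the observations would be acoustic or prosodic features rather than symbolic labels, and the probabilities would be trained from labeled speech.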
Al Hanbali, Ahmad
Slides for "Modeling the Neocortex with Meso-scale Models and Population Models," Sid Visser, January 14, 2010. Outline: background; definition of the problem; different models (meso-scale and population); analysis; comparison and conclusions.
The Mayo-Lewis Copolymerization Model
Ponomarenko, Vadim
Slides by Vadim Ponomarenko (San Diego State University) on the Mayo-Lewis copolymerization model, contrasting old and new models of copolymerization chemistry; available at www-rohan.sdsu.edu/vadim/mayolewis.pdf.
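The model's central result, the Mayo-Lewis (copolymer) equation, predicts the instantaneous copolymer composition from the monomer feed and the reactivity ratios. A minimal sketch (the reactivity-ratio values used below are illustrative, not taken from the slides):

```python
def mayo_lewis(f1, r1, r2):
    """Instantaneous mole fraction F1 of monomer 1 in the copolymer,
    given its feed mole fraction f1 and reactivity ratios r1, r2."""
    f2 = 1.0 - f1
    num = r1 * f1**2 + f1 * f2
    den = r1 * f1**2 + 2 * f1 * f2 + r2 * f2**2
    return num / den
```

For an ideal random copolymerization (r1 = r2 = 1) the copolymer composition equals the feed composition, and for a strictly alternating system (r1 = r2 = 0) it is 0.5 regardless of feed.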
Analytic Modeling: Birth-Death Model
Shihada, Basem
Slides on analytic modeling with the birth-death model: a review of random variables and the exponential distribution, followed by the birth-death model as a queuing system with a single server, state-dependent arrival and service rates, and the definition of a birth-death process.
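For the simplest birth-death queue, the M/M/1 system with constant birth (arrival) rate lam and death (service) rate mu, the stationary distribution is geometric: p_n = (1 - rho) * rho^n with rho = lam/mu < 1. A minimal sketch (the rate values in the usage are illustrative):

```python
def mm1_state_probs(lam, mu, n_max):
    """Steady-state probabilities p_0..p_{n_max} of an M/M/1 queue,
    i.e. a birth-death process with constant birth rate lam and
    death rate mu. A stationary distribution requires lam < mu."""
    if lam >= mu:
        raise ValueError("queue is unstable: need lam < mu")
    rho = lam / mu
    return [(1 - rho) * rho**n for n in range(n_max + 1)]

# Example: arrivals at rate 1, service at rate 2 => rho = 0.5
probs = mm1_state_probs(1.0, 2.0, 50)
```

The mean number in system then follows as rho/(1 - rho), a standard birth-death result.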
John A. Schroeder
2012-06-01
The Standardized Plant Analysis Risk (SPAR) models for the U.S. commercial nuclear power plants currently have very limited instrumentation and control (I&C) modeling [1]. Most of the I&C components in the operating-plant SPAR models are related to the reactor protection system. This was identified as a finding during the industry peer review of SPAR models. While the Emergency Safeguard Features (ESF) actuation and control system was incorporated into the Peach Bottom Unit 2 SPAR model in a recent effort [2], various approaches to detailed I&C modeling in the other SPAR models, and the resources each would require, are investigated here.
CISNET: Standardized Model Documents
Modeling is a complex endeavor, and often it is very difficult to reconcile results from different models. To aid in this process of model description and comparison, CISNET has developed and implemented standardized model documentation. Model profiles are standardized descriptions that facilitate the comparison of models and their results. Users can read documentation about a single model or read side-by-side descriptions that contrast how models address different components of the process.
Comparisons of debris environment model breakup models
NASA Astrophysics Data System (ADS)
Jonas, F.; Yates, K.; Evans, R.
1993-01-01
This paper presents a comparison of current spacecraft breakup models used in orbital (space) debris computational environment models. The breakup models compared come from the NASA EVOLVE long-term debris evolution model, the IMPACT code developed by Aerospace Corp., and the Fragmentation Algorithms for Satellite Targets (FAST) code developed by Kaman Sciences. The comparison shows the methodologies and results obtained for each model, such as mass versus fragment-number distributions. Implications for debris cloud formation are discussed in terms of the environments produced. No attempt is made to recommend any one model over the others, as each was designed and employed for specific purposes in the environment models it is part of or contributes to. The comparisons are intended to provide researchers with both quantitative and qualitative information on the models for use in their own research activities.
Model selection for logistic regression models
NASA Astrophysics Data System (ADS)
Duller, Christine
2012-09-01
Model selection for logistic regression models decides which of some given potential regressors have an effect and hence should be included in the final model. A second question of interest is whether a certain factor is heterogeneous among some subsets, i.e. whether the model should include a random intercept or not. In this paper these questions are answered with classical as well as with Bayesian methods. The applications show some results of recent research projects in medicine and business administration.
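One classical route to the first question is information-criterion comparison of candidate regressor sets. The sketch below is not the paper's procedure: it fits logistic regressions by Newton's method in plain NumPy on synthetic data (all variable names and parameters are invented) and compares AIC values for nested models.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton's method.
    X must include an intercept column. Returns (coefficients, log-likelihood)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)                      # score vector
        hess = X.T @ (X * (p * (1 - p))[:, None]) # observed information
        beta += np.linalg.solve(hess, grad)
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return beta, ll

def aic(ll, k):
    """Akaike information criterion for k fitted parameters."""
    return 2 * k - 2 * ll

# Synthetic data: y depends on x1 but not on x2 (illustrative setup).
rng = np.random.default_rng(0)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x1)))
y = (rng.random(n) < p_true).astype(float)

ones = np.ones(n)
_, ll_null = fit_logistic(np.column_stack([ones]), y)
_, ll_x1 = fit_logistic(np.column_stack([ones, x1]), y)
_, ll_full = fit_logistic(np.column_stack([ones, x1, x2]), y)
aic_null, aic_x1, aic_full = aic(ll_null, 1), aic(ll_x1, 2), aic(ll_full, 3)
```

With a genuinely predictive regressor, the model including it attains a much lower AIC than the intercept-only model; the heterogeneity (random intercept) question requires mixed-model machinery beyond this sketch.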
ERIC Educational Resources Information Center
Frees, Edward W.; Kim, Jee-Seon
2006-01-01
Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…
ERIC Educational Resources Information Center
Willden, Jeff
2001-01-01
"Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…
Mons-Hainaut, Université de
Seminar slides, "Model Theory and Quantum Groups," by Sonia L'Innocente (University of Mons), 40 slides, beginning with the seminar's aim.
Model solution State variable model: differential equation
Limburg, Karin E.
Lecture notes (2/26/2014) on model solution for state variable models expressed as differential equations: a state variable model describes a rate of change; one looks up the general solution to the differential equation and solves for the initial and boundary conditions. (A quoted remark questions merely "repeating the theory on how to integrate a differential equation.")
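The solution procedure the notes describe can also be checked numerically: integrate the rate-of-change equation forward from the initial condition and compare with the analytic solution. The example below uses exponential growth dN/dt = rN (the values of r and N0 are illustrative):

```python
import math

def euler(f, y0, t_end, n_steps):
    """Forward-Euler integration of dy/dt = f(y) from t = 0 to t_end."""
    dt = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y += dt * f(y)
    return y

r, N0 = 0.5, 100.0
numeric = euler(lambda N: r * N, N0, t_end=2.0, n_steps=20000)
analytic = N0 * math.exp(r * 2.0)   # general solution N(t) = N0 * e^(rt)
```

Refining the step size shrinks the Euler error at first order in dt, which is exactly the "solve, then verify against the general solution" loop the notes advocate.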
Modeling transient rootzone salinity (SWS Model)
Technology Transfer Automated Retrieval System (TEKTRAN)
The combined effects of water quality criteria for irrigation, water and ion processes in soils, and plant and soil response are sufficiently complex that adequate analysis requires computer models. Models for management are also needed, but these models must consider that the input requirements must be reasona...
FINITE-STATE MODEL, DATAFLOW MODEL, ENTITY-RELATIONSHIP MODEL
Kundu, Sukhamay
Lecture notes on the finite-state, dataflow, and entity-relationship (ER) models. Sections include "Entity vs. an Attribute" (an attribute stands only in the context of an entity) and an exercise on the ER modeling constructs listed in Table 10.1 of the textbook, asking why we cannot talk about relationship types and relationship values (a relationship being an association between two or more entities).
MODEL CONSERVATION STANDARD INTRODUCTION
MODEL CONSERVATION STANDARD INTRODUCTION As directed by the Northwest Power Act, the Council has designed model conservation standards to produce all electricity savings that are cost-effective. The Council believes the measures used to achieve the model conservation standards should provide reliable savings.
Educating with Aircraft Models
ERIC Educational Resources Information Center
Steele, Hobie
1976-01-01
Described is utilization of aircraft models, model aircraft clubs, and model aircraft magazines to promote student interest in aerospace education. The addresses for clubs and magazines are included. (SL)
Modeling of geothermal systems
Bodvarsson, G.S.; Pruess, K.; Lippmann, M.J.
1985-03-01
During the last decade the use of numerical modeling for geothermal resource evaluation has grown significantly, and new modeling approaches have been developed. In this paper we present a summary of the present status in numerical modeling of geothermal systems, emphasizing recent developments. Different modeling approaches are described and their applicability discussed. The various modeling tasks, including natural-state, exploitation, injection, multi-component and subsidence modeling, are illustrated with geothermal field examples. 99 refs., 14 figs.
Geologic Framework Model Analysis Model Report
R. Clayton
2000-12-19
The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM).
The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and the repository design. These downstream models include the hydrologic flow models and the radionuclide transport models. All the models and the repository design, in turn, will be incorporated into the Total System Performance Assessment (TSPA) of the potential radioactive waste repository block and vicinity to determine the suitability of Yucca Mountain as a host for the repository. The interrelationship of the three components of the ISM and their interface with downstream uses are illustrated in Figure 2.
Glosup, J.
1992-07-23
The class of generalized linear models is extended to develop a class of nonparametric regression models known as generalized smooth models. The technique of local scoring is used to estimate a generalized smooth model, and the estimation procedure based on locally weighted regression is shown to produce local likelihood estimates. The asymptotically correct distribution of the deviance difference is derived, and its use in comparing the fits of generalized linear models and generalized smooth models is illustrated. The relationship between generalized smooth models and generalized additive models is also discussed.
Interfacing materials models with fire field models
Nicolette, V.F.; Tieszen, S.R.; Moya, J.L.
1995-12-01
For flame spread over solid materials, there has traditionally been a large technology gap between fundamental combustion research and the somewhat simplistic approaches used for practical, real-world applications. Recent advances in computational hardware and computational fluid dynamics (CFD)-based software have led to the development of fire field models. These models, when used in conjunction with material burning models, have the potential to bridge the gap between research and application by implementing physics-based engineering models in a transient, multi-dimensional tool. This paper discusses the coupling that is necessary between fire field models and burning material models for the simulation of solid material fires. Fire field models are capable of providing detailed information about the local fire environment. This information serves as an input to the solid material combustion submodel, which subsequently calculates the impact of the fire environment on the material. The response of the solid material (in terms of thermal response, decomposition, charring, and off-gassing) is then fed back into the field model as a source of mass, momentum and energy. The critical parameters which must be passed between the field model and the material burning model have been identified. Many computational issues must be addressed when developing such an interface. Some examples include the ability to track multiple fuels and species, local ignition criteria, and the need to use local grid refinement over the burning material of interest.
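The coupling loop described above can be caricatured in a few lines. This is a schematic only, not the models discussed in the paper: both submodels are one-zone toys with invented parameters, but the information exchanged (local heat flux passed to the material, fuel mass-loss rate fed back as an energy source) follows the interface described in the text.

```python
# Toy field/material coupling. All parameter values are illustrative.
def field_model(base_flux, mdot_fuel, heat_of_combustion, feedback=0.3):
    """Local incident heat flux (W/m^2): external flux plus a fraction of
    the energy released by burning the off-gassed fuel."""
    return base_flux + feedback * mdot_fuel * heat_of_combustion

def material_model(q_in, q_crit=20e3, response=1e-8):
    """Fuel mass-loss rate (kg/m^2/s): zero below a critical flux,
    linear in the excess flux above it."""
    return max(0.0, response * (q_in - q_crit))

dH = 20e6      # heat of combustion, J/kg (illustrative)
mdot = 0.0
for _ in range(50):                      # fixed-point iteration between submodels
    q = field_model(50e3, mdot, dH)      # field model -> local fire environment
    mdot = material_model(q)             # material model -> source terms fed back
```

In a real field model this exchange happens per cell and per time step, and the feedback also carries mass and momentum, but the convergence question for the coupled loop is the same.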
Scaled models, scaled frequencies, and model fitting
NASA Astrophysics Data System (ADS)
Roxburgh, Ian W.
2015-12-01
I show that given a model star of mass M, radius R, and density profile ρ(x) [x = r/R], there exists a two-parameter family of models with masses Mk, radii Rk, density profiles ρk(x) = λρ(x), and frequencies νk,nℓ = λ^(1/2) νnℓ, where λ and Rk/R are the scaling factors. These models have different internal structures, but all have the same values of the separation ratios calculated at given radial orders n, and all exactly satisfy a frequency-matching algorithm with an offset function determined as part of the fitting procedure. But they do not satisfy ratio matching at given frequencies, nor phase-shift matching. This illustrates that erroneous results may be obtained when model fitting with ratios at given n values or with frequency matching. I give examples from scaled models and from non-scaled evolutionary models.
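The λ^(1/2) frequency scaling follows from homology: oscillation frequencies scale with the dynamical frequency sqrt(GM/R^3), and multiplying the density profile by λ at radius Rk implies Mk = λ(Rk/R)^3 M. A quick numerical check (arbitrary units; the scaling-factor values are illustrative):

```python
import math

G = 1.0  # gravitational constant in arbitrary units

def dynamical_freq(M, R):
    """Characteristic oscillation frequency scale sqrt(GM/R^3)."""
    return math.sqrt(G * M / R**3)

M, R = 1.0, 1.0
lam, Rk_over_R = 2.5, 1.7          # the two free scaling factors
Rk = Rk_over_R * R
Mk = lam * (Rk / R)**3 * M         # rho_k(x) = lam * rho(x) fixes the mass

ratio = dynamical_freq(Mk, Rk) / dynamical_freq(M, R)
```

The radius factor Rk/R cancels, so the frequency ratio is sqrt(λ) whatever radius is chosen, which is why the family has two free parameters but a one-parameter frequency scaling.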
ROCK PROPERTIES MODEL ANALYSIS MODEL REPORT
Clinton Lum
2002-02-04
The purpose of this Analysis and Model Report (AMR) is to document Rock Properties Model (RPM) 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties models are intended principally for use as input to numerical physical-process modeling, such as of ground-water flow and/or radionuclide transport. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. This work was conducted in accordance with the following planning documents: WA-0344, ''3-D Rock Properties Modeling for FY 1998'' (SNL 1997); WA-0358, ''3-D Rock Properties Modeling for FY 1999'' (SNL 1999); and the technical development plan, Rock Properties Model Version 3.1 (CRWMS M&O 1999c). The Interim Change Notices (ICNs) of this AMR, ICN 02 and ICN 03, were prepared as part of activities being conducted under the Technical Work Plan, TWP-NBS-GS-000003, ''Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01'' (CRWMS M&O 2000b). The purpose of ICN 03 is to record changes in data input status due to data qualification and verification activities. These work plans describe the scope, objectives, tasks, methodology, and implementing procedures for model construction.
The work scope for this activity consists of the following: (1) Conversion of the input data (laboratory measured porosity data, x-ray diffraction mineralogy, petrophysical calculations of bound water, and petrophysical calculations of porosity) for each borehole into stratigraphic coordinates; (2) Re-sampling and merging of data sets; (3) Development of geostatistical simulations of porosity; (4) Generation of derivative property models via linear coregionalization with porosity; (5) Post-processing of the simulated models to impart desired secondary geologic attributes and to create summary and uncertainty models; and (6) Conversion of the models into real-world coordinates. The conversion to real world coordinates is performed as part of the integration of the RPM into the Integrated Site Model (ISM) 3.1; this activity is not part of the current analysis. The ISM provides a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site and consists of three components: (1) Geologic Framework Model (GFM); (2) RPM, which is the subject of this AMR; and (3) Mineralogic Model. The interrelationship of the three components of the ISM and their interface with downstream uses are illustrated in Figure 1. Figure 2 shows the geographic boundaries of the RPM and other component models of the ISM.
M. A. Wasiolek
2003-10-27
The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).
D. W. Wu
2003-07-16
The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).
ERIC Educational Resources Information Center
Linnell, Robert H.; Bottomley, Wayne N.
A description of the University of Southern California (USC) faculty model and some of the results obtained from using the model are presented. The model traces faculty cohorts from 1974 to 1984, under given policies of hiring, tenure, promotion, and retirement. The model takes into account resignations, deaths, anticipated enrollments, and costs.…
Generative Models of Disfluency
ERIC Educational Resources Information Center
Miller, Timothy A.
2010-01-01
This thesis describes a generative model for representing disfluent phenomena in human speech. This model makes use of observed syntactic structure present in disfluent speech, and uses a right-corner transform on syntax trees to model this structure in a very natural way. Specifically, the phenomenon of speech repair is modeled by explicitly…
Analog Circuits, Graphical Models,
Reyzin, Lev
Slides on analog circuits and graphical models: the motivation for the model is to represent gene regulatory networks as Boolean networks, capturing gene expressions and disruptions; in the value-injection query model [AACW '06] the network is fully controllable but only the output is observable.
NASA Astrophysics Data System (ADS)
Rahmani, Fouad Lazhar
2010-11-01
The aim of this paper is to present mathematical modelling of the spread of infection in the context of the transmission of the human immunodeficiency virus (HIV) and the acquired immune deficiency syndrome (AIDS). These models are based in part on the models suggested in the field of AIDS mathematical modelling, as reported by Isham [6].
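The simplest transmission model in this family is the SI model, dS/dt = -βSI, dI/dt = +βSI, with S and I the susceptible and infected fractions of the population. The sketch below is not one of the models of [6]; the transmission rate and initial conditions are invented for illustration, and the equations are integrated with forward Euler:

```python
def si_epidemic(beta, s0, i0, t_end, n_steps):
    """Forward-Euler integration of the basic SI transmission model:
    dS/dt = -beta*S*I, dI/dt = +beta*S*I (population fractions)."""
    dt = t_end / n_steps
    s, i = s0, i0
    for _ in range(n_steps):
        new_inf = beta * s * i * dt   # new infections this step
        s, i = s - new_inf, i + new_inf
    return s, i

s, i = si_epidemic(beta=0.5, s0=0.99, i0=0.01, t_end=40.0, n_steps=4000)
```

Because there is no recovery term, the infected fraction follows a logistic curve and tends to the whole population; realistic HIV/AIDS models add stages, recovery or death, and contact structure on top of this skeleton.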
2015-09-01
The Biomass Scenario Model (BSM) is a unique, carefully validated, state-of-the-art dynamic model of the domestic biofuels supply chain which explicitly focuses on policy issues, their feasibility, and potential side effects. It integrates resource availability, physical/technological/economic constraints, behavior, and policy. The model uses a system dynamics simulation (not optimization) to model dynamic interactions across the supply chain.
Derivative scalar coupling model versus σ-ω model
NASA Astrophysics Data System (ADS)
Zhang, Jian-Kang; Onley, D. S.
1991-11-01
The relativistic derivative scalar coupling model of Zimanyi and Moszkowski, which is based on the original σ-ω model, is investigated both for infinite symmetric nuclear matter and for finite spherical nuclei. We find that while this model yields a satisfactory compressibility in nuclear matter, and consequently gives good total binding energies for finite nuclei, it fails to give the right spin-orbit interaction in finite nuclei. Calculated results are shown for 16O and compared to the σ-ω model and experimental data; the origin of the discrepancy is discussed.
Palo, P.A.; Meggitt, D.J.; Nordell, W.J.
1983-05-01
This paper presents a summary of the development and validation of undersea cable dynamics computer models by the Naval Civil Engineering Laboratory (NCEL) under the sponsorship of the Naval Facilities Engineering Command. These models allow for the analysis of both small displacement (strumming) and large displacement (static and dynamic) deformations of arbitrarily configured cable structures. All of the large displacement models described in this paper are available to the public. This paper does not emphasize the theoretical development of the models (this information is available in other references) but emphasizes the various features of the models, the comparisons between model output and experimental data, and applications for which the models have been used.
C.F. Ahlers, H.H. Liu
2001-12-18
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M&O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as models used by Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.
C. Ahlers; H. Liu
2000-03-12
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00''. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as models used by Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.
Stable models of superacceleration
Kaplinghat, Manoj; Rajaraman, Arvind
2007-05-15
We discuss an instability in a large class of models where dark energy is coupled to matter. In these models the mass of the scalar field is much larger than the expansion rate of the Universe. We find models in which this instability is absent, and show that these models generically predict an apparent equation of state for dark energy smaller than -1, i.e., superacceleration. These models have no acausal behavior or ghosts.
Independent modeling efforts often yield disparate results that are difficult to reconcile. A comparative modeling approach explores differences between models in a systematic way. In joint collaborations, a set of common population inputs is shared across all models (e.g., dissemination patterns of screening and treatment, mortality from non-cancer causes), and common sets of intermediate and final outputs are developed. Results are then compared across models.
The CISNET lung group was initiated in the second round of CISNET I and consists of five modeling teams and two affiliate members. The groups' interests include areas such as tobacco control policies, screening, and genetic susceptibility. The models incorporate the association between smoking and lung cancer in various ways, from epidemiologic models to more mechanistic models, including various versions of the two-stage clonal expansion model of carcinogenesis.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-13
...Boeing Company Model 757 Airplanes, Model 767 Airplanes, and Model 777-200 and -300...for certain Model 757 airplanes, Model 767 airplanes, and Model 777-200 and -300...apply to certain Model 757 airplanes, Model 767 airplanes, and Model 777-200 and...
WASP TRANSPORT MODELING AND WASP ECOLOGICAL MODELING
A combination of lectures, demonstrations, and hands-on exercises will be used to introduce pollutant transport modeling with the U.S. EPA's general water quality model, WASP (Water Quality Analysis Simulation Program). WASP features include a user-friendly Windows-based interfa...
14. Quark Model
Krusche, Bernd
Revised December 2005 by C. Amsler (University of Zürich), T. DeGrand (University of Colorado, Boulder), and B. Krusche (University of Basel). 14.1. Quantum numbers of the quarks: quarks are strongly interacting fermions with spin 1/2 and, by convention, positive parity.
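The quark quantum numbers compose additively: a hadron's electric charge, for example, is the sum of its constituent (anti)quark charges. A small sketch (the '~' antiquark notation is an invention of this example, not PDG convention):

```python
from fractions import Fraction

# Electric charges of the light quarks, in units of e.
Q = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3)}

def charge(quarks):
    """Total charge of a hadron from its (anti)quark content;
    a leading '~' marks an antiquark (opposite charge)."""
    total = Fraction(0)
    for q in quarks:
        if q.startswith("~"):
            total -= Q[q[1:]]
        else:
            total += Q[q]
    return total

proton = charge(["u", "u", "d"])    # baryon uud
pi_plus = charge(["u", "~d"])       # meson u anti-d
```

The same additive bookkeeping applies to baryon number, strangeness, and the other flavor quantum numbers.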
1. Quark Model
Krusche, Bernd
Revised December 2005 by C. Amsler (University of Zürich), T. DeGrand (University of Colorado, Boulder), and B. Krusche (University of Basel). 1.1. Quantum numbers of the quarks: quarks are strongly interacting fermions with spin 1/2 and, by convention, positive parity.
Human vs. Model: Salience Model
Peters, Rob
Slides comparing human eye movements with a salience model: new eye-movement data were collected (different subjects, different aerial images); the contour model parameters were fitted to match these new data; and the fitted model was tested against the original data.
MODELS AND HISTORY OF MODELING
Schichl, Hermann
Lecture notes on models and the history of modeling, beginning with the Ancient Near East and Ancient Greece. The first recognizable models were numbers: counting and writing models played a role already about 4000 years ago in at least three cultures (Babylon, Egypt, India), where mathematics developed independently of application. Thales brought knowledge from Egypt and predicted a solar eclipse.
Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
2000-01-01
The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives the details of the model-data comparisons; summary results in terms of empirical model uncertainty factors that can be applied for spacecraft design applications are given in a companion report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian trapped radiation models.
Model Validation Status Review
E.L. Hardin
2001-11-28
The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. 
The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and engineered barriers, plus the TSPA model itself. Description of the model areas is provided in Section 3, and the documents reviewed are described in Section 4. The responsible manager for the Model Validation Status Review was the Chief Science Officer (CSO) for Bechtel-SAIC Co. (BSC). The team lead was assigned by the CSO. A total of 32 technical specialists were engaged to evaluate model validation status in the 21 model areas. The technical specialists were generally independent of the work reviewed, meeting technical qualifications as discussed in Section 5.
Hickerson, Michael J
2014-06-01
As the field of phylogeography has continued to move in the model-based direction, researchers continue struggling to construct useful models for inference. These models must be both simple enough to be tractable yet contain enough of the complexity of the natural world to make meaningful inference. Beyond constructing such models for inference, researchers explore model space and test competing models with the data on hand, with the goal of improving the understanding of the natural world and the processes underlying natural biological communities. Approximate Bayesian computation (ABC) has increased in recent popularity as a tool for evaluating alternative historical demographic models given population genetic samples. As a thorough demonstration, Pelletier & Carstens (2014) use ABC to test 143 phylogeographic submodels given geographically widespread genetic samples from the salamander species Plethodon idahoensis (Carstens et al. 2014) and, in so doing, demonstrate how the results of the ABC model choice procedure are dependent on the model set one chooses to evaluate. PMID:24931159
Reusable Architectural Decision Model for Model and Metadata Repositories
Dustdar, Schahram
Reusable Architectural Decision Model for Model and Metadata Repositories Christine Mayr, Uwe Zdun, and Schahram Dustdar Distributed Systems Group Information System Institute Vienna University of Technology. Model repositories support this trend by managing these model artifacts. While setting up model
Reiter, E.R.
1980-01-01
A highly sophisticated and accurate approach is described to compute on an hourly or daily basis the energy consumption for space heating by individual buildings, urban sectors, and whole cities. The need for models and specifically weather-sensitive models, composite models, and space-heating models are discussed. Development of the Colorado State University Model, based on heat-transfer equations and on a heuristic, adaptive, self-organizing computation learning approach, is described. Results of modeling energy consumption by the city of Minneapolis and Cheyenne are given. Some data on energy consumption in individual buildings are included.
Geller, Michael; Telem, Ofri
2015-05-15
We present the first realization of a "twin Higgs" model as a holographic composite Higgs model. Uniquely among composite Higgs models, the Higgs potential is protected by a new standard model (SM) singlet elementary "mirror" sector at the sigma model scale f and not by the composite states at m_{KK}, naturally allowing for m_{KK} beyond the LHC reach. As a result, naturalness in our model cannot be constrained by the LHC, but may be probed by precision Higgs measurements at future lepton colliders, and by direct searches for Kaluza-Klein excitations at a 100 TeV collider. PMID:26024160
NASA Astrophysics Data System (ADS)
Reiter, E. R.
A sophisticated and accurate approach is described to compute on an hourly or daily basis the energy consumption for space heating by individual buildings, urban sectors, and whole cities. The need for models and specifically weather-sensitive models, composite models, and space heating models are discussed. Development of the Colorado State University Model, based on heat transfer equations and on a heuristic, adaptive, self-organizing computation learning approach, is described. Results of modeling energy consumption by the city of Minneapolis and Cheyenne are given. Some data on energy consumption in individual buildings are included.
NASA Technical Reports Server (NTRS)
Savage, M.; Caldwell, R. J.; Wisor, G. D.; Lewicki, D. G.
1986-01-01
A computer model has been constructed to simulate the compliance and load sharing in a spur gear mesh. The model adds the effect of rim deflections to previously developed state-of-the-art gear tooth deflection models. The effects of deflections on mesh compliance and load sharing are examined. The model can treat gear meshes composed of two external gears or an external gear driving an internal gear. The model includes deflection contributions from the bending and shear in the teeth, the Hertzian contact deformations, and primary and secondary rotations of the gear rims. The model shows that rimmed gears increase mesh compliance and, in some cases, improve load sharing.
NASA Technical Reports Server (NTRS)
Savage, M.; Caldwell, R. J.; Wisor, G. D.; Lewicki, D. G.
1987-01-01
A computer model has been constructed to simulate the compliance and load sharing in a spur gear mesh. The model adds the effect of rim deflections to previously developed state-of-the-art gear tooth deflection models. The effects of deflections on mesh compliance and load sharing are examined. The model can treat gear meshes composed of two external gears or an external gear driving an internal gear. The model includes deflection contributions from the bending and shear in the teeth, the Hertzian contact deformations, and primary and secondary rotations of the gear rims. The model shows that rimmed gears increase mesh compliance and, in some cases, improve load sharing.
Modeling the transition region
NASA Technical Reports Server (NTRS)
Singer, Bart A.
1993-01-01
The current status of transition-region models is reviewed in this report. To understand modeling problems, various flow features that influence the transition process are discussed first. Then an overview of the different approaches to transition-region modeling is given. This is followed by a detailed discussion of turbulence models and the specific modifications that are needed to predict flows undergoing laminar-turbulent transition. Methods for determining the usefulness of the models are presented, and an outlook for the future of transition-region modeling is suggested.
Antibody modeling assessment II. Structures and models.
Teplyakov, Alexey; Luo, Jinquan; Obmolova, Galina; Malia, Thomas J; Sweet, Raymond; Stanfield, Robyn L; Kodangattil, Sreekumar; Almagro, Juan Carlos; Gilliland, Gary L
2014-08-01
To assess the state-of-the-art in antibody structure modeling, a blinded study was conducted. Eleven unpublished Fab crystal structures were used as a benchmark to compare Fv models generated by seven structure prediction methodologies. In the first round, each participant submitted three non-ranked complete Fv models for each target. In the second round, CDR-H3 modeling was performed in the context of the correct environment provided by the crystal structures with CDR-H3 removed. In this report we describe the reference structures and present our assessment of the models. Some of the essential sources of errors in the predictions were traced to the selection of the structure template, both in terms of the CDR canonical structures and VL/VH packing. On top of this, the errors present in the Protein Data Bank structures were sometimes propagated in the current models, which emphasized the need for the curated structural database devoid of errors. Modeling non-canonical structures, including CDR-H3, remains the biggest challenge for antibody structure prediction. PMID:24633955
D.W. Wu; A.J. Smith
2004-11-08
The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), TSPA-LA. The ERMYN provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs) (Section 6.2), the reference biosphere (Section 6.1.1), the human receptor (Section 6.1.2), and approximations (Sections 6.3.1.4 and 6.3.2.4); (3) Building a mathematical model using the biosphere conceptual model (Section 6.3) and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); (8) Validating the ERMYN by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).
NASA Technical Reports Server (NTRS)
Ensey, Tyler S.
2013-01-01
During my internship at NASA, I was a model developer for Ground Support Equipment (GSE). The purpose of a model developer is to develop and unit test model component libraries (fluid, electrical, gas, etc.). The models are designed to simulate software for GSE (Ground Special Power, Crew Access Arm, Cryo, Fire and Leak Detection System, Environmental Control System (ECS), etc.) before they are implemented into hardware. These models support verifying local control and remote software for End-Item Software Under Test (SUT). The model simulates the physical behavior (function, state, limits and I/O) of each end-item and its dependencies as defined in the Subsystem Interface Table, Software Requirements & Design Specification (SRDS), Ground Integrated Schematic (GIS), and System Mechanical Schematic (SMS). The software of each specific model component is simulated through MATLAB's Simulink program. The model development life cycle is as follows: identify source documents; identify model scope; update schedule; preliminary design review; develop model requirements; update model scope; update schedule; detailed design review; create/modify library components; implement library component references; implement subsystem components; develop a test script; run the test script; develop a user's guide; send the model out for peer review; the model is sent out for verification/validation; if there is empirical data, a validation data package is generated; if there is no empirical data, a verification package is generated; the test results are then reviewed; and finally, the user requests accreditation, and a statement of accreditation is prepared. Once each component model is reviewed and approved, the components are integrated into one model. This integrated model is then tested itself, through a test script and autotest, so that it can be concluded that all models work conjointly, for a single purpose.
The component I was assigned, specifically, was a fluid component, a discrete pressure switch. The switch takes a fluid pressure input, and if the pressure is greater than a designated cutoff pressure, the switch would stop fluid flow.
Modeling error in Approximate Deconvolution Models
Adrian Dunca; Roger Lewandowski
2012-10-09
We investigate the asymptotic behaviour of the modeling error in the approximate deconvolution model in the 3D periodic case, when the order $N$ of deconvolution goes to $\infty$. We consider successively the generalised Helmholtz filters of order $p$ and the Gaussian filter. For Helmholtz filters, we estimate the rate of convergence to zero thanks to energy budgets, Gronwall's lemma, and sharp inequalities on the Fourier coefficients of the residual stress. We then show why the same analysis does not allow one to conclude that the modeling error converges to zero in the case of the Gaussian filter, leaving open issues.
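For context, a minimal sketch of the standard approximate-deconvolution ingredients (standard textbook definitions, not taken verbatim from this abstract): given a filter $G$ (for example the order-one Helmholtz filter $\bar u = (I - \delta^2\Delta)^{-1} u$ with filter width $\delta$), the van Cittert deconvolution operator of order $N$ is

```latex
D_N = \sum_{n=0}^{N} (I - G)^n, \qquad u \approx D_N \bar u .
```

The modeling error then measures how the residual stress of $D_N \bar u$ in the filtered Navier-Stokes equations behaves as $N \to \infty$.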
Modeling Guru: Knowledge Base for NASA Modelers
NASA Astrophysics Data System (ADS)
Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.
2009-05-01
Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others.
The default "wiki" document lets users edit within the browser so others can easily collaborate on the same document, even allowing the author to select those who may edit and approve the document. To maintain knowledge integrity, all documents are moderated before they are visible to the public. Modeling Guru, running on Clearspace by Jive Software, has been an active resource to the NASA modeling and HEC communities for more than a year and currently has more than 100 active users. SIVO will soon install live instant messaging support, as well as a user-customizable homepage with social-networking features. In addition, SIVO plans to implement a large dataset/file storage capability so that users can quickly and easily exchange datasets and files with one another. Continued active community participation combined with periodic software updates and improved features will ensure that Modeling Guru remains a vibrant, effective, easy-to-use tool for the NASA scientific community.
ERIC Educational Resources Information Center
Callison, Daniel
2002-01-01
Defines models and describes information search models that can be helpful to instructional media specialists in meeting users' abilities and information needs. Explains pathfinders and Kuhlthau's information search process, including the pre-writing information search process. (LRW)
Modeling EERE deployment programs
Cort, K. A.; Hostick, D. J.; Belzer, D. B.; Livingston, O. V.
2007-11-01
The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address possible improvements to the modeling process, and note gaps in knowledge for future research.
We developed a simplified spreadsheet modeling approach for characterizing and prioritizing sources of sediment loadings from watersheds in the United States. A simplified modeling approach was developed to evaluate sediment loadings from watersheds and selected land segments. ...
Endoh, Shinsuke
1982-01-01
Introduction: The threat of midair collisions is one of the most serious problems facing the air traffic control system and has been studied by many researchers. The gas model is one of the models which describe the expected ...
Agena, S M; Pusey, M L; Bogle, I D
1999-07-20
A thermodynamic framework (UNIQUAC model with temperature dependent parameters) is applied to model the salt-induced protein crystallization equilibrium, i.e., protein solubility. The framework introduces a term for the solubility product describing protein transfer between the liquid and solid phase and a term for the solution behavior describing deviation from ideal solution. Protein solubility is modeled as a function of salt concentration and temperature for a four-component system consisting of a protein, pseudo solvent (water and buffer), cation, and anion (salt). Two different systems, lysozyme with sodium chloride and concanavalin A with ammonium sulfate, are investigated. Comparison of the modeled and experimental protein solubility data results in an average root mean square deviation of 5.8%, demonstrating that the model closely follows the experimental behavior. Model calculations and model parameters are reviewed to examine the model and protein crystallization process. PMID:10397850
PERSISTENCE IN MODEL ECOSYSTEMS
Mathematical models aid in understanding environmental systems and in developing testable hypotheses relevant to the fate and ecological effects of toxic substances in such systems. Within the framework of microcosm or laboratory ecosystem modeling, some differential equation mod...
Energy Science and Technology Software Center (ESTSC)
2010-03-01
The System Advisor Model (SAM) is a performance and economic model designed to facilitate decision making for people involved in the renewable energy industry, ranging from project managers and engineers to incentive program designers, technology developers, and researchers.
Consistent model driven architecture
NASA Astrophysics Data System (ADS)
Niepostyn, Stanisław J.
2015-09-01
The goal of MDA is to produce software systems from abstract models in a way that restricts human interaction to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in natural language, so verification of the consistency of these diagrams is needed in order to identify requirement errors at an early stage of the development process. The verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.
Bounding Species Distribution Models
NASA Technical Reports Server (NTRS)
Stohlgren, Thomas J.; Jarnevich, Catherine S.; Morisette, Jeffrey T.; Esaias, Wayne E.
2011-01-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].
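The "clamping" the abstract describes, bounding extrapolations to the minimum and maximum values of the primary environmental predictors seen in training, is mechanically simple. A minimal illustrative sketch (function name and data are hypothetical, not from the study):

```python
import numpy as np

def clamp_predictors(X_new, X_train):
    """Clamp each environmental predictor (column) in X_new to the
    [min, max] range observed in the training data X_train, so the
    fitted model is never queried outside its environmental bounds."""
    lo = X_train.min(axis=0)
    hi = X_train.max(axis=0)
    return np.clip(X_new, lo, hi)

# Training data: two predictors per site (e.g., temperature, precipitation)
X_train = np.array([[10.0, 200.0],
                    [15.0, 350.0],
                    [20.0, 500.0]])
# New sites; the first lies outside the training envelope on both axes
X_new = np.array([[25.0, 100.0],
                  [12.0, 400.0]])
clamped = clamp_predictors(X_new, X_train)
```

The clamped matrix is then fed to the fitted CART or Maxent model in place of the raw predictors.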
Bounding species distribution models
Stohlgren, T.J.; Jarnevich, C.S.; Esaias, W.E.; Morisette, J.T.
2011-01-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used. © 2011 Current Zoology.
Dingle, Brent Michael
2007-09-17
This dissertation presents a robust method of modeling objects and forces for computer animation. Within this method objects and forces are represented as particles. As in most modeling systems, the movement of objects is ...
METEOROLOGICAL AND TRANSPORT MODELING
Advanced air quality simulation models, such as CMAQ, as well as other transport and dispersion models, require accurate and detailed meteorology fields. These meteorology fields include primary 3-dimensional dynamical and thermodynamical variables (e.g., winds, temperature, mo...
Chou, Danielle, 1981-
2004-01-01
The drive behind improved friction models has been better prediction and control of dynamic systems. The earliest model was of classical Coulomb friction; however, the discontinuity during force reversal of the Coulomb ...
NASA Technical Reports Server (NTRS)
Holland, L. D.; Walsh, J. R., Jr.; Wetherington, R. D.
1971-01-01
This report presents the results of work on communications systems modeling and covers three different areas of modeling. The first of these deals with the modeling of signals in communication systems in the frequency domain and the calculation of spectra for various modulations. These techniques are applied in determining the frequency spectra produced by a unified carrier system, the down-link portion of the Command and Communications System (CCS). The second modeling area covers the modeling of portions of a communication system on a block basis. A detailed analysis and modeling effort based on control theory is presented along with its application to modeling of the automatic frequency control system of an FM transmitter. A third topic discussed is a method for approximate modeling of stiff systems using state variable techniques.
Until recently, sediment geochemical models (diagenetic models) have been only able to explain sedimentary flux and concentration profiles for a few simplified geochemical cycles (e.g., nitrogen, carbon and sulfur). However with advances in numerical methods, increased accuracy ...
NASA Technical Reports Server (NTRS)
Agena, S. M.; Pusey, M. L.; Bogle, I. D.
1999-01-01
A thermodynamic framework (UNIQUAC model with temperature dependent parameters) is applied to model the salt-induced protein crystallization equilibrium, i.e., protein solubility. The framework introduces a term for the solubility product describing protein transfer between the liquid and solid phase and a term for the solution behavior describing deviation from ideal solution. Protein solubility is modeled as a function of salt concentration and temperature for a four-component system consisting of a protein, pseudo solvent (water and buffer), cation, and anion (salt). Two different systems, lysozyme with sodium chloride and concanavalin A with ammonium sulfate, are investigated. Comparison of the modeled and experimental protein solubility data results in an average root mean square deviation of 5.8%, demonstrating that the model closely follows the experimental behavior. Model calculations and model parameters are reviewed to examine the model and protein crystallization process. Copyright 1999 John Wiley & Sons, Inc.
Sauper, Christina Joan
We present a probabilistic topic model for jointly identifying properties and attributes of social media review snippets. Our model simultaneously learns a set of properties of a product and captures aggregate user sentiments ...
ERIC Educational Resources Information Center
Brinner, Bonnie
1992-01-01
Presents an activity in which models help students visualize both the DNA process and transcription. After constructing DNA, RNA messenger, and RNA transfer molecules; students model cells, protein synthesis, codons, and RNA movement. (MDH)
Christoudias, Chris Mario
2003-04-18
Statistical shape and texture appearance models are powerful image representations, but previously had been restricted to 2D or simple 3D shapes. In this paper we present a novel 3D morphable model based on image-based ...
Quantile Models with Endogeneity
Chernozhukov, Victor V.
In this article, we review quantile models with endogeneity. We focus on models that achieve identification through the use of instrumental variables and discuss conditions under which partial and point identification are ...
Silicon Baroreceptors: Modeling Cardiovascular
Lazzaro, John
Silicon Baroreceptors: Modeling Cardiovascular Pressure Transduction in Analog VLSI. John Lazzaro ... of the baroreceptors in the carotid vessel. Inspired by recent work in silicon models of the cochlea [3
CISNET: Esophageal Cancer Modeling
The CISNET esophageal cancer group was formed in 2010 in the third round of CISNET funding with three distinct modeling teams focused on collaboratively modeling the incidence and mortality of esophageal adenocarcinoma (EAC) in the US population. The group’s work will include performing collaborative modeling of the natural history models of esophageal adenocarcinoma which will include precursor states such as Barrett’s esophagus and dysplasia that are calibrated to US SEER data.
NASA Astrophysics Data System (ADS)
Tashiro, Tohru
2014-03-01
We propose a new model of the diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters are people (not) possessing the product. This effect is lacking in the Bass model. As an application, we use the model to fit iPod sales data and obtain better agreement than the Bass model.
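For reference, the baseline the abstract compares against is the classic Bass diffusion model, dN/dt = (p + qN/M)(M - N), with innovation coefficient p, imitation coefficient q, and market potential M. A minimal sketch (the memory extension itself is not specified in the abstract, so only the standard Bass dynamics are shown; parameter values are illustrative):

```python
def bass_adopters(p, q, M, steps):
    """Simulate cumulative adopters N(t) under the classic Bass model
    dN/dt = (p + q*N/M) * (M - N), using a forward-Euler step of 1.
    Returns the trajectory [N(0), N(1), ..., N(steps)]."""
    N = 0.0
    out = [N]
    for _ in range(steps):
        N += (p + q * N / M) * (M - N)
        out.append(N)
    return out

# Textbook-style coefficients; values here are purely illustrative
traj = bass_adopters(p=0.03, q=0.38, M=1000.0, steps=20)
```

The trajectory rises monotonically toward the market potential M, producing the familiar S-shaped adoption curve.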
Mathematical circulatory system model
NASA Technical Reports Server (NTRS)
Lakin, William D. (Inventor); Stevens, Scott A. (Inventor)
2010-01-01
A system and method of modeling a circulatory system including a regulatory mechanism parameter. In one embodiment, a regulatory mechanism parameter in a lumped parameter model is represented as a logistic function. In another embodiment, the circulatory system model includes a compliant vessel, the model having a parameter representing a change in pressure due to contraction of smooth muscles of a wall of the vessel.
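Representing a regulatory mechanism parameter as a logistic function of a system variable, as this abstract describes, can be sketched as follows (function name, parameter names, and values are hypothetical illustrations, not taken from the patent):

```python
import math

def logistic_regulation(p, p_ref, r_min, r_max, k):
    """Regulatory mechanism parameter modeled as a logistic function
    of pressure p: it saturates at r_min (low pressure) and r_max
    (high pressure), transitioning around reference pressure p_ref
    with steepness k."""
    return r_min + (r_max - r_min) / (1.0 + math.exp(-k * (p - p_ref)))

# Illustrative values: parameter rises from 0 to 1 around p_ref = 100
r_lo = logistic_regulation(60.0, 100.0, 0.0, 1.0, 0.1)    # well below p_ref
r_mid = logistic_regulation(100.0, 100.0, 0.0, 1.0, 0.1)  # at p_ref
r_hi = logistic_regulation(140.0, 100.0, 0.0, 1.0, 0.1)   # well above p_ref
```

The logistic form keeps the regulatory parameter bounded and smoothly differentiable, which is convenient inside a lumped-parameter ODE model.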
NASA Technical Reports Server (NTRS)
Hildreth, W. W.
1978-01-01
A determination of the state of the art in soil moisture transport modeling based on physical or physiological principles was made. Soil moisture models based on physical principles have been under development for more than 10 years, and they were shown to represent infiltration and redistribution of soil moisture quite well. Evapotranspiration, however, has not been as adequately incorporated into the models.
Elementary models Numerical Techniques
Kim, Yong Jung
Mumford-Shah is reduced to the Potts model by choosing f_i = |c_i - u_0|^2. The GAC (geodesic active contour) model: given an image u_0, the GAC model seeks a contour minimizing an edge-weighted length, min ∫ g(|∇u_0|), where g is an edge detector. Combining Mumford-Shah with GAC: it is possible to combine these two popular models, and it has been
Future of groundwater modeling
Langevin, Christian D.; Panday, Sorab
2012-01-01
With an increasing need to better manage water resources, the future of groundwater modeling is bright and exciting. However, while the past can be described and the present is known, the future of groundwater modeling, just like a groundwater model result, is highly uncertain and any prediction is probably not going to be entirely representative. Thus we acknowledge this as we present our vision of where groundwater modeling may be headed.
Nonlinear Modeling by Assembling Piecewise Linear Models
NASA Technical Reports Server (NTRS)
Yao, Weigang; Liou, Meng-Sing
2013-01-01
To preserve the nonlinearity of a full-order system over a parameter range of interest, we propose a simple modeling approach that assembles a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
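The assembly idea, local first-order Taylor expansions blended with radial basis function weights, can be sketched in one dimension. This is a generic illustration on a test function, not the authors' aerodynamic implementation; the Gaussian width is an assumed tuning parameter:

```python
import math

# Generic sketch of assembling piecewise linear local models with
# radial-basis-function weights. The test function (sin) and the
# sampling states are illustrative.
def local_linear_models(f, df, samples):
    """First-order Taylor expansions of f about each sampling state."""
    return [(x0, f(x0), df(x0)) for x0 in samples]

def assembled_model(x, models, width=1.0):
    """Blend local linear predictions with normalized Gaussian RBF weights."""
    weights = [math.exp(-((x - x0) / width) ** 2) for x0, _, _ in models]
    preds = [fx0 + dfx0 * (x - x0) for x0, fx0, dfx0 in models]
    return sum(w * p for w, p in zip(weights, preds)) / sum(weights)

models = local_linear_models(math.sin, math.cos, samples=[0.0, 1.0, 2.0, 3.0])
# Near a sampling point the blend approaches that point's tangent line,
# so the nonlinearity of sin is preserved across the sampled range.
```

The design choice mirrors the abstract: each local model is only trusted near its expansion point, and the RBF weights fade it out smoothly elsewhere.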
Aerosol Modeling for the Global Model Initiative
NASA Technical Reports Server (NTRS)
Weisenstein, Debra K.; Ko, Malcolm K. W.
2001-01-01
The goal of this project is to develop an aerosol module to be used within the framework of the Global Modeling Initiative (GMI). The model development work will be performed jointly by the University of Michigan and AER, using existing aerosol models at the two institutions as starting points. The GMI aerosol model will be tested, evaluated against observations, and then applied to assessment of the effects of aircraft sulfur emissions as needed by the NASA Subsonic Assessment in 2001. The work includes the following tasks: 1. Implementation of the sulfur cycle within GMI, including sources, sinks, and aqueous conversion of sulfur. Aerosol modules will be added as they are developed and the GMI schedule permits. 2. Addition of aerosol types other than sulfate particles, including dust, soot, organic carbon, and black carbon. 3. Development of new and more efficient parameterizations for treating sulfate aerosol nucleation, condensation, and coagulation among different particle sizes and types.
Aggregation in ecosystem models and model stability
NASA Astrophysics Data System (ADS)
Giricheva, Evgeniya
2015-05-01
Using a multimodal approach to study ecosystems improves the use of available information about an object. This study presents several models of the Bering Sea ecosystem. The ecosystem is first considered as a closed object, that is, the influence of the environment is not included. We then add links with the external medium to the models. The models differ in the degree and method of grouping components. Our method is based on differences in the habitat and food sources of groups, which allows us to determine the grouping of species with a greater effect on system dynamics. In particular, we determine whether benthic fish aggregation or pelagic fish aggregation can change the consumption structure of some groups of species and, consequently, the behavior of the entire model system.
Modeling EERE Deployment Programs
Cort, K. A.; Hostick, D. J.; Belzer, D. B.; Livingston, O. V.
2007-11-01
This report compiles information and conclusions gathered as part of the “Modeling EERE Deployment Programs” project. The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address possible improvements to the modeling process, and note gaps in knowledge in which future research is needed.
QUALITATIVE ECOLOGICAL MODELING
Technology Transfer Automated Retrieval System (TEKTRAN)
Students construct qualitative models of an ecosystem and use the models to evaluate the direct and indirect effects that may result from perturbations to the ecosystem. Qualitative modeling is described for use in two procedures, each with different educational goals and student backgrounds in min...
Reasoning and Formal Modelling
Löwe, Benedikt
Reasoning and Formal Modelling for Forensic Science, Lecture 7. Prof. Dr. Benedikt Löwe. 2nd Semester 2010/11. Reminder: logica
ERIC Educational Resources Information Center
Bogiages, Christopher A.; Lotter, Christine
2011-01-01
In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…
Sharlemann, E.T.
1994-07-01
We are developing a DIAL performance model for CALIOPE at LLNL. The intent of the model is to provide quick and interactive parameter sensitivity calculations with immediate graphical output. A brief overview of the features of the performance model is given, along with an example of performance calculations for a non-CALIOPE application.
Succession Model Landscape Stochasticity
Succession Model Landscape Stochasticity (Low / Control / High / Very High). Disturbance is accomplished by incrementing the patch birth rate (Control: s = a = 10). A simple model of species viability. [Figure residue: patch sizes 100-10000; birth rate; threshold multiplier 0.1-10.]
Crushed Salt Constitutive Model
Callahan, G.D.
1999-02-01
The constitutive model used to describe the deformation of crushed salt is presented in this report. Two mechanisms -- dislocation creep and grain boundary diffusional pressure solution -- are combined to form the basis for the constitutive model governing the deformation of crushed salt. The constitutive model is generalized to represent three-dimensional states of stress. Upon complete consolidation, the crushed-salt model reproduces the Multimechanism Deformation (M-D) model typically used for the Waste Isolation Pilot Plant (WIPP) host geological formation salt. New shear consolidation tests are combined with an existing database that includes hydrostatic consolidation and shear consolidation tests conducted on WIPP and southeastern New Mexico salt. Nonlinear least-squares model fitting to the database produced two sets of material parameter values for the model -- one for the shear consolidation tests and one for a combination of the shear and hydrostatic consolidation tests. Using the parameter values determined from the fitted database, the constitutive model is validated against constant strain-rate tests. Shaft seal problems are analyzed to demonstrate model-predicted consolidation of the shaft seal crushed-salt component. Based on the fitting statistics, the ability of the model to predict the test data, and the ability of the model to predict load paths and test data outside of the fitted database, the model appears to capture the creep consolidation behavior of crushed salt reasonably well.
California at Berkeley, University of
Ptolemy II: Heterogeneous Concurrent Modeling and Design in Java. Edited by Christopher Hylands. University of California at Berkeley. http://ptolemy.eecs.berkeley.edu. Document Version 2.0.1 for use with Ptolemy II. Contents, Part 1: Using Ptolemy II; 1. Introduction; 1.1 Modeling and Design.
Appendix W to 40CFR Part 51 (Guideline on Air Quality Models) specifies the models to be used for purposes of permitting, PSD, and SIPs. Through a formal regulatory process this modeling guidance is periodically updated to reflect current science. In the most recent action, thr...
Technology Transfer Automated Retrieval System (TEKTRAN)
Adsorption-desorption reactions are important processes that affect the transport of contaminants in the environment. Surface complexation models are chemical models that can account for the effects of variable chemical conditions, such as pH, on adsorption reactions. These models define specific ...
JB6 Mouse Model. The mouse Balb/C JB6 model (1) is the only well-characterized model of genetic variants for a neoplastic transformation response to tumor promoters. These cells are not differentially sensitive to tumor promoter induced mitogenesis or diff
One of the environmental and economic models that the U.S. EPA uses to assess climate change policies is the Second Generation Model (SGM). SGM is a 13 region, 24 sector computable general equilibrium (CGE) model of the world that can be used to estimate the domestic and intern...
Two Cognitive Modeling Frontiers
NASA Astrophysics Data System (ADS)
Ritter, Frank E.
This paper reviews three hybrid cognitive architectures (Soar, ACT-R, and CoJACK) and how they can incorporate models of emotions. Creating models in these architectures remains difficult, which is both a research and an engineering problem. Thus, the term cognitive science engineering is introduced for an area that would support making models easier to create, understand, and re-use.
ERIC Educational Resources Information Center
Thornton, Bradley D.; Smalley, Robert A.
2008-01-01
Building information modeling (BIM) uses three-dimensional modeling concepts, information technology and interoperable software to design, construct and operate a facility. However, BIM can be more than a tool for virtual modeling--it can provide schools with a 3-D walkthrough of a project while it still is on the electronic drawing board. BIM can…
ERIC Educational Resources Information Center
Speiser, Bob; Walter, Chuck
2011-01-01
This paper explores how models can support productive thinking. For us a model is a "thing", a tool to help make sense of something. We restrict attention to specific models for whole-number multiplication, hence the wording of the title. They support evolving thinking in large measure through the ways their users redesign them. They assume new…
P. Huang; Yong-Chang Huang
2012-12-30
We suggest a holographic energy model in which the energy coming from spatial curvature, matter and radiation can be obtained by using the particle horizon for the infrared cut-off. We show the consistency between the holographic dark-energy model and the holographic energy model proposed in this paper. Then, we give a holographic description of the universe.
Modeling and Remodeling Writing
ERIC Educational Resources Information Center
Hayes, John R.
2012-01-01
In Section 1 of this article, the author discusses the succession of models of adult writing that he and his colleagues have proposed from 1980 to the present. He notes the most important changes that differentiate earlier and later models and discusses reasons for the changes. In Section 2, he describes his recent efforts to model young…
Model Breaking Points Conceptualized
ERIC Educational Resources Information Center
Vig, Rozy; Murray, Eileen; Star, Jon R.
2014-01-01
Current curriculum initiatives (e.g., National Governors Association Center for Best Practices and Council of Chief State School Officers 2010) advocate that models be used in the mathematics classroom. However, despite their apparent promise, there comes a point when models break, a point in the mathematical problem space where the model cannot,…
ERIC Educational Resources Information Center
Harris, Mary B.
To investigate the effect of modeling on altruism, 156 third and fifth grade children were exposed to a model who either shared with them, gave to a charity, or refused to share. The test apparatus, identified as a game, consisted of a box with signal lights and a chute through which marbles were dispensed. Subjects and the model played the game…
ERIC Educational Resources Information Center
Fitzsimmons, Charles P.
1986-01-01
Points out the instructional applications and program possibilities of a unit on model rocketry. Describes the ways that microcomputers can assist in model rocket design and in problem calculations. Provides a descriptive listing of model rocket software for the Apple II microcomputer. (ML)
Kuhn, Matthew R.
Model definition; DEM summary; simple shear. Simulating undrained loading of sand with the discrete element method. University of Portland. EMI 2012 Conference, South Bend, Indiana, June 18-20, 2012. National Science Foundation. Nevada Sand: particles and contacts.
Modelling a Suspension Bridge.
ERIC Educational Resources Information Center
Rawlins, Phil
1991-01-01
The quadratic function can be modeled in real life by a suspension bridge that supports a uniform weight. This activity uses concrete models and computer generated graphs to discover the mathematical model of the shape of the main cable of a suspension bridge. (MDH)
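The underlying mathematics can be illustrated directly: three measured points on the cable determine the quadratic model uniquely. The coordinates below are invented for illustration, not taken from the activity:

```python
# Sketch: recover the quadratic y = a*x**2 + b*x + c describing a suspension
# bridge's main cable from three measured points (coordinates are illustrative).
def quadratic_through(p1, p2, p3):
    """Unique quadratic through three points, via divided differences."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d1 = (y2 - y1) / (x2 - x1)
    d2 = (y3 - y2) / (x3 - x2)
    a = (d2 - d1) / (x3 - x1)
    b = d1 - a * (x1 + x2)
    c = y1 - a * x1 ** 2 - b * x1
    return a, b, c

# Cable anchored at two towers of equal height, with its low point midway:
a, b, c = quadratic_through((0, 30), (50, 5), (100, 30))
```

With symmetric tower points the fitted vertex lands at mid-span, matching the physical intuition the activity builds with concrete models.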
GTM is an economic model capable of examining global forestry land-use, management, and trade responses to policies. In responding to a policy, the model captures afforestation, forest management, and avoided deforestation behavior. The model estimates harvests in industrial fore...
ERIC Educational Resources Information Center
Walsh, Jim; McGehee, Richard
2013-01-01
A dynamical systems approach to energy balance models of climate is presented, focusing on low order, or conceptual, models. Included are global average and latitude-dependent, surface temperature models. The development and analysis of the differential equations and corresponding bifurcation diagrams provides a host of appropriate material for…
C. Lum
2004-09-16
The purpose of this model report is to document the Rock Properties Model version 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties model provides mean matrix and lithophysae porosity, and the cross-correlated mean bulk density as direct input to the ''Saturated Zone Flow and Transport Model Abstraction'', MDL-NBS-HS-000021, REV 02 (BSC 2004 [DIRS 170042]). The constraints, caveats, and limitations associated with this model are discussed in Section 6.6 and 8.2. Model validation accomplished by corroboration with data not cited as direct input is discussed in Section 7. The revision of this model report was performed as part of activities being conducted under the ''Technical Work Plan for: The Integrated Site Model, Revision 05'' (BSC 2004 [DIRS 169635]). The purpose of this revision is to bring the report up to current procedural requirements and address the Regulatory Integration Team evaluation comments. The work plan describes the scope, objectives, tasks, methodology, and procedures for this process.
ERIC Educational Resources Information Center
McNamara, James F.
1996-01-01
Uses R.A. Ackoff's connotations to define "model" as noun, adjective, and verb. Researchers should use various types of models (iconic, analogue, or symbolic) for three purposes: to reveal reality, to explain the past and present, and to predict and control the future. Herbert Simon's process model for administrative decision making has widespread…
Generalized Latent Trait Models.
ERIC Educational Resources Information Center
Moustaki, Irini; Knott, Martin
2000-01-01
Discusses a general model framework within which manifest variables with different distributions in the exponential family can be analyzed with a latent trait model. Presents a unified maximum likelihood method for estimating the parameters of the generalized latent trait model and discusses the scoring of individuals on the latent dimensions.…
ERIC Educational Resources Information Center
Summerlin, Lee; Borgford, Christie
1989-01-01
Described is an activity which uses a 96-well reaction plate and soda straws to construct a model of the periodic table of the elements. The model illustrates the ionization energies of the various elements. Construction of the model and related concepts are discussed. (CW)
Models of scientific explanation
Sutton, Peter Andrew
2005-08-29
Chapter V: Explanation vs. Inference. Jeffrey on Explanation and Inference; Salmon on Explanation and Inference. The Statistical Relevance Model: Hempel's requirement of maximal specificity leads naturally into our next model, Wesley Salmon's Statistical Relevance (S-R) model of scientific explanation. Basically, Salmon takes maximal specificity (as given
Dependence Modelling, Model Risk and Model Calibration in Models of Portfolio Credit Risk
Frey, Rüdiger
as internal models are nowadays used for capital adequacy purposes in market risk management and calibration in credit risk models. J.E.L. Subject Classification: G31, G11, C15. Keywords: Risk Management, Credit Risk, Dependence Modelling, Copulas. 1 Introduction: A major cause of concern in managing the credit
Technology Transfer Automated Retrieval System (TEKTRAN)
Agricultural and ecosystem simulation models valuable for technology transfer require a realistic, process-oriented plant model that can be easily applied to different crops, grasses, and woody species. The objective of this chapter was to describe a general plant model that can be easily applied i...
Enterprise Modelling Michael Gruninger
Grüninger, Michael
Chapter 16: Enterprise Modelling. Michael Gruninger, National Institute of Standards and Technology. E-Mail: gruning@nist.gov. Abstract: An enterprise model is a computational representation of the structure of a business or other enterprise. An enterprise model can be both descriptive and definitional, and it may cover both
ERIC Educational Resources Information Center
Fedorov, Alexander
2011-01-01
The author supposed that media education models can be divided into the following groups: (1) educational-information models (the study of the theory, history, language of media culture, etc.), based on the cultural, aesthetic, semiotic, socio-cultural theories of media education; (2) educational-ethical models (the study of moral, religions,…
Solid Waste Projection Model: Model user's guide
Stiles, D.L.; Crow, V.L.
1990-08-01
The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address solid waste management issues at the Hanford Central Waste Complex (HCWC). This document, one of six documents supporting the SWPM system, contains a description of the system and instructions for preparing to use SWPM and operating Version 1 of the model. 4 figs., 1 tab.
Modeling physical growth using mixed effects models.
Johnson, William; Balakrishna, Nagalla; Griffiths, Paula L
2013-01-01
This article demonstrates the use of mixed effects models for characterizing individual and sample average growth curves based on serial anthropometric data. These models are an advancement over conventional general linear regression because they effectively handle the hierarchical nature of serial growth data. Using body weight data on 70 infants in the Born in Bradford study, we demonstrate how a mixed effects model provides a better fit than a conventional regression model. Further, we demonstrate how mixed effects models can be used to explore the influence of environmental factors on the sample average growth curve. Analyzing data from 183 infant boys (aged 3-15 months) from rural South India, we show how maternal education shapes infant growth patterns as early as within the first 6 months of life. The presented analyses highlight the utility of mixed effects models for analyzing serial growth data because they allow researchers to simultaneously predict individual curves, estimate sample average curves, and investigate the effects of environmental exposure variables. PMID:23283665
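A core idea behind such mixed effects models, partial pooling of individual estimates toward the sample average, can be sketched without a statistics package. The variance components and the data below are purely illustrative, not from the studies cited:

```python
# Illustrative sketch of the random-intercept idea behind mixed effects
# models: each individual's mean is shrunk toward the sample average, with
# more shrinkage for individuals with fewer observations. Variance values
# and data are synthetic assumptions.
def shrunken_means(groups, between_var=1.0, within_var=4.0):
    """Return the grand mean and a shrunken mean per individual."""
    all_values = [v for vs in groups.values() for v in vs]
    grand_mean = sum(all_values) / len(all_values)
    estimates = {}
    for gid, values in groups.items():
        n = len(values)
        group_mean = sum(values) / n
        # Weight on the individual's own mean grows with its sample size.
        weight = between_var / (between_var + within_var / n)
        estimates[gid] = weight * group_mean + (1 - weight) * grand_mean
    return grand_mean, estimates

groups = {"infant_a": [6.1, 6.4, 6.8], "infant_b": [7.9]}
grand_mean, est = shrunken_means(groups)
# infant_b, with one observation, is pulled more strongly toward the mean.
```

This is the mechanism that lets a full mixed effects model predict individual curves and the sample average curve simultaneously, as the abstract describes.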
Data and biological model Linear model with latent variable
Nuel, Gregory
Data and biological model; linear model with latent variable; results on simulated data. Latent variable model for carcinogenesis; effects on gene expression.
Almagro, Juan C; Beavers, Mary Pat; Hernandez-Guzman, Francisco; Maier, Johannes; Shaulsky, Jodi; Butenhof, Kenneth; Labute, Paul; Thorsteinson, Nels; Kelly, Kenneth; Teplyakov, Alexey; Luo, Jinquan; Sweet, Raymond; Gilliland, Gary L
2011-11-01
A blinded study to assess the state of the art in three-dimensional structure modeling of the variable region (Fv) of antibodies was conducted. Nine unpublished high-resolution x-ray Fab crystal structures covering a wide range of antigen-binding site conformations were used as a benchmark to compare Fv models generated by four structure prediction methodologies. The methodologies included two homology modeling strategies independently developed by CCG (Chemical Computing Group) and Accelrys Inc., and two fully automated antibody modeling servers: PIGS (Prediction of ImmunoGlobulin Structure), based on the canonical structure model, and Rosetta Antibody Modeling, based on homology modeling and Rosetta structure prediction methodology. The benchmark structure sequences were submitted to Accelrys and CCG, and a set of models for each of the nine antibody structures was generated. PIGS and Rosetta models were obtained using the default parameters of the servers. In most cases, we found good agreement between the models and x-ray structures. The average rmsd (root mean square deviation) values calculated over the backbone atoms between the models and structures were fairly consistent, around 1.2 Å. Average rmsd values of the framework and hypervariable loops with canonical structures (L1, L2, L3, H1, and H2) were close to 1.0 Å. H3 prediction yielded rmsd values around 3.0 Å for most of the models. Quality assessment of the models and the relative strengths and weaknesses of the methods are discussed. We hope this initiative will serve as a model of scientific partnership and look forward to future antibody modeling assessments. PMID:21935986
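The backbone rmsd used to score these models is straightforward to compute once the structures are superposed. A minimal sketch with synthetic coordinates (no Kabsch alignment step, which a real comparison would need first):

```python
import math

# Minimal RMSD between two equal-length lists of (x, y, z) backbone
# coordinates, assuming the structures are already superposed.
# Coordinates below are synthetic, for illustration only.
def rmsd(coords_a, coords_b):
    assert len(coords_a) == len(coords_b)
    total = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(total / len(coords_a))

model = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
xray  = [(0.0, 0.0, 1.0), (1.5, 0.0, 1.0)]
# Every atom is displaced by 1 Å along z, so the RMSD is exactly 1.0 Å.
```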
Seo, Bommie F.; Lee, Jun Yong; Jung, Sung-No
2013-01-01
Keloids and hypertrophic scars are thick, raised dermal scars, caused by derailing of the normal scarring process. Extensive research on such abnormal scarring has been done; however, these being refractory disorders specific to humans, it has been difficult to establish a universal animal model. A wide variety of animal models have been used. These include the athymic mouse, rats, rabbits, and pigs. Although these models have provided valuable insight into abnormal scarring, there is currently still no ideal model. This paper reviews the models that have been developed. PMID:24078916
Ritchie, L.T.; Alpert, D.J.; Burke, R.P.; Johnson, J.D.; Ostmeyer, R.M.; Aldrich, D.C.; Blond, R.M.
1984-03-01
The CRAC2 computer code is a revised version of CRAC (Calculation of Reactor Accident Consequences) which was developed for the Reactor Safety Study. This document provides an overview of the CRAC2 code and a description of each of the models used. Significant improvements incorporated into CRAC2 include an improved weather sequence sampling technique, a new evacuation model, and new output capabilities. In addition, refinements have been made to the atmospheric transport and deposition model. Details of the modeling differences between CRAC2 and CRAC are emphasized in the model descriptions.
M. McGraw
2000-04-13
The UZ Colloid Transport model development plan states that the objective of this Analysis/Model Report (AMR) is to document the development of a model for simulating unsaturated colloid transport. This objective includes the following: (1) use a process-level model to evaluate the potential mechanisms for colloid transport at Yucca Mountain; (2) provide ranges of parameters for significant colloid transport processes to Performance Assessment (PA) for the unsaturated zone (UZ); and (3) provide a basis for development of an abstracted model for use in PA calculations.
NASA Astrophysics Data System (ADS)
Marion, Giles M.; Kargel, Jeffrey S.
Implementation of the Pitzer approach is through the FREZCHEM (FREEZING CHEMISTRY) model, which is at the core of this work. This model was originally designed to simulate salt chemistries and freezing processes at low temperatures (-54 to 25°C) and 1 atm pressure. Over the years, this model has been broadened to include more chemistries (from 16 to 58 solid phases), a broader temperature range for some chemistries (to 113°C), and incorporation of a pressure dependence (1 to 1000 bars) into the model. Implementation, parameterization, validation, and limitations of the FREZCHEM model are extensively discussed in Chapter 3.
NASA Astrophysics Data System (ADS)
Friedman, Robert Bryan
In this article, I describe constructing a scale model of our galaxy—the Milky Way—and using this model to teach modern astronomy. The Milky Way model expands on concepts usually explored in the more common solar system model. The Milky Way model presents an opportunity to probe a broad array of physical processes and astrophysical systems, as well as multiple astronomical coordinate systems and far more expansive spatial scales. This exercise is kinetic, interactive, and designed to be done in large spaces (such as a gymnasium floor) with students at the middle school to high school levels.
Collins, Lisa M.; Part, Chérie E.
2013-01-01
Simple Summary In this review paper we discuss the different modeling techniques that have been used in animal welfare research to date. We look at what questions they have been used to answer, the advantages and pitfalls of the methods, and how future research can best use these approaches to answer some of the most important upcoming questions in farm animal welfare. Abstract The use of models in the life sciences has greatly expanded in scope and advanced in technique in recent decades. However, the range, type and complexity of models used in farm animal welfare is comparatively poor, despite the great scope for use of modeling in this field of research. In this paper, we review the different modeling approaches used in farm animal welfare science to date, discussing the types of questions they have been used to answer, the merits and problems associated with the method, and possible future applications of each technique. We find that the most frequently published types of model used in farm animal welfare are conceptual and assessment models; two types of model that are frequently (though not exclusively) based on expert opinion. Simulation, optimization, scenario, and systems modeling approaches are rarer in animal welfare, despite being commonly used in other related fields. Finally, common issues such as a lack of quantitative data to parameterize models, and model selection and validation are discussed throughout the review, with possible solutions and alternative approaches suggested. PMID:26487411
Johnson, Douglas H.; Cook, R.D.
2013-01-01
In her AAAS News & Notes piece "Can the Southwest manage its thirst?" (26 July, p. 362), K. Wren quotes Ajay Kalra, who advocates a particular method for predicting Colorado River streamflow "because it eschews complex physical climate models for a statistical data-driven modeling approach." A preference for data-driven models may be appropriate in this individual situation, but it is not so generally. Data-driven models often come with a warning against extrapolating beyond the range of the data used to develop the models. When the future is like the past, data-driven models can work well for prediction, but it is easy to over-model local or transient phenomena, often leading to predictive inaccuracy (1). Mechanistic models are built on established knowledge of the process that connects the response variables with the predictors, using information obtained outside of an extant data set. One may shy away from a mechanistic approach when the underlying process is judged to be too complicated, but good predictive models can be constructed with statistical components that account for ingredients missing in the mechanistic analysis. Models with sound mechanistic components are more generally applicable and robust than data-driven models.
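The extrapolation warning can be made concrete with a toy example: a purely data-driven predictor is compared with a mechanistic model outside the training range. The synthetic data and the choice of nearest-neighbor lookup as the data-driven method are illustrative assumptions:

```python
# Toy illustration of the extrapolation warning: a purely data-driven
# predictor (nearest-neighbor lookup) versus a mechanistic model built
# from known process knowledge. Data are synthetic, from the rule y = 2x.
train = [(x, 2.0 * x) for x in range(11)]  # observations on x in [0, 10]

def data_driven(x):
    """Predict with the nearest training observation."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

def mechanistic(x):
    """Predict from the known process relationship y = 2x."""
    return 2.0 * x

# Inside the data range both agree; far outside it, only the mechanistic
# model tracks the true process, while the data-driven one stays flat.
```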
Animal models of atherosclerosis.
Kapourchali, Fatemeh Ramezani; Surendiran, Gangadaran; Chen, Li; Uitz, Elisabeth; Bahadori, Babak; Moghadasian, Mohammed H
2014-05-16
In this mini-review several commonly used animal models of atherosclerosis have been discussed. Among them, emphasis has been made on mice, rabbits, pigs and non-human primates. Although these animal models have played a significant role in our understanding of induction of atherosclerotic lesions, we still lack a reliable animal model for regression of the disease. Researchers have reported several genetically modified and transgenic animal models that replicate human atherosclerosis, however each of current animal models have some limitations. Among these animal models, the apolipoprotein (apo) E-knockout (KO) mice have been used extensively because they develop spontaneous atherosclerosis. Furthermore, atherosclerotic lesions developed in this model depending on experimental design may resemble humans' stable and unstable atherosclerotic lesions. This mouse model of hypercholesterolemia and atherosclerosis has been also used to investigate the impact of oxidative stress and inflammation on atherogenesis. Low density lipoprotein (LDL)-r-KO mice are a model of human familial hypercholesterolemia. However, unlike apo E-KO mice, the LDL-r-KO mice do not develop spontaneous atherosclerosis. Both apo E-KO and LDL-r-KO mice have been employed to generate other relevant mouse models of cardiovascular disease through breeding strategies. In addition to mice, rabbits have been used extensively particularly to understand the mechanisms of cholesterol-induced atherosclerosis. The present review paper details the characteristics of animal models that are used in atherosclerosis research. PMID:24868511
T. Ghezzehej
2004-10-04
The purpose of this model report is to document the calibrated properties model that provides calibrated property sets for unsaturated zone (UZ) flow and transport process models (UZ models). The calibration of the property sets is performed through inverse modeling. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654], Sections 1.2.6 and 2.1.1.6). Direct inputs to this model report were derived from the following upstream analysis and model reports: ''Analysis of Hydrologic Properties Data'' (BSC 2004 [DIRS 170038]); ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004 [DIRS 169855]); ''Simulation of Net Infiltration for Present-Day and Potential Future Climates'' (BSC 2004 [DIRS 170007]); ''Geologic Framework Model'' (GFM2000) (BSC 2004 [DIRS 170029]). Additionally, this model report incorporates errata of the previous version and closure of the Key Technical Issue agreement TSPAI 3.26 (Section 6.2.2 and Appendix B), and it is revised for improved transparency.
NASA Technical Reports Server (NTRS)
Tijidjian, Raffi P.
2010-01-01
The TEAMS model analyzer is a supporting tool developed to work with models created with TEAMS (Testability, Engineering, and Maintenance System), which was developed by QSI. In an effort to reduce the time each TEAMS modeler spends in the manual preparation of reporting for model reviews, a new tool has been developed as an aid for models developed in TEAMS. The software allows for the viewing, reporting, and checking of TEAMS models that are checked into the TEAMS model database. The user can selectively view the model in a hierarchical tree outline view that displays the components, failure modes, and ports. The reporting features allow the user to quickly gather statistics about the model and generate an input/output report covering all of the components. Rules can be automatically validated against the model, with a report generated containing any resulting inconsistencies. In addition to reducing manual effort, this software also provides an automated process framework for the Verification and Validation (V&V) effort that will follow development of these models. Such an automated tool would have a significant impact on the V&V process.
Moffat, Harry K.; Noble, David R.; Baer, Thomas A.; Adolf, Douglas Brian; Rao, Rekha Ranjana; Mondy, Lisa Ann
2008-09-01
In this report, we summarize our work on developing a production-level foam processing computational model suitable for predicting the self-expansion of foam in complex geometries. The model is based on a finite element representation of the equations of motion, with the movement of the free surface represented using the level set method, and has been implemented in SIERRA/ARIA. An empirically based time- and temperature-dependent density model is used to encapsulate the complex physics of foam nucleation and growth in a numerically tractable model. The change in density with time is at the heart of the foam self-expansion, as it creates the motion of the foam. This continuum-level model uses a homogenized description of foam, which does not include the gas explicitly. Results from the model are compared to temperature-instrumented flow visualization experiments giving the location of the foam front as a function of time for our EFAR model system.
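The empirical density model described above can be sketched in a few lines. This is an illustrative stand-in, not the SIERRA/ARIA implementation: the exponential relaxation form, the Arrhenius rate, and every parameter value below are assumptions for demonstration only.

```python
import math

def foam_density(t, T, rho0=1000.0, rho_f=100.0, A=5.0e3, Ea=40.0e3, R=8.314):
    """Illustrative time- and temperature-dependent foam density (kg/m^3).

    The density relaxes exponentially from the unfoamed value rho0 toward
    the fully expanded value rho_f, with an Arrhenius rate constant.
    All parameter values are placeholders, not fitted EFAR data.
    """
    k = A * math.exp(-Ea / (R * T))          # expansion rate, 1/s
    return rho_f + (rho0 - rho_f) * math.exp(-k * t)

def expansion_ratio(t, T):
    """Volume expansion implied by mass conservation: V/V0 = rho0/rho(t)."""
    return 1000.0 / foam_density(t, T)
```

Because the density change alone drives the free-surface motion, a front-tracking scheme only needs this one empirical function to advance the foam.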
Toward Scientific Numerical Modeling
NASA Technical Reports Server (NTRS)
Kleb, Bil
2007-01-01
Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.
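Proposal (1), the in-situ documentation of input parameter uncertainties, is not specified in detail in the abstract; one minimal way to keep a value and its uncertainty inseparable might look like the following sketch. The class name, fields, and the cited source are all hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UncertainParam:
    """A model input that carries its uncertainty and provenance with it,
    so the documentation cannot drift away from the value actually used."""
    name: str
    value: float
    uncertainty: float      # one-sigma, same units as value
    units: str
    source: str             # where the value and its uncertainty came from

    def interval(self, k=2.0):
        """Return the k-sigma interval as (low, high)."""
        return (self.value - k * self.uncertainty,
                self.value + k * self.uncertainty)

# Example: a reaction-rate coefficient documented in situ
# (the value and citation are invented for illustration).
rate = UncertainParam("k_forward", 1.2e-3, 2.0e-4, "1/s",
                      "hypothetical shock-tube fit")
```

Bundling provenance with the value makes later sensitivity and uncertainty propagation possible without archaeology through lab notebooks.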
Liddle, Andrew R.; Mukherjee, Pia; Parkinson, David
2006-01-01
Model selection aims to determine which theoretical models are most plausible given some data, without necessarily asking about the preferred values of the model parameters. A common model selection question is to ask when new data require introduction of an additional parameter, describing a newly-discovered physical effect. We review several model selection statistics, and then focus on use of the Bayesian evidence, which implements the usual Bayesian analysis framework at the level of models rather than parameters. We describe our CosmoNest code, which is the first computationally-efficient implementation of Bayesian model selection in a cosmological context. We apply it to recent WMAP satellite data, examining the need for a perturbation spectral index differing from the scale-invariant (Harrison-Zel'dovich) case.
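A toy calculation can make the evidence comparison concrete. The sketch below is not CosmoNest (which uses nested sampling for efficiency); it brute-force integrates the evidence for a one-parameter model on a grid and compares it with a zero-parameter model, assuming a single Gaussian datum.

```python
import math

def gaussian_like(d, mu, sigma=1.0):
    """Likelihood of datum d under a unit-width Gaussian centered at mu."""
    return math.exp(-0.5 * ((d - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def evidence_m0(d):
    """Model 0: mu fixed at 0, no free parameter, so evidence = likelihood."""
    return gaussian_like(d, 0.0)

def evidence_m1(d, lo=-5.0, hi=5.0, n=2001):
    """Model 1: mu free with a flat prior on [lo, hi].
    Evidence = integral of likelihood * prior, via trapezoidal quadrature."""
    prior = 1.0 / (hi - lo)
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    ys = [gaussian_like(d, mu) * prior for mu in xs]
    h = (hi - lo) / (n - 1)
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

def bayes_factor_01(d):
    """B01 > 1 favors the simpler model (mu = 0)."""
    return evidence_m0(d) / evidence_m1(d)
```

For data near zero the Bayes factor favors the simpler model (the Occam penalty of the wide prior), while a datum several sigma away flips the preference to the extra parameter, which is exactly the "when do new data require a new parameter" question.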
NASA Astrophysics Data System (ADS)
Horstemeyer, M. F.
This review of multiscale modeling covers a brief history of the various multiscale methodologies related to solid materials and the associated experimental influences, the influence of multiscale modeling on different disciplines, and some examples of multiscale modeling in the design of structural components. Although computational multiscale modeling methodologies were developed in the late twentieth century, the fundamental notions of multiscale modeling have been around since da Vinci studied different sizes of ropes. The recent rapid growth in multiscale modeling is the result of the confluence of parallel computing power, experimental capabilities to characterize structure-property relations down to the atomic level, and theories that admit multiple length scales. The ubiquitous research that focuses on multiscale modeling has broached different disciplines (solid mechanics, fluid mechanics, materials science, physics, mathematics, biology, and chemistry), different regions of the world (most continents), and different length scales (from atoms to autos).
V. Chipman; J. Case
2002-12-20
The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions as outputted from the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. Revision 01 ICN 01 included the results of the unqualified software code MULTIFLUX to assess the influence of moisture on the ventilation efficiency. The purposes of Revision 02 of the Ventilation Model are: (1) To validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII-10Q, Models (Section 7.0). (2) To satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a). 
Specifically, to demonstrate, with respect to the ANSYS ventilation model, the adequacy of the discretization (Section 6.2.3.1) and the downstream applicability of the model results (i.e., wall heat fractions) to initialize post-closure thermal models (Section 6.6). (3) To satisfy the remainder of KTI agreement TEF 2.07 (Reamer and Williams 2001b). Specifically, to provide the results of post-test ANSYS modeling of the Atlas Facility forced convection tests (Section 7.1.2). This portion of the model report also serves as a validation exercise per AP-SIII.10Q, Models, for the ANSYS ventilation model. (4) To assess the impact of moisture on the ventilation efficiency.
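The bookkeeping between ventilation efficiency and wall heat fraction reduces to a complement, as the following sketch shows; the numerical values in the comment are hypothetical, not taken from the report.

```python
def wall_heat_fraction(heat_removed_by_air, heat_from_decay):
    """Fraction of decay heat conducted into the surrounding rock mass.

    The ventilation efficiency is the fraction of decay heat carried off
    by the air stream; the wall heat fraction is its complement (one minus
    the heat removal), per the report's definition.
    """
    if heat_from_decay <= 0.0:
        raise ValueError("decay heat must be positive")
    efficiency = heat_removed_by_air / heat_from_decay
    return 1.0 - efficiency

# Illustrative: if ventilation removes 0.86 kW/m of a 1.45 kW/m decay
# load, roughly 0.41 of the heat is handed to post-closure thermal models.
```

In practice both quantities vary along the drift and in time, so downstream models consume a table of wall heat fractions rather than a single number.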
Generalized Multilevel Structural Equation Modeling
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew
2004-01-01
A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent variables.
Hammerand, Daniel Carl; Scherzinger, William Mark
2007-09-01
The Library of Advanced Materials for Engineering (LAME) provides a common repository for constitutive models that can be used in computational solid mechanics codes. A number of models, including both hypoelastic (rate) and hyperelastic (total strain) constitutive forms, have been implemented in LAME. The structure and testing of LAME are described in Scherzinger and Hammerand ([3] and [4]). The purpose of the present report is to describe the material models that have already been implemented in LAME. The descriptions are designed to give useful information to both analysts and code developers. Thus far, 33 non-ITAR/non-CRADA protected material models have been incorporated. These include everything from simple isotropic linear elastic models to a number of elastic-plastic models for metals to models for honeycomb, foams, potting epoxies, and rubber. A complete description of each model is outside the scope of the current report; rather, the aim here is to delineate the properties, state variables, functions, and methods for each model. However, a brief description of some of the constitutive details is provided for a number of the material models. Where appropriate, the SAND reports available for each model have been cited. Many models have state variable aliases for some or all of their state variables. These alias names can be used for outputting desired quantities, and the state variable aliases available for results output have been listed in this report. However, not all models use these aliases; for those models, no state variable names are listed. Nevertheless, the number of state variables employed by each model is always given. Currently, there are four possible functions for a material model. This report lists which of these four methods are employed in each material model. As far as analysts are concerned, this information is included only for awareness purposes.
The analyst can take confidence in the fact that each model has been properly implemented and that the methods necessary for achieving accurate and efficient solutions have been incorporated. The most important method is the getStress function, where the actual material model evaluation takes place; all material models incorporate this function. The initialize function is included in most material models. It is called once at the beginning of an analysis, and its primary purpose is to initialize the material state variables associated with the model. Often there is information that can be set once per load step. For instance, an analysis may have temperature-dependent material properties with prescribed temperature. Instead of setting those parameters at each iteration in a time step, it is much more efficient to set them once at the beginning of the step. These load-step initializations are performed in the loadStepInit method. The final function used by many models is the pcElasticModuli method, which changes the moduli used by the elastic preconditioner in Adagio. The moduli for the elastic preconditioner are set during the initialization of Adagio. Sometimes better convergence can be achieved by changing these moduli; for instance, it typically helps to modify the preconditioner when the material model has temperature-dependent moduli. For many material models, it is not necessary to change the values of the moduli that are set initially in the code; those models do not have pcElasticModuli functions. All four of these methods receive information from the matParams structure as described by Scherzinger and Hammerand.
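The four-method structure described above can be illustrated with a minimal Python analogue. This is not LAME code (LAME is called from compiled solid mechanics codes); the signatures and the one-dimensional linear elastic example are invented to show how the responsibilities divide among initialize, loadStepInit, getStress, and pcElasticModuli.

```python
class LinearElastic:
    """Sketch of the four-method material-model interface, shown for
    small-strain 1D linear elasticity. Method names follow the report;
    the signatures and the temperature dependence are assumptions."""

    def __init__(self, youngs_modulus):
        self.E = youngs_modulus
        self.E_step = youngs_modulus
        self.state = {}

    def initialize(self):
        # Called once at the start of the analysis: set state variables.
        self.state["eqps"] = 0.0   # placeholder state variable

    def loadStepInit(self, temperature):
        # Called once per load step: evaluate temperature-dependent
        # moduli here instead of at every iteration within the step.
        self.E_step = self.E * (1.0 - 1.0e-4 * (temperature - 293.0))

    def getStress(self, strain):
        # The actual constitutive evaluation (every model has this).
        return self.E_step * strain

    def pcElasticModuli(self):
        # Moduli handed to the elastic preconditioner; only needed when
        # the moduli drift from the values set at initialization.
        return self.E_step
```

The split keeps per-analysis, per-step, and per-iteration work in separate methods, which is the efficiency point the report makes about loadStepInit.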
Gentry, S.; Taylor, J.; Stephenson, D.
1994-06-01
A unique end-to-end LIDAR sensor model has been developed supporting the concept development stage of the CALIOPE UV DIAL and UV laser-induced-fluorescence (LIF) efforts. The model focuses on preserving the temporal and spectral nature of signals as they pass through the atmosphere, are collected by the optics, detected by the sensor, and processed by the sensor electronics and algorithms. This is done by developing accurate component sub-models with realistic inputs and outputs, as well as internal noise sources and operating parameters. These sub-models are then configured using data-flow diagrams to operate together to reflect the performance of the entire DIAL system. This modeling philosophy gives the developer a realistic indication of the nature of signals throughout the system and allows components and processing to be designed in a realistic environment. Current component models include atmospheric absorption and scattering losses, plume absorption and scattering losses, background, telescope and optical filter models, a PMT (photomultiplier tube) with realistic noise sources, amplifier operation and noise, A/D converter operation, noise and distortion, pulse averaging, and DIAL computation. Preliminary results of the model will be presented, indicating the expected model operation for the October field test at the NTS spill test facility. Near-term upgrades to the model will also be indicated.
Modelling Farm Animal Welfare.
Collins, Lisa M; Part, Chérie E
2013-01-01
The use of models in the life sciences has greatly expanded in scope and advanced in technique in recent decades. However, the range, type and complexity of models used in farm animal welfare is comparatively poor, despite the great scope for use of modeling in this field of research. In this paper, we review the different modeling approaches used in farm animal welfare science to date, discussing the types of questions they have been used to answer, the merits and problems associated with the method, and possible future applications of each technique. We find that the most frequently published types of model used in farm animal welfare are conceptual and assessment models; two types of model that are frequently (though not exclusively) based on expert opinion. Simulation, optimization, scenario, and systems modeling approaches are rarer in animal welfare, despite being commonly used in other related fields. Finally, common issues such as a lack of quantitative data to parameterize models, and model selection and validation are discussed throughout the review, with possible solutions and alternative approaches suggested. PMID:26487411
NASA Technical Reports Server (NTRS)
Schoeberl, Mark; Rood, Richard B.; Hildebrand, Peter; Raymond, Carol
2003-01-01
The Earth System Model is the natural evolution of current climate models and will be the ultimate embodiment of our geophysical understanding of the planet. These models are constructed from component models (atmosphere, ocean, ice, land, chemistry, solid earth, etc.) merged together through a coupling program that is responsible for the exchange of data among the components. Climate models and future Earth system models will have standardized modules, and these standards are now being developed by the ESMF project funded by NASA. The Earth System Model will have a variety of uses beyond climate prediction. The model can be used to build climate data records, making it the core of an assimilation system, and it can be used in Observing System Simulation Experiments (OSSEs). The computing and storage requirements for the ESM appear daunting. However, the Japanese Earth Simulator's theoretical computing capability is already within 20% of the minimum requirements needed for some 2010 climate model applications. Thus it seems very possible that a focused effort to build an Earth System Model will achieve success.
Geochemical modeling: a review
Jenne, E.A.
1981-06-01
Two general families of geochemical models presently exist. The ion speciation-solubility group of geochemical models contains submodels that first calculate a distribution of aqueous species and then test the hypothesis that the water is near equilibrium with particular solid phases. These models may or may not calculate the adsorption of dissolved constituents and simulate the dissolution and precipitation (mass transfer) of solid phases. Another family of geochemical models, the reaction path models, simulates the stepwise precipitation of solid phases as a result of reacting specified amounts of water and rock. Reaction path models first perform an aqueous speciation of the dissolved constituents of the water, test solubility hypotheses, and then perform the reaction path modeling. Certain improvements in the present versions of these models would enhance their value and usefulness for applications in nuclear-waste isolation, etc. Mass-transfer calculations of limited extent are certainly within the capabilities of state-of-the-art models. However, the reaction path models require an expansion of their thermodynamic data bases and systematic validation before they are generally accepted.
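The solubility test at the core of the ion speciation-solubility family is usually expressed as a saturation index. The sketch below assumes the ion activities have already been computed by the speciation step; the calcite activities in the example are illustrative values, not field data.

```python
import math

def saturation_index(ion_activity_product, ksp):
    """SI = log10(IAP / Ksp), the solubility test of speciation-solubility
    models. SI near 0 means the water is near equilibrium with the solid;
    SI < 0 is undersaturated, SI > 0 supersaturated."""
    return math.log10(ion_activity_product / ksp)

# Calcite (CaCO3) example with illustrative activities:
a_Ca, a_CO3 = 1.2e-4, 4.0e-5        # ion activities from the speciation step
ksp_calcite = 10.0 ** -8.48         # approximate 25 degC solubility product
si = saturation_index(a_Ca * a_CO3, ksp_calcite)
```

A reaction path model repeats this test after each increment of water-rock reaction, precipitating any phase whose SI rises above zero.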
Risk modelling: which models to choose?
Csicsaky, M J; Roller, M; Pott, F
1989-01-01
Using as examples excess lung cancer mortality in coke oven workers and lung tumor induction in rats by inhalation of diesel engine emissions or cadmium chloride aerosol, the maximum likelihood estimate and the upper limit of risk were determined using a set of conventional risk models. The additional safety offered by going to the upper limit of the 95% confidence interval when deriving a unit risk value was found to be less than a factor of 5 in all but one case, and usually much less than 2. It is concluded that the selection of an adequate model is the most critical step in risk assessment, and that an additional safety factor may be required to allow for a better protection of the public in case models other than the most conservative ones come into use. PMID:2637154
Software Maintenance Maturity Model (SMmm): the software maintenance process model
Hayes, Jane E.
Software Maintenance Maturity Model (SMmm): the software maintenance process model. Alain April proposes improvements to the software maintenance standards and introduces a proposed maturity model for daily software maintenance activities: the Software Maintenance Maturity Model (SMmm). The software maintenance function suffers
Radiation Environment Modeling for Spacecraft Design: New Model Developments
NASA Technical Reports Server (NTRS)
Barth, Janet; Xapsos, Mike; Lauenstein, Jean-Marie; Ladbury, Ray
2006-01-01
A viewgraph presentation on various new space radiation environment models for spacecraft design is described. The topics include: 1) The Space Radiation Environment; 2) Effects of Space Environments on Systems; 3) Space Radiation Environment Model Use During Space Mission Development and Operations; 4) Space Radiation Hazards for Humans; 5) "Standard" Space Radiation Environment Models; 6) Concerns about Standard Models; 7) Inadequacies of Current Models; 8) Development of New Models; 9) New Model Developments: Proton Belt Models; 10) Coverage of New Proton Models; 11) Comparison of TPM-1, PSB97, AP-8; 12) New Model Developments: Electron Belt Models; 13) Coverage of New Electron Models; 14) Comparison of "Worst Case" POLE, CRESELE, and FLUMIC Models with the AE-8 Model; 15) New Model Developments: Galactic Cosmic Ray Model; 16) Comparison of NASA, MSU, CIT Models with ACE Instrument Data; 17) New Model Developments: Solar Proton Model; 18) Comparison of ESP, JPL91, King/Stassinopoulos, and PSYCHIC Models; 19) New Model Developments: Solar Heavy Ion Model; 20) Comparison of CREME96 to CREDO Measurements During 2000 and 2002; 21) PSYCHIC Heavy Ion Model; 22) Model Standardization; 23) Working Group Meeting on New Standard Radiation Belt and Space Plasma Models; and 24) Summary.
NASA Technical Reports Server (NTRS)
1985-01-01
The outside users payload model, which continues a series of documents and replaces and supersedes the July 1984 edition, is presented. The time period covered by this model is 1985 through 2000. The following sections are included: (1) definition of the scope of the model; (2) discussion of the methodology used; (3) overview of total demand; (4) summary of the estimated market segmentation by launch vehicle; (5) summary of the estimated market segmentation by user type; (6) details of the STS market forecast; (7) summary of transponder trends; (8) model overview by mission category; and (9) detailed mission models. All known non-NASA, non-DOD reimbursable payloads forecast to be flown by non-Soviet-bloc countries are included in this model, with the exception of Spacelab payloads and small self-contained payloads. Certain DOD-sponsored or cosponsored payloads are included if they are reimbursable launches.
Carcinogenesis models: An overview
Moolgavkar, S.H.
1992-12-31
Biologically based mathematical models of carcinogenesis are not only an essential part of a rational approach to quantitative cancer risk assessment but also raise fundamental questions about the nature of the events leading to malignancy. In this paper two such models are reviewed. The first is the multistage model proposed by Armitage and Doll in the 1950s; most of the paper is devoted to a discussion of the two-mutation model proposed by the author and his colleagues. This model is a generalization of the idea of recessive oncogenesis proposed by Knudson and has been shown to be consistent with a large body of epidemiologic and experimental data. The usefulness of the model is illustrated by analyzing a large experimental data set in which rats exposed to radon developed malignant lung tumors.
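The Armitage-Doll multistage model mentioned above has a simple closed form that is easy to sketch; the two-mutation model adds stochastic clonal expansion of intermediate cells and is not shown here. The constant c below lumps the stage-specific mutation rates and is purely illustrative.

```python
import math

def armitage_doll_hazard(t, k, c=1.0e-12):
    """Armitage-Doll multistage hazard: a cell must pass through k
    mutational stages, giving a hazard proportional to t**(k-1).
    The constant c lumps the stage-specific rates (illustrative value)."""
    return c * t ** (k - 1)

def cumulative_risk(t, k, c=1.0e-12):
    """P(tumor by age t) = 1 - exp(-H(t)), with cumulative hazard
    H(t) = c * t**k / k obtained by integrating the hazard."""
    H = c * t ** k / k
    return 1.0 - math.exp(-H)
```

The steep power-law rise of incidence with age was the original epidemiologic evidence that several rate-limiting events precede malignancy.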
Post, D.E.; Heifetz, D.; Petravic, M.
1982-07-01
Recent progress in models for poloidal divertors has both helped to explain current divertor experiments and contributed significantly to design efforts for future large tokamak (INTOR, etc.) divertor systems. These models range in sophistication from zero-dimensional treatments and dimensional analysis to two-dimensional models for plasma and neutral particle transport which include a wide variety of atomic and molecular processes as well as detailed treatments of the plasma-wall interaction. This paper presents a brief review of some of these models, describing the physics and approximations involved in each model. We discuss the wide variety of physics necessary for a comprehensive description of poloidal divertors. To illustrate the progress in models for poloidal divertors, we discuss some of our recent work as typical examples of the kinds of calculations being done.
NASA Technical Reports Server (NTRS)
North, G. R.; Cahalan, R. F.; Coakley, J. A., Jr.
1981-01-01
An introductory survey of the global energy balance climate models is presented with an emphasis on analytical results. A sequence of increasingly complicated models involving ice cap and radiative feedback processes are solved, and the solutions and parameter sensitivities are studied. The model parameterizations are examined critically in light of many current uncertainties. A simple seasonal model is used to study the effects of changes in orbital elements on the temperature field. A linear stability theorem and a complete nonlinear stability analysis for the models are developed. Analytical solutions are also obtained for the linearized models driven by stochastic forcing elements. In this context the relation between natural fluctuation statistics and climate sensitivity is stressed.
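The simplest member of this model hierarchy, a zero-dimensional energy balance model with linearized outgoing longwave radiation, can be stepped to equilibrium in a few lines. The parameter values follow ranges common in the energy-balance literature but are illustrative here, not taken from the survey.

```python
def ebm_equilibrium(Q=342.0, albedo=0.3, A=203.3, B=2.09, T0=0.0,
                    dt=0.1, C=1.0, n=5000):
    """Zero-dimensional energy balance model
        C dT/dt = Q * (1 - albedo) - (A + B * T),
    with outgoing longwave linearized as A + B*T (W/m^2, T in deg C)
    and absorbed solar Q*(1 - albedo). Forward-Euler stepping until
    the balance closes; the steady state is (Q*(1-albedo) - A) / B."""
    T = T0
    for _ in range(n):
        T += dt / C * (Q * (1.0 - albedo) - (A + B * T))
    return T
```

Even this toy exposes the sensitivity questions the survey studies: raising the albedo slightly lowers the equilibrium temperature, and adding a temperature-dependent albedo introduces the ice-cap feedback and multiple equilibria.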
Probabilistic Mesomechanical Fatigue Model
NASA Technical Reports Server (NTRS)
Tryon, Robert G.
1997-01-01
A probabilistic mesomechanical fatigue life model is proposed to link microstructural material heterogeneities to the statistical scatter in the macrostructural response. The macrostructure is modeled as an ensemble of microelements. Cracks nucleate within the microelements and grow from the microelements to final fracture. Variations of the microelement properties are defined using statistical parameters. A micromechanical slip-band decohesion model is used to determine the crack nucleation life and size. A crack-tip opening displacement model is used to determine the small-crack growth life and size. The Paris law is used to determine the long-crack growth life. The models are combined in a Monte Carlo simulation to determine the statistical distribution of total fatigue life for the macrostructure. The modeled response is compared to trends in experimental observations from the literature.
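The long-crack stage uses the Paris law, which is easy to sketch as a cycle-count integration; the geometry factor, material constants, and crack sizes below are placeholders for illustration, not values from the proposed model.

```python
import math

def paris_cycles(a0, af, C, m, delta_sigma, Y=1.0, steps=100000):
    """Cycles to grow a crack from a0 to af under the Paris law
        da/dN = C * (dK)**m,  dK = Y * delta_sigma * sqrt(pi * a).
    Simple explicit integration over crack length; units must be
    consistent (here meters and MPa for illustration)."""
    a = a0
    N = 0.0
    da = (af - a0) / steps
    while a < af:
        dK = Y * delta_sigma * math.sqrt(math.pi * a)
        N += da / (C * dK ** m)   # cycles consumed growing by da
        a += da
    return N
```

In the Monte Carlo loop, a draw of microstructural properties fixes the nucleation life and the starting crack size a0, and this integration supplies the remaining long-crack life.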
Extended frequency turbofan model
NASA Technical Reports Server (NTRS)
Mason, J. R.; Park, J. W.; Jaekel, R. F.
1980-01-01
The fan model was developed using two dimensional modeling techniques to add dynamic radial coupling between the core stream and the bypass stream of the fan. When incorporated into a complete TF-30 engine simulation, the fan model greatly improved compression system frequency response to planar inlet pressure disturbances up to 100 Hz. The improved simulation also matched engine stability limits at 15 Hz, whereas the one dimensional fan model required twice the inlet pressure amplitude to stall the simulation. With verification of the two dimensional fan model, this program formulated a high frequency F-100(3) engine simulation using row by row compression system characteristics. In addition to the F-100(3) remote splitter fan, the program modified the model fan characteristics to simulate a proximate splitter version of the F-100(3) engine.
Solid model design simplification
Ames, A.L.; Rivera, J.J.; Webb, A.J.; Hensinger, D.M.
1997-12-01
This paper documents an investigation of approaches to improving the quality of Pro/Engineer-created solid model data for use by downstream applications. The investigation identified a number of sources of problems caused by deficiencies in Pro/Engineer's geometric engine, and developed prototype software capable of detecting many of these problems and guiding users toward simplified, usable models. The prototype software was tested using Sandia production solid models, and provided significant leverage in attacking the simplification problem.
Young, Michael F.
2015-07-01
Aerosol particles that deposit on surfaces may be subsequently resuspended by air flowing over the surface. A review of models for this liftoff process is presented and compared to available data. Based on this review, a model that agrees with existing data and is readily computed is presented for incorporation into a system-level code such as MELCOR.
NASA Technical Reports Server (NTRS)
Kerstman, Eric; Minard, Charles; Saile, Lynn; Freiere deCarvalho, Mary; Myers, Jerry; Walton, Marlei; Butler, Douglas; Iyengar, Sriram; Johnson-Throop, Kathy; Baumann, David
2010-01-01
The goals of the Integrated Medical Model (IMM) are to develop an integrated, quantified, evidence-based decision support tool useful to crew health and mission planners and to help align science, technology, and operational activities intended to optimize crew health, safety, and mission success. Presentation slides address scope and approach, beneficiaries of IMM capabilities, history, risk components, conceptual models, development steps, and the evidence base. Space adaptation syndrome is used to demonstrate the model's capabilities.
Eric Baird
2000-11-01
Isaac Newton is usually associated with the idea of absolute space and time, and with ballistic light-corpuscle arguments. However, Newton was also a proponent of wave/particle duality, and published a "new" variable-density aether model in which light and matter trajectories were either bent by gravitational fields, or deflected by an aether density gradient. Newton's (flawed) aether model can be considered as an early attempt at a curved-space model of gravity.
Global Atmospheric Aerosol Modeling
NASA Technical Reports Server (NTRS)
Hendricks, Johannes; Aquila, Valentina; Righi, Mattia
2012-01-01
Global aerosol models are used to study the distribution and properties of atmospheric aerosol particles as well as their effects on clouds, atmospheric chemistry, radiation, and climate. The present article provides an overview of the basic concepts of global atmospheric aerosol modeling and shows some examples from a global aerosol simulation. Particular emphasis is placed on the simulation of aerosol particles and their effects within global climate models.
Atmospheric prediction model survey
NASA Technical Reports Server (NTRS)
Wellck, R. E.
1976-01-01
As part of the SEASAT Satellite program of NASA, a survey of representative primitive equation atmospheric prediction models that exist in the world today was written for the Jet Propulsion Laboratory. Seventeen models developed by eleven different operational and research centers throughout the world are included in the survey. The surveys are tutorial in nature describing the features of the various models in a systematic manner.
Rat Endovascular Perforation Model
Sehba, Fatima A.
2014-01-01
Experimental animal models of aneurysmal subarachnoid hemorrhage (SAH) have provided a wealth of information on the mechanisms of brain injury. The rat endovascular perforation (EVP) model replicates the early pathophysiology of SAH and hence is frequently used to study early brain injury following SAH. This paper presents a brief review of the historical development of the EVP model, details the technique used to create SAH, and discusses considerations necessary to overcome technical challenges. PMID:25213427
HOMER® Micropower Optimization Model
Lilienthal, P.
2005-01-01
NREL has developed the HOMER micropower optimization model. The model can analyze all of the available small power technologies individually and in hybrid configurations to identify least-cost solutions to energy requirements. This capability is valuable to a diverse set of energy professionals and applications. NREL has actively supported its growing user base and developed training programs around the model. These activities are helping to grow the global market for solar technologies.
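HOMER's core task, ranking power-system configurations by least cost, can be caricatured with a levelized-cost comparison. This sketch is not HOMER's actual algorithm (which simulates hourly dispatch over a year); the cost figures and the capital-recovery approach are illustrative assumptions.

```python
def levelized_cost(capital, annual_om, annual_fuel, annual_kwh,
                   lifetime_yr=25, discount=0.06):
    """Simple levelized cost of energy ($/kWh): annualize the capital
    cost with a capital recovery factor, then add yearly O&M and fuel.
    Mirrors the kind of ranking HOMER performs, not its internals."""
    crf = (discount * (1 + discount) ** lifetime_yr
           / ((1 + discount) ** lifetime_yr - 1))
    return (capital * crf + annual_om + annual_fuel) / annual_kwh

def least_cost(configs):
    """configs: {name: (capital, annual_om, annual_fuel, annual_kwh)}."""
    return min(configs, key=lambda k: levelized_cost(*configs[k]))

# Illustrative numbers only (dollars, serving the same 10 MWh/yr load):
options = {
    "diesel_only": (5000.0, 500.0, 4000.0, 10000.0),
    "pv_battery": (30000.0, 300.0, 0.0, 10000.0),
    "pv_diesel": (18000.0, 400.0, 1500.0, 10000.0),
}
```

Screening many such configurations, including hybrids, against one load is the "least-cost solutions to energy requirements" capability the abstract describes.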
Mandal, Esan
2004-09-30
casting turned off. A displacement map with positive displacement for opaque areas is applied to a copy of the model. A Maya feature that converts a displacement map to its equivalent polygonal mesh is used to obtain a 3D mesh of the displacement map. [Table-of-contents fragment: III.1.1 Doo-Sabin modification in Wire modeling; III.1.2 Rind modeling integration; III.1.3 Dimension control of the 3D pipes; III.1.4 Self...]
Dynamical holographic QCD model
Danning Li; Mei Huang
2014-09-30
We develop a dynamical holographic QCD model, which resembles the renormalization group from the ultraviolet (UV) to the infrared (IR). The dynamical holographic model is constructed in the graviton-dilaton-scalar framework, with the dilaton background field $\Phi$ and scalar field $X$ responsible for the gluodynamics and chiral dynamics, respectively. We summarize our results on hadron spectra, the QCD phase transition, and transport properties, including the jet quenching parameter and the shear/bulk viscosity, in the framework of the dynamical holographic QCD model.
Energy Science and Technology Software Center (ESTSC)
2005-09-28
The Sandia Material Model Driver (MMD) software package allows users to run material models from a variety of different Finite Element Model (FEM) codes in a standalone fashion, independent of the host codes. The MMD software is designed to run on a variety of different operating system platforms as a console application. Initial development efforts have resulted in a package that has been shown to be fast, convenient, and easy to use, with substantial growth potential.
Engel, D.W.; McGrail, B.P.
1993-11-01
The Office of Civilian Radioactive Waste Management and the Power Reactor and Nuclear Fuel Development Corporation of Japan (PNC) have supported the development of the Analytical Repository Source-Term (AREST) model at Pacific Northwest Laboratory. AREST is a computer model developed to evaluate radionuclide release from an underground geologic repository. The AREST code can be used to estimate the amount and rate of each radionuclide that is released from the engineered barrier system (EBS) of the repository, where the EBS is the man-made or disrupted area of the repository. AREST was designed as a system-level model to simulate the behavior of the total repository by combining process-level models for the release from an individual waste package or container. AREST contains primarily analytical models for calculating the release/transport of radionuclides to the host rock that surrounds each waste package. Analytical models were used because of their small computational overhead, which allows all the input parameters to be drawn from statistical distributions. Recently, a one-dimensional numerical model was also incorporated into AREST to allow more detailed modeling of the transport process with arbitrary-length decay chains. The next step in modeling the EBS is to develop a model that couples the probabilistic capabilities of AREST with a more detailed process model. This model will need to address the reactive coupling of the processes involved in release, including (1) the dissolution of the waste form, (2) the geochemical modeling of the groundwater, (3) the corrosion of the container overpacking, and (4) the backfill material, to name a few. Several of these coupled processes are already incorporated in the current version of AREST.
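The arbitrary-length decay chains mentioned above are classically handled with the Bateman equations. As a minimal, hypothetical sketch (a two-member parent/daughter pair with invented half-lives, not an AREST algorithm):

```python
import math

def bateman_two_member(n1_0, lam1, lam2, t):
    """Analytical Bateman solution for a parent -> daughter decay pair.
    n1_0: initial parent atoms; lam1, lam2: decay constants (1/yr); t: time (yr)."""
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n1, n2

# Hypothetical nuclide pair: parent half-life 100 yr, daughter half-life 10 yr
lam1 = math.log(2) / 100.0
lam2 = math.log(2) / 10.0
n1, n2 = bateman_two_member(1.0e6, lam1, lam2, 50.0)
```

Longer chains follow the same pattern, each member adding one more exponential term; numerical integration (as in AREST's one-dimensional model) replaces the closed form when transport is coupled in.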
Motivation Model Results Summary A generative model for feedback networks
White, Douglas R.
(slide outline) A generative model for feedback networks, D.R. White, N. Kejzar, C... Outline: 1. Motivation (an example); 2. Model; 3. Results (network properties, simulations). Cycle formation in growing networks: how to model...
Bayesian Data-Model Fit Assessment for Structural Equation Modeling
ERIC Educational Resources Information Center
Levy, Roy
2011-01-01
Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…
1997-04-01
Western Research Institute (WRI) has developed a numerical model (TCROW) to describe CROW™ processing of contaminated aquifers. CROW is a patented technology for the removal of contaminant organics from water-saturated formations by injection of hot water or low-temperature steam. TCROW is based on a fully implicit, thermal, compositional model (TSRS) previously developed by WRI. TCROW's formulation represents several enhancements and simplifications over TSRS and results in a model specifically tailored to model the CROW process.
Multifamily Envelope Leakage Model
Faakye, O.; Griffiths, D.
2015-05-01
The objective of the 2013 research project was to develop a model for predicting fully guarded test (FGT) results, using unguarded test data and specific building features of apartment units. The model developed has a coefficient of determination (R²) of 0.53 with a root mean square error (RMSE) of 0.13. Both statistical metrics indicate that the model is relatively strong. When tested against data that was not included in the development of the model, prediction accuracy was within 19%, which is reasonable given that seasonal differences in blower door measurements can vary by as much as 25%.
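For readers unfamiliar with the two metrics quoted above, R² and RMSE are computed directly from paired observed/predicted values. A small Python sketch (the example numbers are invented, not the study's data):

```python
import math

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def rmse(obs, pred):
    """Root mean square error of predictions."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

# Hypothetical leakage values: measured vs. model-predicted
obs = [0.40, 0.55, 0.62, 0.48, 0.71]
pred = [0.45, 0.50, 0.60, 0.52, 0.65]
```

An R² of 0.53 means the model explains roughly half the variance in the test data, which is why the authors pair it with RMSE and an out-of-sample accuracy check.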
Mathematical model of sarcoidosis
Hao, Wenrui; Crouser, Elliott D.; Friedman, Avner
2014-01-01
Sarcoidosis is a disease involving abnormal collection of inflammatory cells forming nodules, called granulomas. Such granulomas occur in the lung and the mediastinal lymph nodes, in the heart, and in other vital and nonvital organs. The origin of the disease is unknown, and there are only limited clinical data on lung tissue of patients. No current model of sarcoidosis exists. In this paper we develop a mathematical model on the dynamics of the disease in the lung and use patients’ lung tissue data to validate the model. The model is used to explore potential treatments. PMID:25349384
NASA Technical Reports Server (NTRS)
Sapyta, Joe; Reid, Hank; Walton, Lew
1993-01-01
The topics are presented in viewgraph form and include the following: particle bed reactor (PBR) core cross section; PBR bleed cycle; fuel and moderator flow paths; PBR modeling requirements; characteristics of PBR and nuclear thermal propulsion (NTP) modeling; challenges for PBR and NTP modeling; thermal hydraulic computer codes; capabilities for PBR/reactor application; thermal/hydraulic codes; limitations; physical correlations; comparison of predicted friction factor and experimental data; frit pressure drop testing; cold frit mask factor; decay heat flow rate; startup transient simulation; and philosophy of systems modeling.
Visualizing Risk Prediction Models
Van Belle, Vanya; Van Calster, Ben
2015-01-01
Objective: Risk prediction models can assist clinicians in making decisions. To boost the uptake of these models in clinical practice, it is important that end-users understand how the model works and can efficiently communicate its results. We introduce novel methods for interpretable model visualization. Methods: The proposed visualization techniques are applied to two prediction models from the Framingham Heart Study for the prediction of intermittent claudication and stroke after atrial fibrillation. We represent models using color bars, and visualize the risk estimation process for a specific patient using patient-specific contribution charts. Results: The color-based model representations provide users with an attractive tool to instantly gauge the relative importance of the predictors. The patient-specific representations allow users to understand the relative contribution of each predictor to the patient's estimated risk, potentially providing insightful information on which to base further patient management. Extensions towards non-linear models and interactions are illustrated on an artificial dataset. Conclusion: The proposed methods summarize risk prediction models and risk predictions for specific patients in an alternative way. These representations may facilitate communication between clinicians and patients. PMID:26176945
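For a linear (logistic) risk model, a patient-specific contribution chart boils down to the per-predictor products beta_i · x_i. A hedged Python sketch with invented coefficients (not the actual Framingham equations):

```python
import math

def risk_and_contributions(intercept, coefs, patient):
    """Per-predictor contributions (beta_i * x_i) to a logistic risk score;
    their sum plus the intercept is the linear predictor, mapped to a risk."""
    contribs = {name: beta * patient[name] for name, beta in coefs.items()}
    lin_pred = intercept + sum(contribs.values())
    risk = 1.0 / (1.0 + math.exp(-lin_pred))
    return risk, contribs

# Hypothetical coefficients and patient values, for illustration only
coefs = {"age": 0.04, "sbp": 0.02, "smoker": 0.6}
risk, contribs = risk_and_contributions(-7.0, coefs,
                                        {"age": 65, "sbp": 140, "smoker": 1})
```

Plotting each entry of `contribs` as a bar gives exactly the kind of patient-specific chart the abstract describes: the predictor with the largest bar dominates that patient's estimated risk.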
NASA Technical Reports Server (NTRS)
Levy, R.; Mcginness, H.
1976-01-01
Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
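The interim model described above furnishes uncorrelated hourly speeds that reproduce a fitted statistical distribution. Wind speeds are commonly modeled as Weibull-distributed; a Python sketch with illustrative parameters (not Goldstone-fitted values):

```python
import random

def hourly_wind_speeds(hours, scale=6.0, shape=2.0, seed=0):
    """Uncorrelated hourly wind-speed samples (m/s) drawn from a Weibull
    distribution; scale and shape here are illustrative assumptions."""
    rng = random.Random(seed)
    return [rng.weibullvariate(scale, shape) for _ in range(hours)]

day = hourly_wind_speeds(24)
```

A stochastic model of the kind also discussed would additionally impose the observed hour-to-hour autocorrelation, e.g. by filtering these draws through an autoregressive process.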
Atmospheric Science Data Center
2014-04-25
Station instrument: chemiluminescence UV ozone detector. Location: northeastern United States. Files: NE Model Readme; Hourly Surface Air Quality Ozone & Nitrogen Measurement Sites. Related data: ...
Brown-VanHoozer, S. A.
1999-06-02
Conscious awareness of our environment is based on a feedback loop comprising sensory input transmitted to the central nervous system, leading to construction of our ''model of the world'' (Lewis et al., 1982). We then assimilate the neurological model at the unconscious level into information we can later consciously consider useful in identifying belief systems and behaviors for designing diverse systems. Thus, we can avoid potential problems based on our open-to-error perceived reality of the world. By understanding how our model of reality is organized, we allow ourselves to transcend content and develop insight into how effective choices and belief systems are generated through sensory-derived processes. These are the processes that give the designer the ability to meta-model (build a model of a model) the user, consequently matching the mental model of the user with that of the designer and, coincidentally, forming rapport between the two participants. The information shared between the participants is neither assumed nor generalized; it is closer to unequivocal, thus minimizing error through a sharing of each other's model of reality. How to identify individual mental mechanisms or processes, how to organize the individual strategies of these mechanisms into useful patterns, and how to formulate these into models for success and knowledge-based outcomes is the subject of the discussion that follows.
NASA Astrophysics Data System (ADS)
Deo, N.
2002-05-01
This paper discusses random matrix models that exhibit the unusual phenomenon of having multiple solutions at the same point in phase space. These matrix models have gaps in their spectrum or density of eigenvalues. The free energy and certain correlation functions of these models show differences for the different solutions. This study presents evidence for the presence of multiple solutions both analytically and numerically. As examples, this paper discusses the double-well matrix model with potential V(M) = -(μ/2)M² + (g/4)M⁴, where M is a random N×N matrix (the M⁴ matrix model), as well as the Gaussian Penner model with V(M) = (μ/2)M² - t ln M. First, this paper studies what these multiple solutions are in the large-N limit using the recurrence coefficients of the orthogonal polynomials. Second, it discusses these solutions at the nonperturbative level to bring out some differences between the multiple solutions. Also presented are the two-point density-density correlation functions, which further characterize these models in a different universality class. A motivation for this work is that variants of these models have been conjectured to be models of certain structural glasses in the high-temperature phase.
Lightning return stroke models
NASA Technical Reports Server (NTRS)
Lin, Y. T.; Uman, M. A.; Standler, R. B.
1980-01-01
We test the two most commonly used lightning return stroke models, Bruce-Golde and transmission line, against subsequent stroke electric and magnetic field wave forms measured simultaneously at near and distant stations and show that these models are inadequate to describe the experimental data. We then propose a new return stroke model that is physically plausible and that yields good approximations to the measured two-station fields. Using the new model, we derive return stroke charge and current statistics for about 100 subsequent strokes.
NASA Astrophysics Data System (ADS)
Wiegelmann, Thomas; Petrie, Gordon J. D.; Riley, Pete
2015-07-01
Coronal magnetic field models use photospheric field measurements as boundary conditions to model the solar corona. We review in this paper the most common model assumptions, starting from MHD models, then magnetohydrostatics, force-free and finally potential field models. Each model in this list is somewhat less complex than the previous one and makes more restrictive assumptions by neglecting physical effects. The magnetohydrostatic approach neglects time-dependent phenomena and plasma flows; the force-free approach additionally neglects the gradient of the plasma pressure and the gravity force. This leads to the assumption of a vanishing Lorentz force, so that electric currents are parallel (or anti-parallel) to the magnetic field lines. Finally, the potential field approach neglects these currents as well. We outline the main assumptions, benefits and limitations of these models both from a theoretical viewpoint (how realistic are the models?) and a practical one (which computer resources do we need?). Finally we address the important problem of noisy and inconsistent photospheric boundary conditions and the possibility of using chromospheric and coronal observations to improve the models.
Modelling Hadronic Interactions
NASA Astrophysics Data System (ADS)
Wellisch, J. P.
Optimal exploitation of hadronic final states played a key role in the successes of all recent collider experiments in HEP, and the ability to use hadronic final states will continue to be one of the decisive issues during the analysis phase of the LHC experiments. Monte Carlo techniques facilitate the use of hadronic final states and have been developed for many years. We give a brief overview of the physics models underlying hadronic shower simulation, discussing the three basic types of modelling used in the geant4 toolkit (data-driven, parameterisation-driven, and theory-driven modelling) and provide comparisons with experimental data for selected models.
Modeling Imports in a Keynesian Expenditure Model
ERIC Educational Resources Information Center
Findlay, David W.
2010-01-01
The author discusses several issues that instructors of introductory macroeconomics courses should consider when introducing imports in the Keynesian expenditure model. The analysis suggests that the specification of the import function should partially, if not completely, be the result of a simple discussion about the spending and import…
ERC Expressive Seminar Models and Intuitive Modeling
Barthe, Loïc
(slide fragments) ...about highly complex topology and more general objects? Conclusion. Displays / 3D printers. Triangle meshes: fast and easy rendering of 3D objects requires the use of triangle meshes; geometric modeling relies on dedicated software such as 3DS Max, Maya, and Blender, each dedicated to a specific...
Epidemic modeling techniques for smallpox
McLean, Cory Y. (Cory Yuen Fu)
2004-01-01
Infectious disease models predict the impact of outbreaks. Discrepancies between model predictions stem from both the disease parameters used and the underlying mathematics of the models. Smallpox has been modeled extensively ...
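The interplay between disease parameters and the underlying mathematics can be seen even in the simplest compartmental model. A minimal SIR sketch (illustrative parameters, not a calibrated smallpox model):

```python
def sir(s0, i0, beta, gamma, days, dt=0.1):
    """Euler integration of the classic SIR equations, a minimal stand-in
    for the richer smallpox models discussed above."""
    s, i, r = float(s0), float(i0), 0.0
    n = s + i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt   # S -> I transitions this step
        new_rec = gamma * i * dt          # I -> R transitions this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

# Hypothetical outbreak: R0 = beta/gamma = 2.5
s, i, r = sir(9990, 10, beta=0.5, gamma=0.2, days=120)
```

Changing `beta` or `gamma` (the disease parameters) or the model structure itself (e.g. adding exposed or vaccinated compartments) shifts the predicted outbreak size, which is precisely the source of the discrepancies the abstract studies.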
General background on modeling and specifics of modeling vapor intrusion are given. Three classical model applications are described and related to the problem of petroleum vapor intrusion. These indicate the need for model calibration and uncertainty analysis. Evaluation of Bi...
Model selection in compositional spaces
Grosse, Roger Baker
2014-01-01
We often build complex probabilistic models by composing simpler models-using one model to generate parameters or latent variables for another model. This allows us to express complex distributions over the observed data ...
NASA Astrophysics Data System (ADS)
Bahr, Benjamin; Hellmann, Frank; Kamiński, Wojciech; Kisielowski, Marcin; Lewandowski, Jerzy
2011-05-01
The goal of this paper is to introduce a systematic approach to spin foams. We define operator spin foams, that is foams labelled by group representations and operators, as our main tool. A set of moves we define in the set of the operator spin foams (among other operations) allows us to split the faces and the edges of the foams. We assign to each operator spin foam a contracted operator, by using the contractions at the vertices and suitably adjusted face amplitudes. The emergence of the face amplitudes is the consequence of assuming the invariance of the contracted operator with respect to the moves. Next, we define spin foam models and consider the class of models assumed to be symmetric with respect to the moves we have introduced, and assuming their partition functions (state sums) are defined by the contracted operators. Briefly speaking, those operator spin foam models are invariant with respect to the cellular decomposition, and are sensitive only to the topology and colouring of the foam. Imposing an extra symmetry leads to a family we call natural operator spin foam models. This symmetry, combined with assumed invariance with respect to the edge splitting move, determines a complete characterization of a general natural model. It can be obtained by applying arbitrary (quantum) constraints on an arbitrary BF spin foam model. In particular, imposing suitable constraints on a spin(4) BF spin foam model is exactly the way we tend to view 4D quantum gravity, starting with the BC model and continuing with the Engle-Pereira-Rovelli-Livine (EPRL) or Freidel-Krasnov (FK) models. That makes our framework directly applicable to those models. Specifically, our operator spin foam framework can be translated into the language of spin foams and partition functions. Among our natural spin foam models there are the BF spin foam model, the BC model, and a model corresponding to the EPRL intertwiners. 
Our operator spin foam framework can also be used for more general spin foam models which are not symmetric with respect to one or more moves we consider.
Ahmed E. Hassan
2006-01-24
Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures with the results indicating they are appropriate measures for evaluating model realizations. 
The use of validation data to constrain model input parameters is shown for the second case study using a Bayesian approach known as Markov Chain Monte Carlo. The approach shows a great potential to be helpful in the validation process and in incorporating prior knowledge with new field data to derive posterior distributions for both model input and output.
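Markov Chain Monte Carlo, used above to derive posterior distributions from validation data, can be illustrated with the simplest member of the family, a random-walk Metropolis sampler. A hedged one-dimensional Python sketch targeting an invented standard-normal posterior (not the groundwater model's actual likelihood):

```python
import math
import random

def metropolis(log_post, x0, steps=5000, prop_sd=0.5, seed=1):
    """Random-walk Metropolis sampler for a one-dimensional posterior."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(steps):
        cand = x + rng.gauss(0.0, prop_sd)      # propose a nearby value
        lp_cand = log_post(cand)
        if math.log(rng.random()) < lp_cand - lp:  # accept/reject step
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Hypothetical posterior: standard normal in the model input parameter
draws = metropolis(lambda x: -0.5 * x * x, 0.0)
```

In the validation setting, `log_post` would combine the prior on a model input parameter with the likelihood of the new field data, so the retained draws form the posterior that constrains that parameter.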
Biosphere Process Model Report
J. Schmitt
2000-05-25
To evaluate the postclosure performance of a potential monitored geologic repository at Yucca Mountain, a Total System Performance Assessment (TSPA) will be conducted. Nine Process Model Reports (PMRs), including this document, are being developed to summarize the technical basis for each of the process models supporting the TSPA model. These reports cover the following areas: (1) Integrated Site Model; (2) Unsaturated Zone Flow and Transport; (3) Near Field Environment; (4) Engineered Barrier System Degradation, Flow, and Transport; (5) Waste Package Degradation; (6) Waste Form Degradation; (7) Saturated Zone Flow and Transport; (8) Biosphere; and (9) Disruptive Events. Analysis/Model Reports (AMRs) contain the more detailed technical information used to support TSPA and the PMRs. The AMRs consist of data, analyses, models, software, and supporting documentation that will be used to defend the applicability of each process model for evaluating the postclosure performance of the potential Yucca Mountain repository system. This documentation will ensure the traceability of information from its source through its ultimate use in the TSPA-Site Recommendation (SR) and in the National Environmental Policy Act (NEPA) analysis processes. The objective of the Biosphere PMR is to summarize (1) the development of the biosphere model, and (2) the Biosphere Dose Conversion Factors (BDCFs) developed for use in TSPA. The Biosphere PMR does not present or summarize estimates of potential radiation doses to human receptors. Dose calculations are performed as part of TSPA and will be presented in the TSPA documentation. The biosphere model is a component of the process to evaluate postclosure repository performance and regulatory compliance for a potential monitored geologic repository at Yucca Mountain, Nevada. The biosphere model describes those exposure pathways in the biosphere by which radionuclides released from a potential repository could reach a human receptor.
Collectively, the potential human receptor and exposure pathways form the biosphere model. More detailed technical information and data about potential human receptor groups and the characteristics of exposure pathways have been developed in a series of AMRs and Calculation Reports.
Buried Markov Model Hidden Markov
Takiguchi, Tetsuya
(extraction fragments) Buried Markov Model (BMM): an extension of the Hidden Markov Model (HMM) proposed by J. Bilmes. From "A Study on Dysarthric Speech Recognition using Buried Markov Model," by Chikoto Miyamoto, Yuto Komai...
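A BMM extends the standard HMM, whose basic inference step is the forward algorithm. As background, a minimal forward-algorithm sketch for a discrete two-state HMM (toy numbers, unrelated to the paper's speech models):

```python
def forward_likelihood(obs, init, trans, emit):
    """Forward algorithm for a discrete HMM: probability of an observation
    sequence, marginalized over all hidden state paths."""
    n = len(init)
    # alpha[s] = P(observations so far, current state = s)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [emit[s][o] * sum(alpha[p] * trans[p][s] for p in range(n))
                 for s in range(n)]
    return sum(alpha)

# Toy two-state, two-symbol HMM
init = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [[0.9, 0.1], [0.2, 0.8]]  # emit[state][symbol]
```

A BMM keeps this same dynamic-programming skeleton but adds sparse, data-driven dependencies between observation components, which is what Bilmes' extension contributes.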
ERIC Educational Resources Information Center
Gabel, Dorothy; And Others
1992-01-01
Chemistry can be described on three levels: sensory, molecular, and symbolic. Proposes a particle approach to teaching chemistry that uses magnets to help students construct molecular models and solve particle problems. Includes examples of Johnstone's model of chemistry phenomena, a problem worksheet, and a student concept mastery sheet. (MDH)
Technology Transfer Automated Retrieval System (TEKTRAN)
Pigeonpea (Cajanus cajan (L.) Millsp.) is a widely grown legume in tropical and subtropical areas. A crop simulation model that can assist in farmer decision-making was developed. The phenological module is one of the major elements of the crop model because accurate prediction of the timing of gr...
Jacob J. Jacobson; Gretchen Matthern
2007-04-01
System Dynamics is a computer-aided approach to evaluating the interrelationships of different components and activities within complex systems. Recently, System Dynamics models have been developed in areas such as policy design, biological and medical modeling, energy and environmental analysis, and various other areas in the natural and social sciences. The real power of System Dynamics modeling is gaining insight into total system behavior as time and system parameters are adjusted and the effects are visualized in real time. System Dynamics models allow decision makers and stakeholders to explore the long-term behavior and performance of complex systems, especially in the context of dynamic processes and changing scenarios, without having to wait decades to obtain field data or risk failure if a poor management or design approach is used. The Idaho National Laboratory has recently been developing a System Dynamics model of the US nuclear fuel cycle. The model is intended to be used to identify and understand interactions throughout the entire nuclear fuel cycle and suggest sustainable development strategies. This paper describes the basic framework of the current model and presents examples of useful insights gained from the model thus far with respect to sustainable development of nuclear power.
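The stock-and-flow structure at the heart of System Dynamics can be sketched in a few lines: a single stock with a constant inflow and a first-order outflow, integrated with Euler steps (the numbers are illustrative, not taken from the INL fuel-cycle model):

```python
def simulate_stock(inflow, outflow_frac, stock0, years, dt=0.25):
    """Single stock with constant inflow and first-order outflow, the basic
    building block of a System Dynamics model. Units are illustrative,
    e.g. tonnes of material per year."""
    stock, history = float(stock0), []
    for _ in range(int(years / dt)):
        stock += (inflow - outflow_frac * stock) * dt  # d(stock)/dt * dt
        history.append(stock)
    return history

# Hypothetical run: 100 units/yr in, 5%/yr out, starting empty
history = simulate_stock(inflow=100.0, outflow_frac=0.05, stock0=0.0, years=200)
```

Real System Dynamics models chain many such stocks (mines, enrichment, reactors, storage) with feedback between the flow rates; the insight comes from watching how the coupled stocks evolve as parameters are adjusted.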
Microreview Modelling malaria pathogenesis
Day, Troy
Since the development of models of malaria pathogenesis began, we have moved beyond the 'proof-of-concept' phase. Recent research has begun to iterate theory and data in a much more comprehensive way...
Automated Student Model Improvement
ERIC Educational Resources Information Center
Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.
2012-01-01
Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…
ERIC Educational Resources Information Center
Ivie, Stanley D.
2007-01-01
Humanity delights in spinning conceptual models of the world. These models, in turn, mirror their respective root metaphors. Three root metaphors--spiritual, organic, and mechanical--have dominated western thought. The spiritual metaphor runs from Plato, through Hegel, and connects with Montessori. The organic metaphor extends from Aristotle,…
ERIC Educational Resources Information Center
Flannery, Maura C.
1997-01-01
Addresses the most popular models currently being chosen for biological research and the reasons behind those choices. Among the current favorites are zebra fish, fruit flies, mice, monkeys, and yeast. Concludes with a brief examination of the ethical issues involved, and why some animals may need to be replaced in research with model systems.…
Thom, Ronald M.; Judd, Chaeli
2007-07-27
Successful restoration of wetland habitats depends on both our understanding of the system and our ability to characterize it. By developing a conceptual model, examining different spatial scales, and integrating diverse data streams (GIS datasets and NASA products), we were able to develop a dynamic model for site prioritization based on both qualitative and quantitative relationships found in the coastal environment.
Reliability model generator specification
NASA Technical Reports Server (NTRS)
Cohen, Gerald C.; Mccann, Catherine
1990-01-01
The Reliability Model Generator (RMG), a program which produces reliability models from block diagrams for ASSIST, the interface for the reliability evaluation tool SURE, is described. An account is given of the motivation for RMG, and the implemented algorithms are discussed. The appendices contain the algorithms and two detailed traces of examples.
The scope of modelling the behavior of pollutants in the aquatic environment is now immense. In many practical applications, there are effectively no computational constraints on what is possible. There is accordingly an increasing need for a set of principles of modelling that in ...
This task provides credible state of the art air quality models and guidance for use in implementation of National Ambient Air Quality Standards for ozone and PM. This research effort is to develop and improve air quality models, such as the Community Multiscale Air Quality (CMA...
CISNET: Breast Cancer Modeling
The Breast Group is in its third round of funding. Six groups and a coordinating center are funded to model modern developments in breast cancer prevention, early detection, and treatment. A unique aspect of the current round of funding is that the groups will model breast cancer as four separate sub-types (based on molecular subtypes).
CISNET's flexible broad-based disease models incorporate a central cancer model, which is modified by the full range of cancer control interventions (i.e., changing risk factor profiles of the population, new screening modalities, and treatment regimens). Outputs can include the full range of the benefits and costs of the interventions.
NASA Technical Reports Server (NTRS)
Gilland, Jim; George, Jeffrey A.
1993-01-01
Various aspects of nuclear electric propulsion (NEP) systems analysis and modeling are discussed. The following specific topics are covered: (1) systems analysis challenges; (2) goals for NEP systems analysis; (3) the Nuclear Propulsion Office approach; and (4) NEP subsystem model development. The discussion is presented in vugraph form.
Phyloclimatic Modelling Workshop
Yesson, Christopher
2012-11-13
(slide outline) Palaeohistory: fossil history (mostly pollen); the geological record (continental drift, climate); computer climate models (Alastair Culham). Gathering the evidence: fossil history is generally poor and patchy even in the best-recorded groups; pollen offers...? Modelling here relies on knowing continental positions, altitudes, sea levels, and atmospheric gas concentrations. This can be validated against fossil evidence: pollen/macrofossils and 'fossil' atmospheres from ice cores...
Edward L. Wright
2001-06-22
Models of the zodiacal light are necessary to convert measured data taken from low Earth orbit into the radiation field outside the solar system. The uncertainty in these models dominates the overall uncertainty in determining the extragalactic background light for wavelengths < 100 microns.
Despite the value and widespread use of the Ames test, little attention has been focused on standardizing quantitative methods of analyzing these data. In this paper, a realistic and statistically tractable model is developed for the evaluation of Ames-type data. The model assume...
Raby, Stuart
2010-02-10
In this talk I review some recent progress in heterotic and F theory model building. I then consider work in progress attempting to find the F theory dual to a class of heterotic orbifold models which come quite close to the MSSM.
This lecture will present AQUATOX, an aquatic ecosystem simulation model developed by Dr. Dick Park and supported by the U.S. EPA. The AQUATOX model predicts the fate of various pollutants, such as nutrients and organic chemicals, and their effects on the ecosystem, including fi...
Barchet, W.R.; Dennis, R.L.; Seilkop, S.K.; Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K.; Byun, D.; McHenry, J.N.
1991-12-01
The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs.
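The difference statistics and correlations used above to quantify model performance can be computed directly from paired observed/predicted series. A Python sketch with invented deposition values (not EMEFS data):

```python
import math

def bias_and_correlation(obs, pred):
    """Mean bias (pred - obs) and Pearson correlation, two of the difference
    statistics commonly used in model evaluation."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    bias = mp - mo
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return bias, cov / (so * sp)

# Hypothetical observed vs. model-predicted deposition values
obs = [1.2, 0.8, 2.1, 1.5, 0.9]
pred = [1.0, 0.9, 1.8, 1.6, 0.7]
bias, corr = bias_and_correlation(obs, pred)
```

A small bias with high correlation (as in this toy example) indicates the model tracks the observations' variation with only a modest systematic offset, which is the kind of summary the evaluation protocol's scatter plots and difference statistics provide.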
NASA Astrophysics Data System (ADS)
Davies, A. D.
1985-07-01
The NBS Center for Fire Research (CFR) conducts scientific research bearing on the fire safety of buildings, vehicles, tunnels and other inhabited structures. Data from controlled fire experiments are collected, analyzed and reduced to the analytical formulas that appear to underlie the observed phenomena. These results and more general physical principles are then combined into models to predict the development of environments that may be hostile to humans. This is a progress report of an applied model validation case study. The subject model is Transport of Fire, Smoke and Gases (FAST). Products from a fire in a burn room exit through a connected corridor to outdoors. Cooler counterflow air in a lower layer feeds the fire. The model predicts corridor layer temperatures and thicknesses vs. time, given enclosure, fire and ambient specifications. Data have been collected from 38 tests using several fire sizes, but have not been reduced. Corresponding model results, and model and test documentation are yet to come. Considerable modeling and calculation are needed to convert instrument readings to test results comparable with model outputs so that residual differences may be determined.
This task addresses a number of issues that arise in multimedia modeling with an emphasis on interactions among the atmosphere and multiple other environmental media. Approaches for working with multiple types of models and the data sets are being developed. Proper software tool...
AGRICULTURAL SIMULATION MODEL (AGSIM)
AGSIM is a large-scale econometric simulation model of regional crop and national livestock production in the United States. The model was initially developed to analyze the aggregate economic impacts of a wide variety of issues facing agriculture, such as technological change, pest...
ERIC Educational Resources Information Center
Parks, Melissa
2014-01-01
Model-eliciting activities (MEAs) are not new to those in engineering or mathematics, but they were new to Melissa Parks. Model-eliciting activities are simulated real-world problems that integrate engineering, mathematical, and scientific thinking as students find solutions for specific scenarios. During this process, students generate solutions…
QUAL2K (or Q2K) is a river and stream water quality model that is intended to represent a modernized version of the QUAL2E (or Q2E) model (Brown and Barnwell 1987). Q2K is similar to Q2E in the following respects:
ERIC Educational Resources Information Center
Watt, James H., Jr.
Pointing out that linear causal models can organize the interrelationships of a large number of variables, this paper contends that such models are particularly useful to mass communication research, which must by necessity deal with complex systems of variables. The paper first outlines briefly the philosophical requirements for establishing a…
Postinstability models in elasticity
NASA Technical Reports Server (NTRS)
Zak, M.
1984-01-01
It is demonstrated that the instability caused by the failure of hyperbolicity in elasticity and associated with the problem of unpredictability in classical mechanics expresses the incompleteness of the original model of an elastic medium. The instability as well as the ill-posedness of the Cauchy problem are eliminated by reformulating the original model.
A hybrid receptor model is a specified mathematical procedure which uses not only the ambient species concentration measurements that form the input data for a pure receptor model, but in addition source emission rates or atmospheric dispersion or transformation information chara...
NASA Astrophysics Data System (ADS)
Taniguchi, Tadahiro; Sawaragi, Tetsuo
In this paper, a new machine-learning method, called the Dual-Schemata model, is presented. The Dual-Schemata model is a self-organizational machine-learning method for an autonomous robot interacting with an unknown dynamical environment. It is based on Piaget's schema model, a classical psychological model explaining human memory and cognitive development. Our Dual-Schemata model is developed as a computational model of Piaget's schema model, focusing especially on the sensorimotor developmental period. This developmental process is characterized by two mutually interacting dynamics: one formed by assimilation and accommodation, and the other formed by equilibration and differentiation. Through these dynamics, the schema system enables an agent to act well in the real world. The schema differentiation process corresponds to a symbol-formation process occurring within an autonomous agent as it interacts with an unknown, dynamically changing environment. Experimental results from an autonomous facial robot embedding our model are presented; the robot becomes able to chase a ball moving in various ways without any rewards or teaching signals from outside. Moreover, the emergence of concepts about the target's movements within the robot is shown and discussed in terms of fuzzy logic on set-subset inclusion relationships.
ERIC Educational Resources Information Center
Eichinger, John
2005-01-01
Models are crucial to science teaching and learning, yet they can create unforeseen and overlooked challenges for students and teachers. For example, consider the time-tested clay volcano that relies on a vinegar-and-baking-soda mixture for its "eruption." Based on a classroom demonstration of that geologic model, elementary students may interpret…
VENTURI SCRUBBER PERFORMANCE MODEL
The paper presents a new model for predicting the particle collection performance of venturi scrubbers. It assumes that particles are collected by atomized liquid only in the throat section. The particle collection mechanism is inertial impaction, and the model uses a single drop...
Multilevel Mixture Factor Models
ERIC Educational Resources Information Center
Varriale, Roberta; Vermunt, Jeroen K.
2012-01-01
Factor analysis is a statistical method for describing the associations among sets of observed variables in terms of a small number of underlying continuous latent variables. Various authors have proposed multilevel extensions of the factor model for the analysis of data sets with a hierarchical structure. These Multilevel Factor Models (MFMs)…
MCMC Estimation Multilevel Models
Browne, William J.
MCMC Estimation of Multilevel Models in the MLwiN software package, William Browne. Part of the Multilevel Models of Education project, headed by Professor Harvey Goldstein and funded by the ESRC (originally through ALCD). Covers IGLS estimation and bootstrapping; the MCMC algorithms and engine were developed by William Browne (and David Draper).
Structural Equation Model Trees
ERIC Educational Resources Information Center
Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman
2013-01-01
In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree…
Technology Transfer Automated Retrieval System (TEKTRAN)
Models of wind erosion are used to investigate fundamental processes and guide resource management. Many models are similar in that - temporal variables control soil wind erodibility; erosion begins when friction velocity exceeds a threshold; and transport capacity for saltation/creep is proportion...
Rice, Ken
Model Fitting. Thomas Lumley and Ken Rice, Seattle, June 2009. Covers regression commands, including a figure of two levels plotted by genotype (single SNP).
ERIC Educational Resources Information Center
New Mexico Univ., Albuquerque. American Indian Law Center.
The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…
This magnetic tape contains the FORTRAN source code, sample input data, and sample output data for the Photochemical Box Model (PBM). The PBM is a simple stationary single-cell model with a variable height lid designed to provide volume-integrated hour averages of O3 and other ph...
ERIC Educational Resources Information Center
Weinburgh, Molly; Silva, Cecilia
2011-01-01
For the past five summers, the authors have taught summer school to recent immigrants and refugees. Their experiences with these fourth-grade English language learners (ELL) have taught them the value of using models to build scientific and mathematical concepts. In this article, they describe the use of different forms of 2- and 3-D models to…
ERIC Educational Resources Information Center
Baker, William P.; Moore, Cathy Ronstadt
1998-01-01
Understanding antibody structure and function is difficult for many students. The rearrangement of constant and variable regions during antibody differentiation can be effectively simulated using a paper model. Describes a hands-on laboratory exercise which allows students to model antibody diversity using readily available resources. (PVD)
Canister Model, Systems Analysis
Energy Science and Technology Software Center (ESTSC)
1993-09-29
This package provides a computer simulation of a systems model for packaging nuclear waste and spent nuclear fuel in canisters. The canister model calculates overall programmatic cost, number of canisters, and fuel and waste inventories for the Idaho Chemical Processing Plant (other initial conditions can be entered).
NASA Technical Reports Server (NTRS)
Knezovich, F. M.
1976-01-01
A modular structured system of computer programs is presented utilizing earth and ocean dynamical data keyed to finitely defined parameters. The model is an assemblage of mathematical algorithms with an inherent capability of maturation with progressive improvements in observational data frequencies, accuracies and scopes. The EOM in its present state is a first-order approach to a geophysical model of the earth's dynamics.
Technology Transfer Automated Retrieval System (TEKTRAN)
Water quality models are based on some representation of hydrology and may include movement of surface water, ground water, and mixing of water in lakes and water bodies. Water quality models simulate some combination of sediment, nutrients, heavy metals, xenobiotics, and aquatic biology. Althoug...
Dasymetric Modeling and Uncertainty
Nagle, Nicholas N.; Buttenfield, Barbara P.; Leyk, Stefan; Speilman, Seth
2014-01-01
Dasymetric models increase the spatial resolution of population data by incorporating related ancillary data layers. The role of uncertainty in dasymetric modeling has not yet been fully addressed. Uncertainty is usually present because most population data are themselves uncertain, and/or the geographic processes that connect population and the ancillary data layers are not precisely known. A new dasymetric methodology, the Penalized Maximum Entropy Dasymetric Model (P-MEDM), is presented that enables these sources of uncertainty to be represented and modeled. The P-MEDM propagates uncertainty through the model and yields fine-resolution population estimates with associated measures of uncertainty. This methodology contains a number of other benefits of theoretical and practical interest. In dasymetric modeling, researchers often struggle with identifying a relationship between population and ancillary data layers. The P-MEDM simplifies this step by unifying how ancillary data are included. The P-MEDM also allows a rich array of data to be included, with disparate spatial resolutions, attribute resolutions, and uncertainties. While the P-MEDM does not necessarily produce more precise estimates than do existing approaches, it does help to unify how data enter the dasymetric model, it increases the types of data that may be used, and it allows geographers to characterize the quality of their dasymetric estimates. We present an application of the P-MEDM that includes household-level survey data combined with higher spatial resolution data such as from census tracts, block groups, and land cover classifications. PMID:25067846
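The core dasymetric step, before any uncertainty modeling, is proportional reallocation of a coarse-zone population total over fine cells using ancillary weights. A minimal sketch of that step (the function name and weights are illustrative; this is not the P-MEDM itself):

```python
import numpy as np

def dasymetric_allocate(zone_total, weights):
    """Redistribute one zone's population over its fine-grained cells
    in proportion to ancillary weights (e.g. land-cover-based
    densities). Cells with zero weight receive no people."""
    w = np.asarray(weights, dtype=float)
    if w.sum() == 0:
        raise ValueError("at least one cell must have positive weight")
    return zone_total * w / w.sum()

# 1000 people spread over 4 cells: urban cells weighted 3x, one cell uninhabitable
cells = dasymetric_allocate(1000, [3, 3, 1, 0])
```

The P-MEDM replaces the fixed weights with a penalized maximum-entropy estimate that also carries uncertainty through to the cell populations.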
Modelling University Governance
ERIC Educational Resources Information Center
Trakman, Leon
2008-01-01
Twentieth century governance models used in public universities are subject to increasing doubt across the English-speaking world. Governments question if public universities are being efficiently governed; if their boards of trustees are adequately fulfilling their trust obligations towards multiple stakeholders; and if collegial models of…
NASA Technical Reports Server (NTRS)
Sellers, Piers
2012-01-01
Model results will be reviewed to assess different methods for bounding the terrestrial role in the global carbon cycle. It is proposed that a series of climate model runs could be scoped that would tighten the limits on the "missing sink" of terrestrial carbon and could also direct future satellite image analyses to search for its geographical location and understand its seasonal dynamics.
Tan, Chew Lim
MODELING FASHION. Qi Chen, Gang Wang, Chew Lim Tan. Department of Computer Science, National University of Singapore ({...}@comp.nus.edu.sg, wanggang@ntu.edu.sg). ABSTRACT: We propose a method to try to model fashionable dresses in this paper, using a clustering approach. A fashionable dress is expected to contain certain visual patterns which make
Raby, Stuart
2008-11-23
In this talk I discuss the evolution of SUSY GUT model building as I see it. Starting with 4 dimensional model building, I then consider orbifold GUTs in 5 dimensions and finally orbifold GUTs embedded into the E{sub 8}xE{sub 8} heterotic string.
Modelling extended chromospheres
NASA Technical Reports Server (NTRS)
Linsky, J. L.
1986-01-01
Attention is given to the concept that the warm, partially ionized plasma (presently called chromosphere) associated with such stars as Alpha Boo and Rho Per extends outwards at least several photospheric radii. Calculations are presented for the Mg II K line in light of two input model atmospheres. Specific predictions are deduced from the results obtained by each of the two models.
Computational Modeling of Tires
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (compiler); Tanner, John A. (compiler)
1995-01-01
This document contains presentations and discussions from the joint UVA/NASA Workshop on Computational Modeling of Tires. The workshop attendees represented NASA, the Army and Air Force, tire companies, commercial software developers, and academia. The workshop objectives were to assess the state of technology in the computational modeling of tires and to provide guidelines for future research.
Preliminary semiempirical transport models
Singer, C.E.
1983-11-01
A class of semiempirical transport models is proposed for testing against confinement data from tokamaks and for use in operations planning and machine design. A reference model is proposed to be compatible with published confinement data. Theoretical considerations are used to express the anomalous transport coefficients in terms of appropriate dimensionless parameters.
Animal models for osteoporosis
NASA Technical Reports Server (NTRS)
Turner, R. T.; Maran, A.; Lotinun, S.; Hefferan, T.; Evans, G. L.; Zhang, M.; Sibonga, J. D.
2001-01-01
Animal models will continue to be important tools in the quest to understand the contribution of specific genes to establishment of peak bone mass and optimal bone architecture, as well as the genetic basis for a predisposition toward accelerated bone loss in the presence of co-morbidity factors such as estrogen deficiency. Existing animal models will continue to be useful for modeling changes in bone metabolism and architecture induced by well-defined local and systemic factors. However, there is a critical unfulfilled need to develop and validate better animal models to allow fruitful investigation of the interaction of the multitude of factors which precipitate senile osteoporosis. Well-characterized and validated animal models that can be recommended for investigation of the etiology, prevention and treatment of several forms of osteoporosis have been listed in Table 1. Also listed are models which are provisionally recommended. These latter models have potential but are inadequately characterized, deviate significantly from the human response, require careful choice of strain or age, or are not practical for most investigators to adopt. It cannot be stressed strongly enough that the enormous potential of laboratory animals as models for osteoporosis can only be realized if great care is taken in the choice of an appropriate species, age, experimental design, and measurements. Poor choices will result in misinterpretation of results, which ultimately can bring harm to patients who suffer from osteoporosis by delaying advancement of knowledge.
Stuart Raby
2009-11-06
In this talk I review some recent progress in heterotic and F theory model building. I then consider work in progress attempting to find the F theory dual to a class of heterotic orbifold models which come quite close to the MSSM.
Stuart Raby
2007-10-19
I review some of the latest directions in supersymmetric model building, focusing on SUSY breaking mechanisms in the minimal supersymmetric standard model [MSSM], the "little" hierarchy and $\\mu$ problems, etc. I then discuss SUSY GUTs and UV completions in string theory.
Holford, Theodore R.; Ebisu, Keita; McKay, Lisa; Oh, Cheongeun; Zheng, Tongzhang
2013-01-01
The age-period-cohort model is known to provide an excellent description of the temporal trends in lung cancer incidence and mortality. This analytic approach is extended to include the contribution of carcinogenesis models for smoking. The usefulness of this strategy is that it offers a way to temporally calibrate a model fitted to population data, and it can be readily adapted to the consideration of many different models. In addition, it provides diagnostics that can suggest temporal limitations of a particular carcinogenesis model in describing population rates. Alternative carcinogenesis models can be embedded within this framework. The two-stage clonal expansion model is implemented here. The model was used to estimate the impact of tobacco control following dissemination of knowledge of the harmful effects of cigarette smoking, by comparing the observed number of lung cancer deaths both to the number expected had there been no control and to an ideal of complete control beginning in 1965. Results indicate that 35.2% and 26.5% of the lung cancer deaths that could have been avoided actually were, for males and females respectively. PMID:22882886
Baldwin, John T.
Using Set Theory in Model Theory. John T. Baldwin. Introduction: pseudoclosure and pseudo-minimality. Includes the definition that the atomic class K_T of the countable first-order theory T is extendible if there is a pair M, N ...; such a class is usually not atomic.
Newville, Matthew
The XAFS Model Compound Library contains XAFS data on model compounds. The term "model compounds" refers to compounds of homogeneous and well-known crystallographic or molecular structure. Each data file in this library has an associated atoms.inp file that can be converted to a feff.inp file using the program ATOMS. (See the related Searchable Atoms.inp Archive at http://cars9.uchicago.edu/~newville/adb/) This Library exists because XAFS data on model compounds is useful for several reasons, including comparing to unknown data for "fingerprinting" and testing calculations and analysis methods. The collection here is currently limited, but is growing. The focus to date has been on inorganic compounds and minerals of interest to the geochemical community. [Copied, with editing, from http://cars9.uchicago.edu/~newville/ModelLib/]
NASA Technical Reports Server (NTRS)
North, G. R.; Crowley, T. J.
1984-01-01
Mathematical climate modelling has matured as a discipline to the point that it is useful in paleoclimatology. As an example a new two dimensional energy balance model is described and applied to several problems of current interest. The model includes the seasonal cycle and the detailed land-sea geographical distribution. By examining the changes in the seasonal cycle when external perturbations are forced upon the climate system it is possible to construct hypotheses about the origin of midlatitude ice sheets and polar ice caps. In particular the model predicts a rather sudden potential for glaciation over large areas when the Earth's orbital elements are only slightly altered. Similarly, the drift of continents or the change of atmospheric carbon dioxide over geological time induces radical changes in continental ice cover. With the advance of computer technology and improved understanding of the individual components of the climate system, these ideas will be tested in far more realistic models in the near future.
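A zero-dimensional analogue illustrates the energy-balance principle behind such models (the model described above is two-dimensional and seasonal; the coefficients below are conventional textbook values, not those of the paper):

```python
def equilibrium_temperature(S=1361.0, albedo=0.30, A=203.3, B=2.09):
    """Zero-dimensional energy balance: absorbed shortwave
    S * (1 - albedo) / 4 balances outgoing longwave A + B*T,
    with fluxes in W/m^2 and T in deg C (Budyko-style linearization)."""
    absorbed = S * (1.0 - albedo) / 4.0
    return (absorbed - A) / B

T_eq = equilibrium_temperature()                 # present-day-like forcing
T_ice = equilibrium_temperature(albedo=0.60)     # high-albedo (ice-covered) branch
```

The strong cooling on the high-albedo branch is the zero-dimensional shadow of the sudden glaciation potential the 2-D model exhibits under small orbital perturbations.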
Pupo, Amaury; Baez-Nieto, David; Martínez, Agustín; Latorre, Ramón; González, Carlos
2014-01-01
Voltage-gated proton channels are integral membrane proteins with the capacity to permeate elementary particles in a voltage and pH dependent manner. These proteins have been found in several species and are involved in various physiological processes. Although their primary topology is known, lack of details regarding their structures in the open conformation has limited analyses toward a deeper understanding of the molecular determinants of their function and regulation. Consequently, the function-structure relationships have been inferred based on homology models. In the present work, we review the existing proton channel models, their assumptions, predictions and the experimental facts that support them. Modeling proton channels is not a trivial task due to the lack of a close homolog template. Hence, there are important differences between published models. This work attempts to critically review existing proton channel models toward the aim of contributing to a better understanding of the structural features of these proteins. PMID:24755912
Hughes, T.J.; Fastook, J.L.
1994-05-01
The University of Maine conducted this study for Pacific Northwest Laboratory (PNL) as part of a global climate modeling task for site characterization of the potential nuclear waste repository site at Yucca Mountain, NV. The purpose of the study was to develop a global ice sheet dynamics model that will forecast the three-dimensional configuration of global ice sheets for specific climate change scenarios. The objective of the third (final) year of the work was to produce ice sheet data for glaciation scenarios covering the next 100,000 years. This was accomplished using both the map-plane and flowband solutions of our time-dependent, finite-element gridpoint model. The theory and equations used to develop the ice sheet models are presented. Three future scenarios were simulated by the model and results are discussed.
Qiong-Tao Xie; Shuai Cui; Jun-Peng Cao; Luigi Amico; Heng Fan
2014-05-20
We define the anisotropic Rabi model as the generalization of the spin-boson Rabi model: the Hamiltonian breaks the parity symmetry; the rotating and counter-rotating interactions are governed by two different coupling constants; a further parameter introduces a phase factor in the counter-rotating terms. The exact energy spectrum and eigenstates of the generalized model are worked out. The solution is obtained as an elaboration of a recently proposed method for the isotropic limit of the model. In this way, we provide a long-sought solution of a cascade of models with immediate relevance in different physical fields, including i) quantum optics: a two-level atom in single-mode crossed electric and magnetic fields; ii) solid-state physics: electrons in semiconductors with Rashba and Dresselhaus spin-orbit coupling; iii) mesoscopic physics: Josephson-junction flux-qubit quantum circuits.
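Under the stated structure of the model (separate rotating and counter-rotating couplings g1 and g2, with a phase on the counter-rotating terms; this numerical sketch is an assumption-laden illustration, not the paper's exact analytic solution), the spectrum can be checked by diagonalization in a truncated Fock basis:

```python
import numpy as np

def anisotropic_rabi_spectrum(omega, delta, g1, g2, phi=0.0, n_max=40):
    """Eigenvalues of an anisotropic Rabi Hamiltonian
    H = omega a^dag a + (delta/2) sigma_z
        + g1 (a sigma_+ + a^dag sigma_-)                       # rotating
        + g2 (e^{i phi} a^dag sigma_+ + e^{-i phi} a sigma_-)  # counter-rotating
    in a Fock basis truncated at n_max photons."""
    a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)  # boson annihilation
    ad = a.conj().T
    I_f = np.eye(n_max)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)  # basis: |up>, |down>
    sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_+
    sm = sp.conj().T
    I_s = np.eye(2)
    H = (omega * np.kron(ad @ a, I_s)
         + 0.5 * delta * np.kron(I_f, sz)
         + g1 * (np.kron(a, sp) + np.kron(ad, sm))
         + g2 * (np.exp(1j * phi) * np.kron(ad, sp)
                 + np.exp(-1j * phi) * np.kron(a, sm)))
    return np.linalg.eigvalsh(H)  # real, ascending

levels = anisotropic_rabi_spectrum(omega=1.0, delta=1.0, g1=0.2, g2=0.0)
```

With g2 = 0 the model reduces to the Jaynes-Cummings (isotropic rotating-wave) limit, whose low-lying dressed levels the truncated diagonalization reproduces exactly at resonance.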
John Weinstein
1996-06-19
We discuss the non-relativistic multichannel quark model and describe the techniques developed to solve the resulting equations. We then investigate some simple solutions to demonstrate how the model unifies meson-meson scattering with meson spectroscopy, thereby greatly extending the domain of applicability of the naive quark model. In the limits of narrow resonance widths and no quark exchange, it reproduces the standard quark model spectroscopy and Breit-Wigner phase description. Outside those limits s-channel resonance masses are lowered by their two-meson couplings, the line-shapes of wide resonances are significantly altered, and the equivalent Breit-Wigner masses and widths show an energy dependence. Because meson-meson interactions are due to coherent s-channel resonance production and t-channel quark exchange (though other interactions can readily be added), the multichannel equations model experimental resonance production and decay in a way that the usual eigenvalue equations cannot.
Integrated Environmental Control Model
Energy Science and Technology Software Center (ESTSC)
1999-09-03
IECM is a powerful multimedia engineering software program for simulating an integrated coal-fired power plant. It provides a capability to model various conventional and advanced processes for controlling air pollutant emissions from coal-fired power plants before, during, or after combustion. The principal purpose of the model is to calculate the performance, emissions, and cost of power plant configurations employing alternative environmental control methods. The model consists of various control technology modules, which may be integrated into a complete utility plant in any desired combination. In contrast to conventional deterministic models, the IECM offers the unique capability to assign probabilistic values to all model input parameters, and to obtain probabilistic outputs in the form of cumulative distribution functions indicating the likelihood of different costs and performance results. A Graphical User Interface (GUI) facilitates the configuration of the technologies, entry of data, and retrieval of results.
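The probabilistic capability described above amounts to Monte Carlo propagation of input distributions through the plant model to produce empirical cumulative distribution functions. A toy sketch (the cost expression and the distributions are hypothetical, not IECM's modules):

```python
import random

def simulate_cost_cdf(n_samples=10000, seed=42):
    """Propagate probabilistic inputs through a toy cost model
    (hypothetical: cost = capital + fuel_price * usage) and return
    the sorted samples, i.e. an empirical CDF."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        capital = rng.uniform(90.0, 110.0)  # uncertain capital cost
        fuel = rng.gauss(3.0, 0.3)          # uncertain fuel price
        usage = rng.uniform(8.0, 12.0)      # uncertain utilization
        samples.append(capital + fuel * usage)
    samples.sort()
    return samples

costs = simulate_cost_cdf()
median_cost = costs[len(costs) // 2]   # 50th percentile of the CDF
p95 = costs[int(0.95 * len(costs))]    # 95th percentile
```

Reading percentiles off the sorted samples is exactly the "likelihood of different costs" interpretation of the CDF outputs.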
NASA Technical Reports Server (NTRS)
Badler, N. I.; Lee, P.; Wong, S.
1985-01-01
Strength modeling is a complex and multi-dimensional issue. There are numerous parameters to the problem of characterizing human strength, most notably: (1) position and orientation of body joints; (2) isometric versus dynamic strength; (3) effector force versus joint torque; (4) instantaneous versus steady force; (5) active force versus reactive force; (6) presence or absence of gravity; (7) body somatotype and composition; (8) body (segment) masses; (9) muscle group involvement; (10) muscle size; (11) fatigue; and (12) practice (training) or familiarity. In surveying the available literature on strength measurement and modeling, an attempt was made to examine as many of these parameters as possible. The conclusions reached point toward the feasibility of implementing computationally reasonable human strength models. The assessment of accuracy of any model against a specific individual, however, will probably not be possible on any realistic scale. Taken statistically, strength modeling may be an effective tool for general questions of task feasibility and strength requirements.
Cardiovascular modeling and diagnostics
Kangas, L.J.; Keller, P.E.; Hashem, S.; Kouzes, R.T.
1995-12-31
In this paper, a novel approach to modeling and diagnosing the cardiovascular system is introduced. A model exhibits a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. Potentially, a model will be incorporated into a cardiovascular diagnostic system. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the variables of an individual at a given time are used for diagnosis. This approach also exploits sensor fusion to optimize the utilization of biomedical sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.
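The diagnostic idea above, comparing an individual's measured variables with the personalized model's predictions and flagging large residuals, can be sketched as follows; the variable names, expected spreads, and threshold are illustrative, not from the paper:

```python
def flag_anomalies(measured, modeled, threshold=3.0):
    """Flag any variable whose residual (measured - modeled) exceeds
    `threshold` times its expected spread. Spreads are hypothetical
    per-variable scales, standing in for model residual statistics."""
    expected_spread = {"heart_rate": 5.0, "systolic_bp": 8.0, "spo2": 1.5}
    flags = {}
    for name, spread in expected_spread.items():
        residual = measured[name] - modeled[name]
        flags[name] = abs(residual) > threshold * spread
    return flags

flags = flag_anomalies(
    measured={"heart_rate": 112.0, "systolic_bp": 121.0, "spo2": 97.0},
    modeled={"heart_rate": 72.0, "systolic_bp": 118.0, "spo2": 97.5},
)
```

In the paper's approach the "modeled" values would come from the recurrent network trained on that individual's physiological measurements.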
NASA Astrophysics Data System (ADS)
Dehant, Véronique
2005-01-01
The nutation model that was adopted by the IAU in 2000 is the semi-analytical model MHB2000 of Mathews et al. (JGR, 107(B4), doi:10.1029/2001JB000390). We show how robust this model is and examine the information about the interior of the Earth that has been derived from it. The observations used to derive the parameters of MHB2000, as well as the amplitude of the Earth's Free Core Nutation (FCN), are examined in terms of their stability and precision. We examine in parallel the possibilities that are provided by a numerical integration model. Additional contributions from the external geophysical fluids (atmosphere, ocean) are also studied. The extension of this model to short-term polar motion induced by lunisolar forcing is examined as well. The conclusions of the WG related to that work are given.
V. Chipman
2002-10-31
The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions as output from the Ventilation Model to initialize their postclosure analyses.
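The bookkeeping described above can be expressed directly; the numbers in the example are hypothetical, not Ventilation Model outputs:

```python
def wall_heat_fraction(heat_removed_by_air, decay_heat):
    """Wall heat fraction = 1 - (heat carried away by ventilation air /
    heat produced by radionuclide decay); the remainder is conducted
    into the surrounding rock mass."""
    if decay_heat <= 0:
        raise ValueError("decay heat must be positive")
    removal = heat_removed_by_air / decay_heat
    return 1.0 - removal

# e.g. ventilation carries off 70% of decay heat at some time and position
f_wall = wall_heat_fraction(heat_removed_by_air=7.0, decay_heat=10.0)
```

In practice both quantities vary with time and position along the drift, so the fraction is evaluated per time step and location.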
Trelles, J P; Vardelle, A; Heberlein, J V R
2013-01-01
Arc plasma torches are the primary components of various industrial thermal plasma processes involving plasma spraying, metal cutting and welding, thermal plasma CVD, metal melting and remelting, waste treatment and gas production. They are relatively simple devices whose operation implies intricate thermal, chemical, electrical, and fluid dynamics phenomena. Modeling may be used as a means to better understand the physical processes involved in their operation. This paper presents an overview of the main aspects involved in the modeling of DC arc plasma torches: the mathematical models including thermodynamic and chemical non-equilibrium models, turbulent and radiative transport, thermodynamic and transport property calculation, boundary conditions and arc reattachment models. It focuses on the conventional plasma torches used for plasma spraying that include a hot-cathode and a nozzle anode.
Jones, Katherine A.; Finley, Patrick D.; Moore, Thomas W.; Nozick, Linda Karen; Martin, Nathaniel; Bandlow, Alisa; Detry, Richard Joseph; Evans, Leland B.; Berger, Taylor Eugen
2013-09-01
Infectious diseases can spread rapidly through healthcare facilities, resulting in widespread illness among vulnerable patients. Computational models of disease spread are useful for evaluating mitigation strategies under different scenarios. This report describes two infectious disease models built for the US Department of Veteran Affairs (VA) motivated by a Varicella outbreak in a VA facility. The first model simulates disease spread within a notional contact network representing staff and patients. Several interventions, along with initial infection counts and intervention delay, were evaluated for effectiveness at preventing disease spread. The second model adds staff categories, location, scheduling, and variable contact rates to improve resolution. This model achieved more accurate infection counts and enabled a more rigorous evaluation of comparative effectiveness of interventions.
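A minimal stochastic outbreak simulation on a random contact network, in the spirit of the first model described above (all parameters are hypothetical, not the VA models' calibration):

```python
import random

def simulate_outbreak(n=200, contacts_per_person=5, p_transmit=0.05,
                      days_infectious=7, initial_infected=2, seed=1):
    """SIR-style spread on a random static contact network.
    Returns the total number of people ever infected."""
    rng = random.Random(seed)
    # build a random contact network
    neighbors = [set() for _ in range(n)]
    for i in range(n):
        while len(neighbors[i]) < contacts_per_person:
            j = rng.randrange(n)
            if j != i:
                neighbors[i].add(j)
                neighbors[j].add(i)
    state = ["S"] * n          # Susceptible, Infectious, or Recovered
    days_left = [0] * n
    for i in rng.sample(range(n), initial_infected):
        state[i], days_left[i] = "I", days_infectious
    while "I" in state:
        for i in [k for k in range(n) if state[k] == "I"]:
            for j in neighbors[i]:  # daily transmission attempts
                if state[j] == "S" and rng.random() < p_transmit:
                    state[j], days_left[j] = "I", days_infectious
            days_left[i] -= 1
            if days_left[i] == 0:
                state[i] = "R"
    return sum(1 for s in state if s != "S")

total = simulate_outbreak()
```

Interventions such as isolation or vaccination would be modeled by removing edges or pre-assigning "R" states before the loop, which is how comparative effectiveness is evaluated in such models.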
Composite technicolor standard models
Chivukula, R.S.
1987-01-01
In this thesis we introduce the idea of Composite Technicolor Standard Models (CTSM). In these models the quarks, leptons, and technifermions are assumed to be composite particles built from fermions (preons) bound by strong gauge interactions. We argue that if the preon dynamics respects an (SU(3) × U(1))^5 flavor symmetry that is explicitly broken only by preon mass terms which are proportional to the quark and lepton mass matrices, then the theory has a natural GIM mechanism which suppresses dangerous flavor-changing neutral currents. We show that CTSM effects give rise to a number of small, but observable, deviations from the standard model of electroweak interactions. We discuss the difficulties with anomaly constraints and flavor symmetry breaking involved in building a Composite Technicolor Standard Model. We conclude by constructing a model of quarks with the required symmetry properties.
Saturn Radiation (SATRAD) Model
NASA Technical Reports Server (NTRS)
Garrett, H. B.; Ratliff, J. M.; Evans, R. W.
2005-01-01
The Saturnian radiation belts have not received as much attention as the Jovian radiation belts because they are not nearly as intense: the famous Saturnian particle rings tend to deplete the belts near where their peak would occur. As a result, there has not been a systematic development of engineering models of the Saturnian radiation environment for mission design. A primary exception is that of Divine (1990). That study used published data from several charged particle experiments aboard the Pioneer 11, Voyager 1, and Voyager 2 spacecraft during their flybys at Saturn to generate numerical models for the electron and proton radiation belts between 2.3 and 13 Saturn radii. The Divine Saturn radiation model described the electron distributions at energies between 0.04 and 10 MeV and the proton distributions at energies between 0.14 and 80 MeV. The model was intended to predict particle intensity, flux, and fluence for the Cassini orbiter. Divine carried out hand calculations using the model but never formally developed a computer program that could be used for general mission analyses. This report seeks to fill that void by formally developing a FORTRAN version of the model that can be used as a computer design tool for missions to Saturn that require estimates of the radiation environment around the planet. The results of that effort and the program listings are presented here along with comparisons with the original estimates carried out by Divine. In addition, Pioneer and Voyager data were scanned in from the original references and compared with the FORTRAN model's predictions. The results were statistically analyzed in a manner consistent with Divine's approach to provide estimates of the ability of the model to reproduce the original data. Results of a formal review of the model by a panel of experts are also presented. Their recommendations for further tests, analyses, and extensions to the model are discussed.
Spiral model pilot project information model
NASA Technical Reports Server (NTRS)
1991-01-01
The objective was an evaluation of the Spiral Model (SM) development approach to allow NASA Marshall to develop an experience base of that software management methodology. A discussion is presented of the Information Model (IM) that was used as part of the SM methodology. A key concept of the SM is the establishment of an IM to be used by management to track the progress of a project. The IM is the set of metrics that is to be measured and reported throughout the life of the project. These metrics measure both the product and the process to ensure the quality of the final delivery item and to ensure the project met programmatic guidelines. The beauty of the SM, along with the IM, is the ability to measure not only the correctness of the specification and implementation of the requirements but also to obtain a measure of customer satisfaction.
Modeling cytomegalovirus infection in mouse tumor models.
Price, Richard Lee; Chiocca, Ennio Antonio
2015-01-01
The hypothesis that cytomegalovirus (CMV) modulates cancer is evolving. Originally discovered in glioblastoma in 2002, the number of cancers in which intratumoral CMV antigen is detected has increased in recent years, suggesting that CMV actively affects the pathobiology of certain tumors. These findings are controversial, as several groups have also reported an inability to replicate these results. Regardless, several clinical trials for glioblastoma are underway or have been completed that target intratumoral CMV with antiviral drugs or immunotherapy. Therefore, a better understanding of the possible pathobiology of CMV in cancer needs to be ascertained. We have developed genetic, syngeneic, and orthotopic malignant glioma mouse models to study the role of CMV in cancer development and progression. These models largely recapitulate the intratumoral CMV expression seen in human tumors. Additionally, we discovered that CMV infection in Trp53(-/+) mice promotes pleomorphic rhabdomyosarcomas. These mouse models are not only a vehicle for studying the pathobiology of the viral-tumor interaction but also a platform for developing and testing cancer therapeutics. PMID:25853089
Expert Models and Modeling Processes Associated with a Computer-Modeling Tool
ERIC Educational Resources Information Center
Zhang, BaoHui; Liu, Xiufeng; Krajcik, Joseph S.
2006-01-01
Holding the premise that the development of expertise is a continuous process, this study concerns expert models and modeling processes associated with a modeling tool called Model-It. Five advanced Ph.D. students in environmental engineering and public health used Model-It to create and test models of water quality. Using "think aloud" technique…
SPAR Model Structural Efficiencies
John Schroeder; Dan Henry
2013-04-01
The Nuclear Regulatory Commission (NRC) and the Electric Power Research Institute (EPRI) are supporting initiatives aimed at improving the quality of probabilistic risk assessments (PRAs). Included in these initiatives is the resolution of key technical issues that have been judged to have the most significant influence on the baseline core damage frequency of the NRC's Standardized Plant Analysis Risk (SPAR) models and licensee PRA models. Previous work addressed issues associated with support system initiating event analysis and loss of off-site power/station blackout analysis. The key technical issues were: • Development of a standard methodology and implementation of support system initiating events • Treatment of loss of offsite power • Development of a standard approach for emergency core cooling following containment failure. Some of the related issues were not fully resolved, and this project continues the effort to resolve the outstanding issues. The work scope was intended to include substantial collaboration with EPRI; however, EPRI has had other, higher-priority initiatives to support. Therefore this project has addressed SPAR modeling issues. The issues addressed are: • SPAR model transparency • Common cause failure modeling deficiencies and approaches • AC and DC modeling deficiencies and approaches • Instrumentation and control system modeling deficiencies and approaches.
Model molecules mimicking asphaltenes.
Sjöblom, Johan; Simon, Sébastien; Xu, Zhenghe
2015-04-01
Asphaltenes are typically defined as the fraction of petroleum insoluble in n-alkanes (typically heptane, but also hexane or pentane) but soluble in toluene. This fraction causes problems of emulsion formation and deposition/precipitation during crude oil production, processing, and transport. From the definition it follows that asphaltenes are not a homogeneous fraction but are composed of molecules polydisperse in molecular weight, structure, and functionalities. Their complexity makes the understanding of their properties difficult. Proper model molecules with well-defined structures which can resemble the properties of real asphaltenes can help to improve this understanding. Over the last ten years, different research groups have proposed different asphaltene model molecules and studied them to determine how well they can mimic the properties of asphaltenes and to determine the mechanisms behind those properties. This article reviews the different classes of model compounds proposed and presents their properties in comparison with fractionated asphaltenes. After motivating the development of model asphaltenes, the composition and properties of asphaltenes are presented, followed by the approaches and accomplishments of the different schools working on asphaltene model compounds. The bulk and interfacial properties of the perylene-based model asphaltene compounds developed by Sjöblom et al. are the subject of the next part. Finally, the emulsion-stabilization properties of fractionated asphaltenes and model asphaltene compounds are presented and discussed. PMID:25638443
Computationally modeling interpersonal trust
Lee, Jin Joo; Knox, W. Bradley; Wormwood, Jolie B.; Breazeal, Cynthia; DeSteno, David
2013-01-01
We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind's readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to investigate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust. PMID:24363649
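The hidden Markov modeling step described above can be illustrated with a generic scaled forward-algorithm likelihood computation. This is a sketch of the standard technique only: the transition, emission, and initial distributions below are hypothetical placeholders, not the parameters the authors learned from trust-related nonverbal cues.

```python
import numpy as np

def hmm_forward(obs, A, B, pi):
    """Scaled forward algorithm: log-likelihood of a discrete
    observation sequence (e.g. coded nonverbal cues) under an HMM.
    obs: sequence of observation symbol indices
    A:   (n_states, n_states) transition matrix
    B:   (n_states, n_symbols) emission matrix
    pi:  (n_states,) initial state distribution
    """
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()          # per-step scaling avoids numerical underflow
    alpha /= c
    log_lik = np.log(c)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        alpha /= c
        log_lik += np.log(c)
    return log_lik
```

A cue sequence would then be classified by comparing its log-likelihood under a model trained on high-trust interactions against one trained on low-trust interactions, and picking the larger.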
NASA Technical Reports Server (NTRS)
Mccutcheon, Kimble D.; Gordon, Stephen S.; Thompson, Paul A.
1992-01-01
NASA uses the Variable Polarity Plasma Arc Welding (VPPAW) process extensively for fabrication of Space Shuttle External Tanks. This welding process has been in use at NASA since the late 1970's but the physics of the process have never been satisfactorily modeled and understood. In an attempt to advance the level of understanding of VPPAW, Dr. Arthur C. Nunes, Jr., (NASA) has developed a mathematical model of the process. The work described in this report evaluated and used two versions (level-0 and level-1) of Dr. Nunes' model, and a model derived by the University of Alabama at Huntsville (UAH) from Dr. Nunes' level-1 model. Two series of VPPAW experiments were done, using over 400 different combinations of welding parameters. Observations were made of VPPAW process behavior as a function of specific welding parameter changes. Data from these weld experiments was used to evaluate and suggest improvements to Dr. Nunes' model. Experimental data and correlations with the model were used to develop a multi-variable control algorithm for use with a future VPPAW controller. This algorithm is designed to control weld widths (both on the crown and root of the weld) based upon the weld parameters, base metal properties, and real-time observation of the crown width. The algorithm exhibited accuracy comparable to that of the weld width measurements for both aluminum and mild steel welds.
Damping models in elastography
NASA Astrophysics Data System (ADS)
McGarry, Matthew D. J.; Berger, Hans-Uwe; Van Houten, Elijah E. W.
2007-03-01
Current optimization-based elastography reconstruction algorithms encounter difficulties when the motion approaches resonant conditions, where the model does a poor job of approximating the real behavior of the material. Model accuracy can be improved through the addition of damping effects. These effects occur in vivo due to the complex interaction between microstructural elements of the tissue; however, reconstruction models are typically formulated at larger scales where the structure can be treated as a continuum. Attenuation behavior in an elastic continuum can be described as a mixture of inertial and viscoelastic damping effects. In order to develop a continuum damping model appropriate for human tissue, the behavior of each aspect of this proportional, or Rayleigh, damping needs to be characterized. In this paper we investigate the nature of these various damping representations with the goal of best describing the in vivo behavior of actual tissue in order to improve the accuracy and performance of optimization-based elastographic reconstruction. Inertial damping effects are modelled using a complex density, where the imaginary part is equivalent to a damping coefficient, and the effects of viscoelasticity are modelled through the use of complex shear moduli, where the real and imaginary parts represent the storage and loss moduli respectively. The investigation is carried out through a combination of theoretical analysis, numerical experiment, investigation of gelatine phantoms, and comparison with other continua such as porous media models.
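The two damping ingredients discussed above can be sketched numerically. The fragment below evaluates the frequency-dependent damping ratio of Rayleigh (proportional) damping and the loss tangent of a complex shear modulus; the functional forms are the textbook ones, and all numeric values are illustrative, not tissue parameters from the paper.

```python
def rayleigh_damping_ratio(omega, alpha, beta):
    """Effective damping ratio of Rayleigh damping C = alpha*M + beta*K
    at angular frequency omega: the alpha (mass-proportional) term plays
    the role of inertial damping, the beta (stiffness-proportional) term
    the role of viscoelastic damping."""
    return alpha / (2.0 * omega) + beta * omega / 2.0

def complex_shear_modulus(storage, loss):
    """G* = G' + i G''; the loss tangent G''/G' measures how strongly
    the viscoelastic part attenuates shear waves."""
    return complex(storage, loss)
```

Note that the inertial (alpha) contribution dominates at low frequency and the viscoelastic (beta) contribution at high frequency, which is why both need separate characterization.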
NASA Astrophysics Data System (ADS)
Jensen, Kristoffer
2002-11-01
A timbre model is proposed for use in multiple applications. This model, which encompasses all voiced isolated musical instruments, has an intuitive parameter set and fixed size, and separates the sounds in dimensions akin to the timbre dimensions proposed in timbre research. The analysis of the model parameters is fully documented, and a method is proposed, in particular, for the estimation of the difficult decay/release split-point. The main parameters of the model are the spectral envelope, the attack/release durations and relative amplitudes, and the inharmonicity and the shimmer and jitter (which provide both for the slow random variations of the frequencies and amplitudes and for additive noises). Some of the applications include synthesis, where a real-time application is being developed with an intuitive GUI; classification and search of sounds based on their content; and a further understanding of acoustic musical instrument behavior. In order to present the background of the model, this presentation will start with sinusoidal analysis/synthesis and some timbre perception research, then present the timbre model, show its validity for individual music instrument sounds, and finally introduce some expression additions to the model.
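As a rough sketch of the additive (sinusoidal) synthesis side of such a timbre model, the fragment below builds a tone from harmonic partials with slow random frequency modulation (jitter) and amplitude modulation (shimmer). The 1/harmonic amplitude roll-off and the modulation depths are illustrative assumptions standing in for the model's estimated spectral envelope, not values from the paper.

```python
import numpy as np

def synth_partial(f0, harm, dur, sr=44100, jitter=0.01, shimmer=0.05, seed=0):
    """One harmonic partial with slow random frequency (jitter) and
    amplitude (shimmer) modulation, in the spirit of sinusoidal
    analysis/synthesis."""
    rng = np.random.default_rng(seed)
    n = int(dur * sr)
    # slow random walks around 1.0, normalized so the depth stays bounded
    jit = 1.0 + jitter * np.cumsum(rng.standard_normal(n)) / np.sqrt(n)
    shm = 1.0 + shimmer * np.cumsum(rng.standard_normal(n)) / np.sqrt(n)
    # integrate instantaneous frequency to get phase
    phase = 2.0 * np.pi * np.cumsum(f0 * harm * jit) / sr
    return shm * np.sin(phase) / harm   # 1/harm: crude spectral envelope

# sum eight partials into a 0.5 s tone at 220 Hz
tone = sum(synth_partial(220.0, h, 0.5, seed=h) for h in range(1, 9))
```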
NASA Technical Reports Server (NTRS)
Rubinstein, R. (Editor); Rumsey, C. L. (Editor); Salas, M. D. (Editor); Thomas, J. L. (Editor); Bushnell, Dennis M. (Technical Monitor)
2001-01-01
Advances in turbulence modeling are needed in order to calculate high Reynolds number flows near the onset of separation and beyond. To this end, the participants in this workshop made the following recommendations. (1) A national/international database and standards for turbulence modeling assessment should be established. Existing experimental data sets should be reviewed and categorized. Advantage should be taken of other efforts already under-way, such as that of the European Research Community on Flow, Turbulence, and Combustion (ERCOFTAC) consortium. Carefully selected "unit" experiments will be needed, as well as advances in instrumentation, to fill the gaps in existing datasets. A high priority should be given to document existing turbulence model capabilities in a standard form, including numerical implementation issues such as grid quality and resolution. (2) NASA should support long-term research on Algebraic Stress Models and Reynolds Stress Models. The emphasis should be placed on improving the length-scale equation, since it is the least understood and is a key component of two-equation and higher models. Second priority should be given to the development of improved near-wall models. Direct Numerical Simulations (DNS) and Large Eddy Simulations (LES) would provide valuable guidance in developing and validating new Reynolds-averaged Navier-Stokes (RANS) models. Although not the focus of this workshop, DNS, LES, and hybrid methods currently represent viable approaches for analysis on a limited basis. Therefore, although computer limitations require the use of RANS methods for realistic configurations at high Reynolds number in the foreseeable future, a balanced effort in turbulence modeling development, validation, and implementation should include these approaches as well.
NASA Technical Reports Server (NTRS)
Srivastava, Priyaka; Kraus, Jeff; Murawski, Robert; Golden, Bertsel, Jr.
2015-01-01
NASA's Space Communications and Navigation (SCaN) program manages three active networks: the Near Earth Network, the Space Network, and the Deep Space Network. These networks simultaneously support NASA missions and provide communications services to customers worldwide. To efficiently manage these resources and their capabilities, a team of student interns at the NASA Glenn Research Center is developing a distributed system to model the SCaN networks. Once complete, the system shall provide a platform that enables users to perform capacity modeling of current and prospective missions with finer-grained control of information between several simulation and modeling tools. This will enable the SCaN program to access a holistic view of its networks and simulate the effects of modifications in order to provide NASA with decisional information. The development of this capacity modeling system is managed by NASA's Strategic Center for Education, Networking, Integration, and Communication (SCENIC). Three primary third-party software tools offer their unique abilities in different stages of the simulation process. MagicDraw provides UML/SysML modeling, AGI's Systems Tool Kit simulates the physical transmission parameters and de-conflicts scheduled communication, and Riverbed Modeler (formerly OPNET) simulates communication protocols and packet-based networking. SCENIC developers are building custom software extensions to integrate these components in an end-to-end space communications modeling platform. A central control module acts as the hub for report-based messaging between client wrappers. Backend databases provide information related to mission parameters and ground station configurations, while the end user defines scenario-specific attributes for the model. The eight SCENIC interns are working under the direction of their mentors to complete an initial version of this capacity modeling system during the summer of 2015.
The intern team is composed of four students in Computer Science, two in Computer Engineering, one in Electrical Engineering, and one studying Space Systems Engineering.
Modeling the equatorial electrojet
Stening, R.J.
1985-02-01
The equivalent circuit method has been modified to give greater accuracy and greater detail near the equator in order to model the equatorial electrojet. Electron collision frequencies used in the conductivity model are consistent with laboratory measurements. Variations with longitude are allowed, and the electrojet in the model is driven by suitable emfs generated by a global thermotidal wind system. The height of maximum current density in the Indian electrojet provided by the model, at 104 km, is consistent with some observations. The model gives the same height in Peru when an electron density profile typical of that region is used. The form of the electron density profile is shown to have a considerable effect on the current profile. The calculated variation with latitude of the height-integrated current density gives good agreement. The two-layer equivalent circuit model is more successful than the single-layer model in modeling the latitude profile of the jet, but the observed depression in ΔH near 4° dip latitude requires much larger changes in currents with latitude than either model can provide. The theory that currents are limited by the two-stream instability does not agree with measured altitude profiles of the jet. Before latitude variations of ΔH and ΔZ on the ground can satisfactorily be explained, greater understanding of the contribution of conductivity anomalies to internal components will be required, but with suitable assumptions a good fit with observed results is obtained. The effects produced by a simple local F-region wind system are also investigated. A discrepancy with the observed relationship between integrated current densities and ΔH still awaits explanation.
NASA Astrophysics Data System (ADS)
Hill, Mary; Ye, Ming; Foglia, Laura; Lu, Dan
2015-04-01
Modeling frameworks include many ideas about, for example, how to parameterize models, conduct sensitivity analysis (including identifying observations and parameters important to calibration and prediction), quantify uncertainty, and so on. Of concern in this talk is meaningful testing of how ideas proposed for any modeling framework perform. The design of meaningful tests depends on the aspect of the framework being tested and the timing of system dynamics. Consider a situation in which the aspect being tested is prediction accuracy and the quantities of concern are readily measured and change quickly, such as for precipitation, floods, or hurricanes. In such cases meaningful tests involve comparing simulated and measured values and tests can be conducted daily, hourly or even more frequently. Though often challenged by measurement difficulties, this remains the simplest circumstance for conducting meaningful tests of modeling frameworks. If measurements are not readily available and(or) the system responds to changes over decades or centuries, as generally occurs for climate change, saltwater intrusion of groundwater systems, and dewatering of aquifers, prediction accuracy needs to be evaluated in other ways. Often these require high performance computing. For example, complex and simple models can be compared or cross-validation experiments can be conducted. Both can require massive computational resources for any but the simplest of problems. Testing other aspects of a modeling framework can require different types of tests. For example, testing methods of identifying observations or parameters important to model calibration or predictions might entail evaluation of many circumstances for methods that are themselves commonly computationally demanding. Again, high performance computing is needed even when the goal is to include computationally frugal methods in the modeling framework. 
In this talk we discuss the importance of such testing, stress the need to design and implement tests when any modeling framework is developed, and provide examples of tests from several recent publications.
Wang, Panqu; Cottrell, Garrison
2015-09-01
"The Model" (a.k.a. "TM", Dailey and Cottrell, 1999) is a biologically-plausible neurocomputational model designed for face and object recognition. Developed over the last 25 years, TM has been successfully used to model many cognitive phenomena, such as facial expression perception (Dailey et al., 2002), recruitment of the FFA for other categories of expertise (Tong et al., 2008), and the experience moderation effect on the correlation between face and object recognition (Wang et al., 2014). However, as TM is a "shallow" model, it cannot develop rich feature representations needed for challenging computer vision tasks. Meanwhile, the recent deep convolutional neural network techniques produce state-of-the-art results for many computer vision benchmarks, but they have not been used in cognitive modeling. The deep architecture allows the network to develop rich high level features, which generalize really well to other novel visual tasks. However, the deep learning models use a fully supervised training approach, which seems implausible for early visual system. Here, "The Deep Model" (TDM) tries to bridge TM and deep learning models together to create a "gradually" supervised deep architecture which can be both biologically-plausible and perform well on computer vision tasks. We show that, by using the sparse PCA and RICA algorithms on natural image datasets, we can obtain center surround color-opponent receptive field that represent LGN cells, and Gabor-like filters that represent V1 simple cells. This suggests that the unsupervised learning approach is what is used in the development of the early visual system. We employ this insight to develop a gradually supervised deep neural network and test it on some standard computer vision and cognitive modeling tasks. Meeting abstract presented at VSS 2015. PMID:26326779
Animal Model of Dermatophytosis
Shimamura, Tsuyoshi; Kubota, Nobuo; Shibuya, Kazutoshi
2012-01-01
Dermatophytosis is superficial fungal infection caused by dermatophytes that invade the keratinized tissue of humans and animals. Lesions from dermatophytosis exhibit an inflammatory reaction induced to eliminate the invading fungi by using the host's normal immune function. Many scientists have attempted to establish an experimental animal model to elucidate the pathogenesis of human dermatophytosis and evaluate drug efficacy. However, current animal models have several issues. In the present paper, we surveyed reports about the methodology of the dermatophytosis animal model for tinea corporis, tinea pedis, and tinea unguium and discussed future prospects. PMID:22619489
NASA Astrophysics Data System (ADS)
Hsia, H.-M.; Chou, Y.-L.; Wang, S.-Y.; Hsieh, S.-J.
Papers are presented on mixed and displacement finite-element models of a shear deformation theory for laminated anisotropic plates, substructure coupling with damping, dynamic responses of contact problems with interface friction, and numerical modeling of pressure-meter tests in rocks with inelastic discontinuities. Also considered are a numerical simulation of storm surges, a segmented plume trajectory model for real-time industrial hazard assessment, calculation of topographic waves in lakes, and stability analysis of water wave propagation. Other topics include the analysis of linear optimal control systems incorporating observers, nonlinear interfaces for acceleration-commanded control of spacecraft and manipulators, and a unified approach to structure and control system design iterations.
Deconstructed Higgsless Models
Casalbuoni, Roberto
2006-01-12
We consider the possibility of constructing realistic Higgsless models within the context of deconstructed or moose models. We show that the constraints coming from the electroweak experimental data are very severe and that it is very difficult to reconcile them with the requirement of improving the unitarity bound of the Higgsless Standard Model. On the other hand, with some fine tuning, a solution is found by delocalizing the standard fermions along the lattice line, that is, allowing the fermions to couple to the moose gauge fields.
Deconstructed Higgsless Models
NASA Astrophysics Data System (ADS)
Casalbuoni, Roberto
2006-01-01
We consider the possibility of constructing realistic Higgsless models within the context of deconstructed or moose models. We show that the constraints coming from the electroweak experimental data are very severe and that it is very difficult to reconcile them with the requirement of improving the unitarity bound of the Higgsless Standard Model. On the other hand, with some fine tuning, a solution is found by delocalizing the standard fermions along the lattice line, that is, allowing the fermions to couple to the moose gauge fields.
NASA Technical Reports Server (NTRS)
Morrow, C. T. (principal investigator)
1981-01-01
Measurements of wind speed, net irradiation, and of air, soil, and dew point temperatures in an orchard at the Rock Springs Agricultural Research Center, as well as topographical and climatological data and a description of the major apple growing regions of Pennsylvania, were supplied to the University of Florida for use in running the P-model freeze prediction program. Results show that the P-model appears to have considerable applicability to conditions in Pennsylvania. Even though modifications may have to be made for use in the fruit growing regions, there are advantages for fruit growers with the model in its present form.
A. Morozov
2012-04-18
Partition functions of eigenvalue matrix models possess a number of very different descriptions: as matrix integrals, as solutions to linear and non-linear equations, as tau-functions of integrable hierarchies and as special-geometry prepotentials, as the result of the action of W-operators and of various recursions on elementary input data, and as the gluing of certain elementary building blocks. All this explains the central role of such matrix models in modern mathematical physics: they provide the basic "special functions" to express the answers and relations between them, and they serve as a dream model of what one should try to achieve in any other field.
Hierarchical model of matching
NASA Technical Reports Server (NTRS)
Pedrycz, Witold; Roventa, Eugene
1992-01-01
The issue of matching two fuzzy sets becomes an essential design aspect of many algorithms including fuzzy controllers, pattern classifiers, knowledge-based systems, etc. This paper introduces a new model of matching. Its principal features involve the following: (1) matching carried out with respect to the grades of membership of fuzzy sets as well as some functionals defined on them (such as energy and entropy); (2) concepts of hierarchies in the matching model, leading to a straightforward distinction between 'local' and 'global' levels of matching; and (3) a distributed character of the model, realized as a logic-based neural network.
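A small sketch of what membership-level ("local") matching and functional-level ("global") matching might look like. The similarity indices and the De Luca-Termini style entropy below are common choices in the fuzzy-set literature, used here for illustration; they are not necessarily the exact functionals of the paper's model.

```python
import numpy as np

def energy(mu):
    """Energy functional of a fuzzy set: sum of squared memberships."""
    return float(np.sum(mu ** 2))

def entropy(mu, eps=1e-12):
    """Fuzziness entropy (De Luca-Termini style)."""
    m = np.clip(mu, eps, 1.0 - eps)
    return float(-np.sum(m * np.log(m) + (1 - m) * np.log(1 - m)))

def local_match(mu_a, mu_b):
    """'Local' matching on the grades of membership:
    intersection-over-union similarity of the two sets."""
    return float(np.minimum(mu_a, mu_b).sum() / np.maximum(mu_a, mu_b).sum())

def global_match(mu_a, mu_b):
    """'Global' matching via a functional (here, energy) defined on the sets."""
    ea, eb = energy(mu_a), energy(mu_b)
    return 1.0 - abs(ea - eb) / max(ea, eb)
```

A hierarchical scheme would aggregate the local scores first and pass them, together with the global scores, to an upper matching level.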
Temperature Dependent Pspice Model
Tolbert, Leon M; Cui, Yutian; Chinthavali, Madhu Sudhan
2012-01-01
This paper provides a behavioral model in Pspice for a silicon carbide (SiC) power MOSFET rated at 1200 V / 20 A for a wide temperature range. The Pspice model is built using device parameters extracted through experiment. The static and dynamic behavior of the SiC power MOSFET is simulated and compared to the measured data to show the accuracy of the Pspice model. The switching losses are obtained from experiment under multiple operation conditions. The temperature dependent behavior has been simulated and analyzed. Then the parasitics in the circuit have been studied and the effects on the switching behavior are simulated and discussed.
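A behavioral temperature dependence of the kind extracted in such work can be sketched as a power-law scaling of on-resistance with absolute junction temperature. The 25 °C resistance and the exponent below are illustrative placeholders, not the parameters extracted for the paper's 1200 V / 20 A device.

```python
def rds_on(temp_c, r25=0.080, k=1.3):
    """Behavioral on-resistance (ohms) of a SiC power MOSFET vs.
    junction temperature (deg C), scaled from its 25 C value by a
    power law in absolute temperature. r25 and k are hypothetical."""
    return r25 * ((temp_c + 273.15) / 298.15) ** k

def conduction_loss(i_rms, temp_c, **kw):
    """I^2 * R conduction loss (watts) at the given junction temperature."""
    return i_rms ** 2 * rds_on(temp_c, **kw)
```

Such a fit would be one ingredient of the full Pspice behavioral model, alongside the dynamic (capacitance and switching) characteristics measured in the paper.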
Modeling Compressed Turbulence
Israel, Daniel M.
2012-07-13
From ICE to ICF, the effect of mean compression or expansion is important for predicting the state of the turbulence. When developing combustion models, we would like to know the mix state of the reacting species. This involves density and concentration fluctuations. To date, research has focused on the effect of compression on the turbulent kinetic energy. The current work provides constraints to help development and calibration for models of species mixing effects in compressed turbulence. The Cambon et al. re-scaling has been extended to buoyancy-driven turbulence, including the fluctuating density, concentration, and temperature equations. The new scalings give us helpful constraints for developing and validating RANS turbulence models.
Emerging developmental model systems.
Kiefer, Julie C
2006-10-01
This primer briefly describes four emerging animal model systems that promise to provide insights into specific aspects of developmental biology. Highlighted here are two relatively well-characterized model systems, Gasterosteus aculeatus (three-spine stickleback fish) and Schmidtea mediterranea (planarian), as well as two organisms on which research is in its infancy, Carollia perspicillata (short-tailed fruit bat), and the basal metazoan, Trichoplax adhaerens. Scientists who helped develop these species into model systems discuss why they chose to research these animals. PMID:16881053
Modeling EERE Deployment Programs
Cort, Katherine A.; Hostick, Donna J.; Belzer, David B.; Livingston, Olga V.
2007-11-08
The purpose of this report is to compile information and conclusions gathered as part of three separate tasks undertaken as part of the overall project, “Modeling EERE Deployment Programs,” sponsored by the Planning, Analysis, and Evaluation office within the Department of Energy’s Office of Energy Efficiency and Renewable Energy (EERE). The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address improvements to modeling in the near term, and note gaps in knowledge where future research is needed.
Stochastic ontogenetic growth model
NASA Astrophysics Data System (ADS)
West, B. J.; West, D.
2012-02-01
An ontogenetic growth model (OGM) for a thermodynamically closed system is generalized to satisfy both the first and second laws of thermodynamics. The hypothesized stochastic ontogenetic growth model (SOGM) is shown to entail the interspecies allometry relation by explicitly averaging the basal metabolic rate and the total body mass over the steady-state probability density for the total body mass (TBM). This is the first derivation of the interspecies metabolic allometric relation from a dynamical model; the asymptotic steady-state distribution of the TBM is fit to data and shown to be an inverse power law.
Aviation Safety Simulation Model
NASA Technical Reports Server (NTRS)
Houser, Scott; Yackovetsky, Robert (Technical Monitor)
2001-01-01
The Aviation Safety Simulation Model is a software tool that enables users to configure a terrain, a flight path, and an aircraft and simulate the aircraft's flight along the path. The simulation monitors the aircraft's proximity to terrain obstructions, and reports when the aircraft violates accepted minimum distances from an obstruction. This model design facilitates future enhancements to address other flight safety issues, particularly air and runway traffic scenarios. This report shows the user how to build a simulation scenario and run it. It also explains the model's output.
NASA Astrophysics Data System (ADS)
Kleindienst, Jan; Cuřín, Jan; Brdiczka, Oliver; Dimakis, Nikolaos
CHIL services require observation of human activity. The observation of humans and their activities is provided by perceptual components. For most human activities, a potentially infinite number of entities could be detected, and an infinite number of possible relations exist for any set of entities. The appropriate entities and relations must be determined for a task or service to be provided. This is the role of the situation model. Situation models allow focusing attention and computing resources to determine the information required for operation of CHIL services. In this chapter, we introduce concepts and abstractions of situation modeling schema used in the CHIL architecture.
Absorption in dielectric models
Churchill, R J
2015-01-01
We develop a classical microscopic model of a dielectric. The model features nonlinear interaction terms between polarizable dipoles and lattice vibrations. The lattice vibrations are found to act as a pseudo-reservoir, giving broadband absorption of electromagnetic radiation without the addition of damping terms in the dynamics. The effective permittivity is calculated using a perturbative iteration method and is found to have the form associated with real dielectrics. Spatial dispersion is naturally included in the model and we also calculate the wavevector dependence of the permittivity.
NASA Astrophysics Data System (ADS)
Rokni Lamooki, Gholam Reza; Shirazi, Amir H.; Mani, Ali R.
2015-05-01
The thyroid's main chemical reactions are employed to develop a mathematical model. The presented model is based on differential equations whose dynamics reflects many aspects of the thyroid's behavior. Our main focus here is the well-known, but not well-understood, phenomenon called the Wolff-Chaikoff effect. It is shown that the inhibitory effect of iodide intake on the rate of one single enzyme causes an effect similar to Wolff-Chaikoff. Beyond this, the presented model is capable of revealing other complex phenomena of thyroid hormone homeostasis.
Nonparametric Transfer Function Models.
Liu, Jun M; Chen, Rong; Yao, Qiwei
2010-07-01
In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between 'input' and 'output' time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example. PMID:20628584
Dynamical atmospheric cluster model
NASA Astrophysics Data System (ADS)
Kulmala, Markku
2010-11-01
Dynamical behavior of atmospheric molecular clusters was investigated. A model was developed to describe the evolution, concentration and sources of atmospheric clusters. The model includes sources of monomers, monomer–i-mer collisions and evaporation of i-mers. The developed model is agile and flexible. The evaporation rate is shown to be the most important parameter describing the growth of clusters; the existence of dimers, trimers and bigger i-mers depends mainly on it. In practice, the evaporation rate should be smaller than 1 s^-1 for atmospheric nucleation to occur.
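The cluster dynamics sketched in this abstract (a monomer source, monomer–i-mer collisions, and i-mer evaporation) can be illustrated with a simple Euler integration. All coefficients, the size cutoff, and the closure (clusters growing past the largest size leave as stable particles) are assumptions for this sketch, not the paper's actual parameterization:

```python
def cluster_evolution(q=1.0, beta=1e-3, evap=0.5, n_max=10, dt=0.01, steps=20000):
    """Illustrative i-mer population dynamics: monomer source q, collision
    coefficient beta, evaporation rate evap (all values assumed)."""
    n = [0.0] * (n_max + 1)            # n[i] = concentration of i-mers; n[0] unused
    for _ in range(steps):
        d = [0.0] * (n_max + 1)
        d[1] += q                                  # monomer source
        for i in range(1, n_max + 1):              # monomer + i-mer -> (i+1)-mer
            flux = beta * n[1] * n[i]
            d[1] -= flux
            d[i] -= flux
            if i < n_max:
                d[i + 1] += flux
            # clusters growing past n_max are assumed to leave as stable particles
        for i in range(2, n_max + 1):              # i-mer -> (i-1)-mer + monomer
            f = evap * n[i]
            d[i] -= f
            d[i - 1] += f
            d[1] += f                              # evaporated monomer returned
        for i in range(1, n_max + 1):
            n[i] = max(0.0, n[i] + dt * d[i])
    return n[1:]                                   # concentrations of 1-mers..n_max-mers
```

Running it with a larger evaporation rate suppresses the dimer-to-monomer ratio, consistent with the abstract's claim that evaporation controls whether clusters beyond the monomer exist.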
Climate and atmospheric modeling studies
NASA Technical Reports Server (NTRS)
1992-01-01
The climate and atmosphere modeling research programs have concentrated on the development of appropriate atmospheric and upper ocean models, and preliminary applications of these models. Principal models are a one-dimensional radiative-convective model, a three-dimensional global model, and an upper ocean model. Principal applications were the study of the impact of CO2, aerosols, and the solar 'constant' on climate.
Multiple model simulation: modelling cell division and differentiation in the prostate
Stepney, Susan
In this paper we present the modelling and simulation of cell division and differentiation in the prostate, using a multiple-model approach in which each model layer can be designed and validated.
Model Comparison of Bayesian Semiparametric and Parametric Structural Equation Models
ERIC Educational Resources Information Center
Song, Xin-Yuan; Xia, Ye-Mao; Pan, Jun-Hao; Lee, Sik-Yum
2011-01-01
Structural equation models have wide applications. One of the most important issues in analyzing structural equation models is model comparison. This article proposes a Bayesian model comparison statistic, namely the "L[subscript nu]"-measure for both semiparametric and parametric structural equation models. For illustration purposes, we consider…
A High Precision Prediction Model Using Hybrid Grey Dynamic Model
ERIC Educational Resources Information Center
Li, Guo-Dong; Yamaguchi, Daisuke; Nagai, Masatake; Masuda, Shiro
2008-01-01
In this paper, we propose a new prediction analysis model which combines the first-order one-variable Grey differential equation model (abbreviated as the GM(1,1) model) from grey system theory and the time-series Autoregressive Integrated Moving Average (ARIMA) model from statistics. We abbreviate the combined GM(1,1) ARIMA model as ARGM(1,1)…
Bayesian Feature and Model Selection for Gaussian Mixture Models
Likas, Aristidis
With Constantinos Constantinopoulos. We present a method for mixture model training that simultaneously treats the feature selection and the model selection problem. The method is based on the integration of a mixture model formulation that takes into account the saliency
High School Students' Modeling Knowledge
By David Fortus and co-authors. Modeling is a core scientific practice. This study probed the modeling knowledge of high school students who had not had any explicit exposure
The diffusive evaporation-deposition model and the voter model
Johansen, Adam
By Benjamin Graham. The evaporation-deposition model [1], also known as the Takayasu model with desorption [4], is a model for monomers undergoing diffusion, coagulation, growth and evaporation. It is defined
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
NASA Astrophysics Data System (ADS)
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
Miller, Daniel Evan
2014-12-10
leverages various aspects of the geometric and topological information contained in the different input skeletal models to form a single result that may limit the error introduced by particular inputs by means of a confidence function. Using...
Morphological modeling of neurons
Mulchandani, Kishore
1995-01-01
A formal representation of neuron morphology, adequate for the geometric modeling of manually-traced neurons, is presented. The concept of a stochastic L-system is then introduced and the critical distribution functions governing the stochastic...
Asteroid thermophysical modeling
Delbo, Marco; Emery, Joshua P; Rozitis, Ben; Capria, Maria Teresa
2015-01-01
The field of asteroid thermophysical modeling has experienced extraordinary growth in the last ten years, as new thermal infrared data became available for hundreds of thousands of asteroids. The infrared emission of asteroids depends on the body's size, shape, albedo, thermal inertia, roughness and rotational properties. These parameters can therefore be derived by thermophysical modeling of infrared data. Thermophysical modeling led to asteroid size estimates that were confirmed at the few-percent level by later spacecraft visits. We discuss how instrumentation advances now allow mid-infrared interferometric observations as well as high-accuracy spectro-photometry, posing their own set of thermal-modeling challenges. We present major breakthroughs achieved in studies of thermal inertia, a sensitive indicator of the nature of asteroid soils, allowing us, for instance, to determine the grain size of asteroidal regoliths. Thermal inertia also governs non-gravitational effects on asteroid orbits, requir...
Taylor, Paul
The tilt intonation model facilitates automatic analysis and synthesis of intonation. The analysis algorithm detects intonational events in F0 contours and parameterises them in terms of continuously varying parameters. ...
Huang, Zhiheng
Due to its high performance and comprehensibility, fuzzy modelling is becoming more and more popular in dealing with nonlinear, uncertain and complex systems for tasks such as signal processing, medical diagnosis and ...
Bagger, J.A.
1984-09-01
We begin to construct the most general supersymmetric Lagrangians in one, two and four dimensions. We find that the matter couplings have a natural interpretation in the language of the nonlinear sigma model.
ERIC Educational Resources Information Center
Palmer, Dennis L.; Olsen, Richard W.
1977-01-01
Described is how to build a solar furnace model. A detailed list of materials and methods are included along with diagrams. This particular activity is part of an audiotutorial unit concerned with the energy crisis and energy alternatives. (MA)
Modeling Newspaper Advertising
ERIC Educational Resources Information Center
Harper, Joseph; And Others
1978-01-01
Presents a mathematical model for simulating a newspaper financial system. Includes the effects of advertising and circulation for predicting advertising linage as a function of population, income, and advertising rate. (RL)
Modelling Immunological Memory
Garret, Simon; Walker, Joanne; Wilson, William; Aickelin, Uwe
2010-01-01
Accurate immunological models offer the possibility of performing high-throughput experiments in silico that can predict, or at least suggest, in vivo phenomena. In this chapter, we compare various models of immunological memory. We first validate an experimental immunological simulator, developed by the authors, by simulating several theories of immunological memory with known results. We then use the same system to evaluate the predicted effects of a theory of immunological memory. The resulting model has not been explored before in artificial immune systems research, and we compare the simulated in silico output with in vivo measurements. Although the theory appears valid, we suggest that there is a common set of reasons why immunological memory models are a useful support tool rather than conclusive in themselves.
Modeling Ionospheric Electrodynamics (Invited)
NASA Astrophysics Data System (ADS)
Huba, J. D.
2009-12-01
We present modeling results of ionospheric electrodynamics using the 3D NRL ionosphere model SAMI3. Recently, SAMI3 has been upgraded to solve the potential equation that determines the electrostatic potential from the ionospheric conductances (Pedersen and Hall) and drivers: neutral wind, gravity, and parallel current systems. We present results showing the impact of different neutral wind models (e.g., HWM93, HWM07, TIMEGCM) on the dynamics of the low- to mid-latitude ionosphere, as well as the Region 1 and 2 current systems. We point out issues and concerns with obtaining an accurate specification of the global electric field within the context of existing models. (With J. Krall, G. Joyce, S. Slinker, and G. Crowley.) Research supported by NASA and ONR.
Structural Equation Model Trees
Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman
2015-01-01
In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree structures that separate a data set recursively into subsets with significantly different parameter estimates in a SEM. SEM Trees provide means for finding covariates and covariate interactions that predict differences in structural parameters in observed as well as in latent space and facilitate theory-guided exploration of empirical data. We describe the methodology, discuss theoretical and practical implications, and demonstrate applications to a factor model and a linear growth curve model. PMID:22984789
Modeling Viral Capsid Assembly
2014-01-01
I present a review of the theoretical and computational methodologies that have been used to model the assembly of viral capsids. I discuss the capabilities and limitations of approaches ranging from equilibrium continuum theories to molecular dynamics simulations, and I give an overview of some of the important conclusions about virus assembly that have resulted from these modeling efforts. Topics include the assembly of empty viral shells, assembly around single-stranded nucleic acids to form viral particles, and assembly around synthetic polymers or charged nanoparticles for nanotechnology or biomedical applications. I present some examples in which modeling efforts have promoted experimental breakthroughs, as well as directions in which the connection between modeling and experiment can be strengthened. PMID:25663722
NASA Technical Reports Server (NTRS)
Jaap, John; Davis, Elizabeth; Richardson, Lea
2004-01-01
Planning and scheduling systems organize tasks into a timeline or schedule. Tasks are logically grouped into containers called models. Models are a collection of related tasks, along with their dependencies and requirements, that when met will produce the desired result. One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed; the information sought is at the cutting edge of scientific endeavor; and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a maximally expressive modeling schema.
Lagrangians for biological models
M. C. Nucci; K. M. Tamizhmani
2011-08-10
We show that a method presented in [S.L. Trubatch and A. Franco, Canonical Procedures for Population Dynamics, J. Theor. Biol. 48 (1974), 299-324] and later in [G.H. Paine, The development of Lagrangians for biological models, Bull. Math. Biol. 44 (1982) 749-760] for finding Lagrangians of classic models in biology is actually based on finding the Jacobi Last Multiplier of such models. Using known properties of the Jacobi Last Multiplier, we show how to obtain linear Lagrangians of those first-order systems and nonlinear Lagrangians of the corresponding single second-order equations that can be derived from them, even in cases where those authors failed, such as the host-parasite model.
HOMER® Energy Modeling Software
Energy Science and Technology Software Center (ESTSC)
2000-12-31
The HOMER® energy modeling software is a tool for designing and analyzing hybrid power systems, which contain a mix of conventional generators, cogeneration, wind turbines, solar photovoltaics, hydropower, batteries, fuel cells, biomass and other inputs.
Warren, Jeff; Iversen, Colleen; Brooks, Jonathan; Ricciuto, Daniel
2014-06-26
Using dogwood trees, Oak Ridge National Laboratory researchers are gaining a better understanding of the role photosynthesis and respiration play in the atmospheric carbon dioxide cycle. Their findings will aid computer modelers in improving the accuracy of climate simulations.
NASA Technical Reports Server (NTRS)
Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John
2011-01-01
A method was developed of obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitude of relevance to NASA launcher designs. The base flow data was used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data was first used for validation, followed by more complex reacting base flow validation.
NASA Technical Reports Server (NTRS)
Olsen, N.; Holme, R.; Hulot, G.; Sabaka, T.; Neubert, T.; Toffner-Clausen, L.; Primdahl, F.; Jorgensen, J.; Leger, J.-M.; Barraclough, D.; Smith, David E. (Technical Monitor)
2000-01-01
Magnetic measurements taken by the Orsted satellite during geomagnetic quiet conditions around January 1, 2000 have been used to derive a spherical harmonic model of the Earth's magnetic field for epoch 2000.0. The maximum degree and order of the model is 19 for internal, and 2 for external, source fields; however, coefficients above degree 14 may not be robust. Such detailed models exist for only one previous epoch, 1980. Achieved rms misfit is 2 nT for the scalar intensity and 4 nT for the vector components perpendicular to the magnetic field. This model is of higher detail than the IGRF 2000, which for scientific purposes related to the Orsted mission it supersedes.
Dietary Exposure Potential Model
Existing food consumption and contaminant residue databases, typically products of nutrition and regulatory monitoring, contain useful information to characterize dietary intake of environmental chemicals. A PC-based model with resident database system, termed the Die...
Colorado Model Rocketry Workshop.
ERIC Educational Resources Information Center
Galindez, Peter
1978-01-01
Describes a summer workshop course in rocketry offered to educators and sponsored by industry. The participants built various model rockets and equipment and worked on challenging practical problems and activities. (GA)
NASA Astrophysics Data System (ADS)
Stauffer, D.
1990-09-01
The Swendsen-Wang cluster flipping algorithm gives a better way to simulate equilibrium critical phenomena. This review summarizes recent applications, in particular of Wang and Kertész, to better understand the Ising model.
Rice, Ken
Model Fitting. Ken Rice and Thomas Lumley, UW Biostatistics, Seattle, June 2008. Lecture slides covering regression commands and the use of lm() in genetics, with example data: cholesterol levels plotted by genotype (single SNP).
Computationally modeling interpersonal trust
Lee, Jin Joo
We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. ...
Molecular Modeling and Bioinformatics
Dr. Byungkook Lee, Ph.D. Head, Molecular Modeling and Bioinformatics Section Laboratory of Molecular Biology Building 37, Room 5120 37 Convent Drive, MSC 4262 Bethesda, MD 20891 We use theoretical and computational techniques to help solve biological and
NASA Astrophysics Data System (ADS)
Vajna, Szabolcs; Tóth, Bálint; Kertész, János
2013-10-01
Many human-related activities show power-law decaying interevent time distributions with exponents usually varying between 1 and 2. We study a simple task-queuing model, which produces bursty time series due to the non-trivial dynamics of the task list. The model is characterized by a priority distribution as an input parameter, which describes the choice procedure from the list. We give exact results on the asymptotic behaviour of the model and we show that the interevent time distribution is power-law decaying for any kind of input distribution that remains normalizable in the infinite-list limit, with exponents tunable between 1 and 2. The model satisfies a scaling law between the exponents of the interevent time distribution (β) and the autocorrelation function (α): α + β = 2. This law is general for renewal processes with power-law decaying interevent time distributions. We conclude that a slowly decaying autocorrelation function indicates long-range dependence only if the scaling law is violated.
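A task-list model of this general kind is easy to simulate. The sketch below is illustrative rather than the paper's exact model: a fixed-length list holds tasks with uniformly random priorities, the highest-priority task is executed at every step and replaced by a fresh task, and the waiting time between a task's arrival and its execution is recorded. The list length, priority distribution, and selection protocol are all assumptions of the sketch:

```python
import random

def simulate_priority_queue(n_steps=5000, list_len=10, seed=0):
    """Execute-highest-priority dynamics on a fixed-length task list;
    returns the waiting times (arrival to execution) of executed tasks."""
    rng = random.Random(seed)
    tasks = [(rng.random(), 0) for _ in range(list_len)]   # (priority, birth time)
    waits = []
    for t in range(1, n_steps + 1):
        i = max(range(list_len), key=lambda j: tasks[j][0])  # choice protocol: max priority
        waits.append(t - tasks[i][1])
        tasks[i] = (rng.random(), t)                         # replace with a new task
    return waits
```

Low-priority tasks can linger on the list for a long time, which is the mechanism behind the heavy-tailed (bursty) waiting-time statistics.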
Otero, Marcelo J; Dorso, Claudio O; Solari, Hernán G; Natiello, Mario A
2010-01-01
We introduce a dengue model (SEIR) where the human individuals are treated on an individual basis (IBM), while the mosquito population, produced by an independent model, is treated by compartments (SEI). We study the spread of epidemics by the sole action of the mosquito. Exponential, deterministic and experimental distributions for the (human) exposed period are considered in two weather scenarios, one corresponding to temperate climate and the other to tropical climate. Virus circulation, final epidemic size and duration of outbreaks are considered, showing that the results present little sensitivity to the statistics followed by the exposed period provided the medians of the distributions coincide. Only the time between an introduced (imported) case and the appearance of the first symptomatic secondary case is sensitive to this distribution. We finally show that the IBM model introduced is precisely a realization of a compartmental model, and that at least in this case, the choice between compartmen...
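The compartmental limit that the abstract says the IBM realizes can be sketched as a purely compartmental SEIR (host) / SEI (vector) system. Here both sides are compartmental (unlike the paper's individual-based humans) and every rate and population size is an illustrative assumption:

```python
def dengue_outbreak(days=200, dt=0.1, nh=10_000, nm=50_000, i0=1,
                    bite_h=0.30, bite_m=0.12,
                    sigma_h=1 / 5.5, gamma_h=1 / 6.0,
                    sigma_m=1 / 10.0, mu_m=1 / 15.0):
    """Euler integration of coupled SEIR(host)/SEI(vector) equations;
    returns the final number of recovered (ever-infected) humans."""
    sh, eh, ih, rh = float(nh - i0), 0.0, float(i0), 0.0
    sm, em, im = float(nm), 0.0, 0.0
    t = 0.0
    while t < days:
        foi_h = bite_h * im / nh            # infection pressure on humans
        foi_m = bite_m * ih / nh            # infection pressure on mosquitoes
        new_eh, new_em = foi_h * sh, foi_m * sm
        d_sh = -new_eh
        d_eh = new_eh - sigma_h * eh
        d_ih = sigma_h * eh - gamma_h * ih
        d_rh = gamma_h * ih
        d_sm = mu_m * nm - new_em - mu_m * sm   # births balance deaths
        d_em = new_em - (sigma_m + mu_m) * em
        d_im = sigma_m * em - mu_m * im
        sh += dt * d_sh; eh += dt * d_eh; ih += dt * d_ih; rh += dt * d_rh
        sm += dt * d_sm; em += dt * d_em; im += dt * d_im
        t += dt
    return rh
```

With these (assumed) rates the epidemic takes off from a single imported case and runs its course within the simulated period, giving a well-defined final epidemic size.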
Direct integration transmittance model
NASA Technical Reports Server (NTRS)
Kunde, V. G.; Maguire, W. C.
1973-01-01
A transmittance model was developed for the 200-2000/cm region for interpretation of high-spectral-resolution measurements of laboratory absorption and of planetary thermal emission. The high spectral resolution requires transmittances to be computed monochromatically by summing the contributions of individual molecular absorption lines. A magnetic tape atlas of H2O, O3, and CO2 molecular line parameters serves as input to the transmittance model, with simple empirical representations used for continuum regions wherever suitable laboratory data exist. The theoretical formulation of the transmittance model and the computational procedures used to evaluate the transmittances are discussed, and application of the model to several homogeneous-path laboratory absorption examples is demonstrated.
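The monochromatic line-by-line summation described above can be sketched with an assumed Lorentz line shape. The line position, strength, half-width, and absorber amount below are illustrative values, not parameters from the line atlas:

```python
import math

def transmittance(nu_grid, lines, absorber_amount, gamma=0.1):
    """Monochromatic transmittance T(nu) = exp(-u * sum_j S_j * f_L(nu - nu_j)),
    summing Lorentz-broadened contributions of individual lines.
    lines: list of (line center nu0 [cm^-1], line strength S)."""
    out = []
    for nu in nu_grid:
        k = sum(S * (gamma / math.pi) / ((nu - nu0) ** 2 + gamma ** 2)
                for nu0, S in lines)                 # absorption coefficient
        out.append(math.exp(-absorber_amount * k))   # Beer-Lambert law
    return out
```

Because the absorption coefficient peaks at each line center, the transmittance minimum falls at the line position, which is the behavior a line-by-line code must reproduce at high spectral resolution.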
Models of Successful Cooperation
Dwyer, Arienne M.
2010-09-01
This chapter uses case studies to develop a model of productive collaborative research. In contrast to the privileged position academician-researchers may accord themselves, true collaborations recognize full agency in all key participants...
Fluidized bed combustor modeling
NASA Technical Reports Server (NTRS)
Horio, M.; Rengarajan, P.; Krishnan, R.; Wen, C. Y.
1977-01-01
A general mathematical model for the prediction of performance of a fluidized bed coal combustor (FBC) is developed. The basic elements of the model consist of: (1) hydrodynamics of gas and solids in the combustor; (2) description of gas and solids contacting pattern; (3) kinetics of combustion; and (4) absorption of SO2 by limestone in the bed. The model is capable of calculating the combustion efficiency, axial bed temperature profile, carbon hold-up in the bed, oxygen and SO2 concentrations in the bubble and emulsion phases, sulfur retention efficiency and particulate carry over by elutriation. The effects of bed geometry, excess air, location of heat transfer coils in the bed, calcium to sulfur ratio in the feeds, etc. are examined. The calculated results are compared with experimental data. Agreement between the calculated results and the observed data are satisfactory in most cases. Recommendations to enhance the accuracy of prediction of the model are suggested.
An overview of three main types of simulation approach (explanatory, abstraction, and estimation) is presented, along with a discussion of their capabilities, limitations, and the steps required for their validation. A process model being developed through the Forest Response Prog...
NASA Technical Reports Server (NTRS)
Glaese, John R.; Tobbe, Patrick A.
1986-01-01
The Space Station Mechanism Test Bed consists of a hydraulically driven, computer controlled six degree of freedom (DOF) motion system with which docking, berthing, and other mechanisms can be evaluated. Measured contact forces and moments are provided to the simulation host computer to enable representation of orbital contact dynamics. This report describes the development of a generalized math model which represents the relative motion between two rigid orbiting vehicles. The model allows motion in six DOF for each body, with no vehicle size limitation. The rotational and translational equations of motion are derived. The method used to transform the forces and moments from the sensor location to the vehicles' centers of mass is also explained. Two math models of docking mechanisms, a simple translational spring and the Remote Manipulator System end effector, are presented along with simulation results. The translational spring model is used in an attempt to verify the simulation with compensated hardware in the loop results.
Warren, Jeff; Iversen, Colleen; Brooks, Jonathan; Ricciuto, Daniel
2012-10-31
Using dogwood trees, Oak Ridge National Laboratory researchers are gaining a better understanding of the role photosynthesis and respiration play in the atmospheric carbon dioxide cycle. Their findings will aid computer modelers in improving the accuracy of climate simulations.
Factorial Hidden Markov Models
Ghahramani, Zoubin
1996-02-09
We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation--Maximization (EM) procedure for maximum ...
Energy Science and Technology Software Center (ESTSC)
2004-10-18
The Community Atmosphere Model (CAM) is an atmospheric general circulation model that solves equations for atmospheric dynamics and physics. CAM is an outgrowth of the Community Climate Model at the National Center for Atmospheric Research (NCAR) and was developed as a joint collaborative effort between NCAR and several DOE laboratories, including LLNL. CAM contains several alternative approaches for advancing the atmospheric dynamics. One of these approaches uses a finite-volume method originally developed by personnel at NASA/GSFC. We have developed a scalable version of the finite-volume solver for massively parallel computing systems. FV-CAM is meant to be used in conjunction with the Community Atmosphere Model. It is not stand-alone.
Improved steamflood analytical model
Chandra, Suandy
2006-10-30
two field cases, a 45x23x8 model was used that represented 1/8 of a 10-acre 5-spot pattern unit, using typical rock and reservoir fluid properties. In the SPE project case, three models were used: 23x12x12 (2.5 ac), 31x16x12 (5 ac) and 45x23x8 (10 ac...
Jingfei Zhang; Xin Zhang; Hongya Liu
2007-06-19
We propose in this Letter a holographic model of tachyon dark energy. A connection between the tachyon scalar field and the holographic dark energy is established, and accordingly, the potential of the holographic tachyon field is constructed. We show that the holographic evolution of the universe with $c \geqslant 1$ can be described completely by the resulting tachyon model in a certain way.
Theory Modeling and Simulation
Shlachter, Jack
2012-08-23
Los Alamos has a long history in theory, modeling and simulation. We focus on multidisciplinary teams that tackle complex problems. Theory, modeling and simulation are tools to solve problems just like an NMR spectrometer, a gas chromatograph or an electron microscope. Problems should be used to define the theoretical tools needed and not the other way around. Best results occur when theory and experiments are working together in a team.
Modelling heart rate kinetics.
Zakynthinaki, Maria S
2015-01-01
The objective of the present study was to formulate a simple and at the same time effective mathematical model of heart rate kinetics in response to movement (exercise). Based on an existing model, a system of two coupled differential equations which give the rate of change of heart rate and the rate of change of exercise intensity is used. The modifications introduced to the existing model are justified and discussed in detail, while models of blood lactate accumulation with respect to time and exercise intensity are also presented. The main modification is that the proposed model now has only one parameter, which reflects the overall cardiovascular condition of the individual. The time elapsed after the beginning of the exercise, the intensity of the exercise, as well as blood lactate are also taken into account. Application of the model provides information regarding the individual's cardiovascular condition and is able to detect possible changes in it across the data recording periods. To demonstrate examples of successful numerical fit of the model, constant-intensity experimental heart rate data sets of two individuals were selected and numerical optimization was implemented. In addition, numerical simulations provided predictions for various exercise intensities and various cardiovascular condition levels. The proposed model can serve as a powerful tool for heart rate analysis, not only in exercise physiology (for efficiently designing training sessions for healthy subjects) but also in the areas of cardiovascular health and rehabilitation (including application in population groups for which direct heart rate recordings during intense exercise are not possible or not allowed, such as the elderly or pregnant women). PMID:25876164
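A system of two coupled equations of the kind described, heart rate relaxing toward a demand set by exercise intensity while intensity ramps toward a target workload, can be simulated with simple forward-Euler integration. This is an illustrative toy system with assumed functional forms and coefficients, not the paper's fitted equations:

```python
import numpy as np

def simulate(hr0=70.0, v_target=0.6, t_end=600.0, dt=0.1, k=0.02):
    """Toy coupled system: hr tracks an intensity-dependent demand; the
    intensity v ramps toward its target. All coefficients are illustrative."""
    hr, v = hr0, 0.0
    ts = np.arange(0.0, t_end, dt)
    out = np.empty_like(ts)
    for i, _ in enumerate(ts):
        demand = 70.0 + 110.0 * v      # assumed linear demand curve (bpm)
        dhr = k * (demand - hr)        # rate of change of heart rate
        dv = 0.05 * (v_target - v)     # rate of change of exercise intensity
        hr += dhr * dt
        v += dv * dt
        out[i] = hr
    return ts, out

ts, hr = simulate()
```

Under these assumed coefficients the heart rate rises from its resting value toward the steady-state demand 70 + 110 × 0.6 = 136 bpm.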
Rotational modeling of Hyperion
NASA Astrophysics Data System (ADS)
Harbison, Rebecca A.; Thomas, Peter C.; Nicholson, Philip C.
2011-05-01
Saturn's moon, Hyperion, is subject to strongly-varying solid body torques from its primary and lacks a stable spin state resonant with its orbital frequency. In fact, its rotation is chaotic, with a Lyapunov timescale on the order of 100 days. In 2005, Cassini made three close passes of Hyperion at intervals of 40 and 67 days, when the moon was imaged extensively and the spin state could be measured. Curiously, the spin axis was observed at the same location within the body, within errors, during all three fly-bys: ~30° from the long axis of the moon and rotating between 4.2 and 4.5 times faster than the synchronous rate. Our dynamical modeling predicts that the rotation axis should be precessing within the body, with a period of ~16 days. If the spin axis retains its orientation during all three fly-bys, then this puts a strong constraint on the in-body precessional period, and thus the moments of inertia. However, the locations of the principal axes in our model are derived from the shape model of Hyperion, assuming a uniform composition. This may not be a valid assumption, as Hyperion has significant void space, as shown by its density of 544 ± 50 kg m⁻³ (Thomas et al. in Nature 448:50, 2007). This paper will examine both a rotation model with principal axes fixed by the shape model, and one with offsets from the shape model. We favor the latter interpretation, which produces a best fit with the principal axes offset ~30° from the shape model, placing the A axis at the spin axis in 2005, and returns a lower reduced χ² than the best-fit fixed-axes model.
Ion Thruster Performance Model.
NASA Astrophysics Data System (ADS)
Brophy, John Raymond
A model of ion thruster performance is developed for high flux density cusped magnetic field thruster designs. This model is formulated in terms of the average energy required to produce an ion in the discharge chamber plasma and the fraction of these ions that are extracted to form the beam. The direct loss of high energy (primary) electrons from the plasma to the anode is shown to have a major effect on thruster performance. The model provides simple algebraic equations enabling one to calculate the beam ion energy cost, the average discharge chamber plasma ion energy cost, the primary electron density, the primary-to-Maxwellian electron density ratio and the Maxwellian electron temperature. Experiments indicate that the model correctly predicts the variation in plasma ion energy cost for changes in propellant gas (Ar, Kr and Xe), grid transparency to neutral atoms, beam extraction area, discharge voltage, and discharge chamber wall temperature. The model and experiments indicate that thruster performance may be described in terms of only four thruster configuration dependent parameters and two operating parameters. The model also suggests that improved performance should be exhibited by thruster designs which extract a large fraction of the ions produced in the discharge chamber, which have good primary electron and neutral atom containment and which operate at high propellant flow rates. In addition, it suggests that hollow cathode efficiency becomes increasingly important to the discharge chamber performance as the discharge voltage is reduced. Finally, the utility of the model in mission analysis calculations is demonstrated. The model makes it easy to determine which changes in thruster design or operating parameters have the greatest effect on the payload fraction and/or mission duration.
NASA Technical Reports Server (NTRS)
Johnson, Barry
1992-01-01
The topics covered include the following: (1) CO2 laser kinetics modeling; (2) gas lifetimes in pulsed CO2 lasers; (3) frequency chirp and laser pulse spectral analysis; (4) LAWS A' Design Study; and (5) discharge circuit components for LAWS. The appendices include LAWS Memos, computer modeling of pulsed CO2 lasers for lidar applications, discharge circuit considerations for pulsed CO2 lidars, and presentation made at the Code RC Review.
Pruess, K.
1994-04-01
Our research is concerned with mathematical modeling techniques for engineering design and optimization of water injection in vapor-dominated systems. The emphasis in the project has been on the understanding of physical processes and mechanisms during injection, applications to field problems, and on transfer of numerical simulation capabilities to the geothermal community. This overview summarizes recent work on modeling injection interference in the Southeast Geysers, and on improving the description of two-phase flow processes in heterogeneous media.
2012-01-01
Background Transplantation is often the only way to treat a number of diseases leading to organ failure. To overcome rejection towards the transplanted organ (graft), immunosuppression therapies are used, which have considerable side-effects and expose patients to opportunistic infections. The development of a model to complement the physician’s experience in specifying therapeutic regimens is therefore desirable. The present work proposes an Ordinary Differential Equations model accounting for immune cell proliferation in response to the sudden entry of graft antigens, through different activation mechanisms. The model considers the effect of a single immunosuppressive medication (e.g. cyclosporine), subject to first-order linear kinetics and acting by modifying, in a saturable concentration-dependent fashion, the proliferation coefficient. The latter has been determined experimentally. All other model parameter values have been set so as to reproduce reported state variable time-courses, and to maintain consistency with one another and with the experimentally derived proliferation coefficient. Results The proposed model substantially simplifies the chain of events potentially leading to organ rejection. It is however able to simulate quantitatively the time course of graft-related antigen and competent immunoreactive cell populations, showing the long-term alternative outcomes of rejection, tolerance or tolerance at a reduced functional tissue mass. In particular, the model shows that it may be difficult to attain tolerance at full tissue mass with acceptably low doses of a single immunosuppressant, in accord with clinical experience. Conclusions The introduced model is mathematically consistent with known physiology and can reproduce variations in immune status and allograft survival after transplantation. 
The model can be adapted to represent different therapeutic schemes and may offer useful indications for the optimization of therapy protocols in the transplanted patient. PMID:22607638
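The structure described, graft antigen driving immune-cell proliferation while a drug with first-order kinetics saturably reduces the proliferation coefficient, can be sketched as a small ODE system. This is a toy illustration with assumed forms and coefficients, not the paper's fitted model:

```python
# Toy sketch of the described structure (assumed coefficients throughout):
# G = graft antigen, I = immunoreactive cells, C = drug concentration.
def step(G, I, C, dose_rate, dt):
    k_prolif = 0.5 / (1.0 + C / 2.0)   # saturable drug effect on proliferation
    dG = -0.1 * G * I                  # antigen cleared by immune cells
    dI = k_prolif * G * I - 0.05 * I   # antigen-driven activation minus decay
    dC = dose_rate - 0.2 * C           # first-order linear drug elimination
    return G + dG * dt, I + dI * dt, C + dC * dt

G, I, C = 1.0, 0.1, 0.0
for _ in range(2000):                  # 200 time units of forward Euler
    G, I, C = step(G, I, C, dose_rate=0.4, dt=0.1)
```

With a constant dose rate the drug settles at dose_rate / elimination_rate, here 0.4 / 0.2 = 2.0, which halves the proliferation coefficient and shifts the balance between rejection and tolerance.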
Lipkin, H.J.
1986-01-01
The success of simple constituent quark models in single-hadron physics and their failure in multiquark physics is discussed, emphasizing the relation between meson and baryon spectra, hidden color and the color matrix, breakup decay modes, coupled channels, and hadron-hadron interactions via flipping and tunneling of flux tubes. Model-independent predictions for possible multiquark bound states are considered and the most promising candidates suggested. A quark approach to baryon-baryon interactions is discussed.
NASA Technical Reports Server (NTRS)
Callis, S. L.; Sakamoto, C.
1984-01-01
Five models based on multiple regression were developed to estimate wheat yields for the five wheat growing provinces of Argentina. Meteorological data sets were obtained for each province by averaging data for stations within each province. Predictor variables for the models were derived from monthly total precipitation, average monthly mean temperature, and average monthly maximum temperature. Buenos Aires was the only province for which a trend variable was included, because of an increasing trend in yield due to technology from 1950 to 1963.
Modeling using optimization routines
NASA Technical Reports Server (NTRS)
Thomas, Theodore
1995-01-01
Modeling using mathematical optimization routines is a design tool used in magnetic suspension system development. MATLAB (software) is used to calculate minimum cost and other desired constraints. The parameters to be measured are programmed into mathematical equations. MATLAB will calculate answers for each set of inputs; the inputs cover the boundary limits of the design. A Magnetic Suspension System using Electromagnets Mounted in a Planar Array is a design system that makes use of optimization modeling.
NASA Technical Reports Server (NTRS)
Callis, S. L.; Sakamoto, C.
1984-01-01
A model based on multiple regression was developed to estimate corn yields for the country of Argentina. A meteorological data set was obtained for the country by averaging data for stations within the corn-growing area. Predictor variables for the model were derived from monthly total precipitation, average monthly mean temperature, and average monthly maximum temperature. A trend variable was included for the years 1965 to 1980 since an increasing trend in yields due to technology was observed between these years.
NASA Technical Reports Server (NTRS)
Callis, S. L.; Sakamoto, C.
1984-01-01
A model based on multiple regression was developed to estimate soybean yields for the country of Argentina. A meteorological data set was obtained for the country by averaging data for stations within the soybean growing area. Predictor variables for the model were derived from monthly total precipitation and monthly average temperature. A trend variable was included for the years 1969 to 1978 since an increasing trend in yields due to technology was observed between these years.
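The regression approach used in these yield models, monthly weather predictors plus a linear technology trend, can be reproduced with ordinary least squares. The data below are synthetic stand-ins (the reports' actual province-averaged series are not reproduced here) and the coefficients are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: yield driven by precipitation, mean temperature,
# and a linear technology trend over the study years.
years = np.arange(1969, 1979)
precip = rng.uniform(300.0, 600.0, years.size)   # monthly total precip (mm)
temp = rng.uniform(14.0, 22.0, years.size)       # monthly mean temp (deg C)
trend = (years - years[0]).astype(float)         # technology trend variable
y = (1.2 + 0.004 * precip + 0.05 * temp + 0.03 * trend
     + rng.normal(0.0, 0.05, years.size))        # yield (t/ha, illustrative)

# Design matrix with an intercept column; fit by ordinary least squares
X = np.column_stack([np.ones(years.size), precip, temp, trend])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The fitted trend coefficient recovers the technology effect that the reports describe including for years with an observed upward drift in yields.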
Stuart Raby
2008-08-27
I discuss an evolution of SUSY GUT model building, starting with the construction of 4d GUTs, to orbifold GUTs and finally to orbifold GUTs within the heterotic string. This evolution is an attempt to obtain realistic string models, perhaps relevant for the LHC. This review is in memory of the sudden loss of Julius Wess, a leader in the field, who will be sorely missed.
General composite Higgs models
NASA Astrophysics Data System (ADS)
Marzocca, David; Serone, Marco; Shu, Jing
2012-08-01
We construct a general class of pseudo-Goldstone composite Higgs models, within the minimal SO(5)/SO(4) coset structure, that are not necessarily of moose-type. We characterize the main properties these models should have in order to give rise to a Higgs mass around 125 GeV. We assume the existence of relatively light and weakly coupled spin 1 and 1/2 resonances. In the absence of a symmetry principle, we introduce the Minimal Higgs Potential (MHP) hypothesis: the Higgs potential is assumed to be one-loop dominated by the SM fields and the above resonances, with a contribution that is made calculable by imposing suitable generalizations of the first and second Weinberg sum rules. We show that a 125 GeV Higgs requires light, often sub-TeV, fermion resonances. Their presence can also be important for the models to successfully pass the electroweak precision tests. Interestingly enough, the latter can also be passed by models with a heavy Higgs around 320 GeV. The composite Higgs models of the moose-type considered in the literature can be seen as particular limits of our class of models.
Heterogeneous conductorlike solvation model
NASA Astrophysics Data System (ADS)
Si, Dejun; Li, Hui
2009-07-01
A heterogeneous conductorlike solvation model (conductorlike screening model/conductorlike polarizable continuum model) that uses different local effective dielectrics for different portions of the solute cavity surface is implemented for quantum chemical Hartree-Fock and Kohn-Sham methods. A variational treatment is used to form the heterogeneous solvation operator, so a simple analytic expression of the energy gradients, which are vital for geometry optimization and molecular dynamics simulation, is derived and implemented. Using the new Fixed Points with Variable Areas surface tessellation scheme, continuous and smooth potential energy surfaces as well as analytic gradients are obtained for this heterogeneous model. Application of the heterogeneous solvation model to a realistic quantum model consisting of 101 atoms for the type-1 Cu center in rusticyanin shows that the desolvation due to protein burial can likely raise the reduction potential by ~200 mV and, including the heterogeneity in geometry optimization, can likely affect the results by ~2 kcal/mol or ~70 mV.
Ion thruster performance model
NASA Technical Reports Server (NTRS)
Brophy, J. R.
1984-01-01
A model of ion thruster performance is developed for high flux density, cusped magnetic field thruster designs. This model is formulated in terms of the average energy required to produce an ion in the discharge chamber plasma and the fraction of these ions that are extracted to form the beam. The direct loss of high energy (primary) electrons from the plasma to the anode is shown to have a major effect on thruster performance. The model provides simple algebraic equations enabling one to calculate the beam ion energy cost, the average discharge chamber plasma ion energy cost, the primary electron density, the primary-to-Maxwellian electron density ratio and the Maxwellian electron temperature. Experiments indicate that the model correctly predicts the variation in plasma ion energy cost for changes in propellant gas (Ar, Kr and Xe), grid transparency to neutral atoms, beam extraction area, discharge voltage, and discharge chamber wall temperature. The model and experiments indicate that thruster performance may be described in terms of only four thruster configuration dependent parameters and two operating parameters. The model also suggests that improved performance should be exhibited by thruster designs which extract a large fraction of the ions produced in the discharge chamber, which have good primary electron and neutral atom containment and which operate at high propellant flow rates.
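The model's central bookkeeping, as described in the abstract, ties the beam ion energy cost to the plasma ion production cost and the extracted-ion fraction. A minimal sketch of that relation (a simplified reading of the abstract, not Brophy's full equation set):

```python
def beam_ion_energy_cost(plasma_ion_cost_eV, extracted_fraction):
    """Energy spent per beam ion: the discharge-chamber plasma ion production
    cost divided by the fraction of produced ions extracted into the beam.
    (Simplified reading of the abstract, not the model's full equation set.)"""
    if not 0.0 < extracted_fraction <= 1.0:
        raise ValueError("extracted fraction must lie in (0, 1]")
    return plasma_ion_cost_eV / extracted_fraction

# Example: 150 eV to make each plasma ion, half of the ions reach the beam
cost = beam_ion_energy_cost(150.0, 0.5)
```

This is why the model favors designs that extract a large fraction of the ions produced: halving the lost-ion fraction directly lowers the energy price of every beam ion.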
Modeling of metallurgical emulsions
NASA Astrophysics Data System (ADS)
Lin, Zuohua; Guthrie, R. I. L.
1994-12-01
Emulsification behavior caused by gas bubbles rising through a slag/metal interface has been studied in both a thin-slice model and a three-dimensional model using low-temperature oil/aqueous and oil/mercury analogues. A generalized model characterizing the transitional volume of droplets entrained in the upper phase in the emulsification process was developed. The transient volume of “metal” entrained, V_d(t), following the start of bubbling followed the relation V_d(t) = V_∞(1 - e^(-t/τ)). This model is also of general significance to other metallurgical emulsification processes, such as those induced by iron ore reduction and top blowing, regardless of the mechanisms of droplet generation. Based on this model, the birth rate and mean residence time of droplets dispersed by rising bubbles can be quantified. Dimensional analysis was used to express the volume of lower liquid carried up into the emulsion per bubble, thereby allowing better estimates of the droplet birth rate in a practical emulsification process induced by bottom blowing. Emulsification behaviors in industrial in-bath smelting processes were interpreted with the present modeling results.
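The first-order saturation relation for the entrained volume can be checked numerically: given the plateau volume, the time constant falls out of a log-linear fit. A minimal sketch with illustrative parameter values and noise-free synthetic data:

```python
import numpy as np

# Saturation relation: V_d(t) = V_inf * (1 - exp(-t / tau)).
# V_inf and tau below are illustrative, not values from the study.
V_inf, tau = 5.0, 12.0
t = np.linspace(1.0, 60.0, 30)
V = V_inf * (1.0 - np.exp(-t / tau))

# log(1 - V/V_inf) = -t/tau, so the slope of a straight-line fit gives -1/tau
slope = np.polyfit(t, np.log(1.0 - V / V_inf), 1)[0]
tau_est = -1.0 / slope
```

The recovered time constant is the mean residence time scale that, together with the plateau volume, fixes the droplet birth rate in the model.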
Seismic wave propagation modeling
Jones, E.M.; Olsen, K.B.
1998-12-31
This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). A hybrid, finite-difference technique was developed for modeling nonlinear soil amplification from three-dimensional, finite-fault radiation patterns for earthquakes in arbitrary earth models. The method was applied to the 17 January 1994 Northridge earthquake. Particle velocities were computed on a plane at 5-km depth, immediately above the causative fault. Time-series of the strike-perpendicular, lateral velocities were then propagated vertically in a soil column typical of the San Fernando Valley. Suitable material models were adapted from a suite used to model ground motions at the US Nevada Test Site. The effects of nonlinearity reduced relative spectral amplitudes by about 40% at frequencies above 1.5 Hz but only by 10% at lower frequencies. Runs made with source-depth amplitudes increased by a factor of two showed relative amplitudes reduced by a total of 70% above 1.5 Hz and 20% at lower frequencies. Runs made with elastic-plastic material models showed behavior similar to runs made with Masing-Rule models.
Johnson, Jason K; Chertkov, Michael; Netrapalli, Praneeth
2010-11-12
Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus our attention on the class of planar Ising models, for which inference is tractable using techniques of statistical physics [Kac and Ward; Kasteleyn]. Based on these techniques and recent methods for planarity testing and planar embedding [Chrobak and Payne], we propose a simple greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among variables, we select a planar graph and optimal planar Ising model defined on this graph to best approximate that set of correlations. We present the results of numerical experiments evaluating the performance of our algorithm.
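The greedy selection step can be sketched with a planarity test: rank candidate edges by correlation magnitude and keep each edge only if the graph stays planar. This is a simplified illustration of the selection stage only; fitting the optimal Ising couplings on the chosen graph is a separate step not shown here:

```python
import itertools
import numpy as np
import networkx as nx

def greedy_planar_graph(corr):
    """Greedily build a planar graph, trying edges in order of decreasing
    absolute pairwise correlation and rejecting any edge that breaks
    planarity. (Illustrative sketch, not the paper's full algorithm.)"""
    n = corr.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    edges = sorted(itertools.combinations(range(n), 2),
                   key=lambda e: -abs(corr[e[0], e[1]]))
    for u, v in edges:
        G.add_edge(u, v)
        if not nx.check_planarity(G)[0]:
            G.remove_edge(u, v)      # this edge would make the graph non-planar
    return G

# Correlations from synthetic data for six variables
corr = np.corrcoef(np.random.default_rng(1).normal(size=(6, 200)))
G = greedy_planar_graph(corr)
```

Restricting to a planar graph is what keeps exact inference tractable via the Kac-Ward/Kasteleyn machinery mentioned above.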
Bobyn, Justin D; Little, David G; Gray, Randolph; Schindeler, Aaron
2015-04-01
Multiple techniques designed to induce scoliotic deformity have been applied across many animal species. We have undertaken a review of the literature regarding experimental models of scoliosis in animals to discuss their utility in comprehending disease aetiology and treatment. Models of scoliosis in animals can be broadly divided into quadrupedal and bipedal experiments. Quadrupedal models, in the absence of axial gravitation force, depend upon development of a mechanical asymmetry along the spine to initiate a scoliotic deformity. Bipedal models more accurately mimic human posture and consequently are subject to similar forces due to gravity, which have been long appreciated to be a contributing factor to the development of scoliosis. Many effective models of scoliosis in smaller animals have not been successfully translated to primates and humans. Though these models may not clarify the aetiology of human scoliosis, by providing a reliable and reproducible deformity in the spine they are a useful means with which to test interventions designed to correct and prevent deformity. PMID:25492698
Stenner, A. Jackson; Fisher, William P.; Stone, Mark H.; Burdick, Donald S.
2013-01-01
Rasch's unidimensional models for measurement show how to connect object measures (e.g., reader abilities), measurement mechanisms (e.g., machine-generated cloze reading items), and observational outcomes (e.g., counts correct on reading instruments). Substantive theory shows what interventions or manipulations to the measurement mechanism can be traded off against a change to the object measure to hold the observed outcome constant. A Rasch model integrated with a substantive theory dictates the form and substance of permissible interventions. Rasch analysis, absent construct theory and an associated specification equation, is a black box in which understanding may be more illusory than not. Finally, the quantitative hypothesis can be tested by comparing theory-based trade-off relations with observed trade-off relations. Only quantitative variables (as measured) support such trade-offs. Note that to test the quantitative hypothesis requires more than manipulation of the algebraic equivalencies in the Rasch model or descriptively fitting data to the model. A causal Rasch model involves experimental intervention/manipulation on either reader ability or text complexity or a conjoint intervention on both simultaneously to yield a successful prediction of the resultant observed outcome (count correct). We conjecture that when this type of manipulation is introduced for individual reader text encounters and model predictions are consistent with observations, the quantitative hypothesis is sustained. PMID:23986726
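The trade-off property described above is easy to exhibit with the dichotomous Rasch model, where the success probability depends only on the difference between person ability and item difficulty:

```python
import math

# Dichotomous Rasch model: P(correct) depends only on theta - b,
# the gap between person ability and item difficulty.
def p_correct(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# The trade-off: shifting ability and difficulty by the same amount
# leaves the predicted outcome unchanged.
p1 = p_correct(1.0, 0.5)
p2 = p_correct(1.8, 1.3)   # both raised by 0.8
```

This invariance is exactly what a causal Rasch experiment manipulates: an intervention on text complexity can be traded off against a change in reader ability while the predicted count correct stays constant.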
Radiology interpretation process modeling.
Noumeir, Rita
2006-04-01
Information and communication technology in healthcare promises optimized patient care while ensuring efficiency and cost-effectiveness. However, the promised results are not yet achieved; the healthcare process requires analysis and radical redesign to achieve improvements in care quality and productivity. Healthcare process reengineering is thus necessary and involves modeling its workflow. Even though the healthcare process is very large and not very well modeled yet, its sub-processes can be modeled individually, providing fundamental pieces of the whole model. In this paper, we are interested in modeling the radiology interpretation process that results in generating a diagnostic radiology report. This radiology report is an important clinical element of the patient healthcare record and assists in healthcare decisions. We present the radiology interpretation process by identifying its boundaries and by positioning it on the large healthcare process map. Moreover, we discuss an information data model and identify roles, tasks and several information flows. Furthermore, we describe standard frameworks to enable radiology interpretation workflow implementations between heterogeneous systems. PMID:16165403
Conceptual stress study model.
Harrah, C B
1978-01-01
The conceptual stress study model (SSM) is basically a schematic representation of the key elements associated with the study of mechanical forces on the human operator. It has been a useful tool for systematically describing the scope and effects of vibration stress study efforts. The model structure is organized in a hierarchy of four levels established to reflect four different levels of description of human response complexity. In order of increasing complexity, these four classes of responses are biodynamic, physiological, psychophysical, and performance. The model also depicts the internal energy and/or information pathways that are likely to occur in the formation of a response to specified vibration inputs. The present utility of this model is to provide a conceptual scheme to help guide vibration study efforts. It is not a general mathematically predictive model. It could prove useful in pulling together the many-segmented modeling results achieved over the last several years to form a centralized interpretive data base. PMID:623599
Generic CSP Performance Model for NREL's System Advisor Model: Preprint
Wagner, M. J.; Zhu, G.
2011-08-01
The suite of concentrating solar power (CSP) modeling tools in NREL's System Advisor Model (SAM) includes technology performance models for parabolic troughs, power towers, and dish-Stirling systems. Each model provides the user with unique capabilities that are catered to typical design considerations seen in each technology. Since the scope of the various models is generally limited to common plant configurations, new CSP technologies, component geometries, and subsystem combinations can be difficult to model directly in the existing SAM technology models. To overcome the limitations imposed by representative CSP technology models, NREL has developed a 'Generic Solar System' (GSS) performance model for use in SAM. This paper discusses the formulation and performance considerations included in this model and verifies the model by comparing its results with more detailed models.
Quantum Semiconductor Modeling
Jüngel, Ansgar
Quantum Semiconductor Modeling, Ansgar Jüngel, Vienna University of Technology, Austria (www.jungel.at.vu). Contents: 1. Introduction; 2. Semiconductor modeling; 3. Microscopic quantum models (density matrices, Schrödinger models, Wigner ...)
Lykken, Joseph D.
2010-05-01
'BSM physics' is a phrase used in several ways. It can refer to physical phenomena established experimentally but not accommodated by the Standard Model, in particular dark matter and neutrino oscillations (technically also anything that has to do with gravity, since gravity is not part of the Standard Model). 'Beyond the Standard Model' can also refer to possible deeper explanations of phenomena that are accommodated by the Standard Model but only with ad hoc parameterizations, such as Yukawa couplings and the strong CP angle. More generally, BSM can be taken to refer to any possible extension of the Standard Model, whether or not the extension solves any particular set of puzzles left unresolved in the SM. In this general sense one sees reference to the BSM 'theory space' of all possible SM extensions, this being a parameter space of coupling constants for new interactions, new charges or other quantum numbers, and parameters describing possible new degrees of freedom or new symmetries. Despite decades of model-building it seems unlikely that we have mapped out most of, or even the most interesting parts of, this theory space. Indeed we do not even know what is the dimensionality of this parameter space, or what fraction of it is already ruled out by experiment. Since Nature is only implementing at most one point in this BSM theory space (at least in our neighborhood of space and time), it might seem an impossible task to map back from a finite number of experimental discoveries and measurements to a unique BSM explanation. Fortunately for theorists the inevitable limitations of experiments themselves, in terms of resolutions, rates, and energy scales, means that in practice there are only a finite number of BSM model 'equivalence classes' competing at any given time to explain any given set of results. 
BSM phenomenology is a two-way street: not only do experimental results test or constrain BSM models, they also suggest - to those who get close enough to listen - new directions for BSM model building. Contrary to popular shorthand jargon, supersymmetry (SUSY) is not a BSM model: it is a symmetry principle characterizing a BSM framework with an infinite number of models. Indeed we do not even know the full dimensionality of the SUSY parameter space, since this presumably includes as-yet-unexplored SUSY-breaking mechanisms and combinations of SUSY with other BSM principles. The SUSY framework plays an important role in BSM physics partly because it includes examples of models that are 'complete' in the same sense as the Standard Model, i.e. in principle the model predicts consequences for any observable, from cosmology to b physics to precision electroweak data to LHC collisions. Complete models, in addition to being more explanatory and making connections between diverse phenomena, are also much more experimentally constrained than strawman scenarios that focus more narrowly. One sometimes hears: 'Anything that is discovered at the LHC will be called supersymmetry.' There is truth behind this joke in the sense that the SUSY framework incorporates a vast number of possible signatures accessible to TeV colliders. This is not to say that the SUSY framework is not testable, but we are warned that one should pay attention to other promising frameworks, and should be prepared to make experimental distinctions between them. Since there is no formal classification of BSM frameworks I have invented my own. At the highest level there are six parent frameworks: (1) Terascale supersymmetry; (2) PNGB Higgs; (3) New strong dynamics; (4) Warped extra dimensions; (5) Flat extra dimensions; and (6) Hidden valleys. 
Here is the briefest possible survey of each framework, with the basic idea, the generic new phenomena, and the energy regime over which the framework purports to make comprehensive predictions.
Multiscale Thermohydrologic Model
T. Buscheck
2004-10-12
The purpose of the multiscale thermohydrologic model (MSTHM) is to predict the possible range of thermal-hydrologic conditions, resulting from uncertainty and variability, in the repository emplacement drifts, including the invert, and in the adjoining host rock for the repository at Yucca Mountain. Thus, the goal is to predict the range of possible thermal-hydrologic conditions across the repository; this is quite different from predicting a single expected thermal-hydrologic response. The MSTHM calculates the following thermal-hydrologic parameters: temperature, relative humidity, liquid-phase saturation, evaporation rate, air-mass fraction, gas-phase pressure, capillary pressure, and liquid- and gas-phase fluxes (Table 1-1). These thermal-hydrologic parameters are required to support "Total System Performance Assessment (TSPA) Model/Analysis for the License Application" (BSC 2004 [DIRS 168504]). The thermal-hydrologic parameters are determined as a function of position along each of the emplacement drifts and as a function of waste package type. These parameters are determined at various reference locations within the emplacement drifts, including the waste package and drip-shield surfaces and in the invert. The parameters are also determined at various defined locations in the adjoining host rock. The MSTHM uses data obtained from the data tracking numbers (DTNs) listed in Table 4.1-1.
The majority of those DTNs were generated from the following analyses and model reports: (1) "UZ Flow Model and Submodels" (BSC 2004 [DIRS 169861]); (2) "Development of Numerical Grids for UZ Flow and Transport Modeling" (BSC 2004); (3) "Calibrated Properties Model" (BSC 2004 [DIRS 169857]); (4) "Thermal Conductivity of the Potential Repository Horizon" (BSC 2004 [DIRS 169854]); (5) "Thermal Conductivity of the Non-Repository Lithostratigraphic Layers" (BSC 2004 [DIRS 170033]); (6) "Ventilation Model and Analysis Report" (BSC 2004 [DIRS 169862]); (7) "Heat Capacity Analysis Report" (BSC 2004 [DIRS 170003]).
NASA Astrophysics Data System (ADS)
Zakšek, Klemen; Podobnikar, Tomaž; Oštir, Krištof
2005-03-01
The Sun is the main energy source of life on the Earth. Thus, solar radiation energy data and models are important for many areas of research and applications. Many parameters influence the amount of solar energy at a particular point on the Earth's surface; therefore, many solar radiation models have been produced in the last few years. Solar radiation energy depends mostly on the incidence angle, which is defined by astronomical and surface parameters. Our solar radiation model defines the incidence angle by computing the normal to the surface tangent plane and the direction of the Sun. If a part of the surface is in shadow, it receives less energy than sunlit areas; that is why shadow determination is an important part of the model. The sky is usually not completely clear, so meteorological parameters had to be integrated into the model. The meteorological model distinguishes between direct and diffuse solar radiation. The model was tested and implemented for the whole of Slovenia and was also compared with previous studies. Case-study surface data were calculated from a DEM with 25 m resolution. The astronomical data, required for simulating the virtual motion of the Sun around the Earth, were derived from the astronomical almanac. Meteorological data were acquired from observed mean values at 24 meteorological stations between 1961 and 1990. All calculations were made for hours and decades and finally, the annual quasiglobal radiation energy, which is the energy received by an inclined plane from the Sun in one year, was calculated as the sum of the energies of all the decades.
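The incidence-angle step described above (angle between the surface normal and the Sun direction) can be sketched as follows; the function name and the east/north/up convention are our own assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def incidence_cosine(slope_deg, aspect_deg, sun_elev_deg, sun_azim_deg):
    """Cosine of the angle between the surface normal and the Sun direction.

    Slope and aspect describe the terrain cell (aspect measured clockwise
    from north); Sun elevation and azimuth come from an almanac.
    """
    slope = np.radians(slope_deg)
    aspect = np.radians(aspect_deg)
    elev = np.radians(sun_elev_deg)
    azim = np.radians(sun_azim_deg)

    # Unit normal of the tilted surface (x = east, y = north, z = up).
    normal = np.array([np.sin(slope) * np.sin(aspect),
                       np.sin(slope) * np.cos(aspect),
                       np.cos(slope)])
    # Unit vector pointing toward the Sun.
    sun = np.array([np.cos(elev) * np.sin(azim),
                    np.cos(elev) * np.cos(azim),
                    np.sin(elev)])
    # A non-positive value means the cell faces away from the Sun.
    return float(np.dot(normal, sun))
```

For a horizontal cell the result reduces to the sine of the solar elevation, and a south-facing 45° slope at solar noon with the Sun 45° high receives the beam at normal incidence (cosine of 1), as expected.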
Battery performance models in ADVISOR
NASA Astrophysics Data System (ADS)
Johnson, V. H.
This paper summarizes battery modeling capabilities in ADVISOR—the National Renewable Energy Laboratory's advanced vehicle simulator written in the Matlab/Simulink environment. ADVISOR's Matlab-oriented battery models consist of the following: (1) an internal resistance model, (2) a resistance-capacitance (RC) model, (3) a PNGV capacitance model, (4) a neural network (nnet) lead acid model, and (5) a fundamental lead acid battery model. For the models, the electric schematics (where applicable), thermal models, accuracy, existing datasets, and sample validation plots are presented. A brief summary of ADVISOR's capabilities for co-simulation with Saber is presented, which links ADVISOR with Saber's lead acid battery model. The models outlined in this paper were presented at the workshop on 'Development of Advanced Battery Engineering Models' in August 2001.
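As a concrete illustration of the simplest of these, the internal-resistance (Rint) model, here is a minimal one-step sketch; the open-circuit-voltage curve and all parameter values are hypothetical placeholders, not ADVISOR's validated datasets:

```python
def rint_step(soc, current_a, dt_s, capacity_ah, voc_of_soc, rint_ohm):
    """One step of a simple internal-resistance (Rint) battery model.

    Terminal voltage is the open-circuit voltage minus the I*R drop;
    state of charge is advanced by Coulomb counting.
    Positive current means discharge.
    """
    v_terminal = voc_of_soc(soc) - current_a * rint_ohm
    soc_next = soc - current_a * dt_s / (capacity_ah * 3600.0)
    return v_terminal, soc_next

# Hypothetical linear OCV curve and parameters, for illustration only:
voc = lambda soc: 11.0 + 2.0 * soc          # volts
v, soc = rint_step(soc=0.8, current_a=10.0, dt_s=1.0,
                   capacity_ah=50.0, voc_of_soc=voc, rint_ohm=0.05)
# v = 12.6 - 0.5 = 12.1 V; SOC falls by 10 A * 1 s / (50 Ah * 3600 s/h)
```

The RC and PNGV models in the paper add capacitive states on top of this same voltage-source-plus-resistance idea.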
DENSE GAS DISPERSION MODEL (DEGADIS)
The Dense Gas Dispersion Model (DEGADIS) is a mathematical dispersion model that can be used to model the transport of toxic chemical releases into the atmosphere. Its range of applicability includes continuous, instantaneous, finite duration, and time-variant releases; negative...
Grand unified models and cosmology
Rachel Jeannerot
2006-04-28
The cosmological consequences of particle physics grand unified theories (GUTs) are studied. Cosmological models are implemented in realistic particle physics models. Models consistent from both particle physics and cosmological considerations are selected. (...)
Gravitational models for mission planning
NASA Technical Reports Server (NTRS)
Mueller, A. C.
1982-01-01
A fitted truncated model is developed and any differences between this fitted model and one derived by simply truncating are analyzed. Based on the study, recommendations are made for an appropriate model for use in a mission planning environment.
Physical Modeling of Protein Folding
Lunds Universitet,
Physical Modeling of Protein Folding, Stefan Wallin, Department of Theoretical Physics, Lund University. Opponent: Cecilia Clementi, Rice University, Houston, USA. The thesis concerns sequence-based models for protein folding.
POEM: PESTICIDE ORCHARD ECOSYSTEM MODEL
The Pesticide Orchard Ecosystem Model (POEM) is a mathematical model of organophosphate pesticide movement in an apple orchard ecosystem. In addition submodels on invertebrate population dynamics are included. The fate model allows the user to select the pesticide, its applicatio...
Phylogenetic Models: Algebra and Evolution
Allman, Elizabeth S.
Phylogenetic Models: Algebra and Evolution, Elizabeth S. Allman, Dept. of Mathematics and Statistics. Lecture outline (IMA slides): 1. evolutionary trees; 2. sequence evolution and probabilistic models on trees; 3. phylogenetic ideals and varieties, with applications to phylogenetic inference.
Modeling ocean deep convection
NASA Astrophysics Data System (ADS)
Canuto, V. M.; Howard, A.; Hogan, P.; Cheng, Y.; Dubovikov, M. S.; Montenegro, L. M.
The goal of this study is to assess models for Deep Convection with special emphasis on their use in coarse resolution ocean general circulation models. A model for deep convection must contain both vertical transport and lateral advection by mesoscale eddies generated by baroclinic instabilities. The first process operates mostly in the initial phases while the second dominates the final stages. Here, the emphasis is on models for vertical mixing. When mesoscales are not resolved, they are treated with the Gent and McWilliams parameterization. The model results are tested against the measurements of Lavender, Davis and Owens, 2002 (LDO) in the Labrador Sea. Specifically, we shall inquire whether the models are able to reproduce the region of "deepest convection," which we shall refer to as DC (mixed layer depths 800-1300 m). The region where it was measured by Lavender et al. (2002) will be referred to as the LDO region. The main results of this study can be summarized as follows. 3° × 3° resolution. A GFDL-type OGCM with the GISS vertical mixing model predicts DC in the LDO region, where the vertical heat diffusivity is found to be 10 m² s⁻¹, a value quite close to the one suggested by heuristic studies. No parameter was changed from the original GISS model. However, the GISS model also predicts some DC in a region to the east of the LDO region. 3° × 3° resolution. A GFDL-type OGCM with the KPP model (everything else being the same) does not predict DC in the LDO region, where the vertical heat diffusivity is found to be 0.5 × 10⁻⁴ m² s⁻¹, which is the background value. The KPP model yields DC only to the east of the LDO region. 1° × 1° resolution. In this case, a MY2.5 mixing scheme predicts DC in the LDO region. However, it also predicts DC to the west, north and south of it, where it is not observed. The behavior of the KPP and MY models is somewhat antisymmetric.
The MY models yield too low a mixing in stably stratified flows since they predict a critical Richardson number Ri(cr) = 0.19, which is five times smaller than the value Ri(cr) = O(1) needed to obtain realistic ML depths. However, as discussed above, in unstable stratifications the MY models yield better results. On the other hand, the KPP model, which was motivated primarily by the need to overcome the MY "too low mixing" in stable stratification, yields, at coarse resolution, no DC in the LDO region. In this respect, the GISS model yields both a correct Ri(cr) = O(1) in stable stratification and correct results in the unstable configuration in the LDO region. 1/3° × 1/3° resolution. In this case, KPP predicts mixed layer depths up to 1.7 km inside the LDO region, where at coarse resolution none existed. However, the model still produces DC at locations outside the LDO region where it is not observed. Since these regions are intermingled with very shallow mixed layer depths, the resulting mean mixed layer depths turn out to be less than 800 m almost everywhere outside the LDO region. 1/12° × 1/12° resolution. In this case, KPP predicts mixed layer depths up to 3 km both inside and outside the LDO region. These regions are, here too, intermingled with very shallow mixed layer depths, with resulting mean mixed layer depths greater than 800 m both inside and outside the LDO region. In conclusion, as regards a model for deep convection to be used at coarse resolution, these results indicate that the GISS mixing model agrees well with observations in both stable and unstable stratifications but overestimates the geographical extent of deep convection. This leads to the problem of future improvements of the model.
It must be generalized to include the following physically important features: (a) rotation, which becomes important in the later phases of deep convection when it acts to slow down the rate of mixed layer deepening; (b) non-locality, in particular skewness, which is large (negative) in the initial phases of deep convection and becomes small in the final stages; and finally, (c) a new model to treat lateral advection by baroclinic eddies.
Validation of Space Weather Models at Community Coordinated Modeling Center
NASA Technical Reports Server (NTRS)
Kuznetsova, M. M.; Hesse, M.; Pulkkinen, A.; Maddox, M.; Rastaetter, L.; Berrios, D.; Zheng, Y.; MacNeice, P. J.; Shim, J.; Taktakishvili, A.; Chulaki, A.
2011-01-01
The Community Coordinated Modeling Center (CCMC) is a multi-agency partnership to support the research and development work necessary to substantially increase space weather modeling capabilities and to facilitate the deployment of advanced models in forecasting operations. Space weather models and coupled model chains hosted at the CCMC range from the solar corona to the Earth's upper atmosphere. CCMC has developed a number of real-time modeling systems, as well as a large number of modeling and data products tailored to address the space weather needs of NASA's robotic missions. The CCMC conducts unbiased model testing and validation and evaluates model readiness for the operational environment. CCMC has been leading recent comprehensive modeling challenges under the GEM, CEDAR and SHINE programs. The presentation will focus on experience in carrying out comprehensive and systematic validation of large sets of space weather models.
NASA Technical Reports Server (NTRS)
Iscoe, Neil; Liu, Zheng-Yang; Feng, Guohui; Yenne, Britt; Vansickle, Larry; Ballantyne, Michael
1992-01-01
Domain-specific knowledge is required to create specifications, generate code, and understand existing systems. Our approach to automating software design is based on instantiating an application domain model with industry-specific knowledge and then using that model to achieve the operational goals of specification elicitation and verification, reverse engineering, and code generation. Although many different specification models can be created from any particular domain model, each specification model is consistent and correct with respect to the domain model.
Pragmatic geometric model evaluation
NASA Astrophysics Data System (ADS)
Pamer, Robert
2015-04-01
Quantification of subsurface model reliability is mathematically and technically demanding, as there are many different sources of uncertainty and some of the factors can be assessed merely in a subjective way. For many practical applications in industry or risk assessment (e.g. geothermal drilling), a quantitative estimation of possible geometric variations in depth units is preferred over relative numbers because of cost calculations for different scenarios. The talk gives an overview of several factors that affect the geometry of structural subsurface models that are based upon typical geological survey organization (GSO) data like geological maps, borehole data and conceptually driven construction of subsurface elements (e.g. fault networks). Within the context of the trans-European project "GeoMol", uncertainty analysis has to be very pragmatic, not least because of differing data rights, data policies and modelling software among the project partners. In a case study, a two-step evaluation methodology for geometric subsurface model uncertainty is being developed. In a first step, several models of the same volume of interest have been calculated by omitting successively more and more input data types (seismic constraints, fault network, outcrop data). The positions of the various horizon surfaces are then compared. The procedure is equivalent to comparing data of various levels of detail and therefore structural complexity. This gives a measure of the structural significance of each data set in space and, as a consequence, areas of geometric complexity are identified. These areas are usually very data-sensitive, hence geometric variability between individual data points in these areas is higher than in areas of low structural complexity.
Instead of calculating a multitude of different models by varying some input data or parameters, as is done in Monte Carlo simulations, the aim of the second step of the evaluation procedure (which is part of the ongoing work) is to calculate essentially two model variations that can be seen as the geometric extremes of all available input data. This does not lead to a probability distribution for the spatial position of geometric elements, but it defines zones of major (or minor, respectively) geometric variation due to data uncertainty. Both model evaluations are then analyzed together to give ranges of possible model outcomes in metric units.
Microburst modelling and scaling
NASA Technical Reports Server (NTRS)
Lundgren, T. S.; Yao, J.; Mansour, N. N.
1992-01-01
A microburst can be modeled by releasing a volume of fluid that is slightly heavier than the ambient fluid, allowing it to fall onto a horizontal surface. Vorticity develops on the sides of this parcel as it descends and causes it to roll up into a turbulent vortex ring which impinges on the ground. Such a model exhibits many of the features of naturally occurring microbursts, which are a hazard to aviation. In this paper this model is achieved experimentally by releasing a volume of salt water into fresh water from a cylindrical dispenser. When care is taken with the release, the spreading rate of the surface outflow is measurable and quite repeatable despite the fact that the flow is turbulent. An elementary numerical approximation to this model, based on inviscid vortex dynamics, has also been developed. A scaling law is proposed which allows experiments with different fluid densities to be compared with each other and with the numerical results. More importantly, the scaling law makes it possible to compare the model results with real microbursts.
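The scaling law itself is not reproduced in the abstract. A common nondimensionalization for such negatively buoyant releases, stated here purely as an assumption and not necessarily the authors' exact law, builds velocity and time scales from the reduced gravity and the release size:

```python
import math

def microburst_scales(g, rho_parcel, rho_ambient, release_diameter):
    """Buoyancy velocity and time scales for a dense-parcel release.

    Assumed scaling (not taken from the paper): reduced gravity
    g' = g * (rho_p - rho_a) / rho_a, velocity scale U = sqrt(g' * D),
    time scale T = D / U.  Flows with matched scales should collapse.
    """
    g_reduced = g * (rho_parcel - rho_ambient) / rho_ambient
    u = math.sqrt(g_reduced * release_diameter)
    return u, release_diameter / u

# Salt water (1025 kg/m^3) released into fresh water (1000 kg/m^3), D = 0.1 m:
u, t = microburst_scales(9.81, 1025.0, 1000.0, 0.1)
```

Rescaling the measured outflow speed by U and time by T is what would let a small salt-water tank experiment be compared with a full-scale atmospheric microburst.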
The standard cosmological model
NASA Astrophysics Data System (ADS)
Scott, D.
2006-06-01
The Standard Model of Particle Physics (SMPP) is an enormously successful description of high-energy physics, driving ever more precise measurements to find "physics beyond the standard model", as well as providing motivation for developing more fundamental ideas that might explain the values of its parameters. Simultaneously, a description of the entire three-dimensional structure of the present-day Universe is being built up painstakingly. Most of the structure is stochastic in nature, being merely the result of the particular realization of the "initial conditions" within our observable Universe patch. However, governing this structure is the Standard Model of Cosmology (SMC), which appears to require only about a dozen parameters. Cosmologists are now determining the values of these quantities with increasing precision to search for "physics beyond the standard model", as well as trying to develop an understanding of the more fundamental ideas that might explain the values of its parameters. Although it is natural to see analogies between the two Standard Models, some intrinsic differences also exist, which are discussed here. Nevertheless, a truly fundamental theory will have to explain both the SMPP and SMC, and this must include an appreciation of which elements are deterministic and which are accidental. Considering different levels of stochasticity within cosmology may make it easier to accept that physical parameters in general might have a nondeterministic aspect.
NASA Astrophysics Data System (ADS)
Sahoo, Shaon; Ganguly, Soumya Kanti
2015-04-01
Contrary to the actual nonlinear Glauber model, the linear Glauber model (LGM) is exactly solvable, although the detailed balance condition is not generally satisfied. This motivates us to address the issue of writing the transition rate in the best possible linear form such that the mean squared error in satisfying the detailed balance condition is least. The advantage of this work is that, by studying the LGM analytically, we will be able to anticipate how the kinetic properties of an arbitrary Ising system depend on the temperature and the coupling constants. The analytical expressions for the optimal values of the parameters involved in the linear form are obtained using a simple Moore-Penrose pseudoinverse matrix. This approach is quite general, in principle applicable to any system, and can reproduce the exact results for the one-dimensional Ising system. In the continuum limit, we get a linear time-dependent Ginzburg-Landau equation from Glauber's microscopic model of non-conservative dynamics. We analyze the critical and dynamic properties of the model, and show that most of the important results obtained in different studies can be reproduced by our new mathematical approach. We will also show in this paper that the effect of a magnetic field can easily be studied within our approach; in particular, we show that the inverse of the relaxation time changes quadratically with a (weak) magnetic field and that the fluctuation-dissipation theorem is valid for our model.
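The pseudoinverse step can be illustrated in the one-dimensional nearest-neighbour case, where the linear form happens to be exact: fitting the nonlinear Glauber flip rate over all local spin neighbourhoods recovers the coefficient tanh(2βJ)/4 exactly. The enumeration below is our own minimal sketch, not the authors' code:

```python
import numpy as np

beta_J = 0.5          # dimensionless coupling beta * J
spins = [-1, 1]

# All (s_i, left, right) neighbourhoods of a spin in a 1D chain.
rows, targets = [], []
for si in spins:
    for sl in spins:
        for sr in spins:
            # Exact (nonlinear) Glauber flip rate, up to the attempt rate.
            w = 0.5 * (1.0 - si * np.tanh(beta_J * (sl + sr)))
            rows.append([1.0, si * sl, si * sr])  # basis of the linear ansatz
            targets.append(w)

A = np.array(rows)
b = np.array(targets)
# Least-squares coefficients via the Moore-Penrose pseudoinverse.
coef = np.linalg.pinv(A) @ b
# In 1D the fit is exact: coef = [1/2, -g/4, -g/4] with g = tanh(2*beta_J).
g = np.tanh(2.0 * beta_J)
```

In higher dimensions the analogous fit is no longer exact, and the residual is precisely the mean squared detailed-balance error the abstract describes minimizing.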
NASA Astrophysics Data System (ADS)
Cathala, Thierry; Latger, Jean
2010-10-01
More and more defence and civil applications require simulation of the marine synthetic environment. Currently, the "Future Anti-Surface-Guided-Weapon" (FASGW) or "anti-navire léger" (ANL) missile needs this kind of modelling. This paper presents a set of technical enhancements of the SE-Workbench that aim at better representing the sea profile and the interaction with targets. The operational scenario variability is a key criterion: the generic geographical area (e.g. Persian Gulf, coast of Somalia, ...), the type of situation (e.g. peace keeping, peace enforcement, anti-piracy, drug interdiction, ...), the objectives (political, strategic, or military objectives), the description of the mission(s) (e.g. anti-piracy) and operation(s) (e.g. surveillance and reconnaissance, escort, convoying) to achieve the objectives, and the type of environment (weather, time of day, geography: coastlines, islands, hills/mountains). The paper insists on several points such as the dual rendering using either ray tracing [and GPGPU optimization] or rasterization [and GPU shader optimization], the modelling of the sea surface based on hypertextures and shaders, the modelling of wakes, the buoyancy models for targets, the interaction with coast and littoral, and the dielectric infrared modelling of the water material.
Vaginal drug distribution modeling.
Katz, David F; Yuan, Andrew; Gao, Yajing
2015-09-15
This review presents and applies fundamental mass transport theory describing the diffusion and convection driven mass transport of drugs to the vaginal environment. It considers sources of variability in the predictions of the models. It illustrates use of model predictions of microbicide drug concentration distribution (pharmacokinetics) to gain insights about drug effectiveness in preventing HIV infection (pharmacodynamics). The modeling compares vaginal drug distributions after different gel dosage regimens, and it evaluates consequences of changes in gel viscosity due to aging. It compares vaginal mucosal concentration distributions of drugs delivered by gels vs. intravaginal rings. Finally, the modeling approach is used to compare vaginal drug distributions across species with differing vaginal dimensions. Deterministic models of drug mass transport into and throughout the vaginal environment can provide critical insights about the mechanisms and determinants of such transport. This knowledge, and the methodology that obtains it, can be applied and translated to multiple applications, involving the scientific underpinnings of vaginal drug distribution and the performance evaluation and design of products, and their dosage regimens, that achieve it. PMID:25933938
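As an illustration of the kind of deterministic transport model such a review builds on, here is a bare one-dimensional diffusion sketch; the geometry, boundary conditions and parameter values are simplified assumptions of ours, since real vaginal deployment also involves convection (epithelial squeezing flow of the gel) and tissue-uptake kinetics:

```python
import numpy as np

def diffuse_1d(c0, D, dx, dt, steps):
    """Explicit finite-difference solution of dc/dt = D * d2c/dx2.

    Illustrative only: left boundary is treated as no-flux (lumen side),
    right boundary as a perfect sink (absorbing tissue side).
    """
    r = D * dt / dx**2
    assert r <= 0.5, "FTCS stability requires D*dt/dx^2 <= 1/2"
    c = c0.astype(float).copy()
    for _ in range(steps):
        c[1:-1] = c[1:-1] + r * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        c[0], c[-1] = c[1], 0.0   # no-flux at one side, sink at the other
    return c

# Hypothetical drug bolus (normalized concentration) next to the lumen wall:
c0 = np.zeros(50)
c0[:10] = 1.0
profile = diffuse_1d(c0, D=1e-10, dx=1e-5, dt=0.4, steps=200)
```

Spatial concentration profiles like `profile`, evaluated at the mucosal surface over time, are what get compared against a target inhibitory concentration in the pharmacokinetic/pharmacodynamic analyses the review describes.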
Advanced Chemistry Basins Model
Blanco, Mario; Cathles, Lawrence; Manhardt, Paul; Meulbroek, Peter; Tang, Yongchun
2003-02-13
The objective of this project is to: (1) Develop a database of additional and better maturity indicators for paleo-heat flow calibration; (2) Develop maturation models capable of predicting the chemical composition of hydrocarbons produced by a specific kerogen as a function of maturity, heating rate, etc.; assemble a compositional kinetic database of representative kerogens; (3) Develop a four-phase equation-of-state flash model that can define the physical properties (viscosity, density, etc.) of the products of kerogen maturation, and the phase transitions that occur along secondary migration pathways; (4) Build a conventional basin model and incorporate the new maturity indicators and databases in a user-friendly way; (5) Develop an algorithm which combines the volume change and viscosities of the compositional maturation model to predict the chemistry of the hydrocarbons that will be expelled from the kerogen to the secondary migration pathways; (6) Develop an algorithm that predicts the flow of hydrocarbons along secondary migration pathways, accounts for mixing of miscible hydrocarbon components along the pathway, and calculates the phase fractionation that will occur as the hydrocarbons move upward, down the geothermal and fluid-pressure gradients in the basin; and (7) Integrate the above components into a functional model implemented on a PC or low-cost workstation.
PATHS groundwater hydrologic model
Nelson, R.W.; Schur, J.A.
1980-04-01
A preliminary evaluation capability for two-dimensional groundwater pollution problems was developed as part of the Transport Modeling Task for the Waste Isolation Safety Assessment Program (WISAP). Our approach was to use the data limitations as a guide in setting the level of modeling detail. The PATHS Groundwater Hydrologic Model is the first-level (simplest) idealized hybrid analytical/numerical model for two-dimensional, saturated groundwater flow and single-component transport in homogeneous geology. This document describes the PATHS groundwater hydrologic model, covering the preliminary evaluation capability prepared for WISAP, including the enhancements made as a result of the authors' experience with the earlier capability. Appendixes A through D supplement the report as follows: complete derivations of the background equations are provided in Appendix A. Appendix B is a comprehensive set of instructions for users of PATHS. It is written for users who have little or no experience with computers. Appendix C is for the programmer. It contains information on how input parameters are passed between programs in the system. It also contains program listings and test-case listings. Appendix D is a definition of terms.
Recalibrating software reliability models
NASA Technical Reports Server (NTRS)
Brocklehurst, Sarah; Chan, P. Y.; Littlewood, Bev; Snell, John
1989-01-01
In spite of much research effort, there is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. Further, it is not even possible to decide a priori which of the many models is most suitable in a particular context. In an attempt to resolve this problem, techniques were developed whereby, for each program, the accuracy of various models can be analyzed. A user is thus enabled to select that model which is giving the most accurate reliability predictions for the particular program under examination. One of these ways of analyzing predictive accuracy, called the u-plot, in fact allows a user to estimate the relationship between the predicted reliability and the true reliability. It is shown how this can be used to improve reliability predictions in a completely general way by a process of recalibration. Simulation results show that the technique gives improved reliability predictions in a large proportion of cases. However, a user does not need to trust the efficacy of recalibration, since the new reliability estimates produced by the technique are truly predictive and so their accuracy in a particular application can be judged using the earlier methods. The generality of this approach would therefore suggest that it be applied as a matter of course whenever a software reliability model is used.
High altitude atmospheric modeling
NASA Technical Reports Server (NTRS)
Hedin, Alan E.
1988-01-01
Five empirical models were compared with 13 data sets, including both atmospheric drag-based data and mass spectrometer data. The most recently published model, MSIS-86, was found to be the best model overall with an accuracy around 15 percent. The excellent overall agreement of the mass spectrometer-based MSIS models with the drag data, including both the older data from orbital decay and the newer accelerometer data, suggests that the absolute calibration of the (ensemble of) mass spectrometers and the assumed drag coefficient in the atomic oxygen regime are consistent to 5 percent. This study illustrates a number of reasons for the current accuracy limit such as calibration accuracy and unmodeled trends. Nevertheless, the largest variations in total density in the thermosphere are accounted for, to a very high degree, by existing models. The greatest potential for improvements is in areas where we still have insufficient data (like the lower thermosphere or exosphere), where there are disagreements in technique (such as the exosphere) which can be resolved, or wherever generally more accurate measurements become available.
He, X.; Joshi, G.C.; Lew, H.; Volkas, R.R.
1991-10-01
The differences in family-lepton numbers are anomaly-free in the minimal standard model (MSM), and can therefore be gauged. For three generations of quarks and leptons, three models emerge depending on whether (i) L_e − L_μ, (ii) L_e − L_τ, or (iii) L_μ − L_τ is gauged. These are the simplest models to feature a Z′ boson because no fermions beyond those already present in the MSM are required to cancel gauge anomalies. We analyze the phenomenology of models (i) and (ii) in detail, and present constraints derived from low-energy neutral-current data and CERN LEP data. We find that these Z′ bosons may have a relatively low mass yet still evade present experimental bounds, while remaining detectable in current accelerators. The introduction of neutrino masses into the models is then considered. We discuss how one may incorporate both the reported 17-keV neutrino and the Mikheyev-Smirnov-Wolfenstein-effect solution of the solar-neutrino problem. We then describe how to embed the extra U(1) gauge group into a horizontal SU(2)-symmetry group acting on leptons.
NASA Astrophysics Data System (ADS)
Wolf, Sebastian; Henning, Thomas
We present model calculations for the scattered flux and polarization of nuclear radiation, with the AGN unification scheme in mind. It was our aim to consider recently discussed AGN polarization models, to find out their common aspects, and ultimately to derive fundamental model parameters for AGNi. Additionally, we improved present models, for instance by including multiple instead of single scattering. For these purposes we developed a Monte-Carlo radiative transfer code for both electron and dust scattering (Thomson and Rayleigh/Mie scattering). Irradiation from a point-like source can be treated, as well as extended anisotropic radiation sources and dust re-emission. The goal of our work is to point out common aspects and differences between those kinds of AGN models which relate the polarization of light to electron and/or dust scattering. We found that the observed wavelength dependence of the linear polarization of AGNi must be caused by dust scattering in optically thin cones, whereby the additional presence of electrons increases the absolute amount of polarization. The torus geometry and density profile as well as the thermal dust re-emission also influence the polarization degree. Multiple scattering in the cones was found to be important for optical depths above approximately 0.1.
Stubbs, D F
1977-01-01
Some empirical and theoretical models of the emptying behaviour of the stomach are presented. The laws of Laplace, Hooke, and Poiseuille are used to derive a new model of gastric emptying. Published data on humans are used to test the model and evaluate empirical constants. It is shown that for meals with an initial volume larger than or equal to 300 ml, the reciprocal of the cube root of the volume of meal remaining is proportional to the time the meal is in the stomach. For meals with an initial volume of less than 300 ml the equation has to be corrected for the fact that the 'resting volume' of gastric contents is about 28 ml. The more exact formula is given in the text. As this model invokes no neural or hormonal factors, it is suggested that the gastric emptying response to the volume of a meal does not depend on these factors. The gastric emptying response to the composition of the meal does depend on such factors, and a recent model of this process is used to evaluate an empirical constant. PMID:856678
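The cube-root law for large meals can be written directly as a volume-versus-time function; the rate constant below is an illustrative placeholder, not the paper's fitted value, and the small-meal resting-volume correction (given only in the paper itself) is omitted:

```python
def volume_remaining(v0_ml, k, t_min):
    """Meal volume remaining under the cube-root emptying law.

    Assumed form (consistent with the abstract for meals >= 300 ml):
        1 / V(t)**(1/3) = 1 / V0**(1/3) + k * t
    so V(t) = (V0**(-1/3) + k*t)**(-3).  k is illustrative, not fitted.
    """
    return (v0_ml ** (-1.0 / 3.0) + k * t_min) ** -3.0

# With a hypothetical k, a 500 ml meal empties monotonically over time:
v_start = volume_remaining(500.0, 5e-4, 0.0)    # = 500 ml
v_later = volume_remaining(500.0, 5e-4, 60.0)   # smaller, still > 0
```

The model's appeal, as the abstract notes, is that this volume response needs no neural or hormonal input: it follows from the mechanics of Laplace, Hooke and Poiseuille alone.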
Cosmological Models and Stability
NASA Astrophysics Data System (ADS)
Andersson, Lars
Principles in the form of heuristic guidelines or generally accepted dogma play an important role in the development of physical theories. In particular, philosophical considerations and principles figure prominently in the work of Albert Einstein. As mentioned in the talk by Jiří Bičák at this conference, Einstein formulated the equivalence principle, an essential step on the road to general relativity, during his time in Prague 1911-1912. In this talk, I would like to discuss some aspects of cosmological models. As cosmology is an area of physics where "principles" such as the "cosmological principle" or the "Copernican principle" play a prominent role in motivating the class of models which form part of the current standard model, I will start by comparing the role of the equivalence principle to that of the principles used in cosmology. I will then briefly describe the standard model of cosmology to give a perspective on some mathematical problems and conjectures on cosmological models, which are discussed in the later part of this paper.
Not Available
1981-10-01
(1) We recommend the establishment of an experimental test facility, appropriately instrumented, dedicated to research on theoretical modeling concepts. Validation of models for the various flow regimes, and establishment of the limitations of concepts used in the construction of models, are sorely needed areas of research. There currently exists no mechanism for funding such research on a systematic basis. Such a facility would provide information fundamental to progress in the physics of turbulent multi-phase flow, which would also have impact on the understanding of coal utilization processes; (2) combustion research appears to have special institutional barriers to information exchange because it is an established, commercial ongoing effort, with heavy reliance on empirical data for proprietary configurations; (3) for both gasification and combustion reactors, current models appear to handle adequately some, perhaps even most, gross aspects of the reactors such as overall efficiency and major chemical output constituents. However, new and more stringent requirements concerning NOX, SOX and POX (small particulate) production require greater understanding of process details and spatial inhomogeneities; hence refinement of current models to include greater detail is necessary; (4) further progress in the theory of single-phase turbulent flow would benefit our understanding of both combustors and gasifiers; and (5) another area in which theoretical development would be extremely useful is multi-phase flow.
Energy Science and Technology Software Center (ESTSC)
2004-06-21
CMOR comprises a set of FORTRAN 90 functions that can be used to produce CF-compliant netCDF files. The structure of the files created by CMOR and the metadata they contain fulfill the requirements of many of the climate community's standard model experiments (referred to here as "MIPs", which stands for "model intercomparison project", including, for example, AMIP, CMIP, CFMIP, PMIP, APE, and IPCC scenario runs). CMOR was not designed to serve as an all-purpose writer of CF-compliant netCDF files, but simply to reduce the effort required to prepare and manage MIP data. Although MIPs encourage systematic analysis of results across models, this is only easy to do if the model output is written in a common format with files structured similarly and with sufficient metadata uniformly stored according to a common standard. Individual modeling groups store their data in different ways, but if a group can read its own data with FORTRAN, then it should easily be able to transform the data, using CMOR, into the common format required by the MIPs. The adoption of CMOR as a standard code for exchanging climate data will facilitate participation in MIPs because after learning how to satisfy the output requirements of one MIP, it will be easy to prepare output for the other MIPs.
Using the Model Coupling Toolkit to couple earth system models
Warner, J.C.; Perlin, N.; Skyllingstad, E.D.
2008-01-01
Continued advances in computational resources are providing the opportunity to operate more sophisticated numerical models. Additionally, there is an increasing demand for multidisciplinary studies that include interactions between different physical processes. Therefore there is a strong desire to develop coupled modeling systems that utilize existing models and allow efficient data exchange and model control. The basic system would entail model "1" running on "M" processors and model "2" running on "N" processors, with efficient exchange of model fields at predetermined synchronization intervals. Here we demonstrate two coupled systems: the coupling of the ocean circulation model Regional Ocean Modeling System (ROMS) to the surface wave model Simulating WAves Nearshore (SWAN), and the coupling of ROMS to the atmospheric model Coupled Ocean Atmosphere Prediction System (COAMPS). Both coupled systems use the Model Coupling Toolkit (MCT) as a mechanism for operation control and inter-model distributed memory transfer of model variables. In this paper we describe requirements and other options for model coupling, explain the MCT library, ROMS, SWAN and COAMPS models, methods for grid decomposition and sparse matrix interpolation, and provide an example from each coupled system. Methods presented in this paper are clearly applicable for coupling of other types of models. © 2008 Elsevier Ltd. All rights reserved.
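The coupling pattern in this abstract, with two models advancing independently and exchanging fields at fixed synchronization intervals, can be sketched in a single process. This toy is only an illustration of the idea; MCT itself manages distributed-memory transfers between an M-processor and an N-processor model, and the relaxation dynamics below are invented for the example.

```python
# Toy sketch of coupled models exchanging fields at a synchronization
# interval.  Both "models" and their relaxation physics are hypothetical.

class ToyModel:
    def __init__(self, name, state, relax=0.1):
        self.name, self.state, self.relax = name, state, relax
        self.forcing = state  # field last received from the partner model

    def step(self):
        # Relax the state toward the forcing supplied by the partner.
        self.state += self.relax * (self.forcing - self.state)

def run_coupled(m1, m2, nsteps, sync_every):
    for i in range(nsteps):
        if i % sync_every == 0:                      # synchronization point
            m1.forcing, m2.forcing = m2.state, m1.state
        m1.step()
        m2.step()

ocean = ToyModel("ocean", state=10.0)
atmos = ToyModel("atmos", state=30.0)
run_coupled(ocean, atmos, nsteps=200, sync_every=5)
# The two states converge toward a common value between 10 and 30.
```

Because fields are exchanged only at synchronization points, each model sees a snapshot of its partner, which is the essential trade-off of interval-based coupling.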
Business models of information aggregators
Hu, Jiangxia, S.M. Massachusetts Institute of Technology
2008-01-01
This thesis identifies the specific characteristics of information aggregators, and proposes nine business models appropriate for information aggregators. These nine models are: advertising, brokerage, subscription, ...
Simple ocean carbon cycle models
Caldeira, K.; Hoffert, M.I.; Siegenthaler, U.
1994-02-01
Simple ocean carbon cycle models can be used to calculate the rate at which the oceans are likely to absorb CO2 from the atmosphere. For problems involving steady-state ocean circulation, well calibrated ocean models produce results that are very similar to results obtained using general circulation models. Hence, simple ocean carbon cycle models may be appropriate for use in studies in which the time or expense of running large scale general circulation models would be prohibitive. Simple ocean models have the advantage of being based on a small number of explicit assumptions. The simplicity of these ocean models facilitates the understanding of model results.
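A minimal example of the model class this abstract describes is a two-box atmosphere-ocean exchange. The exchange coefficient and the equilibrium partition ratio below are hypothetical round numbers for illustration, not the calibrated values such models actually use.

```python
# Two-box sketch of a simple ocean carbon cycle model: an atmosphere box
# exchanges CO2 with one well-mixed ocean box.  k_ex and ratio are
# hypothetical, uncalibrated values.

def two_box(atm0=850.0, ocn0=38000.0, k_ex=0.1, ratio=100.0,
            dt=0.1, nsteps=1000):
    """Euler integration; carbon inventories in GtC, dt in years.
    'ratio' is an assumed equilibrium ocean/atmosphere partitioning."""
    atm, ocn = atm0, ocn0
    for _ in range(nsteps):
        flux = k_ex * (atm - ocn / ratio)   # GtC/yr into the ocean
        atm -= flux * dt
        ocn += flux * dt
    return atm, ocn
```

Total carbon is conserved by construction, and the atmosphere relaxes toward the partition-ratio equilibrium, which is exactly the kind of explicit, inspectable assumption the abstract credits simple models with.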
Modeling microbial growth and dynamics.
Esser, Daniel S; Leveau, Johan H J; Meyer, Katrin M
2015-11-01
Modeling has become an important tool for widening our understanding of microbial growth in the context of applied microbiology and related to such processes as safe food production, wastewater treatment, bioremediation, or microbe-mediated mining. Various modeling techniques, such as primary, secondary and tertiary mathematical models, phenomenological models, mechanistic or kinetic models, reactive transport models, Bayesian network models, artificial neural networks, as well as agent-, individual-, and particle-based models have been applied to model microbial growth and activity in many applied fields. In this mini-review, we summarize the basic concepts of these models using examples and applications from food safety and wastewater treatment systems. We further review recent developments in other applied fields focusing on models that explicitly include spatial relationships. Using these examples, we point out the conceptual similarities across fields of application and encourage the combined use of different modeling techniques in hybrid models as well as their cross-disciplinary exchange. For instance, pattern-oriented modeling has its origin in ecology but may be employed to parameterize microbial growth models when experimental data are scarce. Models could also be used as virtual laboratories to optimize experimental design analogous to the virtual ecologist approach. Future microbial growth models will likely become more complex to benefit from the rich toolbox that is now available to microbial growth modelers. PMID:26298697
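Of the "primary" growth models mentioned above, a standard example from predictive food microbiology is the modified Gompertz curve (Zwietering's reparameterization), which expresses the log-increase of a population from a lag time, a maximum specific growth rate, and an asymptote. The parameter values below are illustrative only.

```python
import math

# Modified Gompertz primary growth model:
#   y(t) = A * exp(-exp(mu_m * e / A * (lam - t) + 1))
# y is ln(N(t)/N0); A (asymptote), mu_m (max growth rate, 1/h) and
# lam (lag time, h) are illustrative values, not fitted ones.

def gompertz(t, a=6.0, mu_m=0.5, lam=2.0):
    return a * math.exp(-math.exp(mu_m * math.e / a * (lam - t) + 1.0))

curve = [gompertz(t) for t in range(48)]  # hourly values over two days
```

The curve is sigmoidal: near zero during the lag phase, then rising at rate mu_m, and saturating at A, the qualitative shape such primary models are meant to capture.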
Modelling intelligent behavior
NASA Technical Reports Server (NTRS)
Green, H. S.; Triffet, T.
1993-01-01
An introductory discussion of the related concepts of intelligence and consciousness suggests criteria to be met in the modeling of intelligence and the development of intelligent materials. Methods for the modeling of actual structure and activity of the animal cortex have been found, based on present knowledge of the ionic and cellular constitution of the nervous system. These have led to the development of a realistic neural network model, which has been used to study the formation of memory and the process of learning. An account is given of experiments with simple materials which exhibit almost all properties of biological synapses and suggest the possibility of a new type of computer architecture to implement an advanced type of artificial intelligence.
2-Stage Classification Modeling
Energy Science and Technology Software Center (ESTSC)
1994-11-01
CIRCUIT2.4 is used to design optimum two-stage classification configurations and operating conditions for energy conservation. It permits simulation of five basic grinding-classification circuits, including one single-stage and four two-stage classification arrangements. Hydrocyclones, spiral classifiers, and sieve band screens can be simulated, and the user may choose the combination of devices for the flowsheet simulation. In addition, the user may select from four classification modeling methods to achieve the goals of a simulation project using the most familiar concepts. Circuit performance is modeled based on classification parameters or equipment operating conditions. A modular approach was taken in designing the program, which allows future addition of other models with relatively minor changes.
Progress in Initiator Modeling
Hrousis, C A; Christensen, J S
2009-05-04
There is great interest in applying magnetohydrodynamic (MHD) simulation techniques to the designs of electrical high explosive (HE) initiators, for the purpose of better understanding a design's sensitivities, optimizing its performance, and/or predicting its useful lifetime. Two MHD-capable LLNL codes, CALE and ALE3D, are being used to simulate the process of ohmic heating, vaporization, and plasma formation in the bridge of an initiator, be it an exploding bridgewire (EBW), exploding bridgefoil (EBF) or slapper type initiator. The initiation of the HE is simulated using Tarver Ignition & Growth reactive flow models. 1-D, 2-D and 3-D models have been constructed and studied. The models provide some intuitive explanation of the initiation process and are useful for evaluating the potential impact of identified aging mechanisms (such as the growth of intermetallic compounds or powder sintering). The end product of this work is a simulation capability for evaluating margin in proposed, modified or aged initiation system designs.
Astrochemistry: Synthesis and Modelling
Wakelam, Valentine; Herbst, Eric
2013-01-01
We discuss models that astrochemists have developed to study the chemical composition of the interstellar medium. These models aim at computing the evolution of the chemical composition of a mixture of gas and dust under astrophysical conditions. These conditions, as well as the geometry and the physical dynamics, have to be adapted to the objects being studied because different classes of objects have very different characteristics (temperatures, densities, UV radiation fields, geometry, history, etc.); e.g., proto-planetary disks do not have the same characteristics as protostellar envelopes. Chemical models are being improved continually thanks to comparisons with observations but also thanks to laboratory and theoretical work in which the individual processes are studied.
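The rate-equation approach underlying such chemical models can be illustrated with a deliberately tiny network: H2 forming on grains from atomic hydrogen and being destroyed by photodissociation. Both rate coefficients below are placeholders chosen for a stable toy integration, not measured astrochemical rates.

```python
# Toy gas-phase rate-equation network (placeholder rate coefficients):
#   H + H -> H2   (grain-surface formation, rate k_form * n_H)
#   H2 + photon -> 2 H  (photodissociation, rate k_dest * n_H2)

def evolve(n_h=1.0, n_h2=0.0, k_form=1e-2, k_dest=1e-3,
           dt=1.0, nsteps=20000):
    for _ in range(nsteps):
        form = k_form * n_h        # H2 produced per unit time
        dest = k_dest * n_h2       # H2 destroyed per unit time
        n_h += (2.0 * dest - 2.0 * form) * dt   # 2 H atoms per H2
        n_h2 += (form - dest) * dt
    return n_h, n_h2
```

Hydrogen nuclei (n_H + 2 n_H2) are conserved, and the abundances relax to the steady state where formation balances destruction; real astrochemical codes integrate hundreds of species and thousands of reactions the same way.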
NASA Astrophysics Data System (ADS)
Masuda, Naoki; Gibert, N.; Redner, S.
2010-07-01
We introduce the heterogeneous voter model (HVM), in which each agent has its own intrinsic rate to change state, reflective of the heterogeneity of real people, and the partisan voter model (PVM), in which each agent has an innate and fixed preference for one of two possible opinion states. For the HVM, the time until consensus is reached is much longer than in the classic voter model. For the PVM in the mean-field limit, a population evolves to a preference-based state, where each agent tends to be aligned with its internal preference. For finite populations, discrete fluctuations ultimately lead to consensus being reached in a time that scales exponentially with population size.
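The update dynamics described above are easy to simulate directly. The sketch below implements the classic voter model on a complete graph and, as an assumption for illustration, realizes the HVM's heterogeneity as intrinsic update rates spread over two orders of magnitude; the specific rate distribution is not taken from the paper.

```python
import random

# Monte-Carlo sketch of voter-model dynamics on a complete graph.
# Classic model: uniform update rates.  HVM variant: each agent gets a
# hypothetical intrinsic rate drawn log-uniformly from [0.01, 1].

def consensus_time(n=20, heterogeneous=False, seed=1, max_steps=10**6):
    rng = random.Random(seed)
    state = [rng.choice((0, 1)) for _ in range(n)]
    rates = [10 ** rng.uniform(-2, 0) if heterogeneous else 1.0
             for _ in range(n)]
    total = sum(rates)
    for step in range(1, max_steps + 1):
        # Pick the updating agent with probability proportional to its rate.
        r, acc = rng.uniform(0, total), 0.0
        for i, w in enumerate(rates):
            acc += w
            if acc >= r:
                break
        j = rng.randrange(n - 1)
        if j >= i:
            j += 1                      # a random *other* agent
        state[i] = state[j]             # copy its opinion
        if len(set(state)) == 1:
            return step                 # consensus reached
    return max_steps
```

Averaging consensus_time over many seeds reproduces the qualitative finding that heterogeneous rates slow the approach to consensus relative to the classic model; the partisan variant would additionally bias each copy event toward the agent's innate preference.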
NASA Astrophysics Data System (ADS)
Kanai, Yasuhiro; Abe, Keiji; Seki, Yoichi
2015-06-01
We propose a price percolation model to reproduce the price distribution of components used in industrial finished goods. The intent is to show, using the price percolation model and a component category as an example, that percolation behaviors, which exist in the matter system, the ecosystem, and human society, also exist in abstract, random phenomena satisfying the power law. First, we discretize the total potential demand for a component category, considering it a random field. Second, we assume that the discretized potential demand corresponding to a function of a finished good turns into actual demand if the difficulty of function realization is less than the maximum difficulty of the realization. The simulations using this model suggest that changes in a component category's price distribution are due to changes in the total potential demand corresponding to the lattice size and the maximum difficulty of realization, which is an occupation probability. The results are verified using electronic components' sales data.
Pulsed Plasma Accelerator Modeling
NASA Technical Reports Server (NTRS)
Goodman, M.; Kazeminezhad, F.; Owens, T.
2009-01-01
This report presents the main results of the modeling task of the PPA project. The objective of this task is to make major progress towards developing a new computational tool with new capabilities for simulating cylindrically symmetric 2.5-dimensional (2.5D) PPAs. This tool may be used for designing, optimizing, and understanding the operation of PPAs and other pulsed power devices. The foundation for this task is the 2-D, cylindrically symmetric, magnetohydrodynamic (MHD) code PCAPPS (Princeton Code for Advanced Plasma Propulsion Simulation). PCAPPS was originally developed by Sankaran (2001, 2005) to model Lithium Lorentz Force Accelerators (LLFAs), which are electrode-based devices typically operated in continuous mode. The work reported here extends PCAPPS by adding a magnetic field to the model and by implementing a first-principles, self-consistent algorithm to couple the plasma to the power circuit that drives the plasma dynamics.
NASA Astrophysics Data System (ADS)
Kruglov, S. I.
2015-08-01
We propose a new model of modified gravity theory specified by its function of the Ricci scalar curvature. Constant curvature solutions corresponding to the flat and de Sitter spacetimes are obtained. The Jordan and Einstein frames are considered; the potential and the mass of the scalar degree of freedom are found. We show that the flat spacetime is stable and the de Sitter spacetime is unstable. The slow-roll parameters and the e-fold number of the model are evaluated in the Einstein frame. The index of the scalar spectrum power-law and the tensor-to-scalar ratio are calculated. Critical points of the autonomous equations for the de Sitter phase and the matter-dominated epoch are found and studied. We obtain an approximate solution of the equations of motion describing the deviation from the de Sitter phase in the Jordan frame. It is demonstrated that the model passes the matter stability test.
Critical Infrastructure Modeling System
Energy Science and Technology Software Center (ESTSC)
2004-10-01
The Critical Infrastructure Modeling System (CIMS) is a 3D modeling and simulation environment designed to assist users in the analysis of dependencies within individual infrastructure and also interdependencies between multiple infrastructures. Through visual cuing and textual displays, a user can evaluate the effect of system perturbation and identify the emergent patterns that evolve. These patterns include possible outage areas from a loss of power, denial of service or access, and disruption of operations. Method of Solution: CIMS allows the user to model a system, create an overlay of information, and create 3D representative images to illustrate key infrastructure elements. A geo-referenced scene, satellite, aerial images or technical drawings can be incorporated into the scene. Scenarios of events can be scripted, and the user can also interact during run time to alter system characteristics. CIMS operates as a discrete event simulation engine feeding a 3D visualization.
NASA Astrophysics Data System (ADS)
Heilbron, John
2011-04-01
Rutherford's nuclear model originally was a theory of scattering that represented both the incoming alpha particles and their targets as point charges. The assumption that the alpha particle, which Rutherford knew to be a doubly ionized helium atom, was a bare nucleus, and the associated assumption that the electronic structure of the atom played no significant role in large-angle scattering, had immediate and profound consequences well beyond the special problem for which Rutherford introduced them. The group around him in Manchester in 1911/12, which included Niels Bohr, Charles Darwin, Georg von Hevesy, and Henry Moseley, worked out some of these consequences. Their elucidation of radioactivity, isotopy, atomic number, and quantization marked an epoch in microphysics. Rutherford's nuclear model was exemplary not only for its fertility and picturability, but also for its radical simplicity. The lecturer will not undertake to answer the baffling question why such simple models work.
Apul, D.; Gardner, K.; Eighmy, T.
2005-09-30
Computer modeling can be an instructive tool to evaluate potential environmental impacts of coal combustion byproducts and other secondary materials used in road and embankment construction. Results from the HYDRUS2D model coupled with an uncertainty analysis suggest that the cadmium fluxes will be significantly less than the output from simpler models with worst case scenarios. Two-dimensional analysis of leaching from the base layer also suggests that concentrations leaching to ground water will not be significant for metals unless the pavement is completely damaged and built on sandy soils. Development and verification of these types of tools may lead the way to more informed decisions with respect to beneficial use of coal combustion byproducts and other secondary materials. 5 figs., 1 tab.
NASA Technical Reports Server (NTRS)
Lucas, Michael J.; Marcolini, Michael A.
1997-01-01
The Rotorcraft Noise Model (RNM) is an aircraft noise impact modeling computer program being developed for NASA-Langley Research Center which calculates sound levels at receiver positions either on a uniform grid or at specific defined locations. The basic computational model calculates a variety of metrics. Acoustic properties of the noise source are defined by two sets of sound pressure hemispheres, each hemisphere being centered on a noise source of the aircraft. One set of sound hemispheres provides the broadband data in the form of one-third octave band sound levels. The other set of sound hemispheres provides narrowband data in the form of pure-tone sound pressure levels and phase. Noise contours on the ground are output graphically or in tabular format, and are suitable for inclusion in Environmental Impact Statements or Environmental Assessments.
Kate's Model Verification Tools
NASA Technical Reports Server (NTRS)
Morgan, Steve
1991-01-01
Kennedy Space Center's Knowledge-based Autonomous Test Engineer (KATE) is capable of monitoring electromechanical systems, diagnosing their errors, and even repairing them when they crash. A survey of KATE's developer/modelers revealed that they were already using a sophisticated set of productivity enhancing tools. They did request five more, however, and those make up the body of the information presented here: (1) a transfer function code fitter; (2) a FORTRAN-Lisp translator; (3) three existing structural consistency checkers to aid in syntax checking their modeled device frames; (4) an automated procedure for calibrating knowledge base admittances to protect KATE's hardware mockups from inadvertent hand valve twiddling; and (5) three alternatives for the 'pseudo object', a programming patch that currently apprises KATE's modeling devices of their operational environments.
Parametric Explosion Spectral Model
Ford, S R; Walter, W R
2012-01-19
Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before occurred. We develop a parametric model of the nuclear explosion seismic source spectrum derived from regional phases that is compatible with earthquake-based geometrical spreading and attenuation. Earthquake spectra are fit with a generalized version of the Brune spectrum, which is a three-parameter model that describes the long-period level, corner-frequency, and spectral slope at high-frequencies. Explosion spectra can be fit with similar spectral models whose parameters are then correlated with near-source geology and containment conditions. We observe a correlation of high gas-porosity (low-strength) with increased spectral slope. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.
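The three-parameter spectral model described above can be written down compactly. The functional form below is the standard generalized Brune shape (long-period level, corner frequency, high-frequency fall-off); the parameter values are illustrative, not fitted to any regional data set.

```python
import math

# Generalized Brune source spectrum: long-period level omega0, corner
# frequency fc (Hz), and high-frequency fall-off exponent p.  The classic
# Brune model has p = 2; parameter values here are illustrative.

def brune(f, omega0=1.0, fc=1.5, p=2.0):
    return omega0 / (1.0 + (f / fc) ** p)

# Well above the corner, the log-log slope of the spectrum approaches -p.
slope = (math.log(brune(800.0)) - math.log(brune(400.0))) / \
        (math.log(800.0) - math.log(400.0))
```

Fitting omega0, fc, and p to observed spectra, as the abstract describes, then lets those parameters be correlated with near-source geology, e.g. the reported steepening of spectral slope with gas porosity.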
Modeling the Entrepeñas Reservoir.
Wiese, Bernd U; Palancar, María C; Aragón, José M; Sánchez, Fernando; Gil, Roberto
2006-08-01
The Entrepeñas Reservoir is a monomictic reservoir located on the River Tagus (Spain). The aim of this work is to establish a feasible model to predict the depth of the thermocline that develops in the reservoir during the period of natural thermal stratification. Entrainment, eddy diffusion, inflow of external energy, and other factors are considered to calibrate the parameters of the model. The methodology involves the measurement of actual temperature and electrical conductivity profiles, the use of meteorological data and reservoir parameters, and the selection and application of several models from the literature. The calculations and predictions are integrated into a software packet that is able to predict thermocline depth and water temperature profile during a 1-year period on a day-by-day basis. In the thermocline depth, the prediction error, on the basis of real data, is less than 6% and, in the water temperature, it is 2 degrees C. PMID:17059129
Burinskii, Alexander
2015-01-01
As is known, the gravitational and electromagnetic (EM) field of the Dirac electron is described by an over-extremal Kerr-Newman (KN) black hole (BH) solution which has the naked singular ring and two-sheeted topology. This space is regulated by the formation of a regular source based on the Higgs mechanism of broken symmetry. This source shares much in common with the known MIT- and SLAC-bag models, but has the important advantage, of being in accordance with gravitational and electromagnetic field of the external KN solution. The KN bag model is flexible. At rotations, it takes the shape of a thin disk, and similar to other bag models, under deformations it creates a string-like structure which is positioned along the sharp border of the disk.
Luís Tarrataca; Andreas Wichert
2015-02-06
The production system is a theoretical model of computation relevant to the artificial intelligence field, allowing for problem solving procedures such as hierarchical tree search. In this work we explore some of the connections between artificial intelligence and quantum computation by presenting a model for a quantum production system. Our approach focuses on initially developing a model for a reversible production system, which is a simple mapping of Bennett's reversible Turing machine. We then expand on this result in order to accommodate the requirements of quantum computation. We present the details of how our proposition can be used alongside Grover's algorithm in order to yield a speedup compared to its classical counterpart. We discuss the requirements associated with such a speedup and how it compares against a similar quantum hierarchical search approach.
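The classical production system that serves as this paper's starting point consists of a working memory of facts plus condition-action rules applied repeatedly. The forward-chaining sketch and the rules below are invented purely for illustration; they are not the paper's reversible or quantum construction.

```python
# Minimal classical production system: facts in working memory plus
# condition -> action rules, applied by forward chaining to a fixed point.
# Rules and facts are hypothetical examples.

def forward_chain(facts, rules, max_iter=100):
    """Apply rules until no rule adds a new fact (a fixed point)."""
    facts = set(facts)
    for _ in range(max_iter):
        added = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                added = True
        if not added:
            break
    return facts

rules = [(("wet", "cold"), "ice"),
         (("ice", "road"), "slippery")]
result = forward_chain({"wet", "cold", "road"}, rules)
# result now also contains "ice" and "slippery"
```

The paper's contribution is, roughly, to make each such rule application reversible (in Bennett's sense) and then to use Grover's algorithm over the space of rule choices, rather than the exhaustive scan shown here.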
Modeling complexity in biology
NASA Astrophysics Data System (ADS)
Louzoun, Yoram; Solomon, Sorin; Atlan, Henri; Cohen, Irun. R.
2001-08-01
Biological systems, unlike physical or chemical systems, are characterized by the very inhomogeneous distribution of their components. The immune system, in particular, is notable for self-organizing its structure. Classically, the dynamics of natural systems have been described using differential equations. But, differential equation models fail to account for the emergence of large-scale inhomogeneities and for the influence of inhomogeneity on the overall dynamics of biological systems. Here, we show that a microscopic simulation methodology enables us to model the emergence of large-scale objects and to extend the scope of mathematical modeling in biology. We take a simple example from immunology and illustrate that the methods of classical differential equations and microscopic simulation generate contradictory results. Microscopic simulations generate a more faithful approximation of the reality of the immune system.
Ocean General Circulation Models
Yoon, Jin-Ho; Ma, Po-Lun
2012-09-30
1. Definition of Subject The purpose of this text is to provide an introduction to aspects of oceanic general circulation models (OGCMs), an important component of climate system or Earth System Models (ESMs). The role of the ocean in ESMs is described in Chapter XX (EDITOR: PLEASE FIND THE COUPLED CLIMATE or EARTH SYSTEM MODELING CHAPTERS). The emerging need for understanding the Earth's climate system and especially projecting its future evolution has encouraged scientists to explore the dynamical, physical, and biogeochemical processes in the ocean. Understanding the role of these processes in the climate system is an interesting and challenging scientific subject. For example, the research question of how much extra heat or CO2 generated by anthropogenic activities can be stored in the deep ocean is not only scientifically interesting but also important in projecting the future climate of the earth. Thus, OGCMs have been developed and applied to investigate the various oceanic processes and their role in the climate system.
Gelman, Andrew
is checking the fit of models to observed data and, when appropriate, altering and generalizing the models as a creative tool for uncovering aspects of reality that are imperfectly captured by the existing model
Animal models of schizophrenia
Jones, CA; Watson, DJG; Fone, KCF
2011-01-01
Developing reliable, predictive animal models for complex psychiatric disorders, such as schizophrenia, is essential to increase our understanding of the neurobiological basis of the disorder and for the development of novel drugs with improved therapeutic efficacy. All available animal models of schizophrenia fit into four different induction categories: developmental, drug-induced, lesion or genetic manipulation, and the best characterized examples of each type are reviewed herein. Most rodent models have behavioural phenotype changes that resemble ‘positive-like’ symptoms of schizophrenia, probably reflecting altered mesolimbic dopamine function, but fewer models also show altered social interaction, and learning and memory impairment, analogous to negative and cognitive symptoms of schizophrenia respectively. The negative and cognitive impairments in schizophrenia are resistant to treatment with current antipsychotics, even after remission of the psychosis, which limits their therapeutic efficacy. The MATRICS initiative developed a consensus on the core cognitive deficits of schizophrenic patients, and recommended a standardized test battery to evaluate them. More recently, work has begun to identify specific rodent behavioural tasks with translational relevance to specific cognitive domains affected in schizophrenia, and where available this review focuses on reporting the effect of current and potential antipsychotics on these tasks. The review also highlights the need to develop more comprehensive animal models that more adequately replicate deficits in negative and cognitive symptoms. Increasing information on the neurochemical and structural CNS changes accompanying each model will also help assess treatments that prevent the development of schizophrenia rather than treating the symptoms, another pivotal change required to enable new more effective therapeutic strategies to be developed. 
LINKED ARTICLES This article is part of a themed issue on Translational Neuropharmacology. To view the other articles in this issue visit http://dx.doi.org/10.1111/bph.2011.164.issue-4 PMID:21449915
Towards Behavioral Reflexion Models
NASA Technical Reports Server (NTRS)
Ackermann, Christopher; Lindvall, Mikael; Cleaveland, Rance
2009-01-01
Software architecture has become essential in the struggle to manage today's increasingly large and complex systems. Software architecture views are created to capture important system characteristics on an abstract and, thus, comprehensible level. As the system is implemented and later maintained, it often deviates from the original design specification. Such deviations can have implications for the quality of the system, such as reliability, security, and maintainability. Software architecture compliance checking approaches, such as the reflexion model technique, have been proposed to address this issue by comparing the implementation to a model of the system's architecture design. However, architecture compliance checking approaches focus solely on structural characteristics and ignore behavioral conformance. This is especially an issue in Systems-of-Systems. Systems-of-Systems (SoS) are decompositions of large systems into smaller systems for the sake of flexibility. Deviations of the implementation from its behavioral design often reduce the reliability of the entire SoS. An approach is needed that supports reasoning about behavioral conformance at the architecture level. In order to address this issue, we have developed an approach for comparing the implementation of a SoS to an architecture model of its behavioral design. The approach follows the idea of reflexion models and adapts it to support the compliance checking of behaviors. In this paper, we focus on sequencing properties as they play an important role in many SoS. Sequencing deviations potentially have a severe impact on the SoS correctness and qualities. The desired behavioral specification is defined in UML sequence diagram notation and behaviors are extracted from the SoS implementation. The behaviors are then mapped to the model of the desired behavior and the two are compared. Finally, a reflexion model is constructed that shows the deviations between behavioral design and implementation.
This paper discusses the approach and shows how it can be applied to investigate reliability issues in SoS.
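The core of a reflexion-style comparison can be sketched in a few lines. The following is a minimal, hypothetical illustration (names and the three-way classification scheme are assumptions, not the paper's actual implementation): each interaction observed in the implementation trace is classified against the designed sequence as convergent (in both), divergent (observed but not designed), or absent (designed but never observed).

```python
# Hypothetical sketch of a behavioral reflexion comparison: an observed
# call sequence extracted from the implementation is checked against the
# sequence prescribed by the design (e.g. from a UML sequence diagram).

def behavioral_reflexion(designed, observed):
    """Classify interactions as convergent (in both), divergent
    (observed but not designed), or absent (designed, never observed)."""
    designed_set, observed_set = set(designed), set(observed)
    return {
        "convergent": [m for m in observed if m in designed_set],
        "divergent":  [m for m in observed if m not in designed_set],
        "absence":    [m for m in designed if m not in observed_set],
    }

design = ["A->B:init", "B->C:store", "C->B:ack"]   # desired behavior
trace  = ["A->B:init", "B->C:store", "B->A:log"]   # extracted behavior

model = behavioral_reflexion(design, trace)
```

Here `model["divergent"]` flags the unplanned `B->A:log` call and `model["absence"]` flags the missing acknowledgment, which is the kind of sequencing deviation the paper targets.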
NASA Technical Reports Server (NTRS)
Dwek, Eli
2004-01-01
A viable interstellar dust model - characterized by the composition, morphology, and size distribution of the dust grains and by the abundance of the different elements locked up in the dust - should fit all observational constraints arising primarily from the interactions of the dust with incident radiation or the ambient gas. As a minimum, these should include the average interstellar extinction, the infrared emission from the diffuse interstellar medium (ISM), and the observed interstellar abundances of the various refractory elements. The last constraint has been largely ignored, resulting in dust models that require more elements to be in the dust phase than available in the ISM. In this talk I will describe the most recent advances towards the construction of a comprehensive dust model made by Zubko, Dwek, and Arendt, who, for the first time, included the interstellar abundances as explicit constraints in the construction of interstellar dust models. The results showed the existence of many distinct models that satisfy the basic set of observational constraints, including bare spherical silicate and graphite particles, PAHs, as well as spherical composite particles containing silicate, organic refractories, water ice, and voids. Recently, a new interstellar dust constituent has emerged, consisting of metallic needles. These needles constitute a very small fraction of the interstellar dust abundance, and their existence is primarily manifested in the 4 to 8 micron wavelength region, where they dominate the interstellar extinction. Preliminary studies show that these models may be distinguished by their X-ray halos, which are produced primarily by small angle scattering off large dust particles along the line of sight to bright X-ray sources, and probe dust properties largely inaccessible at other wavelengths.
Foss, A.; Cree, I.; Dolin, P.; Hungerford, J.
1999-01-01
BACKGROUND/AIM—There has been no consistent pattern reported on how mortality for uveal melanoma varies with age. This information can be useful to model the complexity of the disease. The authors have examined ocular cancer trends, as an indirect measure for uveal melanoma mortality, to see how rates vary with age and to compare the results with their other studies on predicting metastatic disease. METHODS—Age specific mortality was examined for England and Wales, the USA, and Canada. A log-log model was fitted to the data. The slopes of the log-log plots were used as a measure of disease complexity and compared with the results of previous work on predicting metastatic disease. RESULTS—The log-log model provided a good fit for the US and Canadian data, but the observed rates deviated for England and Wales among people over the age of 65 years. The log-log model for mortality data suggests that the underlying process depends upon four rate limiting steps, while a similar model for the incidence data suggests between three and four rate limiting steps. Further analysis of previous data on predicting metastatic disease on the basis of tumour size and blood vessel density would indicate a single rate limiting step between developing the primary tumour and developing metastatic disease. CONCLUSIONS—There is significant underreporting or underdiagnosis of ocular melanoma for England and Wales in those over the age of 65 years. In those under the age of 65, a model is presented for ocular melanoma oncogenesis requiring three rate limiting steps to develop the primary tumour and a fourth rate limiting step to develop metastatic disease. The three steps in the generation of the primary tumour involve two key processes—namely, growth and angiogenesis within the primary tumour. The step from development of the primary to development of metastatic disease is likely to involve a single rate limiting process. PMID:10216060
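The fitting procedure described above reduces to ordinary least squares on log-transformed rates: under the multistage (Armitage-Doll-type) interpretation, a rate proportional to age^(k-1) gives a log-log slope of k-1 for k rate limiting steps. A minimal sketch, using synthetic data rather than the paper's registry data:

```python
import math

def loglog_slope(ages, rates):
    """Ordinary least-squares slope of log(rate) versus log(age)."""
    xs = [math.log(a) for a in ages]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

# Synthetic rates following rate = c * age^3, i.e. a slope of 3,
# which corresponds to four rate limiting steps in this reading.
ages = [30, 40, 50, 60, 70]
rates = [1e-6 * a ** 3 for a in ages]
slope = loglog_slope(ages, rates)  # ≈ 3.0
```

A slope materially below the fitted line at high ages, as seen for England and Wales over 65, is what the authors read as underreporting rather than a change in the underlying process.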
NASA Technical Reports Server (NTRS)
Fields, Christina M.
2013-01-01
The Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI) is responsible for providing simulations to support test and verification of SCCS hardware and software. The Universal Coolant Transporter System (UCTS) is a Space Shuttle Orbiter support piece of the Ground Servicing Equipment (GSE). The purpose of the UCTS is to provide two support services to the Space Shuttle Orbiter immediately after landing at the Shuttle Landing Facility. The Simulation uses GSE Models to stand in for the actual systems to support testing of SCCS systems during their development. As an intern at KSC, my assignment was to develop a model component for the UCTS. I was given a fluid component (drier) to model in Matlab. The drier was a Catch All replaceable core type filter-drier. The filter-drier provides maximum protection for the thermostatic expansion valve and solenoid valve from dirt that may be in the system. The filter-drier also protects the valves from freezing up. I researched fluid dynamics to understand the function of my component. I completed training for UNIX and Simulink to help aid in my assignment. The filter-drier was modeled by determining the effects it has on the pressure, velocity and temperature of the system. I used Bernoulli's Equation to calculate the pressure and velocity differential through the drier. I created my model filter-drier in Simulink and wrote the test script to test the component. I completed component testing and captured test data. The finalized model was sent for peer review for any improvements.
NASA Technical Reports Server (NTRS)
Fields, Christina M.
2013-01-01
The Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI) is responsible for providing simulations to support test and verification of SCCS hardware and software. The Universal Coolant Transporter System (UCTS) was a Space Shuttle Orbiter support piece of the Ground Servicing Equipment (GSE). The initial purpose of the UCTS was to provide two support services to the Space Shuttle Orbiter immediately after landing at the Shuttle Landing Facility. The UCTS is designed with the capability of servicing future space vehicles, including all Space Station requirements necessary for the MPLM Modules. The Simulation uses GSE Models to stand in for the actual systems to support testing of SCCS systems during their development. As an intern at Kennedy Space Center (KSC), my assignment was to develop a model component for the UCTS. I was given a fluid component (dryer) to model in Simulink. I completed training for UNIX and Simulink. The dryer is a Catch All replaceable core type filter-dryer. The filter-dryer provides maximum protection for the thermostatic expansion valve and solenoid valve from dirt that may be in the system. The filter-dryer also protects the valves from freezing up. I researched fluid dynamics to understand the function of my component. The filter-dryer was modeled by determining the effects it has on the pressure and velocity of the system. I used Bernoulli's Equation to calculate the pressure and velocity differential through the dryer. I created my filter-dryer model in Simulink and wrote the test script to test the component. I completed component testing and captured test data. The finalized model was sent for peer review for any improvements. I participated in Simulation meetings and was involved in the subsystem design process and team collaborations. I gained valuable work experience and insight into a career path as an engineer.
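The Bernoulli calculation referred to above can be sketched simply. This is a minimal illustration, not the actual UCTS Simulink model; the coolant density, areas, and velocities below are assumed values, and losses through the filter core are neglected. Between inlet (1) and outlet (2) at the same elevation, p1 + ½ρv1² = p2 + ½ρv2², with v2 fixed by continuity A1·v1 = A2·v2 for incompressible flow.

```python
# Sketch of the pressure/velocity differential across a filter-dryer
# using Bernoulli's equation (level, lossless, incompressible flow).
# All numeric values are illustrative assumptions, not UCTS data.

def velocity_from_continuity(v_in, area_in, area_out):
    """Continuity: A1*v1 = A2*v2 for incompressible flow."""
    return v_in * area_in / area_out

def outlet_pressure(p_in, v_in, v_out, rho):
    """Bernoulli with no elevation change: p2 = p1 + 0.5*rho*(v1^2 - v2^2)."""
    return p_in + 0.5 * rho * (v_in ** 2 - v_out ** 2)

rho = 1000.0   # kg/m^3, water-like coolant (assumed)
v_in = 2.0     # m/s (assumed)
v_out = velocity_from_continuity(v_in, area_in=4e-4, area_out=2e-4)  # 4.0 m/s
p_out = outlet_pressure(p_in=200_000.0, v_in=v_in, v_out=v_out, rho=rho)
# Halving the flow area doubles the velocity and drops the pressure by
# 0.5 * rho * (v_out^2 - v_in^2) = 6000 Pa.
```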
Polyimidazopyrrolone model compounds.
NASA Technical Reports Server (NTRS)
Young, P. R.
1972-01-01
The model reactions between phthalic anhydride and o-phenylenediamine were studied under conditions analogous to the polymerization and post-cyclization of dianhydrides with bis(o-diamines) to form polyimidazopyrrolones (Pyrrones). The route from the initial amide-acid-amine to the tetracyclic Pyrrone model when the reactions are conducted in aprotic solvents is highly competitive between isolatable benzimidazole-acid and imide-amine intermediates. Solid-state thermal conversion of the amide-acid-amine affords a unique dimeric species containing amide, imide, and benzimidazole functions. It was confirmed that melt techniques lead to disproportionation products. The application of these findings to related polymer synthesis is discussed.
Updating Standard Solar Models
F. Ciacio; S. Degl'Innocenti; B. Ricci
1996-05-25
We present an updated version of our standard solar model (SSM) in which diffusion of helium and heavy elements is included and the improved OPAL equation of state (Rogers 1994; Rogers, Swenson & Iglesias 1996) is used. In this way the EOS is consistent with the adopted opacity tables, from the same Livermore group, an occurrence which should further enhance the reliability of the model. The results for the physical characteristics and the neutrino production of our SSM are discussed and compared with previous works on the matter.
Testing agile requirements models.
Botaschanjan, Jewgenij; Pister, Markus; Rumpe, Bernhard
2004-05-01
This paper discusses a model-based approach to validate software requirements in agile development processes by simulation and in particular automated testing. The use of models as a central development artifact needs to be added to the portfolio of software engineering techniques, to further increase efficiency and flexibility of the development, beginning early in the requirements definition phase. Testing requirements is one of the most important techniques for giving feedback and increasing the quality of the result. Therefore testing of artifacts should be introduced as early as possible, even in the requirements definition phase. PMID:15083546
Southworth, Frank; Garrow, Dr. Laurie
2011-01-01
This chapter describes the principal types of both passenger and freight demand models in use today, providing a brief history of model development supported by references to a number of popular texts on the subject, and directing the reader to papers covering some of the more recent technical developments in the area. Over the past half century a variety of methods have been used to estimate and forecast travel demands, drawing concepts from economic/utility maximization theory, transportation system optimization and spatial interaction theory, using and often combining solution techniques as varied as Box-Jenkins methods, non-linear multivariate regression, non-linear mathematical programming, and agent-based microsimulation.
NASA Astrophysics Data System (ADS)
Rong, Shu-Jun; Liu, Qiu-Yu
2012-04-01
The puma model, based on Lorentz and CPT violation, may provide an economical interpretation of conventional neutrino oscillations and some of the anomalous oscillations. We study the effect of perturbations on the puma model. In the case of a first-order perturbation that keeps the (23) interchange symmetry, the mixing matrix element Ue3 is always zero. A nonzero mixing matrix element Ue3 is obtained in the second-order perturbation that breaks the (23) interchange symmetry.
Petrov, S.
2000-08-20
An information system is reflexive if it stores a description of its current structure in the body of stored information and acts on the basis of this information. A data model is reflexive if its language is meta-closed and can be used to build such a system. The need for reflexive data models in new areas of information technology applications is argued. An attempt is made to express basic notions related to information systems for the case in which the system supports and uses a meta-closed representation of the data.
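The idea of a meta-closed, reflexive store can be illustrated with a small sketch (this is my illustration, not a construction from the paper): the catalog describing the tables is itself a table in the same store, so a program can discover the structure by querying data it already holds, including the description of the catalog itself.

```python
# Illustrative reflexive store: the schema catalog is stored as ordinary
# data, and the catalog describes itself ("meta-closure"). All names here
# are hypothetical.

store = {
    "_catalog": [
        {"table": "users",    "columns": ["id", "name"]},
        {"table": "_catalog", "columns": ["table", "columns"]},  # describes itself
    ],
    "users": [{"id": 1, "name": "Ada"}],
}

def columns_of(store, table):
    """Discover a table's structure by querying the stored description."""
    for row in store["_catalog"]:
        if row["table"] == table:
            return row["columns"]
    raise KeyError(table)
```

Because `_catalog` appears in its own rows, a generic program can navigate the whole store, catalog included, without any structure hard-coded outside the data.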
NASA Astrophysics Data System (ADS)
Koshelev, Alexey S.
2010-11-01
We consider the appearance of multiple scalar fields in SFT inspired non-local models with a single scalar field at late times. In this regime all the scalar fields are free. This system minimally coupled to gravity is the main subject of this note. We build one exact solution to the equations of motion. We consider an exactly solvable model which admits a simple exact solution of the Friedmann equations in the cosmological context and which reproduces the behavior expected from SFT in the asymptotic regime.
Fitzpatrick, B. A.; Gangadhar, K.
1992-01-01
[Figure 4: Spreadsheet architecture, tabulating average daily production (mlbs), daily CFG production, and TG credit against 1992 energy standards for configurations CFG.1 and CFG.2.]
NASA Astrophysics Data System (ADS)
Floría, L. M.; Baesens, C.; Gómez-Gardeñes, J.
In the preface to his monograph on the structure of Evolutionary Theory [1], the late professor Stephen Jay Gould attributes to the philosopher Immanuel Kant the following aphorism in the philosophy of science: "Percepts without concepts are blind; concepts without percepts are empty". Using these Kantian terms with a bit of freedom, one would say that a scientific model is a framework (or network) of interrelated concepts and percepts where experts build up consistent scientific explanations of a given set of observations. Good models are those which are both conceptually simple and universal in their perceptions. Let us illustrate with examples the meaning of this statement.
Zakir F. Seidov
2004-07-08
Some exact analytical formulas are presented for the generalized Roche model of a rotating star. The gravitational field of the central core is described by a model of two equal-mass point centers placed symmetrically on the rotation axis with purely imaginary z coordinates. All the basic parameters of the critical figure of the rotating massless envelope are presented in analytical form. The existence of a concave figure of the uniformly rotating liquid is shown for sufficiently large angular velocity of rotation.